Series ISSN: 1939-5221
SYNTHESIS LECTURES ON ENGINEERING
Series Editor: Steven F. Barrett, University of Wyoming

Fundamentals of Engineering Economics and Decision Analysis
David L. Whitman, University of Wyoming
Ronald E. Terry, Brigham Young University

The authors cover two general topics in this text: basic engineering economics and risk analysis. Within the topic of engineering economics are discussions on the time value of money and interest relationships. These interest relationships are used to define certain project criteria that are used by engineers and project managers to select the best economic choice among several alternatives. Projects examined will include both income- and service-producing investments. The effects of escalation, inflation, and taxes on the economic analysis of alternatives are discussed. Risk analysis incorporates the concepts of probability and statistics in the evaluation of alternatives. This allows management to determine the probability of success or failure of the project. Two types of sensitivity analyses are presented. The first is referred to as the range approach while the second uses probabilistic concepts to determine a measure of the risk involved. The authors have designed the text to assist individuals in preparing to successfully complete the economics portions of the Fundamentals of Engineering Exam.

About SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis Digital Library of Engineering and Computer Science. Synthesis Lectures provide concise, original presentations of important research and development topics, published quickly, in digital and print formats. For more information visit www.morganclaypool.com

Morgan & Claypool Publishers
www.morganclaypool.com
ISBN: 978-1-60845-864-6
Fundamentals of
Engineering Economics
and Decision Analysis
Synthesis Lectures on
Engineering
Editor
Steven F. Barrett, University of Wyoming
Fundamentals of Engineering Economics and Decision Analysis
David L. Whitman and Ronald E. Terry
2012
A Little Book on Teaching: A Beginner’s Guide for Educators of Engineering and Applied
Science
Steven F. Barrett
2012
Engineering Thermodynamics and 21st Century Energy Problems: A Textbook Companion
for Student Engagement
Donna Riley
2011
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo
2011
Systems Engineering: Building Successful Systems
Howard Eisner
2011
Fin Shape Thermal Optimization Using Bejan’s Constructal Theory
Giulio Lorenzini, Simone Moretti, and Alessandra Conti
2011
Geometric Programming for Design and Cost Optimization (with illustrative case study
problems and solutions), Second Edition
Robert C. Creese
2010
Survive and Thrive: A Guide for Untenured Faculty
Wendy C. Crone
2010
Geometric Programming for Design and Cost Optimization (with Illustrative Case Study
Problems and Solutions)
Robert C. Creese
2009
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey and Jeffrey W. Holmes
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Analog Multimedia
Explorations
Lina J. Karam and Naji Mounsef
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Digital Multimedia and
Robotics Explorations
Lina J. Karam and Naji Mounsef
2008
CAD/CAM of Sculptured Surfaces on Multi-Axis NC Machine: The DG/K-Based
Approach
Stephen P. Radzevich
2008
Tensor Properties of Solids, Part Two: Transport Properties of Solids
Richard F. Tinder
2007
Tensor Properties of Solids, Part One: Equilibrium Tensor Properties of Solids
Richard F. Tinder
2007
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
2007
Project Management for Engineering Design
Charles Lessard and Joseph Lessard
2007
Relativistic Flight Mechanics and Space Travel
Richard F. Tinder
2006
Copyright © 2012 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations in
printed reviews, without the prior permission of the publisher.
Fundamentals of Engineering Economics and Decision Analysis
David L. Whitman and Ronald E. Terry
www.morganclaypool.com
ISBN: 9781608458646 paperback
ISBN: 9781608458653 ebook
DOI 10.2200/S00410ED1V01Y201203ENG018
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING
Lecture #18
Series Editor: Steven F. Barrett, University of Wyoming
Series ISSN
Synthesis Lectures on Engineering
Print 1939-5221 Electronic 1939-523X
Fundamentals of
Engineering Economics
and Decision Analysis
David L. Whitman
University of Wyoming
Ronald E. Terry
Brigham Young University
SYNTHESIS LECTURES ON ENGINEERING #18
Morgan & Claypool Publishers
ABSTRACT
The authors cover two general topics in this text: basic engineering economics and risk analysis. Within the topic of engineering economics are discussions on the time value of money and interest relationships. These interest relationships are used to define certain project criteria that are used by engineers and project managers to select the best economic choice among several alternatives. Projects examined will include both income- and service-producing investments. The effects of escalation, inflation, and taxes on the economic analysis of alternatives are discussed. Risk analysis incorporates the concepts of probability and statistics in the evaluation of alternatives. This allows management to determine the probability of success or failure of the project. Two types of sensitivity analyses are presented. The first is referred to as the range approach while the second uses probabilistic concepts to determine a measure of the risk involved. The authors have designed the text to assist individuals in preparing to successfully complete the economics portions of the Fundamentals of Engineering Exam.
KEYWORDS
engineering economics, time value of money, net present value, internal rate of return,
cash flow analysis, probability, statistics, risk analysis
To our parents, wives, children, and grandchildren
with much love and gratitude for everything.
Contents

Preface

1  Introduction
   1.1  Engineering Economics
        1.1.1  Basic Engineering Economics
        1.1.2  Risk Analysis
   1.2  Decision Analysis
   1.3  Fundamentals of Engineering Exam

2  Interest and the Time Value of Money
   2.1  Time Value of Money
   2.2  Sources of Capital
   2.3  Interest Concepts
        2.3.1  Simple Interest
        2.3.2  Compound Interest
        2.3.3  Nominal, Effective, and Continuous Interest Rates
   2.4  Cash Flow Diagrams
   2.5  Interest Formulas for Discrete Compounding
        2.5.1  Single Payments
        2.5.2  Uniform Series (Annuities)
        2.5.3  Uniform Gradient
        2.5.4  The Use of Financial Functions in Excel®
        2.5.5  Example Problems
   2.6  Interest Formulas for Continuous Compounding
        2.6.1  Continuous Compounding for Discrete Payments
        2.6.2  Continuous Compounding for Continuous Payments
   2.7  Problems

3  Project Evaluation Methods
   3.1  Introduction
   3.2  Alternate Uses of Capital
   3.3  Minimum Acceptable Rate of Return (MARR)
   3.4  Equivalence Methods
   3.5  Net Present Value
        3.5.1  Analysis of a Single Investment Opportunity
        3.5.2  Do Nothing Project
        3.5.3  Analysis of Multiple Investment Opportunities
   3.6  Rate of Return Methods
        3.6.1  Internal Rate of Return (IRR)
        3.6.2  Spreadsheet Formula for IRR
        3.6.3  External Rate of Return (ERR)
        3.6.4  Spreadsheet Formula for ERR
   3.7  The Reinvestment Question in Rate of Return Calculations
        3.7.1  Perception #1
        3.7.2  Perception #2
        3.7.3  Final Comments on ERR and IRR Relationships
   3.8  Acceleration Projects
   3.9  Payout
   3.10 Problems

4  Service Producing Investments
   4.1  Introduction
   4.2  Equal Life Alternatives
        4.2.1  Equivalence Techniques
        4.2.2  Rate of Return Methods
   4.3  Unequal Life Alternatives
        4.3.1  Least Common Multiple Method
        4.3.2  Common Study Period
   4.4  Problems

5  Income Producing Investments
   5.1  Introduction
   5.2  Investment in a Single Project
   5.3  Mutually Exclusive Alternatives
        5.3.1  Equivalence Techniques
        5.3.2  Rate of Return Techniques
        5.3.3  Using Excel®
   5.4  Unequal Life Alternatives
   5.5  Independent and Contingent Investments
        5.5.1  Independent Investments
        5.5.2  Contingent Investments
        5.5.3  Limited Investment Capital
   5.6  Ranking Alternatives
   5.7  Problems

6  Determination of Project Cash Flow
   6.1  Introduction
   6.2  Escalation and Inflation
   6.3  Depreciation
        6.3.1  Straight-Line Depreciation (SL)
        6.3.2  Declining-Balance Depreciation
        6.3.3  Sum-of-the-Years-Digits (SYD) Depreciation
        6.3.4  Modified Accelerated Cost Recovery System (MACRS)
   6.4  Cash Flow Computation
        6.4.1  Capital Investment
        6.4.2  Gross Revenue
        6.4.3  Operating Expenses
        6.4.4  Before-Tax Profit Computation
        6.4.5  Before-Tax Cash Flow Computation
        6.4.6  Depreciation
        6.4.7  Taxable Income
        6.4.8  State and Federal Income Tax
        6.4.9  Net Profit
        6.4.10 Cash Flow
   6.5  Problems

7  Financial Leverage
   7.1  Introduction
   7.2  Financial Leverage and Associated Risk
   7.3  Adjustment to Cash Flow Equations
        7.3.1  Leverage and Mutually Exclusive Projects
        7.3.2  Excel® Spreadsheet
   7.4  Problems

8  Basic Statistics and Probability
   8.1  Introduction
   8.2  Statistics
        8.2.1  Measures of Central Tendency
        8.2.2  Measures of Dispersion
        8.2.3  Frequency Distributions
        8.2.4  Relative Frequency Distribution
   8.3  Probability
        8.3.1  Classical Definition
        8.3.2  Relative Frequency Definition
        8.3.3  Subjective Definition
        8.3.4  Probability Distributions
   8.4  Problems

9  Sensitivity Analysis
   9.1  Introduction
        9.1.1  Range Approach
        9.1.2  Monte Carlo Simulation
   9.2  Problems

A  Compound Interest Factors

Authors’ Biographies
Preface
Those individuals working on the development of an income-generating project, either for personal
use or company use, are frequently called upon to determine if the endeavor will prove profitable if
fully developed. By profitable, we simply mean that the project will provide a desirable rate of return
on investment through the generation of revenue that offsets any capital and/or operating costs.
The intent of this book is to provide individuals with the tools to evaluate projects to determine profitability. The subject has been called Engineering Economics, Project Evaluation, Economic Evaluation, or Decision Analysis. Whatever one chooses to call it, the reader who studies this material and becomes proficient in its content will be able to analyze project cash flows and make a decision as to the profitability of the project. The authors, mainly because of their engineering backgrounds, have chosen to refer to the subject matter as engineering economics.
In addition to income-generating projects, this book will also assist those individuals who are
analyzing two or more ways of doing a service-producing project. A service-producing project is
one that, instead of generating income for the investor, provides a service at a cost to the investor.
An example could be the renting versus purchasing of a vehicle to provide a needed service for a
company.
The authors cover two general topics in the text: basic engineering economics and risk analysis.
Chapters 2-6 contain content relative to basic engineering economics and Chapters 7-9 present
material on risk analysis.
Within the topic of engineering economics are discussions on the time value of money and
interest relationships. These interest relationships are used to define certain project criteria that are
used by engineers and project managers to select the best economic choice among several alternatives. Projects examined will include both income- and service-producing investments. The effects of
escalation, inflation, and taxes on the economic analysis of alternatives are discussed.
There is always risk involved in undertaking a project. Risk analysis incorporates the concepts
of probability and statistics in the evaluation of alternatives. This allows management to determine
the probability of success or failure of the project. Two types of sensitivity analyses are presented. The
first is referred to as the range approach while the second uses probabilistic concepts to determine a
measure of the risk involved.
The authors have designed the text to assist individuals in preparing to successfully complete
the economics portions of the Fundamentals of Engineering Exam.
The authors wish to thank Joel Claypool and his associates at Morgan & Claypool for their
encouragement and excellent work on the preparation and production of this text.
David L. Whitman and Ronald E. Terry
May 2012
CHAPTER 1
Introduction
1.1 ENGINEERING ECONOMICS
Nearly all projects that are proposed to be undertaken by any engineering firm will be, at some point,
subjected to close economic scrutiny. The results of this analysis will be a basis (perhaps one of many)
for deciding whether or not to proceed with the project. The major emphasis of this text, therefore,
is to provide the engineer with the tools necessary to make the aforementioned economic decision.
There are two general topics which are included in this textbook: basic engineering economics
and risk analysis. A very brief overview of each of these topics is presented in the following paragraphs.
1.1.1 BASIC ENGINEERING ECONOMICS
Within this topic are discussions on the time value of money and interest relationships. These
interest relationships are used to define certain project criteria that are used by engineers and project
managers to select the best economic choice among several alternatives. Projects examined will
include traditional projects that generate a profit for the company and service-producing projects, which do not provide income but do provide a needed service. The effects of escalation, inflation,
and taxes on the economic analysis of alternatives will be discussed.
1.1.2 RISK ANALYSIS
There is always risk involved in undertaking a project. Management is interested in the quantification
of that risk. Risk analysis incorporates the concepts of probability and statistics in the evaluation
of alternatives. This allows management to determine the probability of success or failure of the
project. While there are a variety of ways to incorporate risk analysis into the evaluation of a project,
the authors will present two methods that utilize what is known as sensitivity analysis. That is, determining the sensitivity of the economic viability of a project as the costs and/or incomes vary about
estimated values. The first is referred to as the range approach, while the second uses probabilistic
concepts to determine a measure of the risk involved.
1.2 DECISION ANALYSIS
As described above, the overall objective of any economic analysis is to provide a basis for making
a sound decision regarding any particular project. For example, suppose that an engineer is given
the assignment to implement a project for which there are multiple alternative methods that will
achieve the goals of the project. The question is: which alternative should be chosen? The reader
should recognize that there is always a choice among two or more alternatives. However, if only one
technical alternative is found, then that alternative must be compared with the “do nothing” case.
The “do nothing” case represents the situation where a company keeps its money invested in other
alternatives which earn some minimum rate of return. The minimum rate of return will be referred
to as the minimum acceptable rate of return (MARR) and will be discussed in detail in Chapter 3.
Thus, there are always at least two alternatives in any economic decision.
This textbook will provide the engineer with the necessary tools to determine the “best”
economic choice among the alternatives. However, one must realize that final decisions are not only
made on the results of the economic evaluations. Other general areas of consideration could be
classified as financial and intangible issues.
The “best” economic choice will be made through the proper use of the time value of money
formulas that will be presented. Financial aspects have to do with the obtaining of funds required
to initiate the project. There are several sources which may be considered, i.e., internal company
funds, lending institutions, the issuing of bonds, or the issuing of new stock. The intangible area of a
project is the most difficult to analyze. Included in the intangible aspects are environmental, social,
and political impacts. These are the most difficult to quantify. The focus of this textbook will be on
the economic aspects of a project and very little time will be devoted to the areas of financial and
intangible aspects. However, they are alluded to from time to time in order to remind the engineer
of their importance and the obligation to consider them in the final decision.
1.3 FUNDAMENTALS OF ENGINEERING EXAM
It is envisioned that information found in this textbook will prepare students to successfully complete
the economics portions of the Fundamentals of Engineering Exam. The specifications for this exam
can be found at http://www.ncees.org/Exams/FE_exam.php.
CHAPTER 2
Interest and the Time Value of Money
2.1 TIME VALUE OF MONEY
When an individual or a company desires to invest an amount of capital in a long-term project, the
effect of time on the value of that capital needs to be considered. The effect of time on the value of
money can be illustrated by the following examples.
Consider a sum of $1000 that an individual has accumulated. If the $1000 were buried in a
can under a tree for some future need, the individual, one year later, would still have $1000. However,
if the $1000 were placed in an insured savings account earning 3% interest for one year, the amount
would have grown to $1030. Obviously, the length of time and the different investment opportunities
(represented by different interest rates) lead to varying amounts of money that the $1000 can yield
at some future date.
A second example deals with the same $1000 and its purchasing power as a function of time.
Suppose an individual has a choice of purchasing 1000 items now at a price of $1.00 per item or
waiting until a future date to make the purchase. If, over the course of one year, the price increased
to $1.03 per item, the $1000 will only be able to purchase 970 items. Thus, the value, in terms of
purchasing power, has decreased with time.
The longer the life of the project, the more important will be the considerations of the time
value of money. Other factors that affect the outcome of investment projects are inflation, taxes, and
risk. These will be discussed later in the text.
2.2 SOURCES OF CAPITAL
There are, in general, two sources of capital needed to make an investment. Capital can be obtained
either from the investor’s own funds or from a lender. Wherever capital is obtained, there is a cost
associated with the use of the funds. If they are obtained from a lender, the cost of capital is the
interest rate at which the funds are loaned to the investor. This interest rate reflects the current
state of the economy as a whole, the bank’s administrative costs, and, perhaps, the risk associated
with the particular loan as viewed by the lender. If the investor chooses to use his own funds for
the required capital, then the cost is called the opportunity cost of capital. The opportunity cost
reflects the income that could be generated from other opportunities the investor might have for
his funds. This opportunity cost is often referred to as the minimum acceptable rate of return
(MARR). This minimum acceptable rate of return could be the interest rate obtained by placing
the funds in a certificate of deposit or savings account at a bank or it could be the rate of return on
another investment opportunity. The MARR is an important concept in the evaluation of investment
opportunities and will be discussed in Chapter 3. For now, the MARR will just be treated as an
interest rate, i.
2.3 INTEREST CONCEPTS

2.3.1 SIMPLE INTEREST
The amount of interest earned by an investment (for example, a single principal deposit in a savings
account) is called simple interest when the interest is found by Equation 2.1:
I = Pin    (2.1)
where,
I = total interest, dollars
P = amount of principal, dollars
i = interest rate per interest period, fraction
n = number of interest periods.
Consider the following example. Individual A agrees to loan individual B $1000 for a time
period of 3 years. B agrees to pay A the $1000 at the end of the 3 years, plus an amount of interest
determined by applying a simple interest rate of 10% per year. The total interest charge will be:
I = (1000)(0.10)(3) = $300
Therefore, at the end of 3 years, B will pay a total of $1300 to A which would represent the $1000
initially borrowed plus $300 interest for the use of A’s money.
2.3.2 COMPOUND INTEREST
Simple interest concepts are used infrequently in today’s business dealings, but they do provide the
basis for compounded interest rate concepts that are utilized. Compounded interest is computed
by applying the interest rate to the remaining unpaid principal plus any accumulated interest. One
could consider it as “the interest earns interest.” Referring back to the example presented above, the
total interest that B will pay A over 3 years would be calculated as the following:
Iyr 1 = (1000)(0.1) = $100, which would result in a balance at the end of year 1 of $1100
Iyr 2 = (1100)(0.1) = $110, which would result in a balance at the end of year 2 of $1210
Iyr 3 = (1210)(0.1) = $121, which would result in a balance at the end of year 3 of $1331
Therefore, at the end of 3 years, B will pay a total of $1331 to A which is $31 higher than for the
simple interest case. This difference results from compounding the interest. One should note that
the difference between these two methods will become larger as the interest rate and number of
interest periods increase.
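For readers who like to check such calculations numerically, the simple- and compound-interest totals above can be reproduced with a short Python sketch (the function names are illustrative only, not notation from the text):

    def simple_interest_total(principal, rate, periods):
        # Principal plus simple interest I = Pin (Equation 2.1).
        return principal + principal * rate * periods

    def compound_interest_total(principal, rate, periods):
        # Interest is charged each period on the principal plus accumulated interest.
        balance = principal
        for _ in range(periods):
            balance += balance * rate  # "the interest earns interest"
        return balance

    print(simple_interest_total(1000, 0.10, 3))    # 1300.0
    print(compound_interest_total(1000, 0.10, 3))  # 1331.0 (to within rounding)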
2.3.3 NOMINAL, EFFECTIVE, AND CONTINUOUS INTEREST RATES
The length of the interest period can and does vary from application to application. Common
interest rate periods are annually, semi-annually, quarterly, monthly, daily, and in the limiting case,
continuously. The amount of interest that is earned or charged to a principal will increase as the
compounding period becomes smaller.
Usually, a lending institution will quote a nominal annual percentage rate. However, payments
on the loan are made more often than annually. For example, consider a loan that is quoted at
10% nominal with semi-annual compounding (and, thus, semi-annual payments). The 10% annual
interest compounded semi-annually means that every one-half year, 5% interest is earned or charged
to the principal. This leads to the concept of effective yearly interest rate. The effective yearly interest
rate can be found by computing the value that the principal has grown to at the end of year one, F ,
subtracting the original principal, and then dividing by the principal:
F = 1000 + 1000(0.05) + [1000 + 1000(0.05)](0.05) = 1000(1.05)^2

Therefore, the effective rate is:

ie = [1000(1.05)^2 − 1000]/1000 = 0.1025, or 10.25% per year

In general, the effective rate can be found by:

ie = (1 + i/m)^m − 1    (2.2)

where, m = number of interest periods per year
i = yearly nominal interest rate, fraction
ie = yearly effective interest rate, fraction.

In the limiting case of continuous compounding, the effective rate is given by:

ie = e^i − 1    (2.3)
Table 2.1 lists the effective rates for various compounding time periods for a 10% nominal rate.
As can be observed in Table 2.1, the difference between the effective rates generated by the
various compounding periods is relatively small. The differences can become insignificant when
considering the many uncertainties associated with analyzing most economic investments.
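The effective rates in Table 2.1 follow directly from Equations 2.2 and 2.3; a short Python sketch (illustrative only, using the compounding periods listed earlier) computes them:

    import math

    def effective_rate(nominal, m):
        # Effective yearly rate for a nominal rate compounded m times per year (Eq. 2.2).
        return (1 + nominal / m) ** m - 1

    nominal = 0.10
    for label, m in [("annually", 1), ("semi-annually", 2), ("quarterly", 4),
                     ("monthly", 12), ("daily", 365)]:
        print(label, round(effective_rate(nominal, m), 5))
    # Continuous compounding, Eq. 2.3: e^i - 1
    print("continuously", round(math.exp(nominal) - 1, 5))
    # Semi-annual compounding gives 0.10250, the 10.25% per year computed above.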
One should be careful with the term Annual Percentage Rate (APR) when dealing with
lending institutions. The APR is a yearly percentage rate that expresses the total finance charge on
a loan over its entire term. The APR includes the nominal interest rate, fees, points, and mortgage
insurance, and is therefore a more complete measure of a loan’s cost than the interest rate alone.
The loan’s nominal interest rate, not its APR, is used to calculate the monthly principal and interest
payment.
Table 2.1: Example of effective interest rates for compounding time periods
2.4 CASH FLOW DIAGRAMS
The construction of a cash flow diagram, sometimes referred to as a time line, will greatly aid in
the analysis of an investment opportunity. The cash flow diagram is a way of accounting for all cash
incomes and outflows at their appropriate position in time. That is, in general terms, the cash flow
for any particular period is the income received during that period minus the expenses incurred
during that same period. A good analogy to a cash flow diagram is one’s checkbook. Deposits and
checks are written at specific points in time. These transactions could be consolidated on a monthly
basis to show the net cash flow in or out of the checkbook each month. Usually, once the cash flow
diagram is constructed properly, the economic analysis becomes relatively easy to complete.
There are several ways of constructing a cash flow diagram and the following method is
utilized by the authors. A horizontal line is drawn which represents the length of time (life) of the
investment opportunity (project). The interest periods are then marked off and labeled above the
line. At the extreme left of the time line is time zero (or, as will be defined in the next section,
the Present). Time zero represents the time when the first cash flow is made for this project. Time
zero is, therefore, defined by each project and not by a specific calendar date. Time zero can also be
interpreted as the beginning of time period 1. All cash flows are then placed beneath the time line,
corresponding to the position in time (or interest periods) in which they occurred. Negative cash
flows (expenses exceeding revenues) are given a minus sign. In the time line illustrated below, CF1,
CF2, etc., represent the cash flows occurring at the end of interest period 1, 2, etc. The authors often
use a break in the time line for brevity.
When dealing with investments in engineering projects, the normal approach is to assume
that all investments for a particular year are made at the beginning of the year, while all revenues
and operating expenses occur at the end of the year. This will lead to a conservative evaluation of
the project using the techniques presented in Chapter 3.
Period:      0      1      2      3     ...    n-2      n-1      n
Cash flow:   CF0    CF1    CF2    CF3   ...    CFn-2    CFn-1    CFn
Example 2.1
Consider the example of a 3-year auto loan from the view of the lender. The lender provides
$20,000 to the client (a negative cash flow for the lender) at month 0 at an interest rate of 0.5% per
month. In exchange, the lender receives $608 per month from the client over the next 36 months.
The resulting cash flow diagram would be:
Month:       0          1     2     3    ...   34    35    36
Cash flow:   -20,000    608   608   608  ...   608   608   608
Before equations can be developed that relate the time value of money, it is necessary to define
a set of notations that will be used throughout the text.
P = Present sum of money. The present (time zero) is defined as any point from which the
analyst wishes to measure time.
F = Future sum of money. The future is defined as any point n that is greater than time zero.
A = Annuity. This is a uniform set of equal payments that occur at the end of each interest
period from one to n.
G = Uniform gradient. This is a series of payments that uniformly increase or decrease over
the life of the project.
i = Compound interest rate per period.
n = Total number of compounding periods in the cash flow diagram.
The cash flow diagrams that follow should help to define these sums of money.
Present, P :
Period:      0      1      2      3     ...    n-2    n-1    n
Cash flow:   P

Future, F :
Period:      0      1      2      3     ...    n-2    n-1    n
Cash flow:                                                   F

Annuity, A:
Period:      0      1      2      3     ...    n-2    n-1    n
Cash flow:          A      A      A     ...    A      A      A

Gradient, G:
Period:      0      1      2      3     ...    n-2       n-1       n
Cash flow:          0      G      2G    ...    (n-3)G    (n-2)G    (n-1)G
2.5 INTEREST FORMULAS FOR DISCRETE COMPOUNDING
The following section contains the derivation and sample calculations for nine interest formulas used
in most economic calculations. These formulas demonstrate the “equivalency” between the various
sums of money described above at specific values of the interest rate, i, and the number of periods,
n. For example, in the example of the 3-year car loan, the $608 monthly payment is “equivalent” to
the $20,000 initial loan at an interest rate of 0.5% per month. These formulas are based on discrete
compounding, i.e., the interest is compounded at the end of each finite interest period. Formulas
used with continuous compounding will be presented later.
2.5.1 SINGLE PAYMENTS
The first formula to be derived allows the calculation of the equivalent future amount F , of a present
sum, P . Suppose P is placed in a bank account that earns i% interest per period. It will grow to a
future amount, F , at the end of n interest periods according to:
F = P(1 + i)^n    (2.4)

The derivation of Equation 2.4 follows by applying the interest rate period by period: after one period the account holds P(1 + i); after two periods, P(1 + i)(1 + i) = P(1 + i)^2; and, continuing in this manner, after n periods it holds P(1 + i)^n.

The factor (1 + i)^n is frequently called the Single Payment Compound Amount Factor and is symbolized in this text by (F/P)i,n. If one is given the amount of P, one uses the (F/P)i,n factor to find the equivalent value of F. That is,

F = P(F/P)i,n    (2.5)
Similarly, if a future amount, F, is known and it is desired to calculate the equivalent present amount, P, then Equation 2.4 can be rearranged as:

P = F(1 + i)^−n    (2.6)

The factor (1 + i)^−n is frequently called the Single Payment Present Worth Factor and is symbolized in this text by (P/F)i,n. If one is given the amount of F, one uses the (P/F)i,n factor to find the equivalent value of P. That is,

P = F(P/F)i,n    (2.7)
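Expressed in code, the two single-payment factors are one-liners; the Python sketch below (with illustrative function names) evaluates them for the 15%, 10-year case used in Example 2.2 later in this section:

    def f_over_p(i, n):
        # Single Payment Compound Amount Factor, (F/P)i,n = (1 + i)^n
        return (1 + i) ** n

    def p_over_f(i, n):
        # Single Payment Present Worth Factor, (P/F)i,n = (1 + i)^(-n)
        return (1 + i) ** -n

    print(round(f_over_p(0.15, 10), 4))        # 4.0456
    print(round(10_000 * f_over_p(0.15, 10)))  # about 40456, as in Example 2.2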
2.5.2 UNIFORM SERIES (ANNUITIES)
It is often necessary to know the amount of a uniform series payment, A, which would be equivalent
to a present sum, P , or a future sum, F . In the following formulas that relate P , F , and A, it is
imperative that the reader understands that: 1) P occurs one interest period before the first value
of A; 2) A occurs at the end of each interest period; and 3) F occurs at the same time as the last A
(at time n). These relationships were illustrated in the previous cash flow diagrams that originally
defined each of them.
The value of a future sum, F , of a series of uniform payments, each of value A, can be found
by summing the future worth of each individual payment. That is, treat each A as a distinct present
value (but with a different time zero) and use (F /P )i,n to calculate its contribution to the total F :
F = A(1 + i)^(n−1) + A(1 + i)^(n−2) + A(1 + i)^(n−3) + ... + A(1 + i)^1 + A    (2.8)

Multiplying both sides of Equation 2.8 by (1 + i) yields

F(1 + i) = A(1 + i)^n + A(1 + i)^(n−1) + A(1 + i)^(n−2) + ... + A(1 + i)^2 + A(1 + i)    (2.9)

Subtracting Equation 2.8 from Equation 2.9 yields

F(1 + i) − F = A(1 + i)^n − A

Solving for F in terms of A results in:

F = A{[(1 + i)^n − 1]/i}    (2.10)
The term in the {} brackets is called the Uniform Series Compound Amount Factor and is symbolized by (F/A)i,n. If one is given the amount of A, one uses the (F/A)i,n factor to find the equivalent value of F. That is,

F = A(F/A)i,n    (2.11)

Rearranging Equation 2.10 and solving for A yields

A = F{i/[(1 + i)^n − 1]}    (2.12)

The term in the {} brackets is called the Sinking Fund Factor and is symbolized by (A/F)i,n. If one is given the amount of F, one uses the (A/F)i,n factor to find the equivalent value of A. That is,

A = F(A/F)i,n    (2.13)

Substitution of Equation 2.10 into Equation 2.6 yields Equation 2.14, which contains the Uniform Series Present Worth Factor, (P/A)i,n, in the {} brackets:

P = A{[(1 + i)^n − 1]/[i(1 + i)^n]}    (2.14)

If one is given the amount of A, one uses the (P/A)i,n factor to find the equivalent value of P. That is,

P = A(P/A)i,n    (2.15)

Rearranging Equation 2.14 and solving for A yields

A = P{[i(1 + i)^n]/[(1 + i)^n − 1]}    (2.16)

The term in the {} brackets is called the Capital Recovery Factor and is symbolized by (A/P)i,n. If one is given the amount of P, one uses the (A/P)i,n factor to find the equivalent value of A. That is,

A = P(A/P)i,n    (2.17)
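The same translation works for the four uniform-series factors; in the Python sketch below (illustrative function names only), the capital-recovery case reproduces the $608 monthly car payment of Example 2.1:

    def f_over_a(i, n):
        # Uniform Series Compound Amount Factor, (F/A)i,n
        return ((1 + i) ** n - 1) / i

    def a_over_f(i, n):
        # Sinking Fund Factor, (A/F)i,n
        return i / ((1 + i) ** n - 1)

    def p_over_a(i, n):
        # Uniform Series Present Worth Factor, (P/A)i,n
        return ((1 + i) ** n - 1) / (i * (1 + i) ** n)

    def a_over_p(i, n):
        # Capital Recovery Factor, (A/P)i,n
        return (i * (1 + i) ** n) / ((1 + i) ** n - 1)

    # $20,000 borrowed at 0.5% per month for 36 months (Example 2.1):
    print(round(20_000 * a_over_p(0.005, 36), 2))  # about 608 per month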
2.5.3 UNIFORM GRADIENT
In some applications, a project analysis will generate a series of cash flows that uniformly increase or decrease from an initial value. The cash flow diagram is repeated here for clarity.
Period:      0      1      2      3     ...    n-2       n-1       n
Cash flow:          0      G      2G    ...    (n-3)G    (n-2)G    (n-1)G
Without derivation, Equations 2.18, 2.20, and 2.22 can be developed that relate the gradient,
G, to an equivalent annuity, an equivalent present sum, and an equivalent future sum:
A = G{1/i − n/[(1 + i)^n − 1]}    (2.18)

The term in the {} brackets is symbolized by (A/G)i,n. If one is given the amount of G, one uses the (A/G)i,n factor to find the equivalent value of A. That is,

A = G(A/G)i,n    (2.19)

P = G{[(1 + i)^n − 1]/[i^2(1 + i)^n] − n/[i(1 + i)^n]}    (2.20)

The term in the {} brackets is symbolized by (P/G)i,n. If one is given the amount of G, one uses the (P/G)i,n factor to find the equivalent value of P. That is,

P = G(P/G)i,n    (2.21)

F = G{[(1 + i)^n − 1]/i^2 − n/i}    (2.22)

The term in the {} brackets is symbolized by (F/G)i,n. If one is given the amount of G, one uses the (F/G)i,n factor to find the equivalent value of F. That is,

F = G(F/G)i,n    (2.23)
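The gradient factors can be sketched the same way in Python (again with illustrative names); the 10%, 6-period values below are the ones used later in Example 2.9:

    def a_over_g(i, n):
        # Uniform Gradient Uniform Series Factor, (A/G)i,n
        return 1 / i - n / ((1 + i) ** n - 1)

    def p_over_g(i, n):
        # Uniform Gradient Present Worth Factor, (P/G)i,n
        return ((1 + i) ** n - 1) / (i ** 2 * (1 + i) ** n) - n / (i * (1 + i) ** n)

    def f_over_g(i, n):
        # Uniform Gradient Future Value Factor, (F/G)i,n
        return ((1 + i) ** n - 1) / i ** 2 - n / i

    print(round(a_over_g(0.10, 6), 4))  # 2.2236
    print(round(f_over_g(0.10, 6), 3))  # 17.156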
The equations for the nine factors are given in Table 2.2 and numerical values are tabulated
in Appendix A for various values of interest rate, i, and number of periods, n so that the user can
look them up rather than use the actual formulas.
Rather than memorizing which factor is needed for a specific equivalency, think about the
formulas in terms of “units conversion.” That is, if the input to a system has units of X and the output
of that system has units of Y, the system provides a units conversion of (Y/X). Thus, if one is given
A (input) and wants to find G (output), the correct formula to use would be (G/A). Knowing the
value of the interest rate and the number of periods, one can look up or compute the value of the
formula.
Table 2.2: Formulas for discrete compounding

Factor Name                        Converts        Symbol      Formula
Single Payment Compound Amount     to F given P    (F/P)i,n    (1 + i)^n
Single Payment Present Worth       to P given F    (P/F)i,n    (1 + i)^-n
Uniform Series Compound Amount     to F given A    (F/A)i,n    [(1 + i)^n - 1]/i
Uniform Series Sinking Fund        to A given F    (A/F)i,n    i/[(1 + i)^n - 1]
Uniform Series Present Worth       to P given A    (P/A)i,n    [(1 + i)^n - 1]/[i(1 + i)^n]
Capital Recovery                   to A given P    (A/P)i,n    [i(1 + i)^n]/[(1 + i)^n - 1]
Uniform Gradient Present Worth     to P given G    (P/G)i,n    [(1 + i)^n - 1]/[i^2(1 + i)^n] - n/[i(1 + i)^n]
Uniform Gradient Future Value      to F given G    (F/G)i,n    [(1 + i)^n - 1]/i^2 - n/i
Uniform Gradient Uniform Series    to A given G    (A/G)i,n    1/i - n/[(1 + i)^n - 1]
2.5.4 THE USE OF FINANCIAL FUNCTIONS IN EXCEL®
Many cash flow situations can be simulated by using a spreadsheet such as Microsoft Excel®. This
will become more evident in future chapters, but this chapter presents the following useful financial
functions:
Future Value: =FV(rate, nper, pmt, pv, type)
Present Value: =PV(rate, nper, pmt, fv, type)
Annuity: =PMT(rate, nper, pv, fv, type)
Unfortunately, Excel® does not have a built-in function for gradient-type cash flows. That
can, however, be overcome with functions that will be presented in later chapters.
In each of these functions, the variables are as follows:
• rate is the interest rate (as a fraction) per period
• nper is the number of interest bearing periods
• pmt is an annuity (A) sum of money
• pv is a present (P) value sum of money (occurs at time = 0)
• fv is a future (F) value sum of money (occurs at time = nper)
• type is 0 for end of period cash flows and 1 for beginning of period cash flows
It should also be noted that in order to use these functions as equivalents for (P /A), (P /F ),
etc., the values of pmt, pv, and fv need to be input as negative numbers.
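As a cross-check on the spreadsheet example that follows, the same six factor values can be computed directly from the Table 2.2 formulas; the Python sketch below (illustrative only) uses i = 10% and n = 10:

    i, n = 0.10, 10

    factors = {
        "F/A": ((1 + i) ** n - 1) / i,                   # 15.937
        "F/P": (1 + i) ** n,                             # 2.5937
        "P/A": ((1 + i) ** n - 1) / (i * (1 + i) ** n),  # 6.1446
        "P/F": (1 + i) ** -n,                            # 0.38554
        "A/P": (i * (1 + i) ** n) / ((1 + i) ** n - 1),  # 0.16275
        "A/F": i / ((1 + i) ** n - 1),                   # 0.062745
    }
    for name, value in factors.items():
        print(name, round(value, 5))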
An example of a simple Excel® spreadsheet that computes the six functions given above is shown for 10% annual interest rate for 10 years. The actual formulas are shown as well. Recall that one needs to set "type" equal to zero to designate that the cash flows occur at the end of each period.

         A          B (value)     B (formula)
    1    rate       0.1
    2    nper       10
    3    pmt (A)    -1
    4    pv (P)     -1
    5    fv (F)     -1
    6    type       0
    7    F/A        15.937        =FV(B1,B2,B3,0,B6)
    8    F/P        2.5937        =FV(B1,B2,0,B4,B6)
    9    P/A        6.1446        =PV(B1,B2,B3,0,B6)
    10   P/F        0.38554       =PV(B1,B2,0,B5,B6)
    11   A/P        0.16275       =PMT(B1,B2,B4,0,B6)
    12   A/F        0.062745      =PMT(B1,B2,0,B5,B6)

An explanation of the values in the various Excel formulas may be necessary. For example, in the formula that computes F/A (cell B7), the values are as follows: "B1" is the interest rate as a fraction, "B2" is for 10 periods, "B3" is for an annual annuity payment of $1 per year, "0" represents the fact that there is no present value payment, and "B6" defines that the various payments are at the end of the period. Since the formula finds the future value of a $1 annuity, we have effectively computed (F/A). Some additional Excel® financial functions that might be of some interest at this point are:
Effective Interest Rate: =EFFECT(nominal_rate, npery)
Number of periods: =NPER(rate, pmt, pv, fv, type)
The new variables are defined as follows:
• nominal_rate = the nominal annual interest rate (as a fraction)
• npery = the number of compounding periods per year
The effective interest table for 10% nominal interest rate can be created in Excel® as follows
(note that in the case of continuous compounding, npery=1,000,000 is close enough to give the
answer to the desired number of significant digits).
One can compare Table 2.3 with Table 2.1 to see consistency between the calculations in
Excel® and those performed with the specific formula for ieff .
The NPER function is useful for determining how many compounding periods are necessary
to achieve a desired result. For example, one might want to determine how many years it will take
for an original investment to double in value if the interest rate is varied from 1% per year to 25%
per year. This is shown in Table 2.4.
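The same doubling-time question can be answered in closed form, since doubling means (F/P)i,n = (1 + i)^n = 2; the Python sketch below (illustrative only) mirrors the NPER calculation:

    import math

    def periods_to_double(i):
        # Solve (1 + i)^n = 2 for n, the number of compounding periods.
        return math.log(2) / math.log(1 + i)

    for pct in [1, 2, 5, 10, 15, 20, 25]:
        n = periods_to_double(pct / 100)
        print(pct, round(n, 2), round(pct * n, 1))
    # The product (rate in percent) x (periods to double) stays near 70 for
    # modest rates, which is the basis of the "Rule of 72" discussed below.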
The explanation of the values in the NPER formulas in Table 2.4 is as follows: “A3/100”
represents the interest rate as a fraction, “0” is for no annuity payment, “-1” is for a present value
amount of $1, “2” is for a future value of $2, and “0” defines the amounts as end of year payments.
One can also note that the product of the interest rate (as a percentage) and the # of periods to
double the value of the investment varies from 70 to 75. This is commonly known as the “Rule of
72." If one takes 72 and divides by the interest rate (as a percentage), the resultant value is a close approximation of how long it will take for an investment to double.

Table 2.3: Using Excel® to compute effective interest rates for a nominal 10% interest rate.

Table 2.4: Using Excel to compute the number of years needed to double the value of an initial investment.
2.5.5 EXAMPLE PROBLEMS
At this point, it would be beneficial to examine some of the practical applications of these formulas.
Example 2.2
If $10,000 is invested in a fund earning 15% compounded annually, what will it grow to in 10
years?
Solution: F = P(F/P)i,n = 10,000(F/P)15,10 = 10,000(4.0456) = $40,456
16
2. INTEREST AND THE TIME VALUE OF MONEY
Example 2.3
It is desired to accumulate $5,000 at the end of a 15-year period. What amount needs to be
invested if the annual interest rate is 10% compounded semi-annually? Assume the given interest
rate is a nominal rate and that the principal is compounded at 5% per period.
Solution: P = F(P/F)i,n = 5,000(P/F)5,30 = 5,000(0.23138) = $1,157
Example 2.4
What interest rate, compounded annually, will make a uniform series investment (at the end
of each year) of $1,000 equivalent to a future sum of $7,442? The investment period is 5 years.
Solution: F = A(F/A)i,n ⇒ 7,442 = 1,000(F/A)i,5 ⇒ (F/A)i,5 = 7.442
Searching the various interest tables in Appendix A for n = 5 yields i = 20%
Example 2.5
An individual wishes to have $6,000 available after 8 years. If the interest rate is 7% compounded annually, what uniform amount must be deposited at the end of each year?
Solution: A = F(A/F)i,n = 6,000(A/F)7,8 = 6,000(0.09747) = $585
Example 2.6
An individual wishes to place an amount of money in a savings account and, at the end of
one month and for every month thereafter for 30 months, draw out $1,000. What amount must be
placed in the account if the interest rate is 12% (nominal rate) compounded monthly?
Solution: i (monthly) = 0.12/12 = 0.01 (1%)
P = A(P/A)i,n = 1,000(P/A)1,30 = 1,000(25.808) = $25,808
Example 2.7
A principal of $50,000 is to be borrowed at an interest rate of 15% compounded monthly for
30 years. What will be the monthly payment to repay the loan?
Solution: i (monthly) = 0.15/12 = 0.0125(1.25%). Since Appendix A does not contain a
table for that interest rate, one must use the formulas.
A = P(A/P)i,n = 50,000(A/P)1.25,360
= 50,000{[(0.0125)(1 + 0.0125)^360]/[(1 + 0.0125)^360 − 1]}
= 50,000(0.012644) = $632
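When Appendix A has no table for the required rate, the formula is easy to evaluate directly; the following Python sketch (our own, with a helper name of our choosing) reproduces the monthly payment of Example 2.7.

# Python sketch: capital-recovery factor (A/P)i,n and the Example 2.7 payment.
def capital_recovery(i, n):
    # (A/P)i,n = i(1 + i)^n / [(1 + i)^n - 1]
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

i_month = 0.15 / 12                                      # 1.25% per month
print(round(50_000 * capital_recovery(i_month, 360)))    # about 632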
Example 2.8
An individual deposits $1,000 at the end of each year into an investment account that earns
8% per year compounded monthly. What is the balance in his account after 10 years?
Solution: Since the time frame of the deposits (annually) does not match the time frame of
the interest rate (monthly), one must convert to an effective annual interest rate before computing
the correct formulas.
ie = (1 + i/m)^m − 1 = (1 + 0.08/12)^12 − 1 = 0.0830

F = A(F/A)i,n = 1,000(F/A)8.30,10 = 1,000{[(1 + 0.0830)^10 − 1]/0.0830} = 1,000(14.694) = $14,694
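Because the deposit interval (annual) and the compounding interval (monthly) differ, it is easy to apply the wrong rate; the two-step calculation can be sketched in Python as follows (an illustration of ours, not part of the text's Excel work).

# Python sketch: Example 2.8 - effective annual rate, then the (F/A) factor.
nominal, m, years = 0.08, 12, 10
i_eff = (1 + nominal / m) ** m - 1              # about 0.0830
f_over_a = ((1 + i_eff) ** years - 1) / i_eff   # about 14.69
print(round(i_eff, 4), round(1000 * f_over_a))  # 0.083 and about 14694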
Example 2.9
Calculate the future worth of the following 6-year cash flow diagram if the interest rate is 10%
compounded annually.
[Cash flow diagram: receipts of $1,000, $1,200, $1,400, $1,600, $1,800, and $2,000 at the end of years 1 through 6, respectively.]
There are a number of ways to solve this economic problem, which is the case for most cash
flow evaluations. One technique might be shorter in terms of the number of formulas to look up or
calculate, but all will result in the same answer.
Solution 1:
Note that this series of cash flows can be broken into an annuity of $1,000 per year and a
gradient of $200 per year. One can compute the future value of each of these contributions separately
and then add to get the final result.
FAnnuity = A(F/A)i,n = 1,000(F/A)10,6 = 1,000(7.7156) = $7,715.60
FGradient = G(F/G)i,n = 200(F/G)10,6 = 200(17.156) = $3,431.22
F = 7, 715.60 + 3, 431.22 = $11, 147
Solution 2:
Convert the gradient to an equivalent annuity, add this value to the $1,000 annuity and then
convert to the future.
AGradient = G(A/G)i,n = 200(A/G)10,6 = 200(2.2236) = $444.72
ATotal = 1,000 + 444.72 = $1,444.72
F = A(F /A)i,n = 1, 444.72(F /A)10,6 = 1, 444.72(7.7156) = $11, 147
Solution 3:
Treat each cash flow as an individual, single payment, find the future value of each individual
payment and then add to get the total future value.
FCF 1 = P (F /P )i,n = 1, 000(F /P )10,5 = 1, 000(1.6105) = $1, 610.50
FCF 2 = P (F /P )i,n = 1, 200(F /P )10,4 = 1, 200(1.4641) = $1, 756.92
FCF 3 = P (F /P )i,n = 1, 400(F /P )10,3 = 1, 400(1.3310) = $1, 863.40
FCF 4 = P (F /P )i,n = 1, 600(F /P )10,2 = 1, 600(1.2100) = $1, 936.00
FCF 5 = P (F /P )i,n = 1, 800(F /P )10,1 = 1, 800(1.1000) = $1, 980.00
FCF 6 = P (F /P )i,n = 2, 000(F /P )10,0 = 2, 000(1.0000) = $2, 000.00
F = 1,610.50 + 1,756.92 + 1,863.40 + 1,936.00 + 1,980.00 + 2,000.00 = $11,147
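Solution 3 is the most mechanical of the three and is therefore the easiest to script; the Python sketch below (our own illustration) compounds each cash flow forward to year 6 and sums the results.

# Python sketch: future worth at year 6 of the Example 2.9 cash flows at 10%.
i = 0.10
cash_flows = {1: 1000, 2: 1200, 3: 1400, 4: 1600, 5: 1800, 6: 2000}
future_worth = sum(cf * (1 + i) ** (6 - year) for year, cf in cash_flows.items())
print(round(future_worth))   # about 11,147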
Example 2.10
Calculate the present worth of the following 10-year cash flow diagram if the annual interest
rate is 20% compounded annually.
[Cash flow diagram: receipts of $2,000 at the end of year 1, decreasing by $100 each year to $1,100 at the end of year 10.]
Solution: Again, there are a variety of methods to solve this problem. One technique is to
recognize that the cash flow is made up of an annuity of $2,000 and a gradient of −$100.
ATotal = 2,000 − 100(A/G)20,10 = 2,000 − 100(3.0739) = $1,692.61
P = A(P/A)i,n = 1,692.61(P/A)20,10 = 1,692.61(4.1925) = $7,096
2.6 INTEREST FORMULAS FOR CONTINUOUS COMPOUNDING
In the last section, the assumption was made that money was received or disbursed and interest rates
were compounded at the end of each discrete compounding period. In some projects (consider a
banking institution for example), money is received and disbursed on a nearly continuous basis. If
the evaluator wishes to consider the effect of continuous cash flow and/or continuous compounding
of interest, one needs to utilize a slightly different set of formulas that relate P, F, and A.
2.6.1 CONTINUOUS COMPOUNDING FOR DISCRETE PAYMENTS
The following formulas apply to the situation where payments (or withdrawals) to an account are
made at discrete points in time, while the account accumulates interest on a continuous basis:
(P/F)i,n = e^(−in)    (2.24)
(P/A)i,n = (e^(in) − 1)/[e^(in)(e^i − 1)]    (2.25)
(F/A)i,n = (e^(in) − 1)/(e^i − 1)    (2.26)
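For readers who prefer to compute these factors directly rather than from tables, the following Python sketch (our own helper names) evaluates Equations (2.24) through (2.26) for any interest rate i and number of periods n.

# Python sketch: continuous-compounding factors for discrete end-of-period payments.
import math

def pf(i, n):   # (P/F)i,n = e^(-in)
    return math.exp(-i * n)

def pa(i, n):   # (P/A)i,n = (e^(in) - 1) / [e^(in)(e^i - 1)]
    return (math.exp(i * n) - 1) / (math.exp(i * n) * (math.exp(i) - 1))

def fa(i, n):   # (F/A)i,n = (e^(in) - 1) / (e^i - 1)
    return (math.exp(i * n) - 1) / (math.exp(i) - 1)

print(round(pf(0.10, 5), 4), round(pa(0.10, 5), 4), round(fa(0.10, 5), 4))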
2.6.2 CONTINUOUS COMPOUNDING FOR CONTINUOUS PAYMENTS
The other application of continuous compounding is the case where the deposits or withdrawals to
an account are being made on a nearly continuous basis. One example of this situation would be a
credit card company that receives charges and payments on millions of cards throughout each day.
For this case, the following definitions need to be made:
P̄, F̄, Ā = the total amount of funds received over one period (present sum, future sum, or
annuity, respectively).
The following figures demonstrate these definitions:
The appropriate formulas are:
(P/F̄)i,n = [i(1 + i)^(−n)]/[ln(1 + i)]    (2.27)
(F/P̄)i,n = [i(1 + i)^(n−1)]/[ln(1 + i)]    (2.28)
(F/Ā)i,n = (e^(in) − 1)/i    (2.29)
(P/Ā)i,n = (e^(in) − 1)/[i e^(in)]    (2.30)

where i is the nominal interest rate per period.
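Equations (2.27) through (2.30) can be evaluated the same way; in the sketch below (again, our own function names) the barred quantities are passed in as the total amount flowing during a period.

# Python sketch: funds-flow factors for continuous payments, Eqs. (2.27)-(2.30).
import math

def p_over_fbar(i, n):   # [i(1 + i)^(-n)] / ln(1 + i)
    return i * (1 + i) ** (-n) / math.log(1 + i)

def f_over_pbar(i, n):   # [i(1 + i)^(n-1)] / ln(1 + i)
    return i * (1 + i) ** (n - 1) / math.log(1 + i)

def f_over_abar(i, n):   # (e^(in) - 1) / i
    return (math.exp(i * n) - 1) / i

def p_over_abar(i, n):   # (e^(in) - 1) / [i e^(in)]
    return (math.exp(i * n) - 1) / (i * math.exp(i * n))

print(round(f_over_abar(0.08, 1), 4), round(p_over_abar(0.10, 5), 4))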
2.7 PROBLEMS
2.1. Given a nominal rate of 20%, what is the effective annual interest rate if the interest is
compounded under each of the following scenarios:
(a) Quarterly
(b) Monthly
(c) Daily
(d) Continuously
2.2. What is the percentage difference between the effective rates determined by annual and
continuous compounding for nominal interest rates of:
(a) 10%
(b) 20%
(c) 30%
2.3. A company has decided to invest in a project to make a product. The initial investment cost
will be $1,000,000 to be spread over the first two years with $700,000 in the first year and
$300,000 in the second. The plan calls for producing products at the following rates: 5,000
units in year 2; 10,000 in year 3; 30,000 in year 4; 30,000 in year 5; 10,000 in year 6; and
5,000 in year 7. Products will be sold for $50 each throughout the life of the project and
cash operating expenses will be $60,000 per year for years 2 through 7. Construct a cash
flow diagram for the project.
2.4. Example 2.1 presented a cash flow diagram for an automobile loan as seen through the eyes
of the lender. Construct the corresponding cash flow diagram as seen through the eyes of
the borrower.
2.5. A $1,000 investment has grown to $2,476 in 8 years. What interest rate (compounded
annually) has it earned?
2.6. What present sum is equivalent to a future sum of $25,000 (after 5 years) at an interest rate
of 8% compounded annually?
2.7. If $200 is placed at the end of each year for 10 years in an account earning 7% interest
compounded annually, what amount will be accumulated at the end of 10 years?
2.8. What uniform series would be equivalent to a future sum of $10,000 if the series extends
for 10 years and earns 12% interest compounded semi-annually?
2.9. An annual deposit of $1,000 is placed in an account at the beginning of each year for 5 years.
What is the present value of that series if interest is 12% compounded annually? What is
the future value at the end of the 5th year?
2.10. What will be the future value, 10 years from the first payment, of the series of deposits in
problem 2.9?
2.11. What monthly car payments for the next 30 months are required to amortize a loan of
$4,000 if interest is 12% compounded monthly?
2.12. Payments of $1,000 are to be made at the end of each year for the next 3 years. What is the
present worth of the three payments if interest is 12% compounded monthly? What series
of monthly payments would be equivalent to the $1,000 year payments?
2.13. An individual agrees to lease a building to a firm with yearly payments shown on the cash
flow diagram below.What is the future worth of the payments if interest is 15% compounded
annually?
[Cash flow diagram, years 0 through 10: payments of $3,000 at the end of years 1 through 4, then payments increasing by $300 per year from $3,300 at the end of year 5 to $4,800 at the end of year 10.]
2.14. An engineer wishes to buy a house but can only afford monthly payments of $1500. 30-
year loans are available at 5.75% interest compounded monthly. If the engineer can make
a $20,000 down payment, what is the price of the most expensive house that the engineer
can afford to purchase?
2.15. A young woman placed $200.00 in a savings account paying monthly interest. After one
year, her balance has grown to $214.00. What was the effective annual interest rate? What
was the nominal annual interest rate?
2.16. Find the value of cash flow X that will make the two cash flows equivalent. Interest is 10%
compounded annually. Time on the diagram is given in years.
[First cash flow diagram, years 0 through 6: payments of $100, $120, $140, and $160 (a $20-per-year gradient). Second cash flow diagram, years 0 through 2: three equal payments of X.]
2.17. It takes a full $10,000 to put on a Festival of Laughingly Absurd Walks (FLAW) each year.
Immediately before this year’s FLAW, the sponsoring committee finds that it has $40,000
in an account paying 15% interest compounded annually. After this year, how many more
FLAWs can be sponsored without raising more money?
2.18. If $10,000 is borrowed at 12% interest compounded monthly, what would the monthly
payments be if the loan is for 5 years? What would the annual payment be if the loan is for
5 years? Assume all payments occur at the end of a given period.
2.19. Calculate the value of the following cash flow diagram at the end of year 4. Interest is 10%
per year compounded annually.
[Cash flow diagram over years 0 through 10 with cash flows of $1,000, $500, $500, $750, $1,000, $800, $600, $400, and $2,000.]
2.20. Calculate the future worth 5 years from now of a present sum of $2,000 if:
(a) Annual interest is 10% compounded annually
(b) Annual interest is 10% compounded quarterly
(c) Annual interest is 10% compounded continuously
2.21. Calculate the present value of 10 uniform $2,000 payments if:
(a) Annual interest is 10% compounded continuously and payments are received at the
end of each year
(b) Annual interest is 10% compounded continuously and payments are received contin-
uously over the year
2.22. A gas station sells $125,000 worth of gasoline over the course of a year. If this revenue is
collected and deposited continuously into an account that earns 8% interest, compounded
annually, how much money would the station have in its account at the end of the year?
2.23. Develop an Excel® spreadsheet that computes the six functions — (P /A), (P /F ), (F /A),
(F /P ), (A/P ), (A/F ) — for a fixed interest rate and the number of periods ranging from
1 to 100.
2.24. Use the Excel® NPER function to determine how long it will take for an investment to
triple in value at interest rates of 1%, 5%, 10%, 15%, 20%, and 25%. Can you determine
an approximate “Rule” for how to quickly calculate how long it takes for an investment to
triple in value?
CHAPTER 3
Project Evaluation Methods
3.1 INTRODUCTION
In order to make informed decisions on one or more potential investments, methods must be devel-
oped that provide a numerical evaluation of a project. Both equivalence and rate of return methods
will be developed in this chapter.
Consider the following cash flow diagrams that contain income generating streams.
Cash flow A: a single receipt of $1,000,000 at time 0 (periods 1 through 20 contain no cash flows).
Cash flow B: receipts of $100,000 at the end of each period for 20 periods.
Since there are no cash flows for A after period 0, the present value of cash flow A is simply
$1,000,000. For B, since the $100,000 occurs at the end of each period for 20 periods, multiplying
the $100,000 by (P /A)i,20 will yield a present value for the interest rate used in the formula. For
example, if the interest rate is 12% per year, the present value would be $746,944. If the question
“which cash flow represents the largest present value?” is asked, the answer is obviously cash flow A.
Now consider a different question. Suppose you have just won a lottery and you have a choice
of receiving $1,000,000 now or receiving $100,000 at the end of each year for 20 years. If interest
is expected to be constant at 12% for the next 20 years as in the previous paragraph, which set of
payments would you prefer? Since this question is represented by the cash flow diagrams shown
above and the interest rate of 12%, the choice can be made by analyzing the present values of the two
cash flow diagrams. Since cash flow A yields a larger present value than cash flow B at an interest
rate of 12%, the proper choice would be to accept option A.
However, what if the interest rate is expected to be 0% over the 20 year period? What would
the best choice be under that scenario? If interest is 0%, then money is worth the same no matter
when it occurs. At 0% interest, the present value of cash flow B becomes $2,000,000 and cash flow
B becomes the correct choice.
The discussion in the previous two paragraphs implies that at some interest rate between 0%
and 12%, the two cash flow diagrams are equivalent. A trial and error solution yields this interest
rate to be about 7.75%.
This discussion has just introduced two of the more popular techniques (equivalence methods
and rate of return methods) used to evaluate the financial value of projects and help the evaluator
choose between multiple projects. These will be discussed in more detail later in this chapter.
3.2 ALTERNATE USES OF CAPITAL
Investment analysis or project evaluation involves making a decision between alternative uses of
capital. A cash flow diagram is constructed for each alternative according to the specific parameters
of that alternative and evaluated using the concepts of time value of money that were discussed in
Chapter 2. The results of the evaluations are then compared and a decision is made as to which
alternative is the best option.
Several evaluation methods can be used in analyzing investment opportunities. Two general
types of calculations that will be introduced here are: (1) equivalence methods which involve the
determination of an equivalent present, annual, or future worth of a cash flow diagram given a
specific interest rate; and (2) rate of return methods which involve the determination of an interest
rate produced by the cash flow diagram.
3.3 MINIMUM ACCEPTABLE RATE OF RETURN (MARR)
When using either the equivalence method or the rate of return method for comparing alternatives,
a minimum acceptable rate of return, MARR, needs to be defined. The value of MARR is set as
the lower limit for investment which is acceptable to an individual or a company. The MARR may
vary from individual to individual, company to company, and even within the structure of a specific
company.
The lower bound for the MARR is generally set at the cost of capital, which reflects the
expense of obtaining funds for a given project. How much higher the MARR is above the cost
of capital depends on a particular company’s or individual’s position and the particular project. For
example, an individual who borrows money at 5% interest rate in order to invest in a profit-generating
project would have an MARR of at least 5%, but would probably want to set the MARR at, say, 10%
in order to generate a net increase in his/her personal worth based on the estimated profitability of
the project. Similarly, if individuals are using their own funds to invest, their cost of capital would
be the interest rate that their money is currently earning in a savings account, certificate of deposit,
or other investments. A company’s MARR is usually set by the portfolio of projects in which the
company can invest. That is, what is the minimum interest that a company can earn by investing its
money in what it would consider to be a “guaranteed” success? For engineers performing economic
evaluations for their companies, the MARR will be provided by upper management so that they
will not have to make that determination.
3.4 EQUIVALENCE METHODS
In the equivalence methods to determine either the acceptability of a single project or to choose
the “best” project, the MARR is used as the interest rate in present, future, or annuity calculations.
A net present value, NPV (sometimes called the net present worth), net future value, NFV, or net
annual value, NAV, is calculated by one of the following equations:
NPV = Σ [Present Value of Cash Flows with i = MARR]    (3.1)
NFV = Σ [Future Value of Cash Flows with i = MARR]    (3.2)
NAV = Σ [Annuity Value of Cash Flows with i = MARR]    (3.3)
Since N P V , N F V , and N AV are related by the interest formulas developed in Chapter 2, any one
of the three calculations will yield the same conclusion (in terms of economic viability of the project)
as the other two. Because of this fact, most analysts concentrate on the NP V method, as do the
authors of this text.
3.5 NET PRESENT VALUE
3.5.1 ANALYSIS OF A SINGLE INVESTMENT OPPORTUNITY
For a single investment opportunity, the NP V would be calculated using the MARR as the interest
rate. A positive value for N P V indicates that the project which is represented by the cash flow
diagram earns an actual interest rate greater than the MARR, a negative value for NP V indicates
that it earns an actual interest rate less than the MARR, and an NP V value of zero indicates that it
earns the MARR. Since the MARR represents the decision point for determining the viability of
a project for a particular investor, a positive NP V would indicate that the project is an acceptable
one.
Example 3.1
Consider the project represented by the following cash flow diagram. The project requires an
initial investment of $1,000 that returns positive cash flows as shown. The MARR is 10%.
[Cash flow diagram: an investment of $1,000 at time 0 followed by receipts of $500, $600, $700, $800, and $900 at the end of years 1 through 5, respectively.]
N P V = −1000 + 500(P /A)10,5 + 100(P /G)10,5
= −1000 + 500(3.7908) + 100(6.8618) = $1582
Since the N P V is greater than zero, this project would be an acceptable one to the investor.
An alternative method to calculate the NP V is to treat each individual cash flow as a future
value at various values of n. While this technique might require more formulas than recognizing
annuities and gradients in the cash flow diagram, it will always yield a correct value for NP V :
N P V = − 1000 + 500(P /F )10,1 + 600(P /F )10,2 + 700(P /F )10,3
+ 800(P /F )10,4 + 900(P /F )10,5
N P V = $1582
In Excel®, one can use the NP V function to make the same calculation. However, some
caution is necessary.
The function is:
= NP V (rate, value1, value2, …).
where,
rate = interest rate per period (as a fraction).
value1, value2, ... = cash flows that occur at the end of period 1, end of period 2, etc.
One can see that the NPV function does not include the investment period 0. Therefore, in order
to calculate the N P V of the entire cash flow diagram, one needs to include the initial investment.
For example, the complete Excel formula to compute the NP V of a series of cash flows would be
as shown in Figure 3.1:
= CF0 + NP V (rate, value1, value2,…)
One can see that the results from Excel match the NP V calculations from the other two
methods.
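Outside of Excel®, the same calculation takes only a few lines; the Python sketch below (our own, not the authors' spreadsheet) includes the year-0 cash flow in the list, so no separate CF0 term is needed.

# Python sketch: NPV of the Example 3.1 cash flows at MARR = 10%.
def npv(rate, cash_flows):
    # cash_flows[0] is the year-0 amount and is therefore not discounted
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

print(round(npv(0.10, [-1000, 500, 600, 700, 800, 900])))   # about 1582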
[Figure 3.1: Demonstration of the use of the NPV function in Excel®. The spreadsheet shows the MARR (0.1) in cell B1, the Example 3.1 cash flows (−1000, 500, 600, 700, 800, 900) in cells B4:B9, and the formula =B4+NPV(B1,B5:B9), which returns an NPV of 1582.]

Example 3.2
Consider the project represented by the following cash flow diagram. The project requires an
initial investment of $1,000 that returns positive cash flows as shown. The MARR is 10%.

[Cash flow diagram: an investment of $1,000 at time 0 followed by receipts of $150, $200, $250, $300, and $350 at the end of years 1 through 5, respectively.]
N P V = −1000 + 150(P /A)10,5 + 50(P /G)10,5
= −1000 + 150(3.7908) + 50(6.8618) = −$88
Since the N P V is negative, the project will not earn the MARR and, therefore, is not accept-
able to this investor. Now a question arises: What does the investor do with the $1000? Since the
time-line represents the only ‘new’ investment opportunity available to the investor and the NP V
analysis suggests that it is not acceptable, the investor will choose to do nothing with the $1000.
The concept of the “do nothing” project will be defined in the next section.
3.5.2 DO NOTHING PROJECT
Example 3.2 indicates that there is always a choice to “do nothing” with investment funds. That is,
even if a project, like the one described in Example 3.2, is the only new investment available and
the financial analysis indicates that it is unacceptable, an investor can always choose to keep the
proposed funds, $1000 in the case of Example 3.2, where they currently are and “do nothing” with
those funds.
The “do nothing” project does not mean that the investment funds are going to be buried
in a can in the backyard where they earn nothing. The “do nothing” project means that the funds
are already invested in a project that is earning the MARR. As mentioned before, for individuals,
this could mean leaving their funds in their savings accounts. By definition, the NP V of the “do
nothing" project is zero. Thus, when a single investment opportunity is being evaluated, one is always
comparing it against a second opportunity which is to leave the money in the “do nothing” project.
3.5.3 ANALYSIS OF MULTIPLE INVESTMENT OPPORTUNITIES
For the purpose of this initial discussion of investing in multiple projects, assume that all of the
prospective projects to be evaluated require the same initial investment, that the investor only has
enough funds to invest in one of the projects, and that the decision will be based solely on NP V
analysis. These assumptions will be removed in subsequent chapters and discussed further. In addi-
tion, if at least one of the proposed projects has a positive NP V , then the “do nothing” project need
not be considered.
Example 3.3
Consider the following two investment opportunities. The investor’s MARR is 10% and the
investor only has enough funds to invest in one of the projects. Which one should be chosen?
Project A:
[An investment of $800 at time 0 followed by receipts of $215 at the end of each of years 1 through 5.]
Project B:
[An investment of $800 at time 0 followed by receipts of $100 at the end of years 1 through 4 and $900 at the end of year 5.]
N P V for Project A = −800 + 215(P /A)10,5 = $15.0
N P V for Project B = −800 + 100(P /A)10,5 + 800(P /F )10,5 = $75.8
Both projects show positive values of NP V . Therefore, both would be acceptable as long as
the investor had at least $800 to invest. In addition, the “do nothing” alternative does not need to be
considered. If the investor only has enough funds to invest in one of the projects, the NP V values
indicate that Project B is the best economic choice.
3.6 RATE OF RETURN METHODS
The second general type of project evaluation technique involves the determination of an unknown
interest rate for a given cash flow diagram. This interest rate is usually referred to as a rate of return.
There are several rates of return that can be calculated. Two will be presented in this chapter. The
first is called the Internal Rate of Return (IRR) which is also known as the Discounted Cash Flow
Rate of Return (DCFROR). The second is the External Rate of Return (ERR) which is also known
as the Growth Rate of Return. The I RR is the rate of return earned by a particular individual’s or
company's investment. The ERR represents the overall growth of invested dollars for an individual or
a company. The differences will become apparent in the following discussion and example problems.
3.6.1 INTERNAL RATE OF RETURN (IRR)
The I RR is defined as the interest rate which discounts a series of cash flows to an NP V value of
zero:
NPV = 0 = Σ [Present Value of Cash Flows with the interest rate equal to IRR]    (3.4)

The equation can also be written as:

NPV = 0 = Σ (from j = 0 to n) CFj (P/F)IRR,j = Σ (from j = 0 to n) CFj /(1 + IRR)^j    (3.5)
where, CFj = cash flow for period j
j = period of cash flow
n = total number of periods
It should be noted that one cannot normally solve explicitly for the I RR from Equation 3.5.
Therefore, a trial and error solution is usually required. Graphically, the relationship between N P V ,
interest rate, and I RR is demonstrated in Figure 3.2.
Once the I RR is calculated, it is then compared with the MARR. If the I RR is greater than
the MARR, the project is considered to be acceptable to the investor.
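Since Equation 3.5 has no closed-form solution for the IRR, any simple root-finding scheme can stand in for table interpolation; the Python sketch below (our own) brackets the IRR and bisects until the NPV is essentially zero.

# Python sketch: IRR by bisection on NPV(i) = 0 (assumes a single sign change).
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-7):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid          # NPV still positive: the root lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

# cash flows of Project B from Example 3.3; compare with Figure 3.2
print(round(irr([-800, 100, 100, 100, 100, 900]), 3))   # about 0.125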
[Plot of NPV ($) versus interest rate (fraction); the curve declines with increasing interest rate and crosses NPV = 0 at IRR = 0.125.]
Figure 3.2: General form of net present value as a function of interest rate. (Note, for this example, when
NP V = 0, the interest rate, or I RR, is 0.125.)
Example 3.4
Consider the two investment opportunities examined in Example 3.3. The investor’s MARR
is 10% and the investor only has enough funds to invest in one of the projects. What are the I RRs
for each project?
Project A:
[An investment of $800 at time 0 followed by receipts of $215 at the end of each of years 1 through 5.]
Project B:
[An investment of $800 at time 0 followed by receipts of $100 at the end of years 1 through 4 and $900 at the end of year 5.]
As noted, the calculation of I RR usually involves a trial and error approach. While the NP V
versus interest rate curve is not a straight line, it is generally accurate enough to bracket the I RR
solution within 5% and then linearly interpolate for the answer.
Project A: NPV for Project A = −800 + 215(P/A)i,5
At i = 10%, NPV = $15.0; at i = 15%, NPV = −$79.3.
Interpolating for IRR: IRR = 10.0 + [(15.0 − 0)/(15.0 − (−79.3))](15.0 − 10.0) = 10.8%
Project B: NPV for Project B = −800 + 100(P/A)i,5 + 800(P/F)i,5

Interest rate, %    NPV
10.0                75.8
15.0                −67.0

Interpolating for IRR: IRR = 10.0 + [(75.8 − 0)/(75.8 − (−67.0))](15.0 − 10.0) = 12.6%. It should be
noted that Figure 3.2 was generated with the cash flows from Project B. Thus, the "true" answer for
IRR is 12.5% compared to the interpolated value of 12.6%.
In this example, the I RRs of both projects are greater than the investor’s MARR, so both
projects are acceptable. It would appear that since the I RR of Project B is greater than the I RR of
Project A, then Project B is the best alternative. This is, indeed, the proper interpretation – but only
because the initial investment values for both projects were the same. One must be very careful in
ranking projects by I RR values as will be shown in Chapter 5.
3.6.2 SPREADSHEET FORMULA FOR IRR
Excel® has a built-in function to calculate Internal Rate of Return.
The function is:
where,
= I RR(values, guess)
values = cash flows that occur for the project
guess = initial estimate of the IRR (as a fraction)
This function automatically takes care of the year 0 cash flow without having to include it as a
separate term such as was necessary in the NP V calculation with Excel®. One can see that the cash
flows in Figure 3.3 are the same as Project B in the previous example.
[Figure 3.3: Demonstration of the use of the NPV and IRR functions in Excel®. The spreadsheet shows the MARR (0.1) in cell B1, the Project B cash flows (−800, 100, 100, 100, 100, 900) in cells B4:B9, and the formulas =B4+NPV(B1,B5:B9) and =IRR(B4:B9,0.1), which return 75.8 and 12.5%, respectively.]
As in Figure 3.2, Excel provides the “true” value for I RR without the need for a trial and
error solution and without interpolating.
3.6.3 EXTERNAL RATE OF RETURN (ERR)
The External Rate of Return (ERR) or Growth Rate of Return is found by determining the interest
rate that will satisfy the following equation.
|Σ (from j = 0 to n) Cj (P/F)MARR,j| = [Σ (from j = 0 to n) Ij (F/P)MARR,n−j] (P/F)ERR,n    (3.6)
where, Cj = negative cash flow at period j
Ij = positive cash flow at period j
n = life of project
The equation states that positive cash flows (Ij s) derived from the project are reinvested at the
MARR to generate a future value, which is called FI , at the end of the project life. All negative cash
flows (investments) are brought back in time at the MARR to generate a present value, which is
called PC, at year zero. The interest rate which will then discount FI to a value equal to the value of
PC is determined to be the ERR.
Another way of looking at the external rate of return is to set up a second project which is
called the reinvestment project. The negative cash flows for the reinvestment project are the positive
cash flows from the original project. A future value of the cash flows of the reinvestment project is
determined using the MARR as the interest rate (FI ). The original project and reinvestment project
are then added together to give a third project. The positive cash flows from the original project and
the costs from the reinvestment project should have netted out to zero. The remaining cash flows for
the third project will be the negative cash flow at year zero, any other negative cash flows from the
original project at the year of occurrence, and the future value determined for the second project. All
negative cash flows are brought back to time zero at the MARR to generate a present value (PC).
The ERR is then determined by finding the interest rate which will bring the future value to a year
zero value equal to the present value of the negative cash flows.
The ERR method has a calculation advantage over the I RR method in that the ERR can
be solved for directly without a trial and error procedure. The steps in the calculation procedure are:
PC = |Σ (from j = 0 to n) Cj (P/F)MARR,j|    (3.7)

FI = Σ (from j = 0 to n) Ij (F/P)MARR,n−j    (3.8)

ERR = (FI/PC)^(1/n) − 1    (3.9)
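Equations (3.7) through (3.9) translate almost directly into code; the Python sketch below (our own) computes PC, FI, and the ERR from a list of cash flows.

# Python sketch: External Rate of Return from Eqs. (3.7)-(3.9).
def err(cash_flows, marr):
    n = len(cash_flows) - 1
    pc = sum(-cf / (1 + marr) ** j for j, cf in enumerate(cash_flows) if cf < 0)
    fi = sum(cf * (1 + marr) ** (n - j) for j, cf in enumerate(cash_flows) if cf > 0)
    return (fi / pc) ** (1.0 / n) - 1

# the Example 3.3 cash flows with MARR = 10%
print(round(err([-800, 215, 215, 215, 215, 215], 0.10), 3))   # about 0.104
print(round(err([-800, 100, 100, 100, 100, 900], 0.10), 3))   # about 0.120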
Example 3.5
Consider the two investment opportunities examined in Example 3.4. The investor’s MARR
is 10% and the investor only has enough funds to invest in one of the projects. What are the ERRs
for the projects?
Project A:
[An investment of $800 at time 0 followed by receipts of $215 at the end of each of years 1 through 5.]
Project B:
[An investment of $800 at time 0 followed by receipts of $100 at the end of years 1 through 4 and $900 at the end of year 5.]
Project A:
PC = | − 800| = 800
FI = 215(F /A)10,5 = 1312.6
ERR = (1312.6/800)^(1/5) − 1 = 0.104 = 10.4%
Project B:
PC = | − 800| = 800
FI = 100(F /A)10,5 + 800 = 1410.5
ERR = (1410.5/800)^(1/5) − 1 = 0.120 = 12.0%
In this example, the ERRs of both projects are greater than the investor’s MARR, so both
projects are acceptable. It would appear that since the ERR of Project B is greater than the ERR
of Project A, then Project B is the best alternative. This is, indeed, the proper interpretation – but
only because the initial investment values for both projects were the same. Again, one must be very
careful in ranking projects by ERR values as will be shown in Chapter 5.
One additional observation can be made about the relationship between MARR, I RR, and
ERR. The ERR will always lie between the MARR and the I RR. Thus,
MARR ≤ ERR ≤ I RR or MARR ≥ ERR ≥ I RR
Example 3.6
Consider the investment opportunity below. The investor’s MARR is 10%. What are the
N P V , I RR, and ERR values for the project?
Project A:
[An investment of $1,000 at time 0; receipts of $500 at the end of years 1 and 2; a cost of $200 at the end of year 3; and receipts of $500 at the end of years 4 and 5.]
NPV:
NPV = −1000 + 500(P/F)10,1 + 500(P/F)10,2 − 200(P/F)10,3 + 500(P/F)10,4 + 500(P/F)10,5 = $369.4

IRR:
Interest rate, %    NPV
10.0                369.4
20.0                90.2
30.0                −100.8
25.0                −13.8

Interpolating between 20% and 25%: IRR = 20.0 + [(90.2 − 0)/(90.2 − (−13.8))](25.0 − 20.0) = 24.3%
ERR:
PC = | − 1000 − 200(P /F )10,3| = $1150.3
FI = 500(F /P )10,4 + 500(F /P )10,3 + 500(F /P )10,1 + 500 = $2447.6
ERR = (2447.6/1150.3)^(1/5) − 1 = 0.163 = 16.3%
All three economic indicators show that this project is an acceptable one.
3.6.4 SPREADSHEET FORMULA FOR ERR
Excel® has a built-in function that can be used to calculate the External Rate of Return.
The function is:
= MI RR(values, finance_rate, reinvestment_rate)
where,
values = cash flows that occur for the project
finance_rate = interest rate for discounting the negative cash flows to year 0 (as a fraction)
reinvestment_rate = interest rate for reinvesting the positive cash flows to year n (as a fraction)
One needs to set both the finance_rate and the reinvestment_rate to MARR. As with I RR, this
function automatically takes care of the year 0 cash flow without having to include it as a separate
term. Figure 3.4 demonstrates this formula (along with NPV and IRR) for the cash flows given in
Example 3.6.

[Figure 3.4: Demonstration of the use of the NPV, IRR, and MIRR (ERR) functions in Excel®. The spreadsheet shows the MARR (0.1) in cell B1, the Example 3.6 cash flows (−1000, 500, 500, −200, 500, 500) in cells B4:B9, and the formulas =B4+NPV(B1,B5:B9), =IRR(B4:B9,0.1), and =MIRR(B4:B9,B1,B1), which return 369.5, 24.3%, and 16.3%, respectively.]
3.7 THE REINVESTMENT QUESTION IN RATE OF RETURN
CALCULATIONS
The virtues of the I RR calculation have been argued for years by evaluators. When the I RR method
was first introduced, it was met with a great deal of enthusiasm and is still one of the most popular
evaluation methods used. Surveys have indicated that a vast majority of the companies polled use
I RR either by itself or in conjunction with other methods when evaluating projects. However, in
spite of the popularity of the I RR method, many evaluators still question its meaning and validity.
The basic question has to do with whether or not a reinvestment of incomes is implied in the
calculation procedure. That is, one argument is that in order for the original project investment to
“earn” the I RR, the positive cash flows generated by the project must be reinvested in another project
that “earns” the same I RR. The other argument is that reinvestment is not necessary to “earn” the
I RR. In fact, both arguments may be true depending on the evaluator’s perception of what is meant
by the phrase “earning the IRR.”
To begin the discussion of the reinvestment question, consider Example 3.7.
Example 3.7
An investment of $5000 will yield $1931.45 at the end of each year for 4 years. What is the
value of the project’s I RR? If the MARR is 15%, what is the project’s ERR?
[Cash flow diagram: an investment of $5,000 at time 0 followed by receipts of $1,931.45 at the end of each of years 1 through 4.]
IRR:
NPV = −5000 + 1931.45(P/A)i,4
For NPV = 0, (P/A)IRR,4 = 2.5887
Examining the interest tables in Appendix A, one can determine that the IRR is 20.0%.

ERR:
PC = |−5000| = $5000
FI = 1931.45(F/A)15,4 = $9644
ERR = (9644/5000)^(1/4) − 1 = 0.178 = 17.8%
By definition, the calculation of ERR requires that the incomes be reinvested at the MARR of
15%. If the MARR had been higher, say 18%, the value of the ERR would have been higher. If
the MARR were 20%, one can show that the ERR is now equal to 20% (same as the I RR). Thus,
if the interest rate used for the reinvestment of incomes and for finding the present value of the
costs (negative cash flows) is the I RR, then the values of MARR, I RR, and ERR will be identical.
While not shown here, this can be demonstrated, mathematically, for any set of cash flows.
Now, let’s expand on this example in order to determine the effect of different perceptions of
an investment “earning” a particular interest rate.
3.7.1 PERCEPTION #1
The first perception of an investment “earning” a particular interest rate parallels the concept of
investing money in a savings account for a specified period of time. In this perception, “earning”
means that an initial investment will yield a future value given by (F /P )i,n. Using the values from
this example, a $5000 investment earning 20% (the I RR) for 4 years should result in a future sum
of:
F = 5000(F /P )20,4 = $10, 368
However, if the individual cash flows of $1931.45 (recall that these cash flows yielded an I RR of
20%) were buried in a can under a tree (thus earning no interest), the total future accumulated
amount would be:
F = (4)(1931.45) = $7, 725.80
Since the four individual cash flows yield a future sum significantly less than $10,368, the initial
investment has not “earned” a 20% interest rate according to this perception of “earning.” In fact,
the actual rate of return would be:
i = (7725.80/5000)^(1/4) − 1 = 0.115 = 11.5%, not 20%!
However, if the individual cash flows were reinvested in an account that earned 20% interest,
the future sum accumulated in that account would be:
F = 1931.45(F /A)20,4 = $10, 368
and the “earned” interest rate would indeed be 20%. Thus, in this perception, in order to “earn” the
I RR (20%) interest rate on the entire initial investment ($5,000), any cash flows received before the
end of the project must be reinvested in another project that has the same I RR.
3.7.2 PERCEPTION #2
The second perception more closely parallels the concept of making a loan to a project and having
that loan be paid back at some interest rate. In this perception, interest is “earned” only on the portion
of the total loan that is still unpaid. The unpaid portion of the loan is also known as the unamortized
portion.
Again, consider the cash flows in Example 3.7. During the first year, interest is earned on
the entire $5000 investment (or loan). The required interest amount at the calculated I RR of 20%
would be:
I1 = 5000(0.20) = $1000
This means that $931.45 can be used to “payback” a portion of the original investment, leaving an
unamortized amount of $4,068.55. The required interest amount in the second year would then be:
I2 = 4068.55(0.20) = $813.71
The remainder of that year's cash flow, $1,117.74, would be used to further reduce the unamortized
portion of the investment to $2,950.81. Table 3.1 summarizes this sequence for the entire project
life.

Table 3.1: Amortization table for a loan.
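The amortization logic of perception #2 is easy to verify with a short Python sketch (our own code, not the authors' table): interest is charged only on the unamortized balance each year.

# Python sketch: amortization of the Example 3.7 investment at the 20% IRR.
balance, rate, payment = 5000.0, 0.20, 1931.45
total_interest = 0.0
for year in range(1, 5):
    interest = balance * rate        # interest "earned" on the unamortized portion
    principal = payment - interest   # the rest of the cash flow repays the investment
    balance -= principal
    total_interest += interest
    print(year, round(interest, 2), round(principal, 2), round(balance, 2))
print("total interest:", round(total_interest, 2))   # about 2,725.8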
Note that the total interest “earned” is the same as would have been “earned” under perception
#1 if the cash flows were not reinvested. However, banking institutions agree that this repayment
scheme has indeed “earned” 20% on the original loan of $5,000.
In the opinion of the authors, the final conclusion is that the question of whether reinvestment
of the cash flows at the I RR must occur or not is really more of an issue of perceiving what is meant
by “earning a return.” Banking institutions readily “invest” in projects via loans to companies or
individuals and receive the I RR as defined in perception #2 without automatic reinvestment at that
same rate. However, an individual or company that is expecting to generate a future sum of money
based on earning the I RR on the original investment for the entire life of the project must depend
on reinvestment of the cash flows at that specific I RR in order to actually have the desired future
sum.
It should be noted that, independent of the reinvestment question, I RR analysis still results
in a powerful economic evaluation tool.
3.7.3 FINAL COMMENTS ON ERR AND IRR RELATIONSHIPS
The ERR is a measure of the growth of the investment dollars. The I RR does not have the same
meaning since it is a measure of the project profitability only. If a company wants a true measure of
its growth based on a specific investment, then ERR analysis should be used.
Both the I RR and ERR are valid investment analysis techniques and, if applied correctly,
will yield the same conclusion regarding the viability of an investment to the company or individual.
It will be shown in the next section that the ERR method has some advantages in particular analysis
situations.
3.8 ACCELERATION PROJECTS
When a series of cash flows changes from a positive value to a negative value (or negative to positive)
more than once, the cash flows may generate multiple positive real solutions to the IRR equation. The
number of solutions is governed by Descartes' rule of signs. The rule states that if the terms of a polynomial
with real coefficients are ordered by descending variable exponent, then the number of positive
roots of the polynomial is at most equal to the number of sign differences between consecutive nonzero
coefficients. Since the I RR equation can be rearranged to form a polynomial of order n, this rule
will apply since the coefficients will be related to the cash flows.
A series of cash flows with more than one sign change is called an acceleration project. This type
of project is created when a second capital investment must occur after one or more years of positive
incomes. For example, consider a manufacturing facility that will require significant upgrading after
several years. The multiple values of I RR rates calculated when there are multiple sign changes are
difficult to interpret as to which might be the correct return on investment.
Since the ERR equation does not form a polynomial, it always has a unique answer and,
therefore, should be the rate of return technique of choice in acceleration projects. A modified I RR
calculation can be made by finding the present value of all of the negative cash flows by discounting
to year 0 at the MARR and then using the normal I RR equation. It should be noted that the investor
can always use the equivalence methods (NP V specifically) in this situation without difficulty.
In Example 3.6, a cash flow was presented that had sign changes between the 2nd and 3rd year
and the 3rd and 4th year. In this case, the analyst should be aware that multiple positive values of I RR
might exist. For that specific example, the nth order polynomial that is created by the NP V = 0
equation is developed as follows:
−1000 + 500(P/F)IRR,1 + 500(P/F)IRR,2 − 200(P/F)IRR,3 + 500(P/F)IRR,4 + 500(P/F)IRR,5 = 0

−1000 + 500/(1 + IRR) + 500/(1 + IRR)^2 − 200/(1 + IRR)^3 + 500/(1 + IRR)^4 + 500/(1 + IRR)^5 = 0

IRR^5 + 4.5 IRR^4 + 7.5 IRR^3 + 5.7 IRR^2 + 1.4 IRR − 0.8 = 0
Since the 5th order polynomial only has one sign change, there is only one positive value of
I RR for the cash flows in Example 3.6. Example 3.8 will demonstrate a situation where more than
one positive value exists.
Example 3.8
Given the following cash flow diagram, plot the NP V versus interest rate and determine the
two positive values of IRR that would be predicted by Descartes' rule. Assume an MARR of 5%.

[Cash flow diagram: a cost of $100 at year 0; receipts of $90, $120, and $80 at the end of years 1 through 3; and costs of $90, $80, and $50 at the end of years 4 through 6.]
Solution: From the plot of NP V versus interest rate and the Excel® spreadsheet, it can be
seen that there are two values for I RR: 9.1% and 57.2%. One can use Excel® to find both rates of
return by adjusting the initial guess. An initial guess of 10% will yield the 9.1% value and an initial
guess of 50% will yield the 57.2% value. This creates an unfortunate situation in that one must have
an idea of the value of the larger root in order to have Excel® compute it.
The 6th order polynomial that could be developed is:
IRR^6 + 5.1 IRR^5 + 9.3 IRR^4 + 5.4 IRR^3 − 2.7 IRR^2 − 3.1 IRR + 0.3 = 0
One can see that there are two sign changes in the list of terms and, therefore, two positive values
for I RR.
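One way to locate both roots without plotting is to scan the NPV over a grid of interest rates and note where its sign changes; the Python sketch below (our own) does this for the Example 3.8 cash flows.

# Python sketch: bracket the multiple IRRs of Example 3.8 by scanning NPV for sign changes.
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

cfs = [-100, 90, 120, 80, -90, -80, -50]
rates = [k / 1000.0 for k in range(1, 1000)]       # 0.1% to 99.9%
for r_lo, r_hi in zip(rates, rates[1:]):
    if npv(r_lo, cfs) * npv(r_hi, cfs) < 0:        # a sign change brackets a root
        print(f"IRR between {r_lo:.3f} and {r_hi:.3f}")   # near 0.091 and near 0.572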
As mentioned before, the multiple values of IRR cause difficulties in interpretation. With a
total investment (without time value of money) of $320 and the total of the positive incomes (without
time value of money) of $290, one would be hard pressed to accept that this project “earns” 9.1%,
let alone 57.2%! Comparing 9.1% to the MARR of 5% would seem to indicate that this project is
acceptable.
Let’s examine the ERR, NP V , and modified I RR for this project:
ERR:
PC = |−100 − 90(P/F)5,4 − 80(P/F)5,5 − 50(P/F)5,6| = $274.0
FI = 90(F/P)5,5 + 120(F/P)5,4 + 80(F/P)5,3 = $353.3
ERR = (353.3/274.0)^(1/6) − 1 = 0.043 = 4.3%

NPV:
The table or the figure shows that the NPV at an interest rate of 5% (the investor's MARR) is −$10.4.
The modified I RR would be calculated by replacing the negative cash flows with PC calculated
above to create a new set of cash flows as follows:
[Modified cash flow diagram: a cost of $274 at year 0; receipts of $90, $120, and $80 at the end of years 1 through 3; and no cash flows in years 4 through 6.]
Using the trial and error solution technique or Excel®, the modified I RR is 2.9%.
Thus, the ERR, the modified I RR, and the NPV indicate that this project is not an acceptable
project for the investor.
In summary, acceleration projects have the potential to add another level of complexity to
the calculation of I RR in that multiple positive rates may exist. The authors strongly suggest that
evaluators utilize N P V or ERR calculations to determine the economic viability of acceleration
projects.
3.9 PAYOUT
A supplementary evaluation technique that is frequently used is payout period or simply payout.
Payout may be calculated with or without discounting although it is usually calculated without
considering the time value of money. Payout refers to the time that it takes for a project to return its
initial investment. Thus, it’s a quick measure of how long the investment is at risk. Although this
time may be a very useful piece of information to compute for a particular project, payout analysis is
limited in its use as an evaluation criterion. It does not serve as a useful screening criterion since it
ignores any cash flows occurring past the payout period. Therefore, it must be used in conjunction
with one of the evaluation techniques that have already been presented.
Example 3.9
Given the following cash flow diagram, compute the undiscounted payout time and the
discounted payout time if MARR is 15%.
[Cash flow diagram: a cost of $100 at time 0 followed by receipts of $60 at the end of each of years 1 through 4.]
Undiscounted Payout:
Year    Cash Flow    Cumulative Cash Flow
0       −100         −100
1       60           −40
2       60           20

Interpolate between years 1 and 2 to find when the cumulative cash flow equals zero:

Payout = 1 + [(−40 − 0)/(−40 − 20)](2 − 1) = 1.67 years
Discounted Payout:
Year    Cash Flow    Discounted Cash Flow       Cumulative Discounted Cash Flow
0       −100         −100                       −100
1       60           60(P/F)15,1 = 52.2         −47.8
2       60           60(P/F)15,2 = 45.4         −2.4
3       60           60(P/F)15,3 = 39.4         37.0

Interpolate between years 2 and 3 to find when the cumulative cash flow equals zero:

Payout = 2 + [(−2.4 − 0)/(−2.4 − 37.0)](3 − 2) = 2.06 years
Discounted payout measures the time for the project to return the initial investment and a 15% rate
of return on that initial investment.
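The payout interpolation is easily scripted; the Python sketch below (our own) computes both the undiscounted and the discounted payout for Example 3.9.

# Python sketch: payout period, with or without discounting (Example 3.9, MARR = 15%).
def payout(cash_flows, rate=0.0):
    cumulative = 0.0
    for year, cf in enumerate(cash_flows):
        discounted = cf / (1 + rate) ** year
        if cumulative + discounted >= 0:
            # interpolate within the year in which the cumulative total reaches zero
            return (year - 1) + (-cumulative) / discounted
        cumulative += discounted
    return None   # the project never returns its investment

cfs = [-100, 60, 60, 60, 60]
print(round(payout(cfs), 2))          # about 1.67 years
print(round(payout(cfs, 0.15), 2))    # about 2.06 years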
3.10 PROBLEMS
3.1. Calculate the present value and annual value of the following cash flow diagram. MARR
is 15%.
[Cash flow diagram: a cost of $2,500 at time 0 followed by receipts of $500 at the end of year 1, $650 at the end of year 2, and $800 at the end of each of years 3 through 7.]
3.2. Calculate the I RR and ERR for the cash flow diagram given in Problem 3.1.
3.3. An individual is considering the purchase of a property that he believes he can resell for
$25,000 at the end of 10 years. The property will generate positive cash flows of $1,500 per
year for the 10 years. What is the maximum that the individual should pay for the property
if his MARR is 12%?
3.4. An investment of $10,000 will yield $33,000 at the end of 5 years with no other cash flows.
What is the I RR of this investment?
3.5. Calculate the I RR for the following cash flow diagram.
[Cash flow diagram: a cost of $2,000 at time 0, a cost of $500 at the end of year 1, and receipts of $1,000 at the end of each of years 2 through 5.]
3.6. A company invests $30,650 in a project which yields an income (positive cash flow) of
$10,000 in the first year, $9,000 in the second, $8,000 in the third, … etc … and $1,000 in
the tenth, along with an extra $10,000 income at the end of year 10. The company’s MARR
is 10%. Determine the I RR and ERR of this project.
3.7. Determine the N P V , ERR, and modified I RR for the following cash flow diagram. Use
an MARR of 15%.
[Cash flow diagram: a cost of $50 at time 0, receipts of $100 at the end of years 1 and 2, and a cost of $100 at the end of year 3.]
3.8. Determine the N P V , NAV , modified I RR, and ERR for the following cash flow diagram
if the MARR is 10%.
[Cash flow diagram: a cost of $75 at time 0, receipts of $50 at the end of years 1 and 2, a cost of $30 at the end of year 3, and a receipt of $200 at the end of year 4.]
3.9. You are a project engineer and you have to make a choice between two contractors to perform
some rebuilding work on a manufacturing facility. One contractor proposes that he will do
the work for $1,300,000 payable immediately. The other contractor proposes that he will
perform the same job for $1,400,000 payable in eight equal quarterly payments, starting 3
months after the job begins. A nominal rate of 14% should be used as the MARR. What
equivalent annual interest rate is the second contractor offering? Which contractor’s offer
would you accept? Repeat the analysis with the NP V technique.
3.10. John Q. Customer has received his bill for the next 6 months premium on his auto insurance.
The bill allows him two methods to pay his premium of $189.00. He can either pay the
entire amount now, or he can pay $99.00 now, which includes half of the premium plus a
$4.50 prepaid “service charge” and $94.50 in two months, the other half of the premium.
The insurance company is, implicitly, offering John a “loan.” What is the effective annual
interest rate of the loan? Would you take the “loan?” Why or why not?
3.11. A project is expected to cost $2,000,000 and have the following net revenues:
Year    Net Revenue
1       1,000,000
2       800,000
3       600,000
4       400,000
5       200,000
6       100,000
Calculate the undiscounted and discounted payout periods. The MARR is 15%.
3.12. Engineer A retires at the age of 65 with a retirement account worth $500,000. At what
interest rate would this amount need to be invested in order to withdraw $50,000 at the
end of each of the next 15 years?
3.13. Develop an Excel® spreadsheet to compute NP V , NAV , NF V , I RR, and ERR for the
cash flow diagram given in Problem 3.1.
3.14. Develop an Excel® spreadsheet to solve Problem 3.3 for MARR values of 5%, 10%, 12%,
15% and 20%.
3.15. Develop an Excel® spreadsheet to solve Problem 3.4 for initial investments of $5000,
$10000 and $15000.
3.16. Develop an Excel® spreadsheet to solve Problem 3.5 for initial investments of $2000, $1500,
and $1000.
3.17. Develop an Excel® spreadsheet to solve Problem 3.6.
3.18. Develop an Excel® spreadsheet to solve Problem 3.7.
3.19. Develop an Excel® spreadsheet to solve Problem 3.8.
3.20. Develop an Excel® spreadsheet to solve Problem 3.9.
3.21. Develop an Excel® spreadsheet to solve Problem 3.10.
3.22. Develop an Excel® spreadsheet to solve Problem 3.11 for MARR values of 5%, 10%, and
15%.
3.23. Develop an Excel® spreadsheet to solve Problem 3.12.
CHAPTER 4
Service Producing Investments
4.1 INTRODUCTION
There are, in general, two types of investments—one which produces income and one which produces
a service. A service producing investment is one that results in a cash flow diagram that normally
contains no positive cash flows with the exception of a possible salvage value of the service. Salvage
value is the estimated value of an asset at the end of its useful life. It is assumed that the asset can
be sold (as scrap metal for example) for this value as a positive cash flow to the project. The authors
use the symbol L to represent the positive cash flow due to salvage value.
An example of a service producing investment would be the consideration of either purchasing
a new vehicle for a field office or leasing the vehicle. The vehicle provides a necessary service for the
personnel in the field office but does not directly produce any income for the company. Generally, a
leased vehicle would not have any salvage value since it is just returned to the leasing agency at the
end of the lease period, while a purchased vehicle would have some salvage value since it could be
sold to another owner.
This chapter will discuss evaluation techniques for service producing investments for equal
and unequal life alternatives.
4.2 EQUAL LIFE ALTERNATIVES
Consider the following situation. An investment needs to be made by a company for a particular
service that is necessary for the company to conduct its business. Two or more alternatives have been
identified that provide the same service over the same time period. These alternatives are known
as equal life alternatives and they lend themselves to straightforward application of the evaluation
methods that were presented in Chapter 3.
4.2.1 EQUIVALENCE TECHNIQUES
The equivalence techniques, primarily NPV, are valid methods to choose the correct alternative.
However, since service producing investments deal primarily with costs, NPV is replaced with Net
Present Cost (NPC) which is the absolute value of the NPV. When the evaluator calculates NPC,
the simplest approach is to change the signs of all of the project’s cash flows as will be demonstrated
in Example 4.1. The alternative with the lowest NPC would be the best economic choice. Similarly,
Net Annual Value (NAV) is replaced with Net Annual Cost (NAC).
Example 4.1
Two alternatives are being considered which provide the same service and which have the same
useful life of five years. Alternative A has an initial capital investment of $12,000, annual operating
costs of $3,500, and a salvage value of $5,000. Alternative B has an initial capital investment of
$20,000, annual operating costs of $1,500, and a salvage value of $10,000. If the company’s MARR
is 15%, which alternative would be the best economic choice? Use NPC and NAC analysis.
Alternative A:
[A cost of $12,000 at time 0, costs of $3,500 at the end of each of years 1 through 5, and a salvage value of L = $5,000 at the end of year 5.]
Alternative B:
[A cost of $20,000 at time 0, costs of $1,500 at the end of each of years 1 through 5, and a salvage value of L = $10,000 at the end of year 5.]
NPC:
NPCA = 12000 + 3500(P/A)15,5 − 5000(P/F)15,5 = $21,250
NPCB = 20000 + 1500(P/A)15,5 − 10000(P/F)15,5 = $20,060

NAC:
NACA = 12000(A/P)15,5 + 3500 − 5000(A/F)15,5 = $6,340
NACB = 20000(A/P)15,5 + 1500 − 10000(A/F)15,5 = $5,980
Both NPC and NAC analysis indicate that Alternative B is the best economic choice since it
has the lowest cost under these conditions.
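The factor arithmetic above can be checked with a short Python sketch (our own helper functions, not the authors' tables):

# Python sketch: NPC and NAC for Example 4.1 at MARR = 15%.
def pa(i, n):  # (P/A)i,n
    return (1 - (1 + i) ** -n) / i

def pf(i, n):  # (P/F)i,n
    return (1 + i) ** -n

def af(i, n):  # (A/F)i,n
    return i / ((1 + i) ** n - 1)

def ap(i, n):  # (A/P)i,n
    return af(i, n) + i

i, n = 0.15, 5
print(round(12000 + 3500 * pa(i, n) - 5000 * pf(i, n)),    # NPC for A, about 21,250
      round(20000 + 1500 * pa(i, n) - 10000 * pf(i, n)))   # NPC for B, about 20,060
print(round(12000 * ap(i, n) + 3500 - 5000 * af(i, n)),    # NAC for A, about 6,340
      round(20000 * ap(i, n) + 1500 - 10000 * af(i, n)))   # NAC for B, about 5,980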
4.2.2 RATE OF RETURN METHODS
Rate of return methods need to be altered since there are generally no positive cash flows in a service
producing investment except, perhaps, a salvage value. Under that scenario, the definitions of IRR
and ERR don’t make any sense and, in fact, generally do not result in positive values.
When comparing two service producing investment alternatives, an incremental project rate
of return (either IRR or ERR) is determined and compared to the MARR. The cash flows for the
incremental project are found by taking the cash flows from the investment with the larger initial
capital cost and subtracting the cash flows from the investment with the lower initial capital cost. It
should be fairly obvious that if the alternative with the larger initial capital cost doesn’t have lower
annual costs than the alternative with the lower initial capital cost, it will never be the economic
choice. Therefore, one would expect the incremental project cash flow diagram to be represented by
a negative initial investment, followed by positive cash flows that represent the savings generated by
choosing the alternative with the larger initial capital cost over the alternative with the lower initial
capital cost. Thus, another name for this incremental project is the “savings project.”
The rate of return (either IRR or ERR) can now be calculated for the savings project. If the
rate of return is larger than the MARR, this indicates that the savings project is an acceptable project
which thereby insinuates that the correct economic choice would be the alternative with the larger
initial capital cost. The net savings that occur by choosing the alternative with the larger initial
capital cost more than offset its additional initial capital cost. If the IRR or ERR is less than the
MARR, the savings project is not an acceptable project and, therefore, the alternative with the lower
initial capital cost will be the economic choice.
If there are more than two alternatives, all of the alternatives should first be listed in descending
order of initial capital cost and the various pairings of alternatives would be evaluated using one of
the techniques above. For example, if there were three alternatives (A, B, C) in order of initial capital
costs (with A having the highest and C having the lowest), one would first compare A to B. If A is
the better choice, one would then compare A to C to determine the best overall choice. However, if
B were the better choice, the next comparison would be B to C to determine the best overall choice.
Example 4.2
Compare Alternatives A and B given in Example 4.1 and determine the best economic choice
using IRR and ERR techniques. Recall that the MARR is 15%.
Since Alternative B has the highest initial capital cost, the savings project would be created
by subtracting the cash flows of Alternative A from those of Alternative B:
Savings Project, B-A:
[A cost of $8,000 at time 0, receipts (savings) of $2,000 at the end of each of years 1 through 5, and a salvage-value difference of L = $5,000 at the end of year 5.]
The N P V of this project is given by:
N P VB−A = −8000 + 2000(P /A)i,5 + 5000(P /F )i,5
Interest Rate, %    NPV
15                  1190.1
20                  −9.4
Interpolation yields an I RR = 20%. Since I RR > MARR, B is the best economic choice.
ERR:
PC = |−8000| = 8000
FI = 2000(F/A)15,5 + 5000 = 18485
ERR = (18485/8000)^(1/5) − 1 = 0.182 = 18.2%
Again, the ERR would indicate that Alternative B is the best economic choice.
Example 4.3
Given the 3 alternatives below that provide the same service over a 4 year period, develop
an Excel® spreadsheet that uses IRR analysis to determine which alternative is the best economic
choice. MARR is 10%.
Alternative A:
  Year:       0       1      2      3      4
  Cash flow: -1000    -300   -350   -400   -450    (plus salvage L = 200 at year 4)

Alternative B:
  Year:       0      1      2      3      4
  Cash flow: -800    -320   -380   -440   -500    (plus salvage L = 100 at year 4)

Alternative C:
  Year:       0      1      2      3      4
  Cash flow: -700    -340   -410   -480   -550    (plus salvage L = 50 at year 4)
Spreadsheet and Results:
[The Excel® spreadsheet "Incremental IRR calculations for Example 4.3" appears here in the original. It tabulates the cash flows of Alternatives A, B, and C together with their NPC values and the incremental IRRs for the pairs A-B, B-C, and A-C that are discussed below.]
The N P V and I RR functions are the same as presented in Chapter 3. The spreadsheet shows
the comparisons between all three projects. Since the initial investment of Alternative A is greater
than the initial investment of Alternative B and the initial investment of Alternative B is greater
than the initial investment of Alternative C, the alternatives are already correctly ordered by size
of initial investment. NPC analysis shows that Alternative B has the lowest net present cost and,
therefore, should be the alternative of choice.
The analysis of the incremental I RR calculations would be completed as follows:
1. Compare the first two alternatives.
2. Since the I RR of Incremental Project A-B (5.7%) is less than the MARR (10%), Alternative B
is a better choice than Alternative A.
3. Now compare Alternative B with Alternative C.
4. Since the I RR of Incremental Project B-C (23.5%) is greater than the MARR (10%), Alter-
native B is a better choice than Alternative C.
Therefore, Alternative B is the best economic choice (same as determined from the NPC
method).
Note that since Alternative B was a better choice than Alternative A, one never utilizes the incremen-
tal IRR that is calculated for the Incremental Project A-C. However, it is a necessary portion of the
Excel® spreadsheet since one does not know, ahead of time, which Alternatives will be eliminated
during the analysis of the results.
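The full spreadsheet is not reproduced here, but the incremental comparisons can be sketched in a few lines of Python (an illustrative addition, not the book's Excel® solution); the list layout and the bisection irr helper are assumptions of this sketch.

def npv(rate, cfs):
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cfs))

def irr(cfs, lo=-0.9, hi=10.0):
    f_lo = npv(lo, cfs)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f_lo * npv(mid, cfs) <= 0:
            hi = mid
        else:
            lo, f_lo = mid, npv(mid, cfs)
    return 0.5 * (lo + hi)

marr = 0.10
A = [-1000, -300, -350, -400, -450 + 200]   # salvage added to the final year
B = [-800, -320, -380, -440, -500 + 100]
C = [-700, -340, -410, -480, -550 + 50]

# Alternatives are already ordered by initial capital cost (A > B > C).
inc_AB = [a - b for a, b in zip(A, B)]   # savings project A-B
inc_BC = [b - c for b, c in zip(B, C)]   # savings project B-C

print(round(irr(inc_AB) * 100, 1))   # about 5.7  -> below the 10% MARR, so keep B over A
print(round(irr(inc_BC) * 100, 1))   # about 23.5 -> above the 10% MARR, so keep B over C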
4.3 UNEQUAL LIFE ALTERNATIVES
The analysis of service producing investments that have alternatives which provide the same service
but have unequal project lives cannot be completed without modifications to the alternatives. A
common evaluation life for each alternative must be found before a proper economic decision can
be made. This is because the definition of two alternatives providing the same service includes the
assumption that they provide this service for the same length of time. For example, one cannot
compare an alternative to purchase a vehicle, keep it for 5 years, and then sell it for its salvage value
to a three-year lease option for the same vehicle. Both options are providing the service of a vehicle,
but the service is provided for different lengths of time.
There are, in general, two methods employed by evaluators to find common evaluation lives
in these situations. The first method requires the determination of a least common multiple of
service lives for the alternatives being considered. The second method involves the determination of
a common study period which will be either the life of the shortest or longest alternative. In both
methods, cost assumptions must be made that will impact the final analysis.
4.3.1 LEAST COMMON MULTIPLE METHOD
The least common multiple method of finding a common service life utilizes the same principles
that are involved in determining the common denominator when adding or subtracting fractions.
Consider the example of two alternatives having useful lives of 3 and 4 years. The least common multiple in this case is 12, since 12 is the smallest number that both 3 and 4 divide evenly without leaving partial years as a remainder. The alternative having a useful life of 3 years would be
repeated 4 times on a time line to reach the least common multiple of 12 years. The other alternative
would be repeated 3 times.
A couple of disadvantages of this method should immediately come to mind. First, costs do
not stay constant over time, so one would need to predict the future cost of each alternative. Cost
escalation will be discussed in Chapter 6, but even this approach requires a number of assumptions.
Secondly, one or more of the alternatives may be rendered obsolete by the development of new
technology before the end of the time period that corresponds to the least common multiple is
reached.
4.3.2 COMMON STUDY PERIOD
The common study period method of finding a common service life utilizes either the life of the
shortest alternative or the life of the longest alternative as the common study period. To determine
which of these to use, the length of the common study period should be, if possible, the length of
time that the service is actually required.
If the life of the shortest alternative is used, the extra years of the longer life alternative are
neglected and a new salvage value is assigned at the end of the common study period. The new
salvage value will typically be larger than the original salvage value since it should reflect the value
of the extra years that are neglected.
If the life of the longest alternative is used, the shorter project needs to be extended via one
of two methods. The project can be extended by either estimating the cost involved to repair the
service to get additional years of service from it or by purchasing a new unit of service. Both of these
require some assumptions with regard to future cost.
Example 4.4
The cash flows shown below represent two alternatives which can provide the same service.
Assume that the MARR is 15%. Use both methods described above to determine which alternative
is the best economic choice. (Numbers are in $1,000.)
Alternative A:
  Year:       0      1    2    3   ...   9    10
  Cash flow: -150    -3   -3   -3  ...   -3   -3    (plus salvage L = 10 at year 10)

Alternative B:
  Year:       0     1     2     3     4     5
  Cash flow: -50    -18   -18   -18   -18   -18    (plus salvage L = 8 at year 5)
Least Common Multiple Technique: The least common multiple of 5 and 10 is 10. Therefore,
one needs to extend Alternative B from 5 to 10 years. It will be assumed that there is no escalation
in the costs for Alternative B for the second 5 year period. In Chapter 6, we will consider this same
problem with cost escalation. Therefore, Alternative B extended to 10 years would be:
Alternative B (extended to 10 years):
  Year:       0     1 to 4    5                 6 to 9    10
  Cash flow: -50    -18/yr    -18 + 8 - 50      -18/yr    -18 + 8
(The original unit is salvaged for 8 at year 5, a replacement is purchased for 50 at year 5, and the replacement is salvaged for 8 at year 10.)
N P C Analysis:
N P CA = 150 + 3(P /A)15,10 − 10(P /F )15,10 = $162.6
N P Cextended B = 50 + 18(P /A)15,10 + 42(P /F )15,5 − 8(P /F )15,10 = $159.2
N P C analysis indicates that Alternative B is the best economic choice under the assumptions
that were made (e.g., no increase in costs for the second 5 years). If costs increase or if technology
makes Alternative B obsolete, then this analysis will be inaccurate and one may need to consider
other non-economic factors in making this decision.
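As a quick numerical check (an illustrative addition, not part of the original text), the two net present costs above can be recomputed directly from the year-by-year cost streams; the npc helper below simply discounts each year's net cost at the MARR.

def npc(rate, costs):
    # costs[t] is the net cost at the end of year t (costs[0] at time zero); salvage enters as a negative cost
    return sum(c / (1.0 + rate) ** t for t, c in enumerate(costs))

marr = 0.15
A = [150] + [3] * 9 + [3 - 10]                                   # 10-year life, salvage 10 at year 10
B_ext = [50] + [18] * 4 + [18 - 8 + 50] + [18] * 4 + [18 - 8]    # repurchased at year 5, salvaged at 5 and 10

print(round(npc(marr, A), 1))       # about 162.6
print(round(npc(marr, B_ext), 1))   # about 159.2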
Common Study Period Technique: Let's shorten Alternative A to 5 years by neglecting the costs
in the final 5 years and by increasing the salvage value that could be received at year 5 to $80,000.
Alternative A (shortened to 5 years):
  Year:       0      1    2    3    4    5
  Cash flow: -150    -3   -3   -3   -3   -3    (plus salvage L = 80 at year 5)
N P C Analysis:
N P Cshortened A = 150 + 3(P /A)15,5 − 80(P /F )15,5 = $120.3
NP CB = 50 + 18(P /A)15,5 − 8(P /F )15,5 = $106.4
NPC analysis indicates that Alternative B is the best economic choice under this set of
assumptions (e.g., the new estimated salvage value for Alternative A and the assumption that one
can actually “sell” Alternative A for salvage at the end of 5 years).
4.4 PROBLEMS
4.1. A mining company is in need of four trucks. Suppliers will offer the options of purchasing
or leasing the trucks. The purchase price is $200,000. Maintenance, insurance, and general
operating costs (payable at the end of each year) will be $30,000 in year 1, $40,000 in year 2,
and $50,000 in year 3 with an expected salvage value of $70,000 at the end of year 3. The
lease price is $80,000 per year for the 3 years (payable at the beginning of each year). The
lease covers maintenance costs, but insurance and general operating costs will be $25,000
per year (payable at the end of each year). If the company’s MARR is 20%, determine the
best economic choice.
4.2. A natural gas producing company is considering two engine systems for use in driving a
small compressor. System A can be purchased for $120,000 and is expected to have a life of 4
years. Annual diesel fuel consumption is estimated to be 60 gallons per day of use. System B
can be purchased for $150,000 and is expected to have a life of 4 years. Annual propane fuel
consumption is estimated to be 40 gallons per day of use. Both engines have salvage values
equal to 15% of initial cost and both will accomplish the needed requirements. Estimates
of fuel costs for each system and expected days of use each year are as follows:
Assume that MARR is 8% and that all other costs besides fuel will be the same for both
systems. Which system is the best economic choice?
4.3. Use ERR analysis to determine which alternative would be the best economical choice.
Verify your decision with NPC analysis. Assume the MARR equals 10%.
Alternative A:
  Year:       0      1     2     3     4
  Cash flow: -500    -25   -25   -25   -25    (plus salvage L = 100 at year 4)

Alternative B:
  Year:       0      1     2     3     4
  Cash flow: -300    -50   -50   -50   -50    (plus salvage L = 25 at year 4)

Alternative C:
  Year:       0      1     2     3     4
  Cash flow: -250    -75   -60   -45   -30    (plus salvage L = 10 at year 4)

Alternative D:
  Year:       0      1     2     3     4
  Cash flow: -450    -35   -35   -35   -35    (plus salvage L = 100 at year 4)
4.4. Consider the two service producing projects described below. They will provide the same
service but they do not have equal lives. Use NPC, IRR, and ERR analyses to determine
which alternative should be chosen. For the least common multiple method, assume no
increases in future costs for either project. For the common study period method, assume
that the salvage value for Alternative B will increase to $4,000 at the end of year 3. The
MARR is 10%.
Alternative A:
  Year:       0        1       2       3
  Cash flow: -15000    -1000   -1000   -1000    (salvage L = 0)

Alternative B:
  Year:       0        1       2       3       4
  Cash flow: -10000    -3000   -3000   -3000   -3000    (plus salvage L = 2000 at year 4)
4.5. Use Excel® to solve Problem 4.1 for values of MARR of 10%, 15%, 20%, and 25%.
4.6. Use Excel® to solve Problem 4.2 for values of MARR of 5%, 8%, and 12%.
4.7. Use Excel® to determine what initial cost of Alternative A in Problem 4.2 would make the
two systems equal at an MARR of 8%.
4.8. Use Excel® to solve Problem 4.3.
4.9. Use Excel® to solve Problem 4.4.
CHAPTER 5
Income Producing Investments
5.1 INTRODUCTION
In the previous chapter, investments were considered that only provided a service of some kind for the
investor. In this chapter, investments that generate income (or profit) are discussed. The evaluation
techniques to be used will be identical to those introduced in Chapter 3. However, one additional
concept needs to be introduced when an investor is faced with making decisions between multiple
alternatives. This concept is the fact that income producing investment situations can be classified
as being either mutually exclusive, independent, or contingent as defined in later sections of this
chapter.
5.2
INVESTMENT IN A SINGLE PROJECT
If an investor is being offered the opportunity to invest in a single project (that is, without considering
any alternatives other than the “do nothing” alternative), he needs to consider the following
two economic issues:
• Does he have enough money to invest in this project?
• Is the project profitable enough?
If one does not consider the option of the investor borrowing money from a lending institution, the
answer to the first question should be a clear “yes” or “no.” If the answer is “no,” then the investor
cannot invest in the project. Chapter 7 will cover financial leverage which will allow for the borrowing
of money.
If the answer to the first question is “yes,” then project profitability needs to be considered in
order to answer the second question. Utilizing the analysis techniques presented in Chapter 3, this
would mean one of the following:
• The NPV of the project, calculated at the investor's MARR, is greater than zero. (Similarly, NAV or NFV would be greater than zero.)
• The IRR of the project is greater than the investor's MARR.
• The ERR of the project is greater than the investor's MARR.
Of these three options, the authors strongly suggest the NPV method. This will become clearer as
this chapter proceeds.
Example 5.1
An investor with MARR of 15% has been presented with the opportunity to invest in the
following income producing project. Assume that he has $20,000 to invest. Should he invest in this
project based on economic considerations?
  Year:       0        1      2      3     ...   9      10
  Cash flow: -20000    7500   7500   7500   ...   7500   7500    (plus salvage L = 10000 at year 10)
Using the NPV, IRR, and ERR techniques described in Chapter 3:
NPV:
NPV = −20000 + 7500(P/A)15,10 + 10000(P/F)15,10 = $20,115
IRR:
NPV = −20000 + 7500(P/A)IRR,10 + 10000(P/F)IRR,10 = 0
Trial and error solution yields IRR = 36.7%
ERR:
PC = |−20000| = $20,000
FI = 7500(F/A)15,10 + 10000 = $162,300
ERR = (162300/20000)^(1/10) − 1 = 0.233 = 23.3%
Since N P V > 0, I RR > MARR, and ERR > MARR, this project would be acceptable to
the investor.
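A short Python sketch (an illustrative addition, not part of the original solution) that reproduces the three acceptance tests for this project is shown below; the exact NPV differs by a few dollars from the $20,115 quoted above because the text uses rounded interest-table factors.

def npv(rate, cfs):
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cfs))

def irr(cfs, lo=-0.9, hi=10.0):
    f_lo = npv(lo, cfs)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f_lo * npv(mid, cfs) <= 0:
            hi = mid
        else:
            lo, f_lo = mid, npv(mid, cfs)
    return 0.5 * (lo + hi)

def err(cfs, marr):
    n = len(cfs) - 1
    pc = sum(-cf / (1.0 + marr) ** t for t, cf in enumerate(cfs) if cf < 0)
    fi = sum(cf * (1.0 + marr) ** (n - t) for t, cf in enumerate(cfs) if cf > 0)
    return (fi / pc) ** (1.0 / n) - 1.0

marr = 0.15
project = [-20000] + [7500] * 9 + [7500 + 10000]   # salvage received at year 10

print(round(npv(marr, project)))            # about 20113 (text: $20,115) -> NPV > 0
print(round(irr(project) * 100, 1))         # about 36.7 -> IRR > MARR
print(round(err(project, marr) * 100, 1))   # about 23.3 -> ERR > MARR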
5.3 MUTUALLY EXCLUSIVE ALTERNATIVES
When considering two or more alternatives in an economic analysis situation in which only one
alternative may be chosen, the alternatives are said to be mutually exclusive. Examples of mutually
exclusive alternatives would include the choice between two or more ways to develop a physical
property location (for example, build a gas station or a laundromat, but not both) or the choice
between two or more projects when faced with limited investment capital.
To evaluate choices in mutually exclusive situations, it is necessary to first determine if each
alternative is economically acceptable using the same questions as listed above. Any alternatives that
are not acceptable will be discarded. The remaining alternatives can then be ranked by a couple of
methods and the project at the top of the ranking is the best economic choice.
5.3.1 EQUIVALENCE TECHNIQUES
Equivalence techniques are those that use NPV, NAV, or NFV calculations. As explained earlier,
for a given project, if one of these values is greater than zero then the others will be also. Recall that
values greater than zero indicate that the alternative is an acceptable one. Obviously, if the value
is zero, the project earns exactly the MARR. Thus, the evaluation approach, using NP V as the
calculation choice, is as follows:
1. Calculate the N P V for each alternative.
2. Eliminate any alternative with NP V < 0.
3. If all alternatives have NP V < 0, then the investor’s decision should be the “do nothing”
alternative.
4. If one or more alternatives have NP V ≥ 0, the alternative with the largest positive NP V is
the best economic choice.
Example 5.2
In addition to the alternative given in Example 5.1, consider the situation where an investor
with an MARR of 15% has the choice between that alternative and the two additional ones given
below. Assume that the investor has $80,000 to invest. Also assume that the three alternatives are
mutually exclusive projects. This may occur because they represent alternatives in which only one
can actually be “built” or may occur because the investor only has $80,000 to invest so he only has
enough capital to invest in one.
Let’s call the project in Example 5.1 Alternative A. Thus, new alternatives are Alternative B
and Alternative C.
Alternative B:
  Year:       0        1       2       3      ...   9       10
  Cash flow: -80000    20000   20000   20000  ...   20000   20000    (plus salvage L = 25000 at year 10)

Alternative C:
  Year:       0        1       2       3      ...   9       10
  Cash flow: -70000    17500   17500   17500  ...   17500   17500    (plus salvage L = 21875 at year 10)
Using the NPV technique described in Chapter 3:
N P V : N P VB = −80000 + 20000(P /A)15,10 + 25000(P /F )15,10 = $26, 560
N P VC = −70000 + 17500(P /A)15,10 + 21875(P /F )15,10 = $23, 240
Since N P VA, N P VB , and NP VC are all greater than zero, all three alternatives would be
acceptable to the investor. However, since these are mutually exclusive alternatives, Alternative B is
the overall best economic choice because its NP V is the largest.
One might think that the evaluator should directly compare any two projects (such as A
and B in the previous example) by using incremental NPV analysis. The following calculations will
demonstrate that this approach is not necessary because the NPV of an incremental project such as
B-A is governed by the following relationship:
NP V B−A = NP V B − NP V A
From Example 5.1, N P V A = $20, 115 and from Example 5.2, NP V B = $26, 560. Using the
relationship above, N P V B−A should be $6,445. The following cash flow diagram represents the
incremental project B-A:
Alternative B-A:
  Year:       0        1       2       3      ...   9       10
  Cash flow: -60000    12500   12500   12500  ...   12500   12500    (plus salvage L = 15000 at year 10)
NPV: NPVB−A = −60000 + 12500(P/A)15,10 + 15000(P/F)15,10 = $6,445
Note that the NPV of the incremental project, B-A, is, indeed, numerically equal to the difference between the NPV values of Alternatives B and A (B minus A).
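This identity is easy to confirm numerically; the sketch below (an illustrative addition, not part of the original text) computes both sides for Alternatives A and B, with small differences from the quoted $6,445 that are due only to interest-table rounding.

def npv(rate, cfs):
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cfs))

marr = 0.15
A = [-20000] + [7500] * 9 + [7500 + 10000]
B = [-80000] + [20000] * 9 + [20000 + 25000]
B_minus_A = [b - a for a, b in zip(A, B)]

print(round(npv(marr, B) - npv(marr, A)))   # both lines print the same value,
print(round(npv(marr, B_minus_A)))          # roughly $6,440 (text: $6,445)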
5.3.2 RATE OF RETURN TECHNIQUES
One can use both the internal rate of return (IRR) and external rate of return (ERR) methods to
find the best alternative from a list of mutually exclusive alternatives. However, unlike NP V , it will
be shown that the alternative with the highest I RR or ERR is not necessarily the best economic
choice. One must be very careful not to simply rank the projects by I RR or ERR.
The process to determine the best alternative using I RR or ERR is as follows:
1. Calculate the I RR or ERR for each alternative.
2. Eliminate any alternative with I RR or ERR < MARR.
3. If all alternatives have I RR or ERR < MARR, then the investor’s decision should be the “do
nothing” alternative.
4. If one or more alternatives have I RR or ERR ≥ MARR, then those alternatives should be
rank ordered from the one with the highest initial investment to the one with the lowest initial
investment.
5. A comparison is made between the alternatives with the two largest initial investments. Create
an incremental project cash flow diagram by subtracting the cash flows of the lower initial
investment from those of the higher initial investment.
6. Calculate the I RR or ERR of the incremental project. If this I RR or ERR is ≥ MARR,
then the alternative with the larger initial investment is the better economic choice. Similarly,
if this I RR or ERR is < MARR, then the alternative with the lower initial investment is the
better economic choice. Keep the best alternative and discard the other one.
7. If additional alternatives are still available, return to step 5 and compare the alternative that
was kept from step 6 with the one with the next lower initial investment.
8. If no additional alternatives remain, the best economic choice is the alternative that was kept
from step 6.
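The procedure above can be sketched programmatically. The following Python function (an illustrative addition, not part of the original text) assumes that all alternatives have equal lives and that each cash flow series, individual or incremental, has a single sign change so that a unique IRR exists.

def npv(rate, cfs):
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cfs))

def irr(cfs, lo=-0.9, hi=10.0):
    f_lo = npv(lo, cfs)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f_lo * npv(mid, cfs) <= 0:
            hi = mid
        else:
            lo, f_lo = mid, npv(mid, cfs)
    return 0.5 * (lo + hi)

def best_by_incremental_irr(alts, marr):
    # step 2: keep only individually acceptable alternatives
    ok = {name: cfs for name, cfs in alts.items() if irr(cfs) >= marr}
    if not ok:
        return "do nothing"
    # step 4: order from largest to smallest initial investment (cfs[0] most negative first)
    order = sorted(ok, key=lambda name: ok[name][0])
    best = order[0]
    for challenger in order[1:]:
        # steps 5-7: compare the current best with the next smaller investment
        incremental = [a - b for a, b in zip(ok[best], ok[challenger])]
        if irr(incremental) < marr:
            best = challenger   # the smaller investment wins this pairing
    return best

alts = {
    "A": [-20000] + [7500] * 9 + [17500],
    "B": [-80000] + [20000] * 9 + [45000],
    "C": [-70000] + [17500] * 9 + [39375],
}
print(best_by_incremental_irr(alts, 0.15))   # "B", matching the manual analysis of Example 5.3 below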
Example 5.3
Consider the three alternatives A, B, and C introduced in the earlier example problems. Use
IRR and ERR analysis to determine the best economic choice. The MARR is 15%.
Alternative A:
  Year:       0        1      2      3     ...   9      10
  Cash flow: -20000    7500   7500   7500   ...   7500   7500    (plus salvage L = 10000 at year 10)
I RR :
N P V = 0 = −20000 + 7500(P /A)I RR,10 + 10000(P /F )I RR,10
Trial and error solution yields I RR = 36.7%
ERR:
PC = |−20000| = $20,000
FI = 7500(F/A)15,10 + 10000 = $162,300
ERR = (162300/20000)^(1/10) − 1 = 0.233 = 23.3%
Alternative B:
  Year:       0        1       2       3      ...   9       10
  Cash flow: -80000    20000   20000   20000  ...   20000   20000    (plus salvage L = 25000 at year 10)
I RR :
N P V = 0 = −80000 + 20000(P /A)I RR,10 + 25000(P /F )I RR,10
Trial and error solution yields I RR = 22.7%
ERR:
PC = |−80000| = $80,000
FI = 20000(F/A)15,10 + 25000 = $431,100
ERR = (431100/80000)^(1/10) − 1 = 0.183 = 18.3%
Alternative C:
  Year:       0        1       2       3      ...   9       10
  Cash flow: -70000    17500   17500   17500  ...   17500   17500    (plus salvage L = 21875 at year 10)
I RR :
N P V = 0 = −70000 + 17500(P /A)I RR,10 + 21875(P /F )I RR,10
Trial and error solution yields I RR = 22.7%
ERR:
PC = |−70000| = $70,000
FI = 17500(F/A)15,10 + 21875 = $377,200
ERR = (377200/70000)^(1/10) − 1 = 0.183 = 18.3%
As one can see, all three alternatives have I RR and ERR ≥ MARR. Therefore, all three
alternatives are acceptable. Putting them in ranked order by initial investment yields:
  Alternative    Initial Investment    IRR      ERR
  B              $80,000               22.7%    18.3%
  C              $70,000               22.7%    18.3%
  A              $20,000               36.7%    23.3%
At this point, one cannot simply choose the alternative with the highest I RR or ERR as the
best overall economic choice.
First, compare Alternative B to Alternative C:
Alternative B-C:
  Year:       0        1      2      3     ...   9      10
  Cash flow: -10000    2500   2500   2500   ...   2500   2500    (plus salvage L = 3125 at year 10)
Using the techniques described in Chapter 3:
I RR :
N P V = 0 = −10000 + 2500(P /A)I RR,10 + 3125(P /F )I RR,10
Trial and error solution yields I RR = 22.7%
ERR:
PC = |−10000| = $10,000
FI = 2500(F/A)15,10 + 3125 = $53,900
ERR = (53900/10000)^(1/10) − 1 = 0.183 = 18.3%
Since both the I RR and ERR are greater than the MARR, this indicates that Alternative B
is better than Alternative C. Eliminate Alternative C from further consideration and compare
Alternative B to the next alternative.
Comparing Alternative B to Alternative A:
Alternative B-A:
  Year:       0        1       2       3      ...   9       10
  Cash flow: -60000    12500   12500   12500  ...   12500   12500    (plus salvage L = 15000 at year 10)
Using the techniques described in Chapter 3:
I RR :
N P V = 0 = −60000 + 12500(P /A)I RR,10 + 15000(P /F )I RR,10
Trial and error solution yields I RR = 17.6%
ERR:
PC = |−60000| = $60,000
FI = 12500(F/A)15,10 + 15000 = $268,800
ERR = (268800/60000)^(1/10) − 1 = 0.162 = 16.2%
Since both the I RR and ERR are greater than the MARR, this indicates that Alternative B
is better than Alternative A. Since the list of mutually exclusive alternatives has been exhausted,
Alternative B is the best overall economic choice.
In summary, one cannot use the values of the I RR and ERR from individual alternatives to
determine the best economic choice. If one were to do that, the results shown in the table for this
example would indicate that Alternative A is the best economic choice since it has the largest values
of I RR and ERR. However, both NP V and incremental rate of return analyses clearly show that
Alternative B is the best economic choice.
Example 5.4
To further reinforce the fact that one should not rank investments through the use of rate
of return, consider the following example. You are an investor with only $10 in your pocket. Two
friends offer you the following opportunities: Friend #1 needs $1 from you, but will give you $2 back
at the end of the day. Friend #2 needs all $10 of your money, but will give you $12 back at the end
of the day. Which opportunity is better for you from an economic point of view?
Examine this using NPV and incremental IRR approaches. Since the time frame is short (1
day), your daily MARR can be considered to be very close to 0%.
Friend #1 Alternative:
  Year:       0    1
  Cash flow: -1    2

NPV:
NPV = −1 + 2(P/F)0,1 = $1
IRR:
NPV = −1 + 2(P/F)IRR,1 = 0
Trial and error solution yields IRR = 100%
Friend #2 Alternative:
  Year:       0     1
  Cash flow: -10    12

NPV:
NPV = −10 + 12(P/F)0,1 = $2
IRR:
NPV = −10 + 12(P/F)IRR,1 = 0
Trial and error solution yields IRR = 20%
NPV analysis indicates that Friend#2 Alternative is the best economic choice, but IRR analysis
appears to indicate that Friend#1 Alternative is the best.
Friend #1 is offering a 100% rate of return and Friend #2 is offering a 20% rate of return. One
might think that Friend #1’s offer is the best. However, at the end of the day, you only have $11 in
your pocket if you invest with Friend #1, but $12 if you invest with Friend #2. It is clear, therefore,
that you should invest with Friend #2 even though that friend is offering a lower rate of return. The
reason that the higher rate of return option is not the best option in this case is that the other $9
in your pocket is earning 0% rate of return. Combining 0% rate of return on $9 and 100% rate of
return on $1 ends up yielding a 10% overall rate of return if you invest with Friend #1.
Incremental Alternative of Friend #2 – Friend #1:
  Year:       0    1
  Cash flow: -9    10

IRR:
NPV = −9 + 10(P/F)IRR,1 = 0
Trial and error solution yields IRR = 11.1%
Since the incremental I RR is greater than your MARR, this indicates that Friend #2 Alter-
native is, indeed, the best economic choice.
This example also introduces the notion of risk in an investment. Obviously, the mathematical analysis has shown that lending to Friend #2 is the better investment. But it requires you, the lender, to ‘give up’ all of your money. If there were a chance that a friend could not come through with the repayment, it might be better to keep the $9 in your pocket and invest in Friend #1. In the event that neither friend could provide the repayment, at least you would still have $9 left of your money. The concept of risk in investments will be discussed much more in Chapter 9.
5.3.3 USING EXCEL®
As shown in the previous chapters, Excel® can be used to choose the best alternative among a group
of mutually exclusive alternatives. Since Excel® offers the ability to quickly calculate incremental
rates of return, there is no need to manually choose the pairs of alternatives to be evaluated. However,
this requires that one needs to evaluate each possible pair of alternatives (starting with all alterna-
tives ordered from highest to lowest initial investment) and then analyze the results table rather
than analyze specific pairs one at a time. For example, for Alternatives A, B, and C presented in
Examples 5.1 through 5.3, an Excel® spreadsheet might look like what is shown in Table 5.1. Recall
that within the original alternatives, Alternative B had the largest initial investment, C had the next
highest initial investment, and A had the lowest initial investment. Thus, the pairs of interest are
B-C, B-A, and C-A.
One would use Table 5.1 as follows:
If using NPV analysis:
1. Note that the values of NP V given in cells B17, C17, and D17 are all positive. This
indicates that all three alternatives are acceptable.
2. Note that Cell C17 contains the largest value of NP V . This would indicate that Alter-
native B is the best economic choice.
Table 5.1: Excel® solution of Examples 5.1 through 5.3
[The spreadsheet itself is not reproduced here; as described below, cells B17-D17 and B18-D18 hold the NPV and IRR of Alternatives A, B, and C, and columns F through H hold the incremental results for the pairs B-C, B-A, and C-A.]
If using IRR or ERR analysis (we will use IRR for this analysis):
1. Note that the values of I RR given in cells B18, C18, and D18 are all greater than the
MARR. This indicates that all three alternatives are acceptable.
2. Examine cell F18, which is the result of comparing the first pair of projects: B and C.
Since this value (22.7%) is larger than the MARR, this would indicate that Alternative B
is better than Alternative C. Alternative C is thus removed from further consideration
and the next viable pair would be B-A.
3. Examine cell G18, which is the result of comparing projects B and A. Since this value
(17.6%) is larger than the MARR, this would indicate that Alternative B is better than
Alternative A.
4. Since all necessary pairs have been examined, Alternative B is the best economic choice.
5. While column H is required to calculate the NP V , I RR, and ERR of the C-A pair, it
is not utilized in this example since Alternative C was removed from consideration after
its comparison against Alternative B. However, when developing this spreadsheet, one
does not know the result of the incremental analyses and, thus, all possible pairs must be
included. In addition, depending on the value of MARR, column H might be utilized
in other scenarios.
5.4 UNEQUAL LIFE ALTERNATIVES
Recall in Chapter 4 that if one was comparing service producing investments that have unequal
lives, one must choose one of two methods to force the projects to the same length of time. This is
because, to be comparable, the service must be offered for the same length of time.
In income producing investments, creating a common life is not required for NPV analysis.
However, for NAV, NFV, IRR, or ERR analysis, one must make the lives the same. Usually the life
of the longest alternative is used as the common evaluation life. It should be noted, however, that to
extend an income producing investment, one does not extend the positive cash flows. Instead, zero
cash flows are used to extend the life of the project. This is the case because one is assuming that the
cash flows from the income producing investment have already been estimated out to the full life of
the project and the project will be shut down at that time.
When conducting incremental rate of return analyses on unequal life alternatives, the evaluator
may find that the incremental project has multiple changes in sign of the yearly cash flows. This
was described in Chapter 3 as an acceleration project. Since the alternating signs may yield multiple
I RR values, either the modified IRR or ERR technique will need to be applied in the analysis.
Example 5.5
Use NPV, NAV, NFV, IRR, and ERR analyses to evaluate the unequal life alternatives below.
MARR is 12%.
Alternative A:
  Year:       0      1     2     3     4
  Cash flow: -200    100   100   100   100

Alternative B:
  Year:       0      1    2    3    4    5    6
  Cash flow: -300    90   90   90   90   90   90
NPV analysis:
N P VA = −200 + 100(P /A)12,4 = $103.7
N P VB = −300 + 90(P /A)12,6 = $70.0
Alternative A is the best economic choice based on NPV analysis.
NAV, NFV, IRR, and ERR Analyses:
From the NPV analysis above, both alternatives are acceptable. Now, perform the incremental
analysis for NAV, NFV, IRR and ERR. Use six years as the common evaluation life by extending
Alternative A for two additional years with zero cash flows:
Alternative A extended to six years:
  Year:       0      1     2     3     4     5    6
  Cash flow: -200    100   100   100   100   0    0
The incremental project is then:
Alternative B-A:
  Year:       0      1     2     3     4     5    6
  Cash flow: -100    -10   -10   -10   -10   90   90
NAV:
NAVB−A = −100(A/P)12,6 − 10(P/A)12,4(A/P)12,6 + 90(F/A)12,2(A/F)12,6 = −$8.20
Since the incremental NAV is less than zero, Alternative A is the best economic choice.
NFV:
NFVB−A = −100(F/P)12,6 − 10(P/A)12,4(F/P)12,6 + 90(F/A)12,2 = −$66.5
Since the incremental NF V is less than zero, Alternative A is the best economic choice.
I RR:
N P VB−A = 0 = −100 − 10(P /A)I RR,4 + 90(P /A)I RR,2(P /F )I RR,4
Trial and error solution yields I RRB−A = 5.4%
Since I RRB−A is less than the MARR, Alternative A is the best economic choice.
ERR:
PCB−A = |−100 − 10(P/A)12,4| = $130.4
FIB−A = 90(F/A)12,2 = $190.8
ERRB−A = (190.8/130.4)^(1/6) − 1 = 0.0655 = 6.55%
Since ERRB−A is less than the MARR, Alternative A is the best economic choice.
In summary, each of the analysis techniques of NPV, NAV, NFV, incremental IRR, and
incremental ERR, indicate that Alternative A is the best economic choice. However, of these five
options, the authors strongly suggest the NPV method because it usually involves the least amount
of calculations and never requires the use of incremental analyses.
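A numerical check of this example (an illustrative addition, not part of the original text) is shown below; note that Alternative A is padded with zero cash flows, exactly as described above, before the incremental project is formed.

def npv(rate, cfs):
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cfs))

def irr(cfs, lo=-0.9, hi=10.0):
    f_lo = npv(lo, cfs)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f_lo * npv(mid, cfs) <= 0:
            hi = mid
        else:
            lo, f_lo = mid, npv(mid, cfs)
    return 0.5 * (lo + hi)

marr = 0.12
A = [-200, 100, 100, 100, 100]
B = [-300, 90, 90, 90, 90, 90, 90]

print(round(npv(marr, A), 1))   # about 103.7
print(round(npv(marr, B), 1))   # about 70.0

A_ext = A + [0, 0]                        # extend A with zero cash flows, not new income
inc = [b - a for a, b in zip(A_ext, B)]   # incremental project B-A
print(round(irr(inc) * 100, 1))           # roughly 5.4 (text value), below the 12% MARR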
5.5 INDEPENDENT AND CONTINGENT INVESTMENTS
5.5.1 INDEPENDENT INVESTMENTS
Consider the case when an investor is faced with the choice of investing in one or more projects (rather
than just one from a list of mutually exclusive alternatives) depending upon how much investment
capital is available. These alternatives are said to be independent alternatives. The final decision of
which projects to invest in will be based on maximizing the NP V for the given investment dollars.
This could mean that several combinations of projects will need to be evaluated.
5.5.2 CONTINGENT INVESTMENTS
A contingent project is a project that is conditional on the choice of one or more other projects. For
example, in the discipline of petroleum engineering, consider that the regional office of a large oil
company must make a decision to invest in one of the following projects for a particular producing
field: a series of well workovers to increase production from the existing wells; a polymer flood to
capture more oil from the field; or drilling a number of new wells within the field to expedite the oil
recovery from the field. Unfortunately, prior to investing in a full-scale polymer flood, the regional
office must also invest in a pilot polymer flood that will, most likely, not be an economic success
by itself. However, if the pilot is technically successful, then the full-scale polymer flood could be
considered. Therefore, the full-scale polymer flood would be considered a contingent project because
it could not be implemented without also choosing to invest in the pilot flood.
Example 5.6
Projects A, B, and C are being considered as investments. List the combinations that will need
to be considered under each of the following scenarios:
(a) The projects are mutually exclusive
(b) The projects are independent
(c) Projects A and B are mutually exclusive, but project C is contingent on project B.
(a) If the projects are already mutually exclusive, then the investor can only invest in one project.
Therefore, the list of combinations would be:
  Mutually Exclusive    Projects Included    Possible
  Alternative           A   B   C            Combinations
  1                     0   0   0            None
  2                     1   0   0            A
  3                     0   1   0            B
  4                     0   0   1            C
(b) If the projects are independent, then the investor can invest in any or all projects. Therefore, the
list of combinations would be:
  Mutually Exclusive    Projects Included    Possible
  Alternative           A   B   C            Combinations
  1                     0   0   0            None
  2                     1   0   0            A
  3                     0   1   0            B
  4                     0   0   1            C
  5                     1   1   0            A, B
  6                     1   0   1            A, C
  7                     0   1   1            B, C
  8                     1   1   1            A, B, C
(c) For the contingencies given, the list of combinations would be:
  Mutually Exclusive    Projects Included    Possible
  Alternative           A   B   C            Combinations
  1                     0   0   0            None
  2                     1   0   0            A
  3                     0   1   0            B
  4                     0   1   1            B, C
Of the list from (b) in this example, the following combinations are missing for the following
reasons:
1. C only – Since C is contingent on project B, it cannot stand by itself
2. A,B – Since A and B are mutually exclusive, they cannot be combined together
3. A,C – Since C is contingent on project B and project B is not in this combination, A and C
cannot be combined
4. A,B,C – Since A and B are mutually exclusive, they cannot be combined together
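For larger problems it is convenient to enumerate the feasible combinations programmatically. The sketch below (an illustrative addition, not part of the original text) reproduces the list for case (c).

from itertools import combinations

projects = ["A", "B", "C"]

def feasible(combo):
    s = set(combo)
    if "A" in s and "B" in s:       # A and B are mutually exclusive
        return False
    if "C" in s and "B" not in s:   # C is contingent on B
        return False
    return True

alternatives = [c for r in range(len(projects) + 1)
                for c in combinations(projects, r) if feasible(c)]
print(alternatives)   # [(), ('A',), ('B',), ('B', 'C')]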
5.5.3 LIMITED INVESTMENT CAPITAL
When investment capital is unlimited and more than one project may be chosen, the analysis simply
requires the determination of which project(s) will earn more than the MARR. This can be done
with any of the analysis techniques discussed previously. Once the list of acceptable alternatives has
been generated, the economic choice is to invest in all of them.
When investment capital is limited, the analysis approach is a bit more complicated. The
basic approach is to determine all possible combinations of projects in which the total investment
is within the capital constraints and then to analyze each of the combinations as being mutually
exclusive. The combination with the highest NPV will represent the set of projects in which one
should invest.
Example 5.7
The cash flow diagrams of six projects, A through F, are shown below. For these projects,
determine what combination of projects is the best economic choice using NPV analysis and a
MARR of 10%. Projects B, C, and E are mutually exclusive. Projects A and D are mutually exclusive
but both are contingent on the acceptance of C. Project F is contingent on the acceptance of either
B or E. Consider two separate scenarios:
(a) Assume unlimited capital
(b) Assume limited capital of $30,000
A:
  Year:       0       1      2      3
  Cash flow: -5000    2500   2500   2500

B:
  Year:       0        1       2       3
  Cash flow: -30000    13500   13500   13500

C:
  Year:       0        1       2       3
  Cash flow: -15000    10000   10000   10000
D:
  Year:       0        1      2      3
  Cash flow: -10000    6000   6000   6000

E:
  Year:       0        1       2       3
  Cash flow: -20000    10000   10000   10000

F:
  Year:       0        1       2       3
  Cash flow: -15000    11000   11000   11000
It can be shown that the individual projects have the following NP V s:
  Project    NPV
  A          $1,220
  B          $3,570
  C          $9,870
  D          $4,920
  E          $4,870
  F          $12,360
The table of mutually exclusive alternatives would be:
  Mutually Exclusive    Projects Included            Possible          Investment
  Alternative           A   B   C   D   E   F        Combinations      Capital Needed    NPV
  1                     0   0   0   0   0   0        None              0                 0
  2                     0   1   0   0   0   0        B                 $30,000           $3,570
  3                     0   0   1   0   0   0        C                 $15,000           $9,870
  4                     0   0   0   0   1   0        E                 $20,000           $4,870
  5                     1   0   1   0   0   0        A, C              $20,000           $11,090
  6                     0   0   1   1   0   0        C, D              $25,000           $14,790
  7                     0   1   0   0   0   1        B, F              $45,000           $15,930
  8                     0   0   0   0   1   1        E, F              $35,000           $17,230
(a) There are eight mutually exclusive alternatives that result from the original six individual
projects and their interrelationships. When capital is unlimited, the correct economic choice is
the alternative that maximizes the NPV. In this case, alternative #8, which consists of investing
in projects E and F is the correct economic choice because it has the largest NPV.
(b) When capital is limited to $30,000, alternatives #7 and #8 are no longer considered. With
those removed, the correct economic choice will be alternative #6 since it will maximize the
NPV for those projects whose total investment is less than or equal to $30,000.
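Once the mutually exclusive alternatives and their NPVs have been tabulated, the selection itself is a simple maximization. The sketch below (an illustrative addition, not part of the original text) uses the capital requirements and NPVs from the table above.

# alternative number -> (investment capital needed, NPV), from the table above
alternatives = {
    1: (0, 0),         2: (30000, 3570),  3: (15000, 9870),  4: (20000, 4870),
    5: (20000, 11090), 6: (25000, 14790), 7: (45000, 15930), 8: (35000, 17230),
}

def best(budget=None):
    feasible = {k: v for k, v in alternatives.items()
                if budget is None or v[0] <= budget}
    return max(feasible, key=lambda k: feasible[k][1])

print(best())        # 8  (unlimited capital)
print(best(30000))   # 6  (capital limited to $30,000)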
5.6 RANKING ALTERNATIVES
As mentioned earlier, one can always correctly rank alternatives according to their NP V values.
The combination of projects that is within any constraint of investment capital and has the highest
N P V will be the alternative of choice. However, one cannot correctly rank alternatives by I RR or
ERR values unless one utilizes incremental analyses. To illustrate this further, another example is
presented below.
Example 5.8
For the six projects listed in Example 5.7, use the IRR and ERR techniques to choose the
best mutually exclusive alternative.
It can be shown that mutually exclusive alternatives 1 through 8 have the following I RRs and
ERRs:
  Alternative    IRR      ERR
  1              10.0%    10.0%
  2              16.7%    14.2%
  3              44.6%    30.2%
  4              23.4%    18.3%
  5              39.5%    27.4%
  6              41.3%    28.4%
  7              29.2%    21.7%
  8              36.3%    25.7%
Direct ranking by I RR or ERR would indicate that Alternative 3 (project C alone) would
be the best economic choice. This is, of course, inconsistent with the previous NPV analysis. To
overcome this inconsistency, the evaluator must perform incremental IRR or incremental ERR
analyses. For example, the incremental IRR technique is shown below:
1. Order the alternatives from the largest investment to the smallest:
  Alternative    Capital Investment    Annual Cash Flow
  7              $45,000               $24,500
  8              $35,000               $21,000
  2              $30,000               $13,500
  6              $25,000               $16,000
  4              $20,000               $10,000
  5              $20,000               $12,500
  3              $15,000               $10,000
  1              $0                    $0
2. Calculate the I RR of the incremental project 7-8:
  Year:       0        1      2      3
  Cash flow: -10000    3500   3500   3500
N P V7−8 = 0 = −10000 + 3500(P /A)I RR,3
Trial and error solution yields I RR7−8 = 2.5%
Since the incremental I RR is less than the MARR, Alternative #8 is better than Alternative #7.
Keep Alternative #8, discard Alternative #7, and compare Alternative #8 with the next one on
the list (#2).
3. Calculate the I RR of the incremental project 8-2:
  Year:       0       1      2      3
  Cash flow: -5000    7500   7500   7500

NPV8−2 = 0 = −5000 + 7500(P/A)IRR,3
Trial and error solution yields IRR8−2 = 139.0%
Since the incremental I RR is greater than the MARR, Alternative #8 is better than Alter-
native #2. Keep Alternative #8, discard Alternative #2, and compare Alternative #8 with the
next one on the list (#6).
4. Continue in this manner (comparing the best choice with the next one on the list) until one
has exhausted all of the alternatives. Alternative #8 will be the last remaining alternative and,
thus, will be the best economic choice. This result is now consistent with the one from NPV
analysis. A similar method for ERR will yield the same ultimate results of Alternative #8 being
the best economic choice.
Another way to complete the incremental IRR technique is to compare each alternative with
all other alternatives that have a lower capital investment and compute the incremental IRR. This
would result in a table of incremental IRRs as given below:
  Row #    Alternative with higher     Alternative with lower capital investment (incremental IRR, %)
           capital investment          8       2       6       4       5       3       1
  1        7                           2.5*    52.8    6.1     33.8    20.7    21.2    29.2
  2        8                                   139     23.4    52.8    32.1    29.9    36.3
  3        2                                           -174    2.5     -42.4   -15.9   16.6
  4        6                                                   106     48.7    36.3    41.3
  5        4                                                           -       -       23.4
  6        5                                                                   23.4    39.5
  7        3                                                                           44.6
  * IRR7-8
The use of this table would be as follows:
1. Start in row 1. Compare incremental alternative 7-8. Since the incremental I RR (2.5%) is less
than the MARR (10%), choose Alternative #8. Drop to row 2 (that belongs to Alternative #8).
2. Compare incremental alternative 8-2. Since the incremental I RR (139%) is greater than the
MARR (10%), choose Alternative #8. Stay in row 2.
3. Compare incremental alternative 8-6. Since the incremental I RR (23.4%) is greater than the
MARR (10%), choose Alternative #8. Stay in row 2.
4. Compare incremental alternative 8-4. Since the incremental I RR (52.8%) is greater than the
MARR (10%), choose Alternative #8. Stay in row 2.
5. Compare incremental alternative 8-5. Since the incremental I RR (32.1%) is greater than the
MARR (10%), choose Alternative #8. Stay in row 2.
6. Compare incremental alternative 8-3. Since the incremental I RR (29.9%) is greater than the
MARR (10%), choose Alternative #8. Stay in row 2.
7. Compare incremental alternative 8-1. Since the incremental I RR (36.3%) is greater than the
MARR (10%), choose Alternative #8. Since there are no more alternatives to be compared
with Alternative #8, then Alternative #8 is the best economic choice.
For the case of limited capital ($30,000), omit Alternatives #7 and #8 from the table. Working through the reduced table in the same manner shows that Alternative #6 is the best economic choice.
  Alternative with higher     Alternative with lower capital investment (incremental IRR, %)
  capital investment          6       4       5       3       1
  2                           -174    2.5     -42.4   -15.9   16.6
  6                                   106     48.7    36.3    41.3
  4                                           -       -       23.4
  5                                                   23.4    39.5
  3                                                           44.6
In summary, once the incremental I RR table has been created, start with the alternative
with the largest initial investment and compare it to the alternative with the second largest initial
investment. If the incremental I RR is less than the MARR, drop to the row of the lower initial
investment and proceed to compare with the next alternative. If the incremental I RR is greater than
the MARR, stay on the same row and proceed to compare with the next alternative. Eventually, one
will “exit” from the table on the best economic choice.
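The "walk" through the table can also be expressed in a few lines of code. The sketch below (an illustrative addition, not part of the original text) contains only the incremental IRR values quoted in the steps above; pairs not listed are simply treated as failing the test.

# (higher-investment alternative, lower-investment alternative) -> incremental IRR, %
inc_irr = {
    (7, 8): 2.5, (8, 2): 139.0, (8, 6): 23.4, (8, 4): 52.8,
    (8, 5): 32.1, (8, 3): 29.9, (8, 1): 36.3,
}
order = [7, 8, 2, 6, 4, 5, 3, 1]   # largest to smallest capital investment
marr = 10.0

best = order[0]
for nxt in order[1:]:
    # stay on the current row if the increment earns at least the MARR,
    # otherwise drop to the row of the smaller investment
    if inc_irr.get((best, nxt), -1e9) < marr:
        best = nxt
print(best)   # 8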
5.7 PROBLEMS
5.1. Projects A and B below are mutually exclusive alternatives. The cash flow diagrams are
given. Determine which project is the best economic choice using NPV, IRR, and ERR
analyses. Use a value of 15% for MARR.
Project A:
  Year:       0       1      2      3     ...   9      10
  Cash flow: -8000    5000   5000   5000   ...   5000   5000    (plus salvage L = 8000 at year 10)

Project B:
  Year:       0        1      2      3     ...   9      10
  Cash flow: -12000    6000   6000   6000   ...   6000   6000    (plus salvage L = 12000 at year 10)
5.2. Two mutually exclusive, but unequal life, investment projects A and B are shown below.
Project A:
  Year:       0      1    2    3    4    5
  Cash flow: -100    40   40   40   40   140

Project B:
  Year:       0      1    2
  Cash flow: -120    60   180
(a) Determine the best economic choice using NPV, IRR, and ERR analyses. Use an
MARR of 20%.
(b) What value of MARR would reverse the ranking of projects A and B found in part
(a)?
For Problems 5.3 and 5.4.
The following projects are utilized in Problems 5.3 and 5.4. Projects A and B are indepen-
dent. Projects C and D are mutually exclusive and both are dependent on the acceptance of
B. Project E is dependent on the acceptance of A.
[The cash flow diagrams for Projects A through E are given here in the original text but are not reproduced.]
5.3. For the projects described above, do the following:
(a) List all mutually exclusive alternatives.
(b) Which alternative should be chosen if the MARR equals 10% and one has unlimited
capital?
(c) Which alternative should be chosen if the MARR equals 10% and investment capital
is limited to $80?
5.4. For the projects described above, do the following:
(a) List all mutually exclusive alternatives.
(b) Develop the incremental I RR table.
(c) Use the table to determine which alternative should be chosen if the MARR equals
10% and one has unlimited capital.
(d) Use the table to determine which alternative should be chosen if the MARR equals
10% and investment capital is limited to $80.
5.5. Use NPV and ERR analyses to determine which of the following two mutually exclusive
projects is the best economic choice. Use MARR of 15%.
Project A:
  Year:       0      1     2     3     4
  Cash flow: -500    200   200   200   200    (plus salvage L = 500 at year 4)

Project B:
  Year:       0      1     2
  Cash flow: -200    100   100    (plus salvage L = 200 at year 2)
5.6. Suppose you are considering two independent sets of two mutually exclusive projects each
plus a fifth project. The fifth project is contingent on two of the first four occurring. Make a
table that shows all of the mutually exclusive alternatives that are possible and the projects
that each alternative contains.
5.7. Projects A through E are being considered by an investor. They all are ten-year projects and
the MARR is 10%. Projects A and B are mutually exclusive. Projects C and D are mutually
exclusive and contingent on the acceptance of B. Project E is contingent on the acceptance
of A.
  Project    NPV        ERR    Capital Investment
  A          $5,000     –      $20,000
  B          $20,000    –      $15,000
  C          $15,000    –      $30,000
  D          $10,000    –      $22,000
  E          –          8%     $15,000
(a) List all of the possible mutually exclusive alternatives.
(b) Which alternative is the best economic choice with unlimited capital?
(c) Which alternative is the best economic choice with a capital constraint of $40,000?
5.8. Use Excel® to solve Problem 5.1 for values of MARR of 5%, 15%, 25%, 35%, and 45%.
5.9. Use Excel® to solve Problem 5.2 for values of MARR of 10%, 20%, and 30%.
5.10. Use Excel® to solve Problem 5.3 for values of MARR of 10%, 20%, 25%, and 30%.
5.11. Use Excel® to develop the incremental I RR table for Problem 5.4. Use the table to deter-
mine which alternative should be chosen if the MARR equals 10% and one has unlimited
capital.
5.12. Use Excel® to solve Problem 5.5 for values of MARR of 5%, 15%, 25%, and 35%.
CHAPTER 6
Determination of Project Cash Flow
6.1 INTRODUCTION
This chapter contains a discussion of escalation, depreciation, income taxes, and the subsequent
generation of cash flows when considering taxes. This chapter is not meant to be a detailed presenta-
tion on all of the ramifications of taxes. Most companies will use tax consultants and/or tax lawyers
instead of engineers to handle complicated tax questions. This chapter is meant to provide a basic
working knowledge of taxes so that the engineer can develop a stream of before- and after-tax cash
flows for a particular project.
6.2 ESCALATION AND INFLATION
When considering the effects of escalation on cash flows, it is necessary to define three types of
dollars with which evaluators work. The first is what is called today dollars. Today dollars simply refer
to the situation where all of the cash flows are calculated without any consideration for changes in
prices and costs as a function of time. This, of course, is not consistent with what actually occurs in
real life. A second type is escalated or actual dollars. When an evaluator attempts to estimate price
and cost changes and subsequently incorporates these changes into the cash flow calculations, then
the dollars are said to be escalated. The final type is constant dollars. When inflation is removed from
escalated dollars, then the resulting cash flows are said to be in constant dollars.
In order to more fully understand what is meant by these various types of dollars, the terms
inflation and escalation need to be defined.
Inflation refers to the general increase of prices with time due to an expanded money supply
with no hard assets to support the additional money. By definition, inflation affects prices of all
commodities by the same percentage amount. If the money supply decreases, there could be deflation
or the decrease in prices. There are as many causes of inflation as there are people who talk about it.
It is not the intent of the authors to discuss these causes. Using the Consumer Price Index (CPI)
that is published by the Bureau of Labor Statistics (http://www.bls.gov/data/), the average
inflation rate for 2000 to 2010 was 2.39% per year. The values of the CPI are shown in Figure 6.1 and Table 6.1 for various time periods. Published values are available back to 1913.

Figure 6.1: Consumer price index values for 1960-2010—from http://www.bls.gov/data/.

One can determine the average inflation rate for a given period of time by using the (F/P)i,n formula. Consider the CPI values from two different years, n and m (with n > m). The CPI from
year n will be considered as a future value and the CP I from year m will be considered a present
value. Thus:
CPI_n = CPI_m (1 + f)^(n−m), which can be solved for the inflation rate, f, as

f = [(CPI_yr n / CPI_yr m)^(1/(n−m)) − 1] * 100        (6.1)
For example, the average inflation rate between 1980 and 1990 was:

f = [(130.7/82.4)^(1/10) − 1] * 100 = 4.72%

Similarly, the average inflation rate between 2009 and 2010 was:

f = [(218.056/214.537)^(1/1) − 1] * 100 = 1.64%
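Equation 6.1 is easily evaluated for any pair of years; the short sketch below (an illustrative addition, not part of the original text) reproduces the two calculations above.

def avg_inflation(cpi_new, cpi_old, years):
    # Equation 6.1: average annual inflation rate in percent
    return ((cpi_new / cpi_old) ** (1.0 / years) - 1.0) * 100.0

print(round(avg_inflation(130.7, 82.4, 1990 - 1980), 2))        # about 4.72 (%)
print(round(avg_inflation(218.056, 214.537, 2010 - 2009), 2))   # about 1.64 (%)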
Escalation, on the other hand, refers to the total change in the price of a specific commodity or
service over a period of time. Prices of individual commodities can change due to supply and demand,
as well as many other factors. While the inflation rate is a single numerical value for all commodities,
the escalation rate may be different for each commodity. For example, for 2000 to 2010, the price
of food increased an average of 2.69% per year (similar to inflation), the price of unleaded gasoline
increased an average of 6.32% per year, and the price of computers actually dropped an average of
16.4% per year over that same time frame. It should be pointed out that escalation includes the effect of inflation. Figure 6.2 shows the price of unleaded gasoline ($/U.S. gallon) from 1976 to 2010.

Table 6.1: Consumer price index values for 1980-2010—from http://www.bls.gov/data/

  Year    CPI      Year    CPI      Year    CPI
  1980    82.4     1990    130.7    2000    172.2
  1981    90.9     1991    136.2    2001    177.1
  1982    96.5     1992    140.3    2002    179.9
  1983    99.6     1993    144.5    2003    184.0
  1984    103.9    1994    148.2    2004    188.9
  1985    107.6    1995    152.4    2005    195.3
  1986    109.6    1996    156.9    2006    201.6
  1987    113.6    1997    160.5    2007    207.342**
  1988    118.3    1998    163.0    2008    215.303
  1989    124.0    1999    166.6    2009    214.537
                                    2010    218.056
  ** Starting in 2007, the Bureau of Labor Statistics began publishing the CPI with three decimal places instead of one.
One should notice that the CPI curve in Figure 6.1 is relatively smooth, but the price of any
one commodity may fluctuate significantly over the same time frame as shown in Figure 6.2. This
is due to the fact that the CPI “measures the average change in prices paid for a market basket of
goods and services.” (U.S. Department of Labor)
The escalated or actual dollar type of analysis referred to above includes both the effect of
inflation and escalation. This type of analysis attempts to predict the future prices of those elements
that are part of the cash flow calculation. One can either let all income and expenses rise at the
average inflation rate or one can attempt to isolate each commodity and use various escalation rates
for each income or expense item.
The constant dollar analysis reflects the purchasing power of money over the life of the project
by factoring out the effect of inflation. For example, in constant dollars, the price of unleaded gasoline
has increased an average of 3.84% per year and the price of food only increased an average 0.29%
per year from 2000 to 2010. This calculation will be shown later in this chapter.
The today dollar analysis simply uses the current prices for the commodities that are part of
the cash flow calculation for the project and maintains them at this level throughout the life of the
project. Thus, there is no consideration of the effects of inflation and escalation.
The authors believe that one should use either an escalated dollar analysis or a constant dollar
analysis when attempting to determine the economic viability of a project. Today dollar analyses
should only be used for projects that have short enough lives that the costs of the commodities that are part of the cash flow calculation do not change substantially.

Figure 6.2: Price of unleaded gasoline from 1976 to 2010 (Government Accounting Office analysis of Bureau of Labor Statistics (BLS) data).
Two rules should be kept clearly in mind when incorporating the effects of inflation and
escalation. The first is that the dollar types, constant or escalated, should never be mixed within
a single cash flow diagram. The second is that the MARR that is used in the evaluation must be
consistent with the type of dollars used. A rate of return calculated from a set of cash flows that are
based on constant dollars should be compared with an MARR that is also based on constant dollars.
Similarly, consistency between escalated dollar cash flows and an MARR that is based on escalated
dollars is necessary. It should be noted that bank interest rates and investment bond interest rates
are based on escalated dollars. Thus, if an investor’s MARR is derived from those types of interest
rates, it should also be considered to be an escalated dollar MARR.
The relationship between interest rates in escalated and constant dollars can be obtained by
comparing the corresponding P /F factors:
(P /F )i,n = (P /F )f,n(P /F )ii,n
where,
i = escalated dollar interest rate, fraction per period
f = inflation rate, fraction per period
ii = constant dollar interest rate, fraction per period
If one substitutes the definition of (P /F ) and does some basic algebra, one can show:
ii = (1 + i)/(1 + f) − 1        (6.2)
This equation was utilized earlier to “factor out” the effect of inflation from escalation. For
example, for the period of 2000-2010, the inflation rate was 2.39% per year and the escalation rate
for unleaded gasoline was 6.32% per year. Using Equation 6.2, the constant dollar growth of this
commodity is:
ii = (1 + 0.0632)/(1 + 0.0239) − 1 = 0.0384 = 3.84%
Note that just subtracting the inflation rate from the escalation rate (a difference of 3.93% in this
example) is not the correct way to factor out inflation from escalation.
The relationships between today dollars, escalated dollars, and constant dollars are shown
below:
Escalated $ price at the end of year n = (Today $) * (1 + i)^n        (6.3)
Constant $ price at the end of year n = (Escalated $)/(1 + f)^n        (6.4)
Constant $ price at the end of year n = (Today $) * ((1 + i)/(1 + f))^n        (6.5)
where,
i = escalation rate, fraction per year
f = inflation rate, fraction per year
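The sketch below (an illustrative addition, not part of the original text) applies Equations 6.2 through 6.5 to the 2000-2010 gasoline figures quoted earlier; the $1.00 today-dollar price is a hypothetical value used only for illustration.

i = 0.0632   # escalation rate for unleaded gasoline, fraction per year
f = 0.0239   # average inflation rate, fraction per year

ii = (1 + i) / (1 + f) - 1            # Eq. 6.2: constant dollar growth rate
print(round(ii * 100, 2))             # about 3.84 (%)

today = 1.00                          # hypothetical price of $1.00 in today dollars
n = 10
escalated = today * (1 + i) ** n      # Eq. 6.3: escalated dollar price at year n
constant = escalated / (1 + f) ** n   # Eq. 6.4 (equivalently Eq. 6.5): constant dollar price
print(round(escalated, 3), round(constant, 3))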
Example 6.1
Cash flow diagrams for projects A and B are shown below. Assume that the cash flows are
in escalated dollars and that the escalated dollar MARR is 15%. (a) Calculate the NP V of each
project as given and (b) calculate the NP V if one assumes a 5% inflation rate.
A:
  Year:       0      1    2      3
  Cash flow: -100    55   60.5   66.55

B:
  Year:       0      1    2    3
  Cash flow: -100    40   60   80
(a):
NPVA = −100 + 55(P/F)15,1 + 60.5(P/F)15,2 + 66.55(P/F)15,3 = $37.33
NPVB = −100 + 40(P/F)15,1 + 60(P/F)15,2 + 80(P/F)15,3 = $32.75
(b):
For part (b), one needs to factor out the effect of inflation from the escalated cash flows. In
addition, the MARR will have to be adjusted to a constant dollar basis. The cash flows are adjusted
by using the (P /F ) factor at 5% for the corresponding number of years. For example, the 55 (year 1
cash flow for Project A) is multiplied by (P /F )5,1 to yield 52.38. When this is done, the cash flows
become:
A:
  Year:       0      1       2       3
  Cash flow: -100    52.38   54.88   57.49

B:
  Year:       0      1       2       3
  Cash flow: -100    38.10   54.42   69.11
The constant dollar MARR will be:
ii = (1 + i)/(1 + f ) − 1 = (1 + 0.15)/(1 + 0.05) − 1 = 0.0952 = 9.52%
The N P V s then become:
NPVA = −100 + 52.38(P/F)9.52,1 + 54.88(P/F)9.52,2 + 57.49(P/F)9.52,3 = $37.34
NPVB = −100 + 38.10(P/F)9.52,1 + 54.42(P/F)9.52,2 + 69.11(P/F)9.52,3 = $32.66
Note that within numerical round off, the NP V s are the same for either escalated or constant dollar
analysis. This will always be the case.
Example 6.2
A five-year life project has an initial capital expenditure of $250,000 and annual operating
costs beginning at the end of the year 1 of $100,000. At the end of the years 3, 4, and 5 the project
receives $500,000 as income. Calculate the I RR for the following cases:
(a) Assume the cash flows given are in escalated dollars and the escalated dollar MARR is 25%.
(b) Assume the cash flows given are in today dollars and that incomes are escalated at 6% and
costs are escalated at 10%.
(c) Assume inflation is 4% and rework part (b) in terms of constant dollars.
(a) (numbers are in $1000):
  Year:       0      1      2      3     4     5
  Cash flow: -250    -100   -100   400   400   400
N P V = 0 = −250 − 100(P /A)I RR,2 + 400(P /A)I RR,3(P /F )I RR,2
Trial and error solution yields I RR = 34.3%. This value would be compared to the escalated
MARR of 25% to indicate that it’s an economically acceptable project.
(b) (numbers are in $1000):
Use Equation 6.3 to convert today dollars to escalated dollars:

NPV = 0 = −250 − 110(P/F)IRR,1 − 121(P/F)IRR,2 + 463(P/F)IRR,3 + 485(P/F)IRR,4 + 508(P/F)IRR,5

Trial and error solution yields IRR = 39.9%. This value would be compared to the escalated
MARR of 25% to indicate that it's an economically acceptable project.
(c) (numbers are in $1000):
Use Equation 6.5 to convert the today dollars to constant dollars:
Year   Constant $ income               Constant $ costs                  Constant $ CF
0      -                               -250 (1.10)^0/(1.04)^0 = -250     -250
1      -                               -100 (1.10)^1/(1.04)^1 = -106     -106
2      -                               -100 (1.10)^2/(1.04)^2 = -112     -112
3      500 (1.06)^3/(1.04)^3 = 529     -100 (1.10)^3/(1.04)^3 = -118      411
4      500 (1.06)^4/(1.04)^4 = 540     -100 (1.10)^4/(1.04)^4 = -125      415
5      500 (1.06)^5/(1.04)^5 = 550     -100 (1.10)^5/(1.04)^5 = -132      418
NPV = 0 = −250 − 106(P/F)IRR,1 − 112(P/F)IRR,2 + 411(P/F)IRR,3 + 415(P/F)IRR,4 + 418(P/F)IRR,5
Trial and error solution yields IRR = 34.5%. This value would be compared to the constant
dollar MARR that is calculated according to Equation 6.2:
ii = (1 + 0.25)/(1 + 0.04) − 1 = 0.202 = 20.2%
This value would still indicate that it’s an economically acceptable project.
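The trial-and-error search for IRR is easily automated. The following Python sketch (illustrative only) finds the part (a) rate by bisection.

    # IRR of the part (a) escalated-dollar cash flows by bisection (a sketch).
    def npv(rate, cash_flows):
        return sum(cf / (1 + rate) ** n for n, cf in enumerate(cash_flows))

    def irr(cash_flows, lo=0.0, hi=2.0, tol=1e-7):
        # assumes NPV changes sign exactly once between lo and hi
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if npv(mid, cash_flows) > 0:
                lo = mid        # NPV still positive, so the rate must be higher
            else:
                hi = mid
        return (lo + hi) / 2

    cfs = [-250, -100, -100, 400, 400, 400]     # part (a), in $1000
    print(round(irr(cfs) * 100, 1))             # about 34.3%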
6.3 DEPRECIATION
Certain capital assets of a company lose their value with use and/or with time. A building or an
item of equipment is an example of such an asset. These assets have an initial value that is equal to
the original cost of the asset. However, they may lose value over time due to physical deterioration,
the development of improved facilities through technological advances, or changing demands on their use. The
reduction in value is called depreciation.
One also needs to recognize that most governments (including the United States) do not
allow companies, for tax purposes, to deduct the entire cost of an asset against their income in
the year that the asset is purchased. Since the asset retains at least some portion of its value over
its life, companies must prorate the deduction of the original asset cost over the usable life of the
asset. Governments will specify particular techniques for this proration. These techniques are called
depreciation methods.
Therefore, there are two interpretations of a depreciation account for a capital asset. Under the
first, a company would set aside actual cash in a depreciation account in order to have the necessary
funds to replace the asset at the end of its useful life. Under the second, rather than setting aside actual
cash in the depreciation account, the company would simply establish depreciation accounts for tax
purposes. That is, the depreciation account represents the allowable annual deduction of the asset
against the project’s income. The second interpretation represents reality. Thus, the depreciation
account that is maintained does not involve real dollars and depreciation expenses are known as
“paper” expenses in that they reduce the tax liability of the project but do not represent actual cash
expenditures. This chapter contains information on how to handle these paper expenses in the
calculation of after-tax cash flows for a project.
The most popular depreciation methods used in the United States are straight-line, sum-
of-the-years-digits, declining-balance, and the accelerated-cost-recovery-system. All four of these
methods will be discussed in this chapter.
In addition to the depreciation account, one also maintains a book value account that represents
the remaining value of the asset. Book value is simply the initial cost of the asset minus all accumulated
depreciation up to a specific point in time.
Depreciation calculations are based on the initial cost of the asset, P , any salvage value of the
asset at the end of its useful life, L, and the length of its useful life, N. The quantity P – L represents
the total allowable depreciation of the asset if it is held for the entire time period N.
6.3.1 STRAIGHT-LINE DEPRECIATION (SL)
When using the straight-line depreciation method, the yearly amount of depreciation is given by
Equation 6.6:
Dn = (P − L)/N    (6.6)
where, Dn = depreciation amount in year n, $
n = year of depreciation
P = initial cost of the asset, $
L = salvage value of the asset at the end of its useful life, $
N = length of the asset’s useful life, years
It should be evident that the depreciation is constant with time when using the straight-line
method.
At the end of any given year, the book value of the asset is given by Equation 6.7:
Bn = P − n(P − L)/N    (6.7)
where, Bn = book value of the asset at the end of year n, $
Excel® has a built-in function called SLN that computes straight-line depreciation:
= SLN(Initial_Cost, Salvage, Life)
where,
Initial_Cost = initial cost of the asset (P )
Salvage = salvage value of the asset (L)
Life = asset’s useful life (N)
6.3.2 DECLINING-BALANCE DEPRECIATION
Unlike straight-line depreciation, the annual depreciation amount determined using the declining-
balance method is not constant with time. The declining-balance method provides for a larger
depreciation deduction in the early years of an asset’s life than when using straight-line depreciation.
In this method, the depreciation amount is a fixed percentage of the remaining book value of the
asset.
The equations to calculate the annual depreciation amount and the book value at the end of
each year are given in Equations 6.8 and 6.9:
Dn = f (1 − f)^(n−1) P    (6.8)
Bn = (1 − f)^n P    (6.9)
where, f = a fixed percentage as a fraction
It should be noted that while the salvage value, L, is not utilized in the equations, one must
be careful that the total depreciation does not exceed the amount (P − L).
Limits have been placed on the value of f that can be used in the declining-balance method.
The value of f cannot exceed 2/N. When the value of 2/N is used, the method is referred to as the
double-declining-balance (DDB) method.
Excel® has a built-in function called DDB that computes double-declining balance depreci-
ation:
= DDB(Initial_Cost, Salvage, Life, Period, Factor)
where,
Initial_Cost = initial cost of the asset (P )
Salvage = salvage value of the asset (L)
Life = asset’s useful life (N)
Period = the period of interest
Factor = 2 (or omitted) for double-declining balance
6.3.3 SUM-OF-THE-YEARS-DIGITS (SYD) DEPRECIATION
This method, like the declining-balance method, provides for an accelerated depreciation deduction
in the early years of the useful life of an asset.
The equations to calculate the annual depreciation amount and the book value at the end of
each year are given in Equations 6.10 and 6.11:
Dn = [N − (n − 1)](P − L)/S    (6.10)

Bn = P − Σ(j=1 to n) Dj    (6.11)

where,
S = sum of the digits of the useful life of the asset = N(N + 1)/2
Excel® has a built-in function called SYD that computes sum-of-the-years-digits deprecia-
tion:
= SYD(Initial_Cost, Salvage, Life, Period)
where,
Initial_Cost = initial cost of the asset (P )
Salvage = salvage value of the asset (L)
Life = asset’s useful life (N)
Period = the period of interest
When calculating depreciation amounts for the determination of after-tax cash flows, it is
advantageous to use the most accelerated depreciation schedule possible. The sum-of-the-years-
digits and the declining-balance methods give larger depreciation amounts in the early years of an
asset. The straight-line method may, however, be more advantageous in later years.
Example 6.3
A device costs $5000 and has a salvage value of $800 after its useful life of 7 years. Calculate
the depreciation deduction that can be taken each year and the book value at the end of each year
for the useful life of the asset. Use the following depreciation methods:
(a) Straight-Line (SL)
(b) Double-Declining-Balance (DDB)
(c) Sum-of-the-Years-Digits (SYD)
(a) For Straight-Line:
Dn = (P − L)/N = (5000 − 800)/7 = $600 which remains constant over the 7 years
Bn = P − n(P − L)/N = 5000 − 600 n
(b) For Double-Declining Balance:
f = 2/N = 2/7 = 0.28571
Dn = f (1 − f )n−1P = 0.28571(0.71429)n−15000 = 1428.55(0.71429)n−1
Bn = (1 − f )nP = (0.71429)n5000 = 5000(0.71429)n
Year n    Book value Bn    Depreciation Dn
0         5000             -
1         3571             1429
2         2551             1020
3         1822             729
4         1301             521
5         929              372
6         800              129*
7         800              0**
*D6 would have been calculated as $266, but it was limited to $129 because the book value
cannot go below the salvage value.
**D7 would have been calculated as $190, but it was limited to $0 because the book value had
already reached the salvage value at the end of year 6.
(c) For Sum-of-the-Years-Digits:
S = N (N + 1)/2 = (7)(8)/2 = 28
Dn = [N − (n − 1)](P − L)/S = [7 − (n − 1)](5000 − 800)/28 = 150(8 − n)

Bn = P − Σ(j=1 to n) Dj = 5000 − Σ(j=1 to n) Dj
The depreciation and book values are shown in Figures 6.3 and 6.4 below to further demon-
strate the differences between these three methods.
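The three schedules can also be generated programmatically. The Python sketch below (function names are illustrative) reproduces the schedules above to within rounding, including the limit that keeps the declining-balance book value from falling below the salvage value.

    # Depreciation schedules for Example 6.3 (a sketch).
    def straight_line(P, L, N):
        return [(P - L) / N] * N

    def sum_of_years_digits(P, L, N):
        S = N * (N + 1) / 2
        return [(N - (n - 1)) * (P - L) / S for n in range(1, N + 1)]

    def double_declining_balance(P, L, N):
        f, book, schedule = 2 / N, P, []
        for _ in range(N):
            d = min(f * book, book - L)     # never depreciate below salvage
            schedule.append(d)
            book -= d
        return schedule

    P, L, N = 5000, 800, 7
    for name, sched in (("SL", straight_line(P, L, N)),
                        ("DDB", double_declining_balance(P, L, N)),
                        ("SYD", sum_of_years_digits(P, L, N))):
        print(name, [round(d) for d in sched])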
Solution with Excel®:
      A          B        C         D
1     P =        5000
2     L =        800
3     N =        7
4
5     Year       SL       DDB       SYD
6     1          $600     $1,429    $1,050
7     2          $600     $1,020    $900
8     3          $600     $729      $750
9     4          $600     $521      $600
10    5          $600     $372      $450
11    6          $600     $130      $300
12    7          $600     $0        $150

Formulas used in rows 6 through 12 (shown here for row 6):
   Column B:  =SLN($B$1,$B$2,$B$3)
   Column C:  =DDB($B$1,$B$2,$B$3,A6)
   Column D:  =SYD($B$1,$B$2,$B$3,A6)
Figure 6.3: Comparison of depreciation values for straight-line, double declining balance, and sum-of-
the-years-digits methods.
Figure 6.4: Comparison of book values for straight-line, double declining balance, and sum-of-the-
years-digits methods.
6.3.4 MODIFIED ACCELERATED COST RECOVERY SYSTEM (MACRS)
In 1981, the United States government passed the Economic Recovery Tax Act, which made sig-
nificant changes in depreciation calculations. The act was further modified in 1986, which led to
the Modified Accelerated Cost Recovery System (MACRS) for assets placed in service after 1980.
MACRS generally simplified the calculation of depreciation by (a) removing any reference to the
salvage value of the asset at the end of its useful life by assuming that L = 0 and (b) using various
combinations of the three previously presented depreciation methods to calculate annual deprecia-
tion values that are simply percentages of the original asset cost. As in the other methods, the asset's
book value is the original cost minus all accumulated depreciation. That is:
Dn = P ∗ Depreciation Rate(depreciable life, n)    (6.12)

Bn = P − Σ(j=1 to n) Dj    (6.13)
To determine what depreciation rate to use, one must first determine the depreciable life of the
asset. The MACRS method created the following classifications: 3-year property, 5-year property,
7-year property, 10-year property, 15-year property, 20-year property, and 25-year property. IRS
publication 946 (http://www.irs.gov/pub/irs-pdf/p946.pdf) defines the types of assets that
fit in each classification. Table 6.2 shows a summary of this publication. One should note that any
property that doesn’t specifically fit in another category is automatically classified as 7-year property.
Table 6.3 shows the depreciation rates that are used for various classifications, assuming a
half-year convention (most common assumption). A half-year convention simply recognizes that
assets are put into service at various times during any one year. Rather than beginning to depreciate
the asset on the actual day that it is put into service, the U.S. government allows a half year of
depreciation in the first year of use, a full year of depreciation from year two until year N and a half
year of depreciation in year N+1. Thus, depreciation for assets that fit in the 7-year depreciation
category is actually spread over a total of 8 years.
Table 6.2: Various classifications of depreciable property—from http://www.irs.gov/pub/irs-pdf/p946.pdf

3-year property: Tractor units for over-the-road use; Qualified rent-to-own property
5-year property: Automobiles, taxis, buses, and trucks; Computer and peripheral equipment; Office machinery; Certain geothermal, solar, and wind energy property
7-year property: Office furniture and fixtures; Agricultural machinery and equipment; Any property that does not have a class life and has not been designated by law as being in any other class; Any natural gas gathering line placed in service after April 11, 2005
10-year property: Vessels, barges, tugs and similar water transportation equipment; Qualified small electric meter and qualified smart electric grid system placed in service after Oct 3, 2008
15-year property: Any municipal wastewater treatment plant; Any qualified restaurant property placed in service before Jan 1, 2012; Electric transmission property used in transmission at 69 or more kilovolts of electricity placed in service after April 11, 2005; Any natural gas distribution line placed in service after April 11, 2005
20-year property: Farm buildings; Municipal sewers
25-year property: Water utility property that is not included as 20-year property
Table 6.3: Depreciation rates for various property lives—from http://www.irs.gov/pub/
irs-pdf/p946.pdf
Although Excel® does not have a built-in function that can be used to directly compute
MACRS depreciation, one can use the VDB function if one recognizes that MACRS is defined as
DDB depreciation, using 1/2 year convention, and then switching to straight-line depreciation:
=VDB(Initial_Cost, Salvage, Life, Start_Period, End_Period, Factor,no_Switch)
where,
Initial_Cost = initial cost of the asset (P )
Salvage = salvage value of the asset (L) – set to zero for MACRS
Life = asset’s useful life (N)
Start_Period to End_Period = the period of interest (which can be fractional time periods).
For 1/2 year convention in year 1, use Start_Period = 0 and End_Period = 0.5.
For full year convention for years 2 through N, use Start_Period = (n − 1.5) and
End_Period = (n − 0.5).
For 1/2 year convention in year N + 1, use Start_Period = (N − 0.5) and
End_Period =N .
Factor = 2 (or omitted for DDB)
no_Switch = FALSE (or omitted for automatic switching to straight-line)
For example, this approach can be used to generate the rates shown in Table 6.3 for 7-year property.
Example 6.4
Determine the yearly depreciation for the device described in Example 6.3 if it fits in the
7-year life category. Recall that P = $5000.
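Because the tabulated solution does not reproduce well here, the following Python sketch (an illustration, not the authors' spreadsheet) builds the 7-year MACRS rates as double-declining balance with a half-year convention and an automatic switch to straight line, then applies them to P = $5000. The computed rates agree with Table 6.3 to within its two-decimal rounding.

    # MACRS rates by DDB with half-year convention and switch to SL (a sketch).
    def macrs_rates(N):
        rates, book, ddb = [], 1.0, 2.0 / N
        for year in range(1, N + 2):                    # N + 1 recovery years
            if year == 1:
                d = book * ddb * 0.5                    # half year in year 1
            else:
                remaining = N - year + 1.5              # recovery life left
                d = max(book * ddb, book / remaining)   # switch to SL when larger
            d = min(d, book)                            # never exceed remaining basis
            rates.append(d)
            book -= d
        return rates

    P = 5000
    for year, rate in enumerate(macrs_rates(7), start=1):
        print(year, round(rate, 4), round(P * rate, 2))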
6.4 CASH FLOW COMPUTATION
As described in Chapter 2, cash flow is simply the net change (+ or -) in a company’s or individual’s
cash balance relative to a given project. That is, a positive project cash flow for a period would
indicate that the company had more cash due to that project at the end of that period than it did at
the beginning. A negative project cash flow would indicate just the opposite.
The following discussion lists the major considerations in determining cash flow for a project.
Cash flows can be calculated as before-tax or after-tax (where the tax is state and federal income tax).
It should be noted that the implications of a specific project on the company’s overall tax situation
will ultimately be determined by the company’s accountants and/or tax attorneys. Therefore, most
engineering economic analyses will be conducted on before-tax cash flows. However, sometimes it
is necessary or informational to evaluate after-tax cash flows. Therefore, both types are covered in
this discussion.
6.4.1 CAPITAL INVESTMENT
Capital investment is the cash that is expended by the company or the individual necessary to get
the project underway. That is, it is money used by the company or individual to purchase fixed assets
such as land, machinery, or buildings rather than money used for day-to-day operations. While
cash expenditures for fixed assets will generally occur over the length of a specific time period, it is
assumed that, for economic evaluation purposes, it all occurs at the beginning of that time period.
Thus, if it takes $500,000 over 6 months to construct a manufacturing facility, one would consider
all $500,000 to be spent at the beginning of year 1 (which, recall, is year 0 on the cash flow diagram).
Capital investment will include all costs associated with the fixed assets that are being purchased.
For example, labor costs, materials, services, etc. that are part of the construction of a manufacturing
facility are considered capital investment.
6.4.2 GROSS REVENUE
Gross revenue is all revenue that is generated through the sale of a product or service. In most cases,
revenue for each product stream can be computed with Equation 6.14:
{Gross Revenue} = {# of items sold during a period} ∗ {price per item}    (6.14)
It should be noted that, for economic evaluation purposes, the period’s gross revenue will be assumed
to occur at the end of the particular time period in which it is generated.
6.4.3 OPERATING EXPENSES
Operating expenses are all cash outlays that are necessary to produce and sell the product or service.
These expenses may include, but are not limited to, items such as labor costs, building rent, utility
costs, raw materials, supplies, interest on loans, etc. Operating expenses are normally classified as
either fixed costs or variable costs. Fixed costs represent costs that are independent of the number of
units produced (for example building rent), whereas variable costs are proportional to the number
of units produced (for example raw materials). Equation 6.15 shows how to compute operating
expenses:
{Operating Expenses} = {Fixed Costs during a period} +
    {# of items sold during a period} ∗ {variable cost per item}    (6.15)
It should be noted that, for economic evaluation purposes, the period's operating expenses will be
assumed to occur at the end of the particular time period in which they are incurred. The assumption
that all capital investment will occur at the beginning of each year and that the income and operating
expenses will occur at the end of each year is known as the end-of-year convention.
6.4.4 BEFORE-TAX PROFIT COMPUTATION
For the computation of before-tax profit, one only needs to consider gross revenues and operating
expenses:
{Before-tax Profit} = {Gross Revenue} − {Operating Expenses}    (6.16)
6.4.5 BEFORE-TAX CASH FLOW COMPUTATION
For the computation of before-tax cash flows, one needs to have information on capital investment,
gross revenue, and operating expenses for each time period, n:
{Before-tax Cash Flow} = {Gross Revenue} − {Operating Expenses} − {Capital Investment}    (6.17)
Example 6.5
Create the cash flow diagram for the following project. $300,000 is to be expended over 6
months to build a bicycle manufacturing facility. It is assumed that the facility will build 500 bicycles
the first year and 1000 bicycles in years two through five. The bicycles will be sold for $500 in the first
year with an estimated 4% escalation rate in years two through five. In the first year, fixed operating
costs will be $20,000 and variable operating costs will be $100 per bicycle. Assume an estimated 3%
escalation rate in years two through five for both operating costs.
The table below shows the detailed cash flow calculations for each year that results in the
following cash flow diagram ($ in thousands):
Year:         0      1       2        3        4        5
Cash flow:  -300    180    396.4    413.5    431.3    449.8

Year   Capital Investment   Gross Revenue                  Operating Expenses                        Before-tax cash flow
0      300,000              0                              0                                         -300,000
1      0                    500*500 = 250,000              20,000 + 500*100 = 70,000                 180,000
2      0                    1000*500*(1.04) = 520,000      (20,000 + 1000*100)*(1.03) = 123,600      396,400
3      0                    1000*500*(1.04)^2 = 540,800    (20,000 + 1000*100)*(1.03)^2 = 127,300    413,500
4      0                    1000*500*(1.04)^3 = 562,400    (20,000 + 1000*100)*(1.03)^3 = 131,100    431,300
5      0                    1000*500*(1.04)^4 = 584,900    (20,000 + 1000*100)*(1.03)^4 = 135,100    449,800
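The same before-tax cash flows can be generated with a few lines of Python (a sketch; the variable names are illustrative, and the results differ from the table above only by its rounding).

    # Example 6.5 before-tax cash flows, Equation 6.17 (a sketch).
    units      = [500, 1000, 1000, 1000, 1000]    # bicycles sold, years 1-5
    base_price = 500                              # today $ selling price
    price_esc  = 0.04                             # price escalation after year 1
    fixed_cost, var_cost, cost_esc = 20_000, 100, 0.03

    cash_flows = [-300_000]                       # capital investment at year 0
    for n, q in enumerate(units, start=1):
        revenue = q * base_price * (1 + price_esc) ** (n - 1)
        expense = (fixed_cost + q * var_cost) * (1 + cost_esc) ** (n - 1)
        cash_flows.append(revenue - expense)      # no new capital after year 0

    print([round(cf) for cf in cash_flows])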
6.4.6 DEPRECIATION
As mentioned above, depreciation costs are “paper expenses” that result from the depreciation of a
capital item. That is, there is no actual cash expenditure for this category. The cost does, however,
reduce the company’s income tax burden as will be shown. One can pick any of the methods given
above to calculate the depreciation expenses.
6.4.7 TAXABLE INCOME
Taxable income is the income (or sometimes called gross profit) that is subject to taxation by the
United States government:
{Taxable Income} = {Gross Revenue} − {Operating Expenses} − {Depreciation}    (6.18)
6.4.8 STATE AND FEDERAL INCOME TAX
As shown in Table 6.4, U.S. companies compute their U.S. federal income tax (FIT) as a percentage
of their taxable income. (United States Code: Title 26, Subtitle A, Chapter 1, Part II, § 11) Even
though the FIT rate varies as the taxable income increases, it is common for engineering economic
analyses to use a flat tax rate of 35% on all taxable income. In addition, many states in the U.S. have a
state income tax of a few percent (0 to 12% with a U.S. average of 6.56%). For engineering economic
calculations, it is sufficiently accurate to add the state and federal income tax rates together to arrive
at an effective tax rate.
Table 6.4: United States corporate income tax (FIT) rates—from United States Code: Title 26,
Subtitle A, Chapter 1, Part II, § 11.
Therefore,
{FIT} = {Taxable Income} ∗ {Tax Rate}    (6.19)
In some circumstances, FIT can be allowed to be a negative value. That is, if the taxable
income is negative (a “loss”), multiplying any tax rate by that taxable income would yield a negative
value for FIT. This would be the same as the government paying the project for losing money!!
However, this computation can be defensible if the project that is being evaluated is only one of
many for a large company. Since the company only pays taxes on its total taxable income (that is,
from all projects taken together), a loss from one project will reduce the taxes that would be paid by
a profitable project. Thus, the project that generates a negative taxable income does indeed yield a
negative tax. Allowing negative FIT values is known as a “corporate analysis.”
If the project is a “stand alone” project (that is, its profit or loss will not be combined with any
other project), then any negative values of FIT must be changed to zero for that year. However, the
loss in that year may be carried forward into the future to reduce taxes from a profitable year that
occurs later. This is an area where consultation with a corporate tax expert would be necessary.
6.4.9 NET PROFIT
Net Profit is computed as the taxable income minus the income tax:
{Net Profit} = {Taxable Income} − {FIT} = {Taxable Income} ∗ (1 − Tax Rate)    (6.20)
6.4.10 CASH FLOW
The values defined above can now be combined in order to compute the cash flow (or net cash flow)
for a particular period:
{Cash Flow} = {Net Profit} + {Depreciation} − {Capital Investment}    (6.21)
As mentioned before, since depreciation is only a “paper” expense (that is, no actual cash
payment is made for depreciation), it must be added back into the cash flow calculation. Depreciation’s
only effect, therefore, is to reduce the income tax that is paid.
Any capital investment (cash spent on depreciable assets) made during the particular period
is subtracted after all other cash flow considerations are taken into account.
Example 6.6
Determine the after tax cash flows for the ten years of the following project’s life:
Initial capital investment: $1,000,000
Use 7-year MACRS depreciation
Total tax rate of 40%
Corporate tax analysis
Sales Schedule:
Year               1        2        3        4        5        6-10
# of units sold    5,000    5,000    7,000    7,000    10,000   10,000
Price per unit     $100     $110     $120     $120     $140     $140
Fixed Costs: $200,000 per year
Variable Costs: $30 per unit
Solution:
Year 0
For evaluation purposes, assume that the initial capital investment occurs at the beginning of
year 1 (which, by definition, is year 0).
CF0 = −1,000,000
Year 1
Gross Revenue = 5,000 * 100 = $500,000
Operating Costs = 200,000 + 5000 * 30 = $350,000
Depreciation = 0.143 * 1,000,000 = $143,000
Taxable Income = 500,000 - 350,000 - 143,000 = $7,000
FIT = 0.40 * 7,000 = $2,800
CF1 = 7,000 - 2,800 + 143,000 = $147,200
The remaining nine years are calculated in a similar manner and are shown in the following
cash flow table:
Year   Gross Revenue   Operating Costs   Depreciation   Taxable Income   FIT        Capital Investment   Cash Flow
0                                                                                   1,000,000            -1,000,000
1      500,000         350,000           143,000        7,000            2,800                           147,200
2      550,000         350,000           245,000        -45,000          -18,000                         218,000
3      840,000         410,000           175,000        255,000          102,000                         327,000
4      840,000         410,000           125,000        305,000          122,000                         308,000
5      1,400,000       500,000           89,000         811,000          324,000                         576,000
6      1,400,000       500,000           89,000         811,000          324,000                         576,000
7      1,400,000       500,000           89,000         811,000          324,000                         576,000
8      1,400,000       500,000           45,000         855,000          342,000                         558,000
9      1,400,000       500,000           0              900,000          360,000                         540,000
10     1,400,000       500,000           0              900,000          360,000                         540,000
At a MARR of 20%, the NPV of this project can be shown to be $518,000 (after
tax).
One might wish to generate an Excel® spreadsheet to allow additional analysis of this problem
if any or all of the given numerical values change. Such a spreadsheet is shown on the next page.
From the formulas it can be seen that key numerical values can be easily changed and the remainder
of the spreadsheet will change accordingly.
The formulas and/or values in each column are shown on the next pages.
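Because those spreadsheet pages do not reproduce well here, the following Python sketch (an illustration, not the authors' spreadsheet) carries out the same year-by-year after-tax calculation using the rounded MACRS percentages from the example.

    # Example 6.6 after-tax cash flows (a sketch of Equations 6.14-6.21).
    macrs7 = [0.143, 0.245, 0.175, 0.125, 0.089, 0.089, 0.089, 0.045, 0.0, 0.0]
    units  = [5000, 5000, 7000, 7000] + [10000] * 6
    price  = [100, 110, 120, 120] + [140] * 6
    capital, fixed, variable, tax = 1_000_000, 200_000, 30, 0.40

    cash_flows = [-capital]
    for n in range(10):
        revenue = units[n] * price[n]
        op_cost = fixed + units[n] * variable
        deprec  = capital * macrs7[n]
        taxable = revenue - op_cost - deprec
        fit     = tax * taxable              # corporate analysis: negative FIT allowed
        cash_flows.append((taxable - fit) + deprec)   # Equation 6.21

    npv20 = sum(cf / 1.2 ** n for n, cf in enumerate(cash_flows))
    print([round(cf) for cf in cash_flows], round(npv20))   # NPV is about $518,000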
6.5 PROBLEMS
6.1. Using the CPI, compute the average inflation rate from 1992 to 2009.
6.2. Cash flow diagrams for projects A and B are shown below. Assume that the cash flows are
in escalated dollars and that the escalated dollar MARR is 10%.
(a) Calculate the N P V of each project as given.
(b) Calculate the N P V if one assumes a 5% inflation rate.
Project A —  Year:    0     1     2     3
             CF:     -80    40    45    50

Project B —  Year:    0     1     2     3
             CF:    -120   100    80    60
6.3. An eight-year life project has an initial capital expenditure of $450,000, annual income of
$300,000 beginning at the end of year 1, and annual operating costs of $80,000 beginning
at the end of year 1. Calculate the I RR for the following cases:
(a) Assume the cash flows given are in escalated dollars and the escalated dollar MARR
is 20%.
(b) Assume the cash flows given are in today dollars and that incomes are escalated at 7%
and costs are escalated at 6%.
(c) Assume inflation is 4% and rework part (b) in terms of constant dollars.
6.4. An investment related to developing a new product is estimated to have the following costs
and revenues in today dollars. Do not consider any tax issues.
                Year:      0         1         2         3         4         5
Investment:             50,000    150,000
Income:                                      200,000   200,000   200,000   200,000
Oper Costs:                                  100,000   100,000   100,000   100,000
Salvage:                                                                         0
(a) Evaluate the project’s escalated dollar I RR if both capital costs and operating costs
are estimated to escalate at 15% per year from time zero and income is estimated to
escalate at 10% per year from time zero.
(b) Evaluate the project’s escalated dollar I RR assuming a “washout” of escalation of
income and operating costs with a 15% escalation of capital costs per year. “Washout”
means any operating cost escalation is offset by the same dollar escalation of revenue
(not the same percentage escalation) so that the before-tax profit remains uniform.
(c) Compute the constant dollar I RR of case (b) assuming that the rate of inflation will
be 10% per year.
6.5. Determine the breakeven escalated dollar selling price per unit, X, required in each of years
1 and 2 to achieve a 15% constant dollar project I RR, assuming a 12% per year inflation
rate. All values are given in today dollars.
                Year:      0          1          2
Investment:             100,000
Income:                            1000(X)    1000(X)
Oper Costs:                         50,000     50,000
Income escalation = 10% per year from time zero when selling price is $X per unit.
Operating Cost escalation = 15% per year from time zero.
1,000 units are to be produced and sold per year.
6.6. Equipment has been purchased for $2,000,000 and put into service with an expected salvage
value at the end of 10 years of $200,000. Calculate the annual depreciation using:
(a) 10-year straight-line method
(b) 10-year double-declining balance method
(c) 10-year sum-of-the-years-digits method
(d) 10-year MACRS
6.7. Consider a mining and processing project for an oil tar sands project. From the data given
below, calculate the after-tax cash flows for a 30-year life of the project and the NP V for
an MARR of 15%.
• Initial capital expenditures totaled $415.5 million and were distributed over four years
  (10% in year 0, 30% in year 1, 40% in year 2, and 20% in year 3).
• Beginning in year 4:
  – 17.666 million tons of ore will be mined per year
  – Bitumen production rate will be 7.347 million barrels per year
  – Product yield will be 0.841 barrels of oil per barrel of bitumen
  – Product selling price will be $80 per barrel
  – Operating costs:
    ∗ $10.47 per barrel of bitumen for plant and upgrading costs
    ∗ $9.02 per ton of ore for mining costs
  – 10-year straight-line depreciation
  – 40% tax rate (state and federal)
6.8. The XYZ oil company owns several natural gas wells and is negotiating a 10-year contract
to sell the gas from these wells to another company. They are negotiating on the price of the
gas in the first year, in dollars per thousand cubic feet ($/MCF), including a 4% escalation
clause. XYZ expects the wells to produce 33,000 MCF the first year and to decline at the
rate of 15% every year thereafter. Operating costs are estimated to be $2/MCF and escalate
at 3% per year. XYZ has agreed to spend $500,000 now to lay pipelines from each well to
the second company’s processing plant. What should the minimum price be the first year
for this to be acceptable to XYZ? Assume an end-of-year convention and an MARR of
15%.
6.9. An investment of $80,000 is projected to generate escalated dollar net revenues (income
minus costs) of $10,000 in year 1, $30,000 in year 2, and $40,000 in year 3 with a $40,000
salvage value at the end of year 3.
(a) Calculate the escalated dollar I RR for an escalated dollar MARR of 20%. Is this an
acceptable investment?
(b) Calculate the equivalent constant dollar I RR assuming that inflation will be 8% in
year 1, 10% in year 2, and 12% in year 3. Is this an acceptable investment?
6.10. The projected cost of the Alaskan oil pipeline was $900 million in 1969 dollars. The final
cost estimate was nearly $8.5 billion in 1977. What was the average yearly escalation rate
for the pipeline?
6.11. Boston’s “Big Dig” is one of the most expensive highway projects in the U.S. The project’s
original estimated cost was $2.6 billion in 1982 dollars. The costs in 2005 had risen to over
$14.6 billion.
(a) What is the value of the $14.6 billion in 1982 dollars?
(b) What was the average yearly escalation rate for the project?
6.12. Using Excel® and the CP I values given in Table 6.1, calculate the annual inflation rate for
each year from 1980 to 2010.
6.13. Use Excel® to solve Problem 6.2 for all 9 combinations of the following:
Values of MARR of 5%, 10%, and 15%
Inflation rates of 2%, 5%, and 7%
6.14. Use Excel® to solve the following problem. An eight-year life project has an initial capital
expenditure of $450,000, annual income of $300,000 beginning at the end of year 1, and
annual operating costs of $80,000 beginning at the end of year 1. Calculate the I RR for
the following cases:
(a) Assume the cash flows given are in escalated dollars and the escalated dollar MARR
is 10%, 20%, and 30%.
(b) Assume the cash flows given are in today dollars and pairs of escalation rates are:
a. Incomes are escalated at 7% and costs are escalated at 6%
b. Incomes are escalated at 3% and costs are escalated at 5%
c. Incomes are escalated at 4% and costs are escalated at 4%
(c) Assume inflation is 4% and rework all portions of part (b) in terms of constant dollars.
6.15. Use Excel® to solve Problem 6.6. Create a line graph that shows the values generated by
all four of the methods.
6.16. Use Excel® to solve Problem 6.7. The spreadsheet should allow for the user to easily change
any of the numerical values given.
6.17. Use Excel® to solve Problem 6.8. The spreadsheet should allow for the user to easily change
any of the numerical values given.
CHAPTER 7

Financial Leverage

7.1 INTRODUCTION
Earlier in this text, a brief description of the financial aspects involved in economic analyses was
presented. It was pointed out that one of the important financial aspects had to do with obtaining
the funds required to initiate the project. These funds are referred to as the investment capital. As
a source for this investment capital, a company could use its own internal funds (what is known as
equity funds), borrow funds from an external source (known as debt funds), or use a combination
of the two. The ratio of total borrowed funds to the total capital investment is called the financial
leverage factor. The ratio of borrowed funds to equity funds is called the debt to equity ratio. The
degree of financial leverage for any given project will affect the economic analysis of the project.
7.2 FINANCIAL LEVERAGE AND ASSOCIATED RISK
Under the correct conditions, financial leverage will allow an investor (company or individual) to
obtain a higher rate of return on its equity capital than it could achieve with no leverage. However,
there is often a good deal of added risk associated with leveraged projects. This additional risk is
due to the fact that when projects are financed with borrowed funds, those funds must be repaid to
the lender, independent of the ultimate success or failure of the project. If a leveraged project is only
marginally successful during any particular time period, the borrowed funds must be repaid to the
lender before any funds are used to pay a return on the equity portion of the investment.
7.3 ADJUSTMENT TO CASH FLOW EQUATIONS
Equations 6.14 and 6.15, as well as 6.18 through 6.21, allow the analyst to compute the after-tax cash
flows from a project. Some of these equations need to be modified for the case where the project is
leveraged. These modifications will account for the fact that (a) interest paid on the debt is a pre-tax
deduction while (b) the principal paid on the debt is not a pre-tax deduction.
Equation 6.18 is modified as follows:

{Taxable Income} = {Gross Revenue} − {Operating Expenses} − {Depreciation} − {Interest paid on debt}    (7.1)

Equation 6.21 is modified as follows:

{Cash Flow} = {Net Profit} + {Depreciation} − {Equity Investment} − {Principal paid on debt}    (7.2)
It should be noted that the investor is allowed to compute depreciation on the total value of
each asset in the project, independent of the source of funds. Regardless of that source, the investor
owns the full value of the depreciable assets that it procures for the project.
Example 7.1
A company is considering a one-year investment which will cost $1000. The company's before-
tax MARR is 10%. The $1000 will purchase assets that will be fully depreciated in the one year
of operation. There are three possible economic conditions that the company needs to investigate.
Details of these conditions are shown below. In addition, the company will consider three different
leverage factors: 0.0, 0.4, and 0.7. Interest on any borrowed funds will be 10% over the one year of
operation. Use a 40% corporate tax rate and determine the after-tax I RR on the equity funds for
each combination of the three economic conditions and the three leverage factors. Note that, for
economic condition A, the before-tax IRR on total assets (in this case $1000) is less than the interest
rate that will be charged on the loan. For economic condition B, the before-tax I RR on total assets
is equal to the interest rate to be charged on the loan and, for economic condition C, the before-tax
I RR is greater than the loan interest rate.
                                       Economic Conditions
                                       A         B         C
Revenue − Oper Costs                   $1050     $1100     $1200
Depreciation                           1000      1000      1000
Taxable income without leverage        50        100       200
IRR on total assets before taxes       5%        10%       20%
Before-tax cash flow diagrams for each economic condition:
Condition A:   year 0: -1000     year 1: 1050
Condition B:   year 0: -1000     year 1: 1100
Condition C:   year 0: -1000     year 1: 1200
Table 7.1 shows the cash flows and the computed after-tax I RRs for the 9 different combi-
nations. Figure 7.1 shows the after-tax IRR on equity as a function of the leverage factor for the
three different economic conditions.
Table 7.1: Effect of leverage and economic conditions on the after-tax I RR on equity for
Example 7.1
Figure 7.1: Effect of leverage factor for various economic conditions for Example 7.1.
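The nine combinations are simple enough to generate directly. The Python sketch below (illustrative; it applies Equations 7.1 and 7.2 to this one-year, fully depreciated project) should reproduce the equity IRRs summarized in Table 7.1.

    # Example 7.1: after-tax IRR on equity for each condition and leverage factor.
    def equity_irr(rev_minus_cost, leverage, assets=1000, loan_rate=0.10, tax=0.40):
        debt, equity = assets * leverage, assets * (1 - leverage)
        interest = loan_rate * debt
        taxable = rev_minus_cost - assets - interest   # asset fully depreciated
        net_profit = taxable * (1 - tax)               # negative FIT is allowed
        cf1 = net_profit + assets - debt               # add back depreciation, repay debt
        return cf1 / equity - 1                        # one-period IRR on equity

    for condition, r in (("A", 1050), ("B", 1100), ("C", 1200)):
        print(condition, [f"{equity_irr(r, lf):.1%}" for lf in (0.0, 0.4, 0.7)])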
From the results of Example 7.1, the following observations can be made:
1. Figure 7.1 shows that when the project’s before-tax I RR on assets is less than the interest rate
charged on the loan (economic condition A), the after-tax I RR on equity decreases as the
leverage factor increases. This makes sense because the project must pay the lender a higher
rate of interest than it will be able to pay the owner in rate of return.
2. Figure 7.1 shows that when the project’s before-tax I RR on assets is equal to the interest rate
charged on the loan (economic condition B), the after-tax I RR on equity is not affected as
the leverage factor increases. This makes sense because the project pays the lender the same
rate of interest as it will be able to pay the owner in rate of return.
3. Figure 7.1 shows that when the project’s before-tax I RR on assets is greater than the interest
rate charged on the loan (economic condition C), the after-tax I RR on equity increases as the
leverage factor increases. This makes sense because the project pays the lender a lower rate of
interest than it is able to pay the owner in rate of return.
4. There is more risk to equity capital when projects are leveraged with borrowed money. If the
economic conditions are poorer than originally predicted (such as condition A occurring when
condition C was predicted when the decision to invest was made), the after-tax I RR on equity
will decrease.
5. If enough equity capital exists, companies should not borrow money to fund a project unless
the interest rate paid on the debt is less than the before-tax I RR on the project’s total assets.
Leverage factors vary from company to company and even within a company from project
to project. In general, for most companies other than public utilities (who typically have very high
leverage factors of 0.6 or greater), leverage factors usually run from 0.3 to 0.5. A highly leveraged
project can do very well in a favorable economic climate, but may run into some hard times as
economic conditions go from good to bad. Many companies have used this principle to expand
rapidly during thriving business conditions.
Example 7.2
Consider the following five-year project with different methods of financing. A company has
the opportunity to invest in a five-year project that has an initial capital investment of $100,000.
The entire capital investment (total assets) will be depreciated over the five-year life of the project
using straight-line depreciation. Annual incomes and operating costs are expected to be $50,000
and $10,000, respectively. Interest on borrowed money will be 10% compounded annually. Calculate
the after-tax IRR on equity for the following cases and assuming a corporate tax rate of 40%. Use
an after-tax MARR of 12%.
(a) 100% equity.
(b) Leverage factor of 0.4. The principal payments will be constant for each of the five years and
the interest paid each year will be based on the outstanding debt balance.
(c) Leverage factor of 0.7. The principal payments will be constant for each of the five years and
the interest paid each year will be based on the outstanding debt balance.
(d) Leverage factor of 0.4. The principal and interest will be paid with a constant annual payment
as calculated according to: P &I payment = Debt ∗ (A/P )10%,5.
(e) Leverage factor of 0.4. Interest payments are made each year but the principal is paid back
in one lump sum at the end of the project. This is known as yearly interest with a “balloon
payment” of the principal at the end.
First, solve for the before-tax IRR on assets. This would be represented by a 0.0 leverage
factor and a 0% FIT rate.
Therefore, the before-tax IRR on assets for this project is 28.6%. Since the interest rate on
borrowed funds is less than this value, leveraging the project should increase the after-tax IRR on
equity.
(a) This solution will show the effect of the 40% FIT rate compared to the before-tax solution shown
previously.
The 40% FIT tax rate reduces the after-tax IRR on total assets to 18.0%.
(b) This solution will show the effect of a leverage factor of 0.4.
Leverage factor = 0.4;  Debt = $40,000;  Equity = $60,000

Year   Beginning debt balance   Interest   Principal   Taxable income   FIT      Net profit   Cash flow
0                                                                                             -60,000
1      40,000                   4,000      8,000       16,000           6,400    9,600        21,600
2      32,000                   3,200      8,000       16,800           6,720    10,080       22,080
3      24,000                   2,400      8,000       17,600           7,040    10,560       22,560
4      16,000                   1,600      8,000       18,400           7,360    11,040       23,040
5      8,000                    800        8,000       19,200           7,680    11,520       23,520

After-tax IRR on equity = 25.1%;  After-tax NPV = $18,691
The after-tax IRR on an equity investment of $60,000 has increased to 25.1%. This increase
is as expected. Also, the after-tax NPV has increased relative to a leverage factor of 0.0.
(c) This solution will show the effect of a leverage factor of 0.7.
Increasing the leverage factor to 0.7 further increases the after-tax IRR on equity and the
after-tax NPV.
(d) This solution will show the effect of paying constant annual principal and interest payments.
Using a more conventional method to repay the debt, the after-tax IRR and after-tax NPV
both increase slightly from the first repayment method.
(e) This solution will show the effect of paying annual interest and then a balloon payment for the
principal.
** Modified IRR using an after-tax MARR of 12%.
The balloon repayment method further increases the after-tax IRR and after-tax NPV.
From the results of Example 7.2, the following observations can be made:
1. If the results from parts (a), (b), and (c) are compared, it is again found that, under these
economic conditions, when the amount of borrowed funds is increased, a higher rate of return
is obtained on the equity investment. It should be stressed that this higher rate of return is on
a smaller amount of equity dollars compared to financing the project with 100% equity funds.
2. It can also be seen that after-tax NP V increases as the leverage factor is increased. NPV
analysis would further emphasize that, under the economic conditions of the before-tax I RR
on assets being greater than the interest rate paid on the debt, the best option is to maximize
the amount of leverage. By using maximum leverage on each project, a company can invest in
more projects and grow more rapidly.
3. Parts (b), (d), and (e) compare three different, but acceptable, methods of repaying the debt
portion of the investment. Since the interest on the borrowed money is less than the I RR on
assets, it is better to push the repayment of the principal as far forward in time as possible
in order to increase IRR on equity and NPV. The balloon payment technique provides the
highest IRR and NPV.
7.3.1 LEVERAGE AND MUTUALLY EXCLUSIVE PROJECTS
When applying leverage concepts to the evaluation of several projects to determine which one is
best, the leverage factor is an important variable. It has been shown in the example problems, that the
project I RR on equity is a function of the leverage factor. In order to compare projects, the degree
of leverage must be the same on all projects. Many companies have a policy that the comparison of
projects is done without considering any leverage for all of the projects. Once a project is chosen, then
various methods of financing, including different amounts of leverage and repayment techniques,
can be investigated as to their effect upon the project.
7.3.2 EXCEL® SPREADSHEET
As shown on the next page, the project spreadsheet generated for Example 6.6 can be easily modified
to include the effect of leverage. The only assumptions in the spreadsheet are that the loan will be
paid with constant principal payments over the first five years and the interest paid each year will be
based on the outstanding debt balance. This could be modified for other repayment options.
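A minimal Python sketch of the same calculation, using the Example 7.2(b) data (leverage factor 0.4, constant principal payments, interest on the outstanding balance), is shown below; it is an illustration rather than the authors' spreadsheet.

    # Leveraged after-tax cash flows, Example 7.2(b) data (a sketch).
    capital, life, income, op_cost = 100_000, 5, 50_000, 10_000
    tax, loan_rate, leverage = 0.40, 0.10, 0.4

    debt = capital * leverage
    equity = capital - debt
    principal = debt / life                   # constant principal payment
    depreciation = capital / life             # straight line on total assets

    cash_flows, balance = [-equity], debt
    for year in range(1, life + 1):
        interest = loan_rate * balance
        taxable = income - op_cost - depreciation - interest      # Equation 7.1
        net_profit = taxable * (1 - tax)
        cash_flows.append(net_profit + depreciation - principal)  # Equation 7.2
        balance -= principal

    def npv_at(rate):
        return sum(cf / (1 + rate) ** n for n, cf in enumerate(cash_flows))

    print([round(cf) for cf in cash_flows])
    print(round(npv_at(0.251)))    # nearly zero: ~25.1% is the equity IRR of part (b)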
7.4 PROBLEMS
7.1. A used piece of heavy equipment is available for purchase at $300,000. A rental company is
deciding whether or not to purchase the equipment. The company estimates the equipment
will create annual incomes of $110,000 and have annual operating costs of $20,000. The
equipment can be depreciated in five years with straight-line depreciation. Based on the
results from part (a) below, should the rental company purchase the equipment if their
corporate tax rate is 35%? Consider a five-year life project and an after-tax MARR of 15%.
(a) Determine the return on equity for each of three different leverage factors of 0, 0.4,
and 0.7. Assume an interest rate on borrowed funds to be 10% compounded annually.
The principal payments will be constant for each of the five years and the interest paid
each year will be based on the outstanding debt balance.
(b) Assume two additional economic conditions: (i) annual income increases to $125,000
and (ii) annual income decreases to $95,000. Repeat part (a) for these two economic
conditions. Prepare a plot of the I RR on equity versus the leverage factor.
7.2. A corporation’s tax rate is 40%. An outlay of $35,000 is being considered for a new asset.
Estimated annual revenues are $30,000 and estimated annual operating costs are $10,000.
The useful life of the asset is 5 years and has no salvage value. Use the SYD method of
depreciation. A lending institution has offered to loan the corporation 50% of the initial
investment cost at an annual interest rate of 12.5%. The principal and interest will be
paid with a constant annual payment as calculated according to: P &I payment = Debt ∗
(A/P )12.5%,5. If the corporation’s after-tax MARR is 15%, should it accept the loan?
7.3. Solve Problem 6.7 using a leverage factor of 0.2.
7.4. Use Excel® to solve Problem 6.8 for leverage factors of 0.2 and 0.4.
CHAPTER 8

Basic Statistics and Probability

8.1 INTRODUCTION
In previous chapters of this text, it was assumed that all of the information needed to make an eco-
nomic analysis was known without any uncertainty. In practice, this is a rare situation. Nearly always,
an evaluator will need to include a measure of the uncertainty pertaining to one or more variables
in the analysis. This uncertainty may, in turn, add significant uncertainty about the profitability of
an investment. For example, with one set of economic assumptions, the project’s NPV might be
greater than zero which would indicate an acceptable investment. However, with a different set of
economic assumptions, the project’s NPV might be negative, thereby indicating that the investor
should pass on this opportunity. This range of uncertainty about the project’s profitability is one
way to define the “risk” in a project. Having a basic understanding of statistics and probability will
allow an evaluator to incorporate various risk factors into the analyses that are to be completed for a
project. Some techniques that are available to incorporate uncertainty into project variables, and that
apply the ideas of statistics and probability presented in this chapter, will be presented in Chapter 9.
8.2 STATISTICS
8.2.1 MEASURES OF CENTRAL TENDENCY
Averages are often used to represent a set of data. Several different types of averages can be calculated.
These include the arithmetic mean, the median, the mode, and the geometric mean.These are known
as measures of central tendency as they tend to be centrally located within the data.
Arithmetic Mean
The arithmetic mean of a set of data is calculated with Equation 8.1. The arithmetic mean is also
known as the expected value of the data.
μ = ( Σ(i=1 to N) xi ) / N    (8.1)
where, xi = the ith value of the data
N = total number of data points
μ = arithmetic mean
Excel® has a built-in function to calculate the arithmetic mean:
= AVERAGE(number1, number2,…)
where, number1, number2, ... = list of data points
Median
When a set of data is arranged in order of magnitude, the median of the set is found by taking the
middle value (when there is an odd number of values) or the arithmetic mean of the two middle
values (when there is an even number of values).
Excel® has a built-in function to calculate the median:
= MEDIAN(number1, number2,…)
Mode
The mode is the value which occurs with the greatest frequency. A set of data can have a single
mode, several modes, or no modes.
Excel® has two built-in functions to calculate the mode:
single mode: = MODE.SNGL(number1, number2, …)
multiple modes: = MODE.MULT(number1, number2, …)
Geometric Mean
The geometric mean of a set of data is calculated with Equation 8.2:
G = [ Π(i=1 to N) xi ]^(1/N)    (8.2)

where, xi = the ith value of the data
N = total number of data points
Π(i=1 to N) xi = (x1)(x2)(x3) . . . (xN−1)(xN)
Excel® has a built-in function to calculate the geometric mean:
= GEOMEAN(number1, number2,…)
Example 8.1
Consider 100 exam scores from a college-level class as shown below:
75  67  96  73  78  53  51  47  31  42
76  91  81  87  91  55  88  90  53  74
57  88  94  59  97  81  78  90  83  65
77  38  93  76  35  78  94  88  67  89
54  86  94  60  88  75  86  38  67  99
84  91  67  79  78  73  78  79  39  90
51  77  88  79  46  97  80  90  78  84
94  46  99  85  79  34  85  39  91  82
95  46  48  79  85  87  76  56  48  32
85  87  83  98  74  88  51  57  95  61
(a) Calculate the arithmetic mean:
μ = (75 + 67 + 96 + · · · + 95 + 61)/100 = 73.7
(b) Calculate the median:
First, order the 100 scores from high to low. Since there is an even number of values, the
median is the average of 50th (79) and 51st (78) values, or 78.5.
(c) Calculate the mode:
Again, order the 100 scores from high to low and find the value that occurs most often. In this
case, the value of 78 occurs six times. Therefore, the mode is 78.
(d) Calculate the geometric mean:
G = (75 ∗ 67 ∗ 96 ∗ · · · ∗ 95 ∗ 61)^(1/100) = 70.9
Using Excel®:
The 100 scores are entered in cells A1:J10, and the following formulas are used:

   Average  = =AVERAGE(A1:J10)    → 73.7
   Median   = =MEDIAN(A1:J10)     → 78.5
   Mode     = =MODE.SNGL(A1:J10)  → 78
   Geo Mean = =GEOMEAN(A1:J10)    → 70.9
8.2.2 MEASURES OF DISPERSION
It is frequently desired to determine how a set of data is dispersed or spread about its average.
Measures of dispersion which will be discussed in this chapter include the range, the mean deviation,
the standard deviation, and the variance.
Range
The range of a set of data is simply the difference between the largest and the smallest values of the
data.
In order to compute the range of a set of data with Excel®, use the MAX and MIN functions:
=MAX(number1, number2,…) − MIN(number1, number2,…)
Mean Deviation
The mean deviation (or average deviation) is the mean of the distances between each value and the
mean. It is computed with Equation 8.3:
M.D. = ( Σ(i=1 to N) |xi − μ| ) / N    (8.3)
where, xi = the ith value of the data
μ = the arithmetic mean of the data
N = total number of data points
Excel® does not have a built-in function to calculate the mean deviation. To use Excel®, do
the following:
1. Place the data in a single column (for example, assume 10 data points in cells A1 through
A10).
2. Use the formula =AVERAGE(A1:A10) in cell A11 to compute the average of this column of
data. This is the mean of the data.
3. In the adjacent column B, use the formula =ABS(A1−$A$11) in cell B1.
4. Copy this formula to cells B2 through B10.
5. Use the formula =AVERAGE(B1:B10) in cell B11 to compute the average of this column of
data. This is the mean deviation of the data.
Standard Deviation
Standard deviation is another measure of the variability of a data set about its mean. Its origins are
associated with the normal distribution that is discussed later in this chapter, but it has meaning
for any set of data. A small value of standard deviation indicates that the data points are clustered
more closely to the mean than a larger value of standard deviation. If the entire population has been
sampled (that is, N equals the total possible number of data points in the population), the standard
deviation is calculated with Equation 8.4:
σ = sqrt[ Σ(i=1 to N) (xi − μ)^2 / N ]    (8.4)
where, xi = the ith value of the data
μ = the arithmetic mean of the data
N = total number of data points
If one was calculating the standard deviation of 100 exam scores in a particular college-level
class with 100 students, then N would be 100 in Equation 8.4. However, if only a subset of the
population is being sampled, N should be replaced with N − 1. It can be noted that when N gets
larger than about 30, there is very little error introduced by using N instead of N − 1. As an example
of a sample, assume that one wanted to measure the mean and standard deviation of the age of
the population in a city of 20,000 people. It would be difficult to get the age of all 20,000 people,
so a subset of the population is sampled (perhaps 1,000 people). One would use Equation 8.1 to
determine the mean age of the population and Equation 8.4 (with N − 1 instead of N) to determine
the standard deviation of the population’s age.
Excel® has two built-in functions to calculate the standard deviation:
=STDEV.P(number1, number2,…) for the entire population or
=STDEV.S(number1, number2,…) for a sample of the population.
Per Equation 8.4, STDEV.P contains a division by N, whereas STDEV.S contains a division
by N − 1.
Example 8.2
Consider 100 exam scores from a college-level class as shown below (same as Example 8.1):
75  67  96  73  78  53  51  47  31  42
76  91  81  87  91  55  88  90  53  74
57  88  94  59  97  81  78  90  83  65
77  38  93  76  35  78  94  88  67  89
54  86  94  60  88  75  86  38  67  99
84  91  67  79  78  73  78  79  39  90
51  77  88  79  46  97  80  90  78  84
94  46  99  85  79  34  85  39  91  82
95  46  48  79  85  87  76  56  48  32
85  87  83  98  74  88  51  57  95  61
(a) Calculate the range:
Order the numbers from high to low. The range is then given by the highest value minus the
lowest value. Range = 99-31 = 68.
(b) Calculate the mean deviation:
M.D. = (|75 − 73.7| + |67 − 73.7| + |96 − 73.7| + · · · + |95 − 73.7| + |61 − 73.7|) /100 = 15.4
(c) Calculate the standard deviation:
σ = sqrt{ [(75 − 73.7)^2 + (67 − 73.7)^2 + · · · + (95 − 73.7)^2 + (61 − 73.7)^2] / 100 } = 18.5
Using Excel®:
With the 100 scores entered in cells A1:J10 (ten columns of ten values each), the worksheet returns:

Average = 73.7
Median = 78.5
Mode = 78
Geo Mean = 70.9
Range = 68
Mean Dev = 15.4
StdDev = 18.5
The formulas behind these results (with the same 100 scores in cells A1:J10) are:

Average = =AVERAGE(A1:J10)
Median = =MEDIAN(A1:J10)
Mode = =MODE.SNGL(A1:J10)
Geo Mean = =GEOMEAN(A1:J10)
Range = =MAX(A1:J10)-MIN(A1:J10)
Mean Dev = =B121**
StdDev = =STDEV.P(A1:J10)
**This assumes that the 100 data points are copied to cells A21:A120 and the procedure listed
above under Mean Deviation is followed.
8.2.3 FREQUENCY DISTRIBUTIONS
The creation of a frequency distribution is another technique to summarize large numbers of raw data.
When the raw data are summarized, they begin to take on more meaning and utility. A frequency
distribution is made by grouping the raw data into classes and counting the number of items that
fall into each class. This number is referred to as the class frequency. A table is then formed which
contains a column for the class, a column for the class frequency, and a column for the cumulative
class frequency. The resulting table is the frequency distribution. The size and number of classes will
depend upon the particular application that is being considered. Typically, frequency distributions
contain five to ten classes, all of equal size. However, some data might lend themselves to classes of
unequal size or even classes that might be open ended (normally, the first class or the last class or
both).
The cumulative frequency distribution, Fi, is the summation of the frequency distribution.
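The grouping and counting step is easy to automate. The sketch below builds class frequencies and cumulative frequencies for equal-size classes; the lower limit, class width, and number of classes are arguments the evaluator chooses, as in Example 8.3. The sample scores shown are illustrative only.

def frequency_distribution(data, lower, width, n_classes):
    """Count items per class; a value equal to a class limit is placed in the lower class."""
    freq = [0] * n_classes
    for x in data:
        for j in range(n_classes):
            lo = lower + j * width
            hi = lo + width
            if lo < x <= hi or (j == 0 and x == lo):
                freq[j] += 1
                break                       # values outside all classes are simply skipped
    cum, total = [], 0
    for f in freq:
        total += f
        cum.append(total)
    return freq, cum

# Example: ten classes of width 10 covering scores from 0 to 100
scores = [75, 67, 96, 73, 78, 53, 51, 47, 31, 42]    # illustrative subset
f, F = frequency_distribution(scores, 0, 10, 10)
print(f)    # class frequencies
print(F)    # cumulative class frequencies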
Example 8.3
Consider 100 exam scores from a college-level class as shown below (same as Example 8.1):
75 76 57 77 54 84 51 94 95 85
67 91 88 38 86 91 77 46 46 87
96 81 94 93 94 67 88 99 48 83
73 87 59 76 60 79 79 85 79 98
78 91 97 35 88 78 46 79 85 74
53 55 81 78 75 73 97 34 87 88
51 88 78 94 86 78 80 85 76 51
47 90 90 88 38 79 90 39 56 57
31 53 83 67 67 39 78 91 48 95
42 74 65 89 99 90 84 82 32 61
Using ten classes from 0–100, develop the frequency distribution for the data.
Solution:
Within the frequency distribution, the range of numbers that is used to define the class is
called a class interval. The smaller number is the lower class limit and the larger number is the upper
class limit. Note that in Example 8.3, the upper class limit of one class is the same as the lower class
limit of the next class. If a value is exactly equal to one of the class limits, one needs to decide in
which class it belongs. It doesn’t matter if it is placed in the higher range or the lower range as long
as the evaluator remains consistent. In Example 8.3, any value that is equal to a class limit is placed
in the lower range (e.g., a value of 90 is placed in the 80-90 class). If this convention is used, then
one can define true class limits for a range. In this case, the true class limits would be 90.5-100.5,
80.5-90.5, 70.5-80.5, etc. Excel® utilizes this convention as well.
There are two other terms that need to be defined for frequency distributions. The class size
is the difference between the upper true class limit and the lower true class limit. The true class mark
is the midpoint of each true class interval or the average between the upper true class limit and the
lower true class limit. In Example 8.3, the class size is ten for all ten classes and the true class marks
are 95.5, 85.5, 75.5, etc.
True Class Limits    True Class Mark    Frequency, fi    Cumulative Frequency, Fi
0.5-10.5             5.5                0                0
10.5-20.5            15.5               0                0
20.5-30.5            25.5               0                0
30.5-40.5            35.5               8                8
40.5-50.5            45.5               7                15
50.5-60.5            55.5               12               27
60.5-70.5            65.5               6                33
70.5-80.5            75.5               23               56
80.5-90.5            85.5               27               83
90.5-100.5           95.5               17               100
Total                                   100
Frequency distributions are often represented graphically. Graphical representations include
histograms, frequency polygons, and relative and cumulative relative frequency diagrams.
A histogram consists of a set of rectangles, where a rectangle is drawn for each class interval
with the width of each rectangle equal to the class size and the height of the rectangle is the class
frequency. The histogram is constructed so that the center of each rectangle lies at its true class mark.
Figure 8.1 is the histogram for the data presented in Example 8.3.
A frequency polygon can be generated by creating a line graph of the frequency of each class
as a function of the true class marks. Figure 8.2 is the frequency polygon for the data presented in
Example 8.3. The first and last points of the polygon are found on the x-axis at what would be the
true class marks associated with class intervals before the first actual class interval (located at −4.5)
and after the last actual class interval (located at 105.5).
Figure 8.1: Histogram for Example 8.3.
Figure 8.2: Frequency polygon for Example 8.3.
8.2.4 RELATIVE FREQUENCY DISTRIBUTION
The relative frequency distribution is constructed by dividing the number of occurrences in each
class interval by the total number of points in the data set. The following shows the relative fre-
quency distribution for Example 8.3 while Figures 8.3 and 8.4 show the graphical versions of these
distributions.
True Class Limits    True Class Mark    Relative Frequency, fi    Cumulative Relative Frequency, Fi
0.5-10.5             5.5                0.00                      0.00
10.5-20.5            15.5               0.00                      0.00
20.5-30.5            25.5               0.00                      0.00
30.5-40.5            35.5               0.08                      0.08
40.5-50.5            45.5               0.07                      0.15
50.5-60.5            55.5               0.12                      0.27
60.5-70.5            65.5               0.06                      0.33
70.5-80.5            75.5               0.23                      0.56
80.5-90.5            85.5               0.27                      0.83
90.5-100.5           95.5               0.17                      1.00
Figure 8.3: Relative frequency distribution for Example 8.3.
Figure 8.4: Cumulative relative frequency for Example 8.3.
If the data are presented in frequency distribution form, items such as the mean, mean devi-
ation, and standard deviation can be determined from the following equations, respectively.
μ' = Σ(j=1 to M) fj·x'j    (8.5)

M.D.' = Σ(j=1 to M) fj·|x'j − μ'|    (8.6)

σ' = √[ Σ(j=1 to M) fj·(x'j − μ')² ]    (8.7)

where, fj = the relative frequency of the jth class
x'j = the true class mark of the jth class
M = total number of classes
μ' = the arithmetic mean of the data based on the distribution
M.D.' = the mean deviation of the data based on the distribution
σ' = the standard deviation of the data based on the distribution
Example 8.4
Calculate the mean, mean deviation, and standard deviation for the data in Example 8.1, using
the relative frequency distributions found in Example 8.3:
Mean:
μ' = 0.00·5.5 + 0.00·15.5 + ··· + 0.27·85.5 + 0.17·95.5 = 73.3
This value compares favorably to 73.7 computed with all 100 data points.
Mean Deviation:
M.D.' = 0.00·|5.5 − 73.3| + 0.00·|15.5 − 73.3| + ··· + 0.27·|85.5 − 73.3| + 0.17·|95.5 − 73.3|
= 15.1
This value compares favorably to 15.4 computed with all 100 data points.
Standard Deviation:
(cid:23)
σ (cid:7) =
0.00 ∗ (5.5 − 73.3)2 + 0.00 ∗ (15.5 − 73.3)2 + · · · +
0.27 ∗ (85.5 − 73.3)2 + 0.00 ∗ (95.5 − 73.3)2
= 18.3
This value compares favorably to 18.5 computed with all 100 data points.
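The hand calculation in Example 8.4 can be verified with a few lines of Python applied to Equations 8.5 through 8.7, using the relative frequencies and true class marks from Example 8.3:

import math

marks = [5.5, 15.5, 25.5, 35.5, 45.5, 55.5, 65.5, 75.5, 85.5, 95.5]
rel_f = [0.00, 0.00, 0.00, 0.08, 0.07, 0.12, 0.06, 0.23, 0.27, 0.17]

mu = sum(f * x for f, x in zip(rel_f, marks))                          # Equation 8.5
md = sum(f * abs(x - mu) for f, x in zip(rel_f, marks))                # Equation 8.6
sd = math.sqrt(sum(f * (x - mu) ** 2 for f, x in zip(rel_f, marks)))   # Equation 8.7

print(f"mu' = {mu:.1f}, M.D.' = {md:.1f}, sigma' = {sd:.1f}")   # 73.3, 15.1, 18.3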
8.3 PROBABILITY
8.3.1 CLASSICAL DEFINITION
The classical definition of probability involves an event occurring from a group or a set of equally
likely outcomes. That is, when a fair coin is tossed, the specific outcome of that event is either a
heads or a tails with each outcome equally likely to occur. Suppose that a particular event occurs
a certain number of times out of a total possible number of other events. The probability that the
desired event will occur is given by Equation 8.8:
P (A) = nA/n
(8.8)
where, P (A) = the probability of event A occurring
nA = the number of times event A could occur
n = the total number of possible events
Example 8.5
Consider the probability of drawing an ace from a fair deck of cards. Since there are four aces
in a total of 52 cards and the chances of drawing any specific card is the same, the probability of
drawing an ace would be:
P (Ace) = 4/52 = 1/13 = 0.0769 = 7.69%
Note that in the above example, the probability of 4/52 is only correct for any given attempt
to draw an ace if, when an undesired card is drawn (not an ace), it is returned to the deck before
the next card is drawn. This is referred to as sampling with replacement. If the undesired card is not
returned to the deck, known as sampling without replacement, the probability changes to 4/51, then
4/50, etc.
Example 8.6
Consider the probability of rolling two fair dice and getting a total of 8. When rolling two fair
dice, the possible outcomes are:
Die 1 \ Die 2     1     2     3     4     5     6
1                 2     3     4     5     6     7
2                 3     4     5     6     7     8
3                 4     5     6     7     8     9
4                 5     6     7     8     9     10
5                 6     7     8     9     10    11
6                 7     8     9     10    11    12

(Each cell is the total of the two dice.)
As shown in the table, there are 36 possible outcomes of the dice rolls, five of which have a
value of 8. Therefore, the probability of getting exactly 8 is:
P (8) = 5/36 = 0.139 = 13.9%
Example 8.7
Consider the flipping of a fair coin (50% probability of a head and 50% probability of a tail).
The coin will be flipped three times. What is the probability that 0, 1, 2, and 3 heads occurred in
the three flips?
First Flip    Second Flip    Third Flip    Total # of Heads
T             T              T             0
T             T              H             1
T             H              T             1
T             H              H             2
H             T              T             1
H             T              H             2
H             H              T             2
H             H              H             3
As shown in the table, there are eight possible outcomes of the three flips. Therefore,
P (0Heads) = 1/8 = 0.125 = 12.5%
P (1Heads) = 3/8 = 0.375 = 37.5%
P (2Heads) = 3/8 = 0.375 = 37.5%
P (3Heads) = 1/8 = 0.125 = 12.5%
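The same enumeration can be generated programmatically: itertools.product lists the 2³ equally likely flip sequences, and the classical definition in Equation 8.8 then gives each probability. A short sketch:

from itertools import product
from collections import Counter

outcomes = list(product("HT", repeat=3))              # all 8 equally likely flip sequences
heads = Counter(seq.count("H") for seq in outcomes)   # tally by number of heads

for k in range(4):
    print(f"P({k} heads) = {heads[k]}/{len(outcomes)} = {heads[k] / len(outcomes):.3f}")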
8.3.2 RELATIVE FREQUENCY DEFINITION
The classical definition of probability uses the concept of equally likely outcomes to aid in the
definition. Since the words “equally likely” themselves imply some notion of probability, the definition
would appear to be a bit circular in nature. To get around this, the concept of relative frequency
probability was introduced. If an experiment or trial is going to be repeated a large number of times,
the probability that a particular event of the experiment will occur is given by the relative frequency
shown in Equation 8.9:
P(A) = lim(n→∞) (nA/n)    (8.9)
Example 8.8
Consider the tossing of a coin. One would normally assume that the probability of getting a
head on any one toss is 50%. However, consider an experiment where a coin is tossed 100 times and
heads occurs 52 times and tails 48 times. For this case, the probability of getting a head would be
predicted to be 52%. If the coin is a fair one, as the number of experimental data points gets larger
and larger, the probability of getting a head will approach 50%.
In most cases throughout engineering, one does not have the luxury of performing an infinite
number of experiments in order to determine the true probability of an event occurring. For example,
if an engineer is doing failure tests on a particular manufactured component, one can only do a limited
number of failure experiments in order to determine the probability of failure.
8.3.3 SUBJECTIVE DEFINITION
A third type of probability definition is known as subjective probability. This type of probability
is not determined from theoretical or experimental work, but rather from the experience that an
individual or group of individuals has gained during their career in a particular area. This experience
is then used to predict the probability of future events. For example, a civil engineer who does road
design will gain, over time, a “feeling” or estimation of the probability that a road surface will begin
to fail within a certain time range based on weather conditions, quantity of traffic, type of surfacing
materials used, etc.
In summary, for economic evaluations, it is necessary that probabilities of certain outcomes be
assigned. This allows the evaluator to incorporate risk factors into economic analysis situations. The
difficulty is often in the assigning of the actual probabilities. One or more of the above definitions
may assist the evaluator in this task.
8.3.4 PROBABILITY DISTRIBUTIONS
For a given event or set of events, if probability or frequency distributions can be established, the
statistical concepts discussed earlier can be applied to calculate means and standard deviations for
each event.
Two different types of probability distributions, discrete and continuous, will be discussed
below.
Discrete Distribution
A discrete distribution is one which involves an experiment with a finite number of possibilities.
For example, as described earlier, when a fair die is thrown, a 1, 2, 3, 4, 5, or 6 will occur. Thus,
the outcome of a throw is a discrete value. Using the classical probability definition, each possible
outcome would have a probability of 1/6. Note that the sum of the probabilities of all possible
outcomes will always equal 1.0.
Example 8.9
Consider that a certain discrete random variable, x, has a discrete probability distribution as
follows:

x:     −3     −1     0      1      2      3      5      8
P(x):  0.10   0.20   0.15   0.25   0.10   0.10   0.05   0.05
(a) Graph the distribution
(b) Find the mean
(c) Find the standard deviation
(a) Graphically, the distribution is shown in Figure 8.5.
Figure 8.5: Probability distribution for the random discrete variable x in Example 8.9.
For this distribution, it would be useful to calculate the mean and standard deviation using
Equations 8.5 and 8.7:
(b) μ(cid:7) = 0.1 ∗ (−3) + 0.2 ∗ (−1) + 0.15 ∗ (0) + 0.25 ∗ (1) + 0.1 ∗ (2) + 0.1 ∗ (3)
+ 0.05 ∗ (5) + 0.05 ∗ (8) = 0.90
(cid:23)
(c) σ (cid:7) =
0.1 ∗ (−3 − 0.9)2 + 0.2 ∗ (−1 − 0.9)2 + 0.15 ∗ (0 − 0.9)2 + 0.25 ∗ (1 − 0.9)2
+ 0.1 ∗ (2 − 0.9)2 + 0.1 ∗ (3 − 0.9)2 + 0.05 ∗ (5 − 0.9)2 + 0.05 ∗ (8 − 0.9)2
= 2.51
Solution using Excel®:
x     P(x)    x*P(x)    (x-mu)^2    P(x)*(x-mu)^2
-3    0.1     -0.3      15.2        1.521
-1    0.2     -0.2      3.6         0.722
0     0.15    0         0.8         0.122
1     0.25    0.25      0.0         0.003
2     0.1     0.2       1.2         0.121
3     0.1     0.3       4.4         0.441
5     0.05    0.25      16.8        0.841
8     0.05    0.4       50.4        2.521

Total = 1.00
Mean = 0.90
StdDev = 2.51
The formulas behind the table are:

x     P(x)    x*P(x)     (x-mu)^2         P(x)*(x-mu)^2
-3    0.1     =A2*B2     =(A2-B$12)^2     =B2*D2
-1    0.2     =A3*B3     =(A3-B$12)^2     =B3*D3
0     0.15    =A4*B4     =(A4-B$12)^2     =B4*D4
1     0.25    =A5*B5     =(A5-B$12)^2     =B5*D5
2     0.1     =A6*B6     =(A6-B$12)^2     =B6*D6
3     0.1     =A7*B7     =(A7-B$12)^2     =B7*D7
5     0.05    =A8*B8     =(A8-B$12)^2     =B8*D8
8     0.05    =A9*B9     =(A9-B$12)^2     =B9*D9

Total = =SUM(B2:B9)
Mean = =SUM(C2:C9)
StdDev = =SQRT(SUM(E2:E9))
Binomial Distribution
The binomial distribution is a standard discrete distribution that accounts for the case where there
are two possible events and the probabilities of each event are not the same. Given a number of
independent trials, n, of an experiment that has two possible outcomes (call them success and failure),
the probability of a certain number of successes occurring in those n trials is given by Equation 8.10:
Pn(x) = Cn,x·p^x·q^(n−x)    (8.10)

where, Pn(x) = the probability of x successes in n trials
Cn,x = the number of combinations of n items taken x at a time = n!/(x!(n − x)!)
p = the probability of a success for any given trial
q = the probability of a failure for any given trial = 1 − p
In addition to the probability of exactly x successes in n trials, it is also common to determine
the probability of less than k successes, greater than k successes, or between l and k successes.
Specifically,
Pn(x < k) = Σ(j=0 to k−1) Pn(j)    (8.11)

Pn(x > k) = Σ(j=k+1 to n) Pn(j)    (8.12)

Pn(l < x < k) = Σ(j=l+1 to k−1) Pn(j)    (8.13)
Excel® has a built-in function to compute a binomial distribution:
=BINOM.DIST(#_successes, #_trials, prob_of_success, cumulative).
where, #_successes = number of successes in n trials (x)
#_trials = number of trials (n)
prob_of_success = probability of success (p)
cumulative = FALSE for probability distribution
= TRUE for cumulative probability distribution
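Outside of Excel®, Equation 8.10 is straightforward to evaluate with math.comb from Python's standard library. The sketch below gives the probability of exactly x successes and a cumulative sum analogous to the TRUE option of BINOM.DIST; the trial counts and probability in the usage lines (n = 10, p = 0.3) are illustrative numbers only.

from math import comb

def binom_pmf(x, n, p):
    """Equation 8.10: probability of exactly x successes in n independent trials."""
    return comb(n, x) * p**x * (1 - p) ** (n - x)

def binom_cdf(k, n, p):
    """Probability of k or fewer successes (cumulative = TRUE)."""
    return sum(binom_pmf(j, n, p) for j in range(k + 1))

print(binom_pmf(2, 10, 0.3))   # ~0.233
print(binom_cdf(2, 10, 0.3))   # ~0.383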
Example 8.10
The probability that a fuse will be defective when first installed is 0.08. If six fuses are selected
at random, find each of the following:
(a) The probability that less than two fuses are defective
(b) The probability that four or more fuses are defective
(c) The probability that at least one is defective
Solution:
Define a success as a fuse that is defective. Therefore, p = 0.08 and q = 0.92.
P6(0) = C6,0·(0.08)^0·(0.92)^6 = [6!/(0!6!)]·(0.08)^0·(0.92)^6 = 0.606
P6(1) = C6,1·(0.08)^1·(0.92)^5 = [6!/(1!5!)]·(0.08)^1·(0.92)^5 = 0.316
P6(2) = C6,2·(0.08)^2·(0.92)^4 = [6!/(2!4!)]·(0.08)^2·(0.92)^4 = 0.0688
P6(3) = C6,3·(0.08)^3·(0.92)^3 = [6!/(3!3!)]·(0.08)^3·(0.92)^3 = 0.00797
P6(4) = C6,4·(0.08)^4·(0.92)^2 = [6!/(4!2!)]·(0.08)^4·(0.92)^2 = 0.000520
P6(5) = C6,5·(0.08)^5·(0.92)^1 = [6!/(5!1!)]·(0.08)^5·(0.92)^1 = 0.0000181
P6(6) = C6,6·(0.08)^6·(0.92)^0 = [6!/(6!0!)]·(0.08)^6·(0.92)^0 = 0.000000262
(a) P6(x < 2) = P6(0) + P6(1) = 0.606 + 0.316 = 0.922
(b) P6(x ≥ 4) = P6(4) + P6(5) + P6(6) = 5.20 · 10−4 + 1.81 · 10−5 + 2.62 · 10−7 =
5.38 · 10−4
(c) P6(x > 0) = P6(1) + P6(2) + P6(3) + P6(4) + P6(5) + P6(6) = 0.394
Alternatively, P6(x > 0) = 1 − P6(0) = 1 − 0.606 = 0.394
Using Excel®, the individual terms above can be reproduced with =BINOM.DIST(x, 6, 0.08, FALSE) for x = 0 through 6.
In graphical form, the discrete distribution for Example 8.10 can be shown as:
[Bar chart — Binomial Distribution: P(x) versus # of successes, with bar heights 6.06E-01, 3.16E-01, 6.88E-02, 7.97E-03, 5.20E-04, 1.81E-05, and 2.62E-07 for x = 0 through 6]
Continuous Distributions
When the value of the event, x, can take on a continuous set of probability values, rather than
a set of specific values, then a probability density function, p(x), exists. While there are a wide
variety of continuous distributions possible, the authors have chosen to present three continuous
distributions: the uniform distribution, the triangular distribution, and the normal or Gaussian
distribution. Figure 8.6 is a representation of a continuous distribution.
For continuous probability distributions, the following statements and equations are pertinent:
1. For a given x, p(x) is not the probability of that exact value occurring. Since there are an
infinite number of values for x, the probability of any one specific value of x would be zero.
2. The total area under the curve will equal the value of unity.
3. The mean of the data is calculated with Equation 8.14.
μ = ∫(−∞ to ∞) x·p(x) dx    (8.14)
Figure 8.6: Continuous distribution.
4. The standard deviation of the data is calculated with Equation 8.15:
σ = √[ ∫(−∞ to ∞) x²·p(x) dx − μ² ]    (8.15)

5. The cumulative probability, F(x), is defined with Equation 8.16:

F(x) = ∫(−∞ to x) p(x) dx    (8.16)
6. F (x1) represents the probability that the value of x is less than or equal to x1.
7. The quantity (1 − F (x1)) represents the probability that the value of x is greater than or equal
to x1.
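These integral definitions can be checked numerically for any density. The sketch below discretizes Equations 8.14 through 8.16 with a simple midpoint sum, assuming a uniform density on [5, 10] purely as a test case (the exact answers, derived in the next subsection, are 7.5, 1.443, and 0.8):

def uniform_pdf(x, a=5.0, b=10.0):
    """Assumed test density: constant 1/(b - a) between a and b, zero elsewhere."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

a, b, n = 5.0, 10.0, 100_000
dx = (b - a) / n
xs = [a + (i + 0.5) * dx for i in range(n)]                        # midpoints of the grid

mu  = sum(x * uniform_pdf(x) * dx for x in xs)                     # Equation 8.14
var = sum(x**2 * uniform_pdf(x) * dx for x in xs) - mu**2          # inside the sqrt of Eq. 8.15
F_9 = sum(uniform_pdf(x) * dx for x in xs if x <= 9.0)             # Equation 8.16 at x = 9

print(mu, var ** 0.5, F_9)    # ~7.5, ~1.443, ~0.8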
Uniform or Rectangular Distribution
The uniform or rectangular distribution is represented in Figure 8.7. Each value of x has the same
probability of occurring.
Let a be the minimum value of x and b be the maximum value of x. Since the area under the
probability curve must be unity, the height of the uniform distribution will be given by Equation 8.17:
h = 1/(b − a)
(8.17)
Figure 8.7: Uniform or rectangular distribution.
The uniform distribution then has the following properties:
p(x) = h for a ≤ x ≤ b
p(x) = 0 for all other values of x

μ = (a + b)/2    (8.18)
σ = (b − a)/√12    (8.19)
F(x) = (x − a)/(b − a)    (8.20)
Triangular Distribution
The triangular distribution is represented in Figure 8.8.
Let a be the minimum value of x, c be the maximum value of x, and b be the mode. P1 and P2
represent the areas from a to b and b to c, respectively. The triangular distribution has the following
properties:
h = 2/(c − a)    (8.21)
P1 = (b − a)/(c − a)    (8.22)
P2 = (c − b)/(c − a)    (8.23)
Figure 8.8: Triangular distribution.
μ = (a + b + c)/3    (8.24)
σ = (c − a)·√[(1 − P1·P2)/18]    (8.25)
F(x) = P1·[(x − a)/(b − a)]²    for a ≤ x ≤ b    (8.26)
F(x) = 1 − P2·[(c − x)/(c − b)]²    for b ≤ x ≤ c    (8.27)
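Because the triangular distribution reappears in Example 8.11 and in the Monte Carlo work of Chapter 9, Equations 8.21 through 8.27 are worth packaging as small helper functions. A Python sketch, where a is the minimum, b the mode, and c the maximum; the $5/$7/$10 figures in the usage line are those of Example 8.11(b), quoted here only as an illustration:

import math

def tri_mean(a, b, c):
    return (a + b + c) / 3.0                                  # Equation 8.24

def tri_std(a, b, c):
    p1 = (b - a) / (c - a)                                    # Equation 8.22
    p2 = (c - b) / (c - a)                                    # Equation 8.23
    return (c - a) * math.sqrt((1 - p1 * p2) / 18.0)          # Equation 8.25

def tri_cdf(x, a, b, c):
    p1 = (b - a) / (c - a)
    p2 = (c - b) / (c - a)
    if x <= b:
        return p1 * ((x - a) / (b - a)) ** 2                  # Equation 8.26
    return 1 - p2 * ((c - x) / (c - b)) ** 2                  # Equation 8.27

print(tri_mean(5, 7, 10), tri_std(5, 7, 10), 1 - tri_cdf(9, 5, 7, 10))   # ~7.33, ~1.03, ~0.067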
Normal Distribution
The Normal or Gaussian distribution is a continuous probability function that takes on the common
“bell-shaped curve” as represented in Figure 8.9.
The shape of this distribution is calculated with Equation 8.28:
p(x) = [1/(σ·√(2π))]·e^(−(1/2)·((x−μ)/σ)²)    (8.28)

where, μ = the mean of the data
σ = the standard deviation of the data
range of variable x: −∞ ≤ x ≤ ∞
Figure 8.9: Representation of a unit normal distribution (μ = 0, σ = 1).
When μ = 0 and σ = 1, the distribution is called a unit normal distribution and Equation 8.28
simplifies to Equation 8.29:
p(x) = [1/√(2π)]·e^(−x²/2)    (8.29)
One can convert any set of normally distributed data to a unit normal distribution through
the substitution of the variable Z, defined as:
Z = (x − μ)/σ
(8.30)
This allows one to then use Table 8.1 to determine values of p(Z) and F (Z) as defined above.
Since the unit normal distribution is symmetrical about Z = 0, one only needs the positive
portion of the table. If Z < 0, then use the following equations for p(Z) and F(Z):

p(−Z) = p(Z)    (8.31)
F(−Z) = 1 − F(Z)    (8.32)
Excel® has a built-in function that calculates p(x) and F (x) given x, the mean, and the
standard deviation:
=NORM.DIST(x,Mean,Std_Dev,Cumulative)
where,
x = value at which to find the value of either of p(x) of F (x)
Mean = mean of the distribution
Std_Dev = standard deviation of the distribution
Cumulative = FALSE for p(x) or =TRUE for F (x).
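The same two quantities are available in Python's standard library through statistics.NormalDist, whose pdf and cdf methods correspond to the FALSE and TRUE options of NORM.DIST. A sketch; the mean, standard deviation, and x value shown are the ball-bearing figures used later in Example 8.12, quoted here only as an illustration:

from statistics import NormalDist

dist = NormalDist(mu=0.452, sigma=0.010)   # illustrative mean and standard deviation

x = 0.4425
print(dist.pdf(x))   # p(x): the density at x            (Cumulative = FALSE)
print(dist.cdf(x))   # F(x): probability of a value <= x (Cumulative = TRUE)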
Table 8.1: Values of p(Z) and F (Z) for the unit normal distribution
Z      p(Z)      F(Z)          Z      p(Z)      F(Z)
0.00   0.39894   0.50000       1.55   0.12001   0.93943
0.05   0.39844   0.51994       1.60   0.11092   0.94520
0.10   0.39695   0.53983       1.65   0.10226   0.95053
0.15   0.39448   0.55962       1.70   0.09405   0.95543
0.20   0.39104   0.57926       1.75   0.08628   0.95994
0.25   0.38667   0.59871       1.80   0.07895   0.96407
0.30   0.38139   0.61791       1.85   0.07206   0.96784
0.35   0.37524   0.63683       1.90   0.06562   0.97128
0.40   0.36827   0.65542       1.95   0.05959   0.97441
0.45   0.36053   0.67364       2.00   0.05399   0.97725
0.50   0.35207   0.69146       2.05   0.04879   0.97982
0.55   0.34294   0.70884       2.10   0.04398   0.98214
0.60   0.33322   0.72575       2.15   0.03955   0.98422
0.65   0.32297   0.74215       2.20   0.03547   0.98610
0.70   0.31225   0.75804       2.25   0.03174   0.98778
0.75   0.30114   0.77337       2.30   0.02833   0.98928
0.80   0.28969   0.78814       2.35   0.02522   0.99061
0.85   0.27798   0.80234       2.40   0.02239   0.99180
0.90   0.26609   0.81594       2.45   0.01984   0.99286
0.95   0.25406   0.82894       2.50   0.01753   0.99379
1.00   0.24197   0.84134       2.55   0.01545   0.99461
1.05   0.22988   0.85314       2.60   0.01358   0.99534
1.10   0.21785   0.86433       2.65   0.01191   0.99598
1.15   0.20594   0.87493       2.70   0.01042   0.99653
1.20   0.19419   0.88493       2.75   0.00909   0.99702
1.25   0.18265   0.89435       2.80   0.00792   0.99744
1.30   0.17137   0.90320       2.85   0.00687   0.99781
1.35   0.16038   0.91149       2.90   0.00595   0.99813
1.40   0.14973   0.91924       2.95   0.00514   0.99841
1.45   0.13943   0.92647       3.00   0.00443   0.99865
1.50   0.12952   0.93319
There is also a built-in function that calculates x given F (x), the mean, and the standard
deviation:
=NORM.INV(F(x),Mean,Std_Dev)
where,
F(x) = value of the cumulative distribution at which to find the value of x
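The inverse lookup also has a standard-library counterpart: NormalDist.inv_cdf returns the x whose cumulative probability is F(x), matching NORM.INV. A one-line sketch; the 0.452/0.010/0.30 figures are the same illustrative values used in Example 8.12(b) below:

from statistics import NormalDist

# x exceeded by 70% of a normal population with mean 0.452 and standard deviation 0.010
x = NormalDist(mu=0.452, sigma=0.010).inv_cdf(0.30)
print(x)    # ~0.4468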
Example 8.11
An engineer estimates that the selling price of a particular commodity will range from a low
of $5.00 per item to a high of $10.00 per item.
(a) If the distribution is assumed to be uniform, calculate the mean (or expected) value and the
standard deviation for the price of this commodity. Also, calculate the probability that the
price will be greater than $9.00.
(b) If the distribution is assumed to be triangular with a most likely value (mode) of $7.00 per
item, calculate the mean (or expected) value and the standard deviation for the price of this
commodity. Also, calculate the probability that the price will be greater than $9.00.
Solution (a):
The distribution would be:
The mean would be: μ = (a + b)/2 = (5 + 10)/2 = 7.50
The standard deviation would be: σ = (b − a)/√12 = (10 − 5)/√12 = 1.443
The probability of the price being greater than $9.00:
Prob(> $9) = 1 − F(9) = 1 − [(9 − 5)/(10 − 5)] = 0.20
Solution (b):
The distribution would be:
The mean would be: μ = (a + b + c)/3 = (5 + 7 + 10)/3 = 7.33
The areas would be: P1 = (b − a)/(c − a) = 2/5 = 0.4 and P2 = (c − b)/(c − a) =
3/5 = 0.6
The standard deviation would be:
σ = (c − a)·√[(1 − P1·P2)/18] = 5·√[(1 − (0.4)(0.6))/18] = 1.03
The probability of the price being greater than $9.00:
Prob(> $9) = 1 − F(9) = 1 − {1 − (0.6)·[(10 − 9)/(10 − 7)]²} = 0.067
Example 8.12
300 ball bearings are tested for their diameters. The mean diameter was determined to be
0.452 cm and the standard deviation was determined to be 0.010 cm. Assume that the diameters
are normally distributed.
(a) How many ball bearings would be expected to be smaller than 0.4425 cm?
(b) Seventy percent of the ball bearings would be expected to have a diameter greater than what
value?
The distribution would be:
Solution for (a) using the F (Z) table:
Z = (x − μ)/σ = (0.4425 − 0.452)/0.010 = −0.95
F (−0.95) = 1 − F (0.95) = 1 − 0.82894 = 0.17106
# of ball bearings less than 0.4425 cm diameter = 0.17106(300) = 51
Solution for (a) using Excel®:
Mean =    0.452
StdDev =  0.010
x =       0.4425
F(x) =    0.17106      (formula: =NORM.DIST(B3,B1,B2,TRUE))
# bearings < x = 51    (formula: =300*B4)
Solution for (b) using the F (z) table:
One needs the value of Z that produces an F (Z) of 0.30 (for 70% greater than that value).
Since 0.30 is less than 0.5, one needs the negative side of the curve. F (−Z) = 1 − F (z) = 1 − .3 =
.7. The value of Z for F (0.7) lies between 0.50 and 0.55. Interpolating, Z = 0.524. Therefore,
Z for F (Z) of 0.30 is Z = −0.524. Solving for x in Equation 8.30 yields x = Z ∗ σ + μ =
−0.524(0.010) + 0.452 = 0.447 cm. Therefore, 70% of the ball bearings will have a diameter greater
than 0.447 cm.
Solution for (b) using Excel®:
Mean =    0.452
StdDev =  0.010
F(x) =    0.3
x =       0.44676      (formula: =NORM.INV(B3,B1,B2))
Combined Distributions
In some applications, it will be necessary to work with more than one distribution to describe a
particular variable. In order to find the mean and standard deviation for the combined distributions,
the mean and standard deviation for each separate distribution are first determined. Equations 8.33
and 8.34 are then used to calculate the overall average and standard deviation:
μc = Σ Ai·μi    (8.33)

σc = √{ Σ Ai·[σi² + (μi − μc)²] }    (8.34)

where, μc = mean of the combined distributions
σc = standard deviation of the combined distributions
Ai = probability area associated with each distribution (Σ Ai = 1)
μi = mean of each distribution
σi = standard deviation of each distribution
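Equations 8.33 and 8.34 are easy to apply in code once the per-distribution means, standard deviations, and areas are in hand. A Python sketch; the 25%/75% dry-hole figures in the usage lines come from Example 8.13 below and are shown only as an illustration:

import math

def combined_mean_std(areas, means, stds):
    """Equations 8.33 and 8.34 for combined distributions (areas must sum to 1)."""
    mu_c = sum(A * m for A, m in zip(areas, means))
    var_c = sum(A * (s**2 + (m - mu_c) ** 2) for A, m, s in zip(areas, means, stds))
    return mu_c, math.sqrt(var_c)

# Discrete spike at 0 barrels (area 0.25) plus a uniform 10,000-60,000 barrel reservoir (area 0.75)
mu_c, sigma_c = combined_mean_std([0.25, 0.75], [0, 35_000], [0, (60_000 - 10_000) / 12**0.5])
print(mu_c, sigma_c)    # ~26,250 and ~19,645 barrels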
Example 8.13
An oil well has a 25% chance of being a “dry hole” (no oil found) and a 75% chance of finding
an oil reservoir that contains between 10,000 and 60,000 barrels as shown in the distribution below.
(a) Calculate the mean and standard deviation of the combined distributions.
(b) What is the probability that the reservoir will contain less than 40,000 barrels?
(c) What is the probability that the reservoir will contain at least 50,000 barrels?
(d) Sketch the cumulative probability distribution, F (x).
[Distribution sketch: a discrete probability of 0.25 at x = 0 barrels plus a uniform p(x) from 10,000 to 60,000 barrels]
Solution for (a):
Since the discrete probability at x = 0 is 0.25, the remaining area under the uniform distribu-
tion is then 0.75. The mean of the discrete probability distribution is 0 barrels and the mean
of the uniform distribution is 35,000 barrels.
The mean of the combined distribution: μc = 0.25(0) + 0.75(35, 000) = 26, 250 barrels
The standard deviation of the discrete probability distribution is 0 barrels and the standard
deviation of the uniform distribution is 14,434 barrels.
The standard deviation of the combined distribution is
σc = √{ 0.25·[0² + (0 − 26,250)²] + 0.75·[14,434² + (35,000 − 26,250)²] } = 19,645 barrels
Solution for (b):
P rob(< 40, 000) = F (40, 000). Recall that F (x) is the area under the probability curve.
Therefore, F (40, 000) = 0.25 + 0.75 ∗ [(40, 000 − 10, 000)/(60, 000 − 10, 000)] = 0.70.
There is a 70% probability that the reservoir will contain less than 40,000 barrels.
Solution for (c):
P rob(> 50, 000) = 1 − F (50, 000)
= 1 − [0.25 + 0.75 ∗ [(50, 000 − 10, 000)/(60, 000 − 10, 000)]] = 0.15.
There is a 15% probability that the reservoir will contain at least 50,000 barrels.
Solution for (d):
Again, F (x) is the area under the probability distribution. Therefore,
F(x) = 0                                                     for x < 0
F(x) = 0.25                                                  for 0 ≤ x ≤ 10,000
F(x) = 0.25 + 0.75·[(x − 10,000)/(60,000 − 10,000)]          for 10,000 ≤ x ≤ 60,000
F(x) = 1.0                                                   for x > 60,000
[Sketch of F(x) versus barrels: zero for x < 0, stepping to 0.25 at x = 0, rising linearly to 1.0 at 60,000 barrels]
8.4 PROBLEMS
8.1. The following values of Young’s Modulus for a rubber compound (in 1000 lb/in 2) have
been measured. Determine the following:
(a) The frequency distribution using class boundaries of 4-6, 6-8, etc
(b) True class boundaries and true class marks
(c) The histogram and cumulative frequency diagrams
8 5 12 14 13 10 9
11 6 11 9 8 15 8
5 6 8 11 4 8 10
13 12 6 10 9 8 13
8.2. For the data in Problem 8.1, calculate the mean, median, mode, and standard deviation.
Recalculate the mean and standard deviation using the frequency distribution determined
in Problem 8.1.
8.3. A particular event has two possible outcomes of true and false. There is a 50% probability of
getting a true outcome. The event is repeated four times. Construct a table that contains all
possible combinations of results and determine the probabilities of getting 0 true outcomes,
1 true outcome, 2 true outcomes, 3 true outcomes, and 4 true outcomes.
8.4. Fifteen castings of a certain type are produced per day in a foundry. The finished castings
are inspected and classified as defective or non-defective. Records indicate that of the last
500 castings inspected, 16 were defective. Based on this information, find the following:
(a) The probability of having no defective castings in a day's production
(b) The probability of having at least two defective castings in a day’s production
8.5. The height of trucks on an interstate highway is approximately normally distributed with
mean of 10 ft and standard deviation of 1.5 ft. What is the height of an overpass if the
probability that a truck will clear it is 0.999?
8.6. The average life of a certain type of compressor is 10 years with a standard deviation of 1
year. The lives of the compressors follow a normal distribution. The manufacturer replaces,
at no cost, all compressors that fail while under the guarantee. If the manufacturer is willing
to replace only 3% of all compressors sold, how long of a guarantee should they offer?
8.7. A discrete distribution is given in the table below. Calculate the mean and standard deviation
of the distribution.
x:      1     2     3     4
p(x):   0.2   0.3   0.1   0.4
8.8. Determine the mean and standard deviation of the following combined distribution.
8.9. A company is desirous of purchasing a service. The service will cost $10,000 and have a
probability of its life that can be described by a triangular distribution with values of a, b,
and c equal to 1, 3, and 6 years, respectively.
(a) Calculate the mean and the standard deviation of the life of the service
(b) What is the probability that the service will last at least 2 years?
(c) What is the probability that the service will last at least 5 years?
CHAPTER 9
Sensitivity Analysis

9.1 INTRODUCTION
A simple way of incorporating the elements of uncertainty into an economic analysis is to use
sensitivity analysis. As described earlier, an evaluator will normally need to include a measure of the
uncertainty pertaining to one or more variables in the analysis. This uncertainty may, in turn, add
significant uncertainty about the profitability of an investment. This range of uncertainty about the
project’s profitability is one way to define the risk in a project.
Uncertainty in any particular variable can occur for a number of reasons. For example, the
method of measuring a parameter may have a certain amount of inaccuracy, the parameter may have
to be predicted into the future, or there may be a limited amount of data for a certain parameter. In
any case, the best that can be done for a variable with an uncertain value is to choose a reasonable
range over which it may vary and, perhaps, the type of distribution that the variable might take on
over that range.
Two types of sensitivity analysis will be considered in this chapter. The first is called the
range approach and involves the systematic variation of key variables to determine their overall
effect on the profitability of the investment. The second approach uses the concepts of probability
and statistics and is referred to as Monte Carlo Simulation (MCS). MCS has also been called
probabilistic sensitivity analysis.
9.1.1 RANGE APPROACH
When applying the range approach, ranges of variations of key variables, defined by the evaluator, are
established. For example, the estimated sales price of a commodity to be sold could be allowed to vary
±10% from a base value during the analysis. The economic analysis is conducted by: (a) choosing an
evaluation criteria (normally NPV or IRR); (b) computing the value of this criteria for a base case
set of variable values; and (c) repeating the computations by varying each key parameter within the
specified range.
There are, in general, two ways of conducting the range approach sensitivity analysis.
The first is by identifying the most likely, most optimistic, and the most pessimistic cases
by varying all of the parameters simultaneously. The most likely case is defined as the case where
all variables are at their respective mean values. This method allows the evaluator to determine
the minimum and maximum values that could be obtained for the evaluation criteria. It does not,
however, allow the evaluator to study the effects of any one variable on the economic analysis.
The second way is to use the mean values for each key variable and calculate the corresponding
value for the evaluation criteria. This is then designated as the base case value. Each key parameter
is then varied about its mean value while the other parameters are held constant at their base case
values and the evaluation criteria is recalculated. The process is repeated until each parameter has
been varied. Typically, each parameter is varied plus or minus 10 to 20% about its mean value. When
the calculations have been completed, the results are usually summarized in a “spider plot.” The
spider plot, shown schematically in Figure 9.1, is constructed by plotting the evaluation criteria on
the vertical axis and the percent variation on the horizontal axis. Quick inspection of the spider
plot provides information to the evaluator on which parameter or parameters affect the economic
analysis to the greatest degree. The parameter which yields the line with the greatest slope (positive
or negative slope) on the spider plot has the most effect on the analysis. As illustrated in the sample
plot shown in Figure 9.1, Variable A has the greatest effect on the evaluation criteria.
Figure 9.1: Sample spider plot.
Example 9.1
A ten-year life project has an initial investment of $87,500, annual operating expenses of
$7,500, and annual incomes of $30,000. It is desired to conduct a range approach sensitivity analysis
by the two methods described earlier:
(a) Determine the most likely, the most optimistic, and the most pessimistic values for the I RR
by assuming the values given are the mean values for each parameter and that each parameter
has a range of ±20% from the mean value.
(b) Vary each parameter independently by ±20% from the mean values while holding the other
two constant at their mean values and develop a spider plot for the calculated I RR values.
The cash flow diagram for the most likely (or base case) is as follows:
Year:        0         1       2       3      …      8       9       10
Cash flow:  -87,500    22,500  22,500  22,500  ----   22,500  22,500  22,500
Solution for (a):
The I RR for the most likely case is computed from the cash flow above to be 22.3%.
The most pessimistic case would be the combination of these three variables that would have
the most negative influence on the project’s I RR. This would be a 20% increase in initial investment,
a 20% increase in operating expenses, and a 20% decrease in income.
The most optimistic case would be the combination of these three variables that would have
the most positive influence on the project’s IRR. This would be a 20% decrease in initial investment,
a 20% decrease in operating expenses, and a 20% increase in income.
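The 22.3% quoted above can be reproduced by locating the interest rate at which the NPV of the base-case cash flow equals zero. A minimal Python sketch using bisection; the bracketing rates of 0% and 100% are assumptions chosen for the search, not values from the text:

def npv(rate, cash_flows):
    """Net present value of end-of-year cash flows; index 0 is time zero."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Bisection on NPV; assumes NPV changes sign exactly once between lo and hi."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

base_case = [-87_500] + [22_500] * 10       # the most likely cash flow above
print(f"IRR = {irr(base_case):.1%}")        # ~22.3%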
Solution for (b):
In this analysis, start with the most likely case from (a) and denote that as the base case. Then
vary one parameter at a time by plus and minus 20% and recalculate I RR for each new cash flow
diagram.
[Spider plot: IRR (%) on the vertical axis versus % variation from the base case (−20% to +20%) on the horizontal axis, with one line each for Initial Investment, Operating Expenses, and Annual Income]
From the spider plot, it can be readily seen that the parameter which has the greatest effect on the
I RR is the annual income. A larger change in I RR is observed for the same percentage change in the
annual income than for initial investment or operating expenses. Changes in the operating expenses
have the least effect on the I RR. To minimize the risk that would be created by a ±20% uncertainty
in the annual income, it would be beneficial for the evaluator to do additional research into this
portion of the cash flow calculation and determine if the range in uncertainty can be reduced.
In the example problem, only two values for each variable have been used.To be more complete,
several values between −20% and +20% could have been used to generate additional values of I RR.
The primary drawback on the spider plot approach is that it ignores the interactions that occur
when more than one variable is allowed to change at a time. Not only does one have to define many
more cases to account for all possible interactions, but it is also difficult to tabulate these results in a
meaningful way. Often, the results simply become a tabulated list of I RRs for each case evaluated.
For example, consider a problem that has four variables that have uncertain values and that three
numerical values for each of the variables are chosen in the range approach. This will result in 81
(3x3x3x3) individual solutions to the problem that must be presented to evaluate the effect of all
four variables. It may be very difficult for the evaluator to draw any conclusions from a long tabular
list of 81 results.
9.1.2 MONTE CARLO SIMULATION
Probabilistic sensitivity analysis or Monte Carlo Simulation (MCS) was introduced in the early
1960s. While it is a very powerful technique, it is often avoided due to the general lack of knowledge
among engineers and managers on how it works and the conclusions that can be drawn from it. This
section will attempt to explain the technique and its benefits.
Consider the example proposed above that contains four variables with uncertain values.
Instead of creating a tabular list of 81 evaluator-chosen cases, MCS allows each of the variables to vary
between minimum and maximum values according to some prescribed probability distribution and
then solves the problem for a large, randomized set of these input variables. The results of the MCS
are presented graphically as a cumulative probability distribution of the dependent variable (e.g.,
I RR, N P V , etc). This probability can then be interpreted using statistical methods to determine
the likelihood of a particular solution occurring or not occurring.
Distributions that are frequently used are the uniform, triangular, and normal distributions
presented in Chapter 8. These are fairly easy to describe mathematically and to input into an Excel®
spreadsheet or a computer program. The choice of the particular distribution for a certain variable
should be guided by the evaluator’s knowledge of that variable.
Within the MCS method, the selection of a value for an independent variable is accomplished
by using the fact that the cumulative probability distribution, F (x), will lie between zero and one
and will be monotonic in behavior. Thus, the selection of a random number between zero and one
will yield a distinct random value for the independent variable between the variable’s minimum and
maximum values. A different random number is chosen for each independent variable, resulting in
a truly random set of independent variable values. Once the values of all independent variables have
been determined, the dependent variable, i.e., an evaluation criteria such as I RR or NP V , can be
calculated. This process is then repeated a large number of times and the results of the dependent
variable calculations are grouped into class intervals and then the cumulative probability distribution
is constructed.
The following is a summary of the steps involved in the MCS method:
1. Select the independent variables that contain uncertainty in their values.
2. Estimate the minimum and maximum values for each independent variable.
3. Estimate the minimum and maximum values for the dependent variable and set up class
intervals in that range such that a probability distribution can be generated.
4. Select a probability distribution that best describes the behavior of each independent variable
between its minimum and maximum values.
5. Set up equations which will allow for the calculation of each of the independent variables.
This is done by determining expressions for the cumulative probability distributions, F (x),
for each independent variable and then solving this expression for the variable, x.
6. Generate a random number for each independent variable. A different random number is
determined for each independent variable. Random numbers are available from scientific
calculators, Excel®, or by using Table 9.1. One can enter this table at any random point and
then proceed through the table either by rows or columns. Excel® uses the =RAND() function
to generate a uniformly distributed random number between zero and one.
7. Use the random numbers to calculate the values for the independent variables using the
equations developed in step 5.
8. Calculate the dependent variable (or variables if necessary) for this set of independent variables
and increment a counter in the respective class interval.
9. Return to step 6 and repeat steps 6 through 8 a relatively large number of times. A large
number of trials might be 100, 1000, or 10,000 depending on the sensitivity of the dependent
variable to the independent variables.
10. Construct the cumulative probability distribution for the dependent variable.
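Steps 5 through 10 are mechanical once the inverse cumulative expressions are written down, so the whole loop is easy to script. The Python sketch below is a generic illustration of the procedure: the uniform and triangular inverses follow Equations 8.20, 8.26, and 8.27, the three input distributions mirror the ones given in Example 9.2 below, and NPV at an assumed 15% MARR is used as the dependent variable simply to keep the sketch short (Example 9.2 itself uses IRR).

import random

def inv_uniform(F, a, b):
    """Inverse of Equation 8.20: the x whose cumulative probability is F on [a, b]."""
    return a + F * (b - a)

def inv_triangular(F, a, b, c):
    """Inverse of Equations 8.26/8.27 for a triangular distribution (a = min, b = mode, c = max)."""
    p1 = (b - a) / (c - a)
    if F <= p1:
        return a + ((b - a) * (c - a) * F) ** 0.5
    return c - ((c - b) * (c - a) * (1 - F)) ** 0.5

def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

random.seed(1)                      # repeatability of the illustration
marr, life, results = 0.15, 10, []  # 15% MARR is an assumption for this sketch

for _ in range(1000):                                    # steps 6 through 9
    invest  = inv_uniform(random.random(), 70_000, 105_000)
    expense = inv_triangular(random.random(), 6_000, 7_500, 9_000)
    income  = inv_uniform(random.random(), 24_000, 36_000)
    cash    = [-invest] + [income - expense] * life
    results.append(npv(marr, cash))

results.sort()                                           # step 10: cumulative distribution
for q in (0.1, 0.5, 0.9):
    print(f"F = {q:.0%}: NPV = {results[int(q * len(results))]:,.0f}")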
Table 9.1: Random numbers
Example 9.2
For the problem described in Example 9.1 and the distributions given below, conduct a Monte
Carlo Simulation and summarize the results in a cumulative probability distribution. Complete 10
cases using I RR as the dependent variable.
Initial Investment:
[Figure — Initial Investment: uniform distribution, p(x) constant between $70,000 and $105,000]
Operating Expenses:
[Figure — Operating Expenses: triangular distribution with a minimum of $6,000/year, a mode of $7,500/year, and a maximum of $9,000/year]
Annual Income:
[Figure — Annual Income: uniform distribution, p(x) constant between $24,000/year and $36,000/year]

Solution:
Develop equations for the independent variables:
Let x1 be the value for the initial investment. Since it has a uniform distribution, its cumulative
probability (F1) is given by
F1 = (x1 − a)/(b − a) = (x1 − 70000)/(105000 − 70000) = (x1 − 70000)/35000
Solving this equation for x1 yields
x1 = 35000F1 + 70000
(9.1)
Let x2 be the value for the operating expenses. Since it has a triangular distribution, its
cumulative probability (F2) is given by
P1 = (b − a)/(c − a) = (7500 − 6000)/(9000 − 6000) = 0.5
P2 = 1 − P1 = 0.5
For 6000 ≤ x2 ≤ 7500 or F2 ≤ 0.5 (the value of P1),
F2 = P1 [(x2 − a)/(b − a)]2 = 0.5 [(x2 − 6000)/1500]2
Solving this equation for x2 yields
x2 = 6000 + 1500·√(2F2)    (9.2)
For 7500 ≤ x2 ≤ 9000 or F2 ≥ 0.5 (the value of P1),
F2 = 1 − P2 [(c − x2)/(c − b)]2 = 1 − 0.5 [(9000 − x2)/1500]2
Solving this equation for x2 yields
x2 = 9000 − 1500·√(2(1 − F2))    (9.3)
Let x3 be the value for the annual income. Since it has a uniform distribution, its cumulative
probability (F3) is given by
F3 = (x3 − a)/(b − a) = (x3 − 24000)/(36000 − 24000) = (x3 − 24000)/12000
Solving this equation for x3 yields
x3 = 12000·F3 + 24000    (9.4)

First iteration:
Choose the first random number from the table. This will be the value for F1. (F1 = 0.90535).
Use this number in Equation 9.1 to determine the value to be used for the initial investment:
x1 = 35000(0.90535) + 70000 = 101, 700
Choose the second random number from the table. This will be the value for F2. (F2 =
0.86245). Since F2 ≥ 0.5, use this number in Equation 9.3 to determine the value to be used
for the operating expense:
x2 = 9000 − 1500·√(2(1 − 0.86245)) = 8,200
Choose the third random number from the table. This will be the value for F3. (F3 = 0.32775).
Use this number in Equation 9.4 to determine the value to be used for the annual income:
x3 = 12000(0.32775) + 24000 = 27, 900
These values for initial investment, operating expense, and annual income yield the following
cash flow diagram for the project:
Year:        0          1       2       3      …      8       9       10
Cash flow:  -101,700    19,700  19,700  19,700  ----   19,700  19,700  19,700
I RR analysis yields a value of 14.3%. This value is tabulated in a list for further processing.
Second and successive iterations:
Follow the same procedure as listed for the first iteration. Three new random numbers are
used during each iteration. The results of the first ten iterations are shown in the table below.
The cumulative probability distribution for I RR can be developed from information in the
following table:
Interval      Mid-point    Frequency    Prob (%)    Cumul Prob (%)
10.5-12.5     11.5         0            0.0         0.0
12.5-14.5     13.5         2            20.0        20.0
14.5-16.5     15.5         1            10.0        30.0
16.5-18.5     17.5         0            0.0         30.0
18.5-20.5     19.5         0            0.0         30.0
20.5-22.5     21.5         4            40.0        70.0
22.5-24.5     23.5         0            0.0         70.0
24.5-26.5     25.5         2            20.0        90.0
26.5-28.5     27.5         0            0.0         90.0
28.5-30.5     29.5         0            0.0         90.0
30.5-32.5     31.5         0            0.0         90.0
32.5-34.5     33.5         1            10.0        100.0
[Plot — Cumulative Probability for IRR: cumulative probability (%) versus IRR (%) for the ten iterations]
With only ten iterations, this graph is too jagged to interpret correctly. The figure below
shows the same analysis after 100 iterations. One can see that the curve is much smoother. If even
more iterations are added, the curve will become smoother yet. However, the usefulness of the curve
may not increase proportionally to the number of iterations. One should only complete enough
iterations to get a reasonably smooth curve. Generally, this takes about 100 iterations, but this may
be a function of the actual problem being solved. This number of calculations can be easily completed
with Excel®.
[Plot — Cumulative Probability for IRR: F(x) (%) versus IRR (%) after 100 iterations]
As discussed in Chapter 8, the use of this graph is as follows. The value of F (x) at any I RR
is the probability that the project will attain that I RR or less. For example, there is approximately
an 18% probability that the project will earn an I RR value of less than 15%. Thus, if one uses
the investor’s MARR, F (x) provides the probability that the project will earn less than that value.
The quantity (100 − F (x)) would provide the probability that the project earns greater than that
MARR. For example, if the investor’s MARR is 20%, one would enter the horizontal axis at 20%
and read a cumulative probability of about 44%. The interpretation would be that there is a 56%
probability that the project will yield a 20% I RR or greater. This probability is then a direct measure
of risk associated with the project. If an evaluator feels that a 44% probability that the project will
not be economically viable is an unacceptable level of risk, then the project should be eliminated
from further consideration. However, if, in this example, the investor’s MARR is only 15%, there is
only an 18% probability that the project will not be economically viable. This investor would have a
more acceptable level of risk.
Example 9.3
A particular investment has three uncertain variables of initial cost, future value, and the
investment life.
The initial cost can be described by a uniform distribution from $100 to $200.
The future value can be described by a normal distribution with μ = $300 and σ = $30.
The investment life can be described by a discrete probability distribution with 40% probability
that n = 5 years, 30% probability that n = 6 years, 20% probability that n = 7 years, and 10%
probability that n = 8 years.
Based on this information,
(a) Calculate the minimum rate of return that can be earned, the maximum rate of return that
can be earned and the mean rate of return that will be earned. For the normal distribution,
assume that the minimum and maximum values of future value will be ±3σ from the mean
($210 and $390, respectively).
(b) Complete a Monte Carlo Simulation for this project to determine the probability that the
ROR will be at least 15%.
Solution for (a):
The mean values for the three distributions are:
Pmean = (100 + 200)/2 = 150
Fmean = 300
nmean = 0.4 ∗ 5 + 0.3 ∗ 6 + 0.2 ∗ 7 + 0.1 ∗ 8 = 6.0
The rate of return, ROR, can be calculated using the relationship F = P·(1 + i)^n, or
i = (F/P)^(1/n) − 1
The three cases will be defined as follows:

                      Minimum ROR                Mean ROR    Maximum ROR
                      (Most Pessimistic Case)                (Most Optimistic Case)
Initial Cost, P       $200                       $150        $100
Future Value, F       $210                       $300        $390
Life, n               8                          6           5
ROR                   0.61%                      12.2%       31.3%
Solution for (b):
Develop equations for the independent variables:
Let x1 be the value for the initial investment. Since it has a uniform distribution, its cumulative
probability (F1) is given by
F1 = (x1 − a)/(b − a) = (x1 − 100)/(200 − 100) = (x1 − 100)/100
Solving this equation for x1 yields
x1 = 100F1 + 100
Let x2 be the value for the future value. Since it has a normal distribution, its cumulative
probability (F2) is given by the values in Table 8.1. Once F2(Z) is randomly chosen, the
appropriate value of Z is determined from Table 8.1 and then x2 = Z ∗ σ + μ.
Let x3 be the value for the project life. Since it has a discrete distribution, its cumulative
probability (F3) is given by
F3 = 0.4   for 5 ≤ x3 < 6
F3 = 0.7   for 6 ≤ x3 < 7
F3 = 0.9   for 7 ≤ x3 < 8
F3 = 1.0   for x3 ≥ 8
[Step plot — Cumulative Probability for Project Life: F(x) versus n (years)]
Thus, once F3 is randomly chosen, the value for x3 will be:
x3 = 5   for F3 ≤ 0.4
x3 = 6   for 0.4 < F3 ≤ 0.7
x3 = 7   for 0.7 < F3 ≤ 0.9
x3 = 8   for 0.9 < F3 ≤ 1.0
The dependent variable is ROR which is calculated by using: ROR = (x2/x1)1/x3 − 1.
Each iteration can be calculated using the following Excel® spreadsheet:
If one tabulates the result in cell B9 into another column of results (for example start in cell
A20), the spreadsheet will automatically select three new random numbers and a new result of
ROR will be calculated. This result would then be tabulated in cell A21. In order to generate
the histogram, this process is repeated manually 100 times (which would result in a column
of data from A20 to A119).
The results would be as follows:
[Plot — Cumulative Probability for ROR: F(x) (%) versus ROR (%) after 100 iterations]
At ROR of 15%, F (x) is approximately 74%. This means that there is a 74% probability that
the ROR will be less than 15% or a 26% probability that the ROR will be at least 15%.
9.2 PROBLEMS
9.1. You are to conduct an extensive sensitivity analysis on the problem described below. The
sensitivity analysis will consist of three parts:
(a) A range approach where the most optimistic, most likely, and most pessimistic values
of the dependent variables NP V and I RR are determined.
(b) A range approach where the mean value is determined for each independent variable
and then each variable is allowed to vary ±20% about that mean while all other
independent variables are held constant. Create spider plots for NP V and I RR using
the results.
(c) A probabilistic approach using Monte Carlo Simulation. Complete 10 iterations and
create cumulative probability curves for NP V and I RR.
The project life is 7 years and the MARR is 15%.
The initial investment is given by a uniform distribution between $200,000 and $300,000.
The annual profit is given by a triangular distribution that has a minimum value of
$55,000/year, a mode of $67,500/year, and a maximum value of $85,000/year.
The salvage value of the investment is given by a triangular distribution that has a minimum
value of $60,000, a mode of $75,000, and a maximum value of $85,000.
The cash flow diagram would be:
Year:        0         1       2       3      …      6       7
Cash flow:  -Invest    Profit  Profit  Profit  ----   Profit  Profit + Salvage
9.2. Complete Problem 9.1 using an Excel® spreadsheet to calculate 100 iterations. Create
cumulative probability curves for NP V and I RR.
9.3. The following distributions are given for three independent variables, x1, x2, and x3 and the
relationship for the dependent variable, y. Calculate the largest, smallest, and mean values
of the dependent variable, y.
x1: Uniform distribution between 35 and 50
x2: Triangular distribution between 20 and 40 with a mode of 35
x3: Discrete distribution with a 50% probability that the value will be 2 and 50%
probability that the value will be 4
y = (x1)(x2) + x3
9.4. Using the information given in Problem 9.3, use Monte Carlo Simulation to calculate 10
iterations of the dependent variable and create the cumulative probability diagram for the
dependent variable y.
9.5. Complete Problem 9.4 using an Excel® spreadsheet and 100 iterations.
9.6. Complete Problem 9.4 using an Excel® spreadsheet and assuming that independent variable
x2 has a normal distribution with a mean of 30 and a standard deviation of 3. Compute 100
iterations and create the cumulative probability diagram for the dependent variable y.
APPENDIX A
Compound Interest Factors
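The tables that follow list the standard compound interest factors for interest rates from 1% to 25%. Each entry follows from a closed-form formula, so any individual value can be regenerated directly. As an illustration, the short Python sketch below computes all seven factors for a given interest rate i and number of periods n.

def interest_factors(i, n):
    """Compute the seven standard compound interest factors for rate i and n periods."""
    g = (1.0 + i) ** n
    return {
        "F/P": g,                       # single-payment compound amount
        "P/F": 1.0 / g,                 # single-payment present worth
        "F/A": (g - 1.0) / i,           # uniform-series compound amount
        "A/F": i / (g - 1.0),           # sinking fund
        "P/A": (g - 1.0) / (i * g),     # uniform-series present worth
        "A/P": i * g / (g - 1.0),       # capital recovery
        "A/G": 1.0 / i - n / (g - 1.0), # arithmetic-gradient uniform series
    }

# Example: the i = 10%, n = 5 row of the tables.
for name, value in interest_factors(0.10, 5).items():
    print(f"{name} = {value:.5f}")

Running the example reproduces the i = 10%, n = 5 row of the tables (F/P = 1.61051, P/A = 3.7908, A/G = 1.81013, and so on).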
F/P
1.01000
1.02010
1.03030
1.04060
1.05101
1.06152
1.07214
1.08286
1.09369
1.10462
1.11567
1.12683
1.13809
1.14947
1.16097
1.17258
1.18430
1.19615
1.20811
1.22019
1.28243
1.34785
1.41660
1.48886
1.56481
1.64463
1.72852
1.81670
1.90937
2.00676
2.10913
2.21672
2.32979
2.44863
2.57354
2.70481
P/F
0.99010
0.98030
0.97059
0.96098
0.95147
0.94205
0.93272
0.92348
0.91434
0.90529
0.89632
0.88745
0.87866
0.86996
0.86135
0.85282
0.84438
0.83602
0.82774
0.81954
0.77977
0.74192
0.70591
0.67165
0.63905
0.60804
0.57853
0.55045
0.52373
0.49831
0.47413
0.45112
0.42922
0.40839
0.38857
0.36971
i = 1%
F/A
1.0000
2.0100
3.0301
4.0604
5.1010
6.1520
7.2135
8.2857
9.3685
10.4622
11.5668
12.6825
13.8093
14.9474
16.0969
17.2579
18.4304
19.6147
20.8109
22.0190
28.2432
34.7849
41.6603
48.8864
56.4811
64.4632
72.8525
81.6697
90.9366
100.6763
110.9128
121.6715
132.9790
144.8633
157.3538
170.4814
A/F
1.00000
0.49751
0.33002
0.24628
0.19604
0.16255
0.13863
0.12069
0.10674
0.09558
0.08645
0.07885
0.07241
0.06690
0.06212
0.05794
0.05426
0.05098
0.04805
0.04542
0.03541
0.02875
0.02400
0.02046
0.01771
0.01551
0.01373
0.01224
0.01100
0.00993
0.00902
0.00822
0.00752
0.00690
0.00636
0.00587
P/A
0.9901
1.9704
2.9410
3.9020
4.8534
5.7955
6.7282
7.6517
8.5660
9.4713
10.3676
11.2551
12.1337
13.0037
13.8651
14.7179
15.5623
16.3983
17.2260
18.0456
22.0232
25.8077
29.4086
32.8347
36.0945
39.1961
42.1472
44.9550
47.6266
50.1685
52.5871
54.8882
57.0777
59.1609
61.1430
63.0289
A/G
A/P
0.00000
1.01000
0.49751
0.50751
0.99337
0.34002
1.48756
0.25628
1.98010
0.20604
2.47098
0.17255
2.96020
0.14863
3.44777
0.13069
3.93367
0.11674
4.41792
0.10558
4.90052
0.09645
5.38145
0.08885
5.86073
0.08241
6.33836
0.07690
6.81433
0.07212
7.28865
0.06794
7.76131
0.06426
8.23231
0.06098
8.70167
0.05805
0.05542
9.16937
0.04541 11.48312
0.03875 13.75566
0.03400 15.98711
0.03046 18.17761
0.02771 20.32730
0.02551 22.43635
0.02373 24.50495
0.02224 26.53331
0.02100 28.52167
0.01993 30.47026
0.01902 32.37934
0.01822 34.24920
0.01752 36.08013
0.01690 37.87245
0.01636 39.62648
0.01587 41.34257
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
F/P
1.02000
1.04040
1.06121
1.08243
1.10408
1.12616
1.14869
1.17166
1.19509
1.21899
1.24337
1.26824
1.29361
1.31948
1.34587
1.37279
1.40024
1.42825
1.45681
1.48595
1.64061
1.81136
1.99989
2.20804
2.43785
2.69159
2.97173
3.28103
3.62252
3.99956
4.41584
4.87544
5.38288
5.94313
6.56170
7.24465
P/F
0.98039
0.96117
0.94232
0.92385
0.90573
0.88797
0.87056
0.85349
0.83676
0.82035
0.80426
0.78849
0.77303
0.75788
0.74301
0.72845
0.71416
0.70016
0.68643
0.67297
0.60953
0.55207
0.50003
0.45289
0.41020
0.37153
0.33650
0.30478
0.27605
0.25003
0.22646
0.20511
0.18577
0.16826
0.15240
0.13803
i = 2%
F/A
1.0000
2.0200
3.0604
4.1216
5.2040
6.3081
7.4343
8.5830
9.7546
10.9497
12.1687
13.4121
14.6803
15.9739
17.2934
18.6393
20.0121
21.4123
22.8406
24.2974
32.0303
40.5681
49.9945
60.4020
71.8927
84.5794
98.5865
114.0515
131.1262
149.9779
170.7918
193.7720
219.1439
247.1567
278.0850
312.2323
A/F
1.00000
0.49505
0.32675
0.24262
0.19216
0.15853
0.13451
0.11651
0.10252
0.09133
0.08218
0.07456
0.06812
0.06260
0.05783
0.05365
0.04997
0.04670
0.04378
0.04116
0.03122
0.02465
0.02000
0.01656
0.01391
0.01182
0.01014
0.00877
0.00763
0.00667
0.00586
0.00516
0.00456
0.00405
0.00360
0.00320
P/A
0.9804
1.9416
2.8839
3.8077
4.7135
5.6014
6.4720
7.3255
8.1622
8.9826
9.7868
10.5753
11.3484
12.1062
12.8493
13.5777
14.2919
14.9920
15.6785
16.3514
19.5235
22.3965
24.9986
27.3555
29.4902
31.4236
33.1748
34.7609
36.1975
37.4986
38.6771
39.7445
40.7113
41.5869
42.3800
43.0984
A/G
A/P
0.00000
1.02000
0.49505
0.51505
0.98680
0.34675
1.47525
0.26262
1.96040
0.21216
2.44226
0.17853
2.92082
0.15451
3.39608
0.13651
3.86805
0.12252
4.33674
0.11133
4.80213
0.10218
5.26424
0.09456
5.72307
0.08812
6.17862
0.08260
6.63090
0.07783
7.07990
0.07365
7.52564
0.06997
7.96811
0.06670
8.40732
0.06378
0.06116
8.84328
0.05122 10.97445
0.04465 13.02512
0.04000 14.99613
0.03656 16.88850
0.03391 18.70336
0.03182 20.44198
0.03014 22.10572
0.02877 23.69610
0.02763 25.21471
0.02667 26.66323
0.02586 28.04344
0.02516 29.35718
0.02456 30.60635
0.02405 31.79292
0.02360 32.91889
0.02320 33.98628
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
F/P
1.03000
1.06090
1.09273
1.12551
1.15927
1.19405
1.22987
1.26677
1.30477
1.34392
1.38423
1.42576
1.46853
1.51259
1.55797
1.60471
1.65285
1.70243
1.75351
1.80611
2.09378
2.42726
2.81386
3.26204
3.78160
4.38391
5.08215
5.89160
6.82998
7.91782
9.17893
10.64089
12.33571
14.30047
16.57816
19.21863
P/F
0.97087
0.94260
0.91514
0.88849
0.86261
0.83748
0.81309
0.78941
0.76642
0.74409
0.72242
0.70138
0.68095
0.66112
0.64186
0.62317
0.60502
0.58739
0.57029
0.55368
0.47761
0.41199
0.35538
0.30656
0.26444
0.22811
0.19677
0.16973
0.14641
0.12630
0.10895
0.09398
0.08107
0.06993
0.06032
0.05203
i = 3%
F/A
1.0000
2.0300
3.0909
4.1836
5.3091
6.4684
7.6625
8.8923
10.1591
11.4639
12.8078
14.1920
15.6178
17.0863
18.5989
20.1569
21.7616
23.4144
25.1169
26.8704
36.4593
47.5754
60.4621
75.4013
92.7199
112.7969
136.0716
163.0534
194.3328
230.5941
272.6309
321.3630
377.8570
443.3489
519.2720
607.2877
A/F
1.00000
0.49261
0.32353
0.23903
0.18835
0.15460
0.13051
0.11246
0.09843
0.08723
0.07808
0.07046
0.06403
0.05853
0.05377
0.04961
0.04595
0.04271
0.03981
0.03722
0.02743
0.02102
0.01654
0.01326
0.01079
0.00887
0.00735
0.00613
0.00515
0.00434
0.00367
0.00311
0.00265
0.00226
0.00193
0.00165
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
P/A
0.9709
1.9135
2.8286
3.7171
4.5797
5.4172
6.2303
7.0197
7.7861
8.5302
9.2526
9.9540
10.6350
11.2961
11.9379
12.5611
13.1661
13.7535
14.3238
14.8775
17.4131
19.6004
21.4872
23.1148
24.5187
25.7298
26.7744
27.6756
28.4529
29.1234
29.7018
30.2008
30.6312
31.0024
31.3227
31.5989
A/G
A/P
0.00000
1.03000
0.49261
0.52261
0.98030
0.35353
1.46306
0.26903
1.94090
0.21835
2.41383
0.18460
2.88185
0.16051
3.34496
0.14246
3.80318
0.12843
4.25650
0.11723
4.70494
0.10808
5.14850
0.10046
5.58720
0.09403
6.02104
0.08853
6.45004
0.08377
6.87421
0.07961
7.29357
0.07595
7.70812
0.07271
8.11788
0.06981
0.06722
8.52286
0.05743 10.47677
0.05102 12.31407
0.04654 14.03749
0.04326 15.65016
0.04079 17.15557
0.03887 18.55751
0.03735 19.86004
0.03613 21.06742
0.03515 22.18407
0.03434 23.21454
0.03367 24.16342
0.03311 25.03534
0.03265 25.83490
0.03226 26.56665
0.03193 27.23505
0.03165 27.84445
i = 4%
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
F/P
1.04000
1.08160
1.12486
1.16986
1.21665
1.26532
1.31593
1.36857
1.42331
1.48024
1.53945
1.60103
1.66507
1.73168
1.80094
1.87298
1.94790
2.02582
2.10685
2.19112
2.66584
3.24340
3.94609
4.80102
5.84118
7.10668
8.64637
10.51963
12.79874
15.57162
18.94525
23.04980
28.04360
34.11933
41.51139
50.50495
P/F
0.96154
0.92456
0.88900
0.85480
0.82193
0.79031
0.75992
0.73069
0.70259
0.67556
0.64958
0.62460
0.60057
0.57748
0.55526
0.53391
0.51337
0.49363
0.47464
0.45639
0.37512
0.30832
0.25342
0.20829
0.17120
0.14071
0.11566
0.09506
0.07813
0.06422
0.05278
0.04338
0.03566
0.02931
0.02409
0.01980
F/A
1.0000
2.0400
3.1216
4.2465
5.4163
6.6330
7.8983
9.2142
10.5828
12.0061
13.4864
15.0258
16.6268
18.2919
20.0236
21.8245
23.6975
25.6454
27.6712
29.7781
41.6459
56.0849
73.6522
95.0255
121.0294
152.6671
191.1592
237.9907
294.9684
364.2905
448.6314
551.2450
676.0901
827.9833
1012.7846
1237.6237
A/F
1.00000
0.49020
0.32035
0.23549
0.18463
0.15076
0.12661
0.10853
0.09449
0.08329
0.07415
0.06655
0.06014
0.05467
0.04994
0.04582
0.04220
0.03899
0.03614
0.03358
0.02401
0.01783
0.01358
0.01052
0.00826
0.00655
0.00523
0.00420
0.00339
0.00275
0.00223
0.00181
0.00148
0.00121
0.00099
0.00081
P/A
0.9615
1.8861
2.7751
3.6299
4.4518
5.2421
6.0021
6.7327
7.4353
8.1109
8.7605
9.3851
9.9856
10.5631
11.1184
11.6523
12.1657
12.6593
13.1339
13.5903
15.6221
17.2920
18.6646
19.7928
20.7200
21.4822
22.1086
22.6235
23.0467
23.3945
23.6804
23.9154
24.1085
24.2673
24.3978
24.5050
A/G
A/P
0.00000
1.04000
0.49020
0.53020
0.97386
0.36035
1.45100
0.27549
1.92161
0.22463
2.38571
0.19076
2.84332
0.16661
3.29443
0.14853
3.73908
0.13449
4.17726
0.12329
4.60901
0.11415
5.03435
0.10655
5.45329
0.10014
5.86586
0.09467
6.27209
0.08994
6.67200
0.08582
7.06563
0.08220
7.45300
0.07899
7.83416
0.07614
8.20912
0.07358
0.06401
9.99252
0.05783 11.62743
0.05358 13.11984
0.05052 14.47651
0.04826 15.70474
0.04655 16.81225
0.04523 17.80704
0.04420 18.69723
0.04339 19.49093
0.04275 20.19614
0.04223 20.82062
0.04181 21.37185
0.04148 21.85693
0.04121 22.28255
0.04099 22.65498
0.04081 22.98000
i = 5%
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
F/P
1.05000
1.10250
1.15763
1.21551
1.27628
1.34010
1.40710
1.47746
1.55133
1.62889
1.71034
1.79586
1.88565
1.97993
2.07893
2.18287
2.29202
2.40662
2.52695
2.65330
3.38635
4.32194
5.51602
7.03999
8.98501
11.46740
14.63563
18.67919
23.83990
30.42643
38.83269
49.56144
63.25435
80.73037
103.03468
131.50126
P/F
0.95238
0.90703
0.86384
0.82270
0.78353
0.74622
0.71068
0.67684
0.64461
0.61391
0.58468
0.55684
0.53032
0.50507
0.48102
0.45811
0.43630
0.41552
0.39573
0.37689
0.29530
0.23138
0.18129
0.14205
0.11130
0.08720
0.06833
0.05354
0.04195
0.03287
0.02575
0.02018
0.01581
0.01239
0.00971
0.00760
F/A
1.0000
2.0500
3.1525
4.3101
5.5256
6.8019
8.1420
9.5491
11.0266
12.5779
14.2068
15.9171
17.7130
19.5986
21.5786
23.6575
25.8404
28.1324
30.5390
33.0660
47.7271
66.4388
90.3203
120.7998
159.7002
209.3480
272.7126
353.5837
456.7980
588.5285
756.6537
971.2288
1245.0871
1594.6073
2040.6935
2610.0252
A/F
1.00000
0.48780
0.31721
0.23201
0.18097
0.14702
0.12282
0.10472
0.09069
0.07950
0.07039
0.06283
0.05646
0.05102
0.04634
0.04227
0.03870
0.03555
0.03275
0.03024
0.02095
0.01505
0.01107
0.00828
0.00626
0.00478
0.00367
0.00283
0.00219
0.00170
0.00132
0.00103
0.00080
0.00063
0.00049
0.00038
P/A
0.9524
1.8594
2.7232
3.5460
4.3295
5.0757
5.7864
6.4632
7.1078
7.7217
8.3064
8.8633
9.3936
9.8986
10.3797
10.8378
11.2741
11.6896
12.0853
12.4622
14.0939
15.3725
16.3742
17.1591
17.7741
18.2559
18.6335
18.9293
19.1611
19.3427
19.4850
19.5965
19.6838
19.7523
19.8059
19.8479
A/G
A/P
0.00000
1.05000
0.48780
0.53780
0.96749
0.36721
1.43905
0.28201
1.90252
0.23097
2.35790
0.19702
2.80523
0.17282
3.24451
0.15472
3.67579
0.14069
4.09909
0.12950
4.51444
0.12039
4.92190
0.11283
5.32150
0.10646
5.71329
0.10102
6.09731
0.09634
6.47363
0.09227
6.84229
0.08870
7.20336
0.08555
7.55690
0.08275
7.90297
0.08024
0.07095
9.52377
0.06505 10.96914
0.06107 12.24980
0.05828 13.37747
0.05626 14.36444
0.05478 15.22326
0.05367 15.96645
0.05283 16.60618
0.05219 17.15410
0.05170 17.62119
0.05132 18.01759
0.05103 18.35260
0.05080 18.63463
0.05063 18.87120
0.05049 19.06894
0.05038 19.23372
i = 6%
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
F/P
1.06000
1.12360
1.19102
1.26248
1.33823
1.41852
1.50363
1.59385
1.68948
1.79085
1.89830
2.01220
2.13293
2.26090
2.39656
2.54035
2.69277
2.85434
3.02560
3.20714
4.29187
5.74349
7.68609
10.28572
13.76461
18.42015
24.65032
32.98769
44.14497
59.07593
79.05692
105.79599
141.57890
189.46451
253.54625
339.30208
P/F
0.94340
0.89000
0.83962
0.79209
0.74726
0.70496
0.66506
0.62741
0.59190
0.55839
0.52679
0.49697
0.46884
0.44230
0.41727
0.39365
0.37136
0.35034
0.33051
0.31180
0.23300
0.17411
0.13011
0.09722
0.07265
0.05429
0.04057
0.03031
0.02265
0.01693
0.01265
0.00945
0.00706
0.00528
0.00394
0.00295
F/A
1.0000
2.0600
3.1836
4.3746
5.6371
6.9753
8.3938
9.8975
11.4913
13.1808
14.9716
16.8699
18.8821
21.0151
23.2760
25.6725
28.2129
30.9057
33.7600
36.7856
54.8645
79.0582
111.4348
154.7620
212.7435
290.3359
394.1720
533.1282
719.0829
967.9322
1300.9487
1746.5999
2342.9817
3141.0752
4209.1042
5638.3681
A/F
1.00000
0.48544
0.31411
0.22859
0.17740
0.14336
0.11914
0.10104
0.08702
0.07587
0.06679
0.05928
0.05296
0.04758
0.04296
0.03895
0.03544
0.03236
0.02962
0.02718
0.01823
0.01265
0.00897
0.00646
0.00470
0.00344
0.00254
0.00188
0.00139
0.00103
0.00077
0.00057
0.00043
0.00032
0.00024
0.00018
P/A
0.9434
1.8334
2.6730
3.4651
4.2124
4.9173
5.5824
6.2098
6.8017
7.3601
7.8869
8.3838
8.8527
9.2950
9.7122
10.1059
10.4773
10.8276
11.1581
11.4699
12.7834
13.7648
14.4982
15.0463
15.4558
15.7619
15.9905
16.1614
16.2891
16.3845
16.4558
16.5091
16.5489
16.5787
16.6009
16.6175
A/G
A/P
0.00000
1.06000
0.48544
0.54544
0.96118
0.37411
1.42723
0.28859
1.88363
0.23740
2.33040
0.20336
2.76758
0.17914
3.19521
0.16104
3.61333
0.14702
4.02201
0.13587
4.42129
0.12679
4.81126
0.11928
5.19198
0.11296
5.56352
0.10758
5.92598
0.10296
6.27943
0.09895
6.62397
0.09544
6.95970
0.09236
7.28673
0.08962
7.60515
0.08718
0.07823
9.07220
0.07265 10.34221
0.06897 11.43192
0.06646 12.35898
0.06470 13.14129
0.06344 13.79643
0.06254 14.34112
0.06188 14.79095
0.06139 15.16012
0.06103 15.46135
0.06077 15.70583
0.06057 15.90328
0.06043 16.06202
0.06032 16.18912
0.06024 16.29050
0.06018 16.37107
i = 7%
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
F/P
1.07000
1.14490
1.22504
1.31080
1.40255
1.50073
1.60578
1.71819
1.83846
1.96715
2.10485
2.25219
2.40985
2.57853
2.75903
2.95216
3.15882
3.37993
3.61653
3.86968
5.42743
7.61226
10.67658
14.97446
21.00245
29.45703
41.31500
57.94643
81.27286
113.98939
159.87602
224.23439
314.50033
441.10298
618.66975
867.71633
P/F
0.93458
0.87344
0.81630
0.76290
0.71299
0.66634
0.62275
0.58201
0.54393
0.50835
0.47509
0.44401
0.41496
0.38782
0.36245
0.33873
0.31657
0.29586
0.27651
0.25842
0.18425
0.13137
0.09366
0.06678
0.04761
0.03395
0.02420
0.01726
0.01230
0.00877
0.00625
0.00446
0.00318
0.00227
0.00162
0.00115
A/F
F/A
P/A
0.9346
1.0000 1.00000
1.8080
2.0700 0.48309
2.6243
3.2149 0.31105
3.3872
4.4399 0.22523
4.1002
5.7507 0.17389
4.7665
7.1533 0.13980
5.3893
8.6540 0.11555
5.9713
10.2598 0.09747
6.5152
11.9780 0.08349
7.0236
13.8164 0.07238
7.4987
15.7836 0.06336
7.9427
17.8885 0.05590
8.3577
20.1406 0.04965
8.7455
22.5505 0.04434
9.1079
25.1290 0.03979
9.4466
27.8881 0.03586
30.8402 0.03243
9.7632
33.9990 0.02941 10.0591
37.3790 0.02675 10.3356
40.9955 0.02439 10.5940
63.2490 0.01581 11.6536
94.4608 0.01059 12.4090
138.2369 0.00723 12.9477
199.6351 0.00501 13.3317
285.7493 0.00350 13.6055
406.5289 0.00246 13.8007
575.9286 0.00174 13.9399
813.5204 0.00123 14.0392
1146.7552 0.00087 14.1099
1614.1342 0.00062 14.1604
2269.6574 0.00044 14.1964
3189.0627 0.00031 14.2220
4478.5761 0.00022 14.2403
6287.1854 0.00016 14.2533
8823.8535 0.00011 14.2626
12381.6618 0.00008 14.2693
A/G
A/P
0.00000
1.07000
0.48309
0.55309
0.95493
0.38105
1.41554
0.29523
1.86495
0.24389
2.30322
0.20980
2.73039
0.18555
3.14654
0.16747
3.55174
0.15349
3.94607
0.14238
4.32963
0.13336
4.70252
0.12590
5.06484
0.11965
5.41673
0.11434
5.75829
0.10979
6.08968
0.10586
6.41102
0.10243
6.72247
0.09941
7.02418
0.09675
7.31631
0.09439
8.63910
0.08581
0.08059
9.74868
0.07723 10.66873
0.07501 11.42335
0.07350 12.03599
0.07246 12.52868
0.07174 12.92146
0.07123 13.23209
0.07087 13.47598
0.07062 13.66619
0.07044 13.81365
0.07031 13.92735
0.07022 14.01458
0.07016 14.08122
0.07011 14.13191
0.07008 14.17034
i = 8%
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
F/P
1.08000
1.16640
1.25971
1.36049
1.46933
1.58687
1.71382
1.85093
1.99900
2.15892
2.33164
2.51817
2.71962
2.93719
3.17217
3.42594
3.70002
3.99602
4.31570
4.66096
6.84848
10.06266
14.78534
21.72452
31.92045
46.90161
68.91386
101.25706
148.77985
218.60641
321.20453
471.95483
693.45649
1018.91509
1497.12055
2199.76126
P/F
0.92593
0.85734
0.79383
0.73503
0.68058
0.63017
0.58349
0.54027
0.50025
0.46319
0.42888
0.39711
0.36770
0.34046
0.31524
0.29189
0.27027
0.25025
0.23171
0.21455
0.14602
0.09938
0.06763
0.04603
0.03133
0.02132
0.01451
0.00988
0.00672
0.00457
0.00311
0.00212
0.00144
0.00098
0.00067
0.00045
F/A
A/F
P/A
0.9259
1.0000 1.00000
1.7833
2.0800 0.48077
2.5771
3.2464 0.30803
3.3121
4.5061 0.22192
3.9927
5.8666 0.17046
4.6229
7.3359 0.13632
5.2064
8.9228 0.11207
5.7466
10.6366 0.09401
6.2469
12.4876 0.08008
6.7101
14.4866 0.06903
7.1390
16.6455 0.06008
7.5361
18.9771 0.05270
7.9038
21.4953 0.04652
8.2442
24.2149 0.04130
8.5595
27.1521 0.03683
8.8514
30.3243 0.03298
9.1216
33.7502 0.02963
9.3719
37.4502 0.02670
9.6036
41.4463 0.02413
45.7620 0.02185
9.8181
73.1059 0.01368 10.6748
113.2832 0.00883 11.2578
172.3168 0.00580 11.6546
259.0565 0.00386 11.9246
386.5056 0.00259 12.1084
573.7702 0.00174 12.2335
848.9232 0.00118 12.3186
1253.2133 0.00080 12.3766
1847.2481 0.00054 12.4160
2720.0801 0.00037 12.4428
4002.5566 0.00025 12.4611
5886.9354 0.00017 12.4735
8655.7061 0.00012 12.4820
12723.9386 0.00008 12.4877
18701.5069 0.00005 12.4917
27484.5157 0.00004 12.4943
A/G
A/P
0.00000
1.08000
0.48077
0.56077
0.94874
0.38803
1.40396
0.30192
1.84647
0.25046
2.27635
0.21632
2.69366
0.19207
3.09852
0.17401
3.49103
0.16008
3.87131
0.14903
4.23950
0.14008
4.59575
0.13270
4.94021
0.12652
5.27305
0.12130
5.59446
0.11683
5.90463
0.11298
6.20375
0.10963
6.49203
0.10670
6.76969
0.10413
7.03695
0.10185
8.22538
0.09368
9.18971
0.08883
9.96107
0.08580
0.08386 10.56992
0.08259 11.04465
0.08174 11.41071
0.08118 11.69015
0.08080 11.90154
0.08054 12.06016
0.08037 12.17832
0.08025 12.26577
0.08017 12.33013
0.08012 12.37725
0.08008 12.41158
0.08005 12.43650
0.08004 12.45452
i = 9%
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
F/P
1.09000
1.18810
1.29503
1.41158
1.53862
1.67710
1.82804
1.99256
2.17189
2.36736
2.58043
2.81266
3.06580
3.34173
3.64248
3.97031
4.32763
4.71712
5.14166
5.60441
8.62308
13.26768
20.41397
31.40942
48.32729
74.35752
114.40826
176.03129
270.84596
416.73009
641.19089
986.55167
1517.93203
2335.52658
3593.49715
5529.04079
P/F
0.91743
0.84168
0.77218
0.70843
0.64993
0.59627
0.54703
0.50187
0.46043
0.42241
0.38753
0.35553
0.32618
0.29925
0.27454
0.25187
0.23107
0.21199
0.19449
0.17843
0.11597
0.07537
0.04899
0.03184
0.02069
0.01345
0.00874
0.00568
0.00369
0.00240
0.00156
0.00101
0.00066
0.00043
0.00028
0.00018
F/A
A/F
1.0000 1.00000
2.0900 0.47847
3.2781 0.30505
4.5731 0.21867
5.9847 0.16709
7.5233 0.13292
9.2004 0.10869
11.0285 0.09067
13.0210 0.07680
15.1929 0.06582
17.5603 0.05695
20.1407 0.04965
22.9534 0.04357
26.0192 0.03843
29.3609 0.03406
33.0034 0.03030
36.9737 0.02705
41.3013 0.02421
46.0185 0.02173
51.1601 0.01955
84.7009 0.01181
P/A
0.9174
1.7591
2.5313
3.2397
3.8897
4.4859
5.0330
5.5348
5.9952
6.4177
6.8052
7.1607
7.4869
7.7862
8.0607
8.3126
8.5436
8.7556
8.9501
9.1285
9.8226
136.3075 0.00734 10.2737
215.7108 0.00464 10.5668
337.8824 0.00296 10.7574
525.8587 0.00190 10.8812
815.0836 0.00123 10.9617
1260.0918 0.00079 11.0140
1944.7921 0.00051 11.0480
2998.2885 0.00033 11.0701
4619.2232 0.00022 11.0844
7113.2321 0.00014 11.0938
10950.5741 0.00009 11.0998
16854.8003 0.00006 11.1038
25939.1842 0.00004 11.1064
39916.6350 0.00003 11.1080
61422.6755 0.00002 11.1091
A/G
A/P
0.00000
1.09000
0.47847
0.56847
0.94262
0.39505
1.39250
0.30867
1.82820
0.25709
2.24979
0.22292
2.65740
0.19869
3.05117
0.18067
3.43123
0.16680
3.79777
0.15582
4.15096
0.14695
4.49102
0.13965
4.81816
0.13357
5.13262
0.12843
5.43463
0.12406
5.72446
0.12030
6.00238
0.11705
6.26865
0.11421
6.52358
0.11173
6.76745
0.10955
7.83160
0.10181
8.66566
0.09734
9.30829
0.09464
0.09296
9.79573
0.09190 10.16029
0.09123 10.42952
0.09079 10.62614
0.09051 10.76832
0.09033 10.87023
0.09022 10.94273
0.09014 10.99396
0.09009 11.02994
0.09006 11.05508
0.09004 11.07256
0.09003 11.08467
0.09002 11.09302
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
25
30
35
40
45
50
55
60
65
70
F/P
1.10000
1.21000
1.33100
1.46410
1.61051
1.77156
1.94872
2.14359
2.35795
2.59374
2.85312
3.13843
3.45227
3.79750
4.17725
4.59497
5.05447
5.55992
6.11591
6.72750
10.8347
17.4494
28.1024
45.2593
72.8905
117.391
189.059
304.482
490.371
789.747
P/F
0.90909
0.82645
0.75131
0.68301
0.62092
0.56447
0.51316
0.46651
0.42410
0.38554
0.35049
0.31863
0.28966
0.26333
0.23939
0.21763
0.19784
0.17986
0.16351
0.14864
0.09230
0.05731
0.03558
0.02209
0.01372
0.00852
0.00529
0.00328
0.00204
0.00127
i = 10%
F/A
1.0000
2.1000
3.3100
4.6410
6.1051
7.7156
9.4872
11.4359
13.5795
15.9374
18.5312
21.3843
24.5227
27.9750
31.7725
35.9497
40.5447
45.5992
51.1591
57.2750
98.3471
164.494
271.024
442.593
718.905
1163.91
1880.59
3034.82
4893.71
7887.47
A/F
1.00000
0.47619
0.30211
0.21547
0.16380
0.12961
0.10541
0.08744
0.07364
0.06275
0.05396
0.04676
0.04078
0.03575
0.03147
0.02782
0.02466
0.02193
0.01955
0.01746
0.01017
0.00608
0.00369
0.00226
0.00139
0.00086
0.00053
0.00033
0.00020
0.00013
P/A
0.9091
1.7355
2.4869
3.1699
3.7908
4.3553
4.8684
5.3349
5.7590
6.1446
6.4951
6.8137
7.1034
7.3667
7.6061
7.8237
8.0216
8.2014
8.3649
8.5136
9.0770
9.4269
9.6442
9.7791
9.8628
9.9148
9.9471
9.9672
9.9796
9.9873
A/P
1.10000
0.57619
0.40211
0.31547
0.26380
0.22961
0.20541
0.18744
0.17364
0.16275
0.15396
0.14676
0.14078
0.13575
0.13147
0.12782
0.12466
0.12193
0.11955
0.11746
0.11017
0.10608
0.10369
0.10226
0.10139
0.10086
0.10053
0.10033
0.10020
0.10013
A/G
0.00000
0.47619
0.93656
1.38117
1.81013
2.22356
2.62162
3.00448
3.37235
3.72546
4.06405
4.38840
4.69879
4.99553
5.27893
5.54934
5.80710
6.05256
6.28610
6.50808
7.45798
8.17623
8.70860
9.09623
9.37405
9.57041
9.70754
9.80229
9.86718
9.91125
F/P
1.12000
1.25440
1.40493
1.57352
1.76234
1.97382
2.21068
2.47596
2.77308
3.10585
3.47855
3.89598
4.36349
4.88711
5.47357
6.13039
6.86604
7.68997
8.61276
9.64629
17.0001
29.9599
52.7996
93.0510
163.988
289.002
509.321
897.597
1581.87
2787.80
P/F
0.89286
0.79719
0.71178
0.63552
0.56743
0.50663
0.45235
0.40388
0.36061
0.32197
0.28748
0.25668
0.22917
0.20462
0.18270
0.16312
0.14564
0.13004
0.11611
0.10367
0.05882
0.03338
0.01894
0.01075
0.00610
0.00346
0.00196
0.00111
0.00063
0.00036
i = 12%
F/A
1.0000
2.1200
3.3744
4.7793
6.3528
8.1152
10.0890
12.2997
14.7757
17.5487
20.6546
24.1331
28.0291
32.3926
37.2797
42.7533
48.8837
55.7497
63.4397
72.0524
133.334
241.333
431.663
767.091
1358.23
2400.02
4236.01
7471.64
13173.9
23223.3
A/F
1.00000
0.47170
0.29635
0.20923
0.15741
0.12323
0.09912
0.08130
0.06768
0.05698
0.04842
0.04144
0.03568
0.03087
0.02682
0.02339
0.02046
0.01794
0.01576
0.01388
0.00750
0.00414
0.00232
0.00130
0.00074
0.00042
0.00024
0.00013
0.00008
0.00004
P/A
0.8929
1.6901
2.4018
3.0373
3.6048
4.1114
4.5638
4.9676
5.3282
5.6502
5.9377
6.1944
6.4235
6.6282
6.8109
6.9740
7.1196
7.2497
7.3658
7.4694
7.8431
8.0552
8.1755
8.2438
8.2825
8.3045
8.3170
8.3240
8.3281
8.3303
A/P
1.12000
0.59170
0.41635
0.32923
0.27741
0.24323
0.21912
0.20130
0.18768
0.17698
0.16842
0.16144
0.15568
0.15087
0.14682
0.14339
0.14046
0.13794
0.13576
0.13388
0.12750
0.12414
0.12232
0.12130
0.12074
0.12042
0.12024
0.12013
0.12008
0.12004
A/G
0.00000
0.47170
0.92461
1.35885
1.77459
2.17205
2.55147
2.91314
3.25742
3.58465
3.89525
4.18965
4.46830
4.73169
4.98030
5.21466
5.43530
5.64274
5.83752
6.02020
6.77084
7.29742
7.65765
7.89879
8.05724
8.15972
8.22513
8.26641
8.29222
8.30821
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
25
30
35
40
45
50
55
60
65
70
F/P
1.15000
1.32250
1.52088
1.74901
2.01136
2.31306
2.66002
3.05902
3.51788
4.04556
4.65239
5.35025
6.15279
7.07571
8.13706
9.35762
10.7613
12.3755
14.2318
16.3665
32.9190
66.2118
133.176
267.864
538.769
1083.66
2179.62
4384.00
8817.79
17735.7
P/F
0.86957
0.75614
0.65752
0.57175
0.49718
0.43233
0.37594
0.32690
0.28426
0.24718
0.21494
0.18691
0.16253
0.14133
0.12289
0.10686
0.09293
0.08081
0.07027
0.06110
0.03038
0.01510
0.00751
0.00373
0.00186
0.00092
0.00046
0.00023
0.00011
0.00006
i = 15%
F/A
1.0000
2.1500
3.4725
4.9934
6.7424
8.7537
11.0668
13.7268
16.7858
20.3037
24.3493
29.0017
34.3519
40.5047
47.5804
55.7175
65.0751
75.8364
88.2118
102.444
212.793
434.745
881.170
1779.09
3585.13
7217.72
14524.1
29220.0
58778.6
118231
A/F
1.00000
0.46512
0.28798
0.20027
0.14832
0.11424
0.09036
0.07285
0.05957
0.04925
0.04107
0.03448
0.02911
0.02469
0.02102
0.01795
0.01537
0.01319
0.01134
0.00976
0.00470
0.00230
0.00113
0.00056
0.00028
0.00014
0.00007
0.00003
0.00002
0.00001
P/A
0.8696
1.6257
2.2832
2.8550
3.3522
3.7845
4.1604
4.4873
4.7716
5.0188
5.2337
5.4206
5.5831
5.7245
5.8474
5.9542
6.0472
6.1280
6.1982
6.2593
6.4641
6.5660
6.6166
6.6418
6.6543
6.6605
6.6636
6.6651
6.6659
6.6663
A/P
1.15000
0.61512
0.43798
0.35027
0.29832
0.26424
0.24036
0.22285
0.20957
0.19925
0.19107
0.18448
0.17911
0.17469
0.17102
0.16795
0.16537
0.16319
0.16134
0.15976
0.15470
0.15230
0.15113
0.15056
0.15028
0.15014
0.15007
0.15003
0.15002
0.15001
A/G
0.00000
0.46512
0.90713
1.32626
1.72281
2.09719
2.44985
2.78133
3.09223
3.38320
3.65494
3.90820
4.14376
4.36241
4.56496
4.75225
4.92509
5.08431
5.23073
5.36514
5.88343
6.20663
6.40187
6.51678
6.58299
6.62048
6.64142
6.65298
6.65929
6.66272
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
25
30
35
40
45
50
55
60
65
70
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
25
30
35
40
45
50
55
60
F/P
1.20000
1.44000
1.72800
2.07360
2.48832
2.98598
3.58318
4.29982
5.15978
6.19174
7.43008
8.91610
10.6993
12.8392
15.4070
18.4884
22.1861
26.6233
31.9480
38.3376
95.3962
237.376
590.668
1469.77
3657.26
9100.44
22644.8
56347.5
P/F
0.83333
0.69444
0.57870
0.48225
0.40188
0.33490
0.27908
0.23257
0.19381
0.16151
0.13459
0.11216
0.09346
0.07789
0.06491
0.05409
0.04507
0.03756
0.03130
0.02608
0.01048
0.00421
0.00169
0.00068
0.00027
0.00011
0.00004
0.00002
i = 20%
F/A
1.0000
2.2000
3.6400
5.3680
7.4416
9.9299
12.9159
16.4991
20.7989
25.9587
32.1504
39.5805
48.4966
59.1959
72.0351
87.4421
105.931
128.117
154.740
186.688
471.981
1181.88
2948.34
7343.86
18281.3
45497.2
113219
281733
A/F
P/A
1.00000 0.8333
0.45455 1.5278
0.27473 2.1065
0.18629 2.5887
0.13438 2.9906
0.10071 3.3255
0.07742 3.6046
0.06061 3.8372
0.04808 4.0310
0.03852 4.1925
0.03110 4.3271
0.02526 4.4392
0.02062 4.5327
0.01689 4.6106
0.01388 4.6755
0.01144 4.7296
0.00944 4.7746
0.00781 4.8122
0.00646 4.8435
0.00536 4.8696
0.00212 4.9476
0.00085 4.9789
0.00034 4.9915
0.00014 4.9966
0.00005 4.9986
0.00002 4.9995
0.00001 4.9998
0.00000 4.9999
A/P
1.20000
0.65455
0.47473
0.38629
0.33438
0.30071
0.27742
0.26061
0.24808
0.23852
0.23110
0.22526
0.22062
0.21689
0.21388
0.21144
0.20944
0.20781
0.20646
0.20536
0.20212
0.20085
0.20034
0.20014
0.20005
0.20002
0.20001
0.20000
A/G
0.00000
0.45455
0.87912
1.27422
1.64051
1.97883
2.29016
2.57562
2.83642
3.07386
3.28929
3.48410
3.65970
3.81749
3.95884
4.08511
4.19759
4.29752
4.38607
4.46435
4.73516
4.87308
4.94064
4.97277
4.98769
4.99451
4.99757
4.99894
i = 25%
F/P
1.25000
1.56250
1.95313
2.44141
3.05176
3.81470
4.76837
5.96046
7.45058
9.31323
11.6415
14.5519
18.1899
22.7374
28.4217
35.5271
44.4089
55.5112
69.3889
86.7362
264.698
807.794
2465.19
7523.16
22958.87
70064.92
213821.2
652530.4
P/F
0.80000
0.64000
0.51200
0.40960
0.32768
0.26214
0.20972
0.16777
0.13422
0.10737
0.08590
0.06872
0.05498
0.04398
0.03518
0.02815
0.02252
0.01801
0.01441
0.01153
0.00378
0.00124
0.00041
0.00013
0.00004
0.00001
0.00000
0.00000
F/A
1.0000
2.2500
3.8125
5.7656
8.2070
11.2588
15.0735
19.8419
25.8023
33.2529
42.5661
54.2077
68.7596
86.9495
109.687
138.109
173.636
218.045
273.556
342.945
1054.79
3227.17
9856.76
30088.7
91831.5
280256
855281
2610118
A/F
1.00000
0.44444
0.26230
0.17344
0.12185
0.08882
0.06634
0.05040
0.03876
0.03007
0.02349
0.01845
0.01454
0.01150
0.00912
0.00724
0.00576
0.00459
0.00366
0.00292
0.00095
0.00031
0.00010
0.00003
0.00001
0.00000
0.00000
0.00000
P/A
0.8000
1.4400
1.9520
2.3616
2.6893
2.9514
3.1611
3.3289
3.4631
3.5705
3.6564
3.7251
3.7801
3.8241
3.8593
3.8874
3.9099
3.9279
3.9424
3.9539
3.9849
3.9950
3.9984
3.9995
3.9998
3.9999
4.0000
4.0000
A/P
1.25000
0.69444
0.51230
0.42344
0.37185
0.33882
0.31634
0.30040
0.28876
0.28007
0.27349
0.26845
0.26454
0.26150
0.25912
0.25724
0.25576
0.25459
0.25366
0.25292
0.25095
0.25031
0.25010
0.25003
0.25001
0.25000
0.25000
0.25000
A/G
0.00000
0.44444
0.85246
1.22493
1.56307
1.86833
2.14243
2.38725
2.60478
2.79710
2.96631
3.11452
3.24374
3.35595
3.45299
3.53660
3.60838
3.66979
3.72218
3.76673
3.90519
3.96282
3.98580
3.99468
3.99804
3.99929
3.99974
3.99991
n
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
25
30
35
40
45
50
55
60
Authors’ Biographies
DAVID L. WHITMAN
David L. Whitman, P.E., Ph.D. received a B.S. degree (1975) in Electrical Engineering from the
University of Wyoming (UW). He also received a Ph.D. degree (1978) in Mineral Engineering
from the University of Wyoming. He worked in the synthetic fuels arena prior to becoming a faculty
member in Petroleum Engineering at the University of Wyoming in 1981. From 1989 to 2005,
he was the Associate Dean of Academics and since 2005 has been a professor of Electrical and
Computer Engineering. He received UW’s Ellbogen Outstanding Teacher Award in 1985, UW’s
College of Engineering Outstanding Undergraduate Teaching Award in 1990 and 2004, and the
ASEE Rocky Mountain Section Outstanding Teaching Award in 2001. He is a Past President
of the National Council of Examiners for Engineering and Surveying (NCEES), chairman of the
IEEE-USA Licensure & Registration Committee, and an active member of ASEE.
RONALD E. TERRY
Ronald E. Terry, Ph.D. received a B.S. in Chemical Engineering from Oregon State University
(1971) and a Ph.D. from Brigham Young University (BYU) (1976). He worked for Phillips Petroleum
Company after graduate school and began his academic career in 1977 at the University of Kansas
in the Chemical and Petroleum Engineering Department. He taught in the Petroleum Engineering
Department at the University of Wyoming (1981-1987) and at BYU in the Chemical Engineering
Department (1987-2007) and in the Technology and Engineering Education Department (2007-
present). He has received teaching awards at the University of Kansas, University of Wyoming, and
at Brigham Young University.
Early in his career, his scholarship efforts involved researching methods to enhance the pro-
duction of oil and gas. After joining BYU, his scholarship centered on pedagogy, student learning,
and engineering ethics. He has served as acting department chair, associate dean, and in BYU’s
central administration as an Associate in the Office of Planning and Assessment for five years
(2003-2008). He is past president of the Rocky Mountain Section of the American Society for
Engineering Education.
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=6813065.pdf&bkn=6813064&pdfType=book
SYNTHESIS LECTURES ON ENGINEERING
Series Editor: Steven F. Barrett, University of Wyoming
Series ISSN: 1939-5221
Crafting Your Research Future
A Guide to Successful Master's and PhD Degrees in Science & Engineering
Charles X. Ling
Western University, Ontario, Canada
Qiang Yang
Hong Kong University of Science and Technology, China
What is it like to be a researcher or a scientist? For young people, including graduate students and junior
faculty members in universities, how can they identify good ideas for research? How do they conduct
solid research to verify and realize their new ideas? How can they formulate their ideas and research
results into high-quality articles, and publish them in highly competitive journals and conferences? What
are effective ways to supervise graduate students so that they can establish themselves quickly in their
research careers? In this book, Ling and Yang answer these questions in a step-by-step manner with
specific and concrete examples from their first-hand research experience.
Critical Acclaim for Crafting Your Research Future
“Ling and Yang summarize the practical aspects of the expectations for the modern graduate student.
They will all benefit.” —Randy Goebel, University of Alberta
“It will be tremendously useful to post-docs and graduate students alike (and perhaps even some
junior faculty!).” —Adrian M. Owen, Professor and Canada Excellence Research Chair, Western University
“Want to have a successful research career? This is a nice guide and companion!”
—Jiawei Han, Abel Bliss Professor, University of Illinois at Urbana-Champaign
About Morgan & Claypool Publishers
This volume is a printed version of a work that appears in Synthesis, the
Digital Library of Engineering and Computer Science. Synthesis Lectures
provide concise, original presentations of important research and development
topics, published quickly, in digital and print formats. For more information
visit synthesis.morganclaypool.com
Morgan & Claypool Publishers
ISBN: 978-1-60845-810-3
www.morganclaypool.com
Copyright © 2012 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or
transmitted in any form or by any means—electronic, mechanical, photocopy, recording, or any other
except for brief quotations in printed reviews, without the prior permission of the publisher.
Crafting Your Research Future
A Guide to Successful Master’s and Ph.D. Degrees in Science & Engineering
Charles X. Ling and Qiang Yang
www.morganclaypool.com
ISBN: 9781608458103 paperback
ISBN: 9781608458110 ebook
DOI: 10.2200/S00412ED1V01Y201203ENG018
A Publication in the
SYNTHESIS LECTURES ON ENGINEERING
Lecture #18
Series Editor: Steven F. Barrett
Series ISSN
ISSN 1939-5221 print
ISSN 1939-523X electronic
Crafting Your Research Future
A Guide to Successful Master's and Ph.D. Degrees in Science & Engineering
Charles X. Ling
University of Western Ontario, Ontario, Canada
Qiang Yang
Hong Kong University of Science and Technology, Hong Kong, China
SYNTHESIS LECTURES ON ENGINEERING #18
ABSTRACT
What is it like to be a researcher or a scientist? For young people, including gradu-
ate students and junior faculty members in universities, how can they identify good
ideas for research? How do they conduct solid research to verify and realize their
new ideas? How can they formulate their ideas and research results into high-
quality articles, and publish them in highly competitive journals and conferences?
What are effective ways to supervise graduate students so that they can establish
themselves quickly in their research careers? In this book, Ling and Yang answer
these questions in a step-by-step manner with specific and concrete examples from
their first-hand research experience.
KEYWORDS:
Research Methodology, Research Career Guidebook, Guidebook for Graduate
Students, Ph.D. and Masters Research, Writing and Publishing Papers, Writing
Ph.D. Thesis, Science and Engineering Career.
Contents
Acknowledgments .......................................................... ix
Preface ..................................................................... xi
1. Basics of Research ....................................................... 1
   1.1 What Is Research? ................................................... 1
   1.2 Who Are Researchers and Who Are Not? ............................... 2
   1.3 What Do Researchers Do? ............................................ 3
   1.4 What Skills and Abilities Are the Most Important for Researchers? .. 7
   1.5 What Are Some Pros and Cons of Being Researchers? ................. 10
   1.6 So, How Do I Become a Researcher? ................................. 12
   1.7 What Are the Differences between a Master's and Ph.D. Thesis? ..... 12
   1.8 How to Find a Suitable Supervisor for Ph.D. Study ................. 13
   1.9 How Long Does It Take to Get My Ph.D. Degree? ..................... 16
   1.10 Three Representative Graduate Students ........................... 17
2. Goals of Ph.D. Research ................................................ 19
   2.1 Goal #1: Be the Best in the Field ................................. 19
   2.2 Goal #2: Be Independent ........................................... 25
   2.3 Three Key Tasks for Getting a Ph.D. (and Master's) Degree ......... 27
   2.4 The Milestones of Getting a Ph.D. Degree .......................... 28
   2.5 Distractions to the Goals ......................................... 30
3. Getting Started: Finding New Ideas and Organizing Your Plans ........... 33
   3.1 Your First Year ................................................... 33
   3.2 How to Find Relevant Papers (Literature Search) ................... 35
   3.3 How to Read Papers ................................................ 37
   3.4 Where to Get New Ideas ............................................ 41
   3.5 From Ideas to Research and Thesis Topic ........................... 44
   3.6 How Do I Know If I Am on the Right Track? ......................... 46
   3.7 Setting Up a Plan for Ph.D. Thesis Early .......................... 47
   3.8 Learning to Organize Papers and Ideas Well ........................ 49
4. Conducting Solid Research .............................................. 51
   4.1 An Overview of a Research Process ................................. 51
   4.2 Jim Gray's Criteria ............................................... 54
   4.3 The Research Matrix Method ........................................ 59
   4.4 Carrying Out Your Research ........................................ 63
   4.5 Brand Yourself .................................................... 67
   4.6 Empirical vs. Theoretical Research ................................ 69
   4.7 Team Work and Multi-Disciplinary Research ......................... 74
5. Writing and Publishing Papers .......................................... 77
   5.1 "Publish or Perish" ............................................... 78
   5.2 Why Publishing Top-Quality Papers Is Hard ......................... 80
   5.3 What Makes a Great Paper? ......................................... 81
   5.4 A Few Untold Truths about Research Papers ......................... 83
   5.5 The Roles of You, Your Supervisor, and Proofreader ................ 85
   5.6 Supervisors: How to Improve Your Students' Writing Most Effectively 87
   5.7 Where to Submit: Conference or Journals? .......................... 89
   5.8 How Are Full-Length Conference Papers Reviewed? ................... 91
   5.9 How Are Journal Papers Reviewed? .................................. 94
6. Misconceptions and Tips for Paper Writing .............................. 99
   6.1 "It's So Obvious that Our Paper Is Great" ......................... 99
   6.2 "It Is Your Responsibility to Understand My Paper" ............... 102
   6.3 The 10/30 Test ................................................... 103
   6.4 Top-Down Refinement of Papers .................................... 104
   6.5 Create a Hierarchy of Subsections and Choose Section Titles
       Carefully ........................................................ 109
   6.6 Tips for Paper Writing ........................................... 110
       6.6.1 Use Certain Words to Signal Readers ........................ 111
       6.6.2 Use Simple Sentences, But Not Simpler ...................... 111
       6.6.3 Use a Small Set of Terms Throughout the Paper .............. 112
       6.6.4 Use Examples Early and Use Them Throughout the Paper ....... 113
       6.6.5 Use Figures, Diagrams, Charts, Photos, Tables .............. 113
       6.6.6 Write about Your Motivations and Justifications ............ 114
       6.6.7 Pose Potential Questions and Answer Them Yourself .......... 114
       6.6.8 Emphasize and Reiterate Key Points in Your Paper ........... 115
       6.6.9 Make Connections Throughout the Paper ...................... 116
       6.6.10 Format Papers for Easy Reading ............................ 116
   6.7 Other Misconceptions and Flaws ................................... 117
   6.8 Summary .......................................................... 118
7. Writing and Defending a Ph.D. Thesis .................................. 121
   7.1 Thesis and Dissertation .......................................... 121
   7.2 Thesis Organization: Top Down or Bottom Up ....................... 122
   7.3 Defending Your Thesis ............................................ 126
8. Life After Ph.D. ...................................................... 131
   8.1 A Day In the Life of a Typical Professor ......................... 131
   8.2 Applying for Research Grants ..................................... 135
   8.3 Technology Transfer .............................................. 144
Summary .................................................................. 151
References ............................................................... 153
Author Biographies ....................................................... 155
Acknowledgments
First of all, we thank the thousands of graduate students and researchers who
attended our seminars on how to conduct solid research and how to write and
publish top-quality papers. To these students: your enthusiasm for research, your
questions, and your suggestions helped us tremendously in preparing and finishing
this book. It is our wish to motivate and support more young researchers in their
path toward a successful research career.
From Charles Ling: I wish to dedicate this book to my parents Zongyun
Ling and Ruqian Cao. They were both life-long, award-winning high-school
teachers. They gave me so much inspiration and guidance throughout my youth,
which laid the foundation of my research life from my graduate study to a full
research career. This book summarizes over 20 years of experience as a researcher,
which represents a direct outcome of my parents’ love, care, and life-long educa-
tion for me. I wrote parts of this book on the bedside of my father when he stayed
in the hospital for several months in 2011. The book is for you!
From Qiang Yang: I dedicate this book to my parents, Professors Haishou
Yang and Xiuying Li, who were professors in China’s Peking University and
Tsinghua University, respectively, where they worked all their lives until retire-
ment. My parents inspired me to pursue a research career in Astrophysics and then
Computer Science. Through their own exemplary research careers, they showed
me the “garden of research,” which is “full of beautiful flowers waiting to be
picked” (to quote from my father). To them: my deepest love and appreciation!
Many colleagues, friends, and students read the early drafts of the book, and
made many detailed suggestions and comments. Brian Srivastava, a very capable
Ph.D. student at Western, provided detailed and thorough suggestions and comments
on an early draft of the book. Lili Zhao, a research assistant in Hong Kong
University of Science and Technology (HKUST), made detailed edits. Luiz
Fernando Capretz, Jimmy Huang, Huan Liu, Aijun An, Steve Barrett, Wei Fan,
Jenna Butler, Randy Goebel, Eileen Ni, and many others gave us suggestions on
various parts of the book. Adrian Owen, Stephen Sims, C.B. Dean and Jiawei
Han gave us encouragement after reading parts of the book. Many of our former
Ph.D. students, especially Victor Sheng and Weizhu Chen, gave us comments and
suggestions. To all: our special gratitude!
Preface
When we were children, we were often asked, “What do you want to be when you
grow up?” Our answer, and dream, was: “I want to become a scientist!” Indeed,
we have many giants to look up to: from Newton to Einstein, to Nobel Laureates
(such as Richard Feynman). However, when we really went through the process
of becoming scientists, we found that the road was not all that straightforward; it
was full of misconceptions that took many instances of trial and error to find out.
True, there are numerous online articles and commentaries, telling young people
how to become a successful researcher and scientist, but most are scattered and
some are biased. There has not been a central place where one can find advice
targeted to answering such questions about the first and most important stepping
stone toward becoming a researcher: graduate study.
We hope this situation will change with the publication of this book,
“Crafting Your Research Future.” Summarizing more than 20 years of experience
in research and practice, we have put together an accessible introductory book for
aspiring researchers in science and technology.
We strive to make this book accessible to a general audience such that it
could be used by readers interested in general areas of science and technology. At
the same time, we try to be as specific and concrete as possible, to show you step
by step, how research is conducted. We also include several case studies and many
specific examples.
The book also summarizes over a decade of seminars and talks at over
50 universities and institutes across the world on the topics: how to do research
and how to write and publish research papers. In all our talks and seminars, the
attendance has been overwhelming, with hundreds of students and faculty filling
the lecture hall. Question-and-answer periods stretched well beyond the talks. Each
seminar gives us a chance to improve our presentation, and this book is the latest
in this effort.
Here is a brief overview of the book.
Chapter 1 sets the general stage for the book, by discussing what research
is, and what researchers are.
Chapter 2 discusses the goals of research and lays down the basic steps to-
ward a research career.
Chapter 3 answers the question of how to get started by looking for a suit-
able supervisor, reading relevant literature and getting new ideas. It advocates a
principled method with three key steps to guide any research effort.
Chapter 4 presents a general methodology on how to evaluate potential
directions in order to find one that is high impact, and how to conduct solid and
thorough research. It goes into details on the differences between empirical, theo-
retical, and multidisciplinary research.
Chapter 5 discusses how to write and publish high quality papers in com-
petitive journals and conferences. It also provides perspectives from reviewers of
journal and conference articles, and provides tips on how to handle reviewers’
comments.
Chapter 6 presents a collection of commonly met misconceptions and pit-
falls, and offers useful tips on paper writing.
Chapter 7 discusses how to plan, organize and write a Ph.D. thesis, and
how to defend a Ph.D. thesis. It presents two typical approaches: top down and
bottom up.
Chapter 8 gives readers a detailed picture on life after Ph.D. by depicting
a realistic picture of the life of a professor. This chapter also discusses important
issues such as technology transfer.
Both authors have been university professors for over 20 years, and have su-
pervised many Master’s and Ph.D. students. Some of our Ph.D. students are now
professors in universities in the US, Canada, China, and other countries. Some
are researchers in research institutes and research labs at Google, IBM, and other
companies. Some are running start-up companies successfully.
We have put a lot of effort into helping our students to choose thesis topics,
do solid research, and to publish top-quality papers. Often co-authoring papers
with students in our research, we have published over 300 peer-reviewed jour-
nal and conference papers. We have also been Associate Editors and Editor in
Chief of several top-ranked journals in the computer science field, such as IEEE1
Transactions on Knowledge and Data Engineering and ACM2 Transactions on
Intelligent Systems and Technology.
A more detailed biography of the authors can be found at the end of the
book.
1 IEEE: Institute of Electrical and Electronics Engineers
2 ACM: The Association for Computing Machinery
CHAPTER 1
Basics of Research
In this chapter, we examine some of the frequently asked questions about the
basics of research. If you want to know what researchers and scientists are like, if
you are still not sure whether doing research is a suitable career for you, or if you
are just a beginner in research, read on.
1.1 WHAT IS RESEARCH?
In natural science and engineering, “research” can be loosely defined as making
novel and significant contributions to our understanding of the world, based on
reproducible observations and verifiable results. There are often large overlaps
in scope between research in science and engineering. In science, the emphasis
is on discovering new theories, paradigms, approaches, algorithms, simulations,
experiments, and so on, while in engineering, the emphasis is to solve real-world
problems with new technologies, designs, processes, methods, models, testing, and
so on. Novelty and significance are the key ingredients of research in both science
and engineering.
Something that is “novel,” “new,” “original,” “creative,” “unconventional,”
“innovative,” or “ground-breaking,” means that it was unknown previously. In the
research arena, it usually means that it was not published previously in academic
journals, conference proceedings, books, technical reports, or other accessible
venues. Novel work must also be significant or have high impact, be important,
useful, seminal, and so on, to qualify as research. A highly significant research will
ultimately bring huge benefit to many people in the world.
How can novelty and significance be judged? Clearly, if a work, either in the
same or in different forms, has been published previously, it is certainly not novel.
If the work is only a small derivation or obvious improvement from previous work,
it is also not considered novel. “Significance” is harder to judge. A research work is
highly significant if it lays the foundation for future work, very significant if it dis-
covers new paradigms or methods that can solve new problems, or quite significant
if it outperforms previous methods on some important tasks. Often significance
can only be assessed in the future; in many conferences and journals, a 10-year
“most significant paper award” is given to a paper that made the most significant
contributions 10 years ago. One informal measure of the impact of research work
is how many other researchers have used this work as the basis to build their own
research programs, and how many research papers have cited the work after it was
published. Thus, experienced researchers are often called on to review and judge
the significance of research papers submitted for publication.
Chapter 3 to Chapter 6 will discuss the process of doing research in greater
details.
1.2 WHO ARE RESEARCHERS AND WHO ARE NOT?
Researchers conduct research as the main part of their professional career. They
are also often referred to as scholars or academics. This book is written mainly for
researchers-to-be, especially Ph.D. candidates, in natural science and engineering.
The book may also be helpful to young researchers in their early careers, such as
assistant and associate professors in universities. In many of the sciences, especially
natural sciences, researchers are also called scientists. In mathematics, they are
known as mathematicians; in computer science, computer scientists; in biology,
biologists, and so on. Most researchers work at universities, research institutes, and
research labs of large companies.
One misconception about researchers and scientists that we want to correct
is that typical researchers are old “nerds” or “geeks”: people with either gray hair or
no hair, wearing torn clothes, thinking all the time and talking to themselves, and
who are incomprehensible and unapproachable. In reality, most researchers and
scientists are full of life, fun, versatile, and have many hobbies. A typical example
is Professor Richard Feynman, a physicist and Nobel Laureate who also played
music, learned to draw, went to bars, married (three times), acted in a film, trav-
eled around the world, wrote many popular science books, and so on. We highly
recommend that you read his legendary book: Surely You’re Joking, Mr. Feynman!
(published by W.W. Norton). In fact, most researchers and scientists that we know
of, including us, are full of color and are “normal” people!
Here are some typical examples of non-researchers. Salespersons are not
researchers. People who follow routines and instructions, like factory workers
and accountants, are not researchers. Teachers and instructors whose main job is
teaching in schools, colleges or universities are usually not researchers, unless they
develop, test, and experiment on new techniques for education. Engineers solve
real-world problems and build things, and if they make novel and significant con-
tributions in the process, they are also researchers. Medical doctors and dentists
are professionals and usually not researchers, unless they conduct research on new
medicines or new treatment.
On the one hand, researchers, such as professors, often also do other routine
work such as teaching, managing research grants, and so on. On the other hand,
many non-researchers also do some research in their work.
1.3 WHAT DO RESEARCHERS DO?
Here is a list of 11 tasks that researchers often do in their day-to-day work.
Chapter 8 describes a typical day in the life of a researcher in going about some
of these tasks.
Task 1.
Explore and come up with new ideas (i.e., novelty of re-
search). To make sure that the idea is new, researchers must know well
the frontier of their research areas by conducting an extensive litera-
ture search. Chapter 3 of this book will discuss this in great detail.
Task 2.
Validate and implement the new ideas to see if they work, and
make them work well (i.e., significance of research). Chapter 4 will
describe how to do solid research in detail.
Task 3. Write up the research outcome in Tasks 1 and 2 as “manu-
scripts” or “papers,” and submit them to academic journals, confer-
ences, book publishers, or other media outlets. Usually the research
manuscript is first peer-reviewed by other researchers in the same
field, as they are probably the only qualified people to determine if the
work is novel and significant enough to be accepted for publication. To
ensure fairness and frankness, reviewers are always anonymous. Often
the selection is quite competitive: many top journals and conferences
may have an acceptance rate of only 10-20% for full-length papers.
That is, the majority of the submissions are rejected. Chapter 5 and
Chapter 6 will discuss how to write manuscripts well. Chapter 5 will
also share some insights about how papers are reviewed.
Task 4. Review papers and manuscripts from other researchers to see
if they are novel and significant enough to be accepted for publica-
tion. In this role, the researchers become reviewers, as mentioned in
Task 3.
Task 5. Administer academic journals and organize academic
conferences, especially when one becomes an established researcher.
These duties are often time-consuming, and are usually on a volun-
tary basis (i.e., no extra income), although in the long run, they help
increase the academic and professional visibility of the researcher in
his/her field.
Task 6. Attend academic conferences to present papers, to learn the
most recent research results, and to exchange research ideas and knowl-
edge with other researchers. This is also an important “networking”
time with other researchers in the same field. Established researchers
may also be invited to give keynote speeches at conferences.
Task 7. Apply for research grants from government and other organi-
zations to support research, including providing support to graduate
students. As most research work may not have immediate monetary
return, usually the government and organizations invest money as
grants to support research. The grant application can also be highly
competitive. See Chapter 8.
Task 8. Supervise graduate students, especially Ph.D. candidates, so
they become researchers in the near future. This book is mainly about
how to successfully complete both Master’s and Ph.D. study, includ-
ing thesis writing and defense. It is also about how to be an effective
supervisor.
Task 9. Teach graduate and undergraduate courses in universities
and colleges. Some of those students will become researchers in the
future.
Task 10. Perform administrative work for the organizations that re-
searchers work in. In the university this includes committee work in the
department, faculty, or college, writing reference letters, and so on.
Task 11. Commercialize research outcomes (also known as conducting
"technology transfer"). If particular research work is fundamental,
the outcome is often published in open media, free for other
researchers to verify and utilize for further research. However, some
research is applicable in the real world, and the results can be com-
mercialized. In this case, researchers may also seek to protect their
intellectual properties with patents and so on. A small number of
researchers even create spin-off start-up companies to commercial-
ize their research. We also have a wealth of experience in industrial
applications and technology transfer, and we will discuss this topic in
Chapter 8 of the book.
Many researchers work as professors, with ranks known as assistant, associate, or
full professors, in research-oriented universities. Usually they spend a considerable
amount of time and effort in Tasks 1, 2, 3, 7, 8 and 9. In terms of research topics
and methodologies (Tasks 1 and 2), university professors are quite free in choosing
whatever interests them the most, provided that they can produce novel and
significant results. They can also freely express their academic views and results
in publications (Task 3), provided that the work is peer-reviewed and accepted.
Depending on individual researchers, they can also be quite involved in Tasks 5
and 11, and other tasks on the list.
Researchers may also work for large companies and various organizations.
For example, Microsoft, Google, IBM, GE, DuPont, Pfizer, Boeing, National
Institutes of Health (NIH), National Research Council of Canada, and so on
all have divisions for research. Researcher’s main duties usually include Tasks 1,
2, 3, and 11. They may also be involved in Tasks 4, 5, and 8; in fact many Ph.D.
students conduct research for companies and organizations as interns. In terms
of research topics, they are not as free as university professors; they often have
to work on research problems that are directly related to the company’s service
and products. However, Google has a policy, called Innovation Time Off, under which
its engineers can spend 20% of their work time on new projects that interest them
but may not be on the company's current agenda.
Ph.D. candidates are researchers-to-be, supervised by university professors.
Their main job is Tasks 1–3. Often they also work with their supervisors on many
other tasks including Task 8. As senior Ph.D. students, they may help junior Ph.D. and
Master’s students to do research. They need to learn to do these tasks so that
when they finish their Ph.D. program, they become independent researchers who
can perform Tasks 1–11 reasonably well by themselves. Some disciplines, such as
Biology and Physics, have the practice in which one often follows the Ph.D. with
a postdoctoral fellowship, a half step between Ph.D. student and pro-
fessor. These positions are usually short, but give the researcher an opportunity
to learn parts of the process they did not see as students, notably managing group
budgets, and to observe approaches to management different from their Ph.D. supervisors'.
1.4 What Skills and Abilities Are the Most Important for Researchers?
Both of us (the authors) were fascinated by the stories of the famous mathemati-
cians and physicists (such as Archimedes, Newton, Euler, Gauss, and Einstein),
and dreamed of being scientists when we were young. We can still vividly recall the
time when we were middle school students in China. This was the time when the
Cultural Revolution (during which intellectuals were suppressed1) had just ended,
and economic reform was about to start. One day, a vivid news report entitled
"The Goldbach Conjecture" (in Chinese) appeared in all major newspapers, radio
broadcasts, and magazines in China. It reported on a Chinese mathematician named Chen,2
who had proved the "Last Hurdle" result toward solving Goldbach's Conjecture. The report
struck like a thunderclap, awakening the youth of China: the "springtime for modern
science" had arrived!
The Conjecture, proposed by Goldbach in 1742, can be simply stated as:
every even integer greater than 2 can be expressed as the sum of two prime num-
bers (one prime plus another prime, or 1+1). Much progress had been made toward
the proof of the conjecture since it was proposed. For example, mathematicians
before Chen proved that any sufficiently large even integer could be expressed as
the sum of a prime and a number with at most 6 prime factors (1+6). Chen spent
more than 10 years in his 50-square-foot apartment and proved the "Last Hurdle"
of the Conjecture: every sufficiently large even integer can be expressed as the sum
of a prime and a number that is either a prime or the product of two primes (1+2).
His proof took over 200 pages.
There was only one step to reap the “crown in mathematics” (proving 1+1).
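For readers who prefer symbols, the "1+1" and "1+2" shorthand can be restated as
follows; this is our own compact restatement of the standard formulations, not text
from the original news report:

    \begin{align*}
    \text{Goldbach (1+1):}\quad & \text{for every even } n > 2:\ n = p + q,\ \ p, q \text{ prime};\\
    \text{Chen (1+2):}\quad & \text{for every sufficiently large even } n:\ n = p + q,\ \ p \text{ prime},\\
    & q \text{ prime or } q = q_1 q_2 \text{ with } q_1, q_2 \text{ prime}.
    \end{align*}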
One of us (Ling) was so inspired by the report, and intrigued by the appar-
ently simple conjecture, that he went to buy several books on number theory by
himself (in those years food was still a scarce resource), and spent days passionately
reading them and trying to prove 1+1. He knew the proof must be hard, but he
believed that he could be very creative, and could think of a novel approach that
others had not thought of. One day he believed he found an ingenious proof on
only two pages, and he rushed to his father announcing his great discovery! His
father calmed him down, and asked him to check the proof carefully. You can
imagine the outcome.
1 http://en.wikipedia.org/wiki/Cultural_Revolution
2 Chen Jingrun. http://en.wikipedia.org/wiki/Chen_Jingrun
After he entered his undergraduate study in Computer Science, he wrote
a computer program to verify the Conjecture for integers with many digits, hop-
ing to find a counter example to refute the Conjecture. You can imagine the
outcome.
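As an aside for readers who want to try this themselves, here is a minimal Python
sketch of the kind of brute-force check described above; the function names and the
small search limit are our own illustrative choices, not the program Ling actually wrote:

    # Illustrative only: brute-force check of Goldbach's conjecture for small even numbers.
    def is_prime(n):
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def goldbach_pair(n):
        """Return a pair of primes summing to the even number n, or None if none exists."""
        for p in range(2, n // 2 + 1):
            if is_prime(p) and is_prime(n - p):
                return (p, n - p)
        return None

    # No counterexample below 10,000; a real search would need far larger limits
    # and much faster primality testing.
    for n in range(4, 10000, 2):
        assert goldbach_pair(n) is not None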
The Conjecture of 1+1 remains open today. It has now been verified by
computers for even integers up to 18 digits long, according to the Wikipedia page on
the Conjecture. But he still dreams that someday he can prove it, or refute it.
Do you have dreams, passion, curiosity, and creativity? Almost everyone
does. These are some of the most important qualities for being researchers. Besides
these, we list what you need to succeed on the path of research below.
• Passion, focus, enthusiasm, and interests in research. Researchers
must be keenly interested and highly passionate about their research,
so that they can stay focused and study research problems deeply over
weeks, months, or even years.
• Curiosity and creativity. Researchers must be highly curious and
creative. They often ask many questions, both in research and in daily
life, and think “out of the box” for new ideas and solutions that most
people have not thought about. They may often think of “plot holes”
when they watch movies, and discuss how to make the movies better!
• Critical and independent thinking. Researchers must not just fol-
low the conventional wisdom and established views. They must think
independently and ask critical questions (e.g., what is wrong? How
can I make it better?).
• Risk taking. Often new ideas do not work, or have been explored
and published by others. Researchers must be willing to take risks in
exploration.
• High scientific integrity. Researchers must be willing to take risks and
fail, and be honest and transparent about failure. They must also be
truthful in reporting their research outcome (see Section 5.4). Peter
Medawar’s excellent book, Advice to a Young Scientist, should be read
by all graduate students.
• Quick learning and strong analytical and problem-solving skills.
Researchers need to be quick learners for new knowledge and tech-
niques needed for research. They must have good analytical and
problem-solving skills (which can be learned) to analyze, implement,
and test their new ideas (conducting solid research).
• Diligence. Researchers must stay focused, think deeply, and study
new research problems in great depth. They often work for more
than 8 hours a day, and 40 hours a week. They may spend months or
even years trying to make some breakthrough in their research.
• Good communication skills. As researchers disseminate their results
via manuscripts or papers, it is extremely important that they learn to
write well. They must be good "storytellers": knowing their audi-
ence (readers) and convincing others quickly to believe in their story
(accepting their papers). They must also be able to present their ideas
clearly and convincingly in conferences. University professors usually
need to teach courses so they must learn to be good instructors, and
love teaching classes.
What we list above are crucial traits and skills for researchers; other positive per-
sonal traits and abilities are also important. You can ask yourself if you are suitable
to be a researcher by comparing your personality and strengths with the list above.
Note, however, that most of the required abilities and skills can be learned and
improved, and one weak ability may be compensated by other stronger ones. For
example, if you are not very strong in mathematical theory, you can choose a thesis
topic on applied or experimental research.
1.5 What Are Some Pros and Cons of Being Researchers?
Here is a list of “Pros” of being researchers. It reflects our personal views.
• High career satisfaction. Researchers have more freedom in choosing
what interests them the most to work on than people in many other forms of
employment. University professors also have flexible working hours
in their university office, and have summer months in which they
can focus on research, and travel for conferences and research visits.
It is also highly rewarding when researchers make new discoveries
and breakthroughs, which benefit people and are recognized by their
peers.
• Protected research environment. Researchers, especially university
professors, have a relatively stable and protected position for research.
These positions are usually not much affected by the short-term
economic downturns and by the opinions of politicians. To protect
academic freedom and give researchers time to produce high-impact
results, most universities have a “tenure” system—tenured professors
cannot be fired easily when they 'dissent from prevailing opinion,
openly disagree with authorities of any sort, or spend time on unfash-
ionable topics' (Wikipedia), or if their research 'quantity' fluctuates
due to long-term research. Publishing a few high-impact papers is often
better than publishing more low-impact ones.
• Reasonably good income. Researchers usually have a Ph.D. degree,
and their entry salary is often much higher than those with Master’s
or Bachelor’s degrees. In addition, researchers’ salary is also quite
stable; there is usually no “bonus” or “penalty” if they accomplish a lot
or a bit less for Tasks 1–10 in a particular year. This can be regarded
as a disincentive, we suppose, as salespersons can earn more money if
they sell more products. However, professors can earn an extra income
if they hold secondary consulting or business positions, or if they are
successful in technology transfer (Task 11).
• Researchers are highly respected by most societies. In 1999 an
A&E Biography program selected the 100 most influential persons
of the last millennium (from the year 1000 to 2000). Among the
top ten most influential persons, five are scientists; they are Galileo
(#10), Copernicus (#9), Einstein (#8), Darwin (#4), and Newton (#2).
They are highly respected by people. They are also the ultimate role
models for young researchers. In addition, we have many other giants
to look up to: from Nobel Laureates (such as Richard Feynman), to
winners of the Turing Award (the highest distinction in the computer
field) and the John Fritz Medal (often referred to as the highest award in the
engineering profession).
Some possible “Cons” of being researchers are:
• Researchers often work more than 8 hours a day and 40 hours a week
to stay competitive. The amount of work needed for doing Tasks 1 to
11 well can be daunting and intimidating. However, most researchers
are highly effective, and they learn to manage time well. In addition,
research is a career, not just a “job.” Often researchers are intrigued
by research questions, and enjoy thinking about them when they have
extra time. They are also thrilled when a breakthrough is made.
• Long preparation time. To become a researcher, one usually needs to
study at a university for one’s Bachelor’s, Master’s, and Ph.D. degrees.
Sometimes they need to take a few years of post-doctoral positions
after their Ph.D., before a more stable position is secured. However,
Ph.D. candidates and post-doctoral fellows are partially researchers.
They usually receive a small to moderate income during that time.
• Relatively narrow career choice. Depending on the economy and
available funding, in some years the number of job openings for
post-doctoral fellows and professors in universities and research-
ers in companies can be limited. In addition, the number of Ph.D.
students graduating in the last few decades is increasing. Thus, the
competition for new positions for professors and researchers can be
very high. Also, after you get a Ph.D. degree, you may be "overly
qualified" for jobs aimed at Master's (or Bachelor's) degree holders. However, the
economy goes in cycles, and there are many other career choices for
researchers, including entrepreneurship (start-ups). When you choose
research topics, you must also think carefully about what type of jobs (e.g.,
professors, researchers in companies, etc.) you want to take in the fu-
ture. We will discuss how to select research topics in Chapter 3.
1.6 So, How Do I Become a Researcher?
Most researchers learn to do research during their Ph.D. study in a reputable uni-
versity, supervised by a university professor. To be accepted in a Ph.D. program,
you normally need to have Master’s and Bachelor’s degrees in the same or similar
areas. If you study for your Master’s and Ph.D. degrees under the same supervisor,
usually the combined duration would be shorter than when you work under dif-
ferent supervisors.
1.7 What Are the Differences between a Master's and Ph.D. Thesis?
First of all, many universities do not require all Master’s students to write a
Master’s thesis; often you can get the same Master’s degree by the “course op-
tion”—taking a large number of graduate-level courses. But if you want to get a
taste of doing research, or if you plan to pursue Ph.D. after your Master’s study,
then taking the “thesis option” is advised.
As for the differences between Master’s and Ph.D. theses, it is difficult to
draw the line precisely. It certainly does not depend on the length of the thesis!
Some folk legend says that the shortest Ph.D. thesis is 14 pages long, but the
longest is over 1,000 pages. Both Master’s and Ph.D. theses require novel and sig-
nificant contributions, but such requirements for a Ph.D. thesis are much higher.
Occasionally there is debate among thesis examiners about whether a thesis
being examined really belongs to the other category!
One intuitive and approximate way to distinguish between these two types
of theses is that the work in a Ph.D. thesis should be publishable or already pub-
lished in several top journals and conferences in the field. A Master’s thesis, on the
other hand, may only be publishable or already published in one or two moderately
competitive journals or conferences in the field.
Several typical differences between Master’s and Ph.D. theses and students
are outlined below:
• A Master's thesis can apply previous work to a new problem or ap-
plication; a Ph.D. thesis normally contains significantly new theory,
methods, and applications.
• A Master's thesis can make small, incremental improvements from
previous work; a Ph.D. thesis usually presents and studies a new topic
in the field, and makes much larger contributions.
• A Master's thesis can be a critical survey of existing works, often with
additional comparison of these works theoretically or empirically; a
Ph.D. thesis must present some new methods, and compare them
with previous works convincingly either by theory or experiments.
• A Master's thesis can report negative results of a seemingly promising
approach, and analyze why it does not work; a Ph.D. thesis should
contain positive results in addition to analysis of the negative results.
• If one graduates with a Master's degree, he or she may not yet have learned to
become an independent researcher; his or her job is usually not doing
full-time research. A Ph.D. graduate, on the other hand, should be
an independent researcher, and can compete for and take a university
professor job directly.
Again, the distinction is vague, and varies among different fields and disci-
plines, universities, and countries.
Though this book is written mainly for Ph.D. candidates in Science and
Engineering, it applies to Master's candidates as well, with correspondingly lower requirements.
We will indicate special aspects for Master’s candidates in appropriate places in
this book.
1.8 How to Find a Suitable Supervisor for Ph.D. Study
Different departments and universities have different ways of pairing graduate
students with supervisors. Choosing an appropriate supervisor for Ph.D. study is
crucial, as you may be “stuck” with him or her for the next 3–5 years. This is also
important but less critical for Master’s students as they usually work with their
supervisors for 1–2 years. If you and your supervisor do not have a good working
relationship in these years, disagreement, tension, and occasionally, hardship and
conflict will arise.
In general, the following factors should be taken into consideration when
choosing your supervisors:
• Research areas and interests. The most important factor
is whether your research passion and interests match well with
your supervisor's. Nowadays professors' profiles, research areas,
and publications are usually posted online. We recommend that you go
through the department’s website of the school that you are applying
to carefully, and spend time studying potential supervisors’ research
and publications. You can also write to the potential supervisors, with
some intriguing questions about their research, and ask them if they
are willing to supervise you. You should mention your research inter-
ests and professors you wish to be your supervisors in the application
materials. If a professor agrees to supervise you, the chance of being
accepted is much higher, given that you satisfy the basic requirements
of the university and department that you are applying to. Some depart-
ments let students choose supervisors one year after they are admitted.
This would give you ample time to take courses offered by various
professors and talk to them before you choose a supervisor.
• Supervisory philosophy and style. This is almost as important as
research areas and interests mentioned above. Two orthogonal dimen-
sions can be considered: the amount of pressure and guidance. Some
professors are very demanding, which may not be bad for you if you
need some pressure to excel, while others are relaxed, so you can work at
your own pace. On the other hand, some professors give their graduate
students a lot of guidance, such as what research problems to work on,
how to write papers, while others give them a lot of freedom; you may
need to figure out a lot of things, including thesis topics, by yourself.
Does a potential supervisor’s style of supervision match your personal-
ity and working habit? It is important to find this out before settling
down with a supervisor. Bring Figure 1.1 to your potential supervisor,
and see where he or she stands.
• Funding availability. This is also a practical issue to consider. Though
the stipend for Ph.D. candidates is usually too small to live on lavishly,
you need to be able to live healthily without the need to work eight
hours a day in restaurants to make ends meet. Some professors' fund-
ing can last beyond the usual period, e.g., four years in many universi-
ties. Some will last only one or two years. Some professors can fund
your conference trip if your paper is accepted, while others may not.
Figure 1.1: Two dimensions of the supervisory style: the amount of pressure vs. the amount of
guidance.
• Academic and industrial social networks. It is often said that one is
in some ways defined by his or her friends. This is very true for gradu-
ate studies and even research careers. Remember that when you join
a supervisor’s research group, you enter a social network, consisting of
all of your supervisor’s former students, advisors, current colleagues,
contacts and current and future students. These people will be there to
help you succeed. This means that you will be able to talk to them eas-
ily about new ideas, references, and job possibilities during and after
your Ph.D. study. You might leverage their social networks as well,
thus expanding your views, support, and career opportunities.
• Track record. It is extremely important for you to do background
study to see the academic standing of your potential supervisors in
their respective field of research. Today this is easy to do by check-
ing on the publications and their citations in search services such as
Google Scholar. It also helps to talk to former and current graduate
students of the potential supervisors for the issues discussed above.
It is also a good idea to read publications of graduate students under
supervision of potential supervisors.
1.9 How Long Does It Take to Get My Ph.D. Degree?
We often hear students asking: "How long will it take before I can gradu-
ate with a Ph.D. degree?"
Well, to us, the student is asking the wrong question. To begin with, we
must make clear what a Ph.D. degree entails (for more details, also see the next
chapter: “Goals of Ph.D. Research”). Getting a Ph.D. degree means that you are
to learn to view the world in a different way, to be able to identify a new and chal-
lenging problem on your own, and to solve the problem via a new methodology. It
also means that you can claim to the world that you are an expert in one impor-
tant, albeit narrow, field, in which others consider you a leader. One
practical criterion is to go to conferences or workshops, where if you find people
looking for you because they want to talk to one of the foremost experts on the topic,
you know you are ready to graduate, since you have crossed the line to being an
expert in your chosen field. Likewise, if you find your work starting to be cited by
peers, it is a good sign that you are ready to get your Ph.D. degree, and be called a
“Dr”!
Thus, the answer to your question (“how long does it take for me to gradu-
ate”) is “it depends on you!” Unlike undergraduate study, which usually has a
fixed length, Ph.D. study can last from 3 years to 10 years. It all depends on how
effectively you accomplish Tasks 1-3 mentioned earlier in this chapter, how you
select a suitable Ph.D. topic, and how well you work with your supervisor. This
book will provide you with useful guidelines and effective methods of obtaining
a Ph.D. in a relatively short time. Most of our Ph.D. candidates get their degree in
3 to 5 years.
1.10 Three Representative Graduate Students
Throughout this book, we will use three fictitious Ph.D. students as examples.
They represent three general types of Ph.D. students, and are based on several past
Ph.D. candidates under our supervision.
Student A is a Ph.D. candidate whose strength is academic research and
whose career dream is a professorship in a university in the US or Canada. He
published a good number of papers in competitive conferences and reputable jour-
nals, and got his Ph.D. in 4 years. He then became a post-doctoral fellow for two
years to deepen his research. He is now a tenure-track assistant professor in a US
university. Through the book we will see how he started, how he chose and tried a
few different research topics before settling down on his Ph.D. topic, how he did
his research, and how he published many papers.
Student B is a Ph.D. candidate whose strength is empirical research. He
wants to work in a research lab in a company. He has written several conference
papers and one journal paper related to the topic of his thesis, but more impor-
tantly, he has participated in an international competition, and got into the top-3
in a well-known large-scale experimental contest based on real-world data.
Student C is a Ph.D. candidate whose strength is engineering and industrial
applications, as well as commercializing his ideas. His research is more applica-
tion oriented, and his aim is to build some novel and useful systems in the future
and start a venture to commercialize his ideas. He developed a novel experimental
system that offers sufficient value over the current state of the art and existing business offerings. Thus,
he has successfully filed a patent and published a few news articles on his technol-
ogy. He also published a few conference papers. He built a startup business after
graduation.
To cover a whole spectrum of graduate students in our book, we refer to
these types of students as Student A, Student B, and Student C, respectively,
where they correspond to the three different approaches to Ph.D. study: Student
A for an academic and fundamental research type, Student B for an empirical-
research type, and Student C for an application and entrepreneurial type. Later in
the book, we will use Student A1, Student A2, Student B1, and so on, to describe
specific instances of various student types to highlight some specific details. In
such cases, Students A1 and A2 are two examples of type-A students.
• • • •
Chapter 2
Goals of Ph.D. Research
Before we describe in detail how to get a Ph.D., we want to first discuss the ul-
timate goals of Ph.D. research in this chapter, so that you “begin with the end in
mind” for your Ph.D. study.
2.1 Goal #1: Be the Best in the Field
The number one goal of your Ph.D. study is that, in an important, albeit narrow
field (i.e., your thesis topic), your research is the best in the world (in terms of novelty
and significance) at the time when you complete your thesis, and you are among
the very few best experts in that topic.
We can use a very simple figure to illustrate this, as in Figure 2.1. The x-axis
represents a certain domain, such as artificial intelligence or water purification.
The y-axis is the expertise of knowledge in that domain. A horizontal line in the
figure represents the current best knowledge, which is also called the frontier or
the state-of-the-art of research. Assume that you will get your Ph.D. in four years.
Your yearly research progress is depicted in the figure. Note that this overly simpli-
fied diagram is for illustration purposes only.
Near the end of your Ph.D. study (say, the third and fourth years), you
should produce research that advances the current best knowledge (so that the
“best line” will be raised a bit due to your work). The height of your knowledge
curve above the “best line” is roughly the novelty of your work; the width is
roughly the significance or impact. You should establish yourself in this important
(albeit narrow) field. In other words, you should “own” an area. How do you prove
that you are ready to establish yourself in your field? A typical way is publishing a
series of high-quality papers in top-ranked conferences and journals in your Ph.D.
area (Chapter 5 and Chapter 6), and successfully writing up and defending your Ph.D.
thesis (often a nerve-racking process; see Chapter 8).
Figure 2.1: Progress in four years of Ph.D. study.
Student A1, for example, researched in the area of machine learning, an area
in Artificial Intelligence (which is an exciting research area that explores what
makes us intelligent and how to make machines intelligent as well). He published
over 10 papers in top journals and conferences, and finished his Ph.D. thesis in 4 years.
Student B1 published 5 top-ranked conference and journal papers in 4 years, and
finished a system that implements the ideas in these papers. Student C1, on the
other hand, finished 3 papers in conferences, but also filed one patent successfully.
He graduated with a venture-capital company’s offer to start up a new business.
Early in your Ph.D. study (e.g., first year), you can explore various topics
that interest you (the several small peaks in Figure 2.1). But soon you should focus
(one peak), and try to conduct research in one topic in great depth. Examples of
a thesis topic can be cost-sensitive learning with data acquisition (Student A1's
topic, an area of research in machine learning), active-learning and user-interface
design for Student B1 (another area of research in computer science), and, for
Student C1, a new method for harnessing crowdsourcing power (an area of computer
and social sciences where computer and human problem solvers work together to
tackle a difficult problem, such as language translation), which he turned into a
well-formed business model.
of selecting Ph.D. topics in more detail.
During the four years of your Ph.D. study, you are putting in efforts to
literally “push up” your knowledge curves as shown in Figure 2.1. The Area under
Curve (AUC), as shown in the figure, is the area between each curve (such as
your knowledge curve for year 4) and the x-axis. The value of the AUC roughly
represents your effort. This value is cumulative, and should be steadily increasing
during the years of your Ph.D. study. If you do not put in sufficient effort, it
is impossible to advance the research frontier. In Chapter 1 we emphasized that
researchers must be diligent.
You do not need to be, and often it is impossible to be, highly knowledgeable in
all topics in a field. However, you do need to be so in areas related to your Ph.D.
topics. The shape of the knowledge curves in Figure 2.1 illustrates this. For ex-
ample, Student A1’s topic is cost-sensitive learning. He is also very knowledgeable
on this topic (a form of supervised learning), but less so on other areas of Artificial
Intelligence, such as unsupervised learning.
If you are a Master’s student in the thesis option, it usually takes two years
for you to finish your thesis. The situation is similar to the first two years in Ph.D.
study, with two exceptions:
In your first year you often have less chance to explore several topics deeply.
You and your supervisor would quickly decide a research topic together, and you
would quickly make some novel contributions (your Master’s thesis) in a narrow
field in the second year. Figure 2.2 (a) depicts this situation.
You may conduct a critical review of several related topics, or write a com-
prehensive survey of a recent topic. You provide new insights and draw new rela-
tions in one or several broad areas. Figure 2.2 (b) illustrates this situation. See
Chapter 1 for more discussions on the differences between Master's and Ph.D.
theses.
Figure 2.2: Progress in two years of Master's study.
There are several common situations where Ph.D. candidates do not keep
this goal in mind, and may have a hard time, or take many more years to finish
their Ph.D. theses. We will discuss them briefly below.
In the first situation, they spend most of their Ph.D. study exploring differ-
ent topics in the field. They may put in a huge amount of effort over four years
(high AUC), but they do not have a focus, and do little in advancing the state-
of-the-art in any topics they explore. They essentially accomplish several Master’s
theses. But Master’s theses in different topics usually cannot be equal to one Ph.D.
thesis. Figure 2.3 depicts this situation.
In the second situation, they explore several different topics, and they man-
age to make minor contributions and publish a few minor papers in those topics.
In this case, they have published on many topics (which may also earn them many
Master’s theses), but they do not establish themselves in one. They do not “own” an
area, and other researchers do not remember what they have done. Their Ph.D.
theses jump between several different topics, and they may have trouble passing
the thesis defense. Figure 2.4 depicts this situation.
Again, your efforts should have a focus. Instead of publishing three papers
on three different topics, it is much better to publish three papers on one topic,
making novel and significant contributions in that area (as in Figure 2.1). By the
end of the Ph.D. study, you should "own" an area, so that people will turn to you
for advice or for reviewing papers in that area.
Figure 2.3: What to avoid in a Ph.D. study: covering many areas shallowly.
A few exceptionally strong and productive Ph.D. candidates do manage
to advance research in more than one area, but their Ph.D. thesis usually only
consists of published papers in one area. A Ph.D. thesis should be coherent, and
focus on one (relatively broad) topic. For example, Student A1 in his early research
explored the area of co-training, and published papers in a couple of top confer-
ences. But then he found that it was not easy to select a Ph.D. thesis topic. After
many brainstorming meetings (see Chapter 3), he changed his area to cost-sensitive
learning with data acquisition, a less-studied but important new area of cost-sen-
sitive learning. He worked very hard, and published several top-rated papers on
this new topic in the next 2.5 years, and established himself well in this area. At
the end of his four-year Ph.D. study, he successfully defended his Ph.D. thesis on
cost-sensitive learning with data acquisition. The thesis does not include his early
published work on co-training.
Figure 2.4: What to avoid in a Ph.D. study: publishing in several topics but not well
established in any one of the topic areas.
Of course, when you become a university professor after your Ph.D., you can
develop several strong research areas, working with Ph.D. candidates who have
different interests. It is usually quite difficult for a Ph.D. candidate to do so in a
short amount of time (3–5 years).
In the third situation, some Ph.D. candidates are keen to find some research
topics that can be advanced with little effort (low AUC). They wish they could
just focus on a very narrow problem. Figure 2.5 illustrates this situation. This
may only be possible if it is an easy and quick extension of your supervisor’s (or
previous Ph.D. candidate’s) work. But in this case, you are not learning to be an
independent researcher (see Section 2.2), your contribution is usually minor, and
you do not own the area. Thus, for Ph.D. research, the situation in Figure 2.5 is
impossible. There is no shortcut in producing highly novel and significant work
with little effort, or without knowing related areas.
Figure 2.5: An impossible situation in Ph.D. study.
Your supervisor may have a good vision on which research topics and prob-
lems are promising to work on as your Ph.D. topic. But your passion, interests, and
technical strengths are crucial in Ph.D. topic selection. Chapter 3 and Chapter 4
will explain this in great detail.
2.2 Goal #2: Be Independent
The number two goal of your Ph.D. study is to learn to become an independent
researcher so that you can do tasks listed in Chapter 1, especially Tasks 1–3 (briefly,
find new ideas, do solid research, and publish top papers), well by yourself. This
is simply because after you earn your Ph.D. degree, you may be employed in a
research-oriented university, and suddenly, you have no research supervisors! You
must do everything (Tasks 1–11) by yourself. Being independent also means that
you become an independent thinker—you have your own academic viewpoints
and you can back them up with your research.
Figure 2.6: The "independence curve" for a Ph.D. student.
We will use another simple figure to illustrate your learning curve of becom-
ing independent. This time the x-axis is the years you spend in the Ph.D. program,
and the y-axis is the level of independence. It would be good if your independence
curve increases linearly with the year, and better if it grows exponentially. See
Figure 2.6 for illustration.
To be highly independent, you need to develop quickly those skills and
abilities crucial for researchers discussed in Chapter 1.
All three of the example Ph.D. candidates we described in Chapter 1
became independent within the 4–5 years of their Ph.D. study. More specifi-
cally, in the first two years, we (i.e., the supervisors) usually held daily or weekly
meetings with them, discussing new ideas, methods, and experimental results in
detail. We also spent great effort training them to write their first papers well
(see Chapter 5). In the later years, we usually met weekly, or whenever needed, to
discuss ideas, approaches, and results at a high level. They could extend and modify
the ideas, design and modify experimental settings, and write up the paper drafts,
mostly by themselves. They have also learned to review papers and write parts
of grant proposals. Some of them have also taught courses, and helped supervise
Master’s students.
2.3 Three Key Tasks for Getting a Ph.D. (and Master's) Degree
As we discussed in Chapter 1, as a Ph.D. candidate, your key tasks are Tasks 1–3.
We will refer to them briefly as: “find new ideas,” “do solid research,” and “publish
top papers,” respectively. It is unlikely that you can get a Ph.D. by doing Task 1,
then Task 2, and then Task 3 in a sequence only once. Instead, it is an iterative
process. There are also often interactions among Tasks 1, 2, and 3. Through several
iterations, you broaden and deepen your Ph.D. topic by building up enough re-
search results and publishing enough papers. In this case your Ph.D. thesis would
simply be a “recompilation” or integration of the published papers (more about this
in Chapters 5–7). Figure 2.7 illustrates this process.
As mentioned earlier, a good rule of thumb for being ready to graduate is that you have published a
good number of papers in reputable journals and conferences based on your Ph.D.
topic. In the following four chapters (Chapters 3–6) we will describe strategies for
accomplishing these three key tasks effectively. Chapter 7 will provide you with
guidelines on how to write your Ph.D. thesis, and successfully defend it.
Note that although your main job as a Ph.D. candidate is Tasks 1–3, often
you also work with your supervisor on all other tasks described in Chapter 1. By
the end of your Ph.D. study (hopefully within 3–5 years), you should have made
significant contributions in your Ph.D. thesis, and established yourself as a young,
independent, and competent researcher!
Figure 2.7: An iterative process of three key steps in getting your thesis.
If you are a Master’s student, however, you usually only need to go through
one, or at most two, iterations of Tasks 1–3, after you take the required courses.
You should try to publish one or two moderately competitive conference papers, which
should constitute the main body of your Master’s thesis. There is also usually a de-
fense process, but the requirement of novelty and significance in a Master's thesis is
usually much lower than for a Ph.D. thesis. Thus, this book is also suitable for Master's
students—the basic processes and strategies are the same.
2.4 The Milestones of Getting a Ph.D. Degree
After discussing the three key tasks in getting a Ph.D. degree, we will describe a
series of milestones in 4–5 years of your Ph.D. study. They are:
• Complete all course credits (one to two years).
• Pass a written or oral Ph.D. qualifying exam (~2nd year).
• Pass a written or oral Ph.D. proposal defense (~3rd–4th year).
• Pass the final Ph.D. thesis defense (~4th–5th year).
A typical graduate school requires a Ph.D. student to complete between five and
seven graduate-level courses. Students transferring from other departments and
disciplines often have to make up one to three core undergraduate
courses. For example, a Physics student transferring to Computer
Science may be required to take Data Structures and Algorithm Design, Theory
of Computation, and Operating Systems. All the coursework is expected to be
finished in roughly one to two years. To prepare yourself for research, you
should try to select the courses that fall within or are closely related to the research
area that you are interested in. This is also the time to get your feet wet in the
research world. See Section 3.1 for more details.
Many universities require Ph.D. students to take an exam known as the
Ph.D. qualifying exam. It can be oral, written, or both. The purpose is to make
sure that Ph.D. students have sufficient background knowledge to start and con-
duct research.
Research life truly starts when you finish the Ph.D. qualifying exam. By this
time the student is usually in a celebratory mood, joining parties and sleeping late,
relieved that courses and exams are finally over, for life! This period in your
graduate life may be intoxicating. The problem is that many graduate students
never quite wake up from it. The real challenge for Ph.D. students is
that what lies ahead is truly critical, and it is highly advised that the student be
prepared for it.
The next milestone is the thesis proposal defense. Ph.D. students need to
find a thesis topic (see Chapter 3 and Chapter 4), conduct a comprehensive survey
of existing works, find a new and exciting problem, and propose solutions to it. In
the thesis proposal, the student is asked to lay out the problem to be tackled, all
previous approaches in solving the problem, a detailed discussion of their merits
and weaknesses, and the student’s particular plan for tackling the problem. The
defense is usually oral, in front of a small committee often consisting of faculty
members of the same department.
Then you must conduct solid research to prove (or disprove) the proposed
solutions, often after many revisions along the way. This task turns out to be
more difficult and challenging than most people would think, because it is typi-
cally loosely structured, and requires methodical and personal organization and
perseverance to handle it well. Again refer to Chapter 3 and Chapter 4 on how
to find a good research topic, including Gray’s Criteria and the Matrix Method
for finding ideas. It is not atypical for a student to try different ideas to see how
they pan out, each followed by a conference or journal article. Refer to Chapter 5
and Chapter 6 for paper writing. These activities are basically the iteration of the
three key tasks (find new ideas, do solid research, publish top papers) mentioned
earlier in this chapter.
After you publish a good number of journal and conference papers on the
thesis topic, with your supervisor’s approval, you can prepare your Ph.D. thesis and
the final defense. Chapter 7 provides a lot of detailed guidelines in writing and
defending your thesis successfully.
In addition to accomplishing those milestones, Ph.D. students in most uni-
versities also need to spend 10–20 hours per week in a teaching assistant (TA)
job.
2.5 Distractions to the Goals
Sometimes Ph.D. candidates are distracted during their Ph.D. studies from achiev-
ing the two major goals of Ph.D. research effectively.
Many Ph.D. students come from countries whose mother tongue is not
English, or whose culture is very different from that of the country in which they are
enrolled for graduate study. They may face a language barrier, culture shock, homesick-
ness, and other circumstances (as we did when we first came to the US to study
over 25 years ago). Making friends of various cultural backgrounds and quick
learning and adaptation are important and helpful in minimizing distractions and
focusing on your Ph.D. research quickly. If you are not fluent in spoken English
(assuming English is used in lectures and paper writing), preparing the lecture
materials before lectures is very effective in helping you learn the English lan-
guage and describe things clearly. Chapter 5 will discuss how to write papers well
in English, or any other languages.
In a typical university, a graduate student often works as a Teaching
Assistant (TA) to help fund the study. This is like a “real” job in which graduate
students are required to work 10–20 hours each week running tutorials, marking as-
signments and exam papers, and holding office hours for undergraduate
courses. TA work is important and should be accomplished well since besides pro-
viding funding for your study, it also gives you an opportunity to learn to present
yourself well (to students in the class or to individual students). This can be very
useful especially if you want to become a professor in the future. This also presents
a challenge because now you have to learn to balance different parts of a gradu-
ate-student life: teaching and researching, among others. Students should learn to
perform their TA duties effectively, and ensure that a sufficient amount of time is
reserved for research. The same need to balance one's life arises if one has to raise
a family, learn how to drive, travel for fun, participate in sports and social activi-
ties, and generally, enjoy one’s life. Indeed, you need to have a life, or as people say,
“work hard, play hard.”
Even with the high demands of research, Ph.D. students (and faculty mem-
bers) must find time to do physical exercise to keep themselves fit and healthy.
This also improves the efficiency of their brains. In a word, being healthy,
staying highly effective in research work, and living a good life are the keys to achiev-
ing the two goals (that is, being the best in your field and being independent) of
Ph.D. study successfully in 3–5 years.
Some Ph.D. candidates do work hard, and enjoy doing research, to the point
that they do not care how long it takes to get a Ph.D. They enjoy the Ph.D. pro-
gram even if it takes more than 8 years. Often the funding from the supervisor and
university runs out, and they have to teach many courses to make a living (another
distraction). Remember, Ph.D. study is a training process to establish yourself in
some area and become a researcher. This can usually be done in 4–5 years, if you
follow the guidelines in this book. Then you should defend your Ph.D. thesis, and
try to become a full-time researcher, such as a university professor or a researcher in
a company, and be successful.
• • • •
Chapter 3
Getting Started: Finding New Ideas and Organizing Your Plans
Let us start from the very beginning of the journey: the first year of your Ph.D.
study when you will probably take graduate courses, and maybe a comprehensive
exam (or Ph.D. qualification exam). We will focus on issues related to your re-
search, and show you how to start doing research in the first and second years.
3.1 Your First Year
You should begin your Ph.D. (and Master’s) study by taking several broad,
graduate-level courses that are related to your future research, especially courses
taught by your supervisor. For example, if your research direction is machine
learning, then courses such as Machine Learning, Data Mining, and Statistical
Learning are highly relevant. If you intend to pursue a career as an engineer,
then you will probably want to take some fundamental engineering courses,
including supporting courses such as statistics and probability, computer software
tools such as R or MATLAB, and engineering design courses. In addition, you
can read by yourself other broad, graduate-level textbooks and take online video
courses (there are many good ones) in the field. This will give you
enough basic knowledge for research in the field. We use a simple figure to il-
lustrate this, as in Figure 3.1. Step A in the figure shows that, after taking courses
and reading textbooks, you now have a broad knowledge about the field.
While you are taking graduate courses and reading textbooks, you may find
certain topics (such as semi-supervised learning, clustering) fascinating and inter-
esting. Explore them further by:
• talking to your supervisors, who may suggest to you certain papers to
read, or certain promising directions to explore further;
• trying to find recent survey papers on these topics;
• paying attention to invited speeches and special issues of journals on
these topics;
• trying to go to a conference or two, even low-impact ones that are
nearby (and cheap). Talk to researchers, ask them their opinions on
the research areas you are interested in, and listen to their paper pre-
sentations on what sorts of problems they work on, and how difficult
they are. A good paper does not have to win a Nobel Prize to be
good;
• finding recent and high-impact research papers (see Section 3.2) and
reading them critically and creatively (see Section 3.3); and
• writing a survey or review paper in the topic area you are interested in.
Figure 3.1: Progress in the first year: taking courses (Step A), and then exploring a few
areas (Step B).
By the end of your first year, and early in the second year, you should
have studied a few potential research topics and areas more deeply. Step B in Fig-
ure 3.1 illustrates your knowledge curve with a few peaks (the same as the first-
year knowledge curve in Figure 2.1).
3.2 How to Find Relevant Papers (Literature Search)
With the Internet, Google Scholar,1 Arnetminer,2 Microsoft’s Academic Search,3
and other search engines for scholarly publications (such as Elsevier's Scirus and
CiteSeerX4), and publishers' digital libraries to which many universities subscribe,
searching and downloading papers is often a few clicks away (OK, maybe more
than a few clicks). The question is: how to find recent and high-impact papers?
The process of finding out previous published work is often called literature
search. The following are some guidelines:
• Usually, highly reputable journals and conferences review and select
papers very rigorously. Thus, papers published there are usually more
novel and significant. Your supervisor should know well the reputable
journals and conferences in the area, and after you work in the area
for a while, you will too.
• One way to judge the reputation of a journal is to look up its impact
factor. The impact factor (IF) of a journal is roughly the average num-
ber of citations an average paper published in the journal receives in
the preceding two years. For example, if a journal has an impact factor
of 10, it means that on average, in the past two years, each paper in
the journal received roughly 10 citations from other papers published
in journals belonging to a reasonably high-quality pool (a small worked
sketch of this arithmetic follows this list). This pool of
journals, known as the Science Citation Index or SCI, is maintained
by an organization called the Institute for Scientific Information
(ISI).
1 http://www.scholar.google.com
2 http://www.arnetminer.org/
3 http://academic.research.microsoft.com/
4 http://citeseer.ist.psu.edu/
• Papers published by senior and well-established researchers often
have high impact.
• Use scholarly search engines (mentioned earlier) and search for papers
with relevant keywords directly. Those search engines often return
a ranked list, combining publication date (novelty), citation counts
(impact), and so on together. However, they often do not reveal their
ranking algorithms, so you should try different filters, and with several
different scholarly search engines.
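As promised above, here is the impact-factor arithmetic spelled out as a tiny Python
sketch; the numbers are invented purely for illustration and do not describe any real
journal:

    # Rough impact-factor arithmetic (illustrative numbers only).
    papers_published_in_previous_two_years = 50
    citations_received_by_those_papers = 500   # citations from journals in the SCI pool
    impact_factor = citations_received_by_those_papers / papers_published_in_previous_two_years
    print(impact_factor)  # 10.0, i.e., roughly 10 citations per paper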
Figure 3.2 shows an example of searching for the topic of “hierarchical
classification” in Google Scholar. It returns a ranked list of papers based on an
unknown ranking algorithm, but it seems heavily based on citation counts. We
can see that the first few papers were published over 10 years ago, with hundreds
of citations (results in Figure 3.2 were obtained on June 9, 2011). But how to find
recent papers, which often have few or no citations? Google Scholar offers several
useful filters, especially the time filter, with "anytime" as the default. You can change the
time to "since 2010," for example, and it returns a ranked list of papers since
2010.
Figure 3.2: An example of searching for a topic with Google Scholar.
After you find some recent and high-impact papers, read them. You can
then search again with different keywords used in those papers. For example, “text
categorization,” “taxonomy learning,” and “faceted browsing” are different terms
for “hierarchical classification.” You can also use the “Cited by” feature of Google
Scholar, and find all papers that have cited a paper. Pretty soon you will find tens
or hundreds of papers on the topic that you are interested in.
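To make the idea of a ranked list that combines recency and citation counts more
concrete, here is a toy Python sketch; the scoring formula, weights, and paper data
are entirely our own invention and certainly not Google Scholar's (undisclosed)
algorithm:

    # Toy ranking of papers by a mix of citation count and recency (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Paper:
        title: str
        year: int
        citations: int

    def score(paper, current_year=2011, recency_weight=5.0):
        # Papers from the last ten years get a recency bonus; citations measure impact.
        recency = max(0, 10 - (current_year - paper.year))
        return paper.citations + recency_weight * recency

    papers = [
        Paper("Older, highly cited hierarchical classification paper", 2001, 450),
        Paper("Recent hierarchical classification paper", 2010, 12),
    ]
    for p in sorted(papers, key=score, reverse=True):
        print(p.title, round(score(p), 1))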
How can you read all of these papers, many of which can be quite challeng-
ing to read?
3.3 How to Read Papers
You might ask: who does not know how to read papers? Just read them like reading
textbooks! Indeed, most of us read tons of textbooks. Well, there is a fundamental
difference between reading well-explained textbooks and reading research papers.
You read textbooks mainly for understanding and learning, expecting all details
to be worked out and "fed" to you, whereas when you read research papers, you are
expected to develop your own research ideas and questions to surpass their authors!
Another simple way to see the difference is that you read textbooks for receiving
existing knowledge, while you read research papers for outputting new knowledge.
Clearly, the objectives of these types of reading are entirely different.
We often see new Ph.D. (and Master’s) students take two “bad” approaches
in reading research papers. The first bad approach is to spend too much time
reading the paper very carefully and trying to understand every detail of it, as they
read textbooks. Some new Ph.D. students spend days reading one 10-page confer-
ence paper, and come to ask us, “how is this done exactly?” We sometimes receive
emails asking us implementation details of our published papers. (However, if
other researchers are trying to replicate our published work, we will try our best
to reply to them.)
The right approach is to quickly understand the problems, assumptions,
main ideas and the proposed solutions presented in the paper at a high-level. You
will then spend more time to analyze the paper critically, find new ideas, and con-
duct research to surpass the previous papers.
More specifically, you should probably spend 30% (or less) of the time
reading and understanding the paper’s main idea and solutions. This amounts to
perhaps 1–2 hours to read a 10-page conference paper. How can you read quickly
to understand the main idea of a paper, which can be quite technical? Well, the
structure of the paper, if well written, should already give you a hint as to which
parts are important to read. A research paper usually consists of an abstract and
the body of the paper, which is divided into sections with section headings. Each
section may have subsections, and then subsubsections, and so on (we will discuss
the paper structure more in Chapter 6). A well-written paper should be structured
in a top-down fashion; that is, the high-level ideas and solutions should be de-
scribed in high-level (sub)sections. Thus, when we read papers, we should focus
on high-level (sub)sections, and skim over low-level ones which usually contain
technical details.
Figure 3.3 illustrates a hypothetical paper and its associated structure. Quite
often you only need to read the Abstract and Introduction sections quickly (say 15
minutes) to determine whether the paper addresses your thesis topic. If not, there
is no need to read it further. If yes, you would continue to read the main frame-
work of the proposed algorithms, main results of experimental comparison, and
conclusions. We use red boxes to denote the high-level sections you would read
more carefully. The parts marked with the yellow boxes may be quickly skimmed.
The parts without the boxes may be omitted in the initial reading. This helps us
to read a research paper quickly for the main ideas, problems, and solutions at the
high level.
Figure 3.3: Structure of the hypothetical paper. We suggest you read the high-level
sections (red box) more carefully, skim the lower-level sections (yellow box), and omit
technical details (unboxed).
Understanding how a paper should be structured and written will also help
you learn to write your papers this way in the future. We will focus on how to write
powerful and easy-to-follow papers in Chapters 5 and 6.
While you spend 30% (or less) of your effort reading and understanding the
paper, you should spend 70% (or more) effort thinking creatively and critically:
• Critical thinking. What is wrong with the paper? Are the assump-
tions reasonable? Is the problem ill-posed? Does the solution have any
technical flaws?
Student A1 read some papers on co-training (an area in machine
learning) when he took our graduate course. He implemented some
popular co-training algorithms, and tested them on some real-world
datasets. He found that the results were not as good as reported in
the published papers. This led us (Student A1 and his supervisor) to
question: does co-training really work that well? In what situations
does it work well, or not so well? After further research, we wrote and
published papers in some top conferences and journals in the field. If
he did not challenge the previously established views, there would not
be these research outcomes.
• Creative thinking. What else can be used to solve the same or
broader problem? How can I do it differently, and better?
It is extremely important that you form a habit of thinking critically and cre-
atively from now on for whatever you read (even newspapers and magazines) or
watch (TV or movies). We always think of “plot holes” when we watch movies
(critical thinking), and discuss how to make the movies better (creative thinking)!
One of the authors (Ling) also works in child education. One fascinating think-
ing game he often plays with kids is to read a new story to them or watch a new
movie with them. He stops many times during the process and asks kids: what is
wrong with the story? How would you continue? How can you make it better?
It is crucial to develop a curious, creative, and critical mind, as this is one of the
important abilities for doing research (see Chapter 1).
Even when you read a paper published 10 years ago, you could still try to
think critically and creatively to “discover” new ideas from the paper. Such discov-
eries are likely to be re-discoveries (e.g., already published by other researchers).
However, if you are used to thinking critically and creatively, when you read recent
papers at the frontier of the research in your field, your re-discoveries will likely be
true discoveries. Those true discoveries will earn you a Ph.D. degree, and help you
establish yourself in the field.
We want to emphasize again that in reading research publications, read less,
think more. There is ample research in education and psychology showing that if
people (and kids) are given too much information or instructions, they become less
creative and explorative. After all, for researchers, the purpose of reading published
papers is to make new discoveries that go beyond the existing works!
Depending on the culture and environment where you grew up and received
earlier education, you may already be used to challenging the status quo rather
than merely following instructions. If you are not, it may take you some months
or even years to change your mentality to be more critical and creative. But if you
are aware of it, and try hard, you will improve quickly.
There may be one situation where you do need to read carefully the details
of a previously published work: when you need to re-implement it. This may
sound paradoxical, as our goal is to do novel research, but it does provide food
for thought and ready baselines to compare to, especially when starting empirical
research. We will discuss this in detail in Section 4.5.
The second “bad” approach in reading research papers is side-tracking too
far and for too long into some minute details, without seeing the big picture. Often
research papers contain some technical information and methods that you are not
familiar with. For example, a paper on supervised learning may contain a method
that uses quadratic optimization as its subroutine, which you may not know well.
It would be a bad approach if you get an 800-page textbook on optimization and
read it from cover to cover to “prepare” yourself for that research paper.
The right approach is “need-driven learning.” Your undergraduate and
Master’s programs should prepare you with enough basic knowledge for doing
research. For the missing knowledge, such as the aforementioned quadratic opti-
mization in papers you are reading, you sidetrack to learn it quickly, and come back
and focus on the main papers you are reading.
The key in reading published papers is to know what has been done, and
come up with novel ideas. How do you do that?
3.4 Where to Get New Ideas
Scientific discovery, at a very high level, can be regarded as a process of com-
ing up with some new hypotheses, collecting data and performing experiments,
and confirming or refuting the hypotheses. But in reality, the process is far more
complex. Hypotheses are like new ideas in research. How to come up with new,
interesting, and potentially useful hypotheses? Often the hypotheses need to be
modified many times while research is being conducted. How can this be done?
How to design experiments to convincingly confirm or refute hypotheses? This
section discusses how to find new ideas or hypotheses in research.
One risky approach new Ph.D. candidates often use to find new ideas is
to read the “future work” section of a paper, or just do the “obvious next thing.”
However, these are the ideas that the authors have already thought about, and may
have been working on, so they are not really “future” work per se. Thus, there
is a chance that if you worked on these ideas, the authors may always be ahead of
you. In addition, your mind will be restricted by the authors — you still have not
generated new ideas independently.
Earlier, we discussed how you should read less and think more—think
critically and creatively. When you are doing that, often you will have questions
about the published methods, new ideas and hypotheses about solving the same
or different problems, new ways of making assumptions, and so on. You should
write down those questions and ideas right away in the margin or on the back of
the paper. If you read papers on a computer (as PDF files), use note-taking software
to take notes, or an Adobe PDF reader equipped with the note-taking function.
When you think actively, ideas come and go, and it is crucial to write them down
immediately, and study them more deeply later to see if they are indeed new and
worth further exploration.
In fact, new ideas can come from other unexpected occasions when you
keep on thinking about intriguing research questions. Legend says that the an-
cient Greek scholar Archimedes had a “Eureka!” moment when he took a public
bath and suddenly discovered how to measure the volume of an irregular object
(a crown made of gold) by submerging it in the water. He leaped out of the bath,
and ran home naked shouting “Eureka!” If you have an “Aha” or “Eureka!” mo-
ment when taking a bath, we suggest you also run to get pen and paper right
away — but remember to always bring pen and paper to the bathroom! Or better
yet, bring a laptop or tablet to the bathroom or bathtub, but make sure that it is
waterproof !
Sometimes new Ph.D. candidates complain to us that after they read many
papers, they find that their “new” ideas have all been discovered! They claim that
the more they read, the more difficult the topic seems to be, and the less inter-
esting the topic becomes! This can happen in a well-developed, mature research
area. In the next section we will point out that you should pay attention to “hot”
and recent research topics when you select your Ph.D. topic. Here, we offer you
another very important and effective approach to generating new ideas: having a
“brainstorm” session with other people.
A group brainstorming session is a group session where two or more people
actively participate and suggest ideas. These people can be fellow graduate stu-
dents, professors, and other researchers, some of whom may even be outside your
research area for fresh ideas. It is actually quite important and useful to develop a
“team” of people who help each other during your Ph.D. study. In a brainstorming
session, often assisted with amply supplied coffee (or a small amount of beer), no
ideas should be criticized or judged. New ideas will be drawn and written on a large
board, which may help visually stimulate more new ideas. Those ideas can be ex-
plored deeply by different graduate students in parallel afterward.
Combining “read less, think more” and brainstorming, a very effective ap-
proach we often take is to have one person in the group read a just-published
paper first. He or she would present only the problem part to the group, and lead
the rest to brainstorm: How would I solve it, and do better? What is wrong with
the paper? Again, many ideas and solutions may be found, and can be compared
with the published paper. Often new research ideas not mentioned in the paper
may be generated, and can be explored afterward.
A crucial aspect of generating new ideas by you or within a group is to try to
generate and treasure bold new ideas. By “bold,” we mean ideas that are radically
different from the previous ones, or that refute and overhaul previous work. For
example, if you discover that previous work made a wrong or unrealistic assump-
tion and you propose a better one, you probably need to propose new algorithms,
and solve a different set of problems. Avoid small, incremental improvements, the
so-called “salami” ideas which produce “salami” papers (thin and unsubstantial).
A small, incremental improvement is, for example, a different way to search for
optimal parameters of the same published work, and by doing so you improve the
results by, say, 3%. Your new ideas should almost always be application- or prob-
lem-driven, not solution-driven. This will improve the impact of your research
work.
For example, Student A1 was initially interested in cost-sensitive learning.
After reading many papers on this subject, he wrote a survey on it. Writing a
survey paper is an excellent way to start researching in an area. He also extended
traditional cost-sensitive learning methods, and published a few papers. However,
the work was not bold enough to be a Ph.D. thesis. To help the student, we brain-
stormed: what are other forms of costs in cost-sensitive learning that are useful in
real-world applications but have not yet been studied? After several brainstorming
sessions, we raised new ideas on different data acquisition costs in cost-sensitive
learning. After a literature search, we found that this had not been studied extensively
before. As this was a bold, new idea in the area of cost-sensitive learning, Student A1
explored the idea deeply, checking into previous works that included the support-
ing theory, solutions, and application examples. As a consequence of the research,
he published several top-rated conference and journal papers, and wrote his Ph.D.
thesis on this topic. Since he is one of the first to work on this topic and many
people are likely to follow up in this direction, we can say that Student A1 “owns”
this topic now.
3.5 From Ideas to Research and Thesis Topic
A “small” new idea may lead to a conference paper, while several small new ideas on
the same topic or one “ground-breaking” new idea can lead to a research and thesis
topic. The task of choosing a research and Ph.D. thesis topic is a complex and
trial-and-error process. It may also determine your future research direction and
career. Section 4.2 provides you with several specific criteria on a good research
and thesis topic, proposed by the late computer scientist and Turing Award winner
Jim Gray. Here we list some general factors that should be considered carefully:
• Again, your passion and interests in the topic. You must be highly
passionate about and fascinated by the selected topic.
• Your technical strengths. Are you theoretical and math oriented? If
so you can consider research in theoretical aspects of a field, such as
theoretical models. Good at statistics? Consider statistical simulation
or optimization, for example. Good at design? Consider a new design
and processes. Good at applications? Consider empirical research and
killer applications.
• How new and how “hot” the topic is. True, one can make novel and
significant contributions in virtually any area of science and engi-
neering, but it is often quite hard to do so in 4–5 years working on
a Ph.D. thesis if a topic has been studied for many decades, and you
could spend a huge amount of time just learning and studying previ-
ous works. Thus, you should consider recent topics that are or are
becoming hot, and in which rapid progress is being made, as your
Ph.D. topic.
• Your supervisor’s vision and opinions. Your supervisor often has
more experience in research and better judgment on new research
topics. Sometimes a topic may have been well studied for many years
without much breakthrough, even though papers are still being pub-
lished on the topic; sometimes a topic can be too hard or too easy.
Thus, discussion with your supervisor is often crucial. Note, however,
that since Ph.D. research must advance the state of the art, no one can be
100% sure if a chosen topic will be adequate for Ph.D. study before
actual research is carried out.
• Your future career. Do you want to be a professor in universities, or
a researcher in companies? For the latter case, you should select em-
pirical research where the research project can highlight new ways of
applying theories.
When selecting a thesis topic and carrying out research to explore the “un-
known territory,” is there any way to know “if I am on the right track”?
3.6 How Do I Know If I Am on the Right Track?
Many new Ph.D. candidates often feel in the dark about whether they are selecting a good
topic for their Ph.D. thesis, or whether they are doing the right thing in their research.
An extremely important way to validate your thesis topic is that you complete your
first iteration of the three main tasks ( find new ideas, do solid research, publish papers)
early in your Ph.D. study.
In the later part of the first year, and during the second year of your Ph.D.
study, you should start doing research and publishing papers (Tasks 1–3) in re-
search topics you are interested in. Start with some small, interesting research
problems, and do solid research (see Chapter 4). If you obtain positive and inter-
esting results, you can write them up as a conference paper, or a short journal paper
(see Chapters 5 and 6), and submit it to a conference or journal. You can start with
moderately competitive (or local) conferences or journals.
As we mentioned earlier, research papers are almost always peer-reviewed
by other researchers anonymously. This allows reviewers to write detailed and
often critical reviews of your papers. If your paper is accepted, it surely validates
your ideas and research topic, and the reviews can help you to further improve
your work. If the paper is not accepted, comments from the reviewers will often
give you great feedback about your research and the paper. However, it is a bad
idea to submit poor and premature papers as they can negatively affect your
reputation.
Here are some other useful ways to validate your Ph.D. topic and research:
• Discuss your potential topic with your supervisor and other colleagues
and Ph.D. students extensively.
• Discuss your ideas with other researchers who work in the same field
via email.
• Chat with other researchers at conferences. Keep in mind that people
may give less critical feedback than they might in writing reviews of
your papers.
• Find and read recent Ph.D. theses on similar topics from other uni-
versities, and compare the depth and scope of your research work to
theirs.
To reiterate, it is very important to go through the Tasks 1–3 early in your
Ph.D. study. This will not only prepare you for future iterations in your third and
fourth year of your Ph.D. study, but also validate your Ph.D. thesis topics.
3.7 Setting Up a Plan for Ph.D. Thesis Early
After you have gone through Tasks 1–3 once or twice, it is highly desirable to
set up a “plan” for your Ph.D. thesis, usually near the end of the second year. The
plan looks a lot like a two-level outline of your future Ph.D. thesis, with a list of
related research problems to be solved. This is usually possible because you have
read many recent papers on this topic critically and creatively, and performed some
research already.
With the plan, in the next few iterations of your Ph.D. research, you essen-
tially pick up some research problems in the plan, do solid research, and publish
high-quality papers. Note however, the plan is not carved in stone—you need to
remain flexible with the plan. For example, you can skip some research problems if
they are too hard or they do not yield enough results. Remember, doing research
takes risks in exploring unknown territories. You can also add new, related prob-
lems during your research. However, you must have a strong focus—all the research
problems you will be solving are within a (broad) thesis topic. In the end, your
Figure 3.4: A plan of a sample Ph.D. thesis with a central theme and a list of potential
problems to be solved, and how it evolves into the final thesis with publications in [ . . . ].
Some topics were added later (“new” in the notes), and some were not finished (“unfin-
ished” in the notes) in the final Ph.D. thesis.
thesis plan and the final Ph.D. thesis outline may have an overlap of 50–80%.
This way, it is possible that you can make novel and significant contributions and
establish yourself in a certain area (“own” an area) at the end of the four-year Ph.D.
study, as seen in Figure 3.1.
After Student A1 switched to cost-sensitive learning with data acquisition,
we set up a thesis plan together on this new topic. The student worked hard, and
published a series of papers in top conferences and journals. This is not surprising
as Student A1 has already gone through Tasks 1–3 on earlier research, and can
conduct solid research and write papers reasonably well by himself. His final Ph.D.
thesis is almost like a simple collection of these relevant papers. See Figure 3.4 for
his thesis plan. We put some notes in [ . . . ] in the plan for papers that were later
published. Some topics were added later (“new” in the notes), but some were not
finished in his Ph.D. thesis (“unfinished” in the notes). This plan provided a strong
focus in his Ph.D. study, yet he was still flexible with the plan. Note that his final
Ph.D. thesis does not include his early published work on co-training. A thesis
must be a coherent piece of work, usually on one central topic and theme.
Ph.D. study usually only takes 3–5 years; this is rather short for making
significant contributions in research and for establishing yourself in some area.
Having a plan and working systematically (Tasks 1–3 in Chapter 1) according to
the plan is crucial to make this happen.
3.8 Learning to Organize Papers and Ideas Well
A related but important issue in undertaking the long and arduous task of com-
pleting Ph.D. research is the need for a good system to organize the many papers
that you have read, notes on ideas you have thought about, and references you
have used and cited in your papers. Otherwise it can be hard to find those things
during research and when you write your thesis. In the “old” days when we worked
on our theses, we created piles of physical papers on different topics, and used text
documents to keep track of papers, ideas, and so on. Nowadays most papers are in
electronic format, such as PDF (portable document format, created by Adobe
Systems), and there are quite a few reference management software packages that can help
you with all of these tasks with ease. Two popular ones are Mendeley5 and Papers.6
Also, you can use ebook software, including iTunes and Kindle, to organize PDF
files.
You can create your own note-taking system, or choose a software package
to organize papers, notes, and references, and you need to use it effectively to man-
age your research in the several years of your Ph.D. study.
• • • •
5 http://www.mendeley.com/
6 http://www.mekentosj.com/papers/
CHAPTER 4
Conducting Solid Research
In this chapter, we go deeper in describing the process of conducting solid research.
We start by giving a high-level overview of the research process (Section 4.1). We
then discuss some high-level criteria, called Gray’s Criteria, to judge what research
ideas and projects are worthwhile to pursue (Section 4.2). Subsequently, we discuss
how to form hypotheses to work on in order to pass the criteria (Section 4.3) and
how to confirm or dispel the formulated hypotheses (Section 4.4). We further dis-
cuss some other objectives as you go through your research process, such as how
to brand yourself (Section 4.5), how to choose between theoretical and empirical
research (Section 4.6), and what to expect in team work and multidisciplinary
research (Section 4.7).
4.1 An Overview of a Research Process
We refer to the steps in which we conduct research as a research process. A re-
search process, broadly speaking, consists of the following phases:
Step 1: Find a novel and potentially high-impact problem to solve based
on the high-level criteria (Section 4.2) on choosing a good problem.
Step 2: Find a candidate solution. You then formulate a hypothesis as a
(problem, solution) pair. Each hypothesis states that the solution can
indeed solve the problem.
Step 3: Conduct theoretical or empirical analysis on the hypotheses to
confirm or dispel them. You also conduct significance tests to establish
the validity of your conclusions on these hypotheses.
Step 4: If you are successful in your significance testing, then you are done.
Otherwise, you return to Step 2, find another candidate solution to
the problem, and repeat the process.
Your initial solution to the problem might be a simple one, and there is a
good chance that you will fail to solve the problem either because the problem is
too hard to solve, or because the solution does not solve the problem. The reason
that you often refute your initial solution is that the solution is usually very simple
to start with, and other researchers working in the same field may have already
thought about the solution to the problem, tried it, and refuted the hypothesis.
However, as most published articles only report “positive” results, you might
not be aware that the solution is in fact not a good fit for the problem,
and may think otherwise!
In research, we almost always have to revise and “tweak” our initial hypoth-
eses to “make them work,” so to speak. Finding the right problems, and forming
and revising the most appropriate solutions for these problems, is a highly com-
plex and creative process, and thus it is hard to “quantify” this creative process
precisely. We will describe a so-called Research Matrix Method, which inspires us
to make systematic associations between problems and solutions. This often leads
to new hypotheses and novel ways of solving the same or different problems. See
Section 4.3 for detail.
How do we confirm that a hypothesis is true? There is a very broad spec-
trum of approaches. On one end of the spectrum is a (pure) theoretical approach.
It uses mathematics to represent the hypothesis, and logical reasoning and
mathematical derivation to prove (or disprove) it. However, often assumptions
need to be made so that proofs can be carried out. One of the most important
and classical theoretical works is Euclid’s Elements, written around 300 BC. It starts
with definitions and some basic assumptions (called axioms or postulates, such
as two parallel lines will not intersect if they are extended in any direction), and
uses logical reasoning to prove (confirm) hundreds of propositions and theorems,
including the Pythagorean Theorem. When a theorem is proved once, it is true
forever. There is no need to “run experiments” to verify proven theorems. The key
questions to answer are how strong the assumptions are, and how the results can
be applied in practice.
At the other end of the spectrum is (pure) empirical research. Often such
empirical research is closely related to practice, as empirical methods can be easily
turned to applications. Thus, do not be afraid to pose some bold hypotheses that
represent killer applications—solutions to certain problems that may bring direct
benefits to a large number of people. Empirical hypotheses can often be
stated in a natural language, say English, and can only be confirmed by running
experiments on simulated or real data. Experiments can be conducted with people,
animals or plants, materials and objects. They can also involve computer simula-
tions, and computer programs taking various datasets as inputs. When conducting
these experiments, it is crucial that researchers have a sharp eye for interesting
and anomalous results. This is another important way to find surprising or new
hypotheses during research in addition to the research matrix method we mention
in Section 4.3.
We often use different materials, datasets, and experimental settings in our
experiments. Do we need to exhaust all possible materials, datasets, or
experimental settings, to show our new design and methods are better than previ-
ously published results? What if this is impossible? How can we compare with
previous design and methods published by other people that we do not have access
to? Your decisions on these issues will determine how thorough and rigorous your
experiments are to support the validity of the hypothesis. The degree of such empiri-
cal evidence with experiments can, fortunately, be quantified by significance
tests, a subject in statistics. Here the term “significance” does not mean
importance or impact, as we have been using in this book. Briefly, a significance
test is a statistical procedure to test if a conclusion (e.g., your method is better than
previous ones) is likely to happen by chance. See Section 4.4 for more details. In
order to pass the test, repeated experiments and head-to-head comparisons usually
need to be conducted as part of your research.
More rigorous support can be obtained if you ask the authors of the pub-
lished papers to apply their own methods to your materials and data, and your
new method is still shown to be better than theirs. This is because the original au-
thors can usually apply their own methods better than you could. However, if you
only compare your method with those results in some published papers (without
reimplementing these methods on your own), the evidence that supports your
hypothesis is considered weak, as the experimental settings can be different, and
you might not satisfy the assumptions needed to conduct the significance tests. See
Section 4.4 for more details on this.
4.2 Jim Gray’s Criteria1
Many students think of research as an admirable career, where one can truly make
a name in history. Just consider examples in our textbooks: Newton, Galileo, and
Einstein, to name a few. However, for a beginning student, research should be
both a curious adventure and a serious business to manage. Let’s start with the fun
and adventurous part first.
One of the authors (Yang) once attended a lecture given by Professor
Charles Townes, an American Nobel Prize-winning physicist who made major
advances in quantum electronics for lasers and masers. Yang had earlier met
Townes when he freshly graduated from Peking University when Townes first
visited China on a science mission as President Jimmy Carter’s representative in
1982. In this later seminar, Townes reflected on his joy in his very first discovery,
a fish. He said that in his childhood he used to catch fish. One day he got a fish
that he could not name according to an encyclopedia. Curious, he wrote a letter
to the Smithsonian Institution, a US museum. He was pleasantly surprised when he
got a letter in return, noting: “Congratulations, Mr. Townes, you have discovered
a new species of fish!” Imagine the joy and excitement this boy experienced when
reading these words!
1 Jim Gray was an American computer scientist credited with major developments in database
theory and a proponent of the fourth paradigm in scientific discovery.
Discovery is a childhood joy. As we note in our previous chapters, one enters
the research field primarily because innovation and discovery are fun. It is a dream
we all have had since childhood. What we want to say in this chapter is that to
be prepared for a career in research, we must have more than curiosity; we must
have a process in which we can manage our research like managing a business. Just
like running a business, if we happen to manage it well, it will flourish and prosper.
Our impact in the world and our contribution to knowledge will be significant.
Otherwise, if we do not manage it well, we will find ourselves frustrated and
discouraged.
Once we know that we are motivated to do research, our first task is to
decide on our objectives and directions in which we will go deeper. This process
will definitely be curiosity driven, but should also follow a well-managed process.
As noted in previous chapters, finding a good topic takes many activities, ranging
from talking to experts in the field to critiquing many papers and works in your
field of interest. But if there are several areas that you are more or less equally in-
terested in, which one should you choose? How do successful people pick an area
to work on? Why are some equally talented people not as successful and impactful
as others?
In our careers, we have been talking and listening to many “successful”
researchers. One of the best-known criteria for high-impact research was sum-
marized by the late great computer scientist and Turing Award winner Jim Gray,
who was credited with the invention of modern database technology and the
fourth paradigm in computer-science research. In his 1999 Turing Award lecture,
he stated that good research should
• have clear benefit,
• be simple to state,
• have no obvious solution,
• have a criterion where progress and solutions are testable, and where
it is possible to break the larger problem down into smaller steps, so
that one can see intermediate progress after each step.
We will refer to this set of rules as “Gray’s Criteria” hereafter.
Take word processing as an example. Suppose a student is very interested in
proposing to do research in word-processing technology. This technology has clear
benefit, which is very simple to state. However, it is obvious today how to do it, via
a variety of word-processing software. Thus, the research is not impactful, unless it
is a dramatic improvement on the existing word-processing software.
However, suppose that the student has proposed to invent a new word-
processing system based on speech input instead. This has clear benefit for a va-
riety of people, including the handicapped. It is very simple to state, and it is not
yet obvious how to do it accurately since in the speech recognition field today, we
are still far from having a robust recognition technology based on voice input. The
progress can be testable: by having humans speak to a microphone and producing
coherent and accurate text. Finally, the objective can be broken down to smaller
steps, including phoneme recognition, sentence recognition, grammatical analysis,
and fault tolerance, etc. Thus, this can be considered a good direction to pursue for
the student.
Going back to our earlier example of Students A, B, and C, recall that
Student A is the type of student mostly interested in fundamental and academic re-
search. Earlier we described A1 in detail. Let us use A2 to denote another student
of this type. For Student A2 to find a research problem, the question becomes:
how do I advance the state-of-the-art in academic research? After reading research
papers and proceedings, Student A2 found the topic of “transfer learning” to be
a very attractive topic, which studies how knowledge gained by an agent in one
domain of expertise can be usefully transformed into knowledge in another, re-
lated domain. Transfer learning tries to endow machines with the ability to learn
from their experiences that are drawn not only from a single domain of interest,
such as examples of categorizing images, but also from other different but related
domains of interest, such as reading books. The topic of transfer learning also has
its psychological and educational foundations, as it is an integral part of human
intelligence. People have the ability to carry out transfer learning, for example,
when they learn to plan military and business strategies via their knowledge in
playing chess.
Student A2 found the topic of transfer learning intriguing enough to con-
tinue exploration. In particular, he found the topic of transferring learned models
between text and images very interesting, and he started to conduct a thorough
survey of existing works in transfer learning, and then applied Gray’s Criteria
to the topic.
The topic has clear benefit to many computer science areas, such as data
mining and machine learning applications, as these algorithms are a major compo-
nent of search engines, social networking applications, and business applications.
However, a major bottleneck in these applications is the lack of high quality anno-
tated data, because data labeling is a time-consuming task often done by humans.
Transfer learning allows one to borrow the knowledge gained from other fields to
alleviate the dependency on annotated data in a given domain. In Student A2’s
problem context, transferring knowledge gained from text documents to images
allows an online social media system to categorize and search for images of interest
much faster and more accurately than before.
The problem is also simple to state. Try to tell an ordinary computer user
that transfer learning allows a search engine to adapt to their personal preferences
much better than before, or tell a credit card user that card misuse and fraud de-
tection can be made more accurate, and you will know why. In the case of Student
A2’s problem, a story of teaching computers to recognize images by reading text
sounds very fascinating!
While we humans apply transfer learning every day in our lives, it is in fact
not that obvious how to do it on a computer. Previous approaches in machine
learning and data mining relied on the assumption that the training and testing
data are represented similarly. This assumption is no longer valid in Student A2’s
problem.
To Student A2, progress and solutions in transferring from text to images
are clearly testable, as we can use the percentage of correctly categorized images as
a measure of success. This measure is a function of the number of text documents
fed to the machine for reading, which can be displayed clearly as a chart.
Finally, the problem can also be broken down into smaller steps, so that one
can see intermediate progress. For example, Student A2 can try to work with an
existing translator from text to images. Then he can design algorithms and mod-
els to learn the translator from data. Finally, he can consider varying the types of
text and images and observe the performance of the resulting transfer-learning
system.
Now let us turn to another student of type B (empirical research), Student
B2, who is interested in large-scale application-oriented problems, with an aim to
join research labs in industrial companies after graduation. For Student B2, the
transfer-learning problem itself is interesting, but he is interested in seeing large-
scale applications at work. To do this, he found an application idea in the area of
query classification in search engine design, where the problem is to automatically
categorize user queries into categories. For example, when we type in a query
“hotel” in a search engine, the system must deduce that the user might be inter-
ested in a commercial purchase, thus providing the user with informational results
about hotels, as well as advertisements. Making transfer learning work in this
search context allows a search engine to quickly adapt to changes in a commercial
search and advertising application. This would allow new areas to be quickly de-
veloped, and search-query categorization can be readily extended by “transferring”
previously annotated query data to new application domains. To demonstrate the
innovation, Student B2 plans to build a large-scale data mining system based on
transfer learning, using data collected by a search-engine company. To make
the data access feasible, he decides to join a commercial search company for one
semester as a student intern.
For a student of type C (application and entrepreneurship), Student C2,
whose aim is to become an entrepreneur, may decide to focus on a more profitable
area, online advertising, in order to gain more experience in the area of business
intelligence to prepare for his future company. Advertising is a science as much as
an art, which touches not only on business as a topic, but also economic theory,
game theory, human-computer interaction, artificial intelligence, and data mining.
Since carrying out this research involves a multitude of disciplines, the student de-
cides to go to various academic as well as business conferences to enlarge his social
network. He goes to some more focused and commercial workshops more often,
and plays a major role in many functions in conference or workshop organizations
such as being a student volunteer.
One of the most important aspects of Gray’s Criteria is that the progress
should be testable. Today, much research is done by means of system building
and empirical testing. In this context, a testable criterion can be further refined
to: we must ensure that it is feasible to get a credible and sufficient amount of
data. In many student cases under our direct or indirect supervision, we have
observed that this is a critical point where many students get stuck: they have
invented great ideas, algorithms, and theorems, and implemented systems. The
only thing that remained to be resolved is credible verification, and this is where
the students often fail: they cannot find credible datasets beyond publicly available
toy-ish data, which breaks down the whole process of research. For example, in
computer science and engineering, some students conduct research by proposing
new algorithms that rely on search-engine log data. However, this data is often
not accessible at university labs. Some of the data are only available via industrial
companies such as search-engine companies. In this aspect, students should seek
out opportunities to work in collaboration with industrial research labs, where they
can take on the position of a research intern, to complete their empirical research.
Indeed, this has become one of the most effective ways for students to carry out
their research plans.
4.3 The Research Matrix Method
Gray’s last rule says that research should be able to be broken down into smaller
steps, so that one can see intermediate progress. This is important, since as we
know well in history, new innovations are possible by “standing on the shoulders
of the giants.” This applies to great inventions. It also applies to your own research
agenda, in which one of these giants may, in fact, be you.
One of the most effective methods is the so-called research matrix method,
in which one can systematically plot a path to successful research. In Figure 4.1,
we see a matrix partitioned via an x-axis, which represents different methods and
techniques that we have, and a y-axis where we list the potential problems and
subproblems we try to solve. Some of the subproblems are related and should be
solved in sequence, while others can be solved concurrently.
To make the methodology more concrete, let us consider the case of a stu-
dent who is interested in machine learning and data mining areas, as we mentioned
before. Suppose that the student is interested in making a career in cost-sensitive
learning. Cost-sensitive learning concerns how to build a computer-based model
that can improve itself over time by considering different costs associated with
different types of mistakes made by the model on future examples. This area is
particularly important because many practical machine-learning problems are
intrinsically cost-sensitive, in that the data is unbalanced. A good starting point
is to carry out an extensive survey of this area, as we noted in Chapter 2. Suppose
that the student has broken down the cost-sensitive learning area into several sub-
Figure 4.1: The research matrix method.
problems: cost-sensitive learning when the costs are known, cost-sensitive learning
when costs are unknown ahead of time, cost-sensitive learning when the cost is
associated with missing data acquisition, and cost-sensitive learning when data
comes in stream form. On the method dimension, the x-axis, one can lay out such
methods as Bayesian classification, ensemble methods, density-ratio methods,
and partition or tree-based methods. Furthermore, one can list out various online
learning methods.
By examining this matrix, the student can first try to fill in various cells by
papers he or she has read before: each citation index such as [3], etc., represents
one such paper. The student can optionally put notes in each cell, to denote the
strengths and weaknesses of each paper upon reading. The student can find vari-
ous surveys and comments on the Internet via search engines, which can get the
student quickly into focused areas. These are the areas where there is no existing
literature, or only a few pieces of literature, to occupy a cell. Examples in the figure
include column #2, row #4, and cell #(3,3).
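As an illustration only, the following short Python sketch shows one way to keep
track of such a research matrix on a computer; the problem names, method names,
and citation indices are hypothetical placeholders, not taken from Figure 4.1.

    problems = [
        "costs known in advance",
        "costs unknown ahead of time",
        "costs of acquiring missing data",
        "streaming data",
    ]
    methods = [
        "Bayesian classification",
        "ensemble methods",
        "density-ratio methods",
        "tree-based methods",
    ]
    # Each cell maps a (problem, method) pair to the papers you have read on it.
    matrix = {(p, m): [] for p in problems for m in methods}
    matrix[("costs known in advance", "Bayesian classification")] = ["[3]", "[7]"]
    matrix[("streaming data", "ensemble methods")] = ["[12]"]
    # Empty cells are candidate openings for new research.
    open_cells = [cell for cell, papers in matrix.items() if not papers]
    for problem, method in open_cells:
        print(f"Open cell: apply '{method}' to '{problem}'")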
First, cell #(3,3) offers a perfect opportunity for the student to enter the
field, or “get his or her feet wet,” so to speak. The student can try to apply method
#3 to subproblem #3 in order to get the necessary practice. Others in the machine-
learning field will appreciate the relatively significant impact because this proposed
work clearly offers a novel solution for an important subproblem.
If this is truly the case, as this is how many of us more senior researchers
often started our research career, congratulate yourself—you have entered the
door of research. This is where you can taste the sweet fruit of your own discov-
ery for the first time. However, more exciting things await you in the matrix, so
read on!
Example: Student A2 is most interested in pursuing an academic career.
He has found a topic in the area of “transfer learning,” and sets out to find a suit-
able topic in this area for his Ph.D. research. After reading the relevant journals,
proceedings and articles, Student A2 lists the specific transfer learning related
problems as “knowledge transferring in the same feature space,” “knowledge
transferring across different feature spaces,” “knowledge transferring with unbal-
anced data,” and “knowledge transferring from multiple source domains.” On the
method dimension, he lists: “instance-based transfer,” “feature based transfer,”
“transferring with ensemble of models,” and “transferring across multiple learn-
ing tasks.” From the matrix, he noticed that there has been considerable previous
work by other researchers in cells such as (“knowledge transferring in the same feature space,”
“instance-based transfer”) and (“knowledge transferring across different feature spaces,”
“instance-based transfer”), but relatively little work in the cell (“knowledge transferring
across different feature spaces,” “transferring across multiple learning tasks”). After discuss-
ing these findings with his supervisor and supervisory committee members, he
decides that this is an area to pursue. He then writes a proposal, formulates sev-
eral plausible solutions, and sets out to find appropriate data for the experiments.
Finally, Student A2 gets several papers published and completes his Ph.D. work.
In contrast with Student A2, Student C2 is more interested in an industrial
and entrepreneurial career. Thus, Student C2 decides on a different topic to pur-
sue, but he can still make use of the matrix method. For example, Student C2 finds
that the topic of recommendation systems (such as that used by Amazon.com) to
be significant and of practical importance. In this area, he finds that the problems
can be formulated as “recommendation based on dense matrices,” “recom-
mendation based on sparse matrices,” “recommendation with an external ontol-
ogy for users and products,” and “evolutionary recommendation with temporal
information.” On the solution dimension, he lists “user based recommendation,”
“item-based recommendation,” “model-based recommendation.” This gives a ma-
trix where Student C2 starts performing a literature search and filling in the cells
with references. Student C2 soon finds out that the cell pairing “evolutionary recom-
mendation with temporal information” with “model-based recommendation” is both
interesting and new, and will be of tremendous commercial value if introduced to some
well-known commercial online-shopping companies. This idea is confirmed by his
committee members, who include a researcher from a well-known industrial
research lab. He sets out to do his research, which can be tested on large-scale data
and using well-established evaluation criteria. With this work he applies for a pat-
ent and then writes a business plan for a university-based spin-off company.
4.4 Carrying Out Your Research
Once you have set the research objective and have done the related literature
survey, you are ready to conduct your research. A first important step is to state
your research objective in one understandable sentence. For example, if your objec-
tive is to show that you can design a better search engine than Google, then you
might want to be more specific about what you mean by “better,” and formulate
your hypothesis more concretely. For example, you might state “my hypothesis is
that using social networking information in Web search leads to more accurate
ranking results than not using such information.” This
hypothesis can be easily understood by even non-computer science people, and
can be concretely confirmed or disproved. More concretely, the hypothesis can be
further instantiated by the following statement: “my algorithm ABC uses social
networking information of masses of users as well as hyperlink information. We
expect that ABC gives better Webpage ranking results than Google’s PageRank
algorithm2 when the former uses additional social networking knowledge.”
It is more advantageous to make your hypothesis more concrete. First, hav-
ing a more concrete hypothesis at the outset allows researchers to focus on a few
key elements in your work. For example, it allows you to formulate your method-
ology and present your ABC algorithm as clearly as possible, so that others can
repeat your experiments and confirm your hypothesis. Second, it allows you to
decide on the necessary baselines and evaluation metrics in order to confirm or
disprove your hypothesis. For example, you may formulate your methodology as
one that is based on a theoretical proof that no matter what PageRank algorithm
provides in the final result ranking, your ABC algorithm is going to perform at
least as well as PageRank. In order to show this, you may realize that you
need a widely accepted evaluation metric in order to compare the ranking results.
2 http://infolab.stanford.edu/~backrub/google.html
For example, you might adopt the “Area Under the ROC Curve” (or the AUC
measure) as a ranking metric.3 Alternatively, you may wish to use a metric known
as Normalized Discounted Cumulative Gain (NDCG4). Whichever metric you
might choose to use, you need to follow up with a proof that your ABC algorithm
will outperform the PageRank algorithm by a significant margin. If you are more
theoretically oriented, you might look for proof methods that can help you show
that your hypothesis holds with a series of theorems and corollaries.
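As an illustration of how such a ranking metric can be computed, here is a minimal
Python sketch of NDCG following the standard definition (DCG with a logarithmic
discount, normalized by the ideal ordering); the relevance grades below are made up
for the example.

    import math

    def dcg(relevances):
        # Discounted cumulative gain for a ranked list of relevance grades.
        return sum(rel / math.log2(rank + 2)   # rank is 0-based, so the top result gets log2(2) = 1
                   for rank, rel in enumerate(relevances))

    def ndcg(relevances):
        # NDCG: DCG of the ranking divided by DCG of the ideal (sorted) ranking.
        ideal = dcg(sorted(relevances, reverse=True))
        return dcg(relevances) / ideal if ideal > 0 else 0.0

    # Hypothetical relevance judgments (0 = irrelevant, 3 = highly relevant)
    # for the top five results returned by two ranking algorithms.
    abc_ranking = [3, 2, 3, 0, 1]
    baseline_ranking = [2, 3, 0, 1, 3]
    print(ndcg(abc_ranking), ndcg(baseline_ranking))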
In many cases, however, the hypothesis can only be answered empirically. If
you adopt this path, you’ll need to design the experiments, which involves several
key steps. First, you’ll need to identify the datasets and sample the necessary data
for your experiments. Often the original dataset is noisy, incomplete, and large,
and for this reason it is often called the raw dataset. For example, for showing
the superiority of a new search method, one often needs to obtain large quanti-
ties of search log data from a real search engine. This raw data is often extremely
large and incomplete, but a sample of the data that is properly cleaned can be used
to represent the general real world situations that are encountered by a real Web
search algorithm. In this case, a section of the data that involves several months
that cover the four seasons of a year might suffice. When there are multiple seg-
ments of the data that vary in distributions, it is better to sample each part inde-
pendently. Furthermore, it is a good idea to apply stratification by partitioning the
entire large population into small segments, where the elements of each segment
are sampled randomly. The sampled data is expected to resemble the distribu-
tion of the original dataset. Furthermore, when the data have much variance due
to unusually high uncertainty, it is a good idea to apply Monte Carlo methods for
sampling, where the basic idea is to run repeated computer simulations to generate
the data according to the input parameters underlying the data distribution.
3 J. Huang and C.X. Ling. ‘Using AUC and accuracy in evaluating learning algorithms.’ In IEEE
Transactions on Knowledge and Data Engineering, volume 17. 2005
4 K. Jarvelin, J. Kekalainen: Cumulated gain-based evaluation of IR techniques. ACM Transactions
on Information Systems 20(4), 422–446 (2002).
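The following is a minimal Python sketch of the stratified sampling idea described
above: partition the raw data into strata and sample each stratum independently.
The log entries and the "season" field are hypothetical placeholders.

    import random

    def stratified_sample(records, strata_key, fraction, seed=0):
        # Sample the same fraction independently from each stratum of the data.
        rng = random.Random(seed)
        strata = {}
        for rec in records:
            strata.setdefault(rec[strata_key], []).append(rec)
        sample = []
        for group in strata.values():
            k = max(1, int(len(group) * fraction))
            sample.extend(rng.sample(group, k))
        return sample

    # Hypothetical cleaned log: each entry is tagged with the season it was collected in.
    raw_log = [{"season": s, "query": f"q{i}"}
               for i, s in enumerate(["spring", "summer", "fall", "winter"] * 250)]
    subset = stratified_sample(raw_log, "season", fraction=0.1)
    print(len(subset))   # roughly 10% of each season, about 100 entries in total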
An important aspect of experimental design is to decide on the free parameters
or independent variables that you must deal with, so that you can determine their
effect on the dependent variable (the search ranking result in our Web-search
example). It is important that one of these variables is allowed to vary, while oth-
ers (known as control variables) are kept at constant values or randomized. For
example, in the search engine design problem, variables include the number of
Web pages, the in-degree and out-degree of hyperlinks of each page, the length of
each page, and the number of topics discussed in the content of the pages. When
all of these parameters are fixed, we have a scenario that represents a snapshot of
the real world. When we vary one parameter while holding all others constant, we
create a performance comparison chart on which we display both the performance
of ABC and PageRank, in order to ascertain the superiority of ABC as a better
method. This experiment is often repeated by varying one variable at a time for
creating a collection of charted results.
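A minimal Python sketch of this one-variable-at-a-time design might look as follows;
evaluate_ranking, the method names, and the parameter values are hypothetical
placeholders rather than a real experiment.

    def evaluate_ranking(method, num_pages, avg_degree, page_length):
        # Hypothetical stand-in: a real experiment would build the simulated Web
        # graph, run the ranking method, and score the result with an agreed metric.
        return 0.0  # replace with a real measurement

    fixed = {"avg_degree": 8, "page_length": 500}   # control variables held constant
    results = []
    for num_pages in [1_000, 10_000, 100_000]:      # the independent variable being varied
        for method in ["ABC", "PageRank"]:
            score = evaluate_ranking(method, num_pages=num_pages, **fixed)
            results.append((method, num_pages, score))
    # One comparison chart per varied parameter can then be drawn from `results`.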
Often, even after the experimental metric and comparison baselines are
settled, we still have to decide on how to obtain credible subjects on which we ob-
tain the results. For example, in the Web search case, we might hire several users to
judge the quality of ranking results in response to a search query. Here, we have to
decide on a collection of benchmark search queries that other researchers also use
in evaluating their algorithms. We must also ensure that the users have a sufficient
degree of independence and diversity to ensure the generality of the result. In fact,
there is a large amount of literature in psychology on how to choose test subjects
to ensure the most credible results.
When dealing with models that can be constructed based on the data, it is
important that we do not mix the training data used for model building with the
data for testing and validating our models and hypothesis. In experimental design,
this is known as hold-out tests, in which we hold out a portion of the data purely
for evaluating the obtained model. When the data are in short supply, we can
also use n-fold cross validation, in which we partition the data into n folds, and let
each fold take a turn in being the test data with the rest as the training data. This
results in n results that can be averaged to produce the final, more reliable result.
In doing this, an important issue is to show that the comparison results are con-
sistent under different experimental conditions, via statistical measures such as
variance and confidence intervals.
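The following Python sketch illustrates the n-fold cross-validation idea with a
deliberately tiny, hypothetical "model" (a learned threshold); in real research the
fit and score steps would be replaced by your own model training and evaluation code.

    from statistics import mean, stdev

    def n_fold_split(n_samples, n_folds):
        # Yield (train_indices, test_indices) pairs; each fold is the test set exactly once.
        folds = [list(range(i, n_samples, n_folds)) for i in range(n_folds)]
        for k, test_idx in enumerate(folds):
            train_idx = [i for j, fold in enumerate(folds) if j != k for i in fold]
            yield train_idx, test_idx

    # Toy data: learn a threshold on the training folds, measure accuracy on the test fold.
    data = [(x, 1 if x > 50 else 0) for x in range(100)]   # (feature, label) pairs

    def fit(train_idx):
        positives = [data[i][0] for i in train_idx if data[i][1] == 1]
        return min(positives)                              # the learned threshold

    def score(threshold, test_idx):
        correct = sum((data[i][0] >= threshold) == bool(data[i][1]) for i in test_idx)
        return correct / len(test_idx)

    scores = []
    for train_idx, test_idx in n_fold_split(n_samples=len(data), n_folds=10):
        scores.append(score(fit(train_idx), test_idx))
    print(f"mean accuracy {mean(scores):.3f} +/- {stdev(scores):.3f} over 10 folds")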
Finally, once we complete the experiments and produce the results, say in
the form of a number of data tables, we must follow up with interpretations and
analysis of the results. There are two aspects here. First, we must be able to tell
the reader whether the experimental results from competing methods showed
significant differences via statistical significance tests. Second, we must explain
the significance of the results via what we know from the domain knowledge; in
our search example, for example, we need to explain what a certain difference in
ranking result would translate to in user experience and search quality.
Now we return to the topic of conducting statistical significance tests on
your results. The basic idea here is to ensure that the results are not due to random
chance. This can be done by setting a significance level at a low value, such as 5%.
Then, if we can establish that the probability that the result is due to chance is lower
than 5%, we can say that our result is significant; in this case, we can
reject the null hypothesis that there is no difference.
The significance testing can be done via various significance tests. Student's
t-test, for example, tells us whether the means of two sample groups show significant
differences. It starts with a null hypothesis that there is really no difference to be
detected between the means of the two methods, and sets out to show that the null
hypothesis can be rejected. Student's t-test is appropriate for determining whether there
is a difference between two independent sample groups, particularly when the sizes of the
groups are reasonably small. Another often-used significance measure is the p-value,
which is the probability of observing a test statistic at least as extreme as the one actually
obtained, assuming that the null hypothesis is true. We reject the null hypothesis when
the p-value is less than the chosen significance level α.
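As a rough illustration (a sketch in Python with SciPy; the per-run scores below are made-up numbers, not results from any real experiment), a two-sample t-test comparing two methods might look like this:

    from scipy import stats

    scores_abc      = [0.71, 0.68, 0.74, 0.69, 0.72]   # made-up scores for method ABC
    scores_pagerank = [0.65, 0.66, 0.63, 0.67, 0.64]   # made-up scores for the baseline

    t_stat, p_value = stats.ttest_ind(scores_abc, scores_pagerank)
    alpha = 0.05                                        # the chosen significance level
    if p_value < alpha:
        print("Reject the null hypothesis: the difference is significant.")
    else:
        print("Cannot reject the null hypothesis at this significance level.")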
Summarizing, the research process starts with the formulation of one or
more hypotheses (often a null hypothesis and alternative hypotheses), and goes
through a cycle in which an experimental design is completed together with data
selected and sampled, and then the statistical significance of the results ascertained.
Finally, domain-related explanations are given so that others can fully appreciate
the impact of the findings. In the process, it is important that the experiments are
repeatable by others under the same experimental conditions.
4.5 Brand Yourself
Taking the view of the matrix (again, see Figure 4.1), scientific research itself is
not very different from what commercial market researchers typically do when
launching a product or a company: they have to find out where the market needs
truly are.
On examining the matrix again, a more important discovery is that an
entire area can be "owned" by the young researcher, through
which the researcher can truly brand himself or herself as an expert in an area with
a significant impact.
Let us go horizontal first (see the horizontal arrow in Figure 4.1). This is
when you have a good grasp on a significant and high-impact subproblem, perhaps
due to your fastidious experimentation and examination of a particular problem at
hand, or perhaps because you have gained an important insight by reading many
research papers and noticed a lack of attention to a hidden gem. Once you have
a good understanding of the nature and complexity of the subproblem, you start
asking the questions: “Can method [x] be applied to this problem?” “What impact
will be brought forth if I succeed in applying method [x] to this problem?” and,
“if method [x] cannot be applied to this problem, what knowledge is gained in the
field of research?”
When going horizontal in this methodology, it is critical to compare the
different methods [x] and [y] on the same problem to elicit their relative merits:
“What are the advantages and weaknesses of method [x] as compared to method
[y]?” It may be the case that method [x] is more efficient than method [y], whereas
method [y] is more accurate, by some measure of accuracy, than method [x]. It
might also be true that method [x] is better in reducing the error rate, if we con-
tinue with the cost-sensitive learning example, or that method [y] performs better
in some other measure of success such as AUC in ranking. In other words, many
exciting research questions can be raised and answered by conducting this research
horizontally, and as a result, many top-quality research papers can be written.
Likewise, we can also go vertically from top-down in this matrix (see the
vertical arrow in Figure 4.1), by considering applying the same method to a variety
of related problems in one focused area. This is preferred if you have a very good
grasp on the method’s inner strengths and weaknesses. As an example, suppose
we choose the decision-tree learning method in machine learning, and wish to
apply this solution to various subproblems of cost-sensitive learning. Then we may
ask whether the method performs as expected on unbalanced-class classification
problems, on cost-sensitive learning problems where we have no good knowledge
of the cost function, and on cost-sensitive learning where human feedback (which
is also costly) is considered in the learning process for future data. We can
compare the decision-tree based methods to all other methods that others have
tried before on this problem, and if possible, we can find out why the decision-tree
based method can perform better or worse than these methods. In this way, we
can find out the method’s true merits and weaknesses in various learning problems
that involve costs.
Like the horizontal research methodology, this top-down process also al-
lows you to truly brand yourself in your research community, and make a name
for yourself. A popular analogy in many research circles is that a research method
is like a hammer, whereas a research problem is like a nail. In this analogy, the
horizontal methodology is akin to finding different hammers for the same nail,
whereas the vertical methodology is like finding different nails to hammer with
the same hammer. As the saying goes, "once you have a hammer, everything looks
like a nail!”
Finally, a better solution is if you can identify both a hammer and a nail,
by going both horizontally and vertically (e.g., a submatrix from cell# (1,1) to
cell# (3,3) in Figure 4.1). If you can truly practice this "hybrid methodology," you
may already have a research career at hand, and a tree of fruits will grow with the
associated publications, as demonstrated in Figure 4.2.

Figure 4.2: A tree of research results.
4.6 Empirical vs. Theoretical Research
The above discussion naturally brings us to a popular question raised by many
students: should I do theoretical research or empirical research? In engineering,
the corresponding question is: should I do fundamental research or application-
oriented research?
As we discussed so far, the researchers’ job is to uncover the causal relations
between factors that may be hidden in nature, and this job can be accomplished
either theoretically or experimentally. Generally speaking, theoretical research
fits better in areas where well-founded assumptions can be formulated, from which
inferences can be drawn to verify one or more phenomena under observation.
In contrast, empirical research is appropriate when such assumptions are difficult to come up
with, so that experimental verification and data collection are required. In traditional
science such as physics, the boundary between theoretical and empirical research
was clearer. Nowadays, especially in engineering science and practice, the line
between theory and experimental research is less clear. Add to this gray area the
term “system oriented research,” which may indeed confuse some beginners and
even seasoned professors. We have more than once heard some senior professors
and researchers proclaim: “we have built this system, which is my research.”
We wish to clarify, however, in this section, that building a system without
subsequent analysis is not research, yet. But, if you have a hypothesis, say “Method
X is better than Method Y under conditions Z,” this hypothesis can be verified by
building systems to realize X, Y under Z and evaluate the hypothesis with these
systems. This way of proving or disproving a hypothesis is research.
Let’s start with empirical research, which is often associated with concepts
such as system building, gaining experiences, and making observations. Often,
empirical research starts with a hypothesis, which is a prediction or proclamation
about the world. As an example, the claim that “it is possible to fly faster than the
speed of sound" is a hypothesis. It can be proved or disproved by building a plane
that can break the sound barrier, which provides evidence that is observable.
In science and engineering, hypotheses and empirical research are often
tightly coupled, but they may not be raised and verified by the same person or the
same group of researchers. Theoretical research proposes models and hypotheses
that can be generalized and tested. They often state a fact that goes beyond the
current set of observations using methods of induction. Once the models are built,
theories should have the ability to deduce new facts about the future. Thus, good
theories have the generalization ability. Empirical and theoretical research also
often go hand in hand; in fact, in physics, the theoretical and empirical research
works are often done by theoretical physicists and experimental physicists, respec-
tively, who are often two separate groups of people.
How do we get the initial ideas in empirical research? One often-practiced
first step is learning by repeating some other researcher’s work and rebuilding their
systems. For example, if you decided to build a search engine system for scientific
literature, then perhaps a good exercise is to rebuild the PageRank algorithm
that underlies the Google search engine. There are several advantages in doing
this. First, to get one’s hands dirty and to understand the details of system build-
ing is very important in system and engineering oriented research. By repeating
others’ work in your own environment, you will understand what is stated in their
research papers much more deeply, and even be able to understand what was left
out of their papers; in some cases, many important details are left out either un-
intentionally or because of space limitation. However, unless you try building the
systems on your own, you will not be able to fully understand these details. Second,
building these systems allows you to appreciate the complexity and scope of the
problems at hand, and think hard on how to plan for your own research in the fu-
ture. Often, one cannot understand fully the issues and resources that are involved
in completing a project. It might be the case that, just by reading about a system in
a research paper, one often underestimates the scope of the project. This is particu-
larly troublesome for beginners, as a major delay in system building can disrupt the
long-term plan for one’s Ph.D. study. Third, building a system allows you to have
an important resource at hand: the baseline or benchmark system, against which
you can later compare your own improved system and demonstrate the validity
of your future hypotheses. Finally, and perhaps most importantly, going through
an entire process of building a system allows you to reveal many of the weaknesses
or oversights of a prior work, and allows you to propose your own hypothesis
and formulate an innovative idea. Back to our earlier search-engine example: you
might find that the original PageRank algorithm did not take into account the
individual users’ preferences and interests, and decide to develop a personalized
search engine for scientific literature in response.
To see whether you as a researcher should become a theorist or an ex-
perimental scientist, consider that in science and engineering, the development of
theories has been serving two strikingly different purposes. The first purpose is
for experiments to confirm or disprove a preceding theory, and the second one is
vice versa.
In the first scenario, theory comes first, and it should
intrigue a sufficiently large number of subsequent researchers, who then build
systems and conduct experiments to prove or disprove the proposed theory.
One example of this mode of research is Artificial Intelligence. In his fa-
mous article, “Computing Machinery and Intelligence,” Alan Turing put forward
a grand hypothesis for all to see: that digital computers built on the Turing
machine model can be made intelligent on a par with humans. This hypothesis cannot be
proven or disproven by theory only, just like the hypothesis that the Earth goes
around the Sun cannot be verified by theory alone. One must actually
build a computer that passes the Turing test before it can be claimed intelligent, and
this is both systems work and experimental research. The theory is testable and is
also decomposable; thus we have various branches of AI that include computer
vision, machine learning, planning, game theory and computer games, etc. This is
an example of theory before experiments.
On the other hand, we can have experiments before theory in many other
examples. This used to be largely how physical science was done. You do experi-
ments, and produce results that do not connect with, or cannot be explained by,
a current theory. You now have something about which to posit a theory. The
Michelson–Morley experiment performed in 1887 created a “crisis” for classical
physics, and then Einstein proposed Special Relativity (a new theory) that could
explain the experimental outcome. Here is a more recent example in the develop-
ment of modern machine learning theories. In the early 1980s, there was great
excitement about the fact that digital computers could be made to learn from ex-
amples. Many schools of learning algorithms and systems flourished, each backed
by a number of successful applications. Noticing this diverse development in the
computational learning field, and the inability of existing theories to unify the
seemingly different learning algorithms, Leslie Valiant from Harvard University
developed the PAC learning theory, which stands for probably approximately cor-
rect learning. According to this theory, learning is simplified as the ability for a
system to make fewer and fewer mistakes with higher and higher likelihood, given
that more examples are input. Several assumptions are made on how the examples
are drawn, how the mistakes are measured, and how to define when the learning is
successful. In most of the real-world machine learning applications, these assump-
tions are not necessarily true, or cannot be easily verified. Nevertheless, the theory
was abstract, elegant, and simple, and very useful in explicating the differences
between different learning algorithms. For this and other works, Valiant obtained
the 2011 Turing Award.
Are you more like Alan Turing or Leslie Valiant? The answer is that most
likely you will need both abilities in your work. As a beginning researcher, you
need to have a keen eye on the way the general field is going, and be able to see the
theoretical underpinning in the trend. At the same time, you need to cultivate a
keen eye on how to unify different empirical research and various theories with a
simple and elegant theory of your own. This may happen in a computer science
thesis on machine learning, where one can propose a new learning problem, offer
a solution, experimentally compare different approaches, and develop a theory to
prove the algorithm’s convergence and efficiency, under certain assumptions. In
other words, the world has become so complex that one had better become
both a theoretician and an experimental scientist.
Now we return to the question of the value of building systems. As stated
earlier, one cannot claim that one is a researcher by way of building systems only.
However, one can become a researcher while building a system with a hypothesis
in mind. You build an airplane to prove that heavier-than-air machines are able
to fly. Likewise, we build a computer system to prove that the machine can be
made to accomplish one or more intelligent tasks at which, before the system
was built, humans were still better. An example is IBM's WATSON system,
which beat human champions in the TV game show Jeopardy, a feat that before
WATSON only smart humans could achieve.
Thus, before building your system, ask yourself: what are you trying to prove?
In our example of Students A, B, and C, the difference between theoretical
versus empirical research can be reflected in the choices of A and B, who prefer
theoretical and empirical research, respectively. Let a particular instance of Student
A be Student A2. Student A2's research focus is how to learn a better model by
learning from examples that can come from very different source domains, where
these other domains may use a different feature representation. Student A2 further
decides to use a method of ensemble model learning as a solution. To proceed, he
first asks the question: “is it possible to transfer useful knowledge between any pair
of sources and target learning domains?” To answer this question, he consults the
theoretical works in computer science and mathematics, and formulates a set of
sufficient and necessary conditions with theoretical proof of transferability. This
set of theorems then guides his search for a better empirical solution, which he
then verifies further with large data sets.
In contrast, Student B2 (an instance of Student B) prefers to follow an em-
pirical approach from the beginning in the research area of knowledge transfer. To
do this, he finds a specific application domain of online product recommendations,
and realizes that the main challenge in this area is to come up with a scalable set of
algorithms that can work both accurately and efficiently. He then consults research
papers in this area, which he extends into an innovative algorithm. He tests the
algorithm on several large data sets obtained from online product recommenda-
tion and advertising companies, on which he realizes good performance.
4.7 Team Work and Multi-Disciplinary Research
A long, long time ago, research used to be a one-person affair. Leonardo da Vinci
studied animal and human anatomy, architecture, and physics alone. Galileo
Galilei studied the motion of objects and planets, Newton published his three laws
of motion, and they all published their works by themselves. This situation has
dramatically changed today: open any issue of Nature, and you will notice many
co-authors for each paper; sometimes the authorship takes up an entire page. In
milder cases, look up any author in engineering science, in Google scholar or
DBLP, and you will find an active researcher who has likely co-authored several,
if not many, research papers with dozens of other people. In today’s science and
engineering research, teamwork and collaboration have become commonplace.
Enough has been said about effective teamwork, especially in the business and
management sections of bookstores online and in airports. For scientific research,
however, collaboration often requires some unique conditions. We can summarize
these conditions as: common goals, communication, complementary skill sets, and
social cohesiveness.
A most common team setting is a student-advisor combination. There can
be variations where an advisor is replaced by a committee member, and the student
by a student group. In this setting, the student is often the main risk taker, taking
the initiative in seeking out new research topics, conducting surveys, inventing
algorithms, and carrying out the experiments needed if the work involved is em-
pirical in nature. In this case, the advisor acts as a consultant, giving feedback to
students from time to time and making sure that the student is on the right track.
At the beginning, the advisor is often the generator of the initial ideas, defining
an area in which the advisor has experience, and listing several potential
broad topics for the student to choose from. Later, the student becomes the main
idea generator. In weekly meetings, for example, the advisor often asks “have you
checked this paper out?” “Why not consider this other approach?” Or, “have you
talked to so and so in our department, for he or she might have a solution for your
problem?” When it is time for paper writing, the advisor often acts as an editor,
especially for the first versions of the research paper. In fact, an effective advisor
can help a student choose a better title, formulate a better abstract, and write a better
introduction, which are often the most difficult tasks for a beginner. See
Section 5.6 for some other effective approaches with which your supervisor can
help you improve your paper writing quickly.
These points are particularly important when dealing with multi-
disciplinary research, where team members come from different disciplines. This
is when your teammates are most likely your peers. Bear in mind that multi-
disciplinary research is often the most rewarding, because it allows each researcher
to “think outside the box” of his or her own research field. This can happen in
bioinformatics, computational physics, or environmental sustainability fields. The
authors had the pleasant experience of traveling far into China’s west together
with some of Asia’s best ecologists and computer scientists in seeking ways to un-
derstand birds’ migration patterns via GPS5 trajectories. In a series of eye-and-ear
opening workshops, researchers from different disciplines pleasantly discovered
what each could do to achieve the goal of environmental sustainability.
In multi-disciplinary research, many new challenges exist. While it is fun to
work with researchers elsewhere, it does take time to learn each other’s objectives,
tools, and terminologies. Sometimes it feels like getting a second Ph.D. degree.
This period of re-education might last one or more years, after which one can find
many exciting new horizons. Thus, we highly encourage Ph.D. students to be will-
ing to drop in each other’s seminars and conferences to learn something entirely
new. You will be pleasantly surprised!
• • • •
5 Global Positioning System.
CHAPTER 5
Writing and Publishing Papers
In Chapter 3, you learned how to find novel and interesting research ideas and
topics, and in Chapter 4, you learned how to conduct solid research to demon-
strate your ideas. Assume that you have accumulated enough novel and significant
results in your research. The next important task, for researchers and graduate
students, is to write and publish high quality papers in order to tell the world
about your contributions in research. Albert Einstein, for example, was awarded
the 1921 Nobel Prize in Physics largely due to his published paper on the pho-
toelectric effect (which gave rise to quantum theory) in 1905. In fact, Einstein
published three other papers on Brownian motion, the special theory of relativity,
and E = mc2 in 1905. It is those four publications that contributed substantially to
the foundation of modern physics and changed the world since.
In general, a research work and outcome can be very roughly categorized
into top-notch (say top 5%), very good (top 5–25%), good (top 25–50%), and so-
so (bottom 50%), based on its novelty and significance. Your effort (finding novel
ideas and conducting solid research, as discussed in Chapters 3 and 4), is to “up-
grade” your research outcome from so-so to good, to very good, and to top-notch,
whenever possible. Which category a finished piece of research falls into is, in fact, a
highly complex issue. Experienced researchers, such as your supervisors, may have
a better sense where your research outcome fits in. But let us assume that this can
be done for now. Ideally, you should obtain at least a good or very good research
outcome before considering writing and publishing a paper.
We can also roughly categorize journals and conferences into the same four
classes based on many factors (such as the Impact Factor, competitiveness, and so
on). Again, this is a highly complex issue, but assume that this can be done. Then,
good writing is to ensure that a research outcome will be published in a publication
venue of the corresponding category. For example, a very good research outcome
should be published in a very good publication venue, instead of being “down-
graded" to good or even so-so ones, after being repeatedly rejected by very good venues
due to poor writing. Sadly, we have seen this type of down-grading happen too
often to young researchers. Chapters 5 and 6 of this book will hopefully guide you
to change this.
Most graduate students will have written essays and reports by the time
they enter graduate programs. Some may have written newsletter articles, novels,
poems, promotional flyers, and speeches. They will surely have written tons of
emails, instant messages, and online chat. But most may have never written aca-
demic research papers, which are quite different from the other forms of writing
mentioned above. As the late Turing Award winner and Nobel Laureate Herb
Simon1 was known to tell his students: to write a good paper, "you decide on
what you want to say, say it, and then choose a good title!” There are many details
embedded in this statement, which is the topic of this chapter. In particular, this
chapter discusses the key elements of writing research papers, and emphasizes the
main differences between scientific writing versus other forms of writing.
5.1 "Publish or Perish"
As we have seen, publishing high-impact, top-quality papers in top, peer-reviewed
venues in a sustained manner is extremely important for researchers, especially
faculty members in universities and researchers in research labs. It is the most
important means to share with the research community your contributions in
research, and your papers will hopefully bring impact to the field. In addition,
your academic promotion also hinges critically on your high-impact publica-
tions. The same can be said about your research grants and projects. Thus, in the
research circle, there is a phrase: “publish or perish”!
1 Herbert A. Simon (1916–2001) was a professor at Carnegie Mellon University. He was a founder
of many disciplines, including artificial intelligence. He received ACM’s Turing Award in 1975 and
the Nobel Memorial Prize in Economics in 1978.
Figure 5.1: Illustrating the h-index: an author's h-index is the point where the publication-count line crosses the citation curve, given that the papers are sorted from most to least cited along the X-axis.
Indeed, the pressure for publishing high-impact papers is very high in
universities in North America, and in many countries around the world. This
pressure tends to have some adverse effects, including producing “salami” papers
(see Chapter 3) that are not highly impactful and overlooking the other important
responsibilities (such as teaching) of university professors. We are certainly against
publishing many unimportant papers just to pad your resume.
Despite the negative connotations associated with the phrase “publish or
perish,” publishing high-quality papers that report novel and significant research
results is the duty of researchers. Here, we emphasize that a researcher’s goal is to
publish high-impact papers that others can use to build their own research pro-
grams. One indication of impact is citation numbers, which counts the number of
references others have made in their papers when referring to your work. Today we
have many ways to count citations. Some count citations by journal articles only;
for example, the Science Citation Index, or SCI, is an often-used index on journals
from the Institute for Scientific Information (ISI), which is owned by Thomson
Reuters. Instead of just counting a limited number of journals, other citation
services may include most open-source publication venues, including conferences.
Examples include the Google Scholar and the CiteSeerX services, which are in-
creasingly being used to judge the impact of papers and researchers.
A particularly important measure is the h-index, a measure designed by
Jorge E. Hirsch that captures both the productivity and the citation impact of a
researcher's papers. According to Wikipedia2: a scholar with an index of h has published h
papers each of which has been cited in other papers at least h times. For example,
a researcher with an h-index of 20 means that the researcher has 20 papers that
are cited by others 20 times or more, and the rest of the papers published by
the researcher are cited fewer than 20 times. The h-index, as shown in Figure 5.1,
reflects both the quantity of publications and the quality of publications, which
can be viewed via a chart. We sort the publications of an author from the most
cited to the least cited, and take this list as the X-axis. The Y-axis is the count
value: we can plot the number of citations for each paper, which gives one curve
(marked as “Citation Count”). Similarly, we can plot the diagonal straight line as
the number of papers, which is denoted by “Paper Count.” When the line and the
curve cross each other, the corresponding X-value gives the h-index value (which
is 17 in this example).
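The definition translates directly into a few lines of code. Here is a small sketch in Python (the citation counts are made-up numbers used only to illustrate the calculation):

    def h_index(citations):
        # Sort papers from most cited to least cited (the X-axis in Figure 5.1).
        sorted_counts = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(sorted_counts, start=1):
            if count >= rank:   # this paper still lies on or above the paper-count line
                h = rank
            else:
                break
        return h

    print(h_index([50, 33, 20, 20, 18, 5, 3, 1]))   # prints 5: five papers with at least 5 citations each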
However, publishing top-quality papers is not an easy task. This leads to our
next question: why is publishing top-quality papers hard?
5.2 Why Publishing Top-Quality Papers Is Hard
There are usually two major avenues of publications: conferences and journals.
Most good conferences and journals want to maintain their reputation of only
publishing a limited number of top-quality papers. At the same time, most re-
searchers want to publish in those top conferences and journals, because their
papers will have a high visibility. Thus, the competition is very stiff.
2 http://en.wikipedia.org/wiki/H-index
In computer science and some other disciplines, there are many top-rated
conferences that publish full-length papers (e.g., 6 to 12 pages of double-column,
condensed texts). The acceptance rate can range from 10 to 30%. Many top spe-
cialty journals in science and engineering also have an acceptance rate of about
10–30%. Some broad top-of-the-heap journals in science (such as Science and
Nature) have very low acceptance rates.
Some students tend to “try their luck” when they submit their papers to
those top conferences and journals. “30% is not too bad, and is much higher than
winning a lottery!" They further explain mathematically: if I submit a paper to 20
conferences and journals with an acceptance rate of 30%, the probability that none
of them accepts my paper is 0.7^20, which is about 0.0008. Thus, the chance that at
least one conference/journal accepts my paper is more than 99.9%!
We usually give those students the following advice: first, do not submit
“lousy” papers to a good conference or journal just to try your luck, or just to get
reviewer comments about the paper. Doing so will easily damage your reputation.
Second, it is academic misconduct to simultaneously submit the same or very
similar papers to multiple places (see Section 5.4 on plagiarism).
5.3 What Makes a Great Paper?
What makes a great paper? What ingredients will ensure that your paper will get
accepted? As you have probably guessed, there is no absolute answer to these ques-
tions; it is relative to the conference or journal you submit to, and the answer also
varies from reviewer to reviewer. When reviewers evaluate papers, they take
into account the overall quality of the conferences and journals that you submit to.
Despite these factors, there are some common elements that reviewers often use
to judge and rank papers. We will review them below.
First, you need to understand the review process. Almost all conferences
and journals recruit a pool of established researchers to review submitted papers.
For each submission, 2–5 reviewers (usually 3) in the same or closely related areas
are selected to review your paper. Sections 5.8 and 5.9 will discuss paper reviews
for conferences and journals in detail. These reviewers often need to provide a set
of “scores” for rating the quality of the paper, along with a detailed justification
of their scores. As mentioned earlier, reviewers are always anonymous; thus, they
often review papers with a critical eye. Scores are usually integers between 1 and 5,
or between 1 and 10. The following is a typical list of questions that reviewers must
answer with scores:
Please use an integer from 1 to 5 to answer each question below (1 for defi-
nitely NO, 5 for definitely YES):
1. Are the research and results novel?
2. Are the results significant?
3. Is the paper technically sound?
4. Is the paper clearly written and well presented?
5. Should the paper be accepted?
6. Are you highly confident about your review?
Clearly, Questions 1 and 2 are about novelty and significance of the research
we have been talking about throughout this book. Question 3 is about research
methodologies (see Chapter 4). The research must be technically sound to support
the main claim of the paper. If the idea is not novel, the result is not significant,
or the research method is flawed, the scores on Questions 1–3 would be low, and
the chance for your paper to be accepted would be low, no matter how well you
write your paper.
However, too often we see young researchers, including Ph.D. candidates,
who have conducted novel and significant research, but have difficulty in writing
their papers well. That is, their papers would receive a low score for Question 4
above. This also means that it is difficult and frustrating for reviewers to make
an accurate judgment about Questions 1–3. For a high-quality and competitive
conference or journal, reviewers are usually critical and cautious—if they are not
sure about the novelty and significance of a paper, they tend to give low scores for
Questions 1–3. Such a paper would then likely be rejected.
As we have been organizers of many competitive international conferences,
often as Program Committee Chairs, we can say that among rejected submissions,
at least half of them were negatively influenced by poor writing. This happens even to many
papers written by native English speakers. Chapter 6 describes many language-
independent common misconceptions and flaws that often weaken the submitted
papers significantly. If authors adhered to the guidelines and methods that we propose
in this and the next chapter, those papers would be judged more favorably, and would have
a better chance of being accepted.
We will start with several basic, untold truths about research papers.
5.4 A Few Untold Truths About Research Papers
We would like to first lay down a few basic “ground truths” or postulates about
research papers. These postulates are so basic and well accepted in the research
community that your supervisors (or other researchers) may never explicitly spell
them out for you. However, many early researchers learn these postulates in the
hard way.
The first and most important ground truth about a research paper is that it
must be truthful, honest, and accurate. Unlike commercial ads and product pro-
motion, authors of research papers should be as unbiased as possible, discussing
both strengths and weaknesses of their work in the paper. The experimental results
must be truthful and accurate. Material, data, and results should be kept for a cer-
tain period of time (usually several years) after the paper is published for possible
future verification by other researchers. If researchers intentionally falsify the re-
sults or mislead the readers, serious questions may be raised about the researchers’
credibility; in the worst case, academic fraud may be charged. Occasionally we hear
news about research misconduct where data and research results are fabricated, or
the published results cannot be replicated. This often results in published papers
being retracted from the journals, and/or researchers being dismissed from their
research institutes. Honest mistakes are allowed, but strong due diligence should
be applied to avoid mistakes that could have been avoided.
Another common form of research misconduct is plagiarism. This includes
many forms. One of them is called citation plagiarism. This is when authors fail
to cite and credit previous work that is similar to their work, and give readers
an impression that the work is original and done by themselves. We understand
that many authors do not do this on purpose; often they simply fail to find
certain previous publications and do not know that their work has actually been done and
published before. In many cases, reviewers can understand such unintentional
omission, and will point out such previous papers. As authors, you must then study
and cite these and other published papers, and discuss how your paper is different
from them. If the previous work is indeed very similar or identical to yours, then
you should not submit your paper anymore, until you improve it to show that it is
better than the previous work, at least in some aspects.
In Section 3.2, we mentioned that you should do a thorough literature
search, as thorough as you can before you start your research on a chosen topic or
problem. This would help prevent you from wasting time and effort in “reinventing
the wheel.” When you make a claim in your paper on novelty, you may also say “to
the best of our knowledge, our work is original . . .” to reflect your best effort.
Another form of plagiarism is multiple publications of the same or very
similar content in different conferences and journals. Many conferences and jour-
nals explicitly ask authors to submit only original work, and forbid simultaneous
submissions even if the authors intend to withdraw others should one submission
be accepted. Multiple submissions and publications can be discovered easily (espe-
cially with modern electronic submission systems), and will easily damage authors’
reputations.
When you include others’ writings as your own, you run the risk of violat-
ing copyright law. Some graduate students copy many sentences or even several
paragraphs from other papers, often from previous work or a general introduction
to a topic, in their own papers. True, the description of previous work or a well-
established topic is often much the same from paper to paper. But still, you cannot copy sentences
from other papers into yours as if you wrote them. The “other papers” may actually
include your own published papers. The publication agreement of your previous
papers may stipulate that anyone, including you, needs written permission
if any part of the paper is reproduced. If you need to quote whole sentences
or paragraphs as fair use in your paper, put them in quotation marks, and cite the
sources.
An easy way to avoid copyright infringement, when you need to write content
similar to that in other papers, is to wait for a few hours (or days) after you read the other
papers, and write the description in your own words. This way, your content will
be similar, but the wording will be quite different.
The last often-untold truth about research papers is that a paper must reveal the
full details of your research so that other researchers can replicate your results
independently after reading your published paper. In Chapter 1, we described re-
search as making novel and significant contributions with repeatable and verifiable
results. Thus, it is your responsibility to reveal technical details of your work as
fully and as clearly as possible in your paper. If other researchers require additional
details of your published paper, you are obliged to supply them. Indeed, research
in science is “self-regulating” and “self-correcting,” where published results and
papers will be analyzed, replicated, verified, and improved upon by other research-
ers. It is an open (fully revealing), fair (relatively speaking), and free (i.e., academic
freedom) community. If your work contains trade secrets or methods with poten-
tial commercial gain, you can choose not to publish it, or you can secure patents or
other intellectual property protections before the paper is submitted.
When you are ready to write your first papers, you should certainly get your
supervisor involved. You may have a proofreader (e.g., a native speaker) to review
and revise the paper. What are the roles of you (Ph.D. candidate), your supervisor,
and the proofreader in a paper writing process?
5.5 The Roles of You, Your Supervisor, and Proofreader
Quite a few graduate students are from abroad, where their native language is
not English (such as Chinese, Persian, Spanish), but they need to write papers in
English. “Since my native language is not English, my supervisor should write the
papers, while I am doing the experiments,” some may say. Or, students just write a
first draft, give it to their supervisor or proofreader to revise and polish the paper
to “perfection.” These viewpoints are certainly not correct. You must learn to write
papers well by yourself by the end of your Ph.D. study, regardless of whether your
native language is English or not.
In fact, many native English speakers do not know how to write research
papers well. One of our Master’s students, who is a native English-speaking
Canadian, almost failed the thesis defense due to his poor writing. There are
many language-independent misconceptions and flaws (see Chapter 6) in writing
research papers, and they are often harder to overcome than grammatical errors.
Your supervisor will play an important role in correcting them.
A good understanding of the roles of you, your supervisor, and proofreader
is important in writing a high-quality research paper. Here is a list of roles that the
supervisor may play in paper writing:
• Help you to understand the goals, logic flow, organization, and arguments in writing research papers.
• Help you to identify specific misconceptions in writing research papers, and you must learn to correct them quickly.
• Make high-level suggestions on how to improve the presentation and argument of the papers.
• Make a final check of the paper before it is submitted.
Your role in paper writing:
• Work with your supervisor closely for the first 1–2 papers, and learn to write papers well quickly (see Section 5.6 for advice for supervisors).
• Become independent quickly, and be able to write the whole paper well by yourself.
Proofreader’s role:
• After a paper is largely finished (with excellent structure, flow, argument, organization, and presentation) by you and your supervisor, a proofreader can be asked to further improve the paper and correct minor English errors. Other graduate students in the group or in the department would be good candidates.
We want to emphasize that the role of proofreaders, unless they are also
researchers in the similar area, is quite minimal. Often they cannot help you
strengthen the logical flow, arguments, or organization of your paper. They can
only point out minor English errors (such as the question of whether “the” should
or should not be used in a certain place, and the use of prepositions). You and your
supervisor are most crucial for writing a high-quality research paper.
So, how can you and your supervisor work together most effectively in
improving your paper writing? We answer this question in the next section.
5.6 Supervisors: How to Improve Your Students' Writing Most Effectively
This section is mainly written for supervisors, who wish to improve their graduate
students’ writing most effectively. If you are a graduate student and want to work
with your supervisors to improve your writing skills quickly, talk to your supervi-
sors to see if they can use the method we suggest here.
Many supervisors first ask their students to write paper drafts, and correct
them on paper (e.g., with red ink). They pass the corrected drafts to students to
“input” the corrections in the paper. This process may be repeated many times.
Unbeknownst to supervisors, their students often have a hard time reading their
handwriting, and do not know why these changes have been made. The improve-
ment in the students’ writing skill is often minimal in each such writing-revision
iteration.
We have found that the following method can very quickly improve students’
writing skill, to the point that they can write well by themselves after writing only
1 to 2 papers. We call the method the PI method, or the Progressive Improvement
method. It may take a bit more time in the beginning, but the extra effort at the
beginning will be paid back many times during their Ph.D. study.
The PI method works roughly as follows. When it is time to write a research
paper, you can ask your student to write a few sections first, or about half of the
first draft. These sections may include the Abstract, Introduction, and a section on
your new theory or methods. Why not ask him to write the whole paper? Well, if
this is the student’s first paper, it is likely that the draft needs be completely rewrit-
ten. Thus, the student does not need to waste too much time in writing a complete
draft that will not be used in the future.
You then sit with your student, and work closely with him, word by word, for
the first page (just one page) of the paper. For a student who is a native English
speaker, you may focus on high-level structure, logic flow, and how to make con-
vincing arguments in the paper. For a foreign student, you may also need to work
on detailed sentence structures and choice of words. You need to provide detailed
explanations about why you revise the paper this way. Ask your student to write
down each writing principle and each type of mistake your student made. You
then tell your student to learn and generalize from these principles and mistakes,
and apply them to revising the rest of the paper (and to write new sections). You
ask him to review the notes often, take time to revise the paper carefully, and try
not to make the same mistakes again.
Your student may come back in a few days with the revised paper. You can
first quickly review the previous page(s) you have worked on, and offer further
advice on improvements. Then you work with him, word by word, on the next
new page (still, just one page). You should notice that the number of mistakes and
needed improvements might be cut by half. If your student makes the same mis-
take as the last time, you should give him a serious warning. Your student should
write down new tips and mistakes, and then go away to revise the rest of the paper,
and come back to you in a few days. This process goes on progressively (thus called
Progressive Improvement), until the whole paper is improved once. By that time,
the writing skill of your student should have improved tremendously.
It is often necessary to go over this whole process two or more times. In
the second and third rounds you and your student may focus more on the overall
structure and logic flow of the paper, adding experiments to make the results more
convincing, adding examples to make it easier to understand, and so on. Again,
you work closely with your student, but only on 1–2 issues, or only on 1–2 pages
at a time. Then, you may ask him to apply the ideas to improve the whole paper.
Several such iterations are often needed to produce a highly polished and satisfac-
tory paper, to be submitted to competitive conferences or journals. By the time the
first paper is finished (usually in several months), your student should have improved his
writing tremendously, and be ready to write a reasonably good paper by himself.
Your student will also learn how you have tried to make their paper as per-
fect as possible. Keep each version of the revisions so that your student can see
how many changes have been made, and learn from this process after the paper
is submitted.
In Chinese there is a saying: it is better to teach people to fish, rather than
giving them a fish. (The conventional English translation of this is: Give a man
a fish and you feed him for a day. Teach a man to fish and you feed him for a
lifetime.) By spending some extra time teaching your students to write their first
paper at the beginning of their graduate study, they learn to write good papers by
themselves for the rest of their research career and life. This in turn frees your time
for working on other important research problems.
Often, we have several graduate students who are learning to write papers,
and we use the following methods to be even more efficient. We will distribute
the first draft of a paper to these students, and ask them to review and revise the
first page carefully on their own. We will then use Skype (software for online
group voice chat) and TeamViewer (software that allows several people to view
the same screen where the paper is actually revised) to discuss and revise the paper
simultaneously. This way, all of our graduate students can contribute, revise, and
learn how to write better research papers, even when we are in different places or
countries.
Below, we will first discuss where to submit the paper after it is written.
5.7 Where to Submit: Conferences or Journals?
In some disciplines, such as biology and statistics, conference papers are usually
very short and in the form of an extended abstract, and the acceptance rates are
usually high. The purpose of the conferences in these disciplines is to let research-
ers meet and present their most recent work, usually as posters. Often thousands
of researchers go to such conferences each year. Complete work is often published
only in journals. In some other disciplines, such as Computer Science, confer-
ence papers are full-length with 6 to 12 pages of condensed text, often in double
columns, reporting on novel contributions. The reviews for such conferences are
more rigorous, and thus, the acceptance rates are typically quite low. If a paper is
accepted, authors often have 20 to 30 minutes to present their work orally in a
conference session, sometimes as a poster as well. Often, such conference papers
are regarded as being as important as journal publications, although different depart-
ments, schools, and universities may weight them differently.
Here are some other differences between full-length conference and journal
papers:
• Conference papers often have a shorter review time than journals, and their review processes are also different (see Sections 5.8 and 5.9). Thus, conference papers are suitable for reporting the most current work with rapid progress. Going to conferences also allows you to meet other researchers and build up your academic network.
• Annual conferences have submission deadlines while journals do not. Thus, conference deadlines are often taken as drivers and landmarks for making progress in research.
• Conference papers, including the full-length ones, usually have page limits, while most journals do not. Thus, journal papers are usually longer, extending full-length conference papers to report complete work, and thus are often regarded as "archival."
One way to enter a research field is to attend a top or main conference in
that field. In computer science, for example, each subfield such as artificial intel-
ligence or machine learning has its own “top conference.” Here you will meet with
the leaders of the field, hear the latest progress in the field, and meet numerous
people who are active researchers in the field. Soon, you will want to be an author
yourself, and start to look for a conference to submit your paper to. If your paper
is accepted, you can present your paper in conferences while others listen and learn
about your work. It will be a boost to your confidence in research, and make you
a mature researcher.
Which conferences and journals will you select to submit your papers?
Each discipline has a set of conferences and journals; usually some are considered
top-tier, while others so-so. You have probably heard about some humorous
(yet true) stories about nonsensical computer-generated papers being accepted by
peer-reviewed conferences and journals. You certainly do not want to submit your
papers there!
Here are some general guidelines in choosing conferences and journals
to publish your work. If a conference is the main conference in the field where
most important researchers go, where “importance” can be measured by citation
numbers, h-index, or leadership in an area, then you may consider submitting a
paper or poster even if the acceptance rate is high. Otherwise, choose the main
conferences in a field with an acceptance rate around or below 1/3. For journals, it
is hard to specify an Impact Factor (IF) as a guide to determine where to submit
your papers, as IF varies a lot for different disciplines. One general guideline is to
choose the top tier, or the top 25%, of the journals in the field in terms of IF. You
may also talk to your supervisor who should know well which conferences and
journals belong to the top tier.
After you submit a full-length paper to the conference, you might ask: how
are conference papers reviewed?
5.8 How Are Full-Length Conference Papers Reviewed?
Many prestigious and competitive conferences in computer science require that
the names and affiliations of authors remain anonymous in the submission, such
that the reviews can be more fair—reviewers will judge the papers only based
on the contents, not the authors. Some conferences, however, do not have this
requirement, precisely because author information can also lend some credibility
to the work. In both cases, it is usually quite fair when your submissions are
reviewed by other researchers.
Let us consider the process in which a top conference reviews and accepts
full-length papers. The authors of this book have been conference reviewers, pro-
gram committee (PC) members, senior PC members, PC chairs and conference
chairs. Naturally, we are in a good position to answer this question.
First, when the conference paper deadline closes, typically the conference
chairs will start to filter out the papers that obviously do not make the cut. Some are
rejected right away due to formatting violations, such as exceeding the page
limit. Typically, there are also a number of senior PC (SPC) members; each oversees
the review process for a subset of the papers and provides comments and recom-
mendations for PC chairs at the end. A paper is often reviewed by three reviewers,
who are at arm’s length from the authors. Often, this means that the reviewers are
not current and former colleagues, students, advisors and friends of the author.
What do reviewers look for when they review a paper? Section 5.3 described
briefly what makes a great paper. Again, the most important attributes are novelty
and significance, as mentioned many times in the book. Reviewers would want to
find out what is new in this paper, compared to established literature and practice
in the field. If they cannot find what is new about the paper, say in the abstract
and introduction of your paper, they might start looking for reasons to reject the
paper. Another important thing in paper review is to look for evidence to back
up what the author claims are the innovative aspects of the paper, to ensure that
there is real impact and significance in the research. For example, the author might
claim that the paper presents a new method for learning communities in a large-
scale social network in the abstract and introduction of the paper. However, if
the experiments deliver only tests on a small-scale data set, or if the experiments
do not compare with some well-known methods in the literature, then there will
be enough grounds to reject the paper. Section 5.3 discussed what makes a great
paper, and reviewers need to give scores to assess a paper’s originality, significance,
technicality, and presentation.
Generally, to reject a paper is often easier than to accept one, because there
are many reasons a paper can be rejected, and only one of them needs to be found.
In other words, a paper might have many critical points where things can go
wrong, and if one of these critical points is not handled properly, the paper might
be rejected.
We can simulate the process of rejecting a submitted paper by pretending
that we are reading a freshly received submission. First, a reviewer can read the
abstract of the paper and search for the phrase "In this paper . . ." or some other anchor
point where the authors state what their main contributions are. The reviewer can
then take this contribution statement and start looking for elaborations (that is,
by following the top-down refinement discussed in Chapter 6), beginning with a
reading of the introduction section. If the reviewer cannot find any elaboration, it
is a sign that the paper was not written in a logical way, and it is very likely that
the paper is poorly written. If similar patterns appear in many other parts
of the paper, showing that the paper contains too many logical confusions or
syntactic mistakes, then the reviewer may take this as strong evidence
that the paper should be rejected for a thorough rewrite. Next, if the paper
clearly states its innovations in both the abstract and introduction sections of the
paper, but if the author has not followed up with strong evidence to back up these
contribution claims (i.e., the author did not state how he did his solid research, as
we mentioned in Chapter 4), then the paper can again be rejected. Even if the research
results are in place, if the author has not discussed or cited some very relevant
works in the related literature, it may be a sign that the author is not sufficiently
familiar with the established literature in the field; this is another major weakness
that may lead to a rejection.
Summing up the above procedure, a quick scan of a paper
can be rather strict, but fast: it may take a reviewer only 20 minutes to check all of
the above points, before a rejection decision is made due to a lack of logical flow,
clarity, sufficient related works or research results. If, however, the paper survives
the above screening process, it may still not mean that the paper will be accepted.
The reviewer might take another 30 minutes or longer to carefully read the techni-
cal parts of the paper, go through the derivations, theorems, and results, check the
relevant literature, and perhaps even ask the opinion of another expert in the area,
before giving a recommendation on the paper.
In recent years, a reviewer’s workload has kept increasing. For example,
a typical computer science conference can easily receive nearly 1,000 submis-
sions. A reviewer might have to review 10 papers in several weeks. The time they
spend on each paper is usually rather short. Each reviewer needs to provide the
reviewing scores, as well as detailed justification and suggestions. In Chapter 6
we will discuss how to make papers easy to read, so that your paper can pass the
so-called “10/30” test.
Some conferences allow authors to view the reviews and input their feedback
online before final decisions are made. If such an opportunity is given, you should
try your best to provide succinct, clear, and strong responses to the reviews, espe-
cially if you believe some aspects of the reviews are not correct.
The final acceptance decision of each paper will be made by the senior
program committee member (SPC) and the program chairs. Once the decision
is made in a conference, which is usually just “Accept” or “Reject,” the decision is
final, and usually cannot be appealed. There is no room to formally reply to the
reviews and to submit a revised version, as most journals would allow. So, how are
journal papers reviewed?
5.9 HOW ARE JOURNAL PAPERS REVIEWED?
Journal papers are peer-reviewed by other researchers, in a similar way to confer-
ence papers. The Editor-in-Chief (EIC) of the journal may assign each submis-
sion to one of the Associate Editors (AE) of the journal. The AE in charge of your
paper will recruit reviewers and oversee the reviews of your submitted paper. The
final decision is usually made by the EIC.
Compared to conference papers, major differences exist that make journal
papers much “deeper.” First, a journal usually does not have a very strict page
limit, making it possible for authors to fully express themselves. Thus, authors can
go much deeper in a journal paper than in a conference paper in describing their
research results. Second, the process of paper review for a journal paper is often
longer than a conference paper, going through several “rounds” of comments and
response between the reviewers and authors. This process makes it possible for
authors to submit a revised paper to address all reviewers’ concerns.
More specifically, after the first round of review, authors may receive one
of the following four decisions: accept as is, accept with minor revisions, accept
with major revisions, and reject. In the middle two cases, authors need to prepare
a revision within a certain time limit to resubmit to the journal. Accompanying
the revised manuscript, authors also need to prepare a clear and detailed point-to-
point reply to all reviewers’ comments, especially on the negative ones, in a response
letter. Here authors can give a detailed rebuttal if they believe that the reviewers
are wrong. When authors believe that the reviewers have made good suggestions,
they should improve the paper, and specify how and where in the article they have
addressed these comments. The response letter can be long (e.g., 10 pages), and
should be carefully written.
If the paper is initially accepted with major revision, after the revised paper
is submitted, the paper will usually be handled by the same AE (Associate Editor),
and reviewed by the same reviewers. The second-round revision can still be re-
jected, if reviewers are not convinced and satisfied with your revision, or it might
result in a second major revision (only in rare cases), minor revision, or acceptance
(happy ending!). If the second-round decision is a minor revision, you should
submit the revised paper, and another point-to-point response letter, similar to the
one for the major revision; but this one can be shorter than that for a major revi-
sion. Usually, the revised paper will be checked by the associate editor in charge,
along with a few designated reviewers, only to see if all of the minor concerns are
addressed. If so, the paper will be accepted.
Occasionally, during these rounds of review you may think that you are being
treated unfairly. In that case, you can always write a letter to the EIC to complain.
Compared to conference papers, the process of submitting and revising a
journal article is often lengthy, but it provides very good training for a student and
researcher alike. It is in this process that more detailed weaknesses of the manu-
script can be brought out and addressed. Thus, despite the lengthier time delay in
getting a paper published, it is still worthwhile to submit an extended version of a
conference article to a journal, in order to “close the book” after fully exploring an
idea. For this reason, journals are often considered to be archival.
When extending a full-length conference paper to a journal article, how
much extra material is often included? On this question, the answer varies from
journal to journal. In some conferences, such as the computer graphics conference
ACM SIGGRAPH,3 most conference articles can be directly transferred to a cor-
responding journal, such as the ACM Transactions on Graphics journal in this
example. This is done under an agreement between the conference and journal
on copyright matters and on how papers are cited, so that only one of the conference
or journal versions is cited at a time and citations are not counted twice.
In other cases, full-length conference papers need to add at least an additional 25
to 30% of new material before being considered for journal publications. This ad-
ditional material can be more experimental results, more theoretical results, more
discussions and comparisons with related works, or a combination of the above. In
this case, we suggest that you write a cover letter to the EIC detailing what is new
in the journal submission compared to the published conference paper.
As you can see, the process of paper review in a typical journal is quite dif-
ferent from that in a conference. While the review of conference papers is a “one-shot”
process, meaning that the review results cannot be disputed once the final decision
is made, for a journal paper the authors can often engage in rounds of communi-
3 Association for Computing Machinery, Special Interest Group on Graphics and Interactive
Techniques; http://www.siggraph.org
cations via the response letters, and thus the decision process is more interactive.
In addition, because a journal paper is often longer in page count, authors can write
more thoroughly about a subject. Thus, we strongly suggest that Ph.D. candidates
should try their best to submit and publish 1–2 top-quality journal papers reflect-
ing their Ph.D. work, especially if they plan to enter academia.
• • • •
CHAPTER 6
Misconceptions and Tips for Paper Writing
In the previous chapter, we gave an overview of the process of paper writing,
submission, and reviewing. In this chapter, we focus on some common and major
misconceptions and flaws that junior researchers and graduate students often have
when they write their first research papers. These flaws are language-independent —
that is, native English speakers often make them too. Ironically, it is often much
harder to correct those language-independent flaws than grammatical errors in
English, and thus, supervisors may need to be forceful in correcting them, and
correcting them early, before the Ph.D. thesis is written. Publishing a series of
papers in competitive journals and conferences during Ph.D. study is the best way
to validate not only the thesis topic, but also the good presentation style of the
papers.
6.1 “IT’S SO OBVIOUS THAT OUR PAPER IS GREAT”
One common weakness of papers is that the authors do not make a strong argu-
ment for the research in their paper. They might think that the reviewers can see
and conclude easily that the paper is great. This may be due to the culture and
environment that they grew up in; e.g., some cultures believe that “we should be
modest,” and thus we should not stress the importance of our work too much.
However, as we saw earlier, publishing in top journals and conferences is highly
competitive, and thus it is your responsibility and to your benefit to argue, as
strongly as you can without exaggerating, that your work is novel and significant.
Indeed, a research paper is essentially an argument about one central theme: our
research work makes novel and significant contributions. This central theme is
almost always supported by the following logical steps:
• The problem is important in advancing knowledge. If the problem you study is not important, then why study and write about it?
• Previous works A, B, . . . have been done, but they have certain weaknesses. Most likely some previous works have been done, and if they are already perfect, there is no need for you to study it.
• We propose a new theory/method/design/process Z. You should emphasize the novelty of your work. Is it the first time that Z is proposed? Is Z intriguing and surprising? If so, you should say it in the paper.
• We prove/demonstrate that Z is better (at least in some aspects) compared to A, B, and others. Can you prove it theoretically? Do you run extensive experiments to demonstrate the superiority of Z compared to previous works and the state-of-the-art (see Chapter 4)? This supports the high impact and significance of your work.
• Discuss Z’s strengths and weaknesses. You should also be upfront about the weaknesses of Z. A research paper is not like product promotion—it must be honest, fair, and accurate. The weakness also leads to future work of Z, usually described in the Conclusion section of the paper.
Each component should be further supported and justified, as strongly as
you can, by your paper. For example, when you claim that the problem is impor-
tant, you can support it by showing who else also claimed this importance in the
past, citing many previous publications on the problem, or referring to its use in
real-world designs, engineering work, and applications. When you claim that your
new theory or method is better than a previous one, you should be able to prove
it theoretically, or run extensive experimental comparisons and perform statistical
significance tests on the results (see Chapter 4 for how to justify the superiority of
your work). If you show that your new method has been deployed in practice, such
as industrial applications, it would be even better. The more support you have for
your argument, the more convincing the paper will be, and the greater the chance
that the paper would be accepted.
If you are the first one to propose the new theory, paradigm, or method, you
should claim it in your paper: “To the best of our knowledge, we are the first to
propose . . .”. Be explicit and forceful, and avoid ambiguity. Use the active-voice
sentence structure to describe what you have done (we propose . . . ; we demon-
strate . . .); a passive voice (it was proposed . . .) is confusing as it is unclear who
actually proposed it. Make sure to distinguish what your paper is proposing and
solving from what previous works have proposed or solved.
Note that making a strong argument for your paper does not mean that you
should exaggerate your claims. In fact, you should never claim anything more than
what you can support and justify in the paper. Opposite from the statement “I am
modest,” we sometimes see the mistake of claiming “I am the greatest.” Some au-
thors make a grand claim (such as “We have solved the AI problems completely”),
but grossly fail to support it. This casts a very negative opinion about the paper,
which can be easily and quickly rejected.
Note that while making a strong argument about the superiority of your
research, use a polite tone, such as “to the best of our knowledge,” “as far as we
know.” Similarly, use a polite tone when discussing the weaknesses of previous
works, such as “it seems that the previous work . . .”. Back up your claims with
strong evidence.
If you plan to submit your paper to a specialized conference or journal, you
may not have to say, literally, that the problem you are studying in the paper is
important. This can be implied in the description of the problems. But none of
the above logical components should be missing in the paper.
To summarize, your paper is an argument, and should have one central
theme: your paper contains novel and significant contributions. You should
emphasize your contributions. You should argue for your paper, as strongly as you
can, and at the same time, not exaggerate your contributions. It is a tricky balance.
Be positive, accurate, upfront, and balanced.
6.2 “IT IS YOUR RESPONSIBILITY TO UNDERSTAND MY PAPER”
Very often, when early researchers (including Ph.D. candidates) write their papers,
they unintentionally make their papers hard to understand by reviewers and future
readers. They often believe that it is the reviewers’ responsibility to understand
their papers (hence the title of this section). They often blame reviewers for not
having enough knowledge or putting in enough time to understand their papers
(“Reviewers should read my paper more carefully and check out my published
papers cited, to fully understand this paper.” Or “Reviewers are so careless. They
point out . . . but it is clearly stated at line 23 of the right column on page 5!”).
Furthermore, some researchers even think it is a “good thing” to make their papers
hard to read (“If my paper is easy to understand, it must be too simple!”). They
might think that it is natural for research papers to be hard to understand (“My
paper should be hard to understand—it is based on many years of research and
my Ph.D. thesis!”).
In the last section, we described one objective of writing a paper: to make an
argument, as strongly as you can, for the novelty and significance of your paper.
Here, we want to add another objective: you need to do so as clearly and as simply as
you can. The two objectives are, surprisingly, not contradictory to each other.
Why is the objective of clarity so important? A major reason is that review-
ers are very busy researchers themselves, and they won’t spend hours to understand
every page of your paper. If reviewers don’t quite understand your paper due to
your poor writing, they tend to be frustrated, and give you low scores and nega-
tive recommendations (Section 5.3). The other reason is that after your paper is
published, as we discussed in Section 3.3, readers, who themselves are researchers,
need to get your main ideas and results quickly. Research papers should be written
in a straightforward, direct, simple, and clear manner anyway. It is a significant
milestone in a graduate student’s career when he or she can consider “This paper
is so simple,” as a compliment rather than a criticism, as long as the simpler de-
scription addresses the same problem. As we will show you, it is actually not too
hard to do.
6.3 THE 10/30 TEST
Whenever you are writing a research paper, or a report or thesis for that matter,
you must remember at all times that you are writing for your readers, such as re-
viewers, other researchers, and thesis examiners. You know your work so well after
many years of research, but this is not true for your reviewers. Often it is because
you know your work so well that it is hard for you to write for others. You must
put yourself in reviewers’ shoes, so to speak. This is the same as designing prod-
ucts for your target clients and customers—you must think from the perspectives
of your end users.
To write for reviewers, you must first gauge the knowledge level of average
reviewers. If you are submitting your paper to a general conference or journal, a
gentle introduction to the problem you study is needed. You should assume that
the reviewers have little knowledge on the specific research problems you work
on. You should certainly assume that reviewers have never attended your weekly
research meetings, nor have they heard your presentations on the work! All they
have at hand is this paper that they are reading. If they have questions while re-
viewing your paper, they cannot ask and communicate with you!
One way to check if your paper is clearly written is to judge if it can pass
what we call the “10/30 test”: for average reviewers, can they get a good idea of
what the problem you are studying is and what your main contributions are within
10 minutes? In addition, can they get a very good understanding of your paper
(assuming a 10-page conference paper) to make an acceptance decision within a
further 30 minutes? Most likely reviewers will not spend more than one to two
hours to review your paper (including writing the reviews).
Before you submit your paper, give it to some colleagues who are familiar
with the area but have not worked on your problems, to see if it can pass the 10/30
test with them!
It is somewhat of a “dilemma” that the more novel your work is (which is a
good thing), the harder it is to convince the reviewers. You must introduce
your work “gently” and support it strongly with technical details and convincing
results (such as definitions, theorems, proof, designs, methods, data, experiments,
and applications). How can you do these two “opposite” tasks well? The question
turns out to be a key point in writing research papers, and we will present it in the
next section.
6.4 TOP-DOWN REFINEMENT OF PAPERS
Top-down refinement, briefly speaking, means that you must present your work
in the general-to-specific style. This can be facilitated by a top-down section
structure of papers. Recall that, in Section 3.3, we discussed a typical structure of
a research paper (duplicated here in Figure 6.1) and described how such a struc-
ture can help you to read the papers quickly to get the main ideas and results. You
should write your paper in this way too, so that others, especially reviewers, can
get the main ideas and results of your paper quickly and easily.
Let us now explain this key issue in more detail. First, recall that your paper
should have the following logical steps supporting the central theme, briefly,
• The problem is important
• Previous works have been done but they have certain weaknesses
• We propose our new theory/method/design/process
• We prove/demonstrate that ours is better than previous works
In your research paper, you need to “tell your story” of these
central criteria to reviewers several times (four times, actually) with different levels of detail. That
is, you need to tell your story in 10 words (as in the title), 200 words (in the Abstract),
1,000 words (in the Introduction), and 5,000 words (in the main body), with increasing
levels of detail. More specifically:
Title: The title of the paper is the highest level of summary of the central theme.
It may contain a few to a dozen or so words (say 10 words), and it must be at a
very high level (i.e., not too technical). It should convey a positive and exciting
tone by using, for example, words such as “improving,” “novel,” or the “hot” topic
that your paper is about, such as “social networks.”
If writing a paper is hard work, then creating a good title is an art. The title
of your paper gives the reader a first impression, and this is why some seasoned
researchers even go as far as say that the title equals half the paper. A first rule of
thumb is that you want your paper to stand out in a pool of many other papers of
the same type, either as a paper in an issue of a journal, which might consist of 10
or 20 papers, or in conference proceedings with 100 or more papers. Another rule
of thumb to follow is that your title should be no longer than one line; a long title gives an
indication that the work might be too specific, or the authors are not good at sum-
marizing their work. Another rule of thumb is to ensure that no one has published
the same or very similar title before. This rule can be enforced by checking on a
search engine, which often returns a list of closely matched titles. You may try to
enter some potential titles in a search box and see how many hits you will get as a
result, and whether the returned results resemble the paper you are writing.
Abstract: The Abstract of the paper may contain a few hundred words (say
200 words). It must be a high-level 200-word summary of the complete central theme,
telling readers about your work. In Figure 6.1, we use a “200-word story” and
“elevator pitch” to describe the Abstract. That is, the Abstract must be at a high
level, positive, simple, and it tells a complete “story” about your work in 200 words.
It should attract people’s attention, like an elevator pitch when you tell investors
(reviewers here) how great your product (your paper) is within a few minutes.
Here is a sample Abstract. We add some comments in [. . .].
General web search engines, such as Google and Bing, play an important
role in people’s lives. [The problem is important] However, most of them return a
flat list of webpages based only on keyword search. [Previous works have certain
weaknesses] It would be ideal if hierarchical browsing on topics and keyword
FIGURE 6.1: Outline of a sample paper, and an illustration of top-down refinement.
search could be seamlessly combined. In this paper we report our attempt toward
building an integrated web search engine with a topic hierarchy. We implement a
hierarchical classification system and embed it in our search engine. [A very high-
level summary of our work] We also design a novel user interface that allows users
to dynamically request more results when searching any category of the hier-
archy. [Emphasizing novelty and usefulness] Though the coverage of our current
search engine is still small, [be upfront on the weakness of the work] the results,
including a user study, have shown that it is better than the flat search engine, and
it has great potential as the next-generation search engine. [Using positive words
to show the significance and impact of the work.]
As you can see, the abstract presents a complete line of argument of the
central theme at a very high level without using technical jargon. It also casts a
positive and exciting tone about the novelty and significance of the work.
Introduction: The Introduction section usually takes 1–4 pages (on average,
2 pages) to write. In the Introduction, you will re-tell your central theme again
with more explanation of every component of the argument than in the Abstract.
The Introduction should also emphasize the background, namely, why the prob-
lem is important; who has studied it; who should read the paper; and where
similar techniques have been used, etc. The Introduction should still be written at a
high level, avoiding technical terms and details as much as you can.
What is the relation between the arguments, such as “the problem is im-
portant,” in the Abstract and Introduction sections? Simple! Each argument in the
Introduction is simply a refinement of a corresponding one in the Abstract, in the same
order. A simple and rough rule of thumb is that each sentence in the Abstract can
be expanded and refined into roughly three to 20 sentences in the Introduction
(20 sentences can form a paragraph). If the Abstract consists of sentences A, B,
C, . . . , then the Introduction will be A1, A2, . . . B1, B2, . . . C1, C2, . . . The
set of sentences Ai, i = 1, 2, . . . , is simply an expansion of A. The same applies to
B, C, and so on. If the sentences and logic flow in the Abstract are modified in a
revision, then be sure to remember that subsequent revisions should be made in
the Introduction to reflect the changes.
Here are the beginning sentences in the Introduction that accompanies the
sample Abstract we mentioned earlier in this section. Again, our comments are
included in [. . .].
General web search engines (such as Google and Bing) play an important
role in people’s lives in helping people to find information they want. According
to SEMPO (2009), more than 14 billion searches are conducted globally in each
month. However, most of them return a flat list of webpages based only on key-
word search. As queries are usually related to topics, simple keyword queries are
often insufficient to express such topics. Many keywords (such as
china, chip) are also ambiguous without a topic. It would be ideal if hierarchical
browsing and keyword search can be seamlessly combined, to narrow down the
scope of search, and to remove ambiguity in keywords. The original Yahoo! Search
Engine is a topic directory with hierarchy for easy browsing. However, . . . [back-
ground about search engine development]
You can see that the first two sentences in the Introduction are further
elaborations of the first sentence in the Abstract. The following sentences are fur-
ther elaborations of the second and third sentences in the Abstract section. The
Introduction also provides the background on search engines, and the motivation
and rationale of this research.
At the end of the Introduction you usually describe the layout of the rest of
the paper. You tell the readers what each section of the paper will describe very
briefly. Be sure to use an active voice as much as you can. For example: We orga-
nize the rest of the paper as follows. In Section 2, we discuss previous work on . . .
In Section 3 we describe. . . . Finally, in Section 4 we show . . . etc.
Previous work: You may have a section on reviewing previous works, such as
Section 2 in Figure 6.1. This is a further expansion of the corresponding part in
the Introduction, which in turn expands from the Abstract. The key here is to
demonstrate to the readers that you are familiar with the state of the art, so that
your further statement on your contributions has credibility. Reviews of previous
works do not need to be long, but you must draw out the differences between previous
works and your work.
Sections on your work: Your main original work will be described here. As it
may be quite lengthy, you may need several sections for it. These sections make a
detailed argument on your central theme. These sections describe what your novel
theories, methods, or processes are (Section 3 in Figure 6.1) and why they are sig-
nificant; that is, why they are better than previous works and the state-of-the-art
(Section 4 in Figure 6.1). You can have more sections (such as new theory, deploy-
ment, and so on) for your work. The overall structure of these sections is simply a
further top-down refinement of the corresponding parts in the Introduction.
In fact, you should apply the top-down refinement at all levels in these sec-
tions. Your top-down refinement should be supported by the organization of the
paper. That is, you create high-level subsections (such as Sections 3.1 and 3.2), to
describe the high-level ideas, and then use low-level subsections (such as Sections
3.1.1, 3.1.2) to describe more technical details, as shown in Figure 6.1.
The top-down refinement is often used within each paragraph. Usually one
paragraph should have one central idea. It often starts with a summary sentence
(such as “Active learning has been shown to reduce the number of labeled exam-
ples significantly”). Then you further elaborate this summary sentence by showing
why and how in the rest of the paragraph.
Conclusion: After you describe your contributions in great length, near the end of
your paper, you need to write a summary or conclusion section. In this section, you
re-tell the whole story again at a high level. The Conclusion section can be similar
to the Abstract, with more details on your novel and significant contributions now
that you have presented all the specifics, along with a discussion of future work.
With this top-down refinement structure, it becomes very easy for reviewers,
and future readers, to understand your ideas. At the same time, they can go deeper
into your technical details and results to any level of detail they wish. Reviewers
can spend 10 minutes on the Abstract, Introduction, and some top-level descrip-
tions to understand the main contributions of your work at a high level. This helps
them understand quickly the rest of the paper by spending 30 minutes or so to
hover between high-level subsections, and if they want to, go deeper into techni-
cal details at low-level subsections. Only this way can your paper possibly pass the
“10/30 test” described earlier in the section.
Do you find that our book is easy to understand? That is because of how we
framed it: we have been using the top-down refinement in writing this chapter,
and the whole book. Now, please do the same in your papers!
6.5 CREATE A HIERARCHY OF SUBSECTIONS AND CHOOSE SECTION TITLES CAREFULLY
As we have seen the importance of writing research papers in a top-down refine-
ment fashion, it is extremely important to create levels of (sub)sections and to
choose section titles very carefully to reflect the overall logical structure of the paper.
Note that in some fields and journals, papers often have a fixed structure and sec-
tion titles (such as sections on Introduction, Hypothesis, Methods, Results, and
so on). If this is the case, you should follow the convention. Avoid a very long
section without subsections. For example, if the Review section is very long, split
it into several subsections (Sections 2.1, 2.2, etc.), each perhaps discussing one
type of previous work. On the other hand, avoid very deep subsections (such as
3.1.2.1.1). Usually you should not go deeper than 3 or 4 levels in structure. If you
have to use deeper subsections, consider including two or more first-level sections
(for example, Section 3 for the new theory and Section 4 for the new method)
rather than putting them into one big section.
At the beginning of each section and subsection, you should write a brief
summary or introduction to this section. Thus, between Section 3 and 3.1, there
should be some introductory or summary paragraph(s) about Section 3. Similarly,
you should write some introductory paragraphs for Section 3.1, before you create
and use Section 3.1.1. This is exactly the top-down refinement you apply at all
levels in the paper.
The writing process itself can be a top-down refinement process. When our
graduate students are ready to write their second or third papers by themselves, we
usually work with them on the Abstract carefully, together with a 2-3 level outline
(including subsection titles and some very brief contents) for the whole paper.
This will make the whole paper logical, consistent, and coherent. Then we ask our
students to “fill in” details in each section, also in the top-down manner. This will
make it easy for the students to write great papers.
6.6 TIPS FOR PAPER WRITING
Above we discussed a number of language-independent misconceptions in paper
writing and how to correct them. Here we offer a number of tips for writing good
papers, resulting from our own experiences as well as extensive interactions with
other authors and reviewers alike.
6.6.1 Use Certain Words to Signal Readers
When you describe your work at a high level (in the top-down refinement pro-
cess), use certain keywords to signal readers so they know which part is important
and which part can be skipped. The following are some examples. We underline
those keywords here for illustration.
“We will first describe our method at a high level here. It consists of four
major steps. We will describe them briefly below.”
“The intuition of our method is that . . .”
“We will describe a proof sketch first to give some intuition about the theo-
rem and the proof.”
“Generally speaking, our method is . . . .”
When you start describing technical details, after a high-level description,
you should also signal readers about this. You can use keywords such as:
“More specifically, . . .”
“For example, . . .”
“We provide details of Step 3 as follows. . . .”
“Below we give a detailed breakdown of our experimental results.”
These words basically signal reviewers that if they are comfortable with the
general idea, they can skip the details here!
6.6.2 Use Simple Sentences, but Not Simpler
Einstein once famously said: “Your theory should be as simple as possible, but not
simpler.” The same can be said in writing papers. Often, when beginning research-
ers write papers, they use long and complex sentences to describe their ideas and
work. Most of the time, the writing can be simplified to greatly improve readability.
Here is a paragraph from a draft paper written by a capable non-native English-
speaking Ph.D. student:
“There are two basic functionalities that we expect our system to offer: an
effective knowledge organization model to represent the panorama of knowledge
in levels and scales, named MKN model; a multi-faceted eBook retrieval system
based on MKN model to ease the searching and cognitive process for users, named
MIQS, using Facet Grouping, Multi-scale Relevance Analysis, and Information
Visualization to overcome the difficulties above.”
You will notice that the whole thing is just one sentence! Maybe grammati-
cally it is correct, but it is certainly hard to follow. Interestingly, when we asked the
student why he wrote such a long and complex sentence, he replied that he had
been preparing for the Graduate Record Examinations (GRE) test. It is a required
standardized test for admission to many graduate schools, and it includes tests for
reading comprehension. Do you really want to test reviewers’ reading comprehen-
sion level with your paper?
We revised the long and complex sentence to the following eight short and
simple ones:
“There are two basic functionalities that we expect our system to offer.
The first functionality is an effective knowledge organization model.
It represents the panorama of knowledge in levels and scales. We call
it the MKN model. The second functionality is a multi-faceted eBook
retrieval system. It is based on the MKN model to ease the searching
and cognitive process for users. We call it MIQS. It uses facet group-
ing, multi-scale relevance analysis, and information visualization to
overcome the difficulties mentioned earlier.”
Reviewers will certainly appreciate simple, straightforward, and logical sen-
tences in your paper.
Even when the ideas, methods, or results you want to express are complex,
you should try your best to explain them as simply as possible. Often
figures and diagrams can be created to help this effort, and analogies can be drawn
to simplify the explanation.
6.6.3 Use a Small Set of Terms throughout the Paper
Another important method to make your paper easy to understand by reviewers
and readers is to use a small set of technical terms throughout the paper. Often
several technical terms are equivalent in meaning, and different authors may use
different terms in their paper. Unlike writing novels and poems in which you
would try to avoid using the same terms, in research papers, it is actually better
to keep using the same technical terms. Using different terms for the same thing
would confuse readers greatly. For example, in the Information Retrieval area, a re-
search field that concerns how documents can be indexed and searched effectively,
we often talk about the “precision on the test data.” But other terms are also used
in other papers, such as “precision on testing data,” “precision on validation set,”
“test-set precision,” “percent of correct retrieval on test set.” If you use all of these
interchangeably in your paper, you will cause a huge amount of confusion.
Thus, it is best to stick to one term for the same meaning and use it
throughout the whole paper. If your choice is unconventional, you can formally
or informally define the terms early in the paper, or even include a table of term
definitions early in the paper, and then use them throughout the paper.
6.6.4 Use Examples Early and Use Them throughout the Paper
When you explain some abstract concepts or complex modeling processes, it is
often best to use one or two concrete examples. Use examples early in the paper.
That is, do not overwhelm readers with too many abstract concepts and complex
processes before showing examples. By then, the reviewers may have gotten lost
(and worse, they may lose interest in your paper). A good approach is to introduce
one or two examples early, and to further develop the same example(s) throughout
the paper. This is why they are often called “running examples.” You can see that
we have been using several graduate students as examples throughout this book,
and hopefully our approach has made it easier and more fun to read!
6.6.5 Use Figures, Diagrams, Charts, Photos, Tables
Just as examples can greatly enhance the comprehension of your paper, so too can
diagrams, flow charts, photos, figures, drawings, illustrations, tables, and so on. Use
them lavishly in the paper. You will find that we have used many figures in this
book. “A picture is worth a thousand words.” Often it is much faster to view and
understand charts and images than to read hundreds of words that explain the
same thing. When you make your paper easy to understand and save reviewers’
time, they will appreciate it.
6.6.6 Write about Your Motivations and Justifications
As we emphasized earlier, it is extremely important that you write your paper
for your reviewers and readers. You must write from their perspective. They may
know the area well, but they may know little about the specific research problem
studied in your paper. Thus, when you write about your novel theories, methods,
processes, and so on, you should write about the motivations—why they work—at
least intuitively. If there is a new mathematical equation, explain why you chose
its particular form and parameters.
Writing about motivations may not be so easy, as you have worked on your
research problem for months or even years. Try to recall your initial motivations,
or try to explain your work to some colleagues who do not know your research
problem. Or, you can keep track of the motivations and write them down as you
go, as part of your overall documentation for your research process, so you can refer
to them and look them up later.
Similar to motivations, you must justify certain important choices you made
in your research. For example, why did you choose those particular parameters in
your experiments and datasets? Why did you combine those processes but not
others? For each choice you made, you should try your best to justify it, at least
briefly. Reviewers may not “buy” your justifications completely, but if they do not see
them, they can easily raise questions and cast negative views of your paper.
6.6.7 Pose Potential Questions and Answer Them Yourself
As you are writing for your reviewers who would be reading your paper for the first
time, you must think hard about one thing: what questions would they have while
reading my paper? These questions can come up anywhere in the paper. For ex-
ample, in the Introduction, after you briefly describe the problems and your novel
solutions, reviewers may wonder why you did not use some previous methods,
perhaps with some obvious modifications, that could be effective in solving your
problems. In the experiment section, reviewers may wonder why you did not use a
different material, method, or a different optimization process.
Writing about motivations and justifications will answer some of those
questions. However, sometimes there is no natural place to answer some of these
questions. In these cases, you can pose these questions yourself, at the most likely place
that reviewers may ask them, and then answer them yourself!
The easiest way to pose such questions is “One might wonder . . . ,” or “One
might argue that . . .” Then provide a succinct answer. If you raise questions at the
same time as reviewers would while they are reading your paper, and you provide
satisfactory answers, you cannot imagine how satisfied the reviewers will be after
reading your paper!
6.6.8 Emphasize and Reiterate Key Points in Your Paper
Often we hear graduate students complaining about reviewers’ apparent careless-
ness when reading their paper. They point to a question from the reviewer whose
answer actually appears in the paper.
As we discussed earlier, reviewers are very busy researchers themselves, and
they often have very limited time in reading and reviewing papers. If certain mes-
sages are very important, you should emphasize them, and reiterate them several
times in the paper, especially in high-level subsections. It may also be worthwhile
to present the information in multiple ways, as part of sentences, diagrams, lists, or
headings. This allows you to reinforce important information and concepts. How
many times and in how many different ways have we reiterated the key point that
you should use top-down refinement in writing papers in this book? This should
give you an example.
If your method has some small but key differences from the previous
methods, and you think reviewers can easily overlook them, you must point this
out, and emphasize these key differences. You should also reiterate such key
differences several times in the paper, including perhaps in the Introduction and
Conclusions.
6.6.9 Make Connections throughout the Paper
Another important approach, which is often overlooked by young researchers,
is to make ample connections within your paper. In our book there are many
places we use “See Section . . . for more details,” “As we discussed in Section . . . ,”
“Recall that . . . ,” and so on. In fact, whenever you write about something that has
been mentioned earlier, or will be mentioned later in the paper, you can consider
making a connection there. This makes your paper coherent, consistent, and easy
to understand.
6.6.10 Format Papers for Easy Reading
We have presented many approaches and tips on how to make papers easier for
reviewers and readers to understand. There are more such tips, but we will not
go over them now. Instead, we want to talk about some formatting details. This
shows that even “trivial” matters such as formatting should be considered when
you write your papers.
One annoying thing we often encounter when we review papers is that
figures, charts, and the text explaining them are not on the same page. This can
happen easily when you use LaTeX to format the paper, which places the fig-
ures or charts automatically. Imagine that reviewers have to flip your paper back
and forth to match the text with the diagram—a frustrating process. We often try
to adjust the paper manually so that the figures and accompanying text are on the
same page, and even near each other.
Another related issue is the captions of figures and tables. Often you can choose to
put explanatory text either in the main body of the paper or in the captions. We
prefer the explanatory text to be put in the captions. Captions are always very near
(at the top or bottom of) the figures or tables.
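As a concrete illustration, here is a minimal LaTeX sketch of one common way to do both: encourage a figure to stay near the text that discusses it, and carry its explanation in the caption. The file name results-overview and the caption text are only placeholders, and the graphicx package is assumed to be loaded in the preamble.

    \begin{figure}[!htb]  % try "here" first, otherwise top or bottom of the page
      \centering
      \includegraphics[width=0.8\linewidth]{results-overview}  % placeholder file name
      \caption{Accuracy of the proposed method versus the baselines.
        The explanation of what to look for in the figure can go here,
        so that readers do not have to hunt for it in the main text.}
      \label{fig:results-overview}
    \end{figure}

Placing this figure environment in the source close to the paragraph that first references it (via \ref{fig:results-overview}) further encourages LaTeX to keep the figure and its accompanying text together.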
Yet another issue is the font size. Often in figures or tables, the fonts are too
small to be viewed easily. This is often due to the amount of content presented
FIGURE 6.2: An example of a paper with too many results on the page.
in the figures and tables. Many authors want to present as much information as
possible in figures and tables to show “how much” work they have done. Often too
many figures are jammed in a small space, too many curves in each figure, and too
many rows and columns in the table. All of these result in small font sizes.
Not only are small fonts hard to view, but with too much information in one figure
or table, reviewers can also be confused about where to look! Remember you cannot point
out with your finger which column to look at while reviewers are reviewing your
paper! Here is an example of a paper with too many results on the page (see
Figure 6.2).
Young researchers are often eager to present as much information as they
can fit in the limited space, thinking that perhaps this helps show that they have
done a lot of work. In reality, it is often beneficial to present only a smaller amount
of the most relevant and key information, especially in figures and tables, with adequate
font sizes.
6.7 OTHER MISCONCEPTIONS AND FLAWS
There are many other misconceptions and flaws in writing research papers.
However, as this book focuses mainly on the overall process of doing research for
graduate students in science and engineering, we selectively present some key ones
here. We plan to write a book exclusively on research paper writing in the future to
give this topic complete coverage. Here are some of the other often encountered
misconceptions and questions:
• “My work is really elegant and beautiful. It has a lot of math: definitions, theorems, and long proofs. Don’t blame me if reviewers cannot understand it!”
• “To emphasize that our method is so much more superior, I use a lot of bold, italics, exclamation points, and super strong language in the paper. My supervisor removes most of them. Is he insane or what??!”
• “My supervisor always corrects little things in English that appear everywhere even in newspapers, books (even this book), and magazines. For example, he would change ‘it can’t . . .’ to ‘it cannot . . .’. It’s really a waste of time, isn’t it?”
• “I am putting so many new contributions in the paper, including new theory, proofs, algorithms, experiments, and even an implemented system, so that it cannot be rejected by reviewers.”
• “My supervisor asked me to revise the paper further. I think it is already perfect. Besides, I wrote it, so it is hard for me to find my own errors and make further improvements.”
• “My work is so much better than previous work, but it still got rejected. I can tell from the reviews that the reviewers are authors, or friends of authors, of the previous work I compared to. How can reviewers accept my paper if their own work is inferior to mine? Isn’t it clearly a conflict of interest?”
• “The reviewers of my journal submission are so unreasonable. I don’t think I can ever revise the paper to satisfy their demands. I should try another journal.”
6.8 SUMMARY
The two chapters on paper writing (Chapters 5 and 6) are quite long, but they
are extremely important for young researchers. Too often young researchers write
papers that are unnecessarily hard to understand, and they believe that it is the
reviewers’ responsibility to understand their papers. Clearly this “flaw” is not re-
lated to English or any specific language they are using. A lot of effort can and
should be put into writing an easy-to-follow paper that can pass the “10/30” test
we mentioned earlier in the chapter. This actually should also be true for writing
essays, homework assignments, reports, or any long documents.
Thus, it is not reviewers’ responsibility to understand your paper; it is your
responsibility to make it easy to understand. You can do that using these ap-
proaches and tips we have discussed in Chapters 5 and 6, from the top-down
refinement strategy to “trivial” formatting details.
Notice that we have presented two objectives of paper writing: one is to
make an argument, as strong as you can, for your paper; the other is to present
your arguments and results as clearly and as simply as you can. These two objec-
tives are not conflicting, and a huge amount of time and effort can be put in to
optimize them.
Unfortunately, the “global optimal solutions” for these objective functions
are unknown, and can never be reached completely. That is, there are always ways
to improve your papers (including research results in the papers). The more time and
effort you put in, the better the paper will be. Thus, doing research and paper writing
are a never-ending optimization process. When we submit a paper, it is not because
it is already perfect; rather it is because we have put in as much effort as we can
before the deadline (most conferences have deadlines), or we believe that the paper
is on par with, or better than, the quality of published papers in the same journals
or conferences, and we want to bring closure to the current work.
• • • •
CHAPTER 7
Writing and Defending a Ph.D. Thesis
In previous chapters, we discussed how to conduct effective research and publish
research papers. We have discussed how to find good ideas for a thesis in Sec-
tion 3.5, and how to set up a thesis plan early in Section 3.7. In this chapter, we
put all these ideas together and give an overview of one of the most important
functions of Ph.D. study: to successfully write and defend a Ph.D. thesis. You will
find that writing a thesis is quite similar to writing a research paper (see Chapters
5 and 6), but many important differences also exist.
7.1 THESIS AND DISSERTATION
A Ph.D. thesis or dissertation is the most important piece of work in your early
career. It is a logically organized document that convincingly states your solution
to a challenging problem. It should be organized such that your commit-
tee members can understand and appreciate it. The committee of thesis
examiners is tasked with finding holes in your thesis, and a Ph.D. defense is organized
so that, in the space of two or more hours, you must bravely and convincingly de-
fend it in front of the examiners. Your entire graduate life is essentially condensed
to this activity!
Sounds tense and terrible? Actually it is not. If we take you as a product,
and your Ph.D. program as a piece of software, then the difference between when you
first entered the program and when you exit it successfully is
a person (you!) who is more confident, logical, and knowledgeable in a focused area
of research, and who knows the problem and its solution inside out.
enter your Ph.D. defense, you are the most knowledgeable person in the proposed
area of research (more than most thesis examiners), and you are there to tell your
committee just that.
In the traditional sense, a thesis is a hypothesis or statement about the
world. For example, a thesis might be that “the Earth goes around the Sun, but
not the other way around.” Another example of a thesis might be that “a distance
measure on the data differences can be found to help build a piece of classification
software that performs better than existing classifiers.” However, a dissertation is
an organized presentation of your thesis, in which you present the thesis and its
entire context, how others have approached it, and your particular
solution. A dissertation includes all the evidence you have to defend the thesis, and all
your thoughts that arise as a result of the thesis. Despite this difference, we often
treat them the same. Thus, we often hear advisors and students say, “I am writing
my thesis.”
In Section 3.5, we briefly discussed the process of going from research ideas
to a thesis topic, and mentioned that passion
for research, technical strengths, and area popularity are among the important factors
to consider in choosing a thesis topic. In Section 4.2, we further discussed
how to systematically filter the potential thesis ideas based on impact, importance,
and knowledge of how to break down a complex problem into simpler ones. When
you enter the actual process of writing a Ph.D. thesis, you will find that all of these
skills and conditions will play an important role.
The procedures for writing and defending your thesis differ across
universities and countries, but the main parts are the same. There are
basically two ways to organize your thesis: top-down or bottom-up.
7.2 THESIS ORGANIZATION: TOP DOWN OR BOTTOM UP
By the time the student sits down and writes the thesis, typically the student will
have accumulated a number of documents as resources. These might include the
student’s published conference or journal articles, some accepted and some just
submitted. They also include the student’s proposal documents, and other drafts
that the student has written. Now the task for the student is to organize them into
a coherent Ph.D. thesis.
First, consider the top-down process (also refer to the top-down process of
paper writing in Section 6.4). The top-down approach starts from a central thesis;
for example, for a computer science student, a problem such as “How to include a
human in the loop when constructing a text classification model?” can be the the-
sis. It should be simple and concise, and easy to state even to non-domain experts.
This is particularly important for you to convince the external Ph.D. committee
members when defending your thesis, because by rule the committee must include
someone outside of your area of study. For example, a Physics professor might be
asked to be the external examiner for a Computer Science Ph.D. student. Then,
using the Research Matrix Method that we described in Section 4.3, the thesis
might be broken down horizontally or vertically into several subproblems, each
with a coherent introduction, existing solution survey, and your proposed solution,
along with all experimental and theoretical evidence for you to defend it. Each
subproblem can then be further broken down into subsections, each focusing on
a sub-subproblem of its own, but all connected in some logical fashion. This is
similar to the “thesis plan” we mentioned in Section 3.7.
This top-down approach, illustrated in Figure 7.1, has the following
advantages:
• It allows the student and advisor to know about the central thesis early in the program.
• It can easily reveal missing problems and solutions early in the study, so that a remedy can be designed accordingly early in the program.
• It allows the student to have good training in grant proposal writing, and ensures good management style.
• It makes it easy to define coherent terminology and symbols used throughout the thesis.
Figure 7.1: The top-down approach.

However, the top-down approach is also not without weaknesses. It is
often very challenging for a junior researcher to foresee all subproblems and
solutions early in the process, as research planning is typically iterative: you plan
and then you use the feedback of your plan execution to replan, until the plan is
satisfactory.
In contrast to the top-down process of organizing your thesis, a bottom-up
process is often favored by many (see Figure 7.2). This applies when the student
has published several research papers and has some written resources scattered
around. A student using this strategy acts as a gold-miner looking for nuggets in
a pile of rocks.

Figure 7.2: A bottom-up process for building a thesis.

Like a miner, the student has to discover a subset of the material
that has a coherent theme in the writing. Again, like a miner, the student may not
take all the raw material in for the content of the thesis, and instead will pick and
choose. This is where the student has to have a keen eye on what is necessary to
compose the thesis, and what is not. A risk is for the student to mistakenly take
the thesis as a job report that includes all that the student has done, thus including
some material that has been published by the student, but has little to do with the
main theme of the thesis.
Summarizing, the advantages of the bottom-up process are:
• It gives the student more flexibility in exploring different potential themes early in the program.
• The student has a “fat” CV to take to job interviews after completing the thesis, where the diversity adds to advantages in job hunting.
Some of the disadvantages of the approach include:
• There is the risk that the student compiles all published papers as the thesis without selection, thus destroying the coherence of the dissertation.
• The student might spend more time threading together the works reported in papers, and put in much effort unifying the symbols and terminologies used in different papers.
In the end, whether the student prefers a top-down or a bottom-up ap-
proach is entirely a personal choice.
Let’s consider the earlier examples of Students A2, B2, and C2, where A2
prefers theoretical research, B2 prefers empirical and system-oriented research,
while C2 prefers research that can lead to patents and start-up companies. As
an example, Student A2 wishes to follow the top-down method for thesis writ-
ing and organization, and in so doing he finds a topic to work on first using the
matrix method presented in Chapter 4. After isolating the thesis topic to be in
the area of “transferability for knowledge transfer between domains of different
feature spaces,” he divides the problem into several subproblems, such as defining
a robust distance function, learning a representative subspace, and then transfer-
ring between two or more domains with different weights attached to the learning
models. These weights are further learned from the source and target data. As a
top-down style researcher, Student A2 pursues the problem by breaking it down
into finer and finer details.
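To make this kind of decomposition a bit more concrete, here is a minimal, hypothetical sketch in Python of the last subproblem: combining a source-domain model and a target-domain model with learned weights. This is not Student A2's actual method; the scikit-learn-style predict_proba interface, the simple grid search, and all names are illustrative assumptions only.

import numpy as np

def weighted_transfer_predict(source_model, target_model, X, w_source, w_target):
    """Blend predictions from a source-domain model and a target-domain model.

    Both models are assumed to expose a scikit-learn-style predict_proba();
    the weights express how much each domain is trusted for the target task.
    """
    p_src = source_model.predict_proba(X)   # class probabilities, source domain
    p_tgt = target_model.predict_proba(X)   # class probabilities, target domain
    blended = w_source * p_src + w_target * p_tgt
    return blended.argmax(axis=1)           # most probable class index per example

def learn_weights(source_model, target_model, X_val, y_val):
    """Pick the blending weight by validating on a small held-out target set.

    y_val is assumed to hold integer class indices matching the models' class
    order. A plain grid search stands in for the "weights learned from the
    source and target data" mentioned above; a real thesis would argue for a
    more principled estimator.
    """
    best_w, best_acc = 0.5, -1.0
    for w in np.linspace(0.0, 1.0, 11):
        preds = weighted_transfer_predict(source_model, target_model, X_val, w, 1.0 - w)
        acc = float(np.mean(preds == y_val))
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, 1.0 - best_w

Each of the other subproblems (the robust distance function, the representative subspace) would get a similarly self-contained piece, which is what makes a top-down plan easy to split into thesis chapters.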
In contrast, Student B2 pursues his research agenda in a bottom-up manner.
Recall that Student B2 is interested in scalable online recommendation algorithms
that are both efficient and effective. He publishes several papers on different as-
pects of the problem, each focusing on a different scalability or accuracy issue
associated with his methods. When he accumulates a sufficient number of publi-
cations, he then takes stock of what he has accomplished, trying to summarize it
into a coherent whole. He finally decides that he will use the material of scalable
architecture for online recommendation as his main topic for the thesis, based on
which he selects and extends the rest of the subtopics into his thesis as separate
chapters.
Finally, Student C2, who favors applications that lead to patents and start-
up companies, may select a combination of the top-down and bottom-up ways of
organizing his thesis plan. For example, the student may start by working on a
data-mining competition that also leads to several publications and patents. The
student then finds the topic of social network analysis to be new and fascinating,
and decides to plan for a thesis in this area. He designs a top-down plan to com-
plete the thesis, and works on each part in turn. In the end, the thesis appears as
a hierarchically organized “island” of work among several pieces of finished, but
different, work that is not included in the thesis.
7.3 Defending Your Thesis
Before the student celebrates the successful ending of a long student career, there
is one more hurdle to cross: the thesis defense. This is where the student has to
demonstrate to the experts and non-experts that he or she has proposed a thesis,
knows all the previous major works related to the thesis, and has accumulated
convincing evidence on the thesis. In addition, the student has to demonstrate that
he or she can effectively communicate the thesis.
In many universities, a Ph.D. defense consists of a student presentation
(45 minutes to one hour long), followed by a (closed door) question and answer
period; the latter part can go in two rounds.
A thesis presentation is unlike any conference or seminar presentation the
student has done before, as it summarizes a major project of the student. In a way,
your thesis presentation should convince the committee that in the previous five
or so years of the Ph.D. program, you have contributed to world knowledge by one or
more of the following:
• Developed a better solution to an existing challenging problem.
• Identified a new problem and designed its formulation, and offered a new solution, such that in effect you opened up a new area.
• Developed a new methodology that unifies many major approaches in the area, and offers your own new insight.
A thesis presentation should demonstrate that you have understood the
problem well, including its implications in the real world, and that you:
• Know all the previous major approaches to the same problem and can discuss their pluses and minuses with ease.
• Know how to design a solution as well as evaluate its merits and weaknesses.
• Have mastered the skill of presenting the same problem to different audiences, at different lengths and in different contexts.
This last point is particularly important, as presentation skill is one of the
distinguishing criteria that separate a Ph.D. from the rest. In this view, a Ph.D.
must be able to sell his or her thesis, much like a salesperson or a business account
manager sells products. In 40 minutes to 1 hour, the student must convince the
committee that the work holds water. The student should be able to summarize
the work succinctly in one sentence, three sentences, one paragraph, and several
paragraphs, depending on the context in which the presentation is made.
In the actual presentation, the student needs to go through these points:
• what the problem is,
• why the problem is important to solve,
• what the related work solves, how it solves it, what merits it has, and what weaknesses it has, and
• how the student has solved it better than others.
These points are similar to the logical steps in any research paper, as de-
scribed in Section 6.1. When comparing to previous work, you should be as objec-
tive as possible, without personal sentiment. Avoid saying overly broad things like,
“But their method is ineffective since it cannot scale up.”
A thesis presentation is usually followed by an hour-long question-and-
answer period. Be prepared, but also be aware that you cannot anticipate all ques-
tions. When meeting new and startling questions, pause for a moment before you
answer them. Some points to keep in mind:
• Questions can be highly critical from some examiners, but it usually does not mean that they intend to fail you. It is normal that committee members will drill as deeply as they can, in order to see how well you actually know your area. They also wish to see if you can handle tough questions before they let you graduate; after all, they are the “safety guard” for the university’s good name.
• Questions from non-experts in a field can be general and be from different, unexpected angles. Don’t be afraid to say, “I don’t know about this, and I will look into it later.”
• Make sure you know the basics of your thesis very well. If examiners find that you misunderstood some basic concepts, they can fail you.
• If you can use slides during a defense, prepare your answers to potential questions with extra slides, just in case questions come up during the Q&A period.
Don’t be afraid to pause, think about a question, and discuss how it is rel-
evant to your thesis. You need to be sure you understand what is being asked of
you before answering it.
It also helps if you prepare some generic questions that keep popping up in
thesis defenses everywhere:
• Summarize your main contributions in one to three sentences.
• What is the problem you are trying to solve?
• What are the main contributions of your thesis? Which one is ground-breaking?
• What is one major weakness of your solution?
• If you were to do your Ph.D. study again, how would you do it differently?
You may still be very nervous before your thesis presentation and defense,
especially if your native language is not English. The best advice we can give is to
rehearse it several times before the actual defense. Ask your supervisor to invite a few
colleagues, postdocs, and other graduate students to act as examiners, and stage
a “mock defense” for you. As supervisors, we sometimes even video-recorded the
whole mock defense process for some of our previous Ph.D. students, and they
found it extremely helpful. They became highly confident, and did an excellent job
in the thesis defense.
As graduate students, we all hear anecdotes about some famous questions
all the time. In the case of one author of this book (Yang), there was a famous
professor on his committee who had a reputation for falling asleep during
the student’s presentation and then waking up to ask: “So, how do you apply this
thing?!” This turned out to be the killer question in many of his exams.
• • • •
Chapter 8
Life After Ph.D.
When we were young, becoming a professor seemed to be the ideal thing to do.
You become a teacher, with your own graduate students. You have summer and winter
breaks. You get to go to different places for conferences. On top of all this, you
are admired by many. In fact, this picture is only partially true. In this chapter, we
try to demystify the life of a professor, and give readers a realistic picture of life
after Ph.D.
8.1 A Day in the Life of a Typical Professor
Suppose that you have completed your Ph.D. degree. Now what? One of the
authors (Yang) had the following story to tell: after his defense at the University
of Maryland, his supervisor Professor Dana Nau congratulated him with a smile:
Congratulations! Now you have to write a Ph.D. thesis every year!
This is actually not true. The truth is that after the Ph.D., one typically has
to write one Ph.D. thesis every three months! This includes writing grant propos-
als, preparing and teaching classes, meeting students and mentoring them, serv-
ing for conference and journal organizations, attending and sometimes chairing
committee meetings, balancing budgets for research groups, and reviewing others’
grant proposals. Besides these, you are constantly reading papers, discussing with
students, and learning new things. See the list of activities and tasks for researchers
in Chapter 1. This seemingly confusing set of roles does not seem so attractive un-
less it is put in perspective. Here is a picture of a day in the life of a professor
in the university.
7:00 a.m.:
After getting up and having breakfast, and saying goodbyes to family
members, the professor heads to the computer to catch up on a few
urgent emails that need to be handled right away.
8:00 a.m.:
The professor heads for the swimming pool or gym. A 30-minute
exercise adds infinite energy to the professor’s brainpower.
It is essential for a researcher to have a clear mind when working
on a research problem. It is good for anyone to have a healthy body.
Students often believe that putting more time on a problem may
eventually lead to a desired result, no matter how tired and exhausted
they are. However, pouring more time and effort onto a problem is less
effective than spending your time and energy wisely. Having at least
30 minutes of physical exercise will allow one to gain much more than
spending five times more time on a problem while the mind is murky.
8:30 a.m.:
The professor heads for the office, gets a cup of coffee or tea along the way,
and reads more emails and reviews the slides before the 9 a.m. class.
9:00 a.m.:
An undergraduate class starts. A class usually lasts 1.5 hours.
A faculty member is expected to teach between two and four classes a
week. To prepare for the classes, the professor needs to spend more
time preparing notes, slides, and other class material, or work with
teaching assistants on preparing for assignments, projects, and exami-
nations. Professors are also asked to teach new courses, especially in
fast-changing disciplines such as computer science and engineering.
In this case, more time needs to be spent on course preparation.
Teaching is an integral part of a researcher’s career. Teaching allows the
researcher to communicate smoothly with students, honing one’s ability
to explain a difficult concept in plain language and to illustrate complicated
solutions with a variety of easy-to-understand examples. Often
professors can also identify exceptional students in the class for collab-
orative research, and inspire them to become researchers in the future.
Communication ability is important for a researcher, as it is part of
the researcher’s job to write research papers and explain new find-
ings to peers and to people in other fields. It is also important for the
researcher to work with more junior researchers and cultivate future
generations. Thus, teaching should not be taken as a distraction, but
rather, it should be an essential ingredient of being a good researcher.
10:30 a.m.:
During the class, a few students ask questions that require further
explanation. The professor invites them to his office to continue the
discussion after class.
11:00 a.m.:
The professor joins a committee meeting on graduate admission
matters.
A faculty member is expected to do a certain amount of service work.
Before getting his or her tenure, a professor is expected to join one or
two departmental committees in a light-load mode. After tenure, which
typically earns him an associate professor title, the committee work load
of a professor is expected to increase, sometimes involving committees
beyond his or her own department on matters concerning the school
or the university as a whole. The duties of these committees typically
involve student admission, scholarship, faculty tenure and promotion,
university research grant distribution and proposal ranking, etc. They
can take from one to six hours per week for a senior professor.
12:00 p.m.:
The professor has his lunch. Some professors schedule lunch meetings
with their students, so that they can discuss in a more relaxed atmo-
sphere, and at the same time, save time.
1:00 p.m.:
The professor attends a student Ph.D. defense session. In his career, a
professor is expected to join many students’ committees, ranging from
undergraduate studies to Ph.D. programs. Sometimes, a professor is
called upon to be an external member of a university wide Ph.D. com-
mittee, serving on the Ph.D. defenses for students in other depart-
ments. On these occasions, he may have to listen to the presentations
of students outside his or her own area of expertise.
3:00 p.m.:
The professor meets his Ph.D. students in a group meeting. This is
when the professor has the most fun. The professor first listens to a
student’s presentation of recently conducted experiments, and then
raises several questions and discusses possible answers with the stu-
dents. The professor then works out the next steps in their research
with the students, and agrees to send the students some references
related to the topic of discussion.
4:30 p.m.:
The professor joins his students and colleagues to attend a depart-
mental seminar. Attending seminars is a typical activity of an aca-
demic. This is how professors can learn about others’ work, and how
they can disseminate their work to others. Sometimes, these seminars
are given by job applicants, in which case the professor will be asked
to interview and rank the candidates.
5:30 p.m.:
Proposal writing: a professor has to constantly apply for new projects
and grants in order to support his group of students, postdoc fellows,
and research assistants, and to support his group members when they
attend conferences. All research activities, ranging from computer
usage to travel related to conferences, need support. Thus, it is critical
that the professor writes successful grant proposals, and writes them
often. This is because a typical grant application has a low success rate
that can range from 5% to 25%, depending on where you are in
the world and the type of grant you apply for. We discuss some tips
on how to write a successful grant proposal in Section 8.2.
6:30 p.m.:
The professor returns home for dinner with his family.
8:30 p.m.:
The professor starts working on a research paper that he jointly writes
with his students or colleagues. This may be mixed with phone calls,
emails, Skype, or in-person meetings in research labs.
10:30 p.m.:
The professor starts to book his trip to attend the next conference. He
sends emails to his travel agent . . .
As you can see, the professor’s day is full of different types of activities, all
centered around learning and helping others learn. There are also times when
there are too many of these activities that demand attention all at once, which can
be distracting. Because of this, a good professor should be highly effective, and also
be a good time manager.
Of course, there is more freedom associated with being a researcher. When
the class is finished and papers are submitted, most professors can relax a bit. They
can also go to different conferences, visit other universities, and chat with other
researchers on ideas for future research. This brings us back to Chapter 1, where
we discussed the pros and cons of being researchers.
8.2 Applying for Research Grants
One of the major activities in a professor’s career is to apply for research grants. As we
described before, these grants are the major funding sources for supporting all ac-
tivities around a research project, including paying for students’ and staff members’
salaries, lab equipment, computers and printers, and travel expenses to conferences
and visits. In the United States, grants are also essential in paying the summer
salaries of a professor, since many universities pay for only the teaching terms of a
professor’s salary. Research grants are typically applied for through granting agencies.
These granting agencies are typically divided into the following categories:
• University grants: these are internal grants where a small amount of funds is made available, typically to allow new faculty members to get started, or to encourage faculty members to take on a new research direction.
• Government funding agencies: these include the National Science Foundation in the US, NSERC in Canada, A*STAR funding in Singapore, European Union grants and the European Research Council, the Research Grants Council in Hong Kong, and the National Natural Science Foundation of China, to name a few.
• Industrial or military research grants: these are sponsored by industry or military organizations, usually with a particular mission in mind. The objectives of these grants or projects are usually more specifically focused on the building of prototype products. Stringent constraints on spending and milestones apply.
For a researcher, applying for research grants is a part of everyday life. For
most researchers, the size of the grant is an important factor that determines how
far you can reach in achieving your research goal. To be a successful grant appli-
cant, the researcher has to be more than a scientist or engineer; he or she must
also be a good market researcher in order to understand the needs of the society,
a good salesman to present his ideas well, a good writer and presenter, and a good
manager in case the project involves more parties. For one person to have all these
qualities is challenging, but a successful grant applicant must possess at least some
of these attributes.
For each type of grant, proposals are often called for once a year.
Applying for a project is similar to submitting a paper to a conference, in that
both grants and conferences have deadlines. However, major differences exist.
This is underscored by the fact that, often, we hear complaints from even seasoned
researchers: “if I have been very successful in getting papers accepted by very pres-
tigious conferences and journals, why do I still fail to get my proposals accepted?!”
This is a valid question: why is writing good grant proposals so hard?
The key to understanding the difference between research papers and pro-
posals is that their readership can be very different. While a research paper at a
prestigious venue is often reviewed by two to three very qualified experts in one’s
field of study, a proposal is often reviewed by a much larger pool of people, where
only a small subset of them are domain experts and the rest are from a more di-
verse range of areas (although still in the general area of one’s field). Most of the
others are there to help assess the general impact and quality of the proposal, and
while these reviewers are knowledgeable in the general area of research, they may not be
experts with the same depth as the author. Thus, if they cannot understand your
proposal, the proposal will be ranked low.
Another difference between a proposal and a research paper is that a pro-
posal must motivate the goals and convince the reviewers that the goals are both
grand and achievable, without going too much to either extreme. On the one
hand, a proposal has to show that the goals to be achieved are challenging, such
that few others have tried them before, and thus the goals are innovative. On the
other hand, a proposal must also show that these goals are not out of reach, at least
for this researcher, and that the means to reach them are viable. This is a fine balance
of two competing extremes, and indeed it is very difficult to attain. For a research
paper, reviewers have the problem and the solution in front of them in the content of
the paper, so the balancing act between motivating the problem and solving it is
less of an issue. In contrast, for a research proposal, it is
often impossible to reveal all details of the solution, since otherwise the need for
the proposal would be in question.
One of the most important aspects of a research grant proposal is its title.
While short, a title is truly a window into the whole proposal. A good title is
critical in helping the reviewer form a first impression on the entire proposed
project. Recall that in many cases, reviewers themselves are not necessarily specific
domain experts in the proposed domain; for example, a computer architecture
researcher may be called upon to review a proposal on data mining. Thus, the
title should also serve the generalist well. A rule of thumb is that the title
should contain several components, including the target problem (e.g., a learn-
ing algorithm for image understanding), the proposed method or solution (e.g.,
a Bayesian method) and the application area in which the method will be used
to solve the problem (e.g., images and text from social media or social networks).
A title such as “Image Understanding based on Bayesian Methods for Socially
Tagged Images Under Uncertainty and Incomplete Data Environments” might
not be a good title, because it is a bit too long. In contrast, “Image Understanding
for Socially Tagged Images Under Uncertainty” is more succinct and better.
Similar to the title, the abstract and objectives part of the proposal provides
a slightly more detailed window into the proposal, where the aim is to guide the
reviewer into the proposal in a more structured manner once the reviewer has
been convinced that the proposal is sufficiently interesting by reading the title.
Thus, the abstract should state the problem, tell the reader why these problems are
challenging and new, and then state in general terms what the proposed solutions
are. This is followed by a few sentences on the impact of the solution should the
project be funded and succeed. We have given similar advice on paper writing in
Chapters 5 and 6.
In a similar vein, the objective part of a proposal should state clearly what
the proposed tasks and aims are for the project to accomplish. If there are several
objectives in the statement, it is important that there is a main objective, which,
like the title, should be designed so as to impress the reader. This main objective
can be stated at the beginning of the list of objectives, or at the end. It may often
be the case, when the writer of the proposal has done some preliminary work, that
he might wish to use the first objective as a lead-in to the rest of the proposal. This
is fine as long as the researcher includes an overview sentence to tell the reader that
this is the case. If the main objective is stated first, then the rest are sub-objectives and
should be grouped as subtasks under the first important objective. This forms a
top-down taxonomy for the reader to follow.
In general, a research proposal is broken down into several functional sec-
tions, each serving a different purpose:
•
The abstract explains the proposed project even to an outsider, making
clear in one or two sentences the main problem, the target audience, the
open challenges and open problems, motivations related to these chal-
lenges and problems, proposed solutions at a high level, and the impact
your solution will bring should the project be funded and succeed.
•
An overview of the objectives that describes the main things you wish
to achieve, along with an indication that your work is important not
only for your own research field, but for society as a whole. The ob-
jectives should be hierarchically organized so that readers can follow
each path to other important parts of the rest of the proposal. Readers
can use this to conduct a review of the proposal in a non-sequential
manner, jumping to parts that they wish to see efficiently.
•
A review of previous works and background research that describes the
context of the proposed project as well as what the investigators and
other people have done in the proposed research area in the past. It is
important to stay focused in the background discussion so as not to
diverge into too many irrelevant details and marginally related fields.
The main purpose of this part is, first, to convince the readers that the
researchers are true experts in the field of study; second, to show that
there are not many prior works addressing the same problem being
proposed; and third, to demonstrate that the investigators themselves
have done preliminary work leading to the proposed projects.
•
The methodology and research plan part discusses in detail the re-
search method, subtasks, research schedule, and evaluation methods.
Important in this part are references that tie the specific research subtasks
and steps back to the keywords indicating the major objectives in the title,
abstract, and objectives parts; these can
be served by sentences such as “recall in Section 1, we discussed the
importance of designing a new method for ABC; in this section we
describe the details of ABC.” The more tightly connected these parts
are via references like this, the easier it is for reviewers to master the
main points of the proposal, and thus the better the outcome.
•
The budget and research schedule part should be carefully prepared,
avoiding either extreme of asking for too much funding or too little.
At one extreme, if too much funding is requested, it is difficult to justify
the proposal in terms of its scope. But if too little is
asked, reviewers will question the feasibility of the proposed project
as well.
To help the reader understand the full scope of reviewing criteria, here we
quote the proposal review criteria from the U.S. National Institutes of Health
(NIH), a well-known funding agency, as an example (see
http://grants.nih.gov/grants/peer/critiques/sbir-sttr.htm#sbir-sttr_01). This set of
criteria is fairly typical of the criteria used by many major grant agencies across the world:
•
Overall Impact. Reviewers will provide an overall impact/priority
score to reflect their assessment of the likelihood for the project to
exert a sustained, powerful influence on the research field(s) involved,
in consideration of the following review criteria and additional review
criteria (as applicable for the project proposed).
•
Significance. Does the project address an important problem or a
critical barrier to progress in the field? If the aims of the project are
achieved, how will scientific knowledge, technical capability, and/or
clinical practice be improved? How will successful completion of the
aims change the concepts, methods, technologies, treatments, services,
or preventative interventions that drive this field? Does the proposed
project have commercial potential to lead to a marketable product,
process, or service?
•
Investigator(s). Are the PD/PIs (Project Directors and Principal
Investigators), collaborators, and other researchers well suited to the
project? If Early Stage Investigators or New Investigators, or in the
early stages of independent careers, do they have appropriate experi-
ence and training? If established, have they demonstrated an ongoing
record of accomplishments that have advanced their field(s)? If the
project is collaborative or multi-PD/PI, do the investigators have com-
plementary and integrated expertise? Are their leadership approach,
governance, and organizational structure appropriate for the project?
•
Innovation. Does the application challenge and seek to shift current
research or clinical practice paradigms by utilizing novel theoretical
concepts, approaches or methodologies, instrumentation, or interven-
tions? Are the concepts, approaches or methodologies, instrumenta-
tion, or interventions novel to one field of research or novel in a broad
sense? Is a refinement, improvement, or new application of theoretical
concepts, approaches or methodologies, instrumentation, or interven-
tions proposed?
•
Approach. Are the overall strategy, methodology, and analyses well-
reasoned and appropriate for accomplishing the specific aims of the
project? Are potential problems, alternative strategies, and bench-
marks for success presented? If the project is in the early stages of
development, will the strategy establish feasibility and will particularly
risky aspects be managed?
•
Environment. Will the scientific environment in which the work will
be done contribute to the probability of success? Are the institutional
support, equipment, and other physical resources available to the
investigators adequate for the project proposed? Will the project ben-
efit from unique features of the scientific environment, subject pop-
ulations, or collaborative arrangement?
What are some of the typical mistakes made by the writers of grant pro-
posals? The following is an excerpt of some of the comments summarized from
reviewers’ comments by Hong Kong’s research grants agency, the RGC (Research
Grants Council, http://www.ugc.edu.hk/eng/rgc/index.htm). Note that these are examples of
generalized (negative) comments from many proposals accumulated in several
years:
Originality
• There is only limited innovation.
• This is incremental research. There is a lack of evidence of a new breakthrough.
• Objective 1 is not exciting. Objective 2 seems to be a part of Objective 3. Overall the proposal lacks new ideas to investigate.
• The objectives, even if achieved, are not exciting.
Methods
• It is unclear how the data will be collected and analyzed in task 1.
• Difficult to see how the research design/methodology will achieve the project objectives.
• The PI (principal investigator) needs to write the design and methodologies more clearly. For this purpose, step-by-step procedures, diagrams, and equations would be helpful.
Feasibility
• The Principal Investigator (PI)’s track record is a concern to this reviewer, as he has not published any paper or patent in the area of research in this proposal.
• This proposal on <topic> was submitted before and was not supported. Some arguments have been provided, but do not seem convincing.
• The PI has limited understanding of <research area>, which is very important to the second objective. The PI should have involved co-Is to contribute missing expertise.
• No preliminary results are provided to support the hypothesis.
• This work is not properly based on existing published knowledge in the subject area.
• The objectives of this proposal are too wide. I believe too much has been promised in this proposal for two years of work.
Likely contribution to discipline
• The PI failed to clarify how the proposed technique can achieve any better performance than other state-of-the-art techniques.
• It is stated that an understanding of <topic> is of paramount importance. You need to describe for whom this is important and why.
• I am not optimistic that the proposed research will generate significant publications in mainstream journals.
Others
• This proposal is full of ambiguities and I do not think the applicant has a clear mind where he is heading. Basic concepts and many technical details are unclear.
• The proposal has three objectives. These objectives are not integrated; they appear disjointed.
• The manpower planning is poorly put together.
• The proposal is too qualitative when it should be a quantitative topic.
• There is not sufficient information in the proposal about the analytical framework, theoretical foundation, model specification, estimation strategy, data source, and potential economic impact to Hong Kong and beyond.
• I don’t think the current title properly reflects what the authors intend to do.
8.3 Technology Transfer
A professor’s life is not just teaching and research. Oftentimes, a professor and
his students and colleagues might invent something useful for society, and this is
when they start thinking about how to move the technology and ideas from their
labs to the real world in areas such as industry and the marketplace. This process
of moving knowledge away from labs, conferences, and journals, to practice is
known as technology transfer.
A researcher can conduct technology transfer at any point in time. He or she
may transfer knowledge while still being a graduate student. He or she may be a
postdoc fellow or a professor, or an industrial researcher working in an industry lab
such as Google, Microsoft Research, General Electric, Huawei, or pharmaceutical
companies such as Pfizer, etc. Technology transfer may take one’s time away from
pure academic research for a while, but it is a very satisfying process, especially
after seeing one’s ideas and work being used by people.
There can be several ways in which technology transfer can happen. The
simplest way is licensing, where, with the help of a legal consultant, one can sign
an agreement with a company such that the company can use the technology in a
limited manner. If the technology is a piece of software, then the agreement will
state clearly what the software includes and does not include, what happens when
the software depends on other people’s software that itself carries limited-use
clauses, and what happens when the software breaks. That is why a legal
expert is often needed to assist in negotiating and drafting a licensing agreement.
The process of defining these terms can be lengthy and tedious. Fortunately, many
universities and research labs have offices that are designated for this use, and in
many cases this costs the researcher very little or nothing, since the legal services
can often recoup their costs through a percentage of the agreed fee as part of
the agreement.
When reading a legal agreement, such as a licensing agreement, for the first
time, a researcher is often inundated with jargon and clauses that are quite foreign
to the researcher. When this happens, do not worry, since a legal document is not
very different from a research paper. Similar to a research article, the documents
often start with definitions in upper case; these are the terms that will be used
repeatedly and unambiguously in the subsequent texts. The legal terms are similar
to a set of logical rules and statements. While a lawyer is helpful, it is also the
researchers’ task to ensure that these statements are consistent and the coverage is
appropriate. Mathematical logic does help here.
An important aspect of licensing is the terms of use of the product to be
licensed. The term can be limited by geography, such as the Asian or European
market, or by time, such as the number of years. In some extreme cases, the researcher
might agree to give all rights of use to the company, in which case the term
might refer to “exclusive use,” which forbids the researcher from transferring the same
technology to others, essentially preventing competitors from having access to the
same technology. In such cases, the benefit to the researcher is usually higher than
non-exclusive use clauses.
Patents are another kind of technology transfer. Patents are statements of a
new invention, be it a process or a product design that is associated with a right
assigned to the inventor. When filing patents on an idea, a researcher goes through
a process that is almost the same as doing research. First, the researcher should
define what the idea is in the simplest and most unambiguous terms. Then, the re-
searcher goes through a literature search to prove that other similar ideas or tech-
nology are in fact different from the one concerned, in one way or the other. This
is like writing a related work section of an article or a Ph.D. dissertation. Often
there are patent databases in a library to help one go through this search process,
sometimes with a fee, but many search engines today provide look-up services to
the public for free. Like a research article, a patent application should reference
many research papers in citations. In addition, the researcher should give the full
details on the design of the idea, such as how it might be used in practice, and so
on. A patent lawyer, who will assist in the filing process, often reviews this docu-
ment. The filing time may in fact be quite long, sometimes two years, and the fee
can range from thousands of US dollars to tens of thousands. When filing a pat-
ent on an idea, there is a requirement that no prior publication has been made on
the same idea, thus the researcher is often forbidden to publish a research article
on the same idea until the patent is approved by the patent office. In some cases,
however, a patent can be filed while a paper is being reviewed by a conference or
a journal, but researchers should consult a lawyer about this.
Sometimes a researcher spends some time working for a company on a
limited time basis. In such cases, we say that the researcher is consulting with the
company. Often, universities encourage researchers to do some consulting work
while being employed at a university. For example, many universities in the US,
Canada, and Hong Kong allow a faculty member to spend one day per week to
consult with a company. Consulting activities vary greatly from person to person.
In one case, a faculty member might answer questions from the people working in the
company that they consult with. In other cases, a faculty member might actually
sit in at the company as if he or she was an employee of the company.
The most complicated, but also most rewarding offshoot of technology
transfer is spinning off a company. We hear many legends of spin-off companies,
such as Google, which was formed by two Ph.D. students who invented a new
search technology. Universities often give a green light for faculty members
and students to go create spin-off companies, by allowing them a “leave of ab-
sence.” This is a term that refers to the practice of allowing a faculty member to
leave the university for a specified time period, often one or two years, in which
the researcher is not paid a salary by the university, but the position is kept for the
person. This is why the researcher needs to find the funding required to support
him or herself during the spin-off company creation process.
Spin-off companies can be created with or without a faculty member. In
the case of Student C, for example, the student may decide to form a spin-off
company with the help of the university Technology Transfer office, taking advice
from the supervisors as well as industrial partners. For example, student C might
find his new algorithms to perform much better than previously known algo-
rithms in an e-commerce area, and may decide to create a spin-off company in
order to commercialize this algorithm. He and his supervisor may decide to leave
the university for a while, and formulate a business plan and marketing plan to
extend his algorithms to more industrial applications. His previously filed patent
on the algorithm would have helped a lot when talking to potential venture capital
companies, many of whom decide to take a portion of the company’s share and
provide funding. Furthermore, the venture capital companies may have access to
a large network of experienced business people, from whom Student C selected a
few as the new company’s business development officers, such as Chief Executive
Officers (CEOs) or Chief Financial Officers (CFOs). The student might take
the position of a Chief Technology Officer (CTO), which allows the student to
further expand his talent in product development.
However, what we often don’t hear is the amount of work that these suc-
cessful, or unsuccessful, companies require. While it sounds nice to be able to
hand others a business card with a title of CEO or CTO, in fact, before deciding
on spinning off a company, the researcher should take many factors into careful
consideration, as the risk is also the greatest among all options of technology
transfer.
A necessary first step in finding venture capital to support a spin-off com-
pany is to write a business plan. When writing a business plan, first and foremost,
researchers should describe their objectives in sufficient depth so that funding
parties can be sufficiently convinced that there is indeed an untapped market out
there for the technology. This process is very much similar to describing one’s
ideas in a research article. Writing this up is also akin to writing the Introduction
section of a paper (see Chapter 6). The researcher himself or herself should be
first convinced of the existence of the market.
Then, the researcher should prepare a thorough literature study, like build-
ing the related work section of a paper, to show that despite the existence of
such a lucrative market, in fact few competitors have tried exactly the same ideas
before. This requires a detailed argument and analysis, often with many citations
and quotes.
Subsequently, the researcher has to discuss the methodology itself and a
business model in which he or she will specify how this company can survive based
on the revenue received by selling this product after taking cost into consideration.
There is a wide spectrum of operating methods which a company can rely on,
ranging from building up a service and sales channel for the product to reach a
larger audience, to facilitating a strategic alliance with other companies to mutu-
ally benefit from the sales. Note that in a business plan, the technology description
part is needed and this part is similar to writing a research paper; however, unlike
a research paper, the technology part of a business plan only occupies a small por-
tion, as a strong argument is needed in analyzing the market and the competitors.
It is sometimes estimated that technology often occupies only five to ten percent
of an entire business-building process. Even so, the reward far outruns the hur-
dles, since otherwise we would not have seen so many successful examples.
There is also the management aspect to consider. At this point, the re-
searcher has to think like a manager, as a team of experts will be needed to work
in sync. This team is known as a “management team,” which includes not only the
researcher, but a person with business management experience as a CEO, a finan-
cial and accounting expert known as a CFO, a publicity expert known as a Chief
Information Officer (CIO), etc. The team is in fact the most critical part of the
spin-off company, and because of this, a venture capital company sometimes will
assemble such a team for the researcher, by inviting other experienced people to
join in. This is when the researcher will see his or her own shares in the company
shrink, but in fact the whole pie might become much larger. Thus it is a worth-
while practice in many cases. With the help of these experts, the researcher will
further complete a financial plan and a marketing plan for the spin-off company.
One of the stickiest issues when creating a spin-off company is the IP, or
intellectual property. This refers to the content of the technology, a specification
of who owns it and a claim that it has never been done before. In some universi-
ties and research labs, the university owns partial IP for any invention created
therein. When creating a company, legally the university can claim a portion of the
company’s share. In this case, it is important for the researcher to negotiate with
the university early in the process, so that the university or the research lab also
bears a part of the cost in early stages of the company’s creation, such as providing
subsidized office space, computing facilities, and other legal services.
• • • •
Summary
In this book, we have systematically discussed how to do research, from setting
one’s goals for a research career, to getting research ideas, reading and critiquing
papers, formulating a research plan, writing and publishing research papers, and
to writing and presenting one’s thesis. We also discussed what life is like after one
gets a Ph.D. degree. We hope the examples, lessons, and experiences presented in
this book demystify a researcher’s life, so that young and aspiring students can set
the right goals in life, and young and beginning researchers have something to rely
on. We will succeed if this book can offer some guidance to students and junior
researchers alike to help them succeed in a fruitful and adventurous research life.
Indeed, research is full of adventures and fun if you master the secrets of
doing it right. In our lives, we have thoroughly enjoyed doing research ourselves. We certainly
hope that you do too.
• • • •
Author biographies
Charles Ling has been a professor at Western University in Canada since 1989.
He obtained his BSc. in Computer Science from Shanghai Jiaotong University
in 1985, and then graduated with his Master's and then Ph.D. in Computer
Science from the University of Pennsylvania, USA, in 1987 and 1989, respectively.
He specializes in data mining and machine learning, and their applications in
Internet, business, healthcare, and bioinformatics. Overall, he has published over
120 research papers in peer-reviewed conferences and journals. Charles Ling has
also engaged in much professional service in the above areas. He has been an asso-
ciate editor for several top journals and an organizer for several top conferences in
computer science. He is also a Senior Member of IEEE and a Lifetime Member
of AAAI (Association for the Advancement of Artificial Intelligence). Charles Ling
is the director of the Data Mining and Business Intelligence Lab at Western
University, where he has been involved in several technology transfer projects.
Charles Ling is also a specialist in gifted education for children. He integrates his
research in Artificial Intelligence and cognitive science, and develops a full range
of thinking strategies that improve children’s intellectual abilities. These thinking
strategies embrace, enhance, and connect with math, science, and other areas. He
can be reached at [email protected].
Qiang Yang is a professor in the Department of Computer Science and
Engineering, Hong Kong University of Science and Technology, Hong Kong,
China. He is an IEEE Fellow, an IAPR Fellow (International Association for Pattern
Recognition), and an ACM Distinguished Scientist, for his contributions to
Artificial Intelligence and Data Mining, which are also his research interests.
Qiang Yang graduated from Peking University in 1982 with a BSc. degree
in Astrophysics, and obtained his PhD degree in Computer Science from the
University of Maryland, College Park, in 1989. He was an assistant and then as-
sociate professor at the University of Waterloo, Canada between 1989 and 1995,
and an associate and then a full professor and NSERC Industrial Research Chair
at Simon Fraser University in Canada from 1995 to 2001. He is an author of two
books and over 300 publications on AI and data mining. His research teams won
several prestigious international competitions on data mining. He was an invited
speaker at several top conferences, and a founding Editor-in-Chief of an ACM
journal (ACM TIST), as well as an editor for several top journals. He has also
been an organizer for some top conferences in computer science. Besides academic
research, he has engaged in several industrial projects with IT companies, and
has sat on several research grant panels. In his spare time, he enjoys sports,
reading and traveling. He can be reached at [email protected].
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=7416054.pdf&bkn=7416053&pdfType=book
|
Series ISSN 1939-5221
Engineering Principles in Everyday Life for Non-Engineers
Saeed B. Niku, California Polytechnic State University
This book is about the role of some engineering principles in our everyday lives. Engineers study these
principles and use them in the design and analysis of the products and systems with which they work.
The same principles play basic and influential roles in our everyday lives as well.
Whether the concept of entropy, the moments of inertia, the natural frequency, the Coriolis
acceleration, or the electromotive force, the roles and effects of these phenomena are the same in a
system designed by an engineer or created by nature. This shows that learning about these engineering
concepts helps us to understand why certain things happen or behave the way they do, and that these
concepts are not strange phenomena invented by individuals only for their own use, rather, they are part
of our everyday physical and natural world, but are used to our benefit by the engineers and scientists.
Learning about these principles might also help attract more and more qualified and interested high
school and college students to the engineering fields. Each chapter of this book explains one of these
principles through examples, discussions, and at times, simple equations.
This book can be a general reference book for learning about engineering for all audiences, especially for
college students in majors other than engineering, or used in general education classes for technical content,
or for encouraging high school students into thinking about STEM (science, technology, engineering, and
mathematics), or general non-fiction reading. Many books are supposedly written for “dummies.” This is
not one of them. The assumption within is that people are intelligent and with perseverance and patience
they can learn about new subjects. It is for anyone interested in learning how the world works.
ABOUT SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis Digital
Library of Engineering and Computer Science. Synthesis Lectures provide concise
original presentations of important research and development topics, published
quickly in digital and print formats. For more information, visit our website:
http://store.morganclaypool.com
Engineering Principles in Everyday Life for Non-Engineers
Saeed B. Niku
Engineering Principles in
Everyday Life for Non-Engineers
Synthesis Lectures on
Engineering
Each book in the series is written by a well-known expert in the field. Most titles cover subjects
such as professional development, education, and study skills, as well as basic introductory
undergraduate material and other topics appropriate for a broader and less technical audience.
In addition, the series includes several titles written on very specific topics not covered elsewhere
in the Synthesis Digital Library.
Engineering Principles in Everyday Life for Non-Engineers
Saeed Benjamin Niku
2016
A, B, See... in 3D: A Workbook to Improve 3-D Visualization Skills
Dan G. Dimitriu
2015
The Captains of Energy: Systems Dynamics from an Energy Perspective
Vincent C. Prantil and Timothy Decker
2015
Lying by Approximation: The Truth about Finite Element Analysis
Vincent C. Prantil, Christopher Papadopoulos, and Paul D. Gessler
2013
Simplified Models for Assessing Heat and Mass Transfer in Evaporative Towers
Alessandra De Angelis, Onorio Saro, Giulio Lorenzini, Stefano D’Elia, and Marco Medici
2013
The Engineering Design Challenge: A Creative Process
Charles W. Dolan
2013
The Making of Green Engineers: Sustainable Development and the Hybrid Imagination
Andrew Jamison
2013
Crafting Your Research Future: A Guide to Successful Master’s and Ph.D. Degrees in
Science & Engineering
Charles X. Ling and Qiang Yang
2012
Fundamentals of Engineering Economics and Decision Analysis
David L. Whitman and Ronald E. Terry
2012
A Little Book on Teaching: A Beginner’s Guide for Educators of Engineering and Applied
Science
Steven F. Barrett
2012
Engineering Thermodynamics and 21st Century Energy Problems: A Textbook
Companion for Student Engagement
Donna Riley
2011
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo
2011
Systems Engineering: Building Successful Systems
Howard Eisner
2011
Fin Shape Thermal Optimization Using Bejan’s Constructal Theory
Giulio Lorenzini, Simone Moretti, and Alessandra Conti
2011
Geometric Programming for Design and Cost Optimization (with illustrative case study
problems and solutions), Second Edition
Robert C. Creese
2010
Survive and Thrive: A Guide for Untenured Faculty
Wendy C. Crone
2010
Geometric Programming for Design and Cost Optimization (with Illustrative Case Study
Problems and Solutions)
Robert C. Creese
2009
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey and Jeffrey W. Holmes
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Analog Multimedia
Explorations
Lina J. Karam and Naji Mounsef
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Digital Multimedia and
Robotics Explorations
Lina J. Karam and Naji Mounsef
2008
CAD/CAM of Sculptured Surfaces on Multi-Axis NC Machine: The DG/K-Based
Approach
Stephen P. Radzevich
2008
Tensor Properties of Solids, Part Two: Transport Properties of Solids
Richard F. Tinder
2007
Tensor Properties of Solids, Part One: Equilibrium Tensor Properties of Solids
Richard F. Tinder
2007
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
2007
Project Management for Engineering Design
Charles Lessard and Joseph Lessard
2007
Relativistic Flight Mechanics and Space Travel
Richard F. Tinder
2006
Copyright © 2016 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations
in printed reviews, without the prior permission of the publisher.
Engineering Principles in Everyday Life for Non-Engineers
Saeed Benjamin Niku
www.morganclaypool.com
ISBN: 9781627058582 paperback
ISBN: 9781627058599 ebook
DOI 10.2200/S00699ED1V01Y201601ENG026
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING
Lecture #26
Series ISSN
Print 1939-5221 Electronic 1939-523X
Engineering Principles in
Everyday Life for Non-Engineers
Saeed Benjamin Niku
California Polytechnic State University
San Luis Obispo
SYNTHESIS LECTURES ON ENGINEERING #26
ABSTRACT
This book is about the role of some engineering principles in our everyday lives. Engineers study
these principles and use them in the design and analysis of the products and systems with which
they work. The same principles play basic and influential roles in our everyday lives as well.
Whether the concept of entropy, the moments of inertia, the natural frequency, the Coriolis
acceleration, or the electromotive force, the roles and effects of these phenomena are the same
in a system designed by an engineer or created by nature. This shows that learning about these
engineering concepts helps us to understand why certain things happen or behave the way they
do, and that these concepts are not strange phenomena invented by individuals only for their own
use, rather, they are part of our everyday physical and natural world, but are used to our benefit
by the engineers and scientists. Learning about these principles might also help attract more and
more qualified and interested high school and college students to the engineering fields. Each
chapter of this book explains one of these principles through examples, discussions, and at times,
simple equations.
KEYWORDS
engineering concepts, entropy, thermodynamics, thermodynamic cycles, combined
cycle power generation, moments of inertia, stepper motors, DC motors, AC motors,
transformers, engines, rotary engines, 2-cycle engines, hybrid cars, vibrations, nat-
ural frequency, hearing, guitars, signal transmission, Coriolis acceleration, vectors,
weather systems, electromotive force, EMF, back-EMF
Dedicated to Shohreh, Adam, and Alan
for their patience with me
Contents

Prologue

1 Entropy: Natural Orders, Thermodynamics, Friction, Hybrid Cars, and Energy
   1.1 Introduction
   1.2 Entropy and Efficiency
   1.3 Is it Possible to Defy Entropy?
   1.4 Why Do We Get Older?
   1.5 Entropy as Described by an Equation: Thermodynamics
       1.5.1 The First Law
       1.5.2 The Second Law
   1.6 Hybrid Cars Anyone?
   1.7 Common Misconceptions

2 Natural Frequencies: Vibrations, Hearing, Biomechanics, and Guitars
   2.1 Introduction
   2.2 System Response to External Forces at Different Frequencies
   2.3 Natural Frequency of Other Common Systems
       2.3.1 Pendulum
       2.3.2 Cantilevered Beams
       2.3.3 Strings
   2.4 Applications and Examples
       2.4.1 Guitars, Pianos, and Other Stringed Instruments
       2.4.2 Speaking and Vocal Cords
       2.4.3 Tuning to a Radio or TV Channel
       2.4.4 Hearing
       2.4.5 Walking and Running, Hearts and Lungs
   2.5 Bibliography

3 Coriolis Acceleration and its Effects: Bikes, Weather Systems, Airplanes, and Robots
   3.1 Introduction
   3.2 Definitions
       3.2.1 Vectors
       3.2.2 Vector Multiplication
       3.2.3 Rotations
       3.2.4 Acceleration
       3.2.5 Reference Frames
       3.2.6 Rotating Frames
   3.3 Coriolis Acceleration
   3.4 Inertial Reaction to Acceleration
   3.5 Air and Water Circulations (Convections) Due to Heat
   3.6 Coriolis Acceleration and Weather Systems
   3.7 Accelerations Due to Combined Motions
       3.7.1 Riding Bicycles
       3.7.2 Oscillating Fans
       3.7.3 Airplanes
       3.7.4 Robots
       3.7.5 Movements of a Spacecraft in Space

4 Thermodynamic Cycles: Refrigeration, Air Conditioning, Engines, and Power Cycles
   4.1 Introduction
   4.2 Refrigeration Cycle
   4.3 Spark-Ignition Power Cycle
       4.3.1 4-stroke Engines
       4.3.2 2-stroke Engines
   4.4 Thermodynamic Representation of the Spark-Ignition Power Cycle
   4.5 Compression-Ignition Diesel Engine Power Cycle
   4.6 Thermodynamic Representation of Compression-Ignition Power Cycle
   4.7 Rotary (Wankel) Engines
   4.8 Power Generation
   4.9 Conclusion
   4.10 Bibliography

5 Moments of Inertia: Mass and Area Moments of Inertia, Accelerations, Inertial Forces, Strengths, and Strains
   5.1 Introduction
   5.2 Second Moment of the Area (Area Moment of Inertia)
   5.3 Deflections of a Beam
   5.4 Parallel Axis Theorem
   5.5 Polar Moment of Inertia (Polar Moment of the Area)
   5.6 Strength of Materials: Stress, Strain, and Modulus of Elasticity
   5.7 Role of Moments of the Area in Stress Calculations
   5.8 Mass Moment of Inertia

6 Electromotive Force: Motors, Transformers, AC and DC Currents
   6.1 Introduction
   6.2 Introductory Terms: Voltage, Current, and Resistance
   6.3 Magnetic Fields
   6.4 Electromotive Force
   6.5 DC Motors
   6.6 AC Motors
   6.7 Stepper Motors
       6.7.1 Canstack Stepper Motors
       6.7.2 Hybrid Stepper Motors
   6.8 Transformers
   6.9 DC Generators
   6.10 AC Generators
   6.11 Back-emf Issues in Motors and Transformers: Laminated Iron Cores
   6.12 Back-emf in DC Motors: Servomotors
   6.13 Advantages and Disadvantages of Different Motors

Author's Biography

Index
Prologue
Almost every aspect of our lives is governed or affected by some engineering concept. Most people
do not know about these concepts or are oblivious to them. We take it for granted that at any
time, we have cold drinks and safe foods in our refrigerators. This was not true a few decades
ago. The principles on which refrigeration is based have existed forever, but we did not know
how to use them properly. We take it for granted that wherever we go, a mixture of oxygen and
nitrogen is present. We do not suffocate from a lack of oxygen in one location while burning in
pure oxygen in another. And we take it for granted that we age—but why? We can walk for hours
without really getting tired, but cannot run for long without getting tired. Why is it that cities on
the Atlantic coast get snow, but the ones on the Pacific coast do not? Tree branches get smaller
in diameter as their distance to the trunk increases, but why? And a telephone book can be easily
bent, but not a piece of cardboard. Why? Most hybrid cars have better gas mileage in the city
than on freeway driving. Why do airplanes fly, how do engines work, and how do we hear? Why
is it that we can select the broadcast from one station at a time without mixing information from
hundreds of others?
So how are these issues related to engineering (or as some would call it, physics)? Simple
principles govern why and how things happen. If we learn these principles we can understand
how different phenomena affect our daily lives, why certain things happen as they do, and how
we can use them to our benefit. There are too many engineering principles to discuss for non-
engineers. This book covers a few of these principles that are more general and directly apply to
our understanding of natural phenomena. A few equations are used in the discussion to better
understand the issues. I hope this will not be a distraction. I envision that whether or not you are
an engineer or scientist, you will be able to follow them.
This book can be a general reference book for learning about engineering for all audiences,
especially for college students in majors other than engineering, or used in general education
classes for technical content, or for encouraging high school students into thinking about STEM
(science, technology, engineering, and mathematics), or general non-fiction reading. Many indi-
viduals from all walks of life have made encouraging comments about how they enjoyed reading
the manuscript and how they have learned from it, and I thank them for their time.
Not knowing about a subject does not make a person dumb. Many books are supposedly
written for dummies. The assumption in this book is that people are intelligent, even if they do
not know about certain fields; as long as they have the perseverance and are patient they can
learn about new subjects. So this book is not for dummies. It is for intelligent individuals who are
interested in learning new material.
I would like to foremost thank Alan Niku for his thorough and thoughtful editing of the
manuscript, his comments, his humor, and his photography. I would also like to thank Joel Clay-
pool for his courage in taking on this project and Dr. C.L. Tondo for his work on the project,
and Hila Ratzabi for editing the manuscript. In addition, my sincere thanks go to Daniel Raviv,
James LoCascio, Patrick Lemieux, Frank Pekar, William Murray, Steven Klisch, Julia Wu, Jesse
Maddren, Glen Thorncroft, Ahmad Nafisi, Larry Coolidge, Sina Niku, and others whom I may
have forgotten by now, for their assistance during the writing of this project.
Let’s get to work. We will look at a few concepts and see how they relate to our lives every
day. I hope you will enjoy the book and learn something new from it.
Saeed Benjamin Niku
Mechanical Engineering
Cal Poly, San Luis Obispo
January 2016
CHAPTER 1

Entropy
Natural Orders, Thermodynamics, Friction, Hybrid Cars, and Energy
1.1 INTRODUCTION
Imagine walking into a room and finding out that all of the oxygen in the room has separated
into one side and all the nitrogen and other gases in the opposite side, and consequently you can
either not breathe or you almost get burned in the pure oxygen. Certainly this does not happen
in nature. In reality, even if you design a chamber with a membrane in the middle, fill one side
with oxygen and the other side with nitrogen, and then pull out the membrane, the oxygen and
nitrogen completely mix after a while. The natural world does not like this artificial order; thanks
to entropy, it will mix them together into a uniform mixture. Unless other forces and characteristics
(such as differences in density, non-mixing fluids) intervene, things get mixed up into uniform
states.
In fact, this is true for any artificial order created against the natural disorder of things. We
may build a solid structure; nature will destroy it one way or another. We come into being; nature
will eventually find ways to kill us. Mountains and valleys are formed by other forces of nature; the
mountains eventually wash off into the valleys to fill them into a natural disorder or uniformity.
Boiling water will cool to the same temperature as the environment. Only natural orders remain,
to some extent.
So what is entropy?
Entropy is a measure of the level of organization (or disorganization) in a system; a degree
of its organization or randomness; the probability of knowing where the molecules of a medium
are at any given time. When order is increased and the system becomes more organized, entropy
decreases (yes, decreases). When order is decreased, entropy increases. In all natural systems, there
is a tendency for entropy to increase. When an artificial order is imposed on a system the entropy
decreases. Can entropy eventually increase to infinity? Perhaps. But we know that it increases
when order is reduced.
In the following examples, we will investigate the role entropy plays in reducing order.
Example 1.1 Ice in Warm Water
Imagine a glass of water at your bedside overnight. If you measure the temperature of the
water in the morning, it will be the same as the room’s temperature; there is no difference between
these temperatures, and therefore, no heat is transferred from one to the other. Even if the water
was originally warmer than the air, or colder than it, after some time, heat would have transferred
from one to the other to bring both media to the same temperature.
Now drop an ice cube into a glass of warm water. Similarly, since there is a temperature
difference between the ice and the water, heat is transferred from the water to the ice until they are
both at the same temperature. The concentrated amount of thermal energy in the water compared
to the ice is not natural; in natural systems, heat is transferred in the direction of higher-to-lower
temperature from one system to another until they are all at the same temperature, whereas here,
the ice is cold and the water is warm. Therefore, there is an “un-natural” order in this system. We
have deliberately created a system that has a particular order to it; entropy will destroy this order
through the transfer of heat between them. Not only is it impossible to wake up one day and see
that the glass of water you left at your bedside has transformed itself into a glass of warmer water
plus an ice cube in the glass (by transferring heat from a portion of the water into the rest to make
an ice cube), even if you create this order, entropy will nullify it.
The same is true in other systems: cold air versus warm air, a hot plate and a cold dish on
it, a stove and a pot of water, a heater and the cold air surrounding it, and the air-conditioned
building and the warm weather outside. In all these examples, heat flows in the direction of
higher-to-lower temperature until everything reaches the temperature of its surroundings and
achieves equilibrium, thereby eliminating the order caused by the difference in temperatures.
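If you are curious about the actual numbers, the following short calculation (written in Python; the amounts of water and ice are made up, and the material constants are standard textbook values rather than anything taken from this chapter) sketches how the warm water and the ice settle at a single in-between temperature.

# A sketch only: estimate the final temperature when an ice cube melts in warm water,
# ignoring the glass and the surrounding air.
c_water = 4.18        # specific heat of water, J/(g*C) (standard textbook value)
L_fusion = 334.0      # energy needed to melt ice, J/g (standard textbook value)

m_water, T_water = 250.0, 40.0   # assumed: 250 g of warm water at 40 C
m_ice = 20.0                     # assumed: a 20 g ice cube at 0 C

# Heat given up by the warm water = heat to melt the ice
# plus heat to warm the melt-water from 0 C up to the final temperature T:
#   m_water*c*(T_water - T) = m_ice*L_fusion + m_ice*c*(T - 0)
# (with these amounts, all of the ice does melt)
T_final = (m_water * c_water * T_water - m_ice * L_fusion) / ((m_water + m_ice) * c_water)
print(f"Final temperature of the glass: about {T_final:.0f} C")   # roughly 31 C

Whatever numbers you pick (as long as the ice actually melts), the answer always lands between the two starting temperatures; the order imposed by keeping the ice cold simply disappears.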
Example 1.2 Mountains and Valleys
Mountains are above the average horizon and valleys are below, therefore creating an order.
Even though mountains and valleys are themselves created by natural forces, this difference creates
a particular un-natural order. Through different mechanisms, nature will try to break down the
mountains and fill up the valleys to destroy this order, even if it takes millions of years. We cannot
expect the dirt to simply rise to create mountains in the opposite direction.
What are some of these mechanisms?
As air is heated and the energy of its molecules increases, the distance between these
molecules increases too, making the air less dense. As a result, the warmer (and less dense) air
rises. Since entropy does not favor the creation of a new order (leaving empty the space that
the warm air previously occupied), cold air moves in to fill the empty space, thus creating winds
and currents. e wind moving through the mountains erodes the mountains ever so slowly and
moves the material away to reduce it. To this, we should also add flowing waters when it rains
and snow melts, or when rivers cut through the dirt and carry it along. The Nile River is famous
for flooding at its delta every year and fertilizing the area with new and nutrition-filled dirt from
the mountains above. Figure 1.1 shows the erosion of earth due to these mechanisms at work at
Petra.
Figure 1.1: The washed-out, worn-out Petra valley.
Most materials expand when heated and contract when cooled, including when frozen.
This is due to the fact that as the temperature decreases, so does the energy of the molecules,
and therefore, they get closer to each other, reducing the volume of the material and making it
denser. However, exceptions exist, including bismuth and water; they expand when frozen. The
density of water is maximum when it is at about 4°C at sea level. The volume of water increases
when it freezes and when it is heated; this is why you should not place a closed water container
in the freezer; it may rupture. You can see the result of this expansion by observing the surface
of an ice cube. If you mark the surface of water in a plastic container and place the container in a
freezer, you will notice that when frozen, the surface of the ice will have risen above the original
line. However, if you use a metal container (and cover the surface with a piece of cardboard to
prevent the direct blowing of cold air onto the surface) you will notice that the surface of the ice
is somewhat conical in shape (although shallow), as in Figure 1.2. In most refrigerators, the cold
air is blown throughout the freezer compartment. When a plastic container is used, since it is
not a very good heat conductor, the water freezes at the surface first, and therefore, as the water
expands, it pushes up the surface as it continues to freeze. However, since the metal container is
a good conductor, it causes the water to start freezing at the perimeter of the surface and around
the walls. As it continues to freeze and expand, the surface rises slowly and continues to freeze
at slightly higher levels until it finishes with an apex in the middle, just like a hill (the complete
mechanism at work here is more complicated though).
Water pipes may also rupture in the winter when they freeze. In places where the winter
temperatures are very low, water pipes are buried a few feet underground to keep them warmer
Figure 1.2: In a metal container, as water freezes around the perimeter towards the center, it expands
and pushes the water level up slowly, creating a conical shape with an apex.
and prevent their rupture. The same is true for the water that gets in between crevices of rocks
and other rigid material, whether on a mountain or not. When the water freezes, it cracks the
rock or breaks it apart, and therefore, eventually destroys the order that exists.
And what happens to the tree branch that falls in the forest when a hurricane passes through
or when the tree is diseased? Microorganisms, beavers, termites, and other agents eventually break
it down and destroy it.
Example 1.3 What Happened to the Flagpole?
So, you had a flagpole in the yard, and after a few years, it rusted and eventually failed. The
same is the fate of the sign pole at the bus-stop. They rust because there is an order in the system
that is un-natural. Entropy demands that the order be destroyed. One may try to postpone the
rusting by painting the pole or embedding it in concrete, or keeping away the moisture whenever
possible. But rusting, or oxidation, is another natural mechanism for reducing the artificial order
of things. Just think about the radiator in your car. It will eventually rust and fail.
But then what shall we say about the steel used in an engine, where it is constantly lubricated
and moisture is kept away? It hardly rusts. In this case, friction and rubbing action between
the different moving parts will erode the material, eventually reducing it into nothing
(although the engine will stop long before this state is reached, and therefore, oxidation takes over
in the junkyard).
So do all metals oxidize? Not really. Stainless steel is almost rust-proof (although not all
types and all qualities are rust-proof. Less expensive stainless steel that lacks enough chromium
will rust. Check out your barbeque). Gold does not rust either, but it wears out. So it is not
impervious to entropy. And what happens to that plastic part that does not rust? The ultraviolet
light will help in its decomposition, first by fading, then hardening and cracking, and eventually
decomposing. Is this not what happens to most paints too?
Nature, in its arsenal of mechanisms for reducing order, has countless other weapons, in-
cluding insects, animals, microorganisms, viruses, diseases, fire, wind, floods, and many others,
each with a unique capability to do its work.
1.2 ENTROPY AND EFFICIENCY
Reducing entropy will increase the efficiency of a system by increasing the order within the system.
To understand this, let’s look at the following case.
Example 1.4 Going to the Office
Naturally, one may need to sleep longer one day due to being tired, less another day due to
sickness or having to take care of a chore. It seems more natural that people who all participate in
similar activities in the same place like an office or classroom or a factory may like to go to work or
to class when they feel like it or when they can. Wouldn’t it be nice to be able to go to a classroom
any time you prefer? Except that such a system is not efficient. A teacher would have to repeat the
same material countless times every day as students showed up randomly when they wanted. One
might go to a bank for a transaction. However, if the tellers and bank managers came to work
as they wished, chances that the work got done would be very low. Meetings would never work
either if participants would arrive when they wished. Think what would happen if the workers of
a factory took vacations as they wished instead of all taking the same days off. The factory would
not be efficient either.
However, we will create an un-natural situation by requiring that everyone, regardless of
how long they have slept, whether they are tired or not or whether they like it or not, must go to
a class or be present in the office or participate in a meeting at the same designated time. But as
a result, the teacher will have to teach once, the customer can make a transaction with the office
staff, and everyone knows when they can expect to see someone for an appointment. Is this not
more efficient? Yes, but it is not natural. The tendency in this system is also to reduce order; people
skip work, get sick, do not come to work on time, and appointments are not kept. To maintain
efficiency, we need to sacrifice comfort or desires. Otherwise, natural chaos would abound.
1.3 IS IT POSSIBLE TO DEFY ENTROPY?
Yes. Simply by creating order, we defy entropy. By doing so we increase order, and consequently,
efficiency. As a result, we create systems that do things for us, from which we benefit even if we
need to constantly fight entropy.
As an example, take the engine of a car. The design of an engine forces a particular sequence
of events to repeat thousands of times a minute for hundreds of thousands of miles of travel. Each
time the air is sucked into the cylinders, it is compressed, fuel is injected into the hot air causing
an explosion and creating useful work that turns the engine, and the burnt fuel-air gases are forced
out (see Chapter 4). This is not a natural sequence and does not happen by itself, but precisely
because of that, it is efficient (useful to us). Entropy tends to create situations to reduce this order
when components break, rust, or wear out and as accidents happen. But as long as the order is
maintained, the system is a useful entity.
If you think about any other system, including living systems, the fundamentals remain
the same. A completely chaotic system is natural. However, everything has a particular order in
it that makes it useful. We, and everything we create, are under the control of this fundamental
phenomenon, even if we learn to defy it.
1.4 WHY DO WE GET OLDER?
As mentioned previously, living systems are also subject to the same entropy. Humans, for ex-
ample, are very sophisticated and orderly systems. Nature, based on entropy, uses a variety of
mechanisms through which it destroys this order, including disease, accidents, and aging. We get
older because our systems have to eventually stop. Obviously, here I will not discuss the inherent
mechanisms that the body uses to cause aging (such as the natural DNA markers that measure
our age and turn on or off different functions of our DNA). Whatever these mechanisms, they
are just tools through which entropy does its work. However, even if we do our best to take care
of our bodies and stay healthy, even if we never contract a disease, never smoke or drink, always
exercise, eat right, and always make prudent decisions, we still get old and our bodily functions
eventually stop, some sooner, some later.
Physiologists and engineers can derive equations that describe this phenomenon mathe-
matically as well. For example, considering caloric intake (how much energy is consumed by an
individual through the food the person eats) versus the expenditure of energy by a person, one
can calculate the efficiency of the human body and the food eaten. Of course, the efficiencies of
different food systems are different. However, this waste of energy can be mathematically related
to entropy generation. Therefore, one can actually estimate the increase of entropy of the system
of the human body.
Now think about the fact that when a child scratches her knee, when a person cuts his finger
while cooking, or even after surgery, the body heals itself. But why should it? Is that not against
entropy? Yes it is. When it heals, the body adds to its order, reducing entropy. So healing, and in
fact life itself, are against entropy. As long as we are alive, our bodies have learned to defy entropy
and to heal, overcome diseases, grow bigger, reproduce, and create new life. Each one of these
phenomena is against the fundamental principles of entropy, disorganization, and the randomness
of nature. But we are born (regardless of our will and against entropy), we grow bigger and taller
and stronger, we get better after illness, and we go on living for a period of time. Each one of
these reduces the overall entropy of the universe until we eventually die, when it increases once
again. Planting a seed and growing it into a tree is the same. The secret of life in the seed can
defy entropy, causing it to sprout, grow, create fruits and seeds, and resist death for a while. But
eventually, all vegetation and trees come to an end too.
1.5 ENTROPY AS DESCRIBED BY AN EQUATION:
THERMODYNAMICS
Thermodynamics is the study of thermal systems, resulting in the transformation of different
forms of energy, the creation of useful work, heat transfer, and the dynamics involved. Thermo-
dynamics is built on two basic laws, appropriately referred to as the first law and the second law.
(There is also a zeroth law that indicates that if two bodies are at thermal equilibrium with a third
body, they are also at thermal equilibrium with each other. Kind of obvious, but necessary.) There
is also a third law of thermodynamics, stating that entropy is zero at absolute-zero temperature.
In any system, both the first and second laws of thermodynamics must be satisfied, not just
one.
1.5.1 THE FIRST LAW
The first law of thermodynamics relates to the fact that energy is not created, but only transformed
from one form to another. Therefore, for every system (we refer to it as a closed system, separated
from the environment or other systems), the total energy remains the same because we cannot
produce any energy and we cannot destroy any energy, but only transform it into other forms.
For example, in our cars, we transform into mechanical energy some of the chemical energy
stored in the fuel by first transforming it into thermal energy (combustion). e remaining part
of the energy is rejected from the engine through the exhaust and the radiator (if water-cooled)
and by direct radiation of heat into the atmosphere. e mechanical energy of the engine is also
transformed into other forms, for example kinetic energy of the car moving at a particular speed,
or the potential energy of the car going uphill (we will talk about these later). This potential energy
is once again converted to more kinetic energy (going faster) if we continue downhill. All of this
energy is eventually converted into thermal energy through the brakes, air resistance, and friction
in the system (we eventually slow down and stop).
The total amount of chemical energy going to the engine is equal to the total mechanical
energy plus the rejected heat. This can be described by:

E_fuel = E_mechanical + E_rejected    (1.1)

or

E_in = E_used + E_rejected,    (1.2)
where E_in is the input energy, E_used is the energy that is converted into a useful form such as
mechanical energy, and E_rejected is the thermal energy rejected to the environment. You might see
this equation containing the word work as well, such as the work done by the engine. Work is another
form of expressing energy. When a force moves, it works. Therefore, when the force created by
an engine at the point of contact between the wheels and the ground pushes the car forward,
it is doing work. Work, which can be transferred into mechanical energy, can be calculated by
multiplying the force by the distance travelled or a torque by the angle rotated. This is usually
expressed as:

W = F · d,    (1.3)
where F is the force and d is displacement (how much the object has moved).
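As a small illustration, here is how Equations (1.2) and (1.3) look when we put made-up numbers into them (a Python sketch; the efficiency, force, and distance are assumptions chosen only to show the bookkeeping, not data from this chapter).

# A sketch of the bookkeeping in Equations (1.2) and (1.3), with made-up numbers.
E_in = 100.0                  # energy supplied by the fuel (arbitrary units, assumed)
efficiency = 0.35             # assumed fraction that becomes useful mechanical energy
E_used = efficiency * E_in
E_rejected = E_in - E_used    # Equation (1.2): E_in = E_used + E_rejected

F = 500.0                     # assumed force, newtons
d = 20.0                      # assumed distance moved, meters
W = F * d                     # Equation (1.3): work = force x distance, in joules

print(f"{E_used:.0f} {E_rejected:.0f} {W:.0f}")   # prints: 35 65 10000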
This is true for the human body too. If you consider the human body as a system, then the
total energy intake (from the foods we eat) should be equal to the generated work, the stored
energy in the body (such as by fat), and the rejected heat. So imagine that a person takes in 2,000
Calories of energy in one day. Suppose that the person uses 900 Calories to walk, think, perform
bodily functions, talk, etc. Also imagine that the person loses 900 Calories of heat through radi-
ation and convection. Our bodies, when warmer than the environment, lose thermal energy. We
must lose the thermal energy generated by our muscles and physiological functions to not only
remain comfortable, but to even stay alive. The opposite is true in temperatures warmer than our
body temperature; not only does the body get warmer, it cannot reject its extra thermal energy
through convection or radiation. The reason we perspire is to increase heat loss from the body by
evaporating the sweat on our skin (it takes thermal energy from the body to evaporate, therefore
transferring our body heat into the environment. In damp environments where humidity is high
and the body cannot easily evaporate the sweat, we feel warmer. Similarly, when there is a breeze,
we feel cooler because it increases heat loss from the body). Without this heat loss, we may die
(should we use anti-perspirants in hot weather?). The remaining 200 Calories that are not rejected
and are not used otherwise will be stored by our bodies in the form of fat. At 9 Calories per gram
of fat, this is about 22 grams of fat added to the body, a weight gain. On the other hand, if the
energy intake is less than the work produced plus the heat loss, our bodies will convert the body
fat into energy supply, therefore causing a net weight loss. The energy equilibrium requirement
is maintained through this dynamic.
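The arithmetic of this example is simple enough to write out as a few lines of Python; the numbers are the same ones used above, and the script is only a sketch of the bookkeeping, not a model of metabolism.

# The same daily bookkeeping as in the text, written out in Python (a sketch only).
intake = 2000.0         # Calories eaten during the day
work = 900.0            # Calories spent on walking, thinking, bodily functions, etc.
heat_loss = 900.0       # Calories lost as heat by radiation and convection
surplus = intake - work - heat_loss    # 200 Calories left over
fat_gained = surplus / 9.0             # fat stores roughly 9 Calories per gram
print(f"Surplus of {surplus:.0f} Calories, about {fat_gained:.0f} g of fat added")
# A negative surplus would mean the body draws on its fat instead: a net weight loss.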
It should be mentioned here that this is a very simplified model of the human body. In
general, each person has a different metabolic rate that is affected by many factors, including
heredity, age, and so on. Some of the food we eat is not digested at all, and is rejected as waste.
When we suddenly reduce our energy intake (say by dieting), the body assumes there may be a
supply problem, like a famine, and slows the metabolic rate, storing more fat for future emergen-
cies. If we start eating more again, it will convert more to fat. If we eat as usual but work more
(burn more calories), there is less left for conversion to fat. Therefore, the aforementioned model
should be used for understanding the energy equilibrium and not a complete picture of human
metabolism or dieting. In general, reducing energy intake should have the same effect as doing
more work (more walking, exercising, swimming, etc.). However, doing more activities, even in
light of eating more calories, is more fun!
By the way, you may have noticed that in the preceding section, we used Calories and not
calories. Each Calorie is in fact one kilocalorie or 1,000 calories. In food science notation, the
unit used is Calorie. It is generally assumed that carbohydrates and proteins are approximately
equivalent of 4 Calories per gram, or 4,000 calories per gram. Fat is approximately 9 Calories per
gram, or 9,000 calories per gram. One calorie is the energy needed to raise the temperature of 1 cc
(cubic centimeter) of water by 1 degree Celsius (or 1.8 degrees Fahrenheit). Therefore, one Calorie
is the energy needed to raise the temperature of one liter of water by one °C. Consequently,
2,000 Calories consumed by one person in one day can heat up 2,000 liters of water by one °C. A
person requiring 2,000 Calories per day for maintaining constant weight, who fasts for 24 hours
(no food intake whatsoever, except water), and who still performs routine everyday activities
that burn 2,000 Calories, will lose 2,000/9 = 222 grams of fat, or a little less than 0.5 pounds.
Now calculate how many days one would have to fast, completely, and maintain the same level of
activity, to lose a desired amount of body fat.
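If you would like to let a computer do that homework, a few lines of Python will do; the 2 kg goal below is an arbitrary assumption, and the simplifications mentioned elsewhere in this section (constant activity, no change in metabolic rate) still apply.

# A rough answer to the question above: days of complete fasting needed to lose a
# chosen amount of fat, keeping the same daily activity (simplified, as noted in the text).
daily_burn = 2000.0            # Calories of activity per day, from the example above
cal_per_gram_fat = 9.0         # Calories stored in each gram of fat
target_loss_kg = 2.0           # assumed goal: 2 kg of body fat

grams_per_day = daily_burn / cal_per_gram_fat            # about 222 g per day
days_needed = target_loss_kg * 1000.0 / grams_per_day    # about 9 days
print(f"About {days_needed:.0f} days of complete fasting to lose {target_loss_kg} kg of fat")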
So, can we cool down a room during the heat of the summer by placing an air-conditioning
unit or a refrigerator with its door open inside the room? Do either of them not produce cooler
air that can make the room cooler? The answer is no. The room will actually be hotter. This is
because there is friction in every system, regardless of what we do. So we need energy to overcome
this friction. We also need energy to transfer the heat from one part of the system to another.
Remember, energy is neither produced, nor destroyed, but only transferred (or converted) from
one form to another. The cooler air of the refrigerator or the air-conditioning unit is the result
of a thermodynamic cycle (we will discuss this in Chapter 4) which removes the thermal energy
through one part of the system called an evaporator (and therefore, making that part cooler)
and transferring it to another part of the system through the condenser (and therefore, making it
hotter). The net result is that we have spent energy in order to do this transfer. If the condenser
part of the system is outside of the room, and therefore transfers the additional thermal energy
to the ambient air, the net result is that the room or the refrigerator will be cooler inside at the
expense of being hotter outside. If the whole system were within the room, the net result would
be a hotter room. Note how we have created a particular order within this system that against the
expected outcome (that heat flows from a hotter place to a colder place) transfers the heat from
a colder place to a hotter place by the additional work we do.
It is necessary to mention one thing here. If you look at a dictionary, the word
heat is defined in terms such as “energy associated with the motion of atoms
or molecules in solids and capable of being transmitted by convection and ra-
diation,” as a “form of energy possessed by bodies,” the “perceptible sensible
or measurable effect of such energy so transmitted,” etc. In vernacular con-
versations we also refer to heat as energy. However, in thermodynamics, heat
is the transfer of energy from one medium to another, which is different. Al-
though in the realm of thermodynamics equating these to each other is not
correct, we still use the word as if it is an energy term and refer to heaters,
heat exchangers, heat pumps, heat absorption and heat rejection, and similar
terms. Inadvertently, the word is also used here as if it were energy because we
normally refer to it as such. With the understanding that, although the correct definitions
are different, we may sometimes use the word "heat" as if it were thermal or
internal energy.
1.5.2 THE SECOND LAW
The second law of thermodynamics relates to the quality of this energy transformation. But first
a word about energy types.
Rub your hands against each other for a few seconds. They will feel warm. Burn a small
stick of wood. It will give off thermal energy. Turn on a light bulb (especially an incandescent light
bulb that gives off light through a heated element) and it gets hot. Run the engine of your car,
and it too will get hot. Now try to do the opposite: use the heat to move your hands, to recreate
the stick of wood, regenerate the electricity that turned on the light, or recreate the gasoline that
went into the engine. You will need to create a system composed of many elements to move your
hands, spend the energy for a long time to nurture and raise a tree, make a power-plant or design
a device like an engine, or use a chemical reactor to re-make the gasoline. is is because thermal
energy is the lowest-quality energy. All other forms of energy tend to reduce to thermal energy
unless we do something drastic. Natural systems go toward the lower-quality thermal energy. For
example, what happens to the energy of your voice as you speak? Your voice will vibrate countless
different systems in your vicinity, including surfaces and molecules of air through which the energy
eventually converts to thermal energy. In fact, the sound can only emanate in air by vibrating the
molecules of air; sound does not transfer in vacuum. And all that mechanical energy in the form
of vibrating elements converts to thermal energy. And what happens to the sound level if you
speak to someone while you are inside and the other person is outside of a room? The reason the
level of your voice heard outside is lower is that part of the energy is absorbed and converted into
thermal energy by the walls, the doors and the windows, and the furniture and other things in
the room.
Additionally, the efficiency of converting other energy forms into thermal energy versus
converting thermal energy into other forms of energy is very different. Not only is it easier to
convert electrical energy to thermal energy, it is also more efficient. Therefore, the best efficiency
one might expect from a power plant that converts lower-quality thermal energy (chemical energy
of the fuel converted to thermal energy during burning) into higher-quality electrical energy is
about 40% (see the discussion about “combined cycles” in Section 4.8), whereas converting electrical
energy into thermal energy is very efficient (almost all electrical energy is converted into thermal
energy in a lamp). This is also very much related to entropy, but we will not get into it for now.
The efficiency of converting electrical energy into mechanical energy (such as in an electric motor)
can be over 90%. Similarly, electrical-to-electrical conversion of energy such as in a transformer
or a charger can also be about 90% or so.
Another important issue is the value or utility (usefulness) of energy. What matters is not
only the total amount of energy that is available, but also at what temperature it is (this is called
exergy). A high temperature medium is higher in value or utility than the same medium in low
temperature. For example, the total energy stored in the waters of a lake is tremendous, but since
its temperature is about the same as the ambient temperature, this energy cannot readily be used.
A small mass of fluid at high temperature may have the same energy as a large tank of the same
fluid at ambient temperature. We cannot easily use the energy of the larger mass at near ambient
temperature, but the energy of the high-temperature small mass can be used readily (for example,
to heat a glass of water) and, therefore, it has more utility.
As mentioned earlier, both the first and second laws of thermodynamics must be satisfied.
Imagine a cup of hot coffee left in a room. Eventually, the heat transfers from the coffee into the
room, and therefore, the energy lost from the coffee is gained by the room, satisfying the first
law. However, if it were up to the first law alone, it should also be possible that some energy from
the room transfers itself to the now-cooled cup of coffee and heats it again; if it were left to the
balance of energy transfer alone, both scenarios would satisfy the first law and either one would
be possible. But we know this does not happen, because it violates the second law. Based on the
second law of thermodynamics, it is impossible for the heat to transfer from a cold source into a
hot source on its own.
A very common blessing and curse of everyday life is friction. Friction is a blessing when we
need it, and a curse when we do not need it. Examples abound, but for instance, when we brake, it
is friction that stops our car or bicycle. In this case, more friction generally makes a better brake.
The same is true in walking; we can only walk because there is friction. Just think of walking on
ice or with roller skates and how hard it is even though the friction is low, not zero. And we can
grab a fork and eat only because there is friction. In all these cases we win because there is friction.
But we lose when there is unwanted friction, for example in the mechanical components of our
car, air friction (drag) as we move through the air, and friction on the floor when we push heavy
objects. However, in both cases, whether a blessing or a curse, friction always opposes motion
and therefore it always converts part of the energy (or all of a specific form of energy such as
kinetic energy) in the system into thermal energy, the lowest form of energy. Since every real
system has friction, every real system creates wasted heat, whether a car, a fan, a computer, or our
bodies. There is no escape from this. Therefore, as every system operates, it will lose some of its
energy into thermal energy, and consequently, there can never be a 100% efficient system (this is
also directly related to entropy and is expressed as a thermodynamic equation that is used in the
analysis and design of systems, but is beyond the scope of this book).
In fact, based on the preceding argument, perpetual machines are fundamentally impossi-
ble. Since every system has friction in it, it is impossible to drive a system indefinitely without
supplying some energy into it; the friction in the system converts the added energy into ther-
mal energy. Without it, the machine will not move, and even if the system is given some initial
energy (like a flywheel which is already rotating) the stored energy will soon be converted into
thermal energy and rejected due to the friction in the system. Based on this fact, next time you
hear someone’s idea of a novel, innovative, and unique perpetual machine, go ahead and bet that
it will not work and challenge them to make it if they insist that you just don’t understand.
So what is the second law of thermodynamics anyway? The second law states that the trans-
fer of energy from one system to another is in the direction of lower-quality energy. For example,
a hot glass of water left in a room will lose its energy to the cooler air in the room until they are
at equilibrium and there is no more transfer of energy. What is the chance (probability) that the
energy in the cooler room would somehow collect itself into the glass of water and make it hotter
than the room temperature? Zero. Is this not what we already said about entropy anyway?
1.6 HYBRID CARS ANYONE?
Hybrid cars have recently become very popular, and rightly so, due to their very high efficiency as
compared to other vehicles with regular internal combustion (IC) engines. For cars of a similar
size and weight, typical fuel consumption may be in the 25–30 MPG in the city and 30–40 MPG
on freeways, whereas for a hybrid it might be 50–60 MPG in the city and about 40–45 MPG
on freeways. Obviously, hybrid cars are much more efficient, even if the numbers are strangely
confusing in that hybrids use more gasoline in freeway driving than city driving.
First, let’s see why non-hybrid cars are more efficient in freeway driving than in city driving.
As previously discussed, the total energy of the fuel spent is converted into the kinetic energy of
the car and into thermal energy due to friction, drag, sounds and vibrations, as well as the huge
amount of energy rejected by the engine through the exhaust, radiator, and heat loss through the
body of the engine (and due to the second law of thermodynamics, it is impossible to eliminate
this loss). The efficiency of the best engines is lower than 40%; this means that less than 40% of
the total energy is converted into useful energy such as kinetic energy, while the rest is lost as
thermal energy. is is even worse when the engine runs without the car moving, such as behind
a traffic light or in congested traffic. In these situations, there is no kinetic energy, and therefore
all the fuel energy is wasted.
The kinetic energy stored in a car of mass m at a velocity of v is:

E = ½ mv².    (1.4)

So, in stop-and-go driving in the city, every time we speed up, we convert less than 30%–40%
of the spent fuel energy into kinetic energy. In a non-hybrid car, when we brake, this energy is
converted to additional thermal energy and is rejected to the environment, causing more loss. In
fact, sometimes the thermal energy of braking can be so much that it may damage the brake's
rotor assembly. The rotor gets so hot that, due to what is called residual stresses that remain in it
as a result of manufacturing operations, it can bend, requiring that it be ground to prevent
pulsations when the brake is applied. The more we accelerate and gain speed and then slow down by
braking, the more energy we waste, leading to higher gasoline consumption in the city compared
to freeway driving where we drive at a more steady speed. In this case, since we do not slow down
or completely stop as much as we do in city driving, there is much less waste, and therefore better
efficiency. Engines require an injection of extra fuel to accelerate when we speed up, further re-
ducing the efficiency in city driving. As a result, more uniform speeds in freeway driving increase
efficiency, reducing the need for additional gasoline. To make matters even more complicated,
since most drivers like to have plenty of power available to accelerate quickly, more powerful en-
gines are installed in cars; in freeway driving, when accelerations are lower (smaller changes in
speeds in freeway driving), only a fraction of the available power of the engine is used. Since
engines have different efficiencies at different power levels, the total efficiency of the engines is
affected by the power it generates.
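To get a feeling for the sizes involved, here is a rough Python sketch based on Equation (1.4); the car mass, speed, engine efficiency, and number of stops are all assumed values for illustration, not measurements.

# A back-of-the-envelope look at Equation (1.4) for stop-and-go city driving.
# All numbers are assumptions chosen only for illustration.
m = 1500.0                    # mass of the car, kg
v = 50 * 1000 / 3600          # 50 km/h converted to m/s
E_kinetic = 0.5 * m * v**2    # energy stored in the moving car, joules

engine_efficiency = 0.35      # assumed fraction of fuel energy that ends up as motion
fuel_per_start = E_kinetic / engine_efficiency

stops = 20                    # assumed number of stops on a city trip
wasted = stops * E_kinetic    # in a non-hybrid car, braking turns all of this into heat

print(f"Each acceleration stores about {E_kinetic/1000:.0f} kJ of kinetic energy,")
print(f"costing roughly {fuel_per_start/1000:.0f} kJ of fuel energy;")
print(f"{stops} stops throw away about {wasted/1e6:.1f} MJ as heat at the brakes.")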
In purely electric cars (also called Zero Emission Vehicles or ZEV, Battery Electric Vehicles or
BEV, Electric Vehicles or EV ), instead of an engine, the source of energy is a large set of batteries.
The car is propelled by converting the electrical energy stored in the batteries (actually in the
form of chemical energy) to mechanical energy through electric motors that rotate the wheels.
Like in a conventional car it is possible to stop an electric car by braking, and therefore, converting
the kinetic energy of the car into thermal energy and rejecting it into the atmosphere. However,
due to a phenomenon called electromotive force (or EMF), the kinetic energy can be recaptured
and converted back to electrical energy that can be used to recharge the batteries. Of course,
this is not 100% efficient either and some energy is lost during conversion (in fact, due to safety
concerns, there is always some braking mixed in with regeneration to ensure that the vehicle stops
when needed, especially at low speeds). erefore, since a lot of the kinetic energy is captured and
converted back into electrical energy and stored in the batteries (or ultra-capacitors), the efficiency
of the system is much higher than when an internal combustion engine is used. As a result, the
total efficiency of an electric car can be much better than a regular engine-equipped car.
A Word about Electromotive Force (EMF)
Imagine that a conductor (for example a wire) is placed within a mag-
netic field (for example between the two poles of a magnet) as in Figure 1.3.
With an electric current flowing through it, the conductor will experience a
force (or a torque in rotational systems) called electromotive force, which is
perpendicular to the plane formed by the current and the field. Similarly, if
the conductor is moved (or rotated) within the field (for example, by applying
a force or torque to it and causing it to move), a current will be induced in
the conductor. The same is true if the magnetic field is turned on and off or
is changed in magnitude or direction. This simple principle governs how elec-
tric motors work, how electric generators generate electricity, and even how
transformers change the ratio of voltage and current for electrical energy dis-
tribution systems. More on this in Chapter 6, but for now suffice it to say that
an electric motor and an electric generator (such as the generator in your car
that recharges your battery after you drain it a bit by starting the engine, and
which if it fails, your battery will drain, stranding you in the middle of worst
places when you need to restart your engine!) are the same thing (with minor
differences for managing the DC and AC currents). Figure 1.4 shows the sta-
tor coils of an AC induction motor and its rotor. The rotor is a collection of
conductors that move within the magnetic field, generated by the stator coils.
Figure 1.3: A wire carrying a current, placed within a magnetic field, will ex-
perience a force in a direction normal to a plane formed by the current and the
field.
This means that if a current passes through a motor’s coils, the electro-
motive force will cause it to rotate, acting as a motor. However, if you rotate
the shaft of the motor, either by attaching it to something else that is rotating
or by turning it manually, it will have a current induced in it, acting as a gen-
erator. Figure 1.5 shows a flashlight in which the user can rotate the handle to
charge the energy-storage unit within the flashlight. In this case, instead of a
rechargeable battery, a large-capacity capacitor may be used.
Figure 1.4: An AC motor with its stator coil and rotor.
Figure 1.5: In this flashlight, a crank is used to rotate a generator in order to con-
vert mechanical energy into electrical energy. This is either stored in a rechargeable
battery or in a large capacitor.
However, as explained earlier, the mechanical energy is converted into electri-
cal energy through the generator (which is really a small motor) and stored in
the capacitor. Similarly, in a hybrid car, the same motor that rotates the tires
when powered by the current from the batteries can also function as a gen-
erator and produce a current that recharges the batteries when it is forcefully
rotated by the wheels (while the current from the batteries is cut off ). Since
a torque is needed to turn the generator, it converts the kinetic energy of the car into electricity, consequently acting as a brake. Obviously, electronic
circuits are used to control the flow and the charging of the batteries and how
much force is applied.
So, in electric and hybrid cars, instead of braking by mechanical means and wasting the energy as heat, the kinetic energy is recaptured
and converted back to electric form, stored in the battery, and used again later.
It should be mentioned that, when charged by plugging into the electric grid, the energy
stored in the batteries comes from a power plant that also burns fuel, so it is not that much more
efficient than an engine; the difference is that power plant energy conversion systems are some-
what more efficient than engines, perhaps a little over 40% or so (see Section 4.8 for additional
notes on this). Electricity can be “generated” (or more accurately, converted) by burning gas or
coal (less expensive), by solar panels, through nuclear or hydroelectric power plants, wind energy
systems, etc. Therefore, the total system is more efficient than an engine.
Electric cars have many advantages in addition to their efficiency. Since there is no engine,
there is no need for oil changes, maintaining the water level in the radiator (there is no radiator for
cooling the engine anyway), and almost no need for replacing brakes. They also do not have a gear
box or a clutch (or transmission). Figure 1.6 shows a Tesla Motor S-Type car without the body. It
shows how there are very few parts to the car. However, electric cars have a fundamental drawback.
When the battery is drained, it must be recharged, whether at home, at work, outside shopping
malls, or at charging stations. Unless the car is driven short distances, for example between your
house and your place of work or school or shopping, and time is available to recharge the batteries
at night or while you shop, or if charging stations are readily available to charge the batteries on
a regular basis (which requires time), you may run out of energy and not be able to drive your
car. This severely limits the range of an electric car and limits its usefulness to short trips. Some companies have proposed, and have attempted at great expense, to create battery-exchange stations as common as gas stations, into which you may drive your car, automatically exchange your drained battery for a freshly charged unit, pay for the energy used, and quickly get on your way
again. Until such a time when there are sufficient stations everywhere, the recharging of batteries
will remain an issue.
A huge advantage can be created if a relatively small engine is also added to the purely
electric system to recharge the batteries at a constant rate while we drive. These cars are referred to as Range Extended Electric Vehicles or Plug-in Hybrid Electric Vehicles (PHEV). In this case,
the car has a complete electric drive system, including batteries, drive motors and generators for
brakes, and control systems, but also an engine, fuel tank, and associated hardware. However, in
general, the power required to constantly recharge the batteries is a fraction of the power needed
to propel the car at high accelerations. Therefore, a small engine can be designed and used to generate electricity at its maximum efficiency to recharge the batteries. This way, the driver may
drive the car just like a regular car without regard to range limitations. Since most of the energy is
recaptured during braking, and because they can be charged at night, the efficiency of these cars
Figure 1.6: Tesla Motors S-Type chassis (a), drivetrain (b), and steering mechanism (c). Due to its
nature, an electric car is very simple, with very few parts, compared to a conventional car or a hybrid.
The batteries are placed in the middle part of the car under the seats.
in terms of gasoline consumed is very high. However, if the intention is for the engine to propel the
car in extended driving situations like a regular engine capable of large accelerations, the engine
will have to be larger and its efficiency will be lower. For example, the 2012 model Chevrolet Volt
has an 84-horsepower engine. The engine of the 2013 Toyota Prius is 138 HP.
So then what is a hybrid electric vehicle (HEV )? A hybrid car is the combination (hybrid)
of both systems that share the power generation duties. Although it sounds even more inefficient
to have both an engine and a set of usually heavy and expensive batteries with limited lives, and
drive motors and control systems, hybrids offer something that electric cars lack: the convenience
of having gasoline available to burn regardless of how long a drive might be, as well as the desired
accelerations available when needed, all at a much better gas mileage. It should be mentioned here
that there are many different combinations of gasoline and electric drive duties used in hybrids,
each with their own characteristics (for example, parallel systems, series systems, and power-split
systems). Regardless, in hybrid cars, when not plugged in to recharge the batteries, a regular
engine converts the chemical energy of the gasoline into electrical energy which is used to charge
the batteries or assist in powering the vehicle. The electrical energy is used to drive the electric
motors to propel the vehicle, and the EMF is used to convert much of the kinetic energy of
the car back to electrical energy when we brake. As long as the speed of the car is relatively
low (up to about 30–35 miles per hour) and the batteries are charged, the electrical energy is
used to drive the electric motors and propel the vehicle. When the battery is drained, the engine
automatically starts to charge the battery and also helps propel the car. At higher speeds the engine
starts and participates in propelling the car. In freeway driving, it is mostly the engine that drives
the car. Still, the kinetic energy of the car is recaptured and used to charge the batteries during
braking. Since the engine does not principally drive the car in most cases, and since it does not
normally start the car from rest where the most torque is needed, and therefore does not need to
be hugely powerful to provide large accelerations that are needed only a fraction of our driving
time, the engine can be much smaller and can mostly run at a constant rate at its most efficient
state. Therefore, it will have the best possible efficiency at the lowest possible weight (although
manufacturers are increasing the size of engines to satisfy our hunger for more power, albeit at
the cost of fuel efficiency!).
So why is it that the efficiency of a hybrid vehicle is even better in city driving than in free-
way driving? Do we not accelerate, decelerate, and brake more often in city driving and therefore
waste more energy? Should gas consumption in freeway driving not be even lower than in city driving? Then why is it higher for hybrids?
There are two reasons for this anomaly. One is that hybrids switch to their engines in freeway driving; therefore, the efficiency is that of the engine. However, the more important reason is aerodynamic drag. When an object moves through the air—regardless of its shape—the faster it goes, the larger the air resistance. The very simplified equation describing this phenomenon is:
$$ F_D = \frac{1}{2}\,\rho\, C_d\, A\, v^2 \qquad (1.5) $$
where F_D is the drag force resisting the motion, ρ is the density of the fluid (in this case, air), C_d is the coefficient of drag, a measure that depends on the shape of the object, A is the frontal area of the object (the area of the vehicle if you were to look at it directly from the front), and v is the velocity of the vehicle. The coefficient of drag varies for different shapes; airplanes have a much smaller coefficient of drag than buses because they are more aerodynamic. The coefficient of drag for cars also varies depending on their shape. The most important element of Equation (1.5) is speed, because it is squared. Therefore, as the speed increases, the drag force increases quadratically. For example, for the same car, when everything else remains the same, as the speed goes from 30 miles an hour to 60 miles an hour, the drag increases four times as much.
At 75 miles an hour, the drag is 6.25 times as much. Therefore, in freeway driving the engine has to provide much more power to overcome drag than in city driving, where speeds are low. And unlike stop-and-go driving, where much of the kinetic energy is converted back to electrical energy and returned to the batteries, in freeway driving the energy spent overcoming drag is simply lost and cannot be recaptured. Therefore, the efficiency of hybrids in freeway driving is less than in city driving. Strange, but true.
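A minimal numeric sketch of Equation (1.5) makes the quadratic growth concrete; the air density, drag coefficient, and frontal area below are assumed, illustrative values, not figures from this text.

```python
# Drag force F_D = 0.5 * rho * C_d * A * v^2 (Equation 1.5), evaluated at
# several speeds. rho, C_d, and A are assumed, illustrative values.
RHO = 1.2     # density of air, kg/m^3 (assumed)
C_D = 0.30    # drag coefficient of a typical sedan (assumed)
AREA = 2.2    # frontal area, m^2 (assumed)

def drag_force(speed_mph):
    v = speed_mph / 2.237                    # convert mph to m/s
    return 0.5 * RHO * C_D * AREA * v ** 2   # drag force in newtons

base = drag_force(30)
for mph in (30, 60, 75):
    force = drag_force(mph)
    print(f"{mph} mph: {force:6.1f} N  ({force / base:.2f} times the drag at 30 mph)")
```

Whatever particular values are assumed, the ratios come out the same: doubling the speed quadruples the drag, just as the text states.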
1.7 COMMON MISCONCEPTIONS
Since we are talking about energy, let’s also talk about some misconceptions people have about it.
I. Heating a room faster by setting the thermostat higher: Some people think that if a
room is very cold, they can warm it faster if they set the thermostat at a higher temperature.
Unless they have variable-rate furnaces with multiple burners, variable-rate or multiple fans, and
a smart controller, this is not true (most systems are simply on-off systems). Let’s say the room is at 60°F and the desired temperature is 72°F. If you set the thermostat at 72°F, the heater will pump heat into the room until it gets to 72°F and stop (depending on the settings of the furnace, some systems slow down a bit when the temperature is near the desired value to prevent overshooting and to allow the furnace to cool down). Setting the thermostat at 85°F will not increase the rate of heating the room to 72°F because the furnace works at its maximum power regardless of the set temperature. So, the temperature of the room will not increase any faster; it just continues to increase until it reaches the desired temperature. So, if you initially set the thermostat to 85°F and subsequently reduce it to 72°F as it reaches this desired value, the rate of heating will be the same as if you were to initially set it at 72°F and be done with it.
II. Cooling a room by leaving the refrigerator door open: On a hot summer day, when it
is difficult to bear the heat, it is tempting to leave the refrigerator door open in order to blow
cool air into the room, and some people do so because they think they are generating cool air that
can reduce the temperature of the room. As was discussed earlier, this is a misconception too.
Although the refrigerator does blow cool air into the room, as long as the whole unit is inside the
room, the total net effect is more heat. Leaving the door of a refrigerator open will actually make
the room warmer, not colder. This is because the inside of the refrigerator is colder than the room only because heat is pumped out of it and added to the air that is blown over its condenser. Due to the ever-present friction and inefficiencies in every system, it takes a net positive amount of energy to do this, therefore adding more thermal energy to the room and further warming it. We will learn about thermodynamic cycles, including refrigeration cycles, in Chapter 4, but for now let
it suffice to say that leaving your refrigerator door open does not make the room cooler.
In fact, it is the same if an air-conditioning unit is completely enclosed inside a room. So
how do air conditioners normally make a room cooler? By placing their condensers, where the
hot air is blown out, outside of the room, whether a traditional AC unit, a unit installed in a
window, or a unit whose evaporator is inside the room and its condenser is outside. All you are
doing is moving the thermal energy from inside the room plus the work done by the system
to the outside environment; the net effect is more heat. Figure 1.7a shows the evaporator of an
air-conditioning system inside the room, where the thermal energy of the room is transferred to
the coolant, thereby making the room cooler. Figure 1.7b shows the condenser unit of the same
system outside of the room, where the thermal energy is rejected into the outside air.
Figure 1.7: Even though the evaporator part of the air conditioning unit is inside the room where it
cools the air, the condenser part is outside in order to dissipate the thermal energy to the environment.
Similarly, a running fan inside a room will also make the room warmer because the energy
spent by the motor is added to the room. However, since the moving air helps evaporate sweat
from the body, and therefore cools it, it makes the person feel better. Nevertheless, the temperature
rises as a result of running the fan.
III. The hand dryer in the restroom blows cold air when it starts: You may have noticed that when you use blown air to dry your hands in a public restroom, it feels as if, at the beginning, just when we expect it to be hot in order to dry our hands quickly, the blower blows cold air. Later, when the hands are almost dry, it blows hot air. However, if you pass your hands under the blower when they are dry, you will notice that the air is warm from the beginning. The reason we feel the air is cold at the beginning is that when our hands are wet, the air evaporates the moisture on them, cooling them in the process. What our hands feel during this process is the consequence of evaporation and the transfer of heat from the hands while the moisture evaporates.
IV. Food cooks faster in boiling water if you turn up the heat: If your food is cooking in
already-boiling water, turning up the heat will not increase the temperature of the water, and as
long as the water remains boiling, the food will not cook any faster. This is because when water or other fluids boil, the temperature remains constant at the boiling point. For water at sea level, this is 212°F or 100°C. This means that if you increase the amount of thermal energy, by burning more gas or passing more electrical energy through the heating element, the additional energy
will make the water boil faster, not hotter; it will convert to steam at a faster rate, but the temperature will remain the same. At pressures lower than the air pressure at sea level (for example, if you go to Denver), the boiling temperature decreases as well, cooking the food slightly slower. So what is the right way to increase the temperature of the water and cook faster? Using a pressure cooker, in which the lid is completely sealed, causing the pressure in the pot to increase. This will raise the boiling temperature, therefore cooking the food faster. However, you cannot remove the lid, in order to taste the food or check it, without first letting out the steam to reduce the pressure to atmospheric pressure. Otherwise, it may blow up in your face! Each time you do this, a lot of energy is wasted too.
The same is true in steam locomotives. In order to increase the temperature of the water and generate more steam and more power, a pressure vessel is used. This increases the boiling temperature and increases the total energy that the steam carries, therefore achieving more power transmission. The drawback is that pressure vessels are heavier and more dangerous.
The same principle is also used in the design of a novel coffee cup that keeps your coffee at a constant temperature for a comparatively long time. The cup is double-walled, and the space between the two walls is filled with a chemical compound that boils at about 180°F, a desirable temperature for hot coffee. If the freshly poured coffee is hotter than this temperature, the extra energy is transferred through the metal wall of the coffee cup to the fluid in between and heats it up to its boiling temperature. The remaining energy boils the compound into vapor. Since the energy required to boil the compound (its latent heat) is much larger than that required to heat it, much of the initial excess heat energy of the coffee is stored in the chemical compound in the form of vapor at a constant temperature. As the coffee cools, the heat energy of the compound is transferred back to the coffee, keeping it at the desired temperature for a longer time.
Enjoy your coffee without the danger of pouring it over your legs and burning them while
you drive!
C H A P T E R 2
Natural Frequencies
Vibrations, Hearing, Biomechanics, and Guitars
2.1 INTRODUCTION
Imagine that you are sitting at home on a fine morning when suddenly your whole house starts
shaking. You look outside and realize a train or airplane has just passed by. You are at a rock concert
and the drummer starts playing a bass beat so heavy you feel like your heart is actually vibrating
inside your chest. Or you are stopped at an intersection and the car next to you is blasting reggae
music with sub-woofers and your whole car starts to shake along with the beat. But how can an
object have such a powerful effect on another object with which it does not even have physical
contact?
What about the tires of a car shaking vigorously if they are not balanced and your cell phone
vibrating when there is an incoming call? I know someone who claims that his back molars vibrate
when he hums a D# note. He uses this vibration to tune his instruments when he does not have
access to a tuner. What causes these vibrations?
The phenomenon that causes all of these is called natural frequency. In this chapter we will study natural frequency, vibrations, and many other related issues, and see how to reduce unwanted vibrations, how to benefit from them when we need to, and how they affect our everyday lives.
Imagine that you attach a small weight m to a spring (with spring constant k, see below),
as shown in Figure 2.1, and hang the spring. Obviously, the weight will pull down on the spring until the force in the spring equals the weight. Since it is in equilibrium, the mass will stay where it is without motion.
Now imagine that you pull down the weight extending the spring and then let go; it will
start to go up compressing the spring, stop, come down stretching the spring, stop again, and start
to go up again, oscillating up and down repeatedly at a constant rate. How long will it continue
to oscillate? This depends on a number of different factors, including the internal friction in the
spring and air resistance (that converts the kinetic energy of the weight to heat, as discussed in
Chapter 1), also referred to as damping. In the absence of any factors that will convert this energy
into heat, the mass will theoretically oscillate forever, converting its potential energy P (energy
stored in the spring as it is stretched or compressed) into kinetic energy K of the mass as described
by:
$$ K = \frac{1}{2} m V^2 \qquad (2.1) $$
Figure 2.1: A weight hanging from a spring in a stable condition.
where V is the velocity of the mass. Of course, nothing in nature is completely frictionless, and
air resistance exists unless there is an absolute vacuum. So in reality, after a few oscillations, the
mass will eventually stop when all its initial energy is converted to heat.
Where did that initial energy come from? From pulling the spring down, stretching it, and
storing the energy in a form called potential energy or elastic energy:
$$ P = \frac{1}{2} k d^2 \qquad (2.2) $$
where k is the spring constant and d is the displacement or the stretch in the spring from its free
(unstretched) length. Spring constant is the force necessary to stretch or compress a spring one
unit of length. In the metric system it is the force (in Newtons) necessary to stretch a spring one
meter. In English units, it is the force in pounds (lb) necessary to stretch the spring one inch or one foot. The displacement can be easily calculated from:
$$ d = \frac{mg}{k} \qquad (2.3) $$
where g is the acceleration of gravity (for example, 32.2 ft/sec² or 9.81 m/sec²), and therefore,
mg is the weight of the mass.
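As a small sketch of Equation (2.3), using the 5 lb/in spring and 1-lb weight that appear in the example a little further below:

```python
# Static deflection of a hanging weight: d = m*g/k = W/k (Equation 2.3).
# Uses the 5 lb/in spring and 1-lb weight from the example that follows.
weight_lb = 1.0       # W, the weight
k_lb_per_in = 5.0     # spring constant k

deflection_in = weight_lb / k_lb_per_in   # since W = m*g, d = W/k
print(f"Static deflection d = {deflection_in:.2f} in")
```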
It is important to notice that the small amount of energy given to the system will cause the mass to oscillate for a long time—perhaps forever—if there is no friction or air resistance. The frequency at which the mass oscillates is called the natural frequency.
So what is natural frequency? The natural frequency of a part or system is the frequency at which it will theoretically oscillate without any (or, in reality, with little) outside energy. The natural frequency, in the absence of damping (such as friction), can be described as:
$$ f = \frac{1}{2\pi}\sqrt{\frac{k}{m}}\ \text{Hz} \qquad (2.4) $$
where the term 1/(2π) is a constant of conversion, k is the spring constant, and m is the mass of the part (not its weight). Hz (read Hertz) is the unit used to describe frequencies (for example, your radio station may be at 90.5 MHz, or 90.5 million oscillations per second). Let’s first explore this concept before continuing.
It should be clear from Equation (2.4) that as k increases, the natural frequency increases
too. Conversely, as m increases, the natural frequency decreases. As engineers put it, when a system is stiffer (larger k), the natural frequency is higher and the oscillations are quick, whereas when the system is more massive (larger m), the natural frequency is lower and the oscillations are
slow. For example, imagine that we have two springs, one with a spring constant of 5 lb/in, one
with 10 lb/in, and the weight used is 1 lb. To get the mass of this weight, we will use:
$$ W = mg \quad\text{or}\quad m = \frac{W}{g} = \frac{1\ \text{lb}}{386\ \text{in/sec}^2} = 0.0026\ \text{lb}\cdot\text{sec}^2/\text{in} \qquad (2.5) $$
(Note the unit used for mass in the English system. In SI units, the unit of mass is kg, but in English units there is no separate named unit for mass, so we use this unit, which comes from dividing the unit of force, lb, by the unit of acceleration, in/sec².)
For the combination of the 5 lb/in spring and the 0.0026 lb·sec²/in mass (1-lb weight), the natural frequency of the system will be:
$$ f = \frac{1}{2\pi}\sqrt{\frac{5}{0.0026}} = 7\ \text{Hz} $$
or that one full oscillation will take 1/7 ≈ 0.14 seconds. For the second spring, with the same mass, the natural frequency will be:
$$ f = \frac{1}{2\pi}\sqrt{\frac{10}{0.0026}} = 9.87\ \text{Hz} $$
and the time required to fully oscillate once is about 0.1 seconds. As you see, when the spring is
stiffer (harder, requiring more force to pull or push), the natural frequency is higher too, taking
less time to complete one oscillation; the mass moves faster.
Now let’s take the same 5 lb/in spring, but attach a 2-lb weight (m
0:0052 lb.sec2/in) to it. e natural frequency will be:
W
g D
2
386 D
D
f
D
1
2(cid:25)
r 5
0:0052 D
4:94 Hz
with the time required for a complete oscillation at 0.2 seconds.
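The three calculations above can be reproduced in a few lines; this is only a sketch of Equation (2.4) using the same spring constants and weights as the text.

```python
import math

def natural_frequency_hz(k_lb_per_in, weight_lb, g_in_per_s2=386.0):
    """f = (1 / 2*pi) * sqrt(k / m), with m = W/g in lb*sec^2/in (Eq. 2.4)."""
    m = weight_lb / g_in_per_s2
    return (1.0 / (2.0 * math.pi)) * math.sqrt(k_lb_per_in / m)

# The same three spring/weight combinations used in the text.
for k, w in [(5, 1), (10, 1), (5, 2)]:
    f = natural_frequency_hz(k, w)
    print(f"k = {k:2d} lb/in, W = {w} lb -> f = {f:.2f} Hz, period = {1 / f:.2f} s")
```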
To see how this relates to real-life products, consider sub-woofer and tweeter speakers. The sub-woofer is usually a larger, relatively heavy speaker with a more massive cone. Therefore, its natural frequency is lower, and as a result it is more appropriate for generating low-frequency bass sounds. The tweeter is usually small, such as the speaker used in your computer or cell phone, with a very light-weight diaphragm, which has a higher natural frequency that is suitable for generating
higher frequency treble sounds. We will see more about these and other examples shortly.
Since systems can oscillate very easily at their natural frequency, a varying force whose frequency is near the natural frequency of the system will induce large oscillations in the system. When this is to our benefit, we take advantage of the phenomenon, whereas when it is to our detriment, we try to reduce or control it. Therefore, any time there are forces present
that oscillate near the natural frequency of a system, we must watch out for large, sometimes
out-of-control oscillations in the system. If the inducing force varies at a frequency that is not
close to the natural frequency, the system will not oscillate freely; it requires much more energy
to oscillate a system or a part at frequencies other than the natural frequency.
2.2 SYSTEM RESPONSE TO EXTERNAL FORCES AT DIFFERENT FREQUENCIES
Figure 2.2 shows the response of a system to a varying-amplitude force (like a sine wave) at different frequencies. Although in reality input forces may be very different, engineers use known inputs such as a sine wave to study the output of a system and understand its behavior. The x-axis shows the ratio of the frequency of the external force relative to the natural frequency of the system. So when this ratio is ω/ωn = 1, the frequency of the external forcing function is the same as the natural frequency of the system. At other values, the frequency of the external force is either higher or lower than the natural frequency of the system.
The y-axis shows the response of the system, also called the magnification factor. It indicates how large the response of the system is relative to the amplitude of the force. So when the magnification factor is equal to 1, the amplitude of the vibration of the system is the same as that of the external force, and therefore, there is no magnification. When it is larger than 1, the system oscillates at a larger amplitude than the external force (it is magnified), and when it is smaller, the vibration is reduced. This is an important factor in the design of systems where
Another important factor in Figure 2.2 is the damping ratio, shown as ζ (Greek symbol zeta). We already mentioned that all systems have some friction, air resistance, or other damping (such as the shock absorbers in your car) in them. ζ indicates the level of this damping. A larger damping ratio indicates quicker conversion of the system’s energy to heat, consequently reducing the amplitude of the vibration and the time it continues to oscillate.
What is important about Figure 2.2 here is that it shows how the system responds as the frequency and damping ratio change. Note that around ω/ωn = 1 (when the frequency of the external force is the same as the natural frequency), the amplitude of the response is very large,
Figure 2.2: The response of a system to an external driving force at different frequencies, amplitudes, and damping ratios.
especially when the damping ratio is lower. Theoretically, in the absence of any damping, the amplitude of the response could be infinite, a devastating result. This would mean that the system could disintegrate as it is subjected to an external force at the natural frequency. However, although every system has some damping, when it is low, the amplitude of the response can be very large, many times larger than the input. As the damping increases, the amplitude of the response decreases. At the value of ζ = 1.00, also called critical damping, the amplitude of the response varies between 1 (when the input frequency is zero) and zero as the frequency increases. This indicates that with critical damping, the amplitude of the response is always smaller than the input, and therefore, the vibration is always reduced.
example, the suspension system of a car is designed to have a critical damping value to prevent
excessive oscillations when it encounters a bump or similar external force. If the shock absorbers
(that provide the damping in your car) get old or are damaged, the car can oscillate many times
before the vibration dies out. Similarly, our bodies have a lot of damping. Consequently, our body
parts are largely shielded from external vibrations. Although there is still a danger present when
body parts are subjected to frequencies near their natural frequencies, the damping in the body
reduces this danger significantly. Next time you are taking a picture in a moving car try to place
the camera somewhere on the body of the car (dashboard, side of the doors, etc.) and see how
much vibration is transmitted to the camera, to the point of making it impossible to take a good
picture. But when you hold the camera in your own hands, the vibration is dampened significantly,
allowing you to take nice pictures. The same is true with reading a book in a moving car. Placing
the book on the body of the car might make it impossible to read due to vibrations.
Also notice how the amplitude of the response increases as the frequency of the input
approaches the natural frequency, but reduces significantly as the frequency of the external force
increases beyond the natural frequency. For example, as the ratio ω/ωn approaches 2 or larger, the amplitude of the response of the system approaches zero, indicating that the system does not vibrate. It is only at around the natural frequency that the response is large. This is very important,
as it indicates that we can prevent large vibrations if the frequency of the external force is larger
than the natural frequency. We will later see how this plays a pivotal role in our hearing mechanism
and the design of certain systems.
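Curves like those in Figure 2.2 are commonly drawn from the standard magnification-factor expression for a damped, sinusoidally forced single-degree-of-freedom system. The text does not give that expression explicitly, so the sketch below uses the usual textbook form as an assumption.

```python
import math

def magnification_factor(freq_ratio, zeta):
    """Steady-state magnification factor of a damped, sinusoidally forced
    single-degree-of-freedom system (assumed standard form, not from this text)."""
    r = freq_ratio  # omega / omega_n
    return 1.0 / math.sqrt((1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2)

for zeta in (0.1, 0.5, 1.0):
    row = ", ".join(f"r={r}: {magnification_factor(r, zeta):5.2f}" for r in (0.5, 1.0, 2.0))
    print(f"zeta = {zeta}: {row}")
```

Running it shows the same behavior the figure illustrates: a large peak near r = 1 when damping is low, a response below 1 everywhere for critical damping, and a response approaching zero once r reaches about 2.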
The following examples can help us better understand some of these concepts.
Example 2.1 Balancing the tires of a car
To make the ride of a car more comfortable, the tire assembly is attached to a suspension
system consisting of a spring and a damper (also called a shock absorber). e suspension may take
different forms, but in most cases it is a spring and a shock absorber. Figure 2.3 shows two typical
suspension systems for cars, one with a leaf spring and shock absorber, one with a coil spring and
shock absorber. The shock absorber is designed to exert a force proportional to the velocity of the oscillation, in the direction opposite to the motion, therefore damping the oscillation and stopping it; the larger the velocity of the oscillation, the larger the force will be. But as far as we are concerned, the tire-spring assembly is very similar to the system of Figure 2.1. Therefore, it
has a natural frequency at which it oscillates vigorously when the frequency of the input force is
close to it. Where does this force come from?
The force may come from an imbalance in the tire or the tire assembly as a result of manufacturing processes; no manufacturing process is ever perfect. Therefore, tires may be slightly
heavier at one point compared to another, resulting in an imbalanced tire. e extra heaviness
on one side of the tire, although sometimes very small (perhaps a few grams only), induces an
outward force in the tire when the tire is rotating, referred to as centrifugal force. (Mechanical engineers and physicists do not actually use this name; we prefer to talk about an inward acceleration
called centripetal acceleration, pointed toward the center of rotation. Due to the inertia of the tire,
there is a reaction to this acceleration, pushing outwardly. Please see Chapters 3 and 5 for a more
thorough discussion about inertia.) For example, the centrifugal force is the same force that lets
you rotate a weight attached to a string without it falling, as in Figure 2.4. The same force dur-
ing the spin cycle of a washing machine will force out the water. At much higher values (due to
extremely rapid rotations) this force also separates uranium from other impurities, and therefore,
concentrates it at higher purity levels.
Figure 2.3: Typical spring-damper assembly of automobile suspension systems.
Figure 2.4: A weight, attached to a string, will not fall when it is rotating around a fixed point due to
the outwardly pushing centrifugal force. Although the centripetal acceleration is inward, the reaction
to this acceleration is outward.
Since this force is always outward, the direction of this force changes as the tire rotates,
sometimes facing down, sometimes facing up, to the left or to the right. Especially when the
force is pushing down or pushing up, it pushes against the spring, deflecting it. This is exactly the same as in Figure 2.1, where a weight (a force) is attached to a spring, causing it to oscillate. As
shown in Figure 2.2, as long as the frequency of this force is not similar to the natural frequency
of the suspension system and the tire assembly, the tire will not oscillate much. But when the
frequency of the alternating force is close to the natural frequency of the system, even the small
force caused by a few grams of the weight imbalance is enough to violently oscillate the tire,
shaking the car with everything in it; it only takes a small amount of force to introduce large
oscillations at the natural frequency. However, the oscillations are much smaller if the frequency
of the force is below or above the natural frequency of the system.
In this case, our goal should be to eliminate this undesirable vibration. To do so, we elim-
inate the source of the force, the tire imbalance. This is why the tire is tested for imbalance by placing it in a machine that rotates it and measures the force exerted by the imbalance as well as its location. The technician then places a counterbalance weight on the wheel across from the heavy spot, therefore balancing it. Since the source of the oscillating force is eliminated, so are the
resulting vibrations at the natural frequency.
The same is true for any other device that rotates this way, be it the blades on the turbine
of a jet engine, the tub of a washing machine when clothes are loaded in it, the driveshaft of a
car, or even its engine. For example, if the blades on the turbine of a jet engine are not carefully
balanced across from each other, the turbine, rotating at very high rates (perhaps 50,000 to 80,000
revolutions per minute) will generate tremendously high forces that can vigorously shake the
engine. The driveshaft of your car that connects the gearbox (in the front) to the differential (placed in the rear of the car in rear-axle-driven cars) must be balanced too; otherwise it will shake when rotating in its natural frequency range. And if you place a heavy jacket, a heavy article of
clothing, a rug, or a blanket into a washing machine without other articles to counterbalance
it, the machine may shake out of control during the spin cycle, causing it to move and break
away from the water lines and cause severe water damage to its surroundings. And although it is
slightly different from this exact example, even an engine has forces that must be counterbalanced
to prevent excessive vibrations in it. Except for certain configurations (such as an inline six-cylinder engine, which is naturally balanced), the counterbalance weights are integrated into the crankshaft.
Example 2.2 Shakers and Oscillators
In Example 2.1 we looked at a system in which our desire was to eliminate vibrations
(oscillations). In many other systems we may actually want to take advantage of the oscillations
at the natural frequency. Three examples (and there are many more) are a cell phone vibrator, a hand-held massager, and an electric shaver. In all cases, either a mass-spring or a rotating-mass system is designed with a particular natural frequency. A vibrating force, either electromagnetic or mechanical (a rotating imbalanced weight), is applied to the system at the same frequency,
whereupon the mass oscillates vigorously although the force is very small. Since little energy is
needed to induce vibrations at the natural frequency rate, the cell phone vibrator, the massager,
or the electric shaver head operate with little expenditure of energy.
2.3 NATURAL FREQUENCY OF OTHER COMMON SYSTEMS
Natural frequency is not just a characteristic of a mass-spring system or a rotating system. Many
other systems have similar frequencies at which they oscillate with little or no external force. One
such system is a pendulum. Others include wires, attached at both ends, and cantilevered beams.
We will now discuss these systems because they also play an important role in many systems we
often use.
2.3.1 PENDULUM
Imagine a pendulum, a mass m attached to a string (or bar) of length l and hung at one end, as in Figure 2.5a.
If you move the pendulum to one side (for accuracy, assume the angle is small) and release it, since
the mass now has potential energy at an unstable state, it will move down to the bottom, gaining
speed as its potential energy is converted to kinetic energy. As the mass continues to the opposite
side, the kinetic energy converts back to potential energy until the mass stops, repeating the
process indefinitely until, either due to friction or air resistance, the energy is lost in the form of
heat. Theoretically, in the absence of friction and air resistance, the oscillation will go on forever;
in reality, it will oscillate a few times before it stops. Similar to the aforementioned case with a
mass and spring, if you desire to oscillate the mass at a rate other than this rate, you will have to
exert a force on it, whereas at this rate, there is no need for additional input force. Similarly, this
is the natural frequency at which the system oscillates with little to no external force.
Figure 2.5: A pendulum’s oscillation has a natural frequency that is independent of its mass.
Although it is easy to derive the equation describing the natural frequency, suffice it to say
that the natural frequency of a simple pendulum with a point mass (not the bar) at small angles
is:
$$ f = \frac{1}{2\pi}\sqrt{\frac{g}{l}}\ \text{Hz} \qquad (2.6) $$
Notice that the mass does not affect the natural frequency (m is not part of this equation). This
means that regardless of the size of the mass, the natural frequency of a simple pendulum is only
affected by the length of the string (or more accurately, the distance between the center of the
mass and the point of suspension), and of course, the acceleration of gravity. Therefore, unless you
move to a different planet, or move up a mountain, etc., the natural frequency of the pendulum
will remain the same unless l changes.
Where is this used? An example of an application of this system in everyday life is a grand-
father clock. Because the natural frequency of a pendulum is fixed, it can be designed (with
appropriate dimensions and lengths) to have a period of exactly one second (or its multiples).
Therefore, at that rate, it requires very little force supplied from the spring winding, a mass hung
from a chain, or an electric field, to oscillate it for a very long time. Have you ever seen how a
grandfather clock is adjusted? A small screw on the bottom of the pendulum is turned to move in
or out just a bit. The change in weight distribution (not the total weight) changes the location of the center of mass of the pendulum (causing a change in l), thereby changing the natural frequency and the period of oscillation.
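A quick sketch of Equation (2.6), solved for the length that gives a chosen period; the target periods below are illustrative values, not dimensions quoted in the text.

```python
import math

G = 9.81  # acceleration of gravity, m/s^2

def pendulum_length_m(period_s):
    """Invert f = (1 / 2*pi) * sqrt(g / l) (Eq. 2.6) to get the length for a period."""
    f = 1.0 / period_s
    return G / (2.0 * math.pi * f) ** 2

for period in (1.0, 2.0):
    print(f"Period {period:.0f} s -> length {pendulum_length_m(period):.3f} m")
```

The result illustrates why moving the small adjustment screw even slightly changes the clock's timekeeping: the period depends only on the effective length.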
The same is true in another very important part of our lives: walking and running. We will
discuss this a little later together with other body parts.
Example 2.3
A child in a swing is mechanically very similar to a pendulum. Based on the size of the
swing and the weight distribution of the child, the swing will have a certain natural frequency at
which it tends to oscillate. It requires a lot of force to swing the child at another rate (you would
have to grab the swing and move it back and forth at all times to force it to oscillate at other
frequencies).
2.3.2 CANTILEVERED BEAMS
Now imagine a cantilevered bar—a bar that is attached to a rigid body (like a wall) at one end, but
is free at the other end—as in Figure 2.6. If the bar is pulled down a bit and released (plucked),
the bar will oscillate up and down until the stored energy in it is converted to heat due to damping
or internal friction in the material, or because of air resistance. Here too, if you desire oscillations at a rate other than the natural frequency, you will need to exert a force on the bar; it does not need any additional force to oscillate at its natural rate.
Figure 2.6: Oscillations of a cantilevered bar.
The equation describing the first natural frequency of a cantilevered beam is:
$$ f_1 = \frac{1}{2\pi}\left[\frac{3.5156}{L^2}\right]\sqrt{\frac{EI}{\rho}} \qquad (2.7) $$
where L is the length of the beam, ρ is the density (or mass per unit length), and I is the area moment of inertia (see Chapter 5 for more details). The moment of inertia is discussed in Chapter 5, but for a beam with a rectangular cross section of width b and height h, as shown in Figure 2.6, it is:
$$ I = \frac{1}{12} b h^3 \qquad (2.8) $$
E is the modulus of elasticity, a measure of the hardness or stiffness of the material. If you pull on a piece of material, it stretches. The modulus of elasticity is a representation of this relationship and describes how stiff a material is (see Chapter 5 for more detail). The modulus of elasticity for steel is about 30 × 10⁶ psi (in engineering terms, it is the ratio of stress over strain).
The important point in Equations (2.7) and (2.8) is that the natural frequency of a beam is affected by the properties of the material (E and ρ), the length of the beam, and its thickness and width. If any of these factors change, the natural frequency will change too. Therefore, conceivably, we can make a series of beams with different dimensions and tune each one to a different natural frequency, as we desire.
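As an illustration of Equations (2.7) and (2.8), here is a sketch in SI units for a small cantilevered steel strip; the dimensions and material properties are assumed for the example, not taken from the text.

```python
import math

# Assumed, illustrative values for a small cantilevered steel strip.
E = 200e9           # modulus of elasticity of steel, Pa
DENSITY = 7850.0    # kg/m^3
b, h = 0.02, 0.005  # cross-section width and height, m
L = 0.3             # length, m

I = b * h ** 3 / 12.0          # area moment of inertia, Eq. (2.8)
rho_line = DENSITY * b * h     # mass per unit length, kg/m

# First natural frequency, Eq. (2.7)
f1 = (1.0 / (2.0 * math.pi)) * (3.5156 / L ** 2) * math.sqrt(E * I / rho_line)
print(f"First natural frequency: {f1:.1f} Hz")
```

Shortening the assumed strip or making it thicker raises the frequency, which is exactly the tuning idea used in the examples that follow.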
Are there any examples of where this is used? Of course there are, including the vibrations
of a reed in an oboe or clarinet, a tuning fork, and our hearing mechanism. The reed of an oboe,
a fundamental part of it that makes the sounds, is essentially a cantilevered bar (see Figure 2.7).
As the musician forces an air-stream over it, it vibrates and generates the sound we hear after it
is amplified by the body of the oboe. As the speed or pressure of the air stream changes, so does
the vibration frequency of the reed. In a tuning fork too, when the legs of the u-shaped fork are
struck, they vibrate at their natural frequency. Through the choice of dimensions and material
used, the fork is designed to vibrate at a desired frequency.
The same principle is used in the design of a particular tachometer with no moving parts
(see Figure 2.8a). Tachometers measure how fast something rotates (for example, an engine
shaft). Most tachometers are based on the back-emf principle discussed in Chapters 1 and 6.
The tachometer is very similar to a small electric motor that is connected to the rotating machine,
and which generates a current/voltage proportional to how fast it is rotating. A gauge measures
the voltage. However, this particular device has a number of small cantilevered beams of differ-
ent thicknesses next to each other, each with a unique natural frequency that is slightly different
from the neighboring ones. By placing the tachometer on a rotating machine, its vibrations are
transferred to the tachometer, forcing only one of the beams to vibrate vigorously when its natural
frequency matches that of the rotating machine. Therefore, with no moving parts, the number
of revolutions per minute of the rotating machine can be measured. Figure 2.8b shows how this
tachometer indicates the speed of the motor of a drill press at 2,000 rpm.
Figure 2.7: The reed of a clarinet.
2.3.3 STRINGS
Finally, consider a string that is attached to a rigid body at one end and kept taut with an axial
force acting on the other, as in Figure 2.9. Plucking the string will also induce oscillations in it at its natural frequency, which require no further external force until they die out due to internal damping and other frictional forces.
The equation [1] describing the natural frequency of the string is:
$$ f = \frac{1}{2L}\sqrt{\frac{F}{\rho A}} \qquad (2.9) $$
where L is the length of the string, A is the cross-sectional area, ρ is the density of the material (mass per volume, or how heavy each unit volume of the material is), and F is the tension or force in the string. When the tension is increased, the natural frequency of the string increases as well, creating a higher pitch (this is how a guitar is tuned). As the length of the string increases, the natural frequency decreases. Therefore, when the length of the string is reduced by fretting, the pitch increases (notice how the lengths of the strings in a harp are different in order to produce different pitches). For larger cross sections A, the natural frequency decreases as well. Therefore, thicker strings have a lower pitch range. Heavier materials (steel versus nylon) also produce lower-pitched vibrations. Therefore, combinations of length, thickness, material, and tension can create any natural frequency we desire.
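Here is a small sketch of Equation (2.9) for a plain steel guitar string; the length, diameter, density, and tension are assumed, illustrative values rather than figures from this text.

```python
import math

# Assumed, illustrative values for a plain steel guitar string.
L = 0.648            # vibrating length, m
DIAMETER = 0.25e-3   # string diameter, m
DENSITY = 7850.0     # density of steel, kg/m^3
F = 70.0             # tension, N

A = math.pi * (DIAMETER / 2.0) ** 2                    # cross-sectional area, m^2
f = (1.0 / (2.0 * L)) * math.sqrt(F / (DENSITY * A))   # Eq. (2.9)
print(f"Fundamental frequency: {f:.0f} Hz")            # roughly a high E string
```

Increasing the assumed tension F raises the pitch, while lengthening or thickening the string lowers it, which is exactly the behavior the next section exploits.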
2.4 APPLICATIONS AND EXAMPLES
The following sections show the applications of natural frequencies in a multitude of systems and
devices. In each case, you will notice how the same engineering principles apply and how they are
Figure 2.8: A tachometer with no rotating parts.
Figure 2.9: Oscillations of a string, attached firmly at both ends.
used, whether in devices and systems that we design and build, or natural systems created through
natural forces.
2.4.1 GUITARS, PIANOS, AND OTHER STRINGED INSTRUMENTS
As the previous discussion indicated, large ranges of pitches can be produced by strings depending
on their length, thickness, material characteristics, and tension. In a piano many strings are used,
each with a specific length and thickness, practically all the same material. However, to tune the
piano, an expert tuner adjusts the tension in order to create an exact pitch (natural frequency). In
a piano, when a key is pressed, a hammer hits a string, and therefore, depending on how hard it
hits the string, the volume of the sound varies. It is also possible to dampen the sound by pressing
a damper against it. In harpsichords, the string is plucked just like a guitar; otherwise it is very
similar to a piano. Harps are the same; each string at a different length produces a different pitch.
The pitch is tuned by adjusting the tension. A harp is also plucked with the fingers.
In many stringed instruments, from guitars, violins, and violas to cellos and basses, all string lengths are equal (some other instruments have strings of varying lengths). However, the thicknesses of the strings are different and so are the materials used (Figure 2.10). Some strings are steel, some are nylon, and some are wound with a wire (nickel) for a lower pitch. The tone of the open string is adjusted/tuned by changing the tension. Subsequently, the instrument is “played” by changing the natural frequency through fretting or fingering. This is even more sophisticated in
instruments such as a violin, where vibrato is common. In some electric guitars a tremolo bar is
used to change the tensions of all strings simultaneously, thereby changing the pitch of all of them
(see Figure 2.11). In this case, instead of attaching the strings to the body of the guitar, they are
attached to the bridge. Since the bridge has a spring-loaded hinge, it can be moved slightly by
the musician to change the tension.
Can you guess how musicians use a tuning fork to tune a guitar or violin? Why does it
work?
Figure 2.10: Strings of a guitar produce pitches based on their lengths, the force pulling them at one
end (including the changes in the force through the tremolo bar), the material from which they are
made, and their cross sections.
Figure 2.11: A tremolo bar is used to change the tension on all strings simultaneously, thereby chang-
ing their natural frequency and their tone.
How Tension is Applied in String Instruments: Worm Gears: To apply tension to the strings of a string instrument, either friction-based pegs or worm gear-based tuning keys are used. For a violin or a viola, where the tensions are lower and the instrument is not plucked constantly, the strings are tensioned by turning the pegs or tuning keys. These pegs are held in place through friction. The pegs and the holes are tapered at a shallow angle, and therefore, by pushing them into the hole, enough friction is generated to keep the pegs from loosening (Figure 2.12). However, in guitars and most other instruments that are plucked, the forces are larger and friction may not be enough. In that case, worm gears are usually used (Figure 2.12). So what is a worm gear? Although worm gears are not related to the subject of vibration, let’s look at the way they work before continuing. This will also show us how most engineering subjects are inter-related.
Figure 2.12: In a violin, tension is provided by a tuning peg, which is held by
friction. In a guitar, since the forces are larger, tension is provided by a worm
gear–based tuning key.
Worm Gears: Worm gears are very common in devices for reducing
speed and increasing torque, including in automobile steering mechanisms,
jacks, winches, and others. Like other pairs of gears, they provide reductions
or increases in angular velocities and torques. But they also have other char-
acteristics that make them useful in particular instances. Figure 2.13 shows
a simple worm gear. Depending on whether the worm is a right-handed or
left-handed worm (turning the worm in the direction of your curled fingers
of the right or left-hand will move the thread forward along the direction of
your thumb; also see Chapter 3), the worm gear will rotate counter-clockwise
(CCW) or clockwise (CW).
Figure 2.13: A simple worm gear.
First, a word about gear ratios. In all gear systems, the reduction or increase in angular speed or torque is proportional to the gear ratio (the ratio of the number of teeth on each gear, usually called the driver gear and the driven gear). Therefore:
$$ \frac{\omega_1}{\omega_2} = \frac{T_2}{T_1} = \frac{N_2}{N_1} \qquad (2.10) $$
where ω1, T1, N1 and ω2, T2, N2 are the angular velocity, torque, and number of teeth of each gear, as shown in Figure 2.14. Notice how these are related. The larger a gear, the slower it rotates, but the larger the torque. Therefore, by selecting the appropriate number of teeth on a pair of gears we can increase or decrease the angular speed and torque.
So why is it that when a gear rotates faster, the torque on it is lower, and
vice versa? Of course we can answer this question by calculating the moments
on each gear and by drawing free body diagrams as well, but here we will
consider the principle of work and energy. As we have already discussed, the
total energy in a system is constant unless we add energy to it or remove energy
from it. is is called conservation of work and energy. Assuming that the
friction in the system is small enough to be negligible, the total work or energy
into and out of the system of gears is constant. However, work is equal to force
multiplied by linear displacement, or equal to a torque multiplied by angular
displacement. Therefore, the total input and output should be equal, or:
$$ W = T_1 \times \omega_1 = T_2 \times \omega_2 \qquad (2.11) $$
where W is the work. This is the same result as Equation (2.10). Consequently, as the angular speed is reduced, the torque is increased, and vice versa. This is
exactly what happens in an automobile gear box as well. In the first gear, the
output angular speed is reduced through higher gear ratios, creating larger out-
put torques that can start a car moving. When the speed of the car increases,
we shift into second and third, etc., increasing the speed, but lowering the
output torque.
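A tiny sketch of Equation (2.10); the tooth counts, input speed, and input torque below are assumed values chosen only for illustration.

```python
# Gear ratio relationships from Eq. (2.10): omega1/omega2 = T2/T1 = N2/N1.
# The tooth counts, input speed, and input torque are assumed values.
N1, N2 = 20, 60      # teeth on the driver and driven gears
omega1 = 1200.0      # driver speed, rpm
T1 = 10.0            # driver torque, N*m

ratio = N2 / N1
omega2 = omega1 / ratio   # the driven gear turns more slowly...
T2 = T1 * ratio           # ...but delivers proportionally more torque

print(f"Gear ratio: {ratio:.1f}")
print(f"Driven gear: {omega2:.0f} rpm, {T2:.0f} N*m")
```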
Figure 2.14: A gear reduction system.
Although worm gears look somewhat different, they are kinematically
the same. They provide a large gear ratio for their size, but usually have more
friction as well. However, depending on their helix angle, they can be self-
locking, an important characteristic. So what is the helix angle? In fact, if you
look closely, the worm looks like a screw. A screw is nothing more than a
triangle, wrapped around a cylinder, as shown in Figure 2.15. The angle of the triangle at the tip is the helix angle. This determines how many threads are present in any given length (e.g., 20 threads per inch in a 1/4-20 screw). In
reality, when a screw is rotated, the nut moves up or down on this (inclined)
plane. A larger helix angle means that the nut moves faster, but it requires a
larger force to move up. Imagine that the angle is large, creating a steep incline.
What will happen to a box on a steep inclined plane if the force behind it is
removed? As shown in Figure 2.16, in the absence of friction large enough to
stop the motion, the box will slide down. With smaller helix angles the box
tends to stay and not slide. If you translate this concept into a screw, and if the
helix angle is small, the nut will not move down on a screw when the load is
removed, causing a self-locking mechanism. If the angle is steep, it is possible
that as the force is removed, the nut may automatically move down, making it
not self-locking. But which one is better? Imagine you use a jack to raise your
car by applying a torque to the handle. One common design for automobile
jacks is equivalent to a nut moving on a screw, raising or lowering the car. How
would you like it if the car you just raised came back down as soon as
you released the handle? Here we want to make sure the nut on the jack is
self-locking. In other applications such as in a hand-press we want to make
sure that as soon as the handle is released the press returns without external
effort by the operator. This will increase the efficiency of the system. Here,
a not-self-locking screw is better. Consequently, based on our needs, we can
design the screw to be self-locking or non-self-locking.
For the guitar, we obviously want the tensioner to be self-locking; other-
wise, as soon as the tuning keys are released the tension will be lost. Although
other ways exist to do this, worm gears are commonly used because they can
easily be self-locking, even at large tensions.
Figure 2.15: An inclined plane wrapped around a cylinder creates a screw.
Figure 2.16: A box moving up an inclined plane.
2.4.2 SPEAKING AND VOCAL CORDS
Humans speak and produce sounds by expelling (or modulating) air from their lungs through
vocal cords (also called vocal folds) situated in our larynxes. The air causes the cords (actually
folds) to vibrate. As in a guitar or violin, the sound resonates within the larynx, sometimes with
additional harmonics, creating an audible and unique voice. e shape of the cords, their thickness
and size, and the shape of the larynx create each person’s unique voice as well as the different
frequencies of each sound. For example, the average fundamental frequency is about 210 Hz for
women, about 125 Hz for men, and more than 300 Hz for children. By changing tension in the
cords, humans can alter their frequency and produce different sounds, pronounce different letters,
and sing.
The production of sound is not the only characteristic that follows the engineering principles we have already discussed. We can also see the effect of similar variables in the system. For
example, generally adult males’ voices are lower-pitched than those of women or children. As we
might expect, the male vocal cords are longer, ranging between 1.75 and 2.5 cm (0.75 to 1 inch)
versus 1.25 and 1.75 cm (0.5 to 0.75 inch) for women. We have already seen that longer strings
(and cantilevered beams) have lower natural frequencies and produce lower-pitched sounds than
shorter strings. As a child grows and his or her cords elongate, his or her voice changes too.
Figure 2.17 shows typical vocal folds.
Figure 2.17: The vocal cords and folds.
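As a back-of-the-envelope illustration of the length effect: treating a vocal fold as an ideal stretched string is a big simplification, and the tension and mass values below are invented only to show how a shorter "string" at the same tension produces a higher pitch.

```python
import math

def string_fundamental_hz(length_m, tension_N, mass_per_length_kg_per_m):
    """Fundamental frequency of an ideal stretched string: f = (1/(2L)) * sqrt(T/mu)."""
    return math.sqrt(tension_N / mass_per_length_kg_per_m) / (2.0 * length_m)

tension = 0.5   # N (assumed, illustrative)
mu = 0.016      # kg/m (assumed, illustrative)
print(string_fundamental_hz(0.0225, tension, mu))  # ~124 Hz for a 2.25 cm (male-length) fold
print(string_fundamental_hz(0.0150, tension, mu))  # ~186 Hz for a 1.5 cm (female-length) fold
```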
2.4.3 TUNING TO A RADIO OR TV CHANNEL
The tuning of a radio or TV to different channels or stations is in fact related to natural frequencies as well, although in this case it relates to the natural frequency of an electronic circuit's output. There are certain circuits that are specifically designed so that their output voltage oscillates, e.g., like a sine wave. Although most circuits are too complicated to discuss here, we can consider a
very simplified setup to study the fundamentals. This will teach us how a tuning system works. So first let's talk about this, then about tuning.
Imagine a very simple electric circuit composed of a coil or inductor L and a capacitor C
as in Figure 2.18. As was discussed in Chapter 1, since the coil is a conductor, when a current
passes through it, a magnetic field is developed. Conversely, when the coil is placed within a
varying magnetic field, a current is induced in it. These are called electromotive force (emf) and back-emf. The important thing to realize is that these can happen as a result of each other.
Figure 2.18: An R-L-C circuit and its response. The voltage oscillates at the natural frequency rate
of the circuit.
A capacitor is another electronic element that can store electrical energy and discharge it back into the circuit when the voltage of the load is less than the voltage across the capacitor. Theoretically, if there is absolutely no loss of energy in the circuit due to electrical resistance, when the circuit is initially energized, the flow of the electrons in the circuit will cause the coil to generate a back-emf voltage, which charges the capacitor. When the voltage in the coil becomes less than that of the capacitor, the capacitor discharges its energy back into the coil, causing the same back-emf. This repeats forever, creating an oscillating voltage. In real life, every electrical element has some resistance R, so every time the current goes through, part of the energy is converted to thermal energy, and consequently, the oscillation of the voltage in the circuit dies out very quickly. However, just like a mechanical device, such as a grandfather clock where the energy loss is compensated by the energy stored in a weight or a spring, the energy stored in a battery or similar device can
compensate for the lost energy in the circuit. Therefore, we can expect that the R-L-C system
may continue to oscillate indefinitely as long as we have a source of external energy to compensate
for the loss. In most systems a crystal is used for this purpose, but the principles stay the same.
Figure 2.18 shows the schematic of an R-L-C circuit, a simple circuit consisting of a coil and a
fixed capacitor put together for testing, and the output of the system as seen on an oscilloscope
when an impulse signal is applied to the circuit. Notice how quickly it dies out due to the electrical
resistance in the wires.
The frequency at which the voltage in the system oscillates is a function of the capacitance of the capacitor C (a measure of the charge-storing capability of the capacitor) and the inductance of the coil L as:

f = \frac{1}{2\pi}\sqrt{\frac{1}{LC}}.    (2.12)
Changing the value of L or C will change the oscillating frequency of the circuit. This is exactly what is done in manually tuning an old-style radio by a knob. Turning the tuning knob moves a set of plates within a capacitor relative to the counterpart fixed plates, changing the capacitance, i.e., how much charge can be stored between the plates (Figure 2.19). Although the same can be accomplished by other means (such as the use of a quartz crystal), the basic idea is to create an oscillating voltage in a circuit.
Figure 2.19: A schematic drawing of a variable capacitor.
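A quick numeric check of Equation (2.12); the component values are chosen only as plausible orders of magnitude for an AM receiver, not taken from the text.

```python
import math

def resonant_frequency_hz(L_henry, C_farad):
    """Natural (resonant) frequency of an ideal L-C circuit, Eq. (2.12): f = 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

def capacitance_for_station(L_henry, f_hz):
    """Capacitance needed so the circuit resonates at a given carrier frequency."""
    return 1.0 / (L_henry * (2.0 * math.pi * f_hz) ** 2)

L = 240e-6  # assumed 240 microhenry coil
print(resonant_frequency_hz(L, 420e-12))   # ~0.5 MHz with the variable capacitor fully meshed
print(capacitance_for_station(L, 1.0e6))   # ~105 pF needed to tune the circuit to 1000 kHz
```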
How is this related to tuning to a radio or TV station? To see this, imagine a pendulum
oscillating at a particular rate, in front of which is a plane with a hole, also oscillating as in Fig-
ure 2.20. An observer is looking through the hole trying to see the pendulum. If the rate of the
movements of the pendulum and the plane are exactly the same and they start at the same time,
the observer will continually see the pendulum through the hole. However, if the rates are not the
same, even if they start exactly at the same time, the observer will actually not see the pendulum,
except by chance when they happen to be at the same location at the same time. When the two
have the same frequency of oscillation, we can say that they are tuned (synchronized) with each
other, moving at the same rate. Stay tuned, as we are not there yet. Now we need to see how
different broadcasts are coded for distinction.
Figure 2.20: An observer behind a plane with a hole may or may not see the pendulum moving
depending on whether or not the pendulum and the plane have the same frequency of motion.
There are hundreds of stations that broadcast radio and television programs. Without some unique feature to distinguish one signal (or station) from another, every receiver would capture the combined broadcasts from all the stations at once, obviously a completely useless system. This would happen if the information broadcast by any station (radio, TV, the police, etc.) consisted of only the intended signal (for example, the music) without a distinguishing signature to differentiate it from another. However, to create multiple stations with multiple channels of broadcast, each with a unique signature, the signal is modulated with a carrier signal before broadcast, either based on amplitude modulation (AM) or frequency modulation (FM). We will not get into too much detail about this, but let's see what this means.
Imagine that f(t) (some function of time, which in general can be anything, including music, video, or any other signal) is the signal that is to be broadcast. Figure 2.21 shows a simple sinusoidal function f1(t) = sin(t) as an example. Now consider another function f2(t) = 0.5 + 0.5 cos(50t), as shown in Figure 2.22a (a cosine that oscillates between 0 and 1 instead). The frequency of this signal is 50 times as large as that of the sine function of Figure 2.21. Similarly, Figure 2.22b shows a similar signal, but at a frequency of 100 instead of 50. Notice how the two signals look the same, but one is faster, at a higher oscillation frequency.
Modulating (combining, in this case multiplying) the two signals together will result in a signal that has the overall shape of the lower frequency signal f1(t), but with the higher
Figure 2.21: A simple sine function signal of f(t) = sin(t).
(a) f(t) = 0.5 + 0.5 cos(50t)   (b) f(t) = 0.5 + 0.5 cos(100t)
Figure 2.22: A higher frequency carrier signal at the frequencies of 50 and 100 cycles per second.
frequencies of the second function f2(t). Figure 2.23 shows the result of modulating these functions at the different frequencies of 50 and 100 as:
F(t) = f_1(t) \times f_2(t) = \sin(t) \times [0.5 + 0.5\cos(50t)]

and

F(t) = f_1(t) \times f_2(t) = \sin(t) \times [0.5 + 0.5\cos(100t)].
What is interesting is that the same can be done at any other frequency, all resulting in the same overall shape of f1(t), but at different frequencies.
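The modulation the figures illustrate is just a pointwise multiplication, so it is easy to reproduce numerically; here is a sketch using NumPy with the same toy signals as the text (plotting would reproduce Figures 2.21-2.23).

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 5000)

f1 = np.sin(t)                         # the "message" signal, f1(t) = sin(t)
f2_50 = 0.5 + 0.5 * np.cos(50 * t)     # carrier at frequency 50
f2_100 = 0.5 + 0.5 * np.cos(100 * t)   # carrier at frequency 100

F_50 = f1 * f2_50     # modulated signal: keeps the shape of f1 but oscillates at the carrier rate
F_100 = f1 * f2_100

# To see the result (requires matplotlib):
# import matplotlib.pyplot as plt
# plt.plot(t, F_50); plt.plot(t, F_100); plt.show()
```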
(a) F(t) = sin(t) × [0.5 + 0.5 cos(50t)]   (b) F(t) = sin(t) × [0.5 + 0.5 cos(100t)]
Figure 2.23: Modulated signals of a simple sine function and higher frequency cosine functions result in the same basic overall shape of the sine function, but at the higher frequency of the carrier signal.
Figures 2.24 and 2.25 show another signal and its modulated signals at two different fre-
quencies. Similar to the previous case, the original shape of the signal is preserved but when
modulated, the signal contains the higher frequencies of the carrier signals. Figure 2.26 shows
two additional signals and their modulated versions for comparison.
Then how is this used as a unique signature for each broadcasting station? Imagine that each station is granted a particular frequency that it uses as its carrier frequency, used to modulate its particular signal. Whether music, dialogue, pictures and video, or any other data, the signal is modulated with the station's signature frequency. Therefore, the broadcast signal will have the basic information in it, but is broadcast with a carrier frequency unique to the station.
Now look back at Figure 2.20. As with the pendulum and the plane with a hole, where they
are either in tune or out of tune, your receiver (TV, radio, or other device) may be in tune with
a particular signal frequency or out of tune with it. If it is in tune with a signal, it will “see” the
signal continuously and will therefore receive it. Since it is out of tune with all other signals it will
not “see” any of them. All it takes for your receiver to tune in is to have the same frequency as the
carrier frequency of the signal (or station). In other words, if the receiver has a frequency similar
to the carrier frequency of the broadcast signal, it will receive it; if not, it will not see the broadcast
signal. This is done by an oscillating circuit such as in Figure 2.18 and Equation (2.12). A variable
capacitor in an R-L-C or similar circuit changes the natural frequency of the receiver, matching it
with the frequency of the carrier signal of the particular station in which one is interested. When
the two are in tune, the receiver will receive only that signal. A low-pass filter eliminates the high
carrier frequency (called de-modulation), ending up with the original signal that is amplified and
played back as music, dialogue, video, etc.
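As a rough sketch of the de-modulation step, a simple moving average can stand in for the receiver's low-pass filter circuit; this is only an illustrative substitute, not the circuit the text describes, and it reuses the toy signals from above.

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 5000)
F = np.sin(t) * (0.5 + 0.5 * np.cos(50 * t))   # received, modulated signal

# Average over roughly one carrier period so the fast cos(50t) term cancels out.
samples_per_carrier_period = len(t) // 50
kernel = np.ones(samples_per_carrier_period) / samples_per_carrier_period
recovered = np.convolve(F, kernel, mode="same")

# 'recovered' is approximately 0.5*sin(t): the original message, scaled down,
# with the carrier removed. Amplifying it restores the audible signal.
```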
Figure 2.24: The signal f(t) = sin(t) + (1/3) sin(3t).
Figure 2.25: The result of modulating the signal of Figure 2.24 with a carrier signal at two different frequencies, as F(t) = [sin(t) + sin(3t)] × [0.5 + 0.5 cos(50t)] and F(t) = [sin(t) + sin(3t)] × [0.5 + 0.5 cos(100t)].
Frequency modulation is somewhat more complicated. Instead of modulating the ampli-
tude of the signal with the frequency of the carrier signal, the frequency of the carrier signal is
changed based on the amplitude. As the amplitude of the signal changes, the amplitude of the
carrier signal remains the same, but its frequency changes within a certain range. The rest is the same. We will not discuss the details of FM here.
Figure 2.26: Two additional examples of signals f(t) and their modulated signals F(t) = f(t) × [0.5 + 0.5 cos(100t)]. Notice how the shapes of the signals are preserved at the frequency of the carrier signal.
Realistically, the modulating frequencies are very high: hundreds of thousands of hertz (the kHz range) for AM, and millions of hertz (the MHz range) for FM.
Amplitude modulation is prone to noise but has a better range, while FM is less prone to picking up noise but does not travel very far. This is why most available out-of-town stations, where cities are not close by, are AM, not FM.
2.4.4 HEARING
The hearing mechanism in humans (and most animals) is also related to natural frequencies. To
see this relationship, let us first examine the human ear and its parts, then we will discuss the
mechanism of hearing.
The human ear has three distinct parts: the outer ear, the middle ear, and the inner ear, as
schematically shown in Figure 2.27. Each section has a different function.
Figure 2.27: Schematic drawing of the human auditory system.
The outer ear has the following roles:
1. The pinna is a distinct feature of the human face, giving it a certain beauty and making the human face look the way we have come to know and love.
2. It is sensitive to touch, temperature, and other stimuli.
3. It acts as a radiator to dissipate excess heat when needed. There are a lot of blood vessels in this organ. When the body needs to dissipate heat, blood flow to the outer ear increases. This is why the ears turn red when a person is hot or nervous.
4. It collects sound. The sound we hear is the result of the reaction of our hearing mechanism to the vibration of molecules of air. The pinna increases the ability of the system to sense these vibrations.
5. It helps in determining the direction of the sound. Humans hear in stereo; this means they can sense the approximate location of the source of sound in space. This is because the distance from a source of sound to each ear is slightly different. The very small difference in the time that it takes for the sound to reach each ear is detected by the brain, helping it
to determine the location of the sound. However, we can also determine if the source is in front of or behind us, even with closed eyes, because of the unique shape of the outer ear. Since the pinna's shape differs toward the front and the rear, the sound it collects differs, helping the brain detect whether the source is in front of us or behind us.
Sound energy (a mechanical type of energy) travels through the ear canal (also called the auditory canal) to the tympanic membrane (ear drum). The tympanic membrane is a thin skin layer that is vibrated by sounds ranging in frequency from about 20 Hz to about 20,000 Hz, an amazing range. This is why we can hear sounds within this approximate range. We cannot hear sounds with frequencies above this range (called ultrasound) or below it. Other animals can. Dogs, bats, and many rodents can hear frequencies far above this range. Can you guess why? Primarily, it is the ability of their (smaller and therefore higher natural-frequency) tympanic membranes to oscillate at higher frequencies that enables them to hear those frequencies.
This characteristic is used to drive rodents away from houses and farms without affecting humans. Since humans do not hear ultrasonic vibrations, a device that is plugged into an electrical outlet or pushed into the ground creates loud ultrasonic bursts of sound, annoying rodents, bats, and ground squirrels and driving them away without humans even hearing it. If lower frequencies of ultrasound were used, domestic animals might also hear the sound and be annoyed.
The vibrations of the tympanic membrane are transferred to the middle ear. The middle ear consists of three bones (ossicles) called the hammer, anvil, and stirrup. These bones are held together by tiny muscles. They bridge the tympanic membrane on one side and the cochlea of the inner ear on the other, creating a physical connection between the outer ear and the inner ear. The stirrup bone touches the cochlea at the oval window, where the vibrations of the stirrup are transferred to the liquid within the cochlea. A narrow tube called the Eustachian tube connects the nasopharynx (throat cavity) to the middle ear. The middle ear has four distinct functions as well:
1. It helps in isolating the inner ear from the tympanic membrane. There are many cases where
the tympanic membrane may be damaged by external factors such as extreme sounds or
intentional operations (e.g., when a doctor inserts a plug into the tympanic membrane to
help young children with drainage of the middle ear when it is infected). If the inner ear
were directly attached to the tympanic membrane, any physical damage to the tympanic
membrane, whether intentional or accidental, would permanently damage the inner ear
resulting in permanent hearing loss. But with this arrangement, where the middle ear acts
as a safety device, damage to the outer ear will not result in a permanent loss of hearing.
2. The Eustachian tube helps with the equalization of air pressure between the outer and middle ear. Without this equalization, not only would the middle ear ache terribly as the outside air pressure changes, but the pressure difference between the outer and middle ear would also prevent us from hearing sounds clearly. By swallowing, we force the Eustachian tube to open, therefore equalizing the pressure in the middle ear. If an individual with a cold or flu or
allergies cannot equalize the air pressure due to inflammation of the Eustachian tube, he or
she may have pain and may not hear well. Physicians may even suggest that the individual
avoid flying.
3. The specific arrangement of the three bones allows the middle ear to amplify sound vibrations. This amplification helps in hearing quiet sounds near the lower threshold of hearing.
4. The three bones of the middle ear transfer the vibrations to the inner ear. In the presence of very loud sounds, the vibration of these bones becomes very large too. As a safety device, and to prevent the inner ear from permanent damage, the tiny muscles connected to the hammer, anvil, and stirrup bones contract, reducing the amplitude of the vibrations, the severity of the sound, and hearing damage. If you have ever experienced a heavy feeling in your ears when exposed to loud noises (such as a loud concert or gunshot), it is because the middle ear muscles were contracted. This in itself is an indication of damage to the inner ear, even if not as severe as it might have been without this safety feature.
Consequently, sound vibrations are transferred to the inner ear.
The inner ear consists of the cochlea and the semicircular canals. We will discuss the func-
tion of the semicircular canals shortly, but they are not part of the hearing mechanism.
The cochlea is a spiral canal of about 2 5/8 turns, with a complicated structure whose individual functions are not yet fully understood. Within it are three passages, two of which are separated by a membrane called the basilar membrane. Unlike the cochlea, the basilar membrane is thicker and narrower at the base of the cochlea near the oval window and thinner and wider at the apex (end). As the fluid inside the cochlea is vibrated by the ossicles (hammer, anvil, and stirrup), the basilar membrane vibrates with it. However, since the input to the cochlea arrives only through the oval window as a single signal, the cochlea has to decompose and codify the sound into different frequencies that can be recognized by the brain. This is the job of the basilar membrane.
Since the width and the thickness of the basilar membrane vary throughout its length, each location on it has a particular natural frequency. As the sound vibrations go through the cochlea, one location on the basilar membrane vibrates in sync with the particular frequency of
the sound. It is as if each location is tuned to vibrate at one frequency, higher frequencies at the
base where the basilar membrane is thicker and narrower (resulting in a higher natural frequency)
and lower frequencies toward the apex where the membrane is thinner and wider (resulting in
lower natural frequencies). As a result, although only one set of vibrations enters the inner ear,
the basilar membrane “decomposes” the sound into individual frequencies at each location.
Along the membrane are rows of inner and outer hair cells (which are not really hair, but
very thin cell-structures) that are extremely sensitive to motion. The outer hair cells (numbering about 12,000) help with tuning the basilar membrane for its decomposition task. The inner cells,
numbering about 3,500 in a single row, detect the vibrations of the basilar membrane and send
a signal to the brain through the auditory nerve, where the sound is heard and recognized (the
mechanism by which the brain interprets and understands the sound and the meaning of sounds
is beyond the scope of this book).
All sounds can be decomposed into a collection of simple sine and cosine sounds at particular frequencies (called the Fourier series). Therefore, the collective vibrations of the individual hair-cells within the cochlea enable us to hear and understand the sounds around us. If a hair-cell does not send the proper signal, we will not hear the corresponding frequency. This is why people who lose their hearing ability will have a difficult time understanding sounds even if they are amplified by a hearing aid; they do not hear the sounds correctly.
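The decomposition idea can be illustrated numerically: an FFT plays a role loosely analogous to the basilar membrane, splitting a two-tone signal (shaped like the one in Figure 2.24, but scaled to 5 Hz and 15 Hz; the sample rate and tones are arbitrary choices) into its individual frequencies.

```python
import numpy as np

sample_rate = 1000                       # samples per second (illustrative)
t = np.arange(0, 1.0, 1.0 / sample_rate)
signal = np.sin(2 * np.pi * 5 * t) + (1.0 / 3.0) * np.sin(2 * np.pi * 15 * t)

spectrum = np.abs(np.fft.rfft(signal)) / len(t) * 2   # amplitude of each frequency component
freqs = np.fft.rfftfreq(len(t), d=1.0 / sample_rate)

# The two dominant components sit at 5 Hz and 15 Hz, with amplitudes ~1 and ~1/3,
# loosely analogous to two different spots on the basilar membrane responding.
for f, a in zip(freqs, spectrum):
    if a > 0.1:
        print(f, round(a, 2))
```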
As we age, we naturally lose our ability to hear higher frequency sounds. However, expo-
sure to loud noises can also eventually damage hair-cells permanently, and consequently, hearing
ability.
2.4.5 WALKING AND RUNNING, HEARTS AND LUNGS
Have you noticed that your arms and legs are in fact very similar to pendulums? Granted, each arm or leg has two oscillating portions, the upper part and the lower part. This is called a double pendulum. One fortunate thing is that the motions of both arms and legs are relatively limited; they move less than 150°. Otherwise, their motions would be more complicated. Nonetheless, each arm or leg functions as a pendulum, and like pendulums, they have a natural frequency. Equation (2.6) for Figure 2.5 is the natural frequency of a pendulum like in a grandfather's clock, with the mass concentrated at one point. The arms and the legs are more like bars with distributed
mass. The natural frequency of a bar can be expressed as:

f = \frac{1}{2\pi}\sqrt{\frac{mgr}{I_0}},    (2.13)
where I_0 is the second mass-moment of inertia (see Chapter 5), m is the mass of the bar, g is the acceleration of gravity, and r is the distance from the pivot point to the center of mass of the bar. Both the length of the arm or leg (as in r) and the mass are important factors, as is the mass-moment of inertia, which is a measure of the distribution of mass. There are relatively simple ways of measuring the mass of the arm or leg and calculating its mass-moment of inertia even for living humans. Therefore, we should be able to calculate what we need. However, what is of interest to us is not the calculation, but understanding natural frequency and its role in our everyday life.
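For a concrete (if oversimplified) feel for Equation (2.13), one can treat the leg as a uniform bar pivoted at the hip; the mass and length below are assumed values chosen only for illustration, not measurements.

```python
import math

def bar_natural_frequency_hz(mass_kg, length_m):
    """Eq. (2.13) for a uniform bar swinging about one end.

    For a uniform bar about the pivot: I0 = m*L^2/3 and r = L/2.
    """
    g = 9.81
    I0 = mass_kg * length_m ** 2 / 3.0
    r = length_m / 2.0
    return math.sqrt(mass_kg * g * r / I0) / (2.0 * math.pi)

# Roughly leg-like numbers (assumed): 12 kg, 0.9 m long.
print(bar_natural_frequency_hz(12.0, 0.9))   # ~0.64 Hz, i.e., one easy swing about every 1.5 s
```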
A person may be able to walk for hours without getting tired. But if he or she is walking briskly or carrying a weight in his or her hands while walking, even if it is only a couple of pounds, it becomes much harder to walk more than a short time before the person feels tired. Why? When walking, we tend to move our legs and arms at about their natural frequency rates. Our arms move in the opposite direction of our legs in order to help us with balance as we move forward. At these rates, it takes little energy to move the legs and arms, so a person can do it for a long time without tiring much. However, moving at a brisk rate will change this situation; now you are forcing the arms and legs to move at rates different from their natural frequencies. Therefore, much more energy
Figure 2.28: The Tacoma Narrows Bridge (Prelinger Archives).
is needed to do this. If you are trying to exercise or burn calories, this is the right thing to do. If you want to walk longer, perhaps to have a nice walk along the beach, then brisk walking is not the right thing. Similarly, if you carry a small weight in your hand as you walk, it is not the weight itself that causes you to burn calories or get tired; it is how the weight adds to the moment of inertia and how the natural frequency of your arm changes. Therefore, even if you move at a normal rate, you still burn additional calories, because at your normal rate of walking you are no longer moving your arms at their previous natural frequency. Have you noticed how physical activity experts prescribe exactly these things, walking briskly or carrying a light weight in your hands as you walk? This is why.
It is actually very similar for the lungs and the heart. We breathe at the natural frequency of
our lungs; therefore little energy is needed to do so. But now try to breathe at a different rate, and
you get tired quickly. The same is true with a heart if it beats at a rate above the natural frequency
rate. By the way, dogs pant at a high rate compared to, for example, humans. Can you tell why?
An adult human heart beats at about 70 times per minute. However, for infants, it is about
120 and for young kids, it is about 90 beats per minute. Why? Since the mass of an infant’s heart is
smaller, its natural frequency is larger (as we have seen with other systems). As the heart grows, the
rate decreases. By the way, in a 65-year lifetime, the heart beats at least 70 × 60 × 24 × 365 × 65 = 2.4 × 10^9 times. This is 2.4 billion times. Not too bad.
Example 2.4 Tacoma Narrows Bridge
In 1940, the brand new Tacoma Narrows Bridge in Puget Sound, Washington, collapsed due to a phenomenon called wind-induced flutter. The fundamental reason for the violent movements was that the girders (deep I-beams used in the construction) moved as a reaction to the high winds in the narrows because the sides of the bridge were closed and the wind could not freely move through them. Since the frequency of the variations in the wind happened to be close to the natural frequency of the bridge, the motions became larger and the bridge swayed more until it collapsed. Fortunately, since this swaying had occurred since the time of construction (but never to this extent), the bridge was closed and there was no traffic on it. A new bridge was dedicated in 1950. That bridge still stands.
2.5 BIBLIOGRAPHY
[1] Dimarogonas, Andrew: Vibration for Engineers, 2nd ed., Prentice-Hall, NJ, 1996.
CHAPTER 3
Coriolis Acceleration and its Effects
Bikes, Weather Systems, Airplanes, and Robots
3.1 INTRODUCTION
Oregon, Washington, and California on the West Coast of the U.S. are next to an ocean, as are their East Coast counterparts, including New York, Florida, the Carolinas, and the New England states. However, their weather patterns are significantly different. For example, except for the high altitudes of the mountainous areas of California, the rest of the state does not get any snow in the winter and is relatively dry in the summer, while New York gets a lot of snow in the winter and is very humid and warm in the summer. During the month of February 2015, for instance, the eastern half of the U.S. experienced record low temperatures and record amounts of snow, even as far south as Florida, which plunged to lows in the 20s (degrees F). At the same time, the western states had a record dry season and high temperatures. The cities in the east were in single-digit temperatures even without the effects of wind chill, while California was enjoying temperatures in the 70s and 80s F.
Looking at weather maps, you could clearly see how the so-called Siberian Express air mass, moving over the North Pole and traveling south over North America, veered eastward, inundating the eastern part of the U.S. Why? Weather patterns are affected, among other things, by altitude, latitude, longitude, and proximity to large bodies of water, but also by a phenomenon called Coriolis acceleration that pushes the air mass, the jet stream, and the prevailing winds toward the east. In this chapter, we will learn about Coriolis acceleration and also discuss gyroscopic effects and accelerations caused by rotating frames, which explain why a bicycle does not fall while being ridden, why you will fall if you turn the handle of a bicycle or a scooter rather than leaning right or left, and why an airplane may rotate unexpectedly when in flight.
To learn about these concepts and understand the basics, we will need to first make a few definitions and go over some introductory issues. After that, we will return to learn how Coriolis acceleration affects our everyday lives. But along the way, we will learn a lot about mechanics and how these concepts are part of our lives too.
3.2 DEFINITIONS
Since Coriolis acceleration, like all other accelerations, is a vector, let’s start by defining vectors.
3.2.1 VECTORS
A vector is a mathematical expression possessing a magnitude, a direction (also called line of action), and a sense. Values that are not vectors are scalars. For example, a quantity such as $10 is a scalar. It has no direction; it is only a magnitude. Bags of fruit are also scalars. Time is a scalar too; it has no direction. So is the speed of travel; it only indicates how fast one moves. However, velocity is a vector. Not only does it specify a magnitude (the speed) of travel, it also indicates the direction (and sense) of travel. For example, the line of action of the velocity vector may be 30° up from horizontal. Therefore, we know in what direction the object is moving. However, this does not yet specify the sense of travel, whether it is moving away from us or getting closer to us. Therefore, we also specify the sense in which the vector acts through an arrowhead. Figure 3.1 shows a vector \vec{P} with its length representing the magnitude, its direction (30° up from horizontal), and its sense (going to the right). With the same magnitude and line of action, if the sense is reversed, the object moves in the opposite direction and the location of the object will be completely different as time goes by.
Figure 3.1: A vector with its magnitude, direction, and sense.
Force, acceleration, and distance travelled are also vectors. For example, indicating that a car moved 10 ft does not indicate in what direction it moved. To completely specify the motion, it is necessary to also specify the direction and the sense of the motion. Force is a vector too, because if you pull an object it gets closer to you, and if you push the object it gets away from you. So force has a direction and a sense, and consequently, it is a vector. Notice how a vector \vec{P} is specified with an arrow above it. There are also other common notations used for indicating vectors, such as bold letters (P) or barred letters.
Unlike scalars, which can be simply added, subtracted, or multiplied, vector addition, subtraction, and multiplication require more and may yield very different results. For example, 5 apples + 3 apples = 8 apples, but a 5-lb force plus a 3-lb force may or may not be equal to 8 lb, depending on the directions and senses of the two forces. Figure 3.2a shows how a parallelogram is used to add vectors. As shown, the summation of the two vectors (also called the resultant) is equal to the diagonal of the parallelogram that is formed by the two vectors. The resultants of the same two vectors \vec{V}_1 and \vec{V}_2 are different when the directions and senses of the vectors change. As you can see in Figure 3.2b, adding two vectors with equal magnitudes acting on the same line of action will result in a vector twice as large if they have the same sense, but zero magnitude if they have opposite senses, because they will cancel each other.
Figure 3.2: Vector additions.
Vectors are fundamentally important in engineering. Many subjects of study use vector
notations and vector analysis for proper results. Examples abound, from the forces acting on
the wings of an airplane in flight to the forces acting on buildings, and from hydrodynamics to
space flight, motors, and robotics. For example, the force generated by a jet engine is the same
as the resultant of the drag and lift forces that keep an airplane aloft. The forces shown in Figure 3.2a are exactly applicable to the way a ship is pulled by tugboats. The forces of the tugboats will pull the ship in the direction of the resultant force.
3.2.2 VECTOR MULTIPLICATION
Vectors can be multiplied, but not like scalars. For example, for scalars, 3 × 4 = 12. But for vectors, the result of multiplication is not the same. There are two types of vector multiplication, called Dot Product (\vec{V}_1 \cdot \vec{V}_2 = R) and Cross Product (\vec{V}_1 \times \vec{V}_2 = \vec{R}). The result of a dot product is a scalar, a simple number. But the result of a cross product is another vector (note the difference in notations). At least for our discussion here, we need to learn about cross products.
The cross product of two vectors \vec{V}_1 and \vec{V}_2 is another vector (read it as V1 cross V2):

\vec{R} = \vec{V}_1 \times \vec{V}_2.    (3.1)

The direction of vector \vec{R} is perpendicular to the plane formed by the two vectors \vec{V}_1 and \vec{V}_2, and its sense follows the right-hand-rule. The right-hand-rule means that if you curl the fingers of your right hand in the direction of going from \vec{V}_1 to \vec{V}_2, your thumb will indicate the sense. The right-hand-rule convention (and this is a convention only) is a very common and useful indicator, used in many different situations. Figure 3.3 shows the result of the cross product of two sample vectors. Notice the direction and the sense of the resultant vector. Also note that this is a three-dimensional or spatial figure, not planar, so you must use your imagination in seeing the vectors in three-dimensional space. Cross products will be used to explain many of the concepts related to accelerations, including Coriolis.
Figure 3.3: The cross product of two vectors.
For math-oriented minds, the magnitude of the dot product for simple cases is:

R = V_1 V_2 \cos\theta,    (3.2)

and the magnitude of the vector representing a cross product in simple cases is:

R = V_1 V_2 \sin\theta,    (3.3)

where V1 and V2 are the magnitudes of the two vectors and θ is the angle between them. One important result we can derive from Equation (3.3) is that since sin θ = 0 when θ = 0, when two vectors are parallel (and therefore the angle between them is 0), their cross product will be 0. Similarly, when two vectors are perpendicular to each other, their dot product is zero.
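A quick numeric check of these statements; the example vectors are made up for illustration.

```python
import numpy as np

V1 = np.array([3.0, 0.0, 0.0])        # 3 units along x
V2 = np.array([0.0, 2.0, 0.0])        # 2 units along y, so theta = 90 degrees

print(np.dot(V1, V2))        # 0.0        -> perpendicular vectors: dot product is zero
print(np.cross(V1, V2))      # [0. 0. 6.] -> a vector along +z (right-hand rule), magnitude 3*2*sin(90)
print(np.cross(V1, 2 * V1))  # [0. 0. 0.] -> parallel vectors: cross product is zero
```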
In practice, both dot and cross products are important and very useful. For example, imagine that you push a box with force \vec{F} for a distance \vec{d}, as in Figure 3.4. As mentioned earlier, both force and distance (we usually refer to distance as displacement) are vectors; they have a magnitude, but also a direction and a sense. The energy required to move the object is called work. Work, like energy, is a scalar; it has a magnitude but no direction. Obviously, the larger the force or distance, the more energy is required to move the object. To calculate the work needed to move the object, we can take the dot product of the two vectors \vec{F} and \vec{d}, which yields a scalar, as expected. Therefore:

W = \vec{d} \cdot \vec{F} = dF\cos\theta.
Figure 3.4: The work or energy needed to move an object is the dot product of the force and the
displacement.
As mentioned earlier, when two vectors are perpendicular to each other, their dot product is zero. The weight of the box of Figure 3.4 is a vector that is directed downward (due to gravity). As the box is pushed to the right by force \vec{F}, the weight does not do any work because it is perpendicular to the displacement; it only does work when the box moves in its own direction, downward (we refer to this as a change in potential energy).
Now imagine that you are tightening a bolt using a wrench as in Figure 3.5. Obviously, if you exert a larger force or if you use a longer wrench, increasing the distance of the force to the bolt, the bolt tightens more. The cross product of these two vectors \vec{F} and \vec{d} is another vector called moment or torque. This torque is what tightens the bolt; it is perpendicular to both vectors, and its magnitude is:

\left|\vec{T}\right| = \left|\vec{d} \times \vec{F}\right| = dF\sin\theta.
So, although in both cases the vectors involved are \vec{F} and \vec{d}, the result of multiplying them together can be drastically different.
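A small numeric contrast between the two products, using the scalar forms dF cos θ and dF sin θ with made-up numbers:

```python
import math

F = 10.0                     # newtons
d = 0.3                      # meters
theta = math.radians(90.0)   # force applied perpendicular to the wrench arm / displacement

work = d * F * math.cos(theta)     # ~0 J    -> a perpendicular force does essentially no work
torque = d * F * math.sin(theta)   # 3.0 N*m -> but it produces the maximum torque
print(work, torque)
```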
Can you figure out the direction of the torque vector? For Figure 3.5, it is into the page (as if an arrow is shot into the page). Notice how your thumb, with the curled fingers of your right hand going from \vec{d} to \vec{F}, points into the page. Most common bolts have right-hand threads
Figure 3.5: The torque caused by a force \vec{F} applied at a distance \vec{d}.
(when you rotate the bolt or a nut in the direction of your curled right-hand fingers, the bolt or the
nut moves forward in the direction of your thumb). Therefore, this torque moves the bolt forward,
thus tightening it. A left-hand threaded bolt will move backward. Consequently, turning it in the
same direction will loosen the bolt.
3.2.3 ROTATIONS
Now let’s see how a rotational motion is defined. If you imagine a plate rotating, the direction
of the rotation can be specified as clockwise (CW) or counterclockwise (CCW), depending on
which side you look at (see Figure 3.6). If you are not familiar with these terms, it is probably
because you have always had a digital clock. Nonetheless, we can also specify the rotational motion
of the plate as a vector perpendicular to the motion.
The vector is conventionally assumed to be directed in the direction of your thumb if your
right-hand fingers curl in the direction of rotation (right-hand-rule). erefore, looking straight
at a clock and curling your fingers in the direction of rotation (CW) will point your right thumb
in the direction toward the clock. e opposite rotation (CCW) will point your right thumb
outward away from the clock.
Looking at a bicycle too, when the tires rotate, the rotation of each one can be described by
a vector as shown in Figure 3.7. This vector (and its direction) is very important in understanding
why we can ride a bicycle without falling. Similarly, the rotations of the propellers of an airplane
or the turbine of a jet engine can be characterized by vectors in the same fashion.
3.2.4 ACCELERATION
Acceleration is the change in either the magnitude of velocity, its direction, or both. For example,
if you are currently going at a rate of 50 mph traveling south on a straight road, but your speed
increases to 60 mph, you have a positive acceleration in the same direction as your travel, as
Figure 3.6: Rotation of a plate in clockwise or counterclockwise directions.
Figure 3.7: A bicycle’s tire rotations can be specified by a vector.
indicated by the fact that your body is pushed back in the opposite direction of the acceleration.
Similarly, if you continue to travel at 50 mph but follow a turn in the road and change your
direction, you experience an acceleration as indicated by the fact that your body moves in the
opposite direction of the change (we will discuss the reaction of the body to acceleration later).
If your speed decreases from 50 mph to 40 mph, it creates an acceleration in the opposite
direction of your travel, slowing you down. Strictly speaking, this is not a negative acceleration,
but is called deceleration. And in fact, there is a difference between negative acceleration and
deceleration. Deceleration means you are slowing down. Negative acceleration means that the
acceleration vector is in the negative direction relative to a reference frame (or positive axis).
In other words, if the positive direction of an axis is to the right, and if the direction of the
acceleration is in the negative direction (to the left), then this vector is negative. It may still be an
acceleration (increasing the velocity to the left) or a deceleration (slowing down while still going
to the left). Deceleration is in the opposite direction of your velocity or direction of motion,
and therefore, it slows you down. Consequently, if your motion is in the negative direction and
you decelerate (slow down), your acceleration is in the opposite direction of motion (which is
the positive direction) and therefore, a positive acceleration. You should always look into the
direction of the acceleration vector in relation to a reference frame to decide if it is positive or
negative acceleration, as compared to whether your speed is increasing or decreasing to decide if
it is an acceleration or deceleration.
There are many different types of acceleration. For example, consider a point on a rotating plate as shown in Figure 3.8. At the instant shown, the point P travels exactly to the left, and therefore, its velocity is also pointed to the left. However, from experience we know that a little later it will end up at point P′, traveling down, with its velocity pointed down. Obviously, the direction of the velocity between these two points changes, even if the magnitude remains the same. Therefore, in addition to possibly changing in magnitude, the direction of the velocity vector has changed. This must have been caused by an acceleration too. This is called centripetal acceleration and is a function of the square of the angular velocity of the plate, ω². As we will discuss shortly, Coriolis acceleration is another type of acceleration, and together with all other accelerations that may exist, it constitutes the total acceleration of the object of interest.
Figure 3.8: The rotation of a plate and how the direction of the velocity of any point on it changes as
it rotates, causing centripetal acceleration.
3.2.5 REFERENCE FRAMES
Reference frames are used to describe the position, orientation, and motions of objects in a plane or in space, as depicted in Figure 3.9. In two dimensions (on a plane), we usually use two axes, x and y. In three dimensions (space), we use three axes, x, y, and z. For example, when in a plane, point P is at a distance of a from the x-axis and b from the y-axis. In three dimensions (space), a point Q is expressed by three values: a distance a from the x-axis and b from the y-axis for the projection of Q on the x-y plane, and c from the x-y plane. Similarly, we can define the orientation of an object relative to these axes. Motions can also be defined relative to these axes within the reference frame. The axes of frames are always mutually perpendicular to each other, and they follow the same right-hand-rule we saw earlier. Therefore, the z-axis will be in the direction of your thumb if you curl your fingers in the direction of going from the x-axis to the y-axis, or \vec{x} \times \vec{y} = \vec{z}.
Figure 3.9: Two- and three-dimensional reference frames.
We can also consider an extension to this, which helps us with the next sections as well. The two reference frames shown in Figure 3.9 are stationary; they are fixed and do not move, and therefore we refer to other things relative to them. However, it is possible to also have additional frames that move relative to the fixed reference frame. We call them moving frames. For example, imagine that we attach a frame x'-y'-z' to a bike at the hub of the front tire, as shown in Figure 3.10. As the bike moves, the location of the frame relative to the fixed frame x-y-z will change. However, unlike the location of the rider relative to x'-y'-z', which does not change, the position of any point P on the tire (for example, the valve stem) relative to x'-y'-z' does change. This distinction will play an important role in the next section.
3.2.6 ROTATING FRAMES
Figure 3.11 shows a wheel that is rotating about a shaft. As we discussed in Section 3.2.3, the rotation of the wheel can be described by a vector perpendicular to it. In this figure, a fixed reference frame x-y-z is attached to the center of the wheel (the z-axis is perpendicular to the plane of the wheel, indicating its direction of rotation), while x'-y'-z' is a frame attached to the wheel at point P. If you notice, the x'-y'-z' frame does not stay at one location when the wheel rotates; the frame rotates with the wheel. This is called a rotating frame.
Looking at the same wheel from above, we will see both the fixed frame and the rotating frame. When the wheel rotates, the rotating frame moves to new locations and its position and orientation (the directions of its axes) change. Figure 3.12 shows the wheel from above. You can also see how the same applies to a rotating bar. When the bar rotates, the frame x'-y'-z' attached to it rotates with the bar.
In fact, the same applies to the rotation of planet Earth and everything that moves with it. We can attach a fixed frame x-y to the center of the Earth as well as frames x'-y' elsewhere
Figure 3.10: A moving frame and its motion relative to a fixed frame.
Figure 3.11: A rotating frame.
(Figure 3.13). As the Earth rotates, the frames attached to it also rotate. This rotation is very slow, one revolution every 24 hours. However, since the average radius of the Earth is 3,960 miles (6,370 km), the speed of a point on the equator is 2π(3960)/24, or over 1,000 miles per hour (2π(6370)/24, or over 1,600 km/hr). Therefore, although the frame rotates slowly, its position changes vastly. Nonetheless, the frame is rotating, and this does matter when we talk about the weather.
Let's take this one step further, as it will shortly help us with our analysis. Suppose that a frame x1-y1-z1 is attached to a rotating bar, rotating relative to a fixed frame x-y-z. Now also assume that a second rotating bar is attached to the first bar, and a frame x2-y2-z2 is attached to this bar, as shown in Figure 3.14a. When both bars rotate relative to each other, not only do these frames rotate relative to the fixed frame, the second frame rotates relative to
Figure 3.12: Rotating frames attached to a wheel or a bar. As the wheel or the bar rotates, the position and orientation of the rotating frame change.
Figure 3.13: The Earth and a frame attached to it. The frame rotates with the rotation of the Earth.
the first one. The way we look at this in mechanics is to assume that you are located on the first bar (let's say there is a chair attached to this bar and you are sitting on it). Then you may not feel that frame x1-y1-z1 attached to your chair is rotating, but you will see that the second frame x2-y2-z2 is rotating relative to you. Therefore, there is motion between these frames relative to each other. In fact, this is what happens to us on Earth. Since we are attached to the Earth, we do not necessarily feel that we are rotating, but we see other objects (frames) move relative to us. However, an observer outside of planet Earth (e.g., in a spaceship or a satellite) will see us rotating and other objects moving relative to us. The same is true if you are sitting in an airplane
and someone walks in the aisle. Regardless of whether or not you feel the motion of the airplane
(which is moving very fast), you see the person is getting closer to you. However, you both move
relative to an outside frame (or object). Please note that although in Figure 3.14 both arms move
in the same plane, generally they may move in three dimensions. Figure 3.14b shows a
robot with its linkages moving relative to each other in three dimensions.
Figure 3.14: Motions of moving frames relative to a fixed frame and relative to each other.
Neither the motions nor the frames representing them necessarily have to be rotational. A movement void of any rotation is called a translation. For example, Figure 3.15 shows two simple
examples where in (a), a slider simply slides (translates) on a bar while a second bar, attached to
it, rotates. In case (b) the bar rotates while a slider slides over the bar. In contrast with the two-bar
system of Figure 3.14 where two bars rotate relative to each other, in this case a slider translates
while the bar rotates. Once again, this will be an important issue when we talk about Coriolis
acceleration and the weather.
In Figure 3.15 frames are attached to the bars and sliders. In case (a), frame x1-y1-z1 (the z-axis is perpendicular to the page, but not shown) is attached to the slider and translates with it, while x2-y2-z2, attached to the bar, rotates relative to it. In case (b), frame x1-y1-z1 is attached to the bar and rotates with it, while frame x2-y2-z2, attached to the slider, rotates with the bar but also slides (translates) with the slider relative to x1-y1-z1.
Although these two cases seem similar to each other, they are in fact very different. In case (a), the first frame x1-y1-z1 only translates, while in (b), frame x1-y1-z1 rotates.
Figure 3.15: Combined rotations and translation with a bar and a slider. Although these two systems
have similar components, the motions are very different.
It is therefore a rotating frame. Consequently, frame x2-y2-z2 also rotates with it. So, it is important to note that when a frame is rotating, and within it there are other motions (either translations or rotations), the rotation of the first frame will rotate the subsequent frames. This change in the direction of the velocity of the subsequent frames creates an acceleration component called Coriolis acceleration that does not exist when the frame does not rotate. Now that we have endured a long set of introductions, we are ready to look at this acceleration and see what it does.
3.3 CORIOLIS ACCELERATION
Coriolis acceleration is one of the components of the total acceleration of a particle or a rigid body. The total acceleration is the vector addition of all the changes in the magnitude and the direction of the velocity of the object, each caused by something different. However, the Coriolis acceleration is present when there is a velocity within a rotating frame. Therefore, the first requirement is that there must be a rotating frame, within which there is another motion. These two requirements must be present for the Coriolis acceleration to exist, and it is the result of the changes in the direction of the velocity of the second motion caused by the rotating frame. Otherwise, if the frame is not rotating, or if there is no motion present on the rotating frame (therefore no velocity), there will not be a Coriolis acceleration present.
Looking once again at Figure 3.15a, notice that the first frame is not rotating, and consequently, even though the bar is moving relative to it, there is no Coriolis acceleration, whereas in Figure 3.15b, since there is a rotating frame upon which there is a velocity \vec{u}, there will be Coriolis acceleration.
Coriolis acceleration, like all other components of acceleration of a body, is a vector with magnitude and direction. For mathematically oriented minds, the magnitude of the Coriolis acceleration is:

\vec{a}_c = 2\vec{\omega} \times \vec{u},    (3.4)

where \vec{a}_c is the Coriolis acceleration component, \vec{\omega} is the angular velocity vector of the rotating frame, and \vec{u} is the linear velocity of the frame that moves relative to it. As you may remember, \vec{\omega} is the representation of the rotation of an object, and it follows the right-hand-rule; if you curl the fingers of your right hand in the direction of rotation, your thumb will show the direction of the vector representing the rotation. Similarly, \vec{u} is the vector showing the direction of the linear (translational) motion of the slider. As we discussed in Section 3.2.2, the cross product is another vector whose direction also follows the right-hand-rule. Therefore, if the fingers of your right hand are curled in the direction of going from vector \vec{\omega} to vector \vec{u}, your thumb will be in the direction of the Coriolis acceleration. These vectors, shown in Figure 3.15b, are drawn again in Figure 3.16 with the corresponding directions of Coriolis acceleration when the direction of vector \vec{\omega} changes. Notice how the direction of Coriolis acceleration changes with a change in the direction of \vec{\omega}. The same will happen if the direction of \vec{u} changes. We will shortly see how this is an important factor in weather systems.
Figure 3.16: The direction of Coriolis acceleration.
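A numeric illustration of Equation (3.4); the angular velocity and sliding speed below are arbitrary values chosen only to show the directions involved.

```python
import numpy as np

omega = np.array([0.0, 0.0, 2.0])   # rotating frame: 2 rad/s about the +z axis
u = np.array([3.0, 0.0, 0.0])       # motion within the frame: 3 m/s along +x

a_coriolis = 2.0 * np.cross(omega, u)
print(a_coriolis)                    # [ 0. 12.  0.] -> 12 m/s^2 along +y

# Reversing the rotation flips the Coriolis acceleration, as Figure 3.16 suggests:
print(2.0 * np.cross(-omega, u))     # [ 0. -12.  0.]
```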
3.4 INERTIAL REACTION TO ACCELERATION
Imagine you are sitting in a car and the driver presses on the gas. What happens to your body as
the car accelerates? You probably have noticed that as the car accelerates forward, your body is
pushed back against the seat. Similarly, if the driver brakes, creating a deceleration (or a negative
acceleration, pointing in the opposite direction), your body will be thrown forward. This is why
when a car is in a head-on collision the passengers are thrown forward, and if not restrained by
either a seat belt or airbag, they can collide with the windshield and be severely injured.
Why is it like this? Because, as we will see in Chapter 5, the inertia of a body (its mass) tends to resist changes in movement; it does not want to go faster or to slow down. If it is not moving, it tends to stay that way. If it is moving at a particular speed, it tends to continue at that speed. The only reason it might be forced to move or change its speed is if a force or torque is applied to it. In that case, the body reacts to the acceleration in the opposite direction of the acceleration. If it is pushed forward, it tends to want to move backward. If it is pushed backward, it tends to want to go forward, resisting change in its motion.
In mechanics, we do not really write our equations like this; instead, we draw what is called a free-body diagram and a mass-acceleration (or inertial-reaction) diagram, and set them equal to each other in order to solve the problem. But this is beyond the scope of this book. We will just look at the reaction of the mass or inertia, which is to resist motion, always trying to move in the opposite direction of an acceleration. We can also see the same phenomenon when a mass, attached to a string, is rotated. In rotational motions, the so-called centripetal acceleration is always pointed toward the center of rotation. Therefore, the mass always tends to move away from the center (this is referred to as centrifugal force). If it were not for the string applying a force to it, the mass would fly away from the center. Therefore, the mass continues in a circle as long as the string applies a force to it. You may realize that this is also the same as what happens to fabrics in a washing machine during the spin cycle: water particles, free to move without much restraint, tend to move away from the center of rotation while the fabric is spun fast, separating from the fabric outwardly. The same is also true in other centrifugal devices that separate solid particles from a mixture, including blood samples and uranium concentrators used for enrichment.
In fact, this can also be applied to societies. Since each society (a family, community, city, country) has a "mass," it usually resists change unless something forces it, and even at that, the society reacts to it due to its inertia. Some societies have larger inertia (regardless of their size), some smaller inertia. The larger the inertia of a society, the larger the resistance to change. More traditional societies have larger inertia; they do not like to change their traditions as much because of the value they see in them. In many cases, in order for a society to accept changes, some large force (influence) is needed. These may be economic forces (such as periods of growth or depression), natural disasters, wars, great leaders, dictatorships, or huge social effects. Similarly, small changes in society can be accomplished with more ease because the reaction to change will be smaller. In some instances, when a great change is forced on a society it may result in disastrous reactions, break-downs of the fabric of society, or revolutions.
We will see the same phenomenon as we discuss weather systems shortly.
3.5 AIR AND WATER CIRCULATIONS (CONVECTIONS) DUE TO HEAT
Before we embark on learning about the relationship between Coriolis acceleration and the mo-
tions of air masses, it is necessary to also understand one other phenomenon: the circulation of
air and water due to heat injection (have you noticed how many different issues are at work for
this phenomenon?).
Imagine a pot of water on a stove. As the pot heats up, the water warms up as well. But
the hottest point on the pot is where it is the closest to the source of heat. At that location, the
water adjacent to the hottest point receives maximum heat. As the molecules of water warm up,
due to the increased kinetic energy, the distance between molecules of water increases, increasing
the volume and decreasing the density of water (making it lighter) adjacent to the hottest point.
The lighter water now rises toward the surface (as, for example, a piece of wood might do
when placed in water—since its density is lower than water, it floats up). However, rising water
cannot leave a vacuum behind; something else has to replace it. Consequently, colder water from
the surrounding area will rush in to replace it. As shown in Figure 3.17, water rises near the
heat source and colder water rushes in to replace it, creating a double (actually donut-shaped)
circulation all around.
Figure 3.17: Circulation of water in a pot.
The same is true with air; as warmer but lighter air rises, colder air from elsewhere rushes in to replace it. Otherwise, we would end up with a vacuum (and the possibility that people in a warm place may not have enough air to breathe). This is the root cause of wind, and why, in general, wind is cooler. This is also why beaches are generally windy. When the earth warms up due to sunshine, air rises. The cooler air from the ocean blows in to replace it. If you stand next to an open flame like a BBQ pit, the warm air rises and colder air replaces it, and consequently, although you may feel warmer on your front side where the radiant heat from the fire warms you, you may feel relatively colder on your back due to the wind.
Sometimes firefighters try to control fire with fire. This means that to combat an advancing brush fire, they start a new fire at a safe distance in advance of the burning fire. The air above the original fire, being hot, rises, pulling in the surrounding air for replacement. This will cause the new fire, set by the firefighters, to move in the direction of the old fire until they merge. Since the brush is already burned by the new fire, the original one runs out of fuel and dies out. This is possible only because of the direction of the wind.
The faster the air moves, the lower its pressure (this is why airplanes can fly, as we will see later). Therefore, the warmer air that rises will create a lower-pressure region. The region with colder air that is not moving has higher pressure (which in meteorology is considered stable air). The air from the higher-pressure region flows toward the lower-pressure region, creating wind.
Most materials expand as they are heated. This is due to the increase in the kinetic energy of their molecules, resulting in increased distance between them. Therefore, the density of these materials decreases as they are heated. However, there are certain materials that do not abide by this rule. For example, Bismuth, a naturally occurring element with atomic number 83, expands when it solidifies, so the solid is less dense than the liquid.
Water behaves unusually too. Water expands when heated and contracts when cooled until about 4°C. At this temperature, it is at its smallest volume, and therefore, its densest. As it is cooled further and freezes, it actually expands.
This expansion has many important consequences:
1. Due to this expansion when water freezes, the volume increases. This means that as water freezes it requires more space. If there is no room for this expansion, the resulting forces can be very large. For example, a bottle which is relatively full of water and is capped well may explode if left in a freezer. Similarly, exposed piping in cold environments can burst when water freezes.
2. Since water expands when frozen, it becomes lighter. Consequently, when making ice cubes, you may notice that the center of the ice cube rises as it freezes and becomes lighter and lighter.
3. Water is densest at 4°C. This means that the water in large bodies of water, like pools or lakes, is densest just near freezing, but not yet frozen. Because it is the densest, the water at this temperature sinks to the bottom. This keeps the fish and other living creatures safe. Otherwise, if ice were denser, it would sink to the bottom, freezing and killing the fish and other living organisms. Is it not nice that Nature thinks of these things?
The same happens with the air surrounding the Earth at the macro level. As the air warms under the influence of the sun, it rises, pulling in colder air to replace it. However, wind patterns change as the Earth revolves around the sun, influenced by its tilt.
Earth's axis has a tilt of approximately 23.45° relative to the perpendicular of its orbital plane (see Figure 3.18). Due to this tilt, the total amount of sunshine received at each location changes during the year, causing the seasons. The Equator divides the Earth into two equal halves. The Tropic of Cancer is 23.45° above the Equator, and the Tropic of Capricorn is 23.45° below it. The sun's energy is greatest on the Tropic of Cancer during the Northern Hemisphere's summer and on the Tropic of Capricorn during the Southern Hemisphere's summer (even though it is January and February there), because at those times each tropic faces the sun's radiated energy nearly perpendicularly. Many of the deserts of the world are also located on these two tropics. For this reason, wind directions change in different seasons, affecting Coriolis acceleration.

Figure 3.18: The position of Earth relative to the Sun during the year.
Imagine a summer day in the Northern Hemisphere; the most intense heat radiation from
the sun over the Earth occurs around the lower 1/3 of the Northern Hemisphere, both on land
and on bodies of water. Very similar to the pot of water on a stove, as the air and water warm up,
the air and the moisture within it rise, creating a low-pressure area, causing the air from the high-pressure area to rush in to replace it. This creates an almost constant circulation of air throughout the Earth with circulating winds (called cells). However, due to other influences, instead of a double circulation pattern, there are three circulatory patterns over the Northern Hemisphere and three over the Southern Hemisphere (see Figure 3.19). These are called the Hadley, Ferrel, and Polar cells. Notice that the winds go in opposite directions near the surface versus the upper atmosphere.

Figure 3.19: The continuous air circulation patterns of the Hadley, Ferrel, and Polar cells in the Northern and Southern Hemispheres.
The expectation would be that there should only be one cell in each hemisphere: hot air rises and is replaced by cold air. However, both Poles are huge heat sinks; they are very cold, with cold air that sinks down, creating the Polar cells. In general, the weather between 0 and 30 degrees latitude is heavily influenced by the relatively stable Hadley cell, as is the weather between 60 and 90 degrees by the relatively stable Polar cells. It is the areas between 30 and 60 degrees, influenced by the Ferrel cells, that are more unstable.
The Ferrel cells are somewhat secondary in nature, existing as a result of the Hadley and Polar cells. As a result, the Ferrel cells, also known as the Zone of Mixing, change much more than the other cells. Notice that most of the land mass in the Northern Hemisphere is in this region,
including the U.S., the southern half of Canada, most of Europe, most of Asia, including China,
and the southern half of Russia.
3.6 CORIOLIS ACCELERATION AND WEATHER SYSTEMS
Coriolis acceleration affects many other systems too, but since we have studied so much to get to
this point, let’s talk about the effects of Coriolis acceleration on weather systems.
So, once again, why is it that the states along the West and East Coast of the U.S. are all
adjacent to large bodies of water (oceans), but their weather patterns are so different? One would
expect that being next to an ocean, the weather is heavily influenced by moisture, and therefore,
all these states should have similar weather patterns, both in the summer and in winter. However,
we know they do not have similar weather patterns. Northeastern states receive significant snow during the winter season while southeastern states do not; those states are generally warmer. Similarly,
northwestern states receive more snow (although not as much as the eastern states) than the southwestern states. Summers in Florida are very humid, but not in California. As you see, it is not simply the latitude or longitude that causes differences in weather patterns.
Additionally, there are somewhat constant world-wide winds (the jet stream, prevailing
winds, trade winds, etc.) that almost always blow in the same direction, and although they do
dip down or move up as the weather and seasons change, they predominantly blow in the same
direction (which has been used for sailing for millennia). What causes these winds? And what
causes cyclones? Coriolis acceleration.
As we saw in Section 3.3, when there is (linear) motion within a rotating frame, there is a
component of acceleration called Coriolis, which is perpendicular to both the rotation vector and
the motion (velocity) vector, and it follows the right-hand-rule. Here, we have the perfect recipe
for this too, where the rotating frame is the Earth, and the motions are of the aforementioned
cells. The combination of the two causes Coriolis acceleration. However, notice that since the winds are in different directions, the directions of the Coriolis accelerations vary too.
It should be clear that the wind directions at the surface and at higher altitudes are op-
posites, and therefore, the directions of Coriolis accelerations will also be opposite. But to avoid
confusion, let’s only look at these directions on the surface; the opposite will be true for higher
elevations.
First let's look at the Polar cell in the Northern Hemisphere. Figure 3.20 is a closer look at the Polar cell. The rotation of the Earth is represented as an upward vector, and the velocity vector of the air moving within the Polar cell near the surface is southward, moving away from the Pole. Remembering the Coriolis equation, a_c = 2ω × u, the Coriolis acceleration will be toward the
east. However, just like the reaction of your body to a forward acceleration due to its inertia (see
Section 3.4), which pushes your body backward, the air mass will react to this acceleration, causing
it to move toward the west, creating the Polar Easterly winds (coming from the east). The air mass, instead of simply moving down from the high-pressure area to the low-pressure area, generally moves west in the 60-90 degree region.
Figure 3.20: Within the Polar cell, the direction of the winds near the surface is southward. The Coriolis acceleration is toward the east, forcing the winds to react and go westward, creating the Polar Easterly winds.
The rest is similar. If you look at the Ferrel cell, the motion near the surface is northward toward the Pole. Since the rotation vector is still the same, the direction of Coriolis will be toward the west and the reaction of the air mass will be toward the east, creating the Westerly winds within the 30-60 degree region. Once again, instead of simply moving from high- to low-pressure areas, the winds shift toward the east. What is interesting is that as the Westerlies and the Polar Easterlies run into each other, they create all sorts of weather patterns, affecting regional climates. Westerlies bring warm and moist air from the oceans to the west coasts of the continents, which explains why the weather on the East Coast of the U.S. is so different from the West Coast. Due to these Westerlies, the general pattern of air mass movements in the U.S. is eastward; air masses move from the west to the east. West Coast weather is influenced by the air coming from the Pacific Ocean, a marine climate that is warmer and moister in winter, not causing snow except at higher altitudes, whereas the East Coast weather is influenced by continental air from the landmass where it is colder, causing snow in winter. If the air were moving straight down from
high to low pressure areas without the influence of Coriolis acceleration, the weather patterns on
both coasts of the continents would be similar.
For the Hadley cell too, the direction of motion of the wind at the surface is southward,
Coriolis acceleration is to the east, and the reaction to it causes the winds to shift toward the west,
creating the Northeast Trade winds. Trade winds are the steering winds of the tropical cyclones
near the Equator; they determine the direction that the cyclones take as they travel westward.
Southern Hemisphere winds follow the same rules too, creating the Polar Easterly winds
at the Polar cell, the Westerly winds within the Ferrel cell, and the Southeast Trade winds within
the Hadley cell. See Figure 3.21 for the directions of these winds.
Figure 3.21: The prevailing winds shift due to Coriolis acceleration within each cell.
Note how the Easterlies and Westerlies create almost constant cycles in the Northern and Southern Hemispheres. These cycles have been used in sailing for millennia. They enable ships to sail from continent to continent. Similarly, depending on whether an airplane flies with a prevailing wind or against it, flight times can change significantly. Flying from San Francisco to New York usually takes less time than flying from New York to San Francisco, by perhaps as much as one hour depending on conditions.
Figure 3.22 shows the general directions of the winds over the continents. Please remember that these are general directions. These wind directions are heavily influenced by seasons, bodies of water, mountains, and temperatures, therefore influencing regional and local weather patterns. Still, you can see how these winds influence the general weather patterns.
Figure 3.22: The general direction of the prevailing winds over the continents.

It is interesting that the same also happens on a smaller scale in your car. Next time you are driving in a car and the fan is on, try to notice what happens as the car turns. If the blower is blowing the air to your face, as soon as you turn, the air's direction changes and you will not feel the air on your face. When you
straighten out, the air blows in your face again. Why? Just like before, when
the car turns, it becomes a rotating frame within which there is air moving. If
you do the same cross product, with the vector representing the rotation of the car in the up-down direction and the blowing air in the direction toward you, the resulting Coriolis acceleration will be to the left or right, deflecting the air sideways. When the car goes straight and the rotation vector is zero, the air moves directly toward you.
And now to our original question of why the weather of the West Coast is so different from the East Coast. As you may see in Figure 3.22, the general weather of California is heavily influenced by the Westerlies, which are commonly moist and warm due to the Pacific Ocean. Therefore, it does not snow except in the mountains, because the air is warmer and more humid, whereas, due to the same Westerlies passing over a huge landmass, by the time the air reaches the East Coast it is cold, and therefore it snows. The same is also true when the Polar Easterlies dip down and bring cold air from the north into those states. As you move down to the southern Gulf states, both the Westerlies and the Easterly trade winds bring moisture to those states, so the air is warm and humid.
Of course, other local issues affect local winds and weather too. These include sea and land breezes during the day, mountain and valley breezes, and high-mountain thermal flows that affect local winds (see Figure 3.23).
Figure 3.23: Rows of trees in San Luis Obispo, near the coast in California, all leaning to the east due to the influence of regular on-shore sea-breeze winds from the Pacific Ocean.

An important issue to notice is that the angular velocity (rate of rotation) of the Earth is very low (one revolution per 24 hours). Consequently, only motions over long distances are noticeably affected by it. Small motions we normally make like walking or throwing a ball are hardly affected by
Coriolis at all. The idea that the rotation of the water in a toilet bowl, or of the water draining in a bathtub, is due to Coriolis is wrong; the water in a toilet rotates because the bowl is deliberately designed with lateral openings that swirl the water as it is discharged to increase efficiency. The rotation of the water as it drains in a bathtub is due to the conservation of angular momentum, a subject that we have not covered here. But neither is due to Coriolis, and therefore, neither will necessarily go the opposite way in the Southern Hemisphere.
The importance of Coriolis acceleration varies with latitude (the angle between the location and the Equator, defining north-south locations). Figure 3.24 shows the velocities of the surface winds within the Polar and Hadley cells again. Notice that since the winds follow the surface, the directions (lines of action) of these velocities are not exactly the same, but follow the curvature of the Earth.
Figure 3.24: Surface wind directions of the Polar and Hadley cells.
Now let's look at the Coriolis acceleration of these two cells. Notice that as Equation (3.4) shows, Coriolis acceleration is the cross product of the angular velocity of the Earth and the wind velocity. But also, as we saw earlier in Section 3.2.2, due to the nature of cross products, only the component of the velocity that is perpendicular to the angular velocity ω counts; the cross product of the component of the velocity that is parallel to ω is zero. As Equation (3.3) shows, the magnitude of the cross product is a function of sin(θ), which is zero when the two vectors are parallel. As Figure 3.24 shows, the velocity of the surface winds in the Polar cell is nearly perpendicular to the vector ω representing the rotation of the Earth. However, the velocity of the surface winds in the Hadley cell is nearly parallel to ω. Therefore, the Coriolis acceleration and its effect are much more pronounced in the Polar cell compared to the Hadley cell near the Equator.
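To make the sin(θ) dependence concrete, here is a minimal sketch (not from the book) that forms the Coriolis term 2ω × v for an assumed 10 m/s wind at several angles to the Earth's rotation vector; the wind speed and the angles are illustrative assumptions only.

```python
# Sketch: how the Coriolis effect scales with the angle between Earth's
# rotation vector and the wind velocity. Values are illustrative assumptions.
import math

OMEGA_EARTH = 7.292e-5  # Earth's angular speed, rad/s (about one revolution per day)

def cross(a, b):
    """Cross product of two 3-component vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def coriolis_acceleration(omega_vec, wind_vec):
    """a_c = 2 * omega x v."""
    ax, ay, az = cross(omega_vec, wind_vec)
    return (2*ax, 2*ay, 2*az)

omega = (0.0, 0.0, OMEGA_EARTH)   # rotation vector along +z (toward the North Pole)
wind_speed = 10.0                 # m/s, an assumed surface wind

for angle_deg in (90, 60, 30, 5): # angle between the wind and the rotation vector
    angle = math.radians(angle_deg)
    # split the wind into components perpendicular and parallel to omega
    wind = (wind_speed*math.sin(angle), 0.0, wind_speed*math.cos(angle))
    a = coriolis_acceleration(omega, wind)
    magnitude = math.sqrt(sum(c*c for c in a))
    print(f"angle {angle_deg:2d} deg -> |a_c| = {magnitude:.2e} m/s^2")

# Only the wind component perpendicular to omega contributes (the sin term),
# so winds nearly parallel to the rotation vector feel almost no Coriolis effect.
```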
3.7 ACCELERATIONS DUE TO COMBINED MOTIONS
Gyroscopic motions are closely related to this subject, and we will see a couple of applications where they are used later, but due to their complicated nature, we will not cover gyroscopic motions in this book. However, when there are multiple simultaneous rotations about different axes, they result in additional accelerations that cause interesting effects. Since we have already discussed many of the related elements, let's go a bit further and discuss these accelerations and their effect on objects that many of us use regularly.
3.7.1 RIDING BICYCLES
One of the first things a child must learn for riding a bicycle is that turning the handle to the
right or left will throw the rider to the side; instead he or she must lean to one side or the other
to force the front tire to turn to one side. This is because, in addition to other factors such as the head-tube angle, the offset (called the rake angle), the location of the center of gravity, and friction, the combined rotations of the tire and the handle-bar create an additional acceleration, similar to Coriolis acceleration, that affects the turning (as mentioned earlier, this can be explained by gyroscopic motions, but we will not discuss that here).
While riding a bicycle there are two rotating frames, one turning within the other, causing
acceleration. To see how this works, let's look at Figure 3.25. Imagine that a disk rotates about the x-axis as represented by vector ω1. Now imagine that this disk is also rotating about the z-axis as represented by ω2. As you can imagine, as time goes on, as a result of the rotation ω2, vector ω1 changes direction to ω1' (and beyond). This change in direction, which in fact is the acceleration, is shown as Δω (read delta-omega, meaning change in ω). Although we are not showing it here, this change is actually perpendicular to both of these vectors (along the y-axis) and its magnitude is ω1ω2. As we have seen before, this is the same as the cross product of these two vectors. Therefore, the change Δω = a can be shown as:

a = ω2 × ω1.    (3.5)

This means that as one vector rotates due to a rotation, the resulting acceleration is perpendicular to both vectors. Now let's see how this translates into the bicycle ride.
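As a quick numerical check of Equation (3.5), the short sketch below (with made-up values for ω1 and ω2) computes the cross product and confirms that the result is along the y-axis with magnitude ω1ω2.

```python
# Minimal sketch of Equation (3.5): a = omega2 x omega1.
# The numbers are made-up illustrative values, not from the book.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

omega1 = (100.0, 0.0, 0.0)  # spin of the disk (or tire) about the x-axis, rad/s
omega2 = (0.0, 0.0, 2.0)    # slower rotation of the whole frame about the z-axis, rad/s

a = cross(omega2, omega1)
print(a)   # (0.0, 200.0, 0.0): along the y-axis, perpendicular to both rotations
# The magnitude is omega1 * omega2 (here 100 * 2 = 200), as stated in the text.
```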
Figure 3.25: Rotations within a rotating frame cause acceleration.
Referring to Figure 3.26 and Section 3.2.3, we see that the rotations of the tires of a bicycle can be represented by the vector ω1 about the x-axis. If the handle-bar is rotated about the z-axis as ω2 at the same time, there will be an acceleration perpendicular to both of these about the y-axis, as a = ω2 × ω1. A reaction to this component of the acceleration due to inertia will tend to throw the rider to the right. Conversely, with the same ω1 representing the rotation of the tires, if the rider leans to one side, there will be a rotation ω2 along the y-axis, causing an acceleration along the z-axis that causes the handle-bar to rotate. As was mentioned earlier, there is a lot more to the total reaction of a bicycle, but this simple analysis shows how the bike reacts to rotations.
Figure 3.26: Rotations of a bicycle’s tires and handle-bar and the resulting acceleration.
Why we can ride a bike without falling over: When the tires of a bicycle rotate, the rotation creates a vector perpendicular to the plane of the motion. This vector represents the angular momentum of the tire, and is a function of the weight of the tire, the way this weight is distributed (called the moment of inertia), and how fast it rotates. Nonetheless, angular momentum is a vector whose direction follows the right-hand-rule. Although we have not discussed angular momentum yet, let it suffice to say that this vector likes to maintain its direction. In other words, changing the direction of this vector requires an effort, an external torque. Otherwise, the direction and magnitude of this vector tend to remain the same. External factors such as friction eventually reduce the angular momentum. This is why a rotating wheel eventually comes to a stop. This is true for the direction too; the direction of an angular momentum vector tends to remain the same unless forced to change. This is called conservation of angular momentum and is an important subject in dynamics.
But what is important about this? The important thing is that due to this resistance to change, barring external influences, the direction of an angular momentum vector will not change. Therefore, as long as the tires of a bicycle are rotating (sufficiently fast), the direction of the vector resists changing. As a result, the tire will not fall over.
In some instances, if the rider brakes very hard, both tires may lock and stop rotating while they slide on the road before completely stopping. Unfortunately, this causes the bike to fall over because there is no longer any
angular momentum. To prevent this, especially in motorbikes, the rear brakes
are designed to never create forces large enough to lock the wheel. Since the
contribution of braking force on the rear wheel is lower anyway, this generally
does not significantly diminish the braking ability of the total system, but
prevents the bike from falling over.
3.7.2 OSCILLATING FANS
An oscillating fan is in fact very similar to a bicycle. The rotating blades create a vector ω1 perpendicular to the plane of the blades, as shown in Figure 3.27. Now imagine that you also turn on the oscillation mechanism that oscillates the fan to the right and left, as represented by a vector ω2. Of course, the direction of ω2 changes as the direction of oscillation changes. Therefore, the resulting acceleration a = ω2 × ω1 also changes direction. This means that as the fan oscillates either to the right or left, the forces generated by this acceleration will tend to push the fan's base forward, backward, or sideways. Note that here too, the magnitude of vector ω1 representing the rotation of the blades is much larger than the magnitude of ω2 representing the oscillations. Therefore, the resulting acceleration is relatively small.

Figure 3.27: The rotations and oscillations of a fan and their representations.
3.7.3 AIRPLANES
An airplane's motions can be described by attaching a frame to it as shown in Figure 3.28. Although other conventions exist, the rotation of the airplane about an axis through its fuselage pointing forward is generally called roll. A rotation about an axis through the wings is called pitch, while a rotation to the left or right about a vertical axis pointing down is called yaw. As before, the three axes of the reference frame are mutually perpendicular. Conventionally, the x-axis is assigned to the roll axis, the y-axis is assigned to the pitch axis, and the z-axis is assigned to the yaw axis. Therefore, x × y = z. Notice how in this common convention for airplanes the positive direction of the yaw axis is downward.
Figure 3.28: Motions of an airplane can be described through a frame attached to it.
First, a word about how an airplane becomes airborne. An engineering principle called Bernoulli's principle indicates that when a fluid moves faster, its pressure drops. You can simplify this by looking at the total energy of a system, including both its potential and kinetic energies (see Chapters 1 and 2). Unless there is a net positive energy input into the system, the total should remain the same due to the conservation of energy law. Therefore, as the kinetic energy of the system increases due to its higher speed, its potential energy (translated into its pressure) drops. This is used in many places and systems, for example in measuring the airspeed of an airplane or in the old carburetors that were used to mix gasoline with air and supply it to the engine. Take the airplane: a small pipe called a pitot tube is attached to the wing of the airplane and points directly into the airstream. As the airplane flies, the air that is pushed into the pitot tube comes to a stop because the tube is closed at its opposite end. Therefore, the kinetic energy of the air converts to potential energy, and as a result, pressure increases. As the plane goes faster, the pressure in the pitot tube increases. This pressure is measured and calibrated into the speed that the speed indicator shows. In the old-style engine carburetors a venturi was used to take advantage of the
same principle. A venturi is essentially a tube whose diameter is reduced at some point (see Figure 3.29). Since the same amount of gas or fluid passes through the smaller cross section, its speed must increase, reducing its pressure. Therefore, the pressure within the smaller cross section is lower than before or after it. This was used to suck the gasoline from a small tank next to it, mixing it with the air and supplying the engine with the fuel-air mixture. Many other systems are also based on this principle. However, for an airplane, the increase in speed comes from the shape of the wing's cross section.
Figure 3.29: As the fluid moves through a venturi, within the smaller cross section, its speed increases
and its pressure decreases.
Looking at Figure 3.30 you will notice that as the airplane moves through the air, due to the asymmetric shape of the cross section of the wing, which has a longer length on the top than it does on the bottom, the air has to travel faster above the wing than it does below in order to maintain continuity. As a result, the pressure above the wings is lower than the pressure below. This creates a net upward force that floats the airplane. Notice that since the pressure above can never approach zero, the maximum difference between the pressure below and above is only a fraction of the total atmospheric pressure. However, since the wings are large, the pressure difference multiplied by the large area creates enough upward force to float the plane. By the way, you can test this by holding a piece of paper in your hands at one edge, letting the other edge hang freely. If you blow air over the paper, it moves up. This is because the air moving over the paper has a velocity larger than the air below it. This reduces the pressure above the paper, lifting it.

Figure 3.30: An airplane becomes airborne due to the pressure difference below and above the wings, caused by the increased velocity of the air traveling above the wings. Due to Bernoulli's principle, as the air speed increases, its pressure decreases.
We will discuss control surfaces and how airplanes' motions are controlled shortly, but first let's see how these surfaces work. A control surface is a portion of the wing or the rudder of an airplane which moves independently of it. Control forces are generated by moving the control surfaces into the air stream in different directions, resulting in controlled motions. Figure 3.31 shows a control surface that is lowered into the airstream. The air pushes against the surface and creates a force as shown, trying to "straighten" the obstruction. This force can be resolved into two forces, one horizontal, one vertical. The horizontal force is called drag and it works against the forward motion of the airplane, increasing the demand on the engine to overcome it. The vertical force tries to move the surface and, with it, the wing upward. Similarly, if the control surface is
moved up into the airstream, the resulting force will be downward and the wing will move down with it too. These control surfaces control the motions about the three axes.

Figure 3.31: As the control surface is moved down into the airstream, a force is generated that can be resolved into drag and an upward force that lifts the wing on one side. The opposite happens if the surface is moved up into the airstream.
As you see, if the surface is lowered, the force will be upward. If the surface is lifted up, the force is downward. What happens if one surface is lowered while the opposite one is lifted up simultaneously? There will be a pair of forces, one downward, one upward. These two forces together create a torque (a couple) that causes rotation. This torque will rotate the airplane about the roll axis. In order to rotate the airplane about each axis, a set of control surfaces is used, as shown in Figure 3.32, called the ailerons, the rudder, and the elevators.
Motion about the roll axis is controlled by control surfaces on the rear edge of the wings, close to the tips, called ailerons. Ailerons move in opposite directions, developing forces that are also in opposite directions as the airplane moves through the air. The aileron that is up creates a downward force; the one that is down creates an upward force. As we saw in Section 3.2.2, these two forces will create a moment about the roll axis and will roll the airplane about that axis. Obviously, for controlled motions during flight, small forces are used to roll the plane slowly. But imagine a fighter airplane that rolls quickly to evade an incoming missile. The forces will be larger, requiring great skill on the pilot's part to control the plane.
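To see how a pair of equal and opposite aileron forces becomes a rolling moment, here is a tiny worked example; the force and spacing are assumed round numbers, not values for any actual aircraft.

```python
# Two equal and opposite aileron forces form a couple; the rolling moment is
# the force multiplied by the distance between the two ailerons. Assumed values.
aileron_force = 2000.0               # N, up on one wing, down on the other
distance_between_ailerons = 24.0     # m, spacing across the wings

rolling_moment = aileron_force * distance_between_ailerons  # N*m
print(f"rolling moment from the couple: {rolling_moment:.0f} N*m")
# A larger force or a wider spacing gives a larger moment and a faster roll.
```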
Motion about the yaw axis is controlled by the rudder attached to the vertical tail fin, the large vertical control surface on the back of the airplane. The rudder moves to the left or right, creating a force to the right or left, rotating the airplane about the yaw axis. You may have noticed that the surface of a road is usually raised on one side along a turn. This is much more obvious on race tracks where race cars travel at very high speeds. This banking is necessary to keep the car on the road when, during the turn, it is subject to centripetal acceleration (or what is referred to as centrifugal force, which is really the reaction of inertia to centripetal acceleration, as we discussed earlier). Similarly, airplanes are intentionally banked a little, by rolling them about the roll axis during turns, to prevent objects inside the airplane from sliding outward (for example, if you have a glass of water on your tray in an airplane when it turns, without this banking the glass would slide
off the tray). In order to do this, the pilot usually combines the motions of the rudder with those of the ailerons to bank the airplane about the roll axis during a turn about the yaw axis.
Motion about the pitch axis is controlled by the elevators. Elevators are really part of the rear tail surfaces (called stabilizers) that move upward or downward, either attached to the fuselage or attached to the top of the vertical fin. Alternately, a single control surface called a stabilator may be used to do the same. Elevators and stabilizers are also used to level the airplane during flight. In general, a pilot needs to make sure that the center of gravity of the plane is in front of the center of lift (usually indicated by a minimum and maximum distance), which is on the wings. In smaller airplanes, before take-off the pilot makes sure that the luggage in the tail part of the airplane is moved around until the center of gravity is in front of the center of lift. This way, if the plane stalls, it will nose-dive (and not tail-dive). After the plane gains some speed, the pilot can try to level the airplane again and continue flying. Otherwise, as a plane stalls, it may crash.
Figure 3.32: Control surfaces of an airplane. As a control surface is pushed into the airstream, the air presses against it, creating a force that attempts to push it back.
In some airplanes, the vertical fins and the stabilizers are combined into two tail surfaces that are
at an angle relative to the wings (like a “v”).
If you know how to make a paper airplane, make one. By bending both the wing tips up or
down, one up and one down, or bending the rudder’s tip, you can force the airplane to roll, pitch,
or yaw in any combination.
How to Make a Paper Airplane: Figure 3.33 shows one way to make a simple
paper airplane.
Figure 3.33: A simple way to make paper airplanes.
As we discussed earlier, any time there is motion within a rotating frame, there will be an additional acceleration component caused by the cross product of the two motions, as shown by Equation (3.5), repeated here:

a = ω2 × ω1.
As in bicycles and fans, since the propeller blades (or in the case of a jet engine, the turbines) rotate about the roll axis, there is a relatively large vector present in that direction. Whenever the airplane is rotated about the roll axis, the same vector is present too. If at the same time the plane rotates about the pitch or yaw axes, there will be an acceleration component in the direction perpendicular to both. Therefore, anytime the pilot rotates the plane about the pitch axis, he or she also has to make a correction about the yaw axis, and whenever there is a rotation about the yaw axis, he or she needs to make a correction about the pitch axis. In most cases though, since the rotations are very slow, the resulting acceleration is very small and it may not be necessary to do much about it. However, in faster maneuvers or during take-off (when the nose is pulled up by a rotation about the pitch axis) and landing, when these rotations are larger, corrections are necessary in some cases. Pilots are taught to look at the "ball in the race" instrument that indicates whether the net acceleration vector is off to one side or not, and to "step on the ball" to keep this vector along the pilot's spine, thus keeping him or her straight in the seat, and not pushed to one side or the other. In larger airplanes the automatic control system of the airplane takes care of these corrections.
You may have noticed that as airplanes touch the ground, the tires, which at that instant are stationary, rub against the tarmac until their speed matches the speed of the airplane, making noise and smoke. In this process, the tires wear as well. One common suggestion is that the tires be spun up just before landing by attaching a small motor to each tire, thereby eliminating this rubbing and smoking. The problem is that since the tires rotate about the pitch axis, as the pilot tries to correct the airplane's roll or yaw, additional motions are created about the other axes, making it more difficult to control the airplane. There is a story that one airliner used tires that had small scoop-like extensions to force them to rotate when the landing gear was lowered before landing. Due to this phenomenon, the pilots used to cut the scoops off the tires with a knife to prevent this rotation.
Auto-pilots are based on gyroscopes, which also involve the same ideas we have discussed. A gyroscope has a flywheel-type rotating mass, which, due to its relatively large mass and high speed of rotation, creates a large angular momentum. This momentum resists a change in its direction. Therefore, anytime it is subjected to rotation in a direction other than the direction of the angular momentum vector, it resists the motion. Gyroscopes can be used in two ways: one is to steady the motions of a system, the other is to automatically control its motions, as in an auto-pilot. The same is true for ships and other water vessels. For example, if a large gyro is mounted on a ship, it will resist motions about the other two axes that are perpendicular to the direction of the angular momentum vector. As a result, the gyro will steady the motion of the ship against waves. Additionally, based on Equation (3.5), since the gyro moves about an axis perpendicular to the motion induced in it, through sensors such as a potentiometer a signal may be measured that can be used to move the control surfaces of the airplane to correct the induced motion. Therefore, except during take-off, landing, or emergencies such as sudden turbulence, while the airplane cruises at high altitudes the auto-pilot can be in control of the airplane.
3.7.4 ROBOTS
Robot manipulators, similar to the ones used in industry to manufacture and assemble parts and products, have multiple joints (degrees of freedom, or DOF) that enable them to move to any position or orientation within their reach (see Figure 3.34). This is usually translated into a hip joint, a shoulder joint, an elbow joint, and three wrist joints (if the robot is a 6-DOF, or 6-jointed, robot; fewer DOF or joints are also common, but such robots are not as versatile). Clearly, when all joints rotate together to move the robot to a new position or orientation, a situation similar to the bicycle, fan, or airplane exists; every rotation within another rotation creates an additional acceleration component perpendicular to the two rotations. Since the robot has up to six joints, it is possible that joints 1 and 2 may be moving simultaneously. Therefore, there will be an acceleration ω1ω2. Now suppose that joints 1, 2, and 3 move simultaneously. In this case, there will still be an acceleration component caused by the rotation of ω2 (the shoulder) within ω1 (the waist) as ω1ω2. However, there will also be another acceleration component caused by the rotation of joint 3 (the elbow) within the waist as ω1ω3, and since joint 3 also moves within joint 2, there will be an acceleration ω2ω3. Therefore, the total acceleration includes all three components. Similarly, if all six joints move together, since joints 2, 3, 4, 5, and 6 all move within joint 1, there will be acceleration components ω1ω2, ω1ω3, ω1ω4, ω1ω5, and ω1ω6; since joints 3, 4, 5, and 6 move within joint 2, there will be accelerations ω2ω3, ω2ω4, ω2ω5, and ω2ω6; as well as ω3ω4, ω3ω5, ω3ω6, ω4ω5, ω4ω6, and ω5ω6. Each acceleration, multiplied by its own corresponding mass or moment of inertia, produces a Coriolis force that has to be dealt with. In most robots that move slowly, these accelerations are small and can be ignored. But for fast-moving robots, these can become significant and must be considered in the design of the robot.
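The bookkeeping of these pairwise terms can be sketched in a few lines; the joint speeds below are made-up values used only to show that six simultaneously moving joints produce fifteen ωiωj combinations.

```python
# For n joints moving at the same time, there is one omega_i*omega_j
# Coriolis-type term for every pair of joints. Joint speeds are assumed values.
from itertools import combinations

joint_speeds = [0.5, 1.0, 0.8, 2.0, 1.5, 3.0]   # rad/s for joints 1..6 (assumed)

pair_terms = {(i + 1, j + 1): joint_speeds[i] * joint_speeds[j]
              for i, j in combinations(range(len(joint_speeds)), 2)}

for pair, term in pair_terms.items():
    print(f"joints {pair}: omega_i * omega_j = {term:.2f}")
print(f"number of pairwise terms: {len(pair_terms)}")   # 15 terms for 6 joints
```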
To experience the same phenomenon yourself, first turn about your waist while your arm is stretched outward. Next, move your arm up and down while it is still stretched. Then move your arm up and down while you rotate about your waist. You will notice that you feel an additional force against your arm, stemming from the Coriolis-type acceleration.

Figure 3.34: A robot and its joint movements.
3.7.5 MOVEMENTS OF A SPACECRAFT IN SPACE
The following excerpt is from NASA's Gemini-VIII spacecraft mission journal, written by astronauts Neil Armstrong and David Scott (see https://www.hq.nasa.gov/alsj/alsj-GeminiVIII.html).
After station-keeping for about 36 minutes, docking with the Gemini Agena Target
Vehicle was accomplished. e final docking maneuver was begun when a distance
of about 2 feet separated the two vehicles. A relative velocity of about three-fourths
of a foot per second was achieved at the moment of contact. e nose of the space-
92
3. CORIOLIS ACCELERATION AND ITS EFFECTS
Figure 3.34: A robot and its joint movements.
craft moved into the docking adapter very smoothly and the docking and rigidizing
sequence took place very quickly and with no difficulty. The docking sequence was
completed at 6:33:22 ground elapsed time, with the two vehicles rigidized together.
For a period of 27 minutes after docking, the stability and control of the docked vehi-
cles was excellent. At approximately 7:00:30 ground elapsed time, the crew noted that
the spacecraft-Gemini Agena Target Vehicle combination was developing unexpected
roll and yaw rates. e command pilot was able to reduce these rates to essentially
zero; however, after he released the hand controller, the rates began to increase again
and the crew found it difficult to effectively control the rates without excessive use of
spacecraft Orbital Attitude and Maneuver System propellants. In an effort to isolate
the problem and stop the excessive fuel consumption, the crew initiated the sequence
to undock the spacecraft from the Gemini Agena Target Vehicle. After undocking,
the spacecraft rates in roll and yaw began to increase, indicating a spacecraft problem
which the crew attempted to isolate by initiating malfunction-analysis procedures.
When the rates reached approximately 300 degrees per second, the crew completely
deactivated the Orbital Attitude and Maneuver System and activated both rings of the
Reentry Control System in the direct-direct mode. After ascertaining that spacecraft
rates could be reduced using the Reentry Control System, one ring of the system was
turned off to save fuel for reentry and the spacecraft rates were reduced to zero using
the other ring. The crew continued the malfunction analysis and isolated the problem
area to the No. 8 thruster (yaw left-roll left) in the Orbital Attitude and Maneuver
System. The circuitry to this thruster had failed to an "on" condition.
This report relates to many of the principles that we have discussed in this chapter. To better understand them, let's start with thrusters and their role.
Large rockets are used to apply large forces (and acceleration) on spacecraft for rapid move-
ments, but small motions and rotations are accomplished by firing small thrusters for short periods
of time. Imagine a spacecraft, schematically shown in Figure 3.35, moving in space. Also imagine
two pairs of opposing thrusters attached to it in one plane as shown. If thrusters A and B are fired
simultaneously (usually for a very short time), the spacecraft will accelerate in the direction shown
due to the force exerted by the thrusters. To slow down or return the spacecraft to its previous
speed, thrusters C and D are fired simultaneously to exert similar but opposing forces to the craft.
To rotate the craft in the counter-clockwise direction, thrusters C and B are fired simultaneously. In this case, the forces of these thrusters oppose and cancel each other; consequently, the summation of forces is zero and the craft will not accelerate linearly. However, these forces together create a torque in the counter-clockwise direction, resulting in a rotation. Similarly, to rotate the craft in a clockwise direction, thrusters A and D are fired simultaneously. In either case, to slow down the rotation or to stop it, the opposite pairs of thrusters are fired.
Figure 3.35: Depending on which pair of rocket thrusters is fired, the spacecraft may move in one
direction, rotate clockwise, or rotate counterclockwise.
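A simplified planar sketch of this thruster logic is shown below; the thruster positions and force magnitudes are assumed values chosen to mimic Figure 3.35. Firing A and B gives a net forward force with no torque, while firing C and B (or A and D) cancels the forces and leaves a pure torque.

```python
# Planar sketch (assumed geometry): each thruster produces a force at a mounting
# point; forces that cancel can still leave a net torque that spins the craft.
thrusters = {
    # name: (force_x, force_y, mount_x, mount_y) in craft coordinates
    "A": (0.0,  100.0, -1.0, 0.0),   # pushes +y, mounted on the left side
    "B": (0.0,  100.0,  1.0, 0.0),   # pushes +y, mounted on the right side
    "C": (0.0, -100.0, -1.0, 0.0),   # pushes -y, mounted on the left side
    "D": (0.0, -100.0,  1.0, 0.0),   # pushes -y, mounted on the right side
}

def net_effect(fired):
    fx = sum(thrusters[name][0] for name in fired)
    fy = sum(thrusters[name][1] for name in fired)
    # torque about the center: t = x*Fy - y*Fx summed over the fired thrusters
    torque = sum(thrusters[n][2]*thrusters[n][1] - thrusters[n][3]*thrusters[n][0]
                 for n in fired)
    return fx, fy, torque

print(net_effect(["A", "B"]))  # (0.0, 200.0, 0.0): net forward force, no rotation
print(net_effect(["C", "B"]))  # (0.0, 0.0, 200.0): forces cancel, pure CCW torque
print(net_effect(["A", "D"]))  # (0.0, 0.0, -200.0): pure torque the other way (CW)
```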
Since a spacecraft is free to move in three dimensions, it must have a similar arrangement of two pairs of thrusters in each plane to allow controlling its motions along the x-, y-, and z-axes, as schematically shown in Figure 3.36. A similar arrangement is used for maneuvering astronauts during space walks.

Figure 3.36: Pairs of thrusters are used in different planes (x-y, x-z, and y-z) to move a spacecraft along or rotate a spacecraft about its axes.
What happened with the spacecraft in the earlier story was that one of the thrusters had malfunctioned in the "on" position, and was consequently applying a torque to the vehicle and spinning it up to about 300 degrees per second, a very large rate that induces dizziness in astronauts and can eventually destroy their vehicle. The astronauts had separated the two craft to find which one was at fault. After ascertaining that the problem was in their own spacecraft rather than in the docked target vehicle, they fired another series of thrusters, normally used for controlling reentry, to counter the malfunctioning
thruster. Eventually, the astronauts turned off the automatic system that was supposed to control the vehicle's attitude and took over piloting the vehicle themselves.
Notice that in spatial movements, since the object can rotate about all the axes, there are Coriolis accelerations about different axes as well. Therefore, quick rotations that increase Coriolis effects are more difficult to control.
Rocket-propelled human flight systems follow the same rules, except that gravity is present. Therefore, thrusters or jet packs are needed just to provide lift against the downward force of gravity. The rotations are accomplished by pairs of thrusters. But watch out for Coriolis acceleration.
Hopefully, you have noticed how all these issues are inter-related. Whether a bicycle, an
airplane, the air coming out of a vent in your car, the weather, a fan, a robot, or a spacecraft,
engineering principles govern how systems behave and react. Engineers use these principles in
the design and analysis of systems that we use every day. Knowing them allows the engineer to
not only create useful devices and systems, but also to protect users against adverse reactions that
might occur as a result of these principles.
CHAPTER 4

Thermodynamic Cycles
Refrigeration, Air Conditioning, Engines,
and Power Cycles
4.1 INTRODUCTION
When I was a junior in an engineering college my uncle asked me, “Do you know how a refriger-
ator works? Can you repair one?” I replied yes, I know how it works. But whether I can repair one
or not depends on a lot of other things. What I meant was that as engineering students, we learn
thermodynamics, in which we study the principles that govern how a refrigeration cycle works,
and based on that we can design the system. However, each company uses somewhat different
sets of components to achieve about the same results. Based on experience with those compo-
nents, you may or may not be able to fix a broken system or even recognize exactly what a part
does; a certified technician can do that better than an engineer. But a technician cannot design
the system or create a new one. The same is true with engines. You learn how an engine works
and how to design it to ensure that it works properly, but as an engineer, you may or may not
know how to fix it depending on your experience. To see this relationship and to understand why
it is important to learn the basics and the principles of engineering let’s look at refrigeration and
power development systems and how the principles and the practical devices map into each other.
If you have access to a bicycle pump do the following exercise (if not, a simple balloon will
do): Firmly place your finger at the output valve of the pump and press down on the plunger
(down-stroke), pressurizing the air inside, and hold it (with the balloon, blow it up and hold the
tip to prevent the air from escaping, but do not tie it with a knot). If you touch the body of the
pump you will notice that it is a bit warmer (the balloon will most probably not get noticeably
warm because of its size). Why do you think it is warm now?
This is because we perform "work" on the air to pressurize it (as was discussed in Chapter 3, work is force multiplied by distance; as a force moves, it does work, which also means that it adds energy to the system. In this example, we exert a force on the plunger to compress the air, and we move it in the same direction, doing work and adding energy to the system). The added work will increase the temperature of the pressurized air. The same happens when the air inside a tank is pressurized by a pump; the tank body's temperature rises a little because the air inside gets warmer. In mechanical engineering, the relationship between pressure, volume, and temperature of a gas
can be expressed by an equation of state. The most common one for gases is called the ideal gas law. Tables containing the detailed properties of real gases are also available. Equation (4.1) shows the ideal gas relationship between pressure, specific volume, and temperature; in this equation, temperature is the slave to the pressure and volume. R is called the specific gas constant and is known for different gases. For air, R is 0.2870 kJ/kg.K (kilo-Joules per kilogram Kelvin, where Kelvin is the absolute temperature, K = °C + 273, and °C is the temperature in degrees Celsius). In English units, R = 53.34 ft-lbf/lbm.R (foot pound-force per pound-mass Rankine, where Rankine is the absolute temperature in English units, R = °F + 460, and °F is the temperature in Fahrenheit).

Pv = RT.    (4.1)

In this equation, P is the pressure, v is the specific volume (volume/mass), R is the gas constant, and T is the absolute temperature.
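As a minimal example of using Equation (4.1), the short sketch below finds the specific volume of room air at one atmosphere; the chosen state is only an illustration.

```python
# Minimal use of Equation (4.1), Pv = RT, for air (R = 0.2870 kJ/kg.K).
R_AIR = 0.2870            # kJ/kg.K, which is the same as kPa.m^3/kg.K
pressure_kpa = 101.325    # kPa (1 atm)
temperature_c = 20.0      # degrees Celsius

temperature_k = temperature_c + 273.0
specific_volume = R_AIR * temperature_k / pressure_kpa    # m^3/kg
print(f"specific volume of air at 20 C and 1 atm: {specific_volume:.3f} m^3/kg")
# about 0.83 m^3/kg, i.e., a density of roughly 1.2 kg/m^3
```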
Due to the added work and as a result of Equation (4.1), the temperature of the air within the bicycle pump rises. We will see more about this shortly, but assuming that the ratio of the original volume of the pump to its final volume is 4:1 without any leaks, the final pressure can be about 7 times as much. Assuming a temperature of air before compression of 20°C (68°F), the final temperature can be as high as 237°C (460°F). You may ask why the pump warms up only a bit if the air itself is this hot. The answer is that the mass of the compressed air compared to the pump is very small. Therefore, although the temperature of the air increases a lot, the total energy is not much, increasing the pump's body temperature only a little.
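The numbers quoted above can be checked with the idealized relations for a rapid (approximately adiabatic) compression of air, a sketch assuming no leaks, no friction, and no heat loss during the stroke:

```python
# Rapid compression of air with a 4:1 volume ratio, using the ideal-gas
# adiabatic relations T2 = T1*(V1/V2)^(k-1) and P2 = P1*(V1/V2)^k, k = 1.4.
k = 1.4                  # ratio of specific heats for air
volume_ratio = 4.0       # V1 / V2
T1 = 20.0 + 273.0        # initial temperature, K

T2 = T1 * volume_ratio ** (k - 1)
pressure_ratio = volume_ratio ** k

print(f"pressure ratio P2/P1: {pressure_ratio:.1f}")   # about 7
print(f"final temperature: {T2 - 273.0:.0f} C")        # about 237 C
```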
A colleague of mine has designed a simple pump from glass in which the air can be quickly compressed about 20 times, raising its temperature to about 700°C (1290°F) and instantly combusting a small piece of paper that is placed at the bottom of the pump. Although the air inside becomes very hot, enough to burn the paper, the total energy is barely enough to warm up the body of the pump.
Now imagine that while you continue to hold the pressure, you let the bicycle pump (or the
balloon) cool down. Here, while the air is under pressure, it cools down by losing its heat energy,
and although the pressure drops somewhat, the air is at a higher pressure than when we started.
Next imagine that you release the plunger of the pump while you still hold the pump’s outlet
orifice (or let go of the balloon’s tip to let the air out). Since the air is under pressure, it will push
back the plunger, and as a result, the air’s pressure returns (almost) to the original value before it
was pressurized (in reality, the air is now doing some work and therefore loses some more energy).
Lastly, what happens at this point is that since the air returns to its original pressure and
in the process has lost a net sum of energy, its temperature also reduces to something lower than
what it was at the beginning; the pump will feel a little cooler at the bottom (and the balloon also
feels cooler). Therefore, since it is now cooler than the environment, the air can absorb heat from the environment and make it cooler. In this process, we forced the system to absorb heat from one environment, transfer the heat to another, and reject it there. The net effect is a lower temperature in one place and a higher temperature somewhere else.
This is exactly what happens in any refrigerator or air conditioning system. Neither of these systems "creates" coldness; they just transfer heat from one environment to another such that one becomes colder and one becomes hotter. Remembering our discussion about entropy in Chapter 1, is this not in the opposite direction of what natural systems would do to increase entropy?
Should we assume that the net effect is zero? In fact, we should not. Since we need to employ work (add energy) to compress the air, and since all systems have friction (they waste energy, even if we were not compressing air but just moving the plunger), we add to the energy of the system, which ultimately has to be rejected. Therefore, we will need to reject more heat at the higher temperature than we absorb at the lower temperature, making the higher-temperature source even hotter. In other words, as we discussed in Chapter 1 with entropy, the efficiency of the system can never be 100%. This is why if you run a refrigerator inside a room, even with its door open, the room will eventually become hotter, not cooler; the total heat transferred to the room is equal to the heat from inside the refrigerator plus the electric energy used to run the compressor and the fans. What the refrigerator accomplishes is to keep its interior compartment cooler at the expense of a higher exterior temperature.
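The energy bookkeeping behind this statement fits in a couple of lines; the wattages below are assumed round numbers, not measurements of any real refrigerator.

```python
# The heat rejected into the room equals the heat pulled out of the cold space
# plus the electrical work driving the compressor and fans. Assumed values.
heat_absorbed_from_inside = 300.0   # W, pulled out of the refrigerated compartment
electrical_power_used = 150.0       # W, compressor plus fans

heat_rejected_to_room = heat_absorbed_from_inside + electrical_power_used
print(f"heat dumped into the room: {heat_rejected_to_room:.0f} W")
# With the door open, the 300 W absorbed comes from the room itself,
# so the room still gains the extra 150 W of electrical work as heat.
```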
Now let's see what this means in engineering terms. First, notice that this is a cycle: we compress the gas that is at a particular temperature, let it cool down, then expand it, which further cools it down, and in turn it absorbs heat and warms up to its original state. We then repeat the process. This is called a thermodynamic cycle. There are many different types of thermodynamic cycles, each with their own specific characteristics and applications, including cycles that translate into power development (such as in power plants), engines, and refrigeration systems. These cycles are usually described with graphs, including graphs of temperature versus entropy (T-s), pressure versus volume (P-V), and pressure versus enthalpy or energy (P-h). In this book we will use the pressure versus volume (P-V) diagram for its simplicity, even though it is somewhat limited in its usefulness.
Figure 4.1 shows the P-V diagram for the first phase of this process. The x-axis shows the volume of the air at any state, while the y-axis shows the corresponding pressure. Each of the isotherm lines shows the relationship between volume and pressure at a constant temperature. In this case, if the temperature of the gas is kept constant, as the pressure increases, the volume will decrease along the isotherm line. Now let's see how our process maps into this diagram.

Figure 4.1: The pressure versus volume (P-V) diagram of the first segment of the bicycle pump experiment.
Let’s say we start at point 1 at a particular pressure and volume. In the case of the pump,
this indicates the atmospheric pressure and the volume of the pump. Segment 1-2 shows the
compression of the air; we compress the air in the pump, and as a result, its pressure increases, its
98
4. THERMODYNAMIC CYCLES
Figure 4.1: e pressure versus volume .P
experiment.
(cid:0)
V / diagram of the first segment of the bicycle pump
volume decreases, and it becomes hotter. As shown, point 2 is at a higher pressure, lower volume,
and at a higher temperature than point 1.
Segment 2-3 in Figure 4.2 shows the cooling of the air as we maintain the volume, but
we let the air cool down (here, we are assuming that as the air cools down, its volume does not
change. In reality, the volume decreases a little as it cools down). Notice how the line indicates
the changes in the state of the gas. Its volume is (almost) constant, its pressure is lower, and the
temperature is also lower as indicated by a lower-temperature isotherm, in this case the same as
the original temperature.
Segment 3-4 in Figure 4.3 indicates the release of the plunger, where the pressure returns
(almost) to the previous level and the volume is a little lower too (due to the decrease in tem-
perature). However, notice that point 4 is at a lower temperature than point 3 (lines 1-2 and 3-4
follow a constant entropy line).
Segment 4-1 in Figure 4.4 is the absorption of outside heat energy into the system, which returns the system back to its original state. These four segments constitute a cycle that can be repeated. In the process, we transfer heat from one environment into another by applying external energy to the system, at a net energy cost.
4.2 REFRIGERATION CYCLE
Figure 4.2: The pressure versus volume (P–V) diagram with the second segment of the bicycle pump experiment.

Figure 4.3: The pressure versus volume (P–V) diagram with the third segment of the bicycle pump experiment.

The way a refrigerator works is very similar to the bicycle pump example, except that as an engineered device, it is designed to be much more efficient and to work continuously. Each part of
the cycle is accomplished by a particular component. Let’s see what these components are and
how they work and how the cycle differs from Figure 4.4.
Unlike the bicycle pump, where the medium was air, the medium in most refrigeration systems is a chemical with favorable characteristics such as boiling point and heat capacity.

Figure 4.4: The complete pressure versus volume (P–V) diagram of the bicycle pump experiment cycle.

For decades, the refrigerant of choice was Freon-12, a chlorofluorocarbon (CFC), and it worked very well due to its physical characteristics.
However, since Freon had an adverse effect on the upper-atmosphere ozone layer, it was banned in the mid 1990s. It was replaced with tetrafluoroethane (R-134a). Although R-134a works well too, due to the size of its molecules it leaks more easily and needs to be replaced more often. It turns out that R-134a also has adverse effects on global warming (as much as 2,000 times more than CO2) and is being phased out. A big difference between the bicycle pump example and a refrigeration system is that instead of air, which is always a gas, the refrigerant switches states between gas and liquid. This helps in maintaining a desired and almost constant temperature in the refrigerator and freezer and increases the efficiency of the system.
Figure 4.5 shows a typical refrigeration system. The system consists of four operations: compression, condensation, expansion, and evaporation. These operations are accomplished by components (conveniently) called a compressor, a condenser, an expansion valve, and an evaporator.
The compression segment of the cycle is accomplished by a compressor. A compressor is a combination of an electric motor and a pump, both integrated together and hermetically sealed to prevent leakage. The function of the compressor is to compress the refrigerant. As a result, the refrigerant becomes a pressurized hot gas. In many cases, a small fan is installed next to the compressor to blow air on it to keep it cool; otherwise the heat may damage the compressor. Remember that in order to compress a gas, we need to do work on it, adding to the energy of the system. This work, supplied by the compressor motor, eventually turns into heat and is ultimately wasted. Figure 4.6 shows a typical compressor and cooling fan next to it. Typically, the compressor is in the rear-bottom part of the refrigerator and can be accessed from the back.
Figure 4.5: A typical refrigeration system and its components.
Figure 4.6: A typical compressor and cooling fan next to it.
The second component of the system is a condenser. In reality the condenser is a simple heat exchanger; it is a series of tubes that transfer the heat of the high-pressure hot refrigerant out to the ambient air. As a result, the compressed and overheated refrigerant cools down, and eventually becomes a liquid (and this is why it is called a condenser, because it condenses the superheated gas into liquid). Therefore, at point 2 in Figure 4.5, the refrigerant is in liquid form. To increase heat transfer from the condenser, it is possible to add a fan to blow air over the condenser and
cool it down. In older refrigerators, the condenser was usually placed vertically on the back of the
unit in order to take advantage of convection heat exchange; the warm air would simply rise and
escape from behind the unit. In modern units, the condenser is usually under the refrigerator.
In reality, the small fan that is used to cool down the compressor is designed to suck in the air
from the front of the refrigerator at the bottom, pass it over the condenser and the compressor,
and blow it out the back, in effect cooling both of them together. Condensers are very prone to
dust accumulation that greatly reduces their effectiveness. Therefore, it is advisable to clean the
condenser once in a while. It should be mentioned that for larger systems, water cooling, larger
fans, and other assistive devices are added to remove larger heat loads. Figure 4.7 shows typical
condensers in a household refrigerator (a) and in an industrial unit (b).
Figure 4.7: Typical condensers in refrigerators (a) and industrial units (b).
The third component of the system is an expansion valve. Typically, the expansion valve is a long capillary (very narrow) tube. When the liquefied refrigerant passes through it, due to the large pressure drop in the capillary tube, the high-pressure liquid loses its pressure and becomes a mixture of gas and liquid at low pressure. Just like the bicycle pump example, when the cooled refrigerant is allowed to lose its pressure, its temperature drops significantly. Therefore, at point 3 in Figure 4.5, the liquid entering the evaporator is very cold, and consequently, the evaporator
will also be very cold. Figure 4.8a shows a typical expansion valve. In larger systems the expansion
valve can be an actual valve, which similarly reduces the pressure of the liquid as it passes through
(Figure 4.8b).
Figure 4.8: Typical expansion valves.

The fourth component of the system is an evaporator. In reality, the evaporator is also a simple heat exchanger just like the condenser, and is similarly made of tubes. The cold refrigerant mixture of gas and liquid absorbs the heat of the refrigerator and boils into gas, which is then sent back to the compressor. In this process, the heat of the refrigerated area is absorbed and transferred to the outside. To increase the effectiveness of the refrigerator, in modern systems a
fan blows air over the evaporator and then into the freezer, which is typically at about 4°F (−15°C). The refrigerator area is cooled through the freezer air and is typically at 35°–40°F (1–4°C). In most systems, the evaporator is behind the freezer area and cannot be seen. In older refrigerators, the evaporator coils were embedded into the freezer box area.
Figure 4.9 shows a typical thermodynamic P–V (pressure vs. volume) refrigeration cycle. As expected, it includes the same compression, condensation, expansion, and evaporation segments. Segment 1-2 in Figure 4.9 shows the thermodynamic representation of compression. During compression, pressure increases, volume decreases, and temperature increases. Although not entirely accurate, it is usually assumed that the entropy of the system remains the same during this operation. This is not an important issue for our discussion here, as we have not really studied this subject. However, the assumption helps us determine how this segment behaves in the P–V diagram. All points under the dome are mixtures of gas and liquid. All points to the right of the dome are gas. Therefore, at point 2, the refrigerant is in gas form. Segment 2-3 shows the condensation, where the refrigerant condenses by losing heat and becomes a mixture. In this process, pressure remains the same, but since the gas liquefies, the volume decreases. Segment 3-4 is the expansion, where volume increases a small amount, but pressure drops and temperature decreases. Segment 4-1 is evaporation, where the liquid evaporates by absorbing the outside heat at constant pressure and its volume increases.
Figure 4.9: Thermodynamic representation of the refrigeration cycle.
While designing a system, which is what many engineers do, the designer has to choose
components of the system so that they collectively work as desired. The thermodynamic cycles such as Figure 4.9 allow the engineer to design the system, pick appropriate specifications (such as temperatures, pressures, volumes, etc.), and choose appropriate components that work together and produce the desired results. This includes the size, capacity, and power of the compressor, the
size of the fan, the dimensions of the condenser and evaporator, the ranges of temperature and
pressures, and so on. Without these thermodynamic tables and material behavior charts, it would
be impossible to design a system that is efficient and works well.
It should be mentioned here that modern refrigerators have other added features. For example, huge layers of ice would form on the freezer walls of old refrigerators that doubled as an evaporator. This is because when the air cools and its temperature drops, the moisture in the air condenses to water and freezes over the cool surface. To prevent this, about every 22 hours, refrigerators switch off and the freezer walls are heated slightly for a short time to thaw the ice, which drops down into a tray at the bottom of the refrigerator. The water eventually evaporates into the outside room. This action keeps the freezer ice-free, but it uses energy both to heat the freezer wall and melt the ice and then to cool it down again.
Air-conditioning systems are essentially the same as refrigeration systems, except that the components may be put together somewhat differently to cool down an environment instead of the limited volume of a refrigerator. These systems also include a compressor, condenser, expansion valve, and evaporator. The heat of the system is transferred to the outside by a fan, whereas the air from the room is sucked in by a fan, blown over the evaporator to cool down, and returned to the same environment. In this process, since the air is cooled down, some of the moisture in the air condenses on the evaporator, and therefore the humidity of the air is reduced too. In the summertime, this is a good thing because moist air feels warmer. Consequently, the air feels better because it is cooler and also drier. However, like refrigerators, the condensed water has to be drained. You may have noticed that in many air-conditioning systems, it appears that the unit is leaking. That is in fact condensed water and not a leak. The same is true in automobile air-conditioners. Other than these differences, an air-conditioning system and a refrigeration system are thermodynamically very similar.
4.3 SPARK-IGNITION POWER CYCLE
A spark-ignition cycle approximates the cycle of power development by an internal combustion
engine with spark plugs. This is also similar to what is referred to in thermodynamics as an Otto Cycle, which is an ideal cycle (an ideal cycle is an approximation; real cycles differ somewhat from ideal cycles, but to learn the principles, we always start with an ideal cycle, then modify the cycle to
a more realistic model). Conversely, a compression ignition cycle approximates a diesel engine,
where the air is compressed much more and consequently, it becomes much hotter to the point
that when the fuel is injected into it, it explodes and burns without the need for a spark plug. We
will discuss the differences between these two engines later.
An internal combustion engine in general refers to any type of engine in which the com-
bustion of the fuel and air within a closed environment produces the gases that generate the
mechanical work, and includes regular gasoline engines, diesel engines, rotary engines, and jet
engines. Conversely, steam engines are not internally combusting engines; in steam engines, the
fire is outside of the engine and instead, combustion products boil water into steam in a boiler
and the steam is used to power the engine. Common gasoline and diesel engines are called recip-
rocating IC engines because the piston reciprocates (moves up and down) in a cylinder, rotating a
crankshaft that is connected to it via a crank and a connecting rod. Wankel (rotary) engines and
jet engines do not reciprocate (in fact, they do not have pistons and connecting rods and cranks);
instead Wankel engines have rotary 3-sided rotors that revolve within a chamber, and jet en-
gines have compressors, combustion chambers, and turbine rotors that always rotate. We should
remember that in this section our discussion revolves around reciprocating internal combustion
engines even if we just refer to them as IC engines.
First let’s see how an engine works, then we will look at the thermodynamic cycle repre-
senting it. It should be mentioned here that there are two types of gasoline reciprocating internal
combustion (IC) engines—2-stroke and 4-stroke. As we will see shortly, 2-stroke engines are
less efficient and more polluting than 4-stroke engines. Four-stroke engines are cleaner, more
efficient, and vastly more popular, but as we will see, they develop power only once in every two crankshaft revolutions.
Except in specific cases such as small motorbikes, model airplane engines, some lawn-mower
engines and the like, almost all engines are 4-stroke.
4.3.1 4-STROKE ENGINES
Figure 4.10a shows the schematic of a one-cylinder, 4-stroke, spark-ignition reciprocating internal
combustion engine (notice all the qualifiers that are used to define it). In kinematics, this is called
a slider-crank mechanism (Figure 4.10b) because it consists of a slider (the piston) and a crank,
connected together by a coupler. However, unlike the slider-crank mechanism where the slider
simply slides on a surface, in the internal combustion engine the piston oscillates inside a cylinder
with a cylinder head which enables it to compress the air inside. The engine schematic shows the
engine block and cylinder head, the piston, the connecting rod (coupler), the crank and crankshaft,
the spark plug, the fuel injector, and two valves on the top. One valve is on the intake manifold
and allows the outside air to be sucked into the engine; the other is on the exhaust manifold and
allows the burned gases to be pushed out to the atmosphere.
Figure 4.10: A single-cylinder reciprocating internal combustion engine and a slider-crank mechanism.
In 4-stroke engines the complete cycle occurs within two complete rotations of the crankshaft (720°), causing the piston to move up and down twice within the complete cycle (Figure 4.11). These are called the intake, compression, power, and exhaust strokes. Let’s assume that the piston is at the top (called top-dead-center or TDC) and is at the beginning of a cycle (intake stroke). At this point the intake valve is open, and as the piston moves down, it sucks in filtered air, as shown in Figure 4.11a. In older cars, the air would move through a device called a carburetor. Carburetors are no longer used in new cars, but they are still around in older cars (even into the ’90s). As discussed in Chapter 3, when fluids or gases move faster, their pressure drops. A carburetor has a Venturi that, due to its reduced cross section, increases the speed of the air, dropping
the pressure. The pressure difference through the Venturi sucks in some fuel from a small tank at the side of the carburetor, causing fuel (gasoline) to be mixed into the air stream. In fact, for this reason, the faster the engine rotates, the better the fuel mixes with air. The fuel-air mixture con-
tinues to the cylinder. In newer engines, an injector is used to inject a precise amount of gasoline
into the cylinder as the piston moves down, mixing with the air (other possibilities exist). When
the piston is almost at the bottom (called bottom-dead-center or BDC), the intake valve is closed.
Figure 4.11: The four strokes of a 4-stroke gasoline engine.
The second stroke is compression. As the piston moves up, since both valves are closed and the fuel-air mixture is trapped inside the cylinder, it is compressed, causing both the pressure and temperature to rise (Figure 4.11b). In most gasoline engines, the compression ratio, the ratio of the air volume at the beginning and end of this stroke (V1/V2), varies between about 8 and 11. A compression ratio of about 11 raises the temperature of the air to the point that it will require premium gasoline with higher octane; otherwise, the fuel-air mixture may ignite at an improper time, potentially damaging the engine (called detonation or knocking). At compression ratios of about 8 the temperature is still low enough to not combust prematurely; therefore regular gasoline can be used.
It should be mentioned here that the ignition, combustion, and burning of the fuel-air
mixture is a very complicated and involved issue that is beyond the scope of this book. We simplify
this process greatly here. In fact, one of the major issues in an internal combustion engine class
in engineering programs is the process of combustion and its mechanisms.
Gasoline Octane Number: The gasoline octane number relates to the temperature at which gasoline auto-ignites in an engine. The higher the octane number, the higher this temperature will be. Gasoline octane numbers vary between 87 and 91. Gasoline with octane number 87 is called regular, 89 is medium grade, and 91 is premium gasoline, not in quality but in the auto-ignition temperature. The auto-ignition temperature of gasoline is between 246–280°C (475–536°F) depending on its grade when measured in open air. In engines, the temperature at which the gas may ignite prematurely is much higher, at about 747 ± 22°C (1380 ± 40°F) [1]. These numbers do not relate to the quality or cleanliness of the gasoline at all. Higher octane in a fuel is achieved by mixing different hydrocarbons and by adding chemicals to the gasoline that increase its ignition temperature, and therefore, make it more expensive. However, all grades have the same heat energy value. If your engine’s compression ratio is higher, say 11, it will require higher-octane gasoline. If it is about 8-9, it can safely work with regular 87-octane gasoline.
Should you use the higher octane gasoline in an engine with a compression ratio of about 8-9, as some suggest, thinking it is a better gasoline? The answer is no. Higher octane gasoline will not provide more energy, will not burn better or cleaner, and will not keep your engine cleaner. Therefore, for more money, you will get the same result. Using higher grade gasoline in a car that does not require it will make no difference except in unnecessary higher cost.
The only exception is when an engine has carbon deposits in it that cause it to continue to turn even when turned off, or if the carbon deposits cause pre-ignition. This happens in older engines in which, after a long period of time, carbon deposits accumulate in the cylinder; when the engine is hot, the deposits cause the air-fuel mixture to combust before the piston gets to the ignition point at which the spark plug fires. This premature ignition causes the mixture to burn too early, increasing pressure to unsafe levels. This may damage the engine permanently. Additionally, in some engines the same carbon deposits cause the mixture to ignite even after the engine is turned off, so it continues to rotate. Higher-octane gasoline may improve the situation by decreasing the probability of the mixture igniting prematurely. Otherwise, continue to use the grade that the manufacturer recommends.
Most typical engine management systems found in modern cars have a
knock sensor that monitors if the fuel is pre-igniting. In modern computer con-
trolled engines, the ignition timing will be automatically altered by the engine
management system to reduce the knock to an acceptable level. However, this
should not be a reason to use regular gasoline in a high-compression engine
that requires premium grade either.
The third stroke is the power stroke. Within the two rotations of the crankshaft, this portion
is the only one that actually delivers power to the engine. As the piston nears the top-dead-
center, the spark plug is fired to generate a strong spark within the fuel-air mixture, causing it
to combust and burn quickly as the piston clears the top-dead-center. Combustion creates a very
high-pressure mixture, which when multiplied by the area of the piston, translates to a very large
force, pushing down the piston in its down-stroke and generating a large torque at the crankshaft.
Both valves remain closed during this stroke.
Just a note here. Do you remember the definition of work (force multiplied by distance)?
Here, the force of combustion gases on the piston pushes it down, therefore moving the piston.
Consequently, we have a force acting through a displacement, creating work, or energy. The force is not constant; therefore the rate at which work is generated is also not constant.
Finally, the fourth stroke is the exhaust. As the piston starts to move up again, the exhaust
valve is opened, allowing the piston to push out the hot, burned gases, almost completely clearing
the cylinder of the spent fuel-air mixture and preparing it for the repetition of the first stroke,
sucking in fresh air as the exhaust valve is closed and the intake valve is opened. The cycle repeats
until the engine is shut off.
What is the purpose of higher compression ratios if they require more expensive gasoline? In general, the higher the compression ratio, the better the efficiency of the engine (the short sketch after the following list quantifies this for the ideal cycle). Engine efficiency percentages range from the 20s to the low 30s. As the compression ratio is increased, the same air is compressed more compactly, increasing its temperature and reducing its volume. As a result:

1. The mixture burns better at higher temperatures.

2. Because compression and combustion happen in a smaller volume, the fuel mixes better and burns more quickly and completely.

3. Higher pressures produce larger forces.

4. More power is squeezed from the combustion gases.
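The air-standard (ideal Otto cycle) efficiency formula, a standard thermodynamics result that is not derived in this chapter, makes this concrete: the ideal efficiency grows with the compression ratio. The short sketch below is only an idealized illustration; real engines, as noted above, reach only the 20s to low 30s in percent.

```python
# Ideal (air-standard Otto cycle) efficiency as a function of the
# compression ratio r: eta = 1 - r**(1 - gamma).  This is a textbook
# idealization, not a prediction of real engine efficiency.

gamma = 1.4   # same ideal-gas exponent used in Eq. (4.2)

for r in (8, 9, 10, 11):
    eta = 1.0 - r ** (1.0 - gamma)
    print(f"compression ratio {r:2d}: ideal efficiency = {eta:.1%}")
```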
When an engine operates at high altitudes, since the air is thinner, less air enters the cylin-
ders. As a result, compression pressure is lower and the engine delivers less power. Turbochargers
are used to increase the intake pressure and push more air into the cylinder, especially at higher
altitudes.
For ideal gases, the ratio between the pressure at bottom-dead-center P1 and top-dead-center P2 can be calculated as a function of the compression ratio rc (the ratio of the volumes at bottom-dead-center V1 and top-dead-center V2, or rc = V1/V2) as:

\[ \frac{P_2}{P_1} = \left(\frac{V_1}{V_2}\right)^{n}, \qquad (4.2) \]

where n ≈ 1.4 for ideal gases. Therefore, we can see that for a compression ratio of 8, the pressure ratio will be about 18, whereas for a compression ratio of 11, the pressure ratio increases to about 28, a significant increase.
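As a quick numerical check of Equation (4.2) (a minimal sketch; the only assumption is the ideal-gas exponent n = 1.4 quoted above):

```python
# Pressure ratio across the compression stroke from Eq. (4.2):
# P2/P1 = (V1/V2)**n, with n = 1.4 for an ideal gas.

n = 1.4
for rc in (8, 11):
    pressure_ratio = rc ** n
    print(f"compression ratio {rc:2d}: P2/P1 = {pressure_ratio:.1f}")
# Prints about 18.4 for rc = 8 and about 28.7 for rc = 11.
```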
Except in specific applications such as 1-cylinder lawn mower or power tool engines and
some small cars with 2- or 3-cylinder engines, most automobile engines have at least 4 cylinders.
Five, 6, 8, and even 12 cylinders are also common. In multiple-cylinder engines each cylinder and
piston combination operates exactly as mentioned before. However, all connecting rods are at-
tached to a common crankshaft. Consequently, the movements of the pistons are all coordinated.
For example, in a 4-cylinder engine, if one piston is at top-dead-center and is at the beginning
of its intake stroke, another piston might be at bottom-dead-center and at the beginning of its
upward motion to compress the fuel-air mixture. A third cylinder may be at TDC and at the
beginning of its power stroke, while the fourth is also at BDC and ready for its exhaust stroke.
The same sequence continues between all four, and as a result, in every stroke, one of the cylinders
is at its power stroke.
As mentioned earlier, in 4-stroke engines, the total cycle for each piston requires two rotations of the crankshaft, or 720°, and therefore, each stroke is one-quarter of this, or 180°. Consequently, in a 4-cylinder engine, one power stroke occurs at every 180° of crankshaft rotation. As a result, the multiple-cylinder arrangement will make the output power much smoother than if there were only one larger cylinder. For a 6-cylinder engine, the power stroke is at every 720°/6 = 120°, and for an 8-cylinder engine, it is at every 720°/8 = 90°. Therefore, even for the same size engine, a 6- or 8-cylinder engine will run much more smoothly than a 4-cylinder engine. Notice that since one cylinder at a time produces power, the output is more uniform, whereas if they were all arranged to fire simultaneously (which is possible if they are all connected to one crank), the output would vary much more, causing a much rougher ride.
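The crank-angle arithmetic above is simple enough to sketch in a few lines (720° per complete 4-stroke cycle, divided evenly among the cylinders when the firing is evenly spaced):

```python
# Crank-angle interval between successive power strokes in a 4-stroke
# engine with evenly spaced firing.

def firing_interval_deg(num_cylinders: int) -> float:
    """Degrees of crankshaft rotation between power strokes."""
    return 720.0 / num_cylinders

for n in (1, 4, 6, 8):
    print(f"{n} cylinder(s): one power stroke every "
          f"{firing_interval_deg(n):.0f} degrees of crankshaft rotation")
```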
It should be mentioned here that it is extremely crucial that the valves close and open at
exact proper times. The valves are opened and closed by pear-shaped cams on a camshaft (Fig-
ure 4.12). To coordinate the valve timing with the position of the pistons, the camshaft is run
directly by the crankshaft through gears, a timing belt, or a timing chain at half the speed. As the
camshaft rotates, the cams on it turn and open the valves; springs close the valve. All these mo-
tions require work, which comes from the crankshaft (part of the power developed by the power
stroke). Therefore, part of the energy of the engine goes into running its internal parts. Later-
model engines may have two valves for intake and two valves for exhaust in order to speed up the
process of intake and exhaust. Consequently, a 4-cylinder engine may have 16 valves (specified as
DOHC engine for double-overhead-cam engine). In these engines, since the four valves can be
in four corners of the combustion chamber in the shape of a plus sign, there is room in the middle
for the spark plug. As a result, the combustion starts in the middle of the chamber with equal
distance to the perimeter, more completely burning the fuel-air mixture. Therefore, these engines
produce less pollution and are more efficient too. Figure 4.13 shows the same cylinder-head block
with the valve arrangement and the camshafts on it.
Figure 4.12: A cam opens and closes the valve as it rotates.
Figure 4.13: The valve arrangement and the camshaft on a cylinder-head block.
To facilitate quicker passage of air into or out of the cylinder, the designer of the engine would want to open the valves as much as possible. However, there is a limit to how much this can be done because when the piston gets to top-dead-center, the remaining volume is very small and we do not want the piston to run into the valves. However, there is an additional concern. Like any other mechanical device, it is possible that the timing belt or chain may break. In that case, some valves may remain open while the piston continues to travel to the top, eventually running into them. This can be disastrous to both the piston and the valves, and should be avoided at all costs. This is why manufacturers recommend that every so often, the timing belt be replaced before it breaks. Others use a timing chain, which in general can last much longer without failure.
Additionally, it is possible to design the engine in such a way as to ensure that the pistons and the valves will not collide at all. These engines are referred to as non-interference engines, versus interference engines in which the pistons and valves may collide if the timing belt breaks. In non-interference engines, although the engine stops working if the belt breaks, it remains safe, and as soon as the belt is replaced, the engine can be used again—a minor repair that can be done easily. In interference engines, if the timing belt breaks, the engine may require a major overhaul. Find out what the engine in your car is and how often you need to replace the timing belt.
As you may imagine, there is a very large amount of heat generated in an engine, a large
portion of which must be transferred to the environment; otherwise, the engine parts will overheat
and will be damaged. In order to keep the engine cool, most engines have a water-cooling system
and a radiator that transfers excess heat to the environment. To do this, there are water passages
throughout the engine block. A water pump forces the water around the cylinders and the engine
body, and later, through the radiator. With the aid of a fan, the radiator transfers the heat out. To
keep a constant range of temperatures, a thermostat is used to stop the flow when the coolant is cold and to open when it gets hot.
Alternately, some engines, including some automobile engines as well as airplane engines,
are air-cooled. In this case, the flow of air over the engine fins cools the engine.
The other major issue in engines is friction between the contact surfaces, including the piston and the cylinders, the connecting rod and the cranks, and the cranks and the crankshaft. The friction causes additional heat that must be removed as well. To reduce friction and to cool down these engine parts, they are constantly lubricated with engine oil. The crankcase, the big reservoir at the bottom of the engine block, is filled with oil. An oil pump, sometimes inside the crankcase, pumps the oil between these contact surfaces and also splashes some oil onto the inner surfaces of the cylinder when the piston is at the top, lubricating the contact surface as the piston slides down. Of course, the pumping of the oil also takes away a little more of the engine power.
4.3.2 2-STROKE ENGINES
So far we have studied 4-stroke engines. However, as mentioned earlier, there are also 2-stroke engines that are used with simpler systems such as motorbikes or model airplanes. In this case, all necessary parts of the cycle have to happen within two strokes (one complete revolution of the crankshaft), or within 360°.
There are advantages and disadvantages to 2-stroke engines. One advantage is that since the power stroke happens at every 360°, we should expect the power development per cycle to be denser in a 2-stroke engine than in a 4-stroke engine, where power development occurs once every 720°. But due to other inefficiencies of the 2-stroke engine, this ratio is not twice as much. Another advantage of a 2-stroke engine is that since it lacks the valve arrange-
ment necessary in a 4-stroke engine, it is usually much simpler with fewer parts. Therefore, it is an appropriate design for model airplanes and other applications where cost, space, and weight are important issues. However, as we will see, due to their construction, these engines are more polluting and wasteful, and due to the lack of an oil pump, they require that the oil be added to the fuel. Therefore, the engine burns a mixture of oil and gasoline, which makes it even more polluting. It is expected that the mixture lubricates the engine as it goes through the system.
Two-stroke engines do not have intake and exhaust valves. Instead, there are two openings on the lower part of the cylinder body that are normally closed when the piston is up and covers them, and open as the piston moves down. In 2-stroke engines, the crankcase is also closed except through a valve, as depicted in Figure 4.14. When the pressure in the crankcase is lower than the outside, the valve opens and air enters the crankcase; when the pressure in the crankcase is higher, it simply closes. Therefore, as the piston moves up and creates relatively lower pressure in the crankcase, air is sucked in. As the piston moves down, the valve closes and the air that is trapped in the crankcase is pushed up into the cylinder through the intake opening, all the while sucking a little gasoline-oil mixture with it into the cylinder (see the previous discussion about the Venturi effect). Unlike 4-stroke cycles, more than one thing happens simultaneously during each stroke of the 2-stroke cycle. Therefore, we need to start at some arbitrary point, follow all that happens, and eventually end at the same point in order to see how this engine works.
Imagine that the piston is moving down and is close to its bottom-dead-center as in Fig-
ure 4.15a. By this time, the intake valve in the crankcase is closed due to the increased pressure in
the crankcase, the exhaust port is opened and the consumed fuel-air mixture is mostly out, and
as the piston continues its downward motion, the intake opening on the cylinder opens as well,
and the somewhat-compressed fuel-oil-air mixture in the crankcase is pushed into the cylinder.
As the piston starts its upward motion (Figure 4.15b), it closes the intake and exhaust
openings, opens the crankcase intake valve bringing new fuel-oil-air mixture into the crankcase
for the next cycle, and compresses the mixture in the cylinder until it reaches near the top-dead-
center. At that point, a spark ignites the mixture, starting the downward power stroke.
Once again, as the piston moves down, it closes the crankcase intake valve, opens the exhaust and the intake openings in short sequence, and repeats the cycle until the engine stops. Clearly, before all the exhaust gases escape, a new fuel-oil-air mixture starts entering the chamber and mixing with them. This reduces the efficiency of the engine and also lets some of the new unburned mixture out the exhaust, polluting the air. As was mentioned earlier, these engines are simple, with fewer moving parts, and with more frequent power strokes, but are more polluting and less efficient.
Figure 4.14: Schematic of a 2-stroke engine.

4.4 THERMODYNAMIC REPRESENTATION OF THE SPARK-IGNITION POWER CYCLE

Similar to the refrigeration cycle, where we compared the actual work of the system with its thermodynamic representation, we can do the same for power cycles. This can help engineers
design engines with desired specifications, calculate the power developed by different size engines,
and help them design more efficient and better engines. It is crucial for an engineer to work with
thermodynamic representations.
Once again, it is very useful to also study the temperature-entropy (T–s) diagram as well as the pressure-volume (P–V) diagram. However, since we have not studied entropy as a tool, we will skip the (T–s) representations. Figure 4.16a shows the (P–V) diagram for an idealized power development cycle. Segment 1-2 represents the compression, and as shown, while volume decreases, pressure increases as the piston moves up toward the top-dead-center. Segment 2-3 represents the combustion of the fuel-air mixture in the chamber. In the idealized cycle, this combustion of the mixture is assumed to be very quick, so fast that ideally (reality is close to this but not exactly) the volume is not changed, but there is a huge increase in the pressure. Segment 3-4 is the expansion of the gases, the development of power in the engine, when high-pressure gases push down the piston and create the force or moment that rotates the crankshaft. As shown, the pressure decreases while volume increases as the piston moves down. At this point, the exhaust valve is opened and the remaining gas escapes, rejecting the remaining heat. Ideally, this also happens instantaneously, and therefore, at constant volume.
Figure 4.15: The strokes of a 2-stroke engine.
The difference between an actual 4-stroke engine cycle and the thermodynamic cycle representation is the remaining two strokes of exhaust and intake. In 4-stroke engines the piston moves up and exhaust gases are pushed out (which requires a little more energy), and subsequently the piston moves down and the fuel-air mixture is pulled in (also requiring a little more energy). However, the ideal thermodynamic cycle does not show these; the actual thermodynamic cycle representation in Figure 4.16b includes an additional section to represent the exhaust (segment 4-5) and intake (segment 5-1) strokes. Also notice that in reality, volumes change during ignition and heat rejection. In reality, combustion requires time to complete. Ideally, it is best to start the combustion before top-dead-center and allow it to complete at the same volume at which it started. Therefore, in reality, the spark ignition occurs as much as 20° before top-dead-center and, as shown in Figure 4.16b, combustion ends at about the same volume.
As mentioned earlier, the ideal thermodynamic representation only represents two of the four strokes of the engine. You may notice that a 2-stroke engine more closely matches this representation, even though the compression and exhaust are not instantaneous as the cycle assumes.

The area surrounded by the four segments in the P–V diagram represents the work developed by the engine in each complete cycle (this is the same as pressure multiplied by volume at each instant: the volume of a cylinder is the area of the base multiplied by its height, and in an engine, the volume is the piston area multiplied by the stroke of the piston. Conversely, pressure multiplied by an area is force, and force multiplied by distance is work. Therefore, pressure multiplied by volume is the same as force multiplied by distance, both representing work).
Figure 4.16: Thermodynamic representation of the spark-ignition power cycle.
An engineer can use this graph to estimate or calculate the power output of the engine. The ratio of the power developed with respect to the chemical energy of the input gasoline determines the efficiency of the engine, and can be estimated from the graph. In reality, the efficiencies are measured under more realistic conditions. It should be mentioned here that fuel injection has increased the efficiency of modern engines. However, there is a large set of desirable characteristics and undesirable consequences that play an important role in the efficiency of engines and cars in general. The shape (aerodynamics) of a car, the desired acceleration and power, the weight of the car, the accessories that are operated by the engine, etc., all affect the efficiency of the car and its MPG rating. High acceleration and high power output mean that the engine is very powerful when needed, but in most conditions its power is excessive and not used, reducing its efficiency significantly. At the same time, we desire to reduce pollution, and therefore add limiting devices and pollution-reduction systems to engines that limit their performance and reduce other desirable characteristics. Therefore, like many other engineering decisions, the design of the engine and the chosen size and power characteristics are compromises, based on marketing and engineering considerations.
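To make the idea that the enclosed area of the P–V loop equals the work per cycle concrete, here is a minimal numerical sketch of the idealized cycle of Figure 4.16a. All inputs (cylinder volume, intake pressure, the assumed pressure jump at ignition) are made-up illustrative values, and segments 1-2 and 3-4 use the standard ideal-gas isentropic relation P·V^γ = constant rather than anything measured on a real engine:

```python
# Estimate the work enclosed by an idealized (Otto-type) P-V loop by
# numerically integrating P dV.  All input values are assumed.

gamma = 1.4                  # ideal-gas exponent, as in Eq. (4.2)
V1 = 0.5e-3                  # volume at bottom-dead-center, m^3 (0.5 L)
r = 9.0                      # compression ratio V1/V2 (assumed)
V2 = V1 / r                  # volume at top-dead-center
P1 = 101e3                   # intake pressure, Pa (about 1 atm)
P3 = 3.0 * P1 * r**gamma     # assumed pressure just after ignition (point 3)

def p_compression(V):        # isentropic curve for segment 1-2
    return P1 * (V1 / V) ** gamma

def p_expansion(V):          # isentropic curve for segment 3-4
    return P3 * (V2 / V) ** gamma

# Trapezoid rule over the area between the expansion and compression
# curves; segments 2-3 and 4-1 are at constant volume and do no P dV work.
N = 1000
work = 0.0
for i in range(N):
    Va = V2 + (V1 - V2) * i / N
    Vb = V2 + (V1 - V2) * (i + 1) / N
    work += 0.5 * ((p_expansion(Va) - p_compression(Va))
                   + (p_expansion(Vb) - p_compression(Vb))) * (Vb - Va)

print(f"Estimated work per cycle: {work:.0f} J")   # a few hundred joules here
```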
On a side note, you may have noticed how you were repeatedly asked to imag-
ine certain things in order to visualize motions and other happenings in a still
picture or figure throughout the book. It is very common in engineering to vi-
sualize motions in still pictures and drawings and to see things in one’s mind.
Whenever we design something new that does not exist we see it in our mind’s
eye. Some individuals may already be good at this, others learn to do it. In en-
gineering problem solving one needs to visualize things that do not exist and
see motions and happenings that are not shown. Whether an engine, a mech-
anism, a robot, or a thermodynamic cycle, we see much more in a drawing
than is shown.
4.5 COMPRESSION-IGNITION DIESEL ENGINE POWER
CYCLE
Compression-ignition diesel engines are quite similar to spark-ignition 4-stroke engines. They are predominantly 4-stroke, similarly structured to have intake, compression, power, and exhaust strokes. They also have similar valve operation and construction. The major difference between them is in the way fuel is delivered; in spark-ignition engines, the fuel and air are mixed and compressed before the mixture is ignited by a spark plug ahead of the piston reaching the top-dead-center, whereas in compression-ignition engines only air is compressed and the fuel is injected into it toward the end of the compression stroke. In diesel engines, compression ratios are much higher than in gasoline engines, and therefore, there is no need to use a spark plug to ignite the mixture; it auto-ignites when the fuel is injected into the hot air.
In diesel engines, due to the large compression ratios of 14–20 (even higher in larger systems), the air (and not a fuel-air mixture) is compressed to a high degree, significantly increasing its temperature. Equation (4.3) shows the resulting temperature as the air is compressed (temperatures are in kelvin, K = °C + 273, or in degrees Rankine, °R = °F + 460):

\[ T_2 = T_1\,C_r^{\,0.4}, \qquad (4.3) \]

where Cr is the compression ratio and T1 and T2 are the temperatures before and after compression. Table 4.1 shows the pressure ratios and corresponding temperatures for an initial temperature of 20°C = 68°F for different compression ratios, from Equations (4.2) and (4.3).
When the compression ratio increases, the temperature increases too. For a compression ratio of 8, the approximate temperature will be about 400°C (752°F), whereas for a compression ratio of 15, it will be 592°C (1100°F). Since the auto-ignition temperature of diesel fuel is lower than that of gasoline, it can even ignite without a spark at these temperatures. Therefore, instead of a spark plug (and its support system), the fuel is injected into the hot, compressed air.
Table 4.1: Compression ratios and associated pressure ratios and temperatures

Compression ratio Cr    Pressure ratio P2/P1    Approximate temperature T2, °C (°F)
 8                      18                      400 (752)
11                      29                      490 (914)
15                      44                      592 (1100)
20                      66                      698 (1288)
25                      90                      789 (1450)
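The entries of Table 4.1 can be reproduced directly from Equations (4.2) and (4.3); the short sketch below does exactly that (the only assumptions are the exponent n = 1.4 and the 20°C starting temperature stated above):

```python
# Reproduce Table 4.1 from Eq. (4.2) (pressure ratio) and Eq. (4.3)
# (temperature after compression), starting from 20 deg C = 293 K.

T1 = 20.0 + 273.0                       # initial temperature, K
for Cr in (8, 11, 15, 20, 25):
    pressure_ratio = Cr ** 1.4          # Eq. (4.2): P2/P1 = Cr**n
    T2 = T1 * Cr ** 0.4                 # Eq. (4.3), kelvin
    T2_C = T2 - 273.0
    T2_F = T2_C * 9.0 / 5.0 + 32.0
    print(f"Cr = {Cr:2d}: P2/P1 = {pressure_ratio:5.1f}, "
          f"T2 = {T2_C:3.0f} C ({T2_F:4.0f} F)")
```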
The first stroke of a diesel engine is the intake stroke, when the intake valve is opened and air is sucked in. With turbo-charging, the air is pushed into the chamber at slightly higher pressure; this is very helpful at higher elevations where the air pressure is a little lower. As the piston moves toward the top-dead-center in the compression stroke, the air is compressed, increasing its pressure and temperature. Near the top-dead-center, diesel fuel is injected at high pressure into the chamber, and due to the high temperature of the air compared to the auto-ignition temperature of the diesel fuel, it ignites immediately and burns as the piston continues its downward power stroke, creating the torque at the crankshaft. During the fourth stroke, as the piston travels up to the top-dead-center, the exhaust valve is opened, allowing the burnt gases to escape. The valve closes as the piston moves down again while the intake valve is open, repeating the cycle.
Advantages of a diesel cycle include higher efficiencies due to higher compression ratios,
lack of an ignition system, and lower cost of diesel fuel (in most places). Diesel engines are usu-
ally powerful and are used for trucks, locomotives, marine applications, factories (to generate
electricity and run other machines), and even in some power plants. Disadvantages include the
lack of availability of diesel fuel as compared with regular gasoline in gas stations (at least in the
U.S.), lower power delivery at higher altitudes, more noise, and difficulty starting the engine in
cold temperatures. Many diesel engines include a heating element in the combustion chamber
for cold-starting the engine. Diesel engines are also more polluting, although they have improved
in recent years. However, due to the availability of plenty of air in the mixture, diesel engines
produce less CO and more CO2.
4.6 THERMODYNAMIC REPRESENTATION OF
COMPRESSION-IGNITION POWER CYCLE
Similar to the thermodynamic representation of the spark-ignition power cycle, and for the same
reasons, we can also represent the compression-ignition cycle (also called constant pressure com-
bustion cycle) with both (T–s) and (P–V) thermodynamic graphs. Figure 4.17a shows the ideal compression-ignition diesel cycle. Segment 1-2 represents the compression of the air in the
cylinder as the piston moves up toward the top-dead-center, at which point fuel is injected into the cylinder. Segment 2-3 represents the combustion. The ideal cycle assumes that combustion occurs at constant pressure because fuel continues to burn as the piston moves down; therefore, the pressure increase due to combustion compensates for the pressure loss due to the increase in volume. Segment 3-4 represents the expansion of gases (development of power), and segment 4-1 represents the rejection of remaining heat or exhaust. Figure 4.17b shows a more realistic diesel cycle, where the compression/combustion and expansion are broken into two segments. As in the case of gasoline engines, we can also add the intake and exhaust strokes to the diagram.
Figure 4.17: Thermodynamic P–V representation of the compression-ignition cycle.
Have you noticed the particular noise that diesel trucks make as they travel downhill at high speeds? This noise is due to braking the truck with the engine instead of powering it with the engine. As we have discussed in previous chapters, the kinetic energy of a body is:

\[ K = \tfrac{1}{2}\,m V^{2}, \qquad (4.4) \]
where m is the mass and V is the velocity. Trucks are massive, especially when fully loaded. When
they travel at high speeds, their kinetic energy is tremendously large. Slowing a truck in a downhill
stretch of the highway is almost impossible without damaging the brakes, assuming they even
work. To control the speed of a truck in downhill stretches and to slow it down to manageable
values, the engine is practically shut down by cutting off fuel to it while still keeping it engaged
with the transmission, forcing it to rotate. As a result, the engine acts as a pump, not an engine,
requiring work to turn. This work is provided by the kinetic energy of the truck, slowing it down.
In other words, in order for the engine to keep turning without fuel, it takes the kinetic energy of
the truck and slows it down. To increase this effect, it is possible to alter the opening and closing
of the valves and increase the work still required to turn the engine. All these alterations can be
studied and designed using the same thermodynamic representations.
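As a minimal numerical sketch of Equation (4.4), consider a loaded truck (the mass and speed below are assumed, illustrative values, not figures from the text):

```python
# Kinetic energy of a loaded truck, Eq. (4.4): K = (1/2) m V^2.
# The mass and speed are assumed, illustrative values.

m = 36_000.0                  # mass of a loaded truck, kg
V = 100.0 * 1000.0 / 3600.0   # 100 km/h converted to m/s
K = 0.5 * m * V**2            # kinetic energy, joules
print(f"Kinetic energy: {K / 1e6:.1f} MJ")   # roughly 14 MJ

# This is the energy the engine (acting as a pump) or the brakes must
# absorb and reject as heat to slow the truck down.
```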
It should be mentioned here that we have only looked at three thermodynamic cycles.
However, there are many others that relate to other systems, including Stirling and Ericsson
cycles, the Carnot cycle, and the Brayton cycle.
4.7 ROTARY (WANKEL) ENGINES
Rotary engines follow a similar thermodynamic cycle, but are mechanically different from reciprocating internal combustion engines. They have intake, compression, ignition, expansion, and exhaust segments. However, instead of the usual slider-crank mechanism (piston and cylinder, connecting rod, and a crank), rotary engines include an epitrochoid-shaped housing with two openings for intake and exhaust and a three-sided rotor, as shown in Figure 4.18. The rotor both rotates and orbits around the fixed geared shaft called the eccentric shaft (e-shaft) with a 1/3 ratio, such that for every three rotations of the eccentric shaft, the rotor rotates only once. This forces the three corners of the rotor to always remain in contact with the housing. Spark plugs ignite the compressed fuel-air mixture at the proper time. These engines are simpler, smaller, and lighter, and provide a better power-to-weight ratio. However, they are relatively new compared to reciprocating engines, and therefore, there is less experience available with the design and service aspects of these engines.
As shown in Figure 4.18, at any given time, multiple segments of the cycle happen si-
multaneously. For example, in Figure 4.18a, the engine is at the end of its intake, in the middle
of its power development, and in the exhaust stroke all at the same time. In Figure 4.18b, it is
compressing the fuel-air mixture, developing power, and finishing exhaust. In Figure 4.18c the
engine is taking in the air mixture, the spark plugs initiate combustion, and exhaust has started.
Figure 4.19 shows the rotor of an actual engine in the combustion chamber.
4.8 POWER GENERATION
Although engineers use similar temperature versus entropy (T–s) and pressure versus volume (P–V) diagrams to design and analyze power generation systems (such as in a power plant), we will only discuss the principles of these systems here because in real life, these systems can be complicated and there are too many variations that make each system uniquely different from another, therefore changing the efficiency of the system.
Figure 4.18: Rotary (Wankel) engine operation.
Figure 4.19: A rotary engine’s combustion chamber and rotor in different positions similar to posi-
tions shown in Figure 4.18.
As was mentioned earlier, energy is neither created nor destroyed; it is only converted from
one form to another. When we speak of power generation in a power plant, we actually mean the
conversion of one form of energy such as thermal or hydraulic or chemical energy into electrical.
Power generation, among others, includes conversion of energy from coal, gas, or other hydro-
carbon and fossil fuels, nuclear, wind, hydraulic, and solar into electrical energy. In each of these
systems, a generator is turned at a constant speed to convert the energy into electrical form (see
Chapter 6 about generators and motors). The power needed to rotate the rotor of the generator
is provided by one of the aforementioned systems.
One of the most common systems used is a steam generator. In these systems, fossil fuel is burned in order to turn water into steam at high pressure and temperature, raising its energy to a very high level. The pressurized and hot steam is then pushed through a steam turbine, causing it to turn. The shaft of the turbine is connected to the generator, and thus, it rotates. The energy needed to boil the water into steam may come from burning coal, gas, or other hydrocarbons, a nuclear reaction, or similar. Coal is inexpensive and plentiful, but it is very dirty and creates a lot of pollutants, including carbon dioxide. Coal is used all over the world, but in certain countries that use it extensively, the level of air pollution is also very high. Since in recent years gas has become much more available and much cheaper, and it burns much more cleanly, many systems have been converted to burn gas.
An important issue here is that, as was discussed earlier, due to the second law of thermodynamics, it is impossible for all the energy of the steam to be converted to electrical power; that would violate the second law. Therefore, at best, the efficiency of a power plant is in the low-40% range. This means that close to 60% of the energy in the fuel is wasted as rejected heat.
Another popular system is to turn the generator of a power plant by a jet engine; here, fuel
is burned in a jet engine just like the way it is burned in an airplane engine in order to fly it.
However, instead of the jet engine pushing through the air to fly the plane, the shaft is connected
to the generator, rotating it to generate electricity. The burned air/fuel mixture leaves the system with a high level of energy still left in it because it is still very hot and has much kinetic energy (it comes out of the jet engine at a very high speed). Therefore, like most other systems, the efficiency of the jet engine is low and most of the energy is wasted as rejected heat.
An alternative, originally designed decades ago but becoming more popular only recently,
is a combined cycle. In a combined cycle, a generator is powered by the jet engine as described
earlier. However, the high energy left in the burnt gas is captured by using it to boil water into
steam just like the steam power systems. Since more of the energy is captured between the two
systems compared to either of them alone, the efficiency of such a system can be more than 60%,
a significant increase.
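A rough sketch of why capturing the rejected heat pays off: if the steam cycle is run on the heat the gas turbine rejects, the two efficiencies combine. The individual efficiencies below are assumed, typical-order values (not figures from the text), and the simple relation ignores losses in the heat-recovery boiler:

```python
# Combined-cycle efficiency estimate.  The component efficiencies are
# assumed values; losses in the heat-recovery steam generator are ignored.

eta_gas = 0.38     # assumed gas-turbine (jet-engine) efficiency
eta_steam = 0.40   # assumed steam-cycle efficiency on the recovered heat

# The steam cycle only receives the heat the gas turbine rejects, (1 - eta_gas).
eta_combined = eta_gas + (1.0 - eta_gas) * eta_steam
print(f"Combined-cycle efficiency: {eta_combined:.0%}")   # about 63%
```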
This indicates how engineering principles can be used to make a system better, more efficient, and less polluting. Thermodynamic cycles and the (T–s) and (P–V) diagrams can be used to design and tune the system to its best possible performance level.
4.9 CONCLUSION
As you have probably noticed, the intention of this chapter was not to discuss refrigerators or
engines or power generation, but how the study of thermodynamics is necessary in order to know
what these cycles are and how they are used by engineers to design and improve these systems.
There is, of course, a lot more to thermodynamics than what is discussed here. But hopefully this discussion shows the importance of thermodynamics in our everyday technological lives. There are over a billion cars in the world. Imagine if through thermodynamic studies we could improve
their efficiency by a couple of percentage points. Imagine how much energy we would save and
how much less pollution we would have to deal with.
4.10 BIBLIOGRAPHY
[1] Gluckstein, M. E. and Walcutt, C., “End-Gas Temperature-Pressure Histories and Their Relation to Knock,” Transactions of SAE, 69, 529, 1961.
CHAPTER 5
Moments of Inertia
Mass and Area Moments of Inertia, Accelerations,
Inertial Forces, Strengths, and Strains
5.1
INTRODUCTION
If you think of any classic cartoon, it is inevitable that at some point a beloved animal character will slowly crawl out on the branch of a tree as it is pursued by its nemesis, bending the branch more and more until it breaks. Have you ever wondered what would happen if the animal had some knowledge of engineering and could calculate how far it could go before the branch would break (knowing about engineering principles makes cartoons even more interesting)? In real life, we can actually predict the strength of the part we are loading and calculate how much load it can safely carry without breaking. Extending the idea of cartoons to real life, we can find countless examples where the situation is the same. Simply think of the load on the wings of an airplane; it is actually similar to the situation mentioned earlier. This chapter discusses these relationships and how these ideas are related to each other. Let's start with the following experiment.
Please take a ruler or a piece of wood or a similar object, place it between two raised points (perhaps two cups or books), and then press it in the middle as in Figure 5.1a. You will notice that it bends. A larger force exerted by you will cause the ruler to bend more.
Now turn the ruler 90° onto its side and repeat as in Figure 5.1b. You will notice that the ruler, under the same force, does not bend at all (or bends very little). So why is it that although it is the same object, with the same dimensions, the same mass or weight, and the same strength, in one orientation it bends more easily than in another orientation?
The reason is that the moment of inertia of the object, in this case the area moment of inertia, is different between the two orientations. Although everything else is the same, the area moments of inertia are not. The same is true when we deal with the motion of objects, where mass moment of inertia is a factor. In this chapter we will examine how moments of inertia affect the behavior of objects, both as they relate to static (not moving) and dynamic (moving) situations.
It should be mentioned here that area moment of inertia is not an accurate description of this entity, but it is a name which we commonly use. A better name would be second moment of the area.
Figure 5.1: How much a ruler bends under the same load depends on its area moment of inertia (second moment of the area) in that orientation.
In fact, we can define both the first and second moments of an area. The first moment of area is used to calculate the center of the area, and although this has many applications, we will not discuss it here. We will first discuss the area moment of inertia; the mass moment of inertia will be discussed second.
5.2 SECOND MOMENT OF THE AREA (AREA MOMENT OF INERTIA)
Second Moment of Area (or Area Moment of Inertia) is a representation of the dimensions of an
area and its distribution (thin, tall, round, square, hollow). Among other things, the second moment of the
area is a measure of how much a body resists bending under a force or resists rotation under a
torque (such as in the rotation of one end of a shaft relative to the other end when twisted).
Let’s consider a simple bar with a rectangular cross section as shown in Figure 5.2. Although
it is very easy to derive the second moment of a rectangular area by integration, we will skip this
derivation here. Let it suffice to say that the second moment of a rectangular area about the x-axis
is:
$I_x = \frac{1}{12} b h^3,$   (5.1)
where b is the length of the base of the rectangle and h is its height. Notice that the second
moment of the area is independent of the length of the bar.
In order to get a feel for the numerical value of the second moment of the area of a rectan-
gular beam we need to look at a few examples with real numbers. Please stay with the numerical
examples as they clarify the point much better than a simple equation. So let’s assume that the
base of the beam is one inch and the height is 4 in. The second moment of the area of the beam
Figure 5.2: The rectangular cross section of a beam.
will be:
$I_x = \frac{1}{12}(1)(4)^3 = 5.33 \text{ in}^4.$
Notice that the unit for the second moment of the area is in⁴ (or cm⁴, etc.). Also notice that this is a measure of the area and its distribution, meaning the size of the area and its relative width and height, but that it has nothing to do with what kind of material it is or how strong it is. Note also that for this example, the area of the cross section is 1 × 4 = 4 in².
Now let’s do the same, but this time we will turn the beam on its side such that the base
4 in2, but this
will be 4 in while the height is 1 in. Notice that the area is still the same 4
time, the area moment of inertia will be:
D
(cid:2)
1
(cid:2)
D
Ix
D
1
12
.4/.1/3
D
0:33 in4:
As you notice, even though the beam is exactly the same, with the same dimensions and
the same area, its second moment of area has changed significantly, in this particular example by a ratio of 5.33/0.33, or more than 16 to 1, all because we simply turned it around.
To better understand this, let's now consider a beam with a square cross section of 2 × 2 in. Here too, the area is the same 2 × 2 = 4 in² as before, but the area moment of inertia is:

$I_x = \frac{1}{12}(2)(2)^3 = 1.33 \text{ in}^4,$
and the ratio, compared to the previous case, is 5.33/1.33 or 4/1. Once again, factors influencing
the magnitude of the second moment of an area are both the actual dimensions of the area and
their distribution.
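The three numerical cases above are easy to reproduce. The following short Python sketch (not part of the original text) evaluates Equation (5.1) for the 1 × 4, 4 × 1, and 2 × 2 cross sections:

```python
def rect_Ix(b, h):
    """Second moment of a rectangular area about its horizontal centroidal axis (in^4)."""
    return b * h**3 / 12.0

for b, h in [(1, 4), (4, 1), (2, 2)]:      # base and height in inches
    print(f"b = {b} in, h = {h} in -> Ix = {rect_Ix(b, h):.2f} in^4")
# b = 1 in, h = 4 in -> Ix = 5.33 in^4
# b = 4 in, h = 1 in -> Ix = 0.33 in^4
# b = 2 in, h = 2 in -> Ix = 1.33 in^4
```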
We can also define the area moment of inertia of the same cross section about the y-axis
(we will see the application of this shortly). In this case, as in Figure 5.2, relative to the y-axis the
height h will be the base and the base b will act as the height, and therefore:
$I_y = \frac{1}{12} h b^3.$
Substituting the same dimensions as before, we will get the second moment of area about the y-axis as 0.33 in⁴. Notice that the second moment of area about the x-axis for our first case, when b = 1, h = 4, is the same as the second moment of area about the y-axis for the second case, when b = 4, h = 1.
If we calculate the approximate second moments of area of the ruler of Figure 5.1, where the ruler is 0.075 in thick and 1.175 in wide, we get the following:

For Figure 5.1a:   $I_x = \frac{1}{12}(1.175)(0.075)^3 = 0.0000413 \text{ in}^4$

For Figure 5.1b:   $I_x = \frac{1}{12}(0.075)(1.175)^3 = 0.0101 \text{ in}^4$

The ratio is 0.0101/0.0000413 ≈ 245. This means it will take about 245 times as much force to bend the ruler in the orientation of Figure 5.1b by the same amount as in Figure 5.1a, an amazing difference (meaning that, most probably, the ruler in Figure 5.1b will break before it bends that much).
Before we continue our discussion of the second moment of area, let’s see how it is used in
calculating the deflection of the beam under a load as well as its stresses.
5.3 DEFLECTIONS OF A BEAM
Deflection relates to how much a beam bends; it is usually calculated at its maximum, in this case in the middle of the beam. The maximum deflection for a simple beam, supported at its two ends (like the ruler of Figure 5.1) and loaded with a single force in the middle, can be calculated by:
$y_{max} = -\frac{FL^3}{48EI},$   (5.2)
where y_max is the maximum deflection at the center of the beam, F is the load, L is the length of the beam, E is the modulus of elasticity, a material property (which we will discuss later), and I is the second moment of the cross-sectional area of the beam, as shown in Figure 5.3a. The negative sign indicates that the beam bends down (below the reference frame x-axis). Since the second moment of the area I is in the denominator, as it gets larger the deflection decreases. As you can see, when the ruler of Figure 5.1 is turned 90°, the only thing that changes in this equation is the second moment of the area of its cross section; otherwise, the load, the modulus of elasticity, and the length remain the same. If the second moment of the area is about 245 times as large, the deflection will be about 245 times smaller compared to the first case, and this is exactly what we see. This fact is used extensively in the design of structures and machine elements in order to limit or increase deflections as necessary. For example, a roof beam should not deflect much, and therefore the beam is laid in the direction of the maximum second moment of the area, whereas in the leaf spring of Figure 5.4, used at the rear axle of a truck, the beam (each leaf of the spring) is laid on its base to increase the deflections, therefore acting as a spring.
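As a quick illustration of Equation (5.2), here is a hedged Python sketch (not from the original text) that compares the midspan deflection of the ruler in its two orientations; the span, load, and modulus of elasticity are assumed values chosen only for illustration.

```python
def max_deflection(F, L, E, I):
    """Midspan deflection of a simply supported beam with a central load, Equation (5.2)."""
    return -F * L**3 / (48.0 * E * I)

# Illustrative numbers (assumed, not from the text): a 12 in plastic ruler,
# E ~ 0.4e6 psi, loaded with 1 lb at midspan.
F, L, E = 1.0, 12.0, 0.4e6
I_flat   = 1.175 * 0.075**3 / 12.0   # ruler lying flat  (Figure 5.1a)
I_onedge = 0.075 * 1.175**3 / 12.0   # ruler on its edge (Figure 5.1b)

print(f"flat:    y_max = {max_deflection(F, L, E, I_flat):.3f} in")
print(f"on edge: y_max = {max_deflection(F, L, E, I_onedge):.5f} in")
print(f"ratio of second moments = {I_onedge / I_flat:.0f}")   # ~245
```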
Figure 5.3: The deflection of a beam under a load at its center.
Figure 5.4: A leaf spring used in a truck.
The second moment of the area used in Equation (5.2) is I_x. So when do we use I_x and when I_y? It depends on the axis about which the beam bends. For example, if the beam of Figure 5.2
is loaded with a force in the vertical direction, causing it to bend about the x-axis, we use Ix. If
the beam were loaded with a horizontal force causing it to bend about the y-axis, we would use
Iy.
Now assume that instead of one ruler we would use two of them on top of each other as
in Figure 5.3b. In this case, since they both bend under the same load, we can assume that each
one will carry almost 1/2 of the load, and therefore the deflection will be 1/2 of the first case, or
similarly, that the total second moment of the area is twice as much for two of them, and therefore:
$y_{max} = -\frac{FL^3}{48E(2I)}.$
Notice that this means that the two rulers slide over each other as they bend. This can be likened to bending a telephone book. As you bend the book, all the pages bend together and slide over each other, but they all maintain their original lengths.
Now let’s assume that instead we use a similar ruler or beam, but twice as thick. In this
case, the total amount of material would be the same as using two thinner rulers or beams, and
the overall dimensions would be similar. However, the second moments of the area are different.
Whereas with two rulers, the total second moment of the area is:
Itotal
2
(cid:2)
D
1
12
bh3
D
1
6
bh3;
the total second moment of the area for a beam twice as thick (with its height equal to twice the height of the original beam) is:

$I_{total} = \frac{1}{12} b (2h)^3 = \frac{1}{12} b (8h^3) = \frac{2}{3} b h^3,$
which is four times as large as the first case, with a deflection four times smaller. Notice that unlike the first case, where the two beams slide over each other, in this case there are no separate layers to slide over each other; the beam bends as one piece. This small difference is the reason for the different magnitudes of deflection (and, as we will see later, stresses). It is as if you held all the pages of the telephone book together while trying to bend it; if they were all glued together, the book would strongly resist bending. The sliding of the layers of the beam over each other is called shear. When the layers slide over each other and consequently there is no resistance between them, there is no shear force; when they are prevented from freely sliding over each other, there is a shear force between the layers, and consequently, the book does not bend. Figure 5.5 shows this difference between a telephone book whose pages are prevented from sliding by two large paper clips and one whose pages slide freely. This also explains why cardboard is much stronger (stiffer) than individual sheets of paper with the same total thickness. The paper layers in cardboard are glued together, preventing them from sliding over each other. Plywood is made of thin layers of wood glued together; it too is strong and resists bending. However, plywood is also used in the
Figure 5.5: When the pages of a telephone book are prevented from sliding over each other, the book does not bend as much, due to the difference in second moments of the area.
manufacture of bent surfaces, such as in modern furniture. In this case, the thin layers are first bent to shape, then glued together. Therefore, they maintain their shape.
So why does the sliding of layers over each other matter? To understand this, let's once again look at the cross section of the beam, in this case a rectangle. As you see in Figure 5.6, the centerline of the cross section is called the neutral axis, which passes through the center of the area (called the centroid). For symmetrical cross sections such as a rectangle or a circle, the neutral axis is in the middle. At the plane of the neutral axis, the length of the beam does not change during bending. This means that as the beam bends, its length remains the same at the neutral axis, while all other layers change length. In the case of a beam loaded from above as shown in Figure 5.6, all layers of the beam above the neutral axis must shorten while all layers of the material below the neutral axis must lengthen. The farther away a layer is from the neutral axis, the larger the increase or decrease in its length. Now imagine how much larger the increase and decrease will be when the cross section increases in height. This is why the bending of the beam decreases significantly as the height of the beam increases, which is reflected as h³ in the second moment of the area equation.
Now compare this with doubling the number of beams instead of doubling the height. In the case of two beams, the lengths of the layers of each beam increase or decrease independently, based on each beam's height, while each one slides over the other beam. (Think about the layers of the upper beam below its neutral axis, which lengthen, and the layers of the lower beam above its neutral axis, which shorten. At the interface, one is shortened and one is lengthened; consequently, they slide over each other.) Because the height of each individual beam is less, the increase or decrease in length of the layers is much less. Once again, think about the telephone book and how the lengths of the pages must increase or decrease when glued together versus the pages sliding over each other when not glued. However, when the height of the beam is doubled, there
Figure 5.6: The neutral axis of a rectangular cross section.
is no sliding of the layers; the farther away a layer is from the neutral axis, the larger its shortening
or lengthening.
Obviously, it is possible to have cross sections other than a rectangle. Examples include circular (such as a shaft), hollow circular (such as a tube), hollow rectangular, I-beams, C-channels, L-shaped angles, and many more. There are either formulae for calculating the second moments of area for these shapes, or they can be found in tables.
The second moment of a circular area is:

$I_x = I_y = \frac{1}{4}\pi r^4,$   (5.3)
where r is the radius of the circular cross section. Notice that due to the symmetry of a circular
area, the second moments about the x-axis and y-axis are the same.
The second moment of the area for a tube can easily be calculated by subtracting the second moment of the inner area (treated as missing material or as a negative moment) from the second moment of the total area as:

$I_x = \frac{1}{4}\pi r_o^4 - \frac{1}{4}\pi r_i^4,$   (5.4)
where r_o and r_i are the outer and inner radii of the tube, as shown in Figure 5.7a. The same can be done for a hollow rectangular tube or other shapes. Similarly, we can add second moments of the area together for shapes that are combinations of elements with which we are already familiar.
For example, suppose that two rectangular beams of the same size are placed next to each other
as shown in Figure 5.7b. e total second moment of the area about the x-axis for both will be
the summation of the moments, or:
$I_x = \frac{1}{12} b h^3 + \frac{1}{12} b h^3 = \frac{1}{6} b h^3.$   (5.5)
If you have ever worked with electrical wires you have probably noticed that multi-strand
wires are much easier to bend than single-strand wires of the same gauge (thickness). The reason
Figure 5.7: Second moments of the area for combined areas.
is that multi-strand wires consist of many thinner wires that can slide over each other. The total second moment of the area is the summation of the moments of each strand. However, the second moment of the area for the thicker single-strand wire is much bigger than the second moment of the area for the multi-strand wire, and consequently, it is stiffer. The same is also true for a steel cable versus a steel bar of the same diameter. Cables are much easier to bend than bars because the second moment of the area for a bar is much bigger. The strands of the cable can slide over each other; the layers of the bar cannot.
Tree branches are the same. Thicker branches have a larger diameter, which increases the area moment of inertia, reducing deflection under the force of wind and the weight of fruit, other branches, animals, and leaves. Being as smart as it is, nature provides adequate strength as necessary. Because the loads decrease as we get closer to the top of the tree or to the tip of each branch, its thickness reduces as well. This reduces the weight and optimizes the design; there is basically just enough material to take the load as needed.
Second moments of the area for most common standard building beams, such as I-beams, are available in manufacturers' tables, where engineers readily find them. However, for shapes that are not included in tables or are not common, we can easily calculate the second moments of area, some by mathematical integration, others by combining formulae used for common shapes. To understand this, which also helps in further understanding the idea of the second moment of area, let's consider the Parallel Axis Theorem.
5.4 PARALLEL AXIS THEOREM
As you may have noticed, we calculated the second moment of the area about the neutral axis (we placed the origin of the reference axes at the center and calculated the moments relative to the x-axis and y-axis). However, for many different reasons (which will become clear shortly) we may need to calculate the second moment about other axes away from the neutral axis. The second moment of the area about another axis x′, parallel to the neutral axis, can be found from:
$I_{x'} = I_x + A d^2,$   (5.6)
where A is the area and d is the distance between the two axes. In other words, the second moment of the area about x′ is equal to the second moment about an axis through the centroid plus the area multiplied by the square of the distance between the two axes. This is called the parallel axis theorem. For example, the second moment of the area of a rectangle about the bottom of the rectangle instead of its centerline (Figure 5.8) is:
$I_{x'} = I_x + A d^2 = \frac{1}{12} b h^3 + (bh)\left(\frac{h}{2}\right)^2 = \frac{1}{12} b h^3 + \frac{1}{4} b h^3 = \frac{1}{3} b h^3.$
Figure 5.8: Parallel axis theorem.
Now let’s see where this can be used. Imagine that we model an I-beam (Figure 5.9b), a
very common structural beam element whose second moment of area is often needed for stress
and deflection calculations, as three rectangular-shaped areas attached to each other as shown in
Figure 5.9a. e vertical portion is called a web and the horizontal portions are called flanges. In
this case, the total second moment of the area about the neutral axis x is the summation of the
second moments of each of the three areas, all about the neutral axis x. e second moment of
the web can easily be calculated by Equation (5.1). However, the second moments of the flanges
must also be calculated about the same x-axis that was used for the web, which is a distance of d
away from each flange. erefore, we will need to use the parallel axis theorem Equation (5.6) to
calculate the contribution of the flanges to the total second moment. e total second moment
of area for the I-beam about the x-axis is:

$I_{total_x} = I_{web_x} + 2 I_{flange_x}.$

The second moment of the web about the x-axis is:

$I_{web_x} = \frac{1}{12}(t)(h^3).$

The second moment of each flange about its own axis x′ or x″ is:

$I_{flange_{x'}} = I_{flange_{x''}} = \frac{1}{12}(b)(t^3).$

But since we need the second moment about the x-axis, we use the parallel axis theorem and get:

$I_{flange_x} = I_{flange_{x'}} + A d^2 = \frac{1}{12}(b)(t^3) + (bt)(d^2) = \frac{1}{12} b t^3 + b t d^2.$

Therefore, the total second moment for the I-beam is:

$I_{total_x} = \frac{1}{12} t h^3 + 2\left(\frac{1}{12} b t^3 + b t d^2\right).$
Figure 5.9: (a) A simplified model of an I-beam; (b) an actual I-beam.
To see the significance of this, let's assume we make an I-beam out of three pieces similar to our previous example, 1 × 4 in. If the three were laid next to each other as in Figure 5.10a, the
total second moment of the area would be the summation of their individual moments about the
x-axis:
$I_{total} = I_{web} + 2 I_{flange} = \frac{1}{12}(1)(4)^3 + 2\left(\frac{1}{12}(4)(1)^3\right) = 6 \text{ in}^4.$
However, if they were assembled (and glued/welded together) into an I-beam as in Fig-
ure 5.10b, the second moment of the area would be:
$I_{total} = \frac{1}{12} t h^3 + 2\left(\frac{1}{12} b t^3 + b t d^2\right) = \frac{1}{12}(1)(4)^3 + 2\left(\frac{1}{12}(4)(1)^3 + (4)(1)(2.5)^2\right) = 56 \text{ in}^4.$
Figure 5.10: An I-beam versus its constituent parts next to each other makes a huge difference in the total second moment of the area.
This is over nine times as large. The fact that the flanges are at a distance away from the x-axis significantly adds to the total second moment, compared to the flanges lying on the x-axis. This is why the distribution of the material, and not just the total area, is important in how much load a beam carries or how much it deflects under the load. This example shows the importance of the shape of the beam and how much load the same material carries in a structure or a machine. Now suppose that the same amount of material is used either as a flat sheet or as an I-beam made by cutting it into three strips and gluing the pieces together in the shape of an I-beam (Figure 5.11). Even though they have the same amount of area (same material), the I-beam will carry a much larger
Figure 5.11: A strip of material versus cutting and gluing it into an I-beam. The I-beam carries significantly more load than the strip.
load. Engineers can design structural elements that are much more efficient with less material
because they use these engineering principles in their designs.
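The side-by-side versus assembled comparison above can be checked in a few lines. Here is a hedged Python sketch (not from the original text) that applies Equation (5.1) and the parallel axis theorem to the three 1 × 4 in plates:

```python
def rect_Ix(b, h):
    """Second moment of a rectangle about its own centroidal x-axis (in^4)."""
    return b * h**3 / 12.0

b, t, h = 4.0, 1.0, 4.0          # flange width, plate thickness, web height (inches)
d = h / 2 + t / 2                # distance from the beam's neutral axis to a flange centroid

# Three 1 x 4 in plates simply laid next to each other (each bends about its own axis):
I_side_by_side = rect_Ix(t, h) + 2 * rect_Ix(b, t)

# The same three plates glued/welded into an I-beam (flanges offset by d, parallel axis theorem):
I_ibeam = rect_Ix(t, h) + 2 * (rect_Ix(b, t) + (b * t) * d**2)

print(I_side_by_side)   # 6.0 in^4
print(I_ibeam)          # 56.0 in^4
```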
Another major example of where the second moment of the area is increased by distributing the material farther away from the neutral axis is the use of a truss. Because the distance of the elements of the truss from its neutral axis is increased, its moment of inertia is also increased significantly, enabling it to carry larger loads, especially over larger spans. Figure 5.12 shows an example of a truss used as the main load-carrying element in a ceiling. Look for one in a bridge next time you see one.
Figure 5.13 shows a common dish rack. Can you tell why the body is designed this way? In
addition to their effect on the shape of the rack, the two sets of semi-circular welded horizontal
members create a much larger second moment of area than if they were added together as a thicker
rod, were laid next to each other, or were free to slide over each other.
Figure 5.14 shows two corrugated pieces of cardboard. Looking at their cross sections and
the differences between their construction, can you tell why the one in Figure 5.14a rolls easily
in one direction for wrapping purposes (but not in the perpendicular direction), whereas the
corrugated cardboard of Figure 5.14b is stiff in all directions?
5.5 POLAR MOMENT OF INERTIA (POLAR MOMENT OF THE AREA)
So far we have discussed the role of the second moment of the area in bending. Now imagine a similar situation, but here we intend to twist a bar by applying a moment or torque to one end, as shown in Figure 5.15. This twisting of the bar is called torsion. As in bending, when a bar is twisted, one end of the bar rotates relative to the other end. This rotation of one end relative to the other is called the angular deflection.
Figure 5.12: A truss used as a load-carrying element in a ceiling. The second moment of the area of the truss is significantly larger due to the way the elements are distributed farther away from the neutral axis.
Figure 5.13: The two sets of semi-circular horizontal members of the dish rack increase the second moment of the area, decreasing its deflections.
In torsion, we use the polar moment of the area (polar moment of inertia), J, which for a round shape is:

$J = \frac{1}{2}\pi r^4.$   (5.7)
Figure 5.14: Corrugated cardboard is stiff due to its increased area moment of inertia.
Figure 5.15: Torsion of a bar.
Notice that this is twice as large as I_x (Equation (5.3)), which makes it equal to I_x + I_y (for a symmetrical cross section). The polar moment of the area can be similarly calculated for other shapes, including with the use of the parallel axis theorem.
Similarly, we can also define the modulus of rigidity G, a material property similar to the modulus of elasticity E that was used in calculating deflections in Equation (5.2). In torsion, the angular deflection (angle of twist) is:

$\phi = \frac{TL}{JG},$   (5.8)
where φ is the angular deflection, T is the applied torque, and L is the length of the bar. When the torque or the length of the bar increases, the angular deflection increases as well. However, as the polar moment of the area increases, the angular deflection decreases. Similar to bending, the polar moment of the area directly affects the twisting of the bar.
The polar moment of the area for a hollow bar is:

$J = \frac{1}{2}\pi r_o^4 - \frac{1}{2}\pi r_i^4,$   (5.9)
where r_o and r_i are the outer and inner radii of the bar. This is the polar moment of the larger area, minus the polar moment of the missing (hollow) area. Now imagine a solid shaft with a radius of 0.5 in. The polar moment of the area will be:

$J = \frac{1}{2}\pi r_o^4 = \frac{1}{2}\pi(0.5)^4 = 0.098 \text{ in}^4.$
The cross-sectional area of the shaft will be:

$A = \pi r^2 = \pi(0.5)^2 = 0.785 \text{ in}^2.$
Now imagine that we use the same amount of material (same cross-sectional area), but we make the shaft hollow. In this case, the outer diameter of the shaft will have to increase to accommodate the hole and still have the same area. There are countless different choices available for the inner and outer radii to achieve the same area. Let's choose an outer radius of 0.75 in. In that case, the inner radius will be found from:

$0.785 = \pi(0.75)^2 - \pi r_i^2 \quad\Rightarrow\quad r_i = 0.56 \text{ in}.$
Therefore, the shaft will be a hollow tube with inner and outer radii of 0.56 and 0.75 in, as shown in Figure 5.16. The polar moment of the area with the new dimensions will change to:

$J = \frac{1}{2}\pi r_o^4 - \frac{1}{2}\pi r_i^4 = \frac{1}{2}\pi(0.75)^4 - \frac{1}{2}\pi(0.56)^4 = 0.343 \text{ in}^4.$
Notice how much larger the polar moment of the area is, although the same amount of material has been used. The new polar moment of the area is 0.343/0.098 = 3.5 times as large. As long as we do not make the new shaft's wall thickness so small that it will collapse under the load, increasing the diameter also increases the polar moment of the area. Obviously, this is much more efficient in material use.
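The solid-versus-hollow comparison, together with the angle of twist from Equation (5.8), can be sketched as follows. This is a minimal Python illustration (not from the original text); the torque, shaft length, and modulus of rigidity are assumed values used only to show the effect.

```python
import math

def J_solid(r):
    return 0.5 * math.pi * r**4              # Equation (5.7)

def J_hollow(ro, ri):
    return 0.5 * math.pi * (ro**4 - ri**4)   # Equation (5.9)

J1 = J_solid(0.5)            # solid shaft, r = 0.5 in
J2 = J_hollow(0.75, 0.56)    # hollow shaft with the same cross-sectional area

print(f"J solid  = {J1:.3f} in^4")   # ~0.098
print(f"J hollow = {J2:.3f} in^4")   # ~0.343

# Angle of twist (Equation (5.8)) for an assumed load case:
# T = 1000 lb-in torque on a 36 in shaft, G ~ 11.5e6 psi (a typical value for steel).
T, L, G = 1000.0, 36.0, 11.5e6
for name, J in [("solid", J1), ("hollow", J2)]:
    print(f"{name}: phi = {T * L / (J * G):.4f} rad")
```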
An example of this is the driveshaft of an automobile. When the engine of a car is in the
front but the car is rear-wheel driven (examples include many older cars, some larger cars, and
most trucks), a driveshaft connects the transmission (in the front) to the differential (in the back)
as shown in Figure 5.17. In order to increase the efficiency of the system and lower the weight
and the cost of the car, the shaft is hollow.
So far we only discussed the role of second moment of the area (and polar moment of the
area) in deflections. Actually, although in certain applications deflection calculations might be the
Figure 5.16: The polar moment of the area increases significantly as the shaft is made hollow but with the same area.
Figure 5.17: The driveshaft of a car connects the transmission to the differential.
primary concern, in most cases stress calculations are even more important because stress calcu-
lations determine whether or not a structural or mechanical element can carry the load to which
it is subjected. Before we discuss this issue, let’s first look at the material strength characteristics
and see how they are related to moments of the area and mechanical stress.
5.6 STRENGTH OF MATERIALS: STRESS, STRAIN, AND MODULUS OF ELASTICITY
If you attach a weight to a spring it will stretch (elongate). For larger weights, the stretch will
be larger. If you plot the weights versus elongations, you will notice that for the most part, the
relationship is linear. This means that, for example, if the weight is doubled, the elongation will be doubled too. Therefore, we can define a relationship between the weight (which is a force) and the elongation as:

$F = k\,d,$   (5.10)
where F is the force (or weight) in lb or N (newton, the unit of force in the SI system of units), d is the elongation in inches or meters, and k is the spring constant in lb/in or N/m. k is a measure of the stiffness of the spring; the larger the stiffness, the harder it is to stretch or compress the spring. Now imagine that you continue to add to the weight until the spring stretches to its fullest. At that point, the spring does not stretch as freely as before. Therefore, the relationship between the force and deflection changes at this point and the spring becomes much stiffer, a non-linear relationship. Figure 5.18 shows a simplified depiction of this behavior.
Figure 5.18: The spring stiffness is the ratio of the applied force to the resulting elongation.
A similar thing happens to a metal bar. Figure 5.19 shows a typical bar that is used to study the characteristics of many materials, including metals. The bar is placed in a machine that pulls the two ends, applies a force to the bar, and measures the elongation of the bar under the load. In this example the bar was pulled until it broke. Note how the area that broke was reduced in diameter before breaking. As may be clear to you, the bar's elongation is influenced by how thick it is; the thicker the bar, the smaller the elongation for the same force. Therefore, to measure the strength of the material without the influence of its size, the force is normally divided by the area. This is called stress. It may also be clear that the longer the bar is, the larger the total elongation will be (think of a short rubber band and a long one; the long rubber band stretches more than the short one). Therefore, in order to eliminate the effect of length and measure only the material property, the elongation is divided by the length of the bar. This is called strain. Consequently, we can study the relationship between stress and strain. This way, the relationship is about the
behavior of the material without regard to its thickness or length. Therefore:

$\sigma = \frac{F}{A},$   (5.11)

where σ (read sigma) is the stress, F is the applied force, and A is the cross-sectional area of the bar, and:

$\varepsilon = \frac{l'}{l},$   (5.12)
where ε (read epsilon) is the strain, l is the original length, and l′ is the elongation.
Figure 5.19: A typical material testing specimen.
Calculation of the stress in any structural or machine element is crucial in ensuring that the structure or the machine will be able to take the load to which it is subjected. If stresses exceed the strength of the material, it will fail. Stress calculations are among the most important activities that a design engineer performs. At other times, not just the stresses but also the deflections are considered, because excessive deflections may also cause failure. Therefore, it is not just the calculation of stresses and strains, but also the understanding of the behavior and strengths of the material used, that is extremely important in engineering design and analysis.
Figure 5.20 shows the relationship between stress and strain for steel. As you see, when a force is applied to steel, it stretches; when the force is removed, it returns to its original length. Therefore, a steel bar acts just like a spring, albeit a much stiffer one. This is an extremely important characteristic that is used in many engineering calculations to ensure that a part can carry the load to which it is subjected. As shown in Figure 5.20, up to a certain point the steel bar will elongate proportionally to the applied force, at which point its behavior changes. This is called the proportional stress limit S_p and it is a very important characteristic. The ratio of stress over strain within this limit (the slope of the line) is also a very important characteristic of a material, called the modulus
of elasticity, E:

$E = \frac{\sigma}{\varepsilon}.$   (5.13)
Figure 5.20: The relationship between the stress and strain of steel.
The modulus of elasticity for common steel is about 30 × 10⁶ psi, or 200 GPa (gigapascal) in SI units. This means that although a steel bar behaves like a spring, it requires 30 million pounds of force per square inch of area to elongate it 1 in/in.
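Equations (5.11) through (5.13) chain together naturally. The hedged Python sketch below (not from the original text) applies them to a steel bar with assumed dimensions and load; only the modulus of elasticity (30 × 10⁶ psi) comes from the discussion above.

```python
import math

# Illustrative (assumed) case: a 0.5 in diameter, 10 in long steel bar pulled with 5,000 lb.
F, diameter, l, E = 5000.0, 0.5, 10.0, 30e6

A = math.pi * (diameter / 2)**2   # cross-sectional area of the round bar
sigma = F / A                     # stress, Equation (5.11)
eps = sigma / E                   # strain, from E = sigma/eps, Equation (5.13)
elong = eps * l                   # total elongation l', from eps = l'/l, Equation (5.12)

print(f"stress     = {sigma:,.0f} psi")   # ~25,500 psi
print(f"strain     = {eps:.6f} in/in")
print(f"elongation = {elong:.4f} in")
```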
Up until the proportional limit, steel behaves linearly. For small additional forces, the elongation is not linear, but when the force is removed, the bar still returns completely to its original length without any permanent change in its length. This limit is called the elastic limit or elastic strength or yield strength, S_y. However, if the force is increased beyond this limit, the bar will permanently elongate, although when the force is removed the bar shortens by an amount representing the elastic elongation. So, for example, if a machine part is subjected to a force large enough to take it beyond the elastic limit, it will permanently change (and this is why it is called yield strength, because at this point the material yields to a new shape or length). This change is called plastic deformation. When the load is removed, the part shortens by an amount equal to the elastic deformation, but with a permanent elongation that no longer goes to zero. In many design situations this is considered a failure of the part, even if it has not broken. Imagine a part of an engine permanently elongating while rotating; the engine will no longer function properly even if no parts are broken. However, most parts are designed with a safety factor to ensure we do not reach the yield strength. And this is why we dare load a car with heavy loads but still do not expect it to change shape permanently. We know it deforms under the additional load, but since the deformation is elastic, as soon as the load is removed, the car returns to its original shape.
If the force is increased further, even by relatively small amounts, the steel bar will elongate in much larger amounts until it eventually approaches the maximum stress it can take, called
ultimate strength, S_u. At that point, the cross section of the area becomes smaller (because it plastically yields) and the material breaks and fails at S_F.
Other types of steel behave somewhat differently. For example, a piece of high-carbon steel is much stronger, but also very brittle, and therefore does not elongate as much although it can take higher loads. Therefore, the stress-strain graph representing it, as well as its proportional, yield, and ultimate strengths and its modulus of elasticity, will be different. Similarly, other metals (aluminum, brass, copper, stainless steel, and other alloys) all behave a little differently, but follow similar patterns. Additionally, other materials such as glass, concrete, wood, and plastics can also be characterized similarly, even though the numbers and the patterns of behavior may be different. For example, glass is a very hard but brittle material. Therefore, it does not yield much, and if subjected to bending it suddenly breaks before any permanent elongation or yielding has occurred.
As shown in Figure 5.20, when a part is subjected to loads beyond its yield strength, it permanently deforms, although the elastic deformation is recovered when the load is removed. However, what happens here is that if the same part were to be subjected to a new load, the load would have to be larger than before in order to permanently deform the part again (shown as the dotted line). This is because, as you may notice in Figure 5.20, the portion of the graph between the yield strength and ultimate strength has a small upward slope, and therefore, every time the load must be larger to have the same effect. This means that the material actually becomes harder and stiffer every time it is loaded beyond the yield strength. This is called cold-working, and it is a common method of strengthening parts. For example, cold-rolled steel is stronger than hot-rolled steel because when the material is heated it is softer and does not require as much force to yield it.
It is interesting to note that human nature is somewhat similar. People who never work hard or never endure hardships behave differently than people who experience difficulties and hardships and learn from these experiences. A broken toe, an illness, lost belongings, failures, and social difficulties all contribute to our resilience. Every experience that involves some hardship beyond our "yield limit" will make us tougher. We even have a name for people who have never had hardships: we call them spoiled. And just like metals, where if the loads become too large for the material it will break and fail, we hope that the hardships to which humans are exposed will not be beyond their capability, causing complete failure. Otherwise, experiences with hardship are good for us; they make us stronger.
5.7 ROLE OF MOMENTS OF THE AREA IN STRESS CALCULATIONS
Now that we have learned about stresses, we can go back to the previous discussion about moments
of the area.
In Sections 5.3 and 5.5 we discussed the linear deflection of a beam in bending and angular
deflection of a bar in torsion and saw how we can calculate these deflections for simple elements.
Similarly, we can calculate the stresses in bending and torsion (and of course in more complicated
loading situations that we will not discuss here). For a bending beam as in Figures 5.3 and 5.5,
the maximum stress (which happens to be in the middle of the beam) is:
$\sigma = \frac{MC}{I},$   (5.14)
where σ (read sigma) is the stress, M is the moment, C is the distance from the neutral axis to the top or bottom of the beam (for the maximum stress), and I is the second moment of the area. The second moment of the area also plays a fundamental role in the calculation of stresses, as it does for deflections. The larger the second moment of the area is, the smaller the maximum stress will be. Also notice that if we let C = 0, indicating a distance of zero from the neutral axis, Equation (5.14) shows that the stress on the neutral axis will be zero (and, consequently, there is no change in length there either); the stress increases linearly as we move away from the neutral axis toward the top or bottom.
Similarly, for torsion, the maximum (shear) stress in the bar of Figure 5.15 is:
$\tau = \frac{Tr}{J},$   (5.15)
where τ (read tau) is the shear stress, T is the applied torque, r is the radius of the bar, and J is the polar moment of the area as discussed in Section 5.5. Once again, the polar moment of the area is a fundamental element in the calculation of stresses as well as deflections. The larger the polar moment of the area is, the smaller the shear stress will be. This also shows that the stress at the center of the bar (where r = 0) is zero, increasing as we get closer to the outer edge. In fact, the material closer to the center is almost wasted; it carries little load (because the stresses there are low). This is another good reason to use a hollow shaft rather than a solid one. The same material spread out into a hollow shaft will have a larger polar moment of inertia and will also save on wasted low-stress material.
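Equations (5.14) and (5.15) are simple to evaluate once I and J are known. Here is a hedged Python sketch (not from the original text) that reuses the 1 × 4 in beam and the 0.5 in radius shaft from the earlier sections; the applied moment and torque are assumed values for illustration only.

```python
import math

def bending_stress(M, c, I):
    return M * c / I            # Equation (5.14)

def torsion_shear_stress(T, r, J):
    return T * r / J            # Equation (5.15)

# Bending: the 1 x 4 in beam from Section 5.2 with an assumed 10,000 lb-in moment.
b, h = 1.0, 4.0
I = b * h**3 / 12.0                                  # 5.33 in^4
print(bending_stress(10_000.0, h / 2, I))            # ~3,750 psi at the top/bottom fibers

# Torsion: the solid 0.5 in radius shaft from Section 5.5 with an assumed 1,000 lb-in torque.
r = 0.5
J = 0.5 * math.pi * r**4                             # ~0.098 in^4
print(torsion_shear_stress(1_000.0, r, J))           # ~5,100 psi at the outer surface
```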
The diameter of tree branches becomes smaller closer to the tip compared to the base, as shown in Figure 5.21. Since the load on the branch is smaller closer to the tip, the diameter and the moment of inertia of the branch are smaller there, resulting in less weight and increasing the tree's efficiency.
The second moment of the area and the polar moment of the area are very important concepts in engineering. Understanding the role of the moments of the area in this process is a fundamental requirement for engineers.
Figure 5.21: Tree branches become smaller closer to the tip because the load on the branch is smaller
too.
5.8 MASS MOMENT OF INERTIA
As area moments of inertia (including the polar moment of the area) are a representation of the distribution of an area, mass moments of inertia are representations of how mass is distributed. So, even though two different parts may have the same total mass, depending on their shape, their mass moments of inertia might be very different. Similarly, as the area moments of inertia directly impact how the material reacts under the influence of external loads and how large the stresses and strains are, mass moments of inertia directly affect the way a mass reacts to accelerations, causing it to move differently depending not just on the mass, but on its distribution too. This has a direct effect on our daily lives and the way things move and react as we work with them.
So let's first talk about the context in which mass moments of inertia play a role before we learn what they are and how to calculate them for simple cases.
In Chapter 3 we had a discussion about linear accelerations and how mass reacts to accelerations (please review if necessary). As mentioned there, imagine that you are sitting in a car, accelerating forward. You will notice that you are pushed back against the seat. In this case, since the acceleration vector is forward (causing the car to speed up in the forward direction), the mass of your body reacts to this acceleration; due to its inertia (sluggishness), the body tends to stay in the condition it is in and not change, and therefore reacts to a push forward by resisting it. The same is true in other conditions. For example, if a body is moving at a constant speed it tends to remain at that speed and reacts to speeding up or slowing down. Therefore, when braking, and consequently undergoing a backward acceleration, the body tends to move forward in reaction unless it is restrained. In fact, if this deceleration happens very quickly (such as in an accident, when the slow-down is extremely quick, creating a huge backward acceleration), the body may spring forward enough to hit the front windshield. This is why we have seat belts and airbags to restrain
the body and keep it from hitting the windshield. Please see Sections 3.2.4 and 3.4 for additional
discussions.
The discussion above is about the relationship between a force, linear acceleration, and mass. A similar relationship exists in angular motion. In this case, the relationship is between a moment or torque (instead of a force), angular acceleration (instead of linear acceleration), and mass moment of inertia (instead of mass). Therefore, similar to the linear case with $\vec{F} = m\vec{a}$, we can write an equation that describes the angular version as:

$\vec{T} = I\vec{\alpha},$   (5.16)

where $\vec{T}$ is the torque, I is the mass moment of inertia, and $\vec{\alpha}$ is the angular acceleration vector. This means that a torque applied to a body induces an angular acceleration, governed by the mass moment of inertia, that causes it to rotate. A larger torque creates a larger angular acceleration. However, if the mass moment of inertia is smaller, the angular acceleration will be larger for the same torque, and vice versa.
For example, consider a fan with the blades attached or removed. If the blades are not attached to the motor when it is turned on, the motor shaft spins up in a very short period of time. This means that its angular acceleration is very high, and therefore it speeds up from zero to its maximum velocity very quickly. However, when the blades are attached to the motor and it is turned on, it takes a relatively long time before the blades reach their maximum speed. Although it is true that the blades are also pushing the air and therefore add to the load on the motor, the much lower angular acceleration is due to the much larger mass moment of inertia of the blades. Assuming that the torque of the motor is essentially the same, because the mass moment of inertia is larger, the angular acceleration is lower, requiring much more time to speed up to the maximum.
It may appear that the lower angular acceleration might just be the result of adding mass to the motor. However, if we were to add a metal ball with a total mass equal to that of the blades to the motor and repeat the test, we would find that although the angular acceleration would be lower than with no added mass, it would still be much higher than with the blades. This indicates that it is not only the mass that matters, but how it is distributed. In fact, we might mention that the rotor of the motor also has a mass moment of inertia that affects the angular acceleration of the rotor. Even when there are no blades attached to the rotor, the moment of inertia of the rotor is still present; we simply add to it when we attach the blades. The actual mass of the rotor may be much larger than the mass of the (plastic) blades, but its mass moment of inertia is much less than that of the blades. And this is why, if we just add a ball with a mass equal to the blades, it will be as if the rotor were a bit heavier, with little effect. But with the blades attached, the mass moment of inertia increases significantly, affecting the angular acceleration significantly. So let's see how this can be analyzed and calculated. This analysis will help us understand what mass moment of inertia really is.
As shown in Figure 5.22, the mass moment of inertia for a plate of radius r, thickness t,
and mass m about an axis x going through its center is:
$I_x = \frac{1}{2} m r^2.$   (5.17)
Figure 5.22: Mass moment of inertia for a plate.
Assume that a plate is 4 inches wide (2 inches in radius) and 1 inch thick. Its mass can be calculated by multiplying its volume by its density. The volume of the plate is its area multiplied by its thickness, or:

$Vol = \pi r^2 t = \pi(2)^2(1) = 12.57 \text{ in}^3.$

The specific weight of steel is 490 lb/ft³ (0.2837 lb/in³). This means that the density of steel is ρ = 0.000734 lb·s²/in⁴ (this strange-looking unit is the result of expressing mass in English units). This is equivalent to ρ = 0.00783 kg/cm³ in SI units. Therefore, the mass of the plate is:

$m = Vol \times \rho = (12.57)(0.000734) = 0.0092 \text{ lb·s}^2/\text{in}.$
The mass moment of inertia of the plate is:

$I_x = \frac{1}{2} m r^2 = \frac{1}{2}(0.0092)(2^2) = 0.0184 \text{ lb·s}^2\text{·in}.$
Now let’s take the same amount of material as before (same thickness, area, volume, and
mass), but make it into a ring with an outside diameter of 5 inches and an inner diameter of
3 inches (outside and inside radii of 2.5 and 1.5 inches), as shown in Figure 5.23a. e area of
the ring with ro as its outer radius and ri as its inner radius is:
Vol
t (cid:0)(cid:25) r 2
o (cid:0)
D
(cid:25) r 2
i (cid:1)
D
1 (cid:0)(cid:25).2:52
1:52/(cid:1)
(cid:0)
D
12:57 in3;
which is the same as before, as will be its mass (0.0092 lb·s²/in). However, the mass moment of inertia of the ring is:

$I_x = \frac{1}{2} m \left(r_o^2 + r_i^2\right).$   (5.18)
Substituting the new radii in this equation gives us:
$I_x = \frac{1}{2}(0.0092)(2.5^2 + 1.5^2) = 0.0391 \text{ lb·s}^2\text{·in},$
which is more than twice as large as the solid plate. Although we did not use any more material,
simply by changing the size (a different distribution of mass), we more than doubled the mass
moment of inertia.
Figure 5.23: The mass moment of inertia of a ring is changed as the distribution of the material changes.
Now consider a third version: assume we still use the same amount of material, but this time form the ring to have dimensions of r_o = 5 and r_i = 4.58 inches, as shown in Figure 5.23b. In this case, too, since the area of the ring and its thickness are the same, so is its mass of 0.0092 lb·s²/in. However, the new mass moment of inertia will be:

$I_x = \frac{1}{2} m \left(r_o^2 + r_i^2\right) = \frac{1}{2}(0.0092)(5^2 + 4.58^2) = 0.211 \text{ lb·s}^2\text{·in},$
which once again is nearly 0.211/0.0184 = 11.5 times as large as the solid plate. This is the power of the way the mass is distributed. As the material is pushed outward, the mass moment of inertia increases.
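The plate and ring comparison above follows directly from Equations (5.17) and (5.18). The hedged Python sketch below (not from the original text) reproduces the three cases using the density value quoted in the discussion:

```python
import math

rho = 0.000734          # density of steel in lb*s^2/in^4 (value used in the text)
t = 1.0                 # plate/ring thickness, in

def disk(r):
    m = rho * math.pi * r**2 * t
    return m, 0.5 * m * r**2                      # Equation (5.17)

def ring(ro, ri):
    m = rho * math.pi * (ro**2 - ri**2) * t
    return m, 0.5 * m * (ro**2 + ri**2)           # Equation (5.18)

for label, (m, I) in [("solid plate, r = 2",        disk(2.0)),
                      ("ring, ro = 2.5, ri = 1.5",  ring(2.5, 1.5)),
                      ("ring, ro = 5.0, ri = 4.58", ring(5.0, 4.58))]:
    print(f"{label}: m = {m:.4f} lb*s^2/in, I = {I:.4f} lb*s^2*in")
```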
Figure 5.24 shows a typical way this is used in the design of machinery. In this figure, instead of attaching a uniform-thickness plate to the air motor, the same amount of material is made into the shape of a flywheel with a much larger mass moment of inertia. The larger moment of inertia is needed for smooth operation of the air motor, but it is provided without using a massive plate. In a typical flywheel, most of the mass is moved into the rim, which is connected to the hub with a thin plate. The same design is used in reciprocating internal combustion engines (used in all cars) to smooth out the variations in the thermodynamic cycle. See Chapter 4 for more discussion.
Figure 5.24: A typical flywheel is designed to have a larger mass moment of inertia without being too
heavy by pushing most of the material outwardly to the rim.
If you were to turn on a fan with the blades attached, as in Figure 5.25a, you would notice that the blades take a while to reach their maximum rotational speed. However, if the blades were removed, as in Figure 5.25b, the motor shaft would reach its maximum speed much more quickly, in a fraction of the time needed with the blades. As we discussed earlier, Equation (5.16) shows the relationship between the mass moment of inertia and angular acceleration. A fan motor without the blades has much less moment of inertia (that of the rotor alone) than with the blades, especially since the mass moment of inertia of the blades, with their outward mass distribution, is relatively large. Therefore, the angular acceleration of the shaft without the blades is much larger than with the blades, and consequently, the motor reaches its maximum rotational speed much more quickly. This is even more apparent in ceiling fans, where the mass moment of inertia of the long blades is even higher.
Many bicycle enthusiasts look for a lightweight bike, usually at much higher cost, thinking that it is easier and faster to ride. Although the weight of the bike is a factor, the acceleration and how quickly the maximum speed is achieved are more importantly affected by the weight, size, and weight distribution of the tires. As you might guess by now, since bike tires rotate, their mass moment of inertia directly affects the angular acceleration, and consequently, how quickly the maximum speed is achieved. Therefore, the skinny, lightweight tires used on racing bicycles
Figure 5.25: A fan motor accelerates much more slowly when the fan blades are attached as compared with the blades removed.
will have a lower mass moment of inertia compared with the fatter and heavier tires used on mountain bikes. Some bike owners go as far as drilling holes in the sprockets of their bikes, thinking they are reducing the moments of inertia (as well as mass). How much effect do you believe this has on the overall mass moment of inertia of the tires? Almost none. However, reducing the weight of the rim and the weight of the rubber used in the tire will significantly affect the moment of inertia.
Now let’s look at a different situation. e propellers of airplanes and helicopters also rotate
about the shaft, and like the aforementioned examples, we should expect their mass moments of
inertia to affect the torque needed to rotate the propeller and the accelerations achieved. So first
let’s look at how we can calculate their approximate mass moments of inertia.
To do this, let’s model the shape of a propeller as a slender bar. is approximation is useful
for seeing what is important, but not accurate enough in practical terms. e actual mass moment
of inertia can be found either experimentally or by writing more complicated equations.
Figure 5.26a shows a slender rod (the length is much larger than the diameter). Assume that the rod is attached to an axis at its center and rotates in a plane. Equation (5.16) still applies here; the applied torque is equal to the mass moment of inertia times the angular acceleration. The mass moment of inertia of a slender bar is:
$I = \frac{1}{12} m L^2,$   (5.19)
where m is the mass and L is the total length of the slender bar. Now suppose that the bar rotates about one end, not the center. In this case, we need to calculate the moment of inertia about the end, not about the center. As with the second moment of the area about an axis other than the centroidal axis, we need to use the parallel axis theorem to calculate the mass moment of inertia about an axis other than the one through the center of mass. This, similar to Equation (5.6) for the area moment of inertia, can be written as:

$I_B = I_o + m d^2,$   (5.20)

where I_B is the mass moment of inertia about an axis B away from the center of mass, I_o is the mass moment of inertia about the center of mass, and d is the distance between the two axes. Therefore, for the slender bar of Figure 5.26b, the mass moment of inertia about one end will be:
$I_B = I_o + m\left(\frac{L}{2}\right)^2 = \frac{1}{12} m L^2 + \frac{1}{4} m L^2 = \frac{1}{3} m L^2.$
Obviously, this moment of inertia is 4 times as large, resulting in an angular acceleration that is 4 times smaller, or requiring a torque 4 times as large to accelerate the bar at the same rate.
Figure 5.26: The mass moment of inertia of a slender bar.
Just to clarify this in a different way, let's recalculate the mass moment of inertia of the slender bar about its center of mass by assuming that it is the summation of two bars, each with a length half as much and a mass half as much, attached together at one end. Therefore, the total mass moment of inertia will be twice the moment of inertia of a bar at half the length and half
the mass, calculated at its end, or:
$I_O = 2\left(I'\right) = 2\left[\frac{1}{3}\left(\frac{1}{2}m\right)\left(\frac{L}{2}\right)^2\right] = \frac{1}{12} m L^2,$
which is exactly the same as before for the mass moment of inertia of a slender bar about its center
of mass.
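The slender-bar model and the parallel axis theorem can be combined with Equation (5.16) in a few lines. The hedged Python sketch below (not from the original text) uses an illustrative, assumed mass, length, and motor torque to show how the rotation axis changes the angular acceleration:

```python
# Model a propeller as one slender bar rotating about its center (Equation (5.19)),
# then compare with the same bar pivoting about one end (parallel axis theorem, Eq. (5.20)).
# The mass, length, and torque below are illustrative assumptions, not values from the text.

m = 0.05      # mass of the bar, lb*s^2/in
L = 40.0      # total length, in

I_center = m * L**2 / 12.0                    # about the center of mass
I_end = I_center + m * (L / 2)**2             # about one end: (1/3) m L^2, four times larger

T = 200.0                                     # assumed motor torque, lb*in
print(f"I_center = {I_center:.2f}, I_end = {I_end:.2f} lb*s^2*in")
print(f"alpha (about center) = {T / I_center:.2f} rad/s^2")   # T = I * alpha, Equation (5.16)
print(f"alpha (about end)    = {T / I_end:.2f} rad/s^2")      # 4x smaller
```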
Therefore, when propellers are longer or heavier, their mass moments of inertia increase. In helicopters, where the rotor blades are much longer than the propellers of airplane engines, it is almost impossible to turn them as fast as in an airplane; their mass moment of inertia is much larger, putting a much larger load on the engine.
As you can see, both the second moment of the area and the mass moment of inertia play a fundamental role in many things in our daily lives. For example, you can see the effects of the moment of inertia of the wings of a bird, both in terms of their strength under the weight of the bird and in how much force (or moment) is needed to flap them, and the effects of the moments of inertia of the legs of different creatures, including humans, in running. You can hopefully imagine these same effects being considered in the design of a bridge, the flight of an airplane, the rotating parts of an engine, and countless other devices and machines we use every day. Understanding these concepts helps us both control their effects and use them to our advantage.
C H A P T E R 6
Electromotive Force
Motors, Transformers, AC and DC Currents
6.1 INTRODUCTION
Each of the two generators of the Diablo Canyon nuclear power plant in San Luis Obispo County
generates over 1,100 megawatts of power, enough for about 3 million people. e Ames Research
Center national full-scale subsonic wind tunnel in Mountain View, California, is 40
80 ft and
creates winds of up to 350 mph (560 km/h), large enough to test a real, full-scale Boeing 737. e
fans and the motors running these fans are enormous. And yet, the generators used to recharge
a hand-held flashlight are the size of a large olive and the motors used in small remote-control
servomotors are about ¼ inch in diameter. What is important is that the largest and the smallest
of motors and generators are actually very similar in the way they work and that they all follow
Faraday’s Law which we will study later.
When you plug in an electric motor (whether as part of a device or stand-alone), it simply turns and provides a torque that allows the device to do its job. The same is true for
a DC motor that is connected to a battery. You may also use a simple charger (or transformer)
to recharge your batteries, whether in a cell phone, camera, computer, hybrid car, or toy. In fact,
you may have heard that the high voltage (as large as 500,000 volts) of electric power is lowered
to the household voltage (110 volts) with a transformer before it is delivered to your place of
residence or work. All these examples are based on a phenomenon called electromotive force or
emf. A similar phenomenon that works in the opposite way, called back-emf, is also an important
issue that affects the way these systems work or are designed.
In this chapter we will study these two phenomena, how they are used, and where they
appear to affect our daily lives. But first let's learn the difference between voltage and current, and their relationship. These terms appear in all issues related to circuits and electric devices.
6.2 INTRODUCTORY TERMS: VOLTAGE, CURRENT, AND RESISTANCE
Equation (6.1) shows the relationship between voltage (V), current (I), and resistance (R). But
what is the physical meaning of these terms? To understand it, let’s make an analogy. We will
look at a simple fluid system to show how they are related.
V = IR.    (6.1)
Imagine a tank of water with a pipe attached to it, full of water as shown in Figure 6.1. At
the bottom of the pipe there is a valve, closed at this time, which prevents the water from flowing
in the pipe. The pressure at the bottom of the pipe is a function of the density of water and its
height. Larger heights (h) will increase the pressure at the bottom of the pipe.
Figure 6.1: A tank-pipe-valve system shows the analogy between hydraulic and electric systems.
Now imagine that we open the valve just a bit. As a result, water will start to flow slowly
at the bottom. The amount of water flowing is a function of the pressure and the opening of the
valve. At this point, the valve provides resistance to the full flow of the water. Further opening
the valve will increase water flow until it is fully opened, at which time the flow is at its maximum
rate. Obviously, at higher pressures, the flow will be larger too. Notice that regardless of the valve
opening, the flow is also a function of the diameter of the pipe. Smaller diameters provide more
resistance to the flow. For example, if the pipe were a hose with a small diameter, the flow would
be less than if it were a large pipe. erefore, the pipe diameter also introduces resistance to the
flow.
Electrical systems can be explained the same way. The water pressure is analogous to the voltage. The flow of the water is analogous to the current, which represents how many electrons pass a cross section of the wire. The valve represents a variable resistance similar to resistors that are used in circuits to control current. The resistance of the pipe too represents the electrical resistance of wires and conductors. As shown in Equation (6.1), when the resistance of a circuit increases, the current decreases. By changing either the voltage or the resistance, the current flow can be controlled. For a system (for example a motor) where the resistance is constant, when the voltage changes, the current changes accordingly. In mechanical systems the equivalent analogous elements are force and voltage, velocity and current, and viscous coefficient of friction and resistance. To understand this, think of walking in a pool. You need to exert yourself to move in the water. The thicker the fluid, the harder it will be to move.
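To make Equation (6.1) concrete, here is a minimal Python sketch; the voltage and resistance values are made up purely for illustration:

def current(voltage, resistance):
    # Ohm's law, Equation (6.1): V = I R, solved for the current
    return voltage / resistance

# With a fixed 12-volt source, doubling the resistance halves the current.
for r in (1.0, 2.0, 4.0):
    print(f"R = {r:4.1f} ohm -> I = {current(12.0, r):5.2f} A")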
6.3 MAGNETIC FIELDS
Imagine that there is a magnetic field present. This can happen if you have a permanent magnet or
if you wrap a wire into a coil and run a current through it, as shown in Figure 6.2. In the vicinity
of the coil or the magnet, there will be a magnetic field such that if we bring another magnet
close to either of them, similar poles (both north or both south) repel each other and the opposite
poles attract even before they touch. Since the strength of the field (flux density) at any point, among other things, is inversely related to the square of the distance from the source, the field strength reduces quickly as you move away from the source. Therefore, with most simple magnets, this
is felt when you are close. e same can be felt at larger distances when you experiment with a
stronger magnet.
Figure 6.2: A permanent magnet and coils (when electricity flows through them) create a magnetic
field around themselves.
You can actually visualize a magnetic field by peppering small pieces of iron (filings) onto
a piece of paper over a magnet as shown in Figure 6.3a. The lines formed by the iron filings show
the shape of the field between the poles. Figure 6.3b shows the general shape of magnetic fields.
Magnetic flux lines always close between the poles. Unless they are somehow concentrated
by other means, they surround the magnet in all directions. This is similar to having a source of light that illuminates in all directions equally. As a result, the strength of the magnetic field is distributed and low. However, like a flood light whose light intensity is concentrated in a small angle by reflectors, the strength of the magnetic field around a magnet or a coil can be increased locally by concentrating them with an iron core. This is why we almost always see an iron bar within a coil; the coil generates the electromagnetic field, but it is distributed all around and consequently, it is very weak. The iron core inside it concentrates the field, causing it to be much
Figure 6.3: The shape of a magnetic field around magnets and coils. Iron filings, peppered within the field, will line themselves up to follow the magnetic flux lines.
stronger around the bar. For the same reason too, all motors are invariably made with a metal
casing to concentrate the magnetic field within the casing. As a result, even if you bring a small
piece of steel near the body of a motor, the magnet within the motor does not attract it. This is an important characteristic of magnets and coils and is used in almost all transformers and motors as well as magnetic devices used as sensors such as a Linear Variable Differential Transducer (LVDT) used to measure distances. We will later discuss the additional effects of the iron cores
in a transformer, how they can be a detriment to the efficiency of the system, and how we can
overcome that.
Before we explore the subject of electromotive force, let’s first study magnetic flux a bit
more. This will help us understand the subject much more easily.
The strength of the magnetic flux is the product of the flux density (the level of the concentration of the magnetic flux in any area) and the area, or:
F = B · A,    (6.2)
where F is the flux, B is the flux density, and A is the area. As expected, the flux changes as a result
of any variations in either the flux density or the area. As we will shortly see, what is important
in the generation of electromotive force is not just the strength of the field but the changes in it
with respect to time.
These changes come about as a result of the changes in the flux strength or the area. For example, in a transformer the strength of the flux (flux density) changes due to the nature of the alternating current (AC) power. The AC current or voltage follows a sine function as shown in Figure 6.4. The voltage changes from zero to a maximum level, then decreases back to zero, then continues to a maximum with the opposite polarity (negative direction), finally returning to zero again, and repeating the pattern 60 times a second (or 50 times a second outside of the U.S. and Canada). As the voltage changes, so does the flux density. As a result, when a coil is connected to AC power, its flux density varies continuously.
Figure 6.4: The sinusoidal nature of AC power creates a continuous change in the flux density of a
coil.
Now imagine that we have a permanent magnet or a coil (connected to a DC source of
power which does not vary) with constant flux density. If we pass a conductor (e.g., a piece of
wire) through the flux, the conductor will disrupt the flux, effectively changing its area. This means that as a result of passing the conductor within the flux and crossing its lines, we cause a change in its area, creating variations in the flux. The effect is the same as when the flux density
changes. Either one of these two will change the flux strength.
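A small Python sketch of Equation (6.2) under an assumed sinusoidal flux density; all of the numbers are illustrative assumptions, not values from the text:

import math

def flux(flux_density, area):
    # Equation (6.2): flux = flux density x area
    return flux_density * area

f = 60.0        # assumed line frequency, Hz
B_max = 0.5     # assumed peak flux density, tesla
A = 0.01        # assumed area of the coil, m^2

# Sample the flux over one AC cycle; it is the change of this value with
# time, not its magnitude, that matters for inducing an emf.
for k in range(5):
    t = k / (4 * f)                               # quarter-cycle steps
    B = B_max * math.sin(2 * math.pi * f * t)     # sinusoidal flux density
    print(f"t = {t:.4f} s  flux = {flux(B, A):+.5f} Wb")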
6.4 ELECTROMOTIVE FORCE
Electromotive force relates to the interactions between a magnetic field and an electrical current
through a conductor (such as a wire). According to Faraday’s Law, when there is a change in the
magnetic field, the interaction results in the generation of a force if the conductor is carrying a
current, or induction of a voltage (or current) in a closed-loop conductor if it is moved (by a force).
A series of simple experiments by Michael Faraday in England and by Joseph Henry in the U.S.
in 1831 helped formulate this phenomenon. Figure 6.5 shows a galvanometer (which measures
a current) in series with a simple conductor coil. If a bar magnet is moved toward the coil, the
galvanometer deflects, indicating a current in the conductor. If the bar is stopped, the galvanome-
ter goes back to zero. If the bar moves back, the galvanometer deflects in the opposite direction,
indicating a current in the opposite direction. If the magnet is reversed, all these indications also
reverse. The same will happen if the magnet is kept stationary but the coil is moved relative to it. Therefore, as shown, when the magnet is moved within a coil it generates a current. This is called an induced electromotive force or emf.
Figure 6.5: A magnet moving toward a coil induces a current in the coil. This is called induced elec-
tromotive force or emf.
Similarly, as shown in Figure 6.6, if the galvanometer and the coil are stationed close to
another stationary coil that is connected to a power source (battery), when the switch is closed
or opened the galvanometer deflects momentarily in opposite directions, but not if the switch is
left on or off. It is only as a result of the switch turning on or off that the galvanometer deflects,
indicating that the change in the current causes an electromotive induction in the coil. This is the
principle behind the generation of electrical power in a generator.
This interaction between a magnetic field, a conductor, and relative motion (caused by a force that creates the motion) is interchangeable. This means that as in Figure 6.5, in the presence of a current through the coil, the magnet will experience a force that moves it relative to the coil (still called the electromotive force). The same principle is the basis on which all motors operate
too.
Figure 6.6: Whenever the switch is closed or opened the galvanometer deflects, indicating a momen-
tary induction of electromotive force in the coil.
The opposite of the same phenomenon is called back electromotive force or back-emf. We will
discuss this a bit more later. Now let’s see what this means in practice.
Notice that as we just saw in Section 6.3, the change in the flux can come from a change in
its density or from a conductor crossing its lines. Also note that a closed-loop conductor means
that the wire is continuous or attached to a load. For example, let’s say we attach the two sides of
the wire to a lamp. In that case, the circuit is closed-loop or continuous, and therefore, the voltage
which is induced can travel through the wire to the load (lamp) and return. This creates a current
in the wire. Otherwise, if there is a voltage but the wire is not continuous or attached to a load,
there is no current flow and nothing happens.
As a side note, I remember a group of first-year students who had designed
and constructed a device which, based on this principle, was supposed to re-
duce vibrations in a pendulum as it passed through a magnetic field. However,
the device was not working and the students had assumed that it was not con-
structed well. What they had not realized was that since the pendulum was
insulated and not attached to a load to actually use the voltage induced in it,
there was no current and as a result there was no damping of the pendulum.
As mentioned previously, this is the principle that governs the operation of all motors,
generators, and transformers. Although these are seemingly different devices, the different inter-
actions of the same elements of Faraday’s Law are at work for each one. Now let’s see how each
one works.
First let’s see about motors. Imagine that there is a magnetic field generated by a permanent
magnet (where the flux intensity is constant). Now imagine that we take a conductor such as a
wire and pass a current through it. As a result of Faraday’s Law, the interaction between the flux
and the current-carrying conductor is a force on the conductor, pushing it away. We will see how
an actual motor works continuously, but for now, as you can see, a force is generated that pushes
away the conductor.
Now take the same system mentioned previously, but instead of supplying a current through
the conductor, assume that we move the conductor through the flux (which is caused by a force
we supply). The crossing of the flux also creates a change in the flux (area), and based on Faraday's Law, this will induce a current in the conductor. This system is a generator (and we will see the
details later). Note that a generator and a motor are the same; in one, we supply the current and
it moves, in the other we supply the motion and it induces a voltage (or current). It should be
mentioned that although the workings of a DC and AC motor and their details are different, they
all follow the same principles.
Next consider a transformer. There are no moving parts in it. Instead, two coils interact with each other. The supplied current to one coil is AC power which varies constantly and changes the flux. Consequently, as a result of the changes in the flux and based on Faraday's Law, a voltage is induced in the second coil. Therefore, although these devices are different and each one is designed
for a different application, they all follow the same rules. Now let’s see how AC and DC motors,
generators, and transformers work.
6.5 DC MOTORS
DC stands for Direct Current, meaning that the polarity (direction of flow) does not change. If a
battery is used, the magnitude does not change either. DC power is usually supplied by batteries
or by circuits that deliver a direct current. Therefore, a DC motor requires a DC source and will not work with AC power. These motors are powerful, their direction of rotation can be changed, and their speed can be controlled relatively easily as we will discuss later. However, their power-to-weight ratio is lower than that of AC-type motors, and they cannot tolerate high temperatures as well as AC-type motors can.
As expected, DC motors operate based on the principles of electromotive force. A per-
manent magnet (called a stator) provides a magnetic field whose lines are crossed by a current-
carrying conductor (called a rotor). Since the effective area of the flux is disrupted, a force is
generated that pushes the coil rotor as shown in Figure 6.7a. Since the conductor is attached to
a shaft, it rotates to the middle as a result of the force. The same thing happens to the second coil which is now receiving the current. Therefore, the rotation continues. In order to provide the current to the coils sequentially when needed, a set of commutators and brushes are used. The coils are connected to the commutators. The brushes, carrying the current from the power source,
slip over the commutators and supply the coils with a current.
In reality, the rotor coils are formed around iron cores, creating magnets. We may describe
the motion of a DC motor through the pulling of opposite poles and pushing of similar poles
between the poles of the stator magnet and the rotor magnets. When the current is supplied to
a coil, the core develops a north pole and a south pole that are pulled or pushed by the poles of
Figure 6.7: The components of a DC motor.
the stator, causing the rotor to rotate. However, as soon as one coil rotates away, power to it is cut off and instead, the next coil is powered, repeating the process until the motor is turned off. In
order to make the motion of the motor smooth, most DC motors have rotors with three coils
(Figure 6.7c). Either one or two of the three coils are powered at any given time. Figure 6.7b
shows the rotor, stator, commutators, and brushes of a small DC motor.
6.6 AC MOTORS
AC motors are simpler in their construction and operation and are therefore more rugged. They are made of a permanent magnet (PM) rotor or a simple cage (such as a squirrel cage rotor) and
a coil stator. AC motors do not have brushes or commutators because AC power automatically
provides a changing magnetic field and consequently, there is no need for external switching of
the direction of the current for continuous rotation.
Imagine that a permanent magnet is mounted on a shaft to form a rotor as shown in Fig-
ure 6.8. The stator is a coil in which the current flows. Let's assume that the rotor is situated such
that both the north and south poles are aligned with poles of the coil which at this instant is not
magnetized yet. Imagine that at the instant shown, the AC current starts from zero in the positive
direction flowing into the coil. This will create a magnetic field such that both north poles and
both south poles of the rotor and stator will repel each other, forcing the rotor to rotate in the
direction shown in order to bring the north of the rotor closer to the south of the stator and the
south of the rotor to the north of the stator. As the AC current increases to its maximum level,
the repelling and attracting forces between the poles increase. Eventually, the AC current starts
to decrease toward zero, but still provides the same forces in the same directions. As soon as the
poles of the rotor and stator align themselves with each other's opposite poles, the AC current changes its polarity, thereby switching the directions of the poles on the stator. This will create the exact situation as before, but at the new location, repelling the now-similar
south poles and north poles toward the opposite side. With this complete cycle, the rotor rotates
once. But as soon as it reaches the opposite side, the AC switches again, forcing the motion to
continue, repeating indefinitely until the motor is shut down. There is no need for commutation,
switching, or brushes.
Figure 6.8: A permanent magnet AC motor.
Notice how the rotor follows the stator’s moving field. Due to the nature of AC power,
the field continually switches directions, and the rotor follows it. Therefore, the speed at which it
rotates is a function of the line frequency. For example, the line frequency in the U.S. is 60 Hz.
Depending on the number of poles used, the speed of an AC motor with permanent magnets will
be 1,800, 3,600, etc. The same motor in many other countries whose line frequency is 50 Hz will be 1,500, 3,000, etc. These are usually referred to as synchronous motors because they have a fixed speed that is a function of the line frequency. This speed, to a large extent, stays constant as the
load increases or decreases, but the angle between the rotor and stator changes a little to increase
or decrease the load. If the load increases more than the motor can handle, instead of slowing
down as a DC motor does, it simply stops.
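As a quick check of these speeds, here is a minimal Python sketch using the standard synchronous-speed relation (speed in rpm = 120 × line frequency / number of poles), which reproduces the values quoted above:

def synchronous_speed_rpm(line_frequency_hz, poles):
    # Standard relation for synchronous AC motors: the field advances one
    # pole pair per electrical cycle, so speed (rpm) = 120 f / poles.
    return 120.0 * line_frequency_hz / poles

for f in (60, 50):
    for p in (2, 4):
        print(f"{f} Hz, {p} poles -> {synchronous_speed_rpm(f, p):.0f} rpm")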
Another type of AC motor is called an induction motor. Induction motors are very similar to
synchronous AC motors but instead of the permanent magnet rotor, they simply have a metal rotor
in the form of a number of longitudinal bars that are connected together like a cage. Therefore,
the rotor is usually called a squirrel cage rotor as shown in Figure 6.9. Notice that the rotor is not
powered by any electrical current; it is simply a metal cage. Similar to other AC motors, the stator
is made of coils that are powered by an AC current. In this case, due to back-emf, the varying flux
induces a current into the cage. Since there is now a current in the cage's conductors, a force is generated that rotates the rotor. However, in this case there are no poles that exactly follow the magnetic field, and consequently, the rotor can rotate at any speed as the torque and current change. These motors, also called asynchronous motors, are rugged, powerful, long lasting, simple, and economical. There are no brushes, magnets, or commutators. They are used in most AC applications. Their basic disadvantage is that they always rotate in the same direction. Therefore, they can only be used in applications where the motor does not need to change direction (e.g., washing machines, dryers, fans, pumps, etc.). They cannot be used as drill motors because unlike
DC motors, they cannot be reversed.
Figure 6.9: An AC induction motor's stator and rotor. The rotor is a simple non-magnetic cage with
no power supplied to it. AC power is supplied to the stator coil only.
One exception is called a reversible AC motor, where the coil is center-tapped as shown in
Figure 6.10. In this case, only 1/2 of the coil is used for each direction, producing 1/2 of the torque. Therefore, for the same power rating, these motors need twice as much winding, making
them heavier and more costly. To reverse the direction of rotation, the AC current flows from the
center to only one side of the coil, thereby going left to right or right to left, creating a field that
is the opposite of the other case. As a result, the rotor will rotate in one direction or another.
Figure 6.10: A reversible AC motor can switch directions because the stator coil is center-tapped. As
a result, the current flows in opposite directions depending on which route is chosen, creating fields
that are in opposite directions and forcing the rotor to rotate in opposite directions.
For devices like drill motors that require direction change but where DC power is not
available unless it is rectified and lowered to suit common low-voltage DC motors, a universal
motor is used. Universal motors are a combination of DC principles and AC power. Instead of
permanent magnet stators like those in DC motors, they have coil magnets as in AC motors that
need to be supplied with an AC current. The rotor is a coil with brushes and commutators that
is also powered by an AC current. In this case, the magnetic field caused by AC power in the
stator coil changes direction 60 times a second, but because the rotor is also supplied with the
same AC current, its direction also changes the same 60 times per second at precisely the same
time. Since they both switch directions at the same time, it is as if it were a DC current. The additional switching through the brushes and commutators causes the rotor to rotate like a DC motor. Therefore, although powered by an AC current, the motor's direction of rotation can be
switched like a DC motor.
In other types of motors such as stepper motors and brushless DC motors, the attempt is
to do the opposite; to run a DC motor with the construction of an AC motor with no brushes or
commutators. This makes the motors more rugged and longer lasting. We will study these motors
next.
6.7 STEPPER MOTORS
Unlike DC and AC motors that start rotating continually when they are connected to a power
source, stepper motors do not; they only move one step when the field is rotated one step. This field rotation is usually accomplished by a dedicated circuit, a computer, a microprocessor, a PLC (Programmable Logic Controller), or similar means. Therefore, the movements of the rotor are under complete control
of an external device.
To understand the way a stepper motor works let’s consider a simplified version first. Fig-
ure 6.11a shows a permanent magnet rotor and a coil in off position with their poles aligned. As
soon as the coil is turned on, the similar north and south poles will repel each other (Figure 6.11b)
until the poles of the magnet line up with the opposite poles of the coil (Figure 6.11c). At this
point, the rotor will stay here without movement, even if we turn off the coil. This is called the
point of least reluctance, a stable position. In this position, even if we apply a torque to move the
rotor it will resist. If we once again turn on the coil in the opposite polarity of the first case, the
similar poles will repel each other again (Figure 6.11d), forcing the rotor to rotate again until the
opposite poles align and it stops again. In this process, every time we turn on the coil in proper
polarity, we force the rotor to rotate half of a full circle or 180°. However, although we can force the rotor to only rotate this fixed amount, there are two problems with this set up: (1) the step size is large, and (2) there is no control over the direction of motion that the rotor takes. When
we turn on the coil, the rotor may rotate either clockwise (CW) or counter-clockwise (CCW).
Figure 6.11: A simple stepper motor set up.
To improve this situation let us consider the set up in Figure 6.12 where we have added
a second coil. In this case, assume that we start as before, when both coils are off and the rotor
is aligned with the poles of coil-1. Now assume we turn on coil-2 such that its poles will be as
shown in Figure 6.12b. As a result, the rotor will rotate to align itself with the poles of coil-2.
If we next turn off coil-2 and turn on coil-1 as shown in Figure 6.12c, the rotor will rotate until
its poles align with the poles of coil-1. Once again, we turn off coil-1 and turn on coil-2 in the
opposite polarity, forcing the rotor to rotate again. Continuing to turn on and off coils-1 and -2
sequentially in proper polarity we can force the rotor to rotate clockwise or counterclockwise as
much as we want. In this case, the step size is reduced to a quarter of a turn or 90° and we know
the rotor’s direction of rotation; unlike the first case it is not left up to chance, which is a big
improvement. Also notice that by selecting how many times we turn each coil on and off we can
ensure that the stepper motor rotates exactly as much as we wish, no more, no less. Additionally,
by selecting how fast we turn the coils on and off we can control how fast the rotor rotates; if they
are turned off and on more quickly, the rotor will also rotate more quickly and vice versa. So we
are in complete control of the magnitude of rotation, the speed of rotation, as well as the direction
of rotation.
Figure 6.12: Employing two coils instead of one will improve the stepper motor behavior and char-
acteristics.
We can improve the situation and cut the size of the step in half if we employ another
variation. As shown in Figure 6.13b, suppose that instead of turning off coil-2 at this instant and
turning on coil-1 we would keep it on while we turn on coil-1. With both coils on, since the rotor’s
magnet needs to balance itself at the point of least reluctance, it will align itself in the middle of
the arc between the two, thereby rotating only 45° (Figure 6.13c). If we then turn off coil-2 it will rotate the remaining 45° to complete the step (Figure 6.13d). This is called half stepping and is common in many applications. Therefore, without adding any new coils or other components, simply by controlling when the coils are turned on and off we can reduce the step size by half. The only remaining problem is that for most applications, even 45° is a large displacement. This is
because although we have control over displacement, speed, and direction of rotation, we actually
have no control over the location of the rotor in between the poles when it is under load. Therefore,
it is desirable to reduce the size of these steps even further. However, there is a limit to how many
coils we can add. In the following section, we will see how two different methods are employed to
reduce the step size of common stepper motors significantly without adding a significant number
of coils.
Figure 6.13: Schematic of an improved stepper motor.
6.7.1 CANSTACK STEPPER MOTORS
Canstack motors are rugged and simple in construction. The motor is comprised of a permanent magnet rotor made of a flexible sheet magnet that is similar to the type that is used for refrigerator magnets. These magnets, called Halbach array magnets, are made by embedding (powder) steel
filings in a flexible resin and magnetizing them with a machine. To understand the difference
between these magnets and a regular steel magnet, turn one of these magnets on its back and try
to stick it to any steel material. You will notice that the magnet does not stick. This is not due
to the fact that many of these magnets have a sheath of plastic, used for advertising, on them.
It is because these magnets are magnetized only on one side. Figure 6.14 schematically shows
how these magnets are a series of magnets next to each other with all their poles on one side; the
opposite side is not magnetic. Figure 6.14 also shows the construction of the rotor of the stepper
motor and a real rotor. The rotor is made of the same type of magnet, rolled into a cylinder. Therefore, the rotor will have a series of south and north poles sequentially located next to each
other.
Figure 6.14: The magnetic field of the rotor of a canstack stepper motor possesses a series of magnets
next to each other.
Figure 6.15 shows the stator of a canstack stepper motor. It is made up of two electromag-
nets, stacked over each other, each made up of two plates and a coil. Each plate has, in this case,
12 little fingers or tabs on it as shown. When a current flows in the coil, each of these plates becomes either a north or south pole. Therefore, there will be 12 tabs of north and 12 tabs of south created when a current flows in each coil. The coils can be turned on and off indepen-
dently of each other in either polarity by center-tapping the coil as was discussed in Section 6.6,
Figure 6.10. Notice that this means that when a coil is turned on, it creates an equivalent of
12 magnets (24 poles), or a total of 48 tabs and 48 poles around the stator when both coils are
turned on. But instead of having to make 24 individual coils within the motor and turning them
on and off sequentially, we only need to turn two magnets on and off. But notice that although
we only have two electromagnets, since each coil is center-tapped, we effectively have four coils,
referred to as Coil-1, Coil-2, Coil-3, and Coil-4 (Coil-1 and Coil-2 are the same coil, but with opposite polarity, etc.).
Figure 6.15: Canstack stepper motor is comprised of a permanent magnet rotor and a stacked, two-
stage stator with repeating poles that are staggered from each other to provide small step sizes.
Let’s call the plates (and thereby the tabs attached to each) A and B for the first electro-
magnet (Coil-1 and Coil-2) and C and D for the second electromagnet (Coil-3 and Coil-4).
Table 6.1 shows their magnetic poles for each polarity:
Table 6.1: The poles of the stepper motor electromagnets versus the polarity of the current

            Tabs A   Tabs B   Tabs C   Tabs D
Coil-1        N        S        -        -
Coil-2        S        N        -        -
Coil-3        -        -        N        S
Coil-4        -        -        S        N
The trick is that these plates are assembled such that the tabs form a staggered set so that they will have a sequence of A, C, B, D, A, C, B, D, A, C, ... Figure 6.16 shows this arrange-
ment in a linear fashion.
So, what is the effect of this arrangement? Suppose that we turn on Coils 1 and 3 at the
same time. This means tabs A and B will be N and S, and tabs C and D will be N and S. Therefore, the sequence of A, C, B, D, A, C, B, D, A, C, ... will result in N, N, S, S, N, N, S, S, etc. (please
follow this carefully). Similar sequences will form as we turn the coils on and off. Table 6.2 shows
the pattern when the coils are switched six times.
Notice that in Table 6.2, as highlighted, the field rotates in one direction. For example,
any two south poles next to each other advance one step as the coils are switched on and off
Figure 6.16: The four plates that constitute the two magnets of a canstack stepper motor are staggered relative to each other such that their tabs are sequentially repeating in an A, C, B, D, A, C, B, D, A, C, ... fashion.
Table 6.2: The sequence of magnetic poles of stepper motor tabs as the coils are sequentially turned on and off

Tab:                A   C   B   D   A   C   B   D   ...
Coil-1, Coil-3      N   N   S   S   N   N   S   S   ...
Coil-1, Coil-4      N   S   S   N   N   S   S   N   ...
Coil-2, Coil-4      S   S   N   N   S   S   N   N   ...
Coil-2, Coil-3      S   N   N   S   S   N   N   S   ...
Coil-1, Coil-3      N   N   S   S   N   N   S   S   ...
Coil-1, Coil-4      N   S   S   N   N   S   S   N   ...
sequentially, as do the rest. Therefore, if the rotor is aligned with the tabs such that its north is
between the two souths, the rotor moves with the sequence as shown in Figure 6.17.
The continuous motion of the stepper is accomplished by simply repeating the sequence of switching coils 1 through 4 in 1-3, 1-4, 2-4, 2-3 order as shown in Table 6.2. Reversing the sequence will force the rotor to turn backward. Since there are 48 tabs, each step will be 360°/48 = 7.5°. Consequently, the total rotation of the rotor will be equal to 7.5° × n, where n is the number of switchings. The faster we switch, the faster the rotor rotates. This way, we are in complete control of the total angular displacement, angular speed, and direction of rotation of the rotor.
If a microprocessor is used to run the stepper motor, as is the case in most devices, the
microprocessor turns four switches on and off that provide a current to each coil in a 1-3, 1-4, 2-
4, 2-3 sequence, which is extremely simple to program with a microprocessor. Figure 6.18 shows
a schematic of this set up.
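To illustrate how simple this is to program, here is a minimal Python sketch of the switching sequence. The print statement stands in for setting microprocessor output pins, and the timing value is a made-up placeholder:

import time

# Full-step switching sequence for the four (center-tapped) coils: 1-3, 1-4, 2-4, 2-3.
SEQUENCE = [(1, 3), (1, 4), (2, 4), (2, 3)]
STEP_ANGLE = 360.0 / 48        # 7.5 degrees per switching for a 48-tab canstack motor

def rotate(degrees, delay_s=0.01, reverse=False):
    # Total rotation = 7.5 degrees x n, so n switchings are needed.
    steps = int(round(abs(degrees) / STEP_ANGLE))
    order = list(reversed(SEQUENCE)) if reverse else SEQUENCE
    for n in range(steps):
        coil_a, coil_b = order[n % len(order)]
        print(f"energize Coil-{coil_a} and Coil-{coil_b}")   # stand-in for output pins
        time.sleep(delay_s)                                  # switching rate sets the speed

rotate(90)                  # 12 switchings: 12 x 7.5 = 90 degrees
rotate(45, reverse=True)    # 6 switchings in the reverse order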
Figure 6.17: The rotor follows the field as the field is advanced in the stator of a stepper motor. For
simplicity, only some of the tabs are shown.
Figure 6.18: The schematic of a simple set up to run a stepper motor with a microprocessor.
It should be mentioned here that there is much more to stepper motor drives, control
schemes, efficiency, and other issues that are beyond the scope of this discussion.
6.7.2 HYBRID STEPPER MOTORS
These stepper motors usually have a much smaller step size, for example 1.8° at full step and 0.9° at half step. This translates to 200 and 400 steps per revolution, respectively. However, this
is achieved with the same number of coils. To understand how this works, let’s look at a simple
principle that is not only used here, but also in calipers that are used for measuring dimensions
more accurately.
Imagine that a bar A with a certain length is divided into 10 portions as shown in Fig-
ure 6.19. Obviously, each portion will be 1/10th. Also imagine that bar B with the same length
is divided similarly into 10 portions. Therefore, the divisions of both bars A and B will be exactly
the same. If at any time the division marks of A and B are aligned, one of the bars would have
to move one full division in order to align the next set of division marks. For example, if division
marks 2 on A and B are aligned, the next possibility for alignment will be if B is moved one full
division until 3 on A will be aligned with 2 on B.
Figure 6.19: Aligning division marks of similar lengths requires one full-length motion.
Now imagine that we take the same length bars A and B, divide A into 10 portions as
before, but divide bar B into 11 divisions as shown in Figure 6.20a (other numbers of divisions
are perfectly fine too. Each number will result in a different proportion, but they are all fine).
Also imagine that at one point, division mark 3 on bar A is aligned with division mark 3 on B
(Figure 6.20b). Unlike the previous case where the division marks were all the same length, here
they are slightly different. Consequently, all it takes to align the next set of division marks is for
bar B to move only about 1/10th of this distance, or about 1/100th of the total length until division
mark 4 on A aligns with division mark 4 on B (Figure 6.20c). In other words, since the divisions
are no longer the same, bar B only has to move the distance of the difference between the two, or (1/10) - (1/11) ≈ 0.01 of the total length. This means that we have made the divisions so much smaller without having to draw 100 lines. This principle is used in calipers to measure dimensions to about 1/100th of an inch. It should be mentioned here that it does not matter what the number of divisions are, as long as they are not equal. So we could achieve similar results (albeit different values) with 10
and 8, 8 and 7, 20 and 21, or any other unequal pairs.
Figure 6.20: The unequal number of divisions on equal lengths allows for measuring much smaller
distances as in a caliper.
Figure 6.21 shows the construction of the rotor and stator of a hybrid stepper motor. Notice how the rotor and the stator have teeth or divisions on them. In this particular example, the rotor has 50 divisions, and the stator has the equivalent of 40 divisions. Just like the caliper, in order to move the rotor to align with the next division at any location, the rotor has to only move an angle of [(1/40) - (1/50)] × 360° = (0.025 - 0.02) × 360° = 1.8°, which is much smaller than the canstack step size. Combining these two seemingly unrelated concepts benefits us very much.
The rotor of a hybrid stepper motor is a simple magnet with one north and one south pole. To reduce the back-emf current in it as it rotates, the rotor is made of laminated layers attached together to form the rotor as is shown in Figure 6.21. The stator has four coils in it that can be individually turned on and off. Therefore, exactly like the canstack motors and with a similar schematic as in Figure 6.18, a sequence of 1-3, 1-4, 2-4, 2-3 or its reverse drives the hybrid stepper motor forward or backward with complete control over its displacement, speed, and direction.
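A small Python sketch of this step-size arithmetic, using the same division counts as the example above:

def step_angle_deg(stator_divisions, rotor_divisions):
    # The rotor only has to move by the difference between the two spacings.
    return abs(1.0 / stator_divisions - 1.0 / rotor_divisions) * 360.0

print(step_angle_deg(40, 50))            # 1.8 degrees per full step (hybrid example)
print(360.0 / step_angle_deg(40, 50))    # 200 full steps per revolution
print(360.0 / 48)                        # 7.5 degrees per step (canstack, 48 tabs)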
Figure 6.21: A hybrid stepper motor and its rotor and stator construction.
6.8 TRANSFORMERS
Transformers are used to increase or decrease voltages and currents. They function based on the same principles we have already discussed although there are no moving parts in them. They are
composed of a primary coil, a secondary coil, and an iron core to concentrate the flux and increase
the efficiency of the device.
The primary and secondary coils are simple coils with different numbers of turns (loops) in
them designated as N1 and N2. An AC current is fed into the primary coil which creates a varying
flux. According to Faraday’s Law, since the flux intensity is changing due to the AC current, it
induces a voltage into the secondary coil. In general, without an iron core to concentrate the flux,
the level of induced voltage is very low and most of the energy is wasted. However, in the presence
of a core, the efficiency of the system can be increased to 90% or better. Figure 6.22 shows the
schematic drawing of two ways transformers may be built. Figure 6.23 is a typical transformer.
The primary and secondary coils, the iron core, and the connections for different levels of voltage
can clearly be seen.
The induced voltage can be expressed as:

V_out = α V_in (N2/N1) cos β,    (6.3)
where V_out is the voltage in the secondary and V_in is the supplied voltage in the primary. The constant α represents the effects of the coupling between the primary and secondary coils as a result of the iron core concentrating the flux and can vary from near zero to a maximum of 1 under the best conditions. Larger values of α indicate a better and more efficient transformer. β is the angle between the primary and secondary coils. N1 and N2 are the number of turns in the primary and secondary coils.
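A minimal Python sketch of Equation (6.3); the coupling constant, turn counts, and input voltage are made-up illustrative values:

import math

def transformer_vout(v_in, n1, n2, coupling=1.0, angle_deg=0.0):
    # Equation (6.3): V_out = alpha * V_in * (N2 / N1) * cos(beta)
    return coupling * v_in * (n2 / n1) * math.cos(math.radians(angle_deg))

# A step-down transformer: 1,000 primary turns to 100 secondary turns,
# parallel coils (beta = 0) and an assumed coupling of 0.95.
print(transformer_vout(1200.0, n1=1000, n2=100, coupling=0.95))   # about 114 V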
Figure 6.22: Schematic drawing of two ways a transformer may be built.
Figure 6.23: A typical transformer with its coils, iron core, and connections.
Varying V_in will proportionally change V_out. Since AC voltage varies between zero and a
maximum value in both positive and negative directions, it follows that the induced voltage in the
secondary also varies between zero and a maximum value in both directions. Consequently, the
output of a transformer is also AC unless we do something else to rectify it. In certain applications
(such as automotive or charging batteries where the battery requires a DC current) the AC output
of the generator is rectified using diodes. The positive polarity current goes through directly, but
the negative portion is switched back into positive. Consequently, the current becomes DC.
In transformers used for increasing or decreasing voltage the primary and secondary coils
are usually kept parallel, and consequently, the angle between them is zero. As a result, cos β
is 1 and the induced voltage achieves its maximum value. However, in other applications such
as resolvers, the angle may be changed by rotating one of the coils relative to the other, changing
cos β and influencing the output voltage. Resolvers are used as sensors to measure the angle of
rotation of shafts and joints in systems such as robots.
The output voltage is also proportional to the ratio of the secondary to primary turns, N2/N1. This means that if the number of turns in the secondary coil is larger than in the primary coil, the output voltage is larger than the input voltage and vice versa. This is the primary application
of transformers; by selecting the ratio of the number of turns in the primary and secondary coils,
we can achieve any desirable voltage ratio.
As mentioned earlier, if N2/N1 is larger than 1, the output voltage will be increased. If it is smaller than 1, it will be decreased. Assume that we use a ratio of 10/1. This means that
the output voltage will increase by a factor close to 10. Does this mean that we have increased
the power of the system at no cost? Obviously if this were true, we could use a transformer to
“generate” additional power indefinitely at no cost, which is impossible. So what else should we
expect? Instead of considering only the voltage we must consider the power, which for electrical
systems is defined as:
P = VI,    (6.4)
where P is the power in watts, V is the voltage and I is the current. In other words, the power of
an electrical system is the product of its voltage and current. We have seen that, except for losses
(which are always present, and the best we can do is to approach 100% efficiency, but never reach
100%), the power should remain the same; output power should be equal to input power because
we do not generate power or energy out of nowhere. Therefore, we should expect that when we
increase the voltage, the corresponding current of the system reduces proportionally, and when
we decrease the voltage, its current increases proportionally. As an example, and assuming almost
100% efficiency, if we increase the voltage by a factor of 10, the current in the secondary will
be 1/10th of the current in the primary. And this is exactly what a transformer does. It increases
or decreases the voltage at the expense of the current. We are not changing the total power (or
energy) available; we are just transforming the ratios of the voltage and current at each other’s
expense.
Electric power transmission is the main application of this system. To better understand
this, let’s first look at electrical power loss in electrical systems. All electrical conductors, even the
best materials (like copper), show some resistance to the free flow of electrons. This means that as electrons move in a conductor, some of the energy they have is converted to heat energy. The power lost as heat energy in a conductor is expressed by:
P_lost = RI^2,    (6.5)
where I is the current and R is the resistance. Clearly, since I is squared, it is a much more
important factor than resistance. In other words, in order to reduce power loss in conductors, it is
more important and more effective to reduce the current than it is to reduce resistance. Resistance
can be reduced by increasing the cross section of the conductor (thicker wires) which increases
the weight and the cost of the wire, sometimes prohibitively. However, reducing the current at
the same rate reduces power loss much more significantly. And this is exactly why transformers
are used.
Electrical power generation is usually accomplished in power plants in locations where they
make the most sense. For example, hydroelectric power plants do not require fuel such as oil or
gas, generating (actually, converting the potential energy of the water behind a dam into) electri-
cal energy at very low cost; the power comes from the kinetic and/or potential energy of the water
running through the turbines. However, dams are usually not close to cities or communities where
electricity is needed and consequently, the power must be transmitted. Another concern might
be pollution, noise, and economics (cost of the land). In most cities it is impossible to operate a
power plant within the city limits. Power plants are in the outskirts and their power must be trans-
mitted. And perhaps most importantly, it is very uneconomical to have small power plants in each
neighborhood for the consumption of the small community around the plant; large power plants
are much more efficient and economical. Therefore, a few large power plants generate enormous
amounts of electrical energy that are transmitted to large areas for consumption. Nowadays, al-
most all plants, whether hydroelectric, fuel based, or solar and wind, are interconnected through a
grid which feeds all communities. Consequently, it is crucial to be able to transmit huge amounts
of electrical energy from one place to another.
However, as we saw, when power is transmitted through a conductor, some of it inevitably
converts to heat energy due to electrical resistance in the conductor. To reduce loss, we can either
use heavy-gauge copper wires at enormous weight and cost or try to reduce the current. To un-
derstand this issue, suppose that a power plant generates electricity at 100 volts of potential and 100 amps of current, yielding 100 × 100 = 10,000 watts of power. In this case, the loss of power in a length of wire with 1 ohm of resistance will be P_lost = RI^2 = (1)(100)^2 = 10,000 watts, an enormous amount, basically wasting all the power that was generated.
Now suppose that using a transformer, we increase the voltage to 10,000 volts. This means that, assuming there is no loss, the current will be reduced to 1 amp, still yielding 10,000 watts. In this case, the power loss for the same length of wire equaling 1 ohm will be P_lost = RI^2 = (1)(1)^2 = 1 watt. So the loss is 1/10,000th of the first case, allowing the power to be distributed
to a very large area. This means that the power at the point of consumption is at 10,000 volts and
1 amp, which is completely useless; we need high-current power at 110 volts (220 volts in many
other countries) to run our machines, appliances, and devices. However, if another transformer
with the opposite ratio of the first one is used to once again reverse the transformation, we can
recover the same 100 volts and 100 amps at the point of consumption and deliver appropriate
power to the user. This is exactly what is done. In the first part of the transmission journey over
rural transmission towers and lines for long distances, the voltage may be increased to hundreds
of thousands of volts with very little current (in the U.S., there are power lines with 115, 138,
161, 230, 345, and 500 kV). In sub-stations, this is reduced to tens of thousands of volts for local
transmission, and finally, reduced again by local transformers in neighborhoods to 110 volts and
delivered to users. Figure 6.24 shows typical transmission towers and transformer units on top of
an electrical pole, reducing electric power from tens of thousands of volts to 110 volts.
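A short Python sketch of this comparison, using the same numbers as the worked example above:

def line_loss_watts(resistance_ohm, current_amp):
    # Equation (6.5): P_lost = R * I^2
    return resistance_ohm * current_amp ** 2

P = 10000.0     # watts of power to deliver
R = 1.0         # ohms of line resistance, as in the example

for volts in (100.0, 10000.0):
    amps = P / volts                  # Equation (6.4): P = V I
    print(f"{volts:8.0f} V -> {amps:6.1f} A, line loss = {line_loss_watts(R, amps):,.0f} W")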
Figure 6.24: Typical transmission lines and transformers on power poles that reduce electric power
from tens of thousands of volts to 110 volts for domestic consumption.
However, all this is possible because we deal with AC power which provides the necessary
variation in the flux density needed for Faraday’s Law to work. DC power would not provide this
opportunity because it does not change; to induce a voltage in the secondary coil, there would have
to be a motion present which is not the case for transformers (otherwise, it becomes a generator
which we will see later). Nowadays, it is possible to electronically switch on and off a DC current
and cause it to induce a voltage in the secondary coil, and consequently, have a DC transformer.
But this was not the case in the past.
The story is that Thomas Edison had spent a lot of money to establish local DC-generating power plants in different neighborhoods of New York City and to transmit their power via copper wires to households. The first one was in 1882 on Pearl Street in lower Manhattan. However, as discussed, to reduce
power loss in the wires, he had to use very thick wires for the transmission of
low voltage, high current electricity. At the time when he started, transformers
did not exist anyway, but even if they did, they would have been ineffective
with a DC current. Later Tesla designed and built prototype transformers
with AC power. George Westinghouse, an entrepreneurial inventor and pio-
neer of Edison’s era, seized the opportunity to generate low cost hydroelectric
AC power at Niagara Falls, and by transforming it to high voltage and using
very thin wires, transmit the power to New York City at very low cost and
compete with Edison. The rivalry was intense, and at one point Edison had
his engineers design and build what is now known as the electric chair (which
is used for execution) in order to scare people from using AC power. How-
ever, this did not work, and Westinghouse became a huge company, at one
time employing more people than any other company in the U.S.
In reality, there are three lines of transmission for three-phase power (needed for higher
voltages and currents in larger plants and certain applications like three-phase motors). Each
of the three phases is treated exactly the same, but carried on a separate wire. To reduce the
electromagnetic effects of these high voltages, the wires are usually drawn in a braided manner
(they never touch; in fact they are apart from each other far enough to prevent arcing between
them, but braided).
Chargers we use to recharge our batteries are miniature transformers too. Their primary and secondary coils are designed to provide the proper voltage needed. Some chargers provide DC output. This is done externally by diodes, rectifiers, capacitors, etc. In other words, although
the output of the transformer is an AC current, diodes and rectifiers eliminate or switch the
negative polarity portion to a positive one, and capacitors or other averaging circuits modify the
rectified output close to a DC (see Section 6.10).
In some transformers the secondary coil is tapped at different counts of turns, generating
a variety of voltages which are accessible to the user. Therefore, the user may choose a variety of
different voltages depending on the application. Figure 6.23 shows such a transformer.
6.9 DC GENERATORS
For most cases, a generator is nothing different than a motor. Instead of supplying a current to
the motor and expecting it to rotate and provide a torque, a generator is rotated externally (by a
torque) and is expected to provide a current (or voltage) as long as it is connected to a load such
as a lamp. Otherwise, since it is not in a complete circuit, it will rotate without resistance and
no power will be generated. This is still within the parameters of Faraday's Law; because we are
changing the flux intensity by rotating the rotor, a voltage is induced in the conductors.
A DC generator is the same as a DC motor. Since we intentionally use brushes and com-
mutators, as discussed earlier, the output of a DC generator is discontinuous and choppy. This
means that although the current is always in one direction and the polarity is always the same,
the current does fluctuate. However, for most purposes such as charging batteries this is not a
problem. But for a true DC current that, similar to the current from a battery, does not fluctuate,
the output must be smoothed.
6.10 AC GENERATORS
As with DC generators, for most cases an AC generator is also the same as an AC motor. When
the rotor is rotated by an external torque, the magnetic field induces a voltage or current in the
stator as long as a complete circuit exists. Therefore, when attached to a load, it can operate the
device. However, in order to use an AC generator for charging batteries the current must be rec-
tified; the reverse polarity section of the current must be switched to forward polarity (remember
Figure 6.4 and how the polarity of an AC current reverses every 1/2 cycle). Simple diode arrange-
ments called rectifiers are used to do this, and therefore, although the current is not constant, it
can be used to charge batteries. is is the set-up used in automobiles too. e generator is usu-
ally an AC generator with rectifiers in it. Figure 6.25 shows a simple rectifier circuit made up of
four diodes in the form of a bridge. In the forward polarity portion the current flows through
diode B, through the load, through diode D and back to the source, forming a complete circuit.
For the reverse polarity portion of the current notice how the direction of the positive/negative
is now reversed. As soon as the polarity reverses, diode B no longer allows the current through.
Instead, the current flows through diode C, through the load, through diode A, and back to the
source. However, notice how in both cases, the direction of the output feeding through the load
is the same. As a result, the output is now rectified and is always positive. Once again, notice that
although the current is rectified, it is not constant. To create a constant-magnitude current that
resembles the current from a battery it is necessary to remove the variation. To do this a simple
circuit made up of a resistor and capacitor can be used to filter out the changes (this is called a
low-pass filter). This simple combination of a resistor and capacitor reduces the variations to a large extent and makes the current smooth. The capacitor charges up when the voltage is higher
and discharges back into the circuit when it is lower, smoothing the current. If too much current
is drawn from the source this smoothness reduces and shows ripples in the output.
It should be mentioned here that since AC induction motors (squirrel cage) do not have
magnets, they do not generate a voltage unless something else is done. This includes a capacitor
to give an initial charge to the rotor to start the current, which, as long as the rotor turns at or
above the nominal speed, will continue to generate electricity.
6.11 BACK-EMF ISSUES IN MOTORS AND TRANSFORMERS: LAMINATED IRON CORES
An interesting consequence of Faraday’s Law is that it is also present in reverse even when it is
undesirable. As we discussed earlier, due to the electromotive force phenomenon, when a current
is supplied to the rotor of a motor, it rotates. Conversely, due to back-emf, when the rotor is turned
by an external torque, the same Faraday’s Law induces a current in the stator. And, as we saw, a
transformer also functions based on the same electromotive force and Faraday’s Law.
Figure 6.25: A rectifier is used to change the direction of the negative portion of the AC current into
positive, thereby converting it to a DC current.
However, both in motors and transformers, since we need to concentrate the flux, we use a metal
core or a metal rotor. Based on the back-emf and the presence of varying flux, caused by an AC
current or as a result of using brushes and commutators, a current is also induced in the core of
the transformer or the rotor of a motor (also called eddy current). If the core is solid iron, due to
its low electrical resistance, the current can be large, creating a lot of heat (this is used in induction
heating, where food is cooked when the bottom of the pot or a pan is heated by eddy currents).
This is a huge waste of energy, and the heat must somehow be dissipated. In order to reduce the
effects of this back-emf in the core of the transformer or the rotor we need to reduce the flow
of the back-emf current. To do so, the core of the transformer or the rotor is usually made up of
thin layers of metal, laminated together, that are insulated from each other. Because the current
Load+_ACgeneratorAC currentinput+_Load+_ACgeneratorAC currentinput+_ForwardportionReverseportionDC currentoutputDC currentoutputABCDABCD184
6. ELECTROMOTIVE FORCE
flows in very thin layers that have higher electrical resistance, and consequently carry lower currents,
heat generation is greatly reduced. The layers are laminated together by pins, rivets, and
welds, or pressed together. Figure 6.26 shows (a) the stator of an induction motor, (b) the rotor
of a small DC motor, (c) a transformer core, and (d) the magnetic rotor of a stepper motor. All
of these are made of thin layers laminated together, but insulated from each other, to form the
required shape.
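A rough way to see why thin laminations help is the classical thin-sheet approximation for eddy-current loss per unit volume, P ≈ (π · B_peak · f · t)^2 / (6ρ), where B_peak is the peak flux density, f the frequency, t the sheet thickness, and ρ the electrical resistivity. This formula is not given in the chapter; it is quoted here, with made-up numbers, only to show that the loss falls with the square of the lamination thickness.

import math

def eddy_loss_per_volume(b_peak, freq, thickness, resistivity):
    """Classical thin-lamination estimate of eddy-current loss, W per cubic metre."""
    return (math.pi * b_peak * freq * thickness) ** 2 / (6.0 * resistivity)

# Illustrative values only: 1 T peak flux, 60 Hz, resistivity of iron ~1e-7 ohm-m.
b, f, rho = 1.0, 60.0, 1.0e-7

for t in (0.002, 0.001, 0.0005):     # 2 mm, 1 mm, and 0.5 mm laminations
    print("thickness %.4f m -> about %.0f W per cubic metre"
          % (t, eddy_loss_per_volume(b, f, t, rho)))
# Halving the lamination thickness cuts the eddy-current heating by roughly a factor of four.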
Figure 6.26: The rotors of motors and the cores of transformers are usually made of thin layers of
metal that are insulated from each other and connected by pins, rivets, screws, or welds to take the
required shape and counteract the induced back-emf due to the changing flux.
6.12 BACK-EMF IN DC MOTORS: SERVOMOTORS
Back-emf plays a significant role in the performance of all motors, but especially DC motors. To
understand this issue first recall our discussion about the relationship between an applied torque,
mass moment of inertia, and angular acceleration as described by Equation (5.16), repeated here:
T = Iα    (5.16)
As we discussed, when a torque is applied to a rotating body, it accelerates and rotates faster;
the angular acceleration is inversely proportional to its mass moment of inertia. As long as the
torque is present and exceeds friction and other resistive torques, the body will continue to
accelerate and rotate faster.
Now consider a DC motor that is connected to a battery. As the current flows through the
motor and the electromotive force exerts a torque on the rotor, it will start to rotate. In inverse
proportion to the mass moment of inertia of the rotor, it will continue to accelerate and rotate faster
as long as the torque is present (of course, for lighter rotors the acceleration is higher, and vice versa).
However, since this torque continues to be present, should we not expect to see the rotor’s speed
continue to increase, theoretically to infinity? In other words, while the current continues, so
does the torque and the acceleration, increasing the angular velocity indefinitely until the rotor
disintegrates. But we know from experience that if we connect a motor to a battery, once the speed
reaches its nominal value it no longer increases. Why? This is due to the same back-emf.
Once again, let’s imagine that we connect a DC motor to a battery. Since there is a torque,
the rotor accelerates and its angular velocity increases. However, as mentioned before, since the
rotor contains coils that are moving in the presence of a magnetic field, a back-emf voltage (or
current) is induced in the coil which is in the opposite direction of the supplied current. As a result,
it reduces the effective current to a value that is smaller than the supplied value. As the rotor
increases its speed, the back-emf current increases and effectively reduces the supplied current
until such a time that the supplied current and the back-emf current equal each other, but in
opposite directions; the effective current at that speed is zero. Therefore, the effective torque at
that speed goes to zero as does the resulting angular acceleration, and the velocity no longer
increases. As a result, the DC motor continues to run at that nominal speed until conditions
change, not increasing to infinity.
Now imagine that we attach a load, for example a fan blade or a wheel, to the motor. Since
the effective torque on the rotor at the nominal speed is zero but we have added an external load,
there will be a negative acceleration (deceleration) which will slow down the rotor. However, as
it slows down, the back-emf will be lower, increasing the effective current, which increases the
torque. Therefore, as we increase the external load on the motor, it will slow down until the torque
generated by the motor equals it. At this point the motor will rotate at a constant speed that is
lower than when it was not loaded. If we increase the load, the motor will further slow down to
decrease the back-emf, increasing the effective current and increasing the torque supplied. This
process continues as the load changes; with an increased load the motor slows down, and with
a decreased load it speeds up to match the back-emf with the required torque. You may have
experienced this phenomenon when dealing with DC motors, whether in appliances, toys, or
other devices.
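A small numerical sketch can make this behavior concrete. The Python script below integrates T = Iα for a simple permanent-magnet DC motor model in which the effective current is (V − k_e·ω)/R. All of the constants (supply voltage, resistance, motor constants, inertia, load torque) are made up for the illustration and are not taken from the text: with no load the speed settles where the back-emf nearly cancels the supply, and when a load torque is switched on the motor slows to a new, lower steady speed, exactly as described above.

# Illustrative permanent-magnet DC motor model (all values made up for the sketch).
V = 12.0        # supply voltage, V
R = 1.0         # armature resistance, ohm
k_e = 0.01      # back-emf constant, V per rad/s
k_t = 0.01      # torque constant, N*m per A
J = 1.0e-4      # rotor mass moment of inertia, kg*m^2

dt = 1.0e-4
omega = 0.0     # rotor speed, rad/s

for step in range(200000):                       # 20 s of simulated time
    t_load = 0.0 if step * dt < 10.0 else 0.05   # apply a 0.05 N*m load at t = 10 s
    i_eff = (V - k_e * omega) / R                # back-emf subtracts from the supply
    torque = k_t * i_eff - t_load                # net torque on the rotor
    omega += (torque / J) * dt                   # alpha = torque / J, Euler integration
    if step in (99999, 199999):
        print("t = %4.1f s  speed = %7.1f rad/s  current = %5.2f A"
              % ((step + 1) * dt, omega, i_eff))

With these made-up constants the unloaded motor settles near 1,200 rad/s with almost no current, and after the load is applied it settles near 700 rad/s while drawing about 5 A, illustrating how the speed drops until the motor torque matches the load.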
In many applications it is necessary to maintain an exact speed regardless of the load on the
motor. For example, the speed of the motors of a robot arm that moves in space for welding two
pieces together must be controlled very precisely; otherwise the welds will be too thick, too
thin, or non-uniform. However, the load of the arm changes as it moves through the workspace.
Without a control system to maintain correct velocities it would be impossible to perform a sat-
isfactory job. In order to control the speed or maintain a constant speed we must use a controller
that, through the use of an appropriate sensor, measures the velocity of the rotor and compares
it to the desired value. If the speed is lower than desired, it will increase the supplied current (or
voltage) to the motor, increasing the effective current and increasing the speed. If the speed is
too high, it will decrease the supplied current (or voltage) to the motor, reducing the effective
current and torque and slowing down the motor. This process continues as long as the motor is
running. Such a system is called a feedback control system. It feeds back the sensed information
to the controller which compares it with the desired value, and makes appropriate adjustments
as necessary. Feedback systems are not just for controlling a motor but for countless devices and
systems, and may take many shapes and control many other factors. But their principal intent is to
control some characteristic of a device by sensing its state and making adjustments
to the output.
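The sketch below adds a simple proportional-integral feedback loop on top of the same kind of motor model used earlier. Again, every constant (motor parameters, controller gains, voltage limit, desired speed, load steps) is an assumption made for the illustration: the controller compares the measured speed with the desired value and raises or lowers the supplied voltage so the speed stays near the target even when the load torque changes. It is only an illustration of the idea of feedback control, not the design of any particular servomotor.

# Illustrative PI speed controller wrapped around a simple DC motor model.
R, k_e, k_t, J = 1.0, 0.01, 0.01, 1.0e-4    # made-up motor constants
Kp, Ki = 0.2, 2.0                           # made-up controller gains
V_MAX = 24.0                                # supply voltage limit, V

omega, integral = 0.0, 0.0
omega_ref = 800.0                           # desired speed, rad/s
dt = 1.0e-4

for step in range(200000):                  # 20 s of simulated time
    t_load = 0.02 if step * dt < 10.0 else 0.06   # load torque steps up at t = 10 s
    error = omega_ref - omega               # sensed speed vs. desired speed
    u = Kp * error + Ki * integral          # raw controller command
    v = min(max(u, 0.0), V_MAX)             # commanded voltage, clamped to the supply
    if u == v:                              # crude anti-windup: integrate only when not saturated
        integral += error * dt
    i_eff = (v - k_e * omega) / R
    omega += ((k_t * i_eff - t_load) / J) * dt
    if step % 50000 == 49999:
        print("t = %4.1f s  speed = %6.1f rad/s  voltage = %5.2f V"
              % ((step + 1) * dt, omega, v))

In this sketch the speed holds close to 800 rad/s both before and after the load increase; the controller simply raises the commanded voltage (from roughly 10 V to roughly 14 V) to make up for the extra load, which is the essence of the feedback idea described above.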
A motor that is equipped with such a controller is called a servomotor because its velocity
can be controlled. In fact, with additional position-sensor feedback, we can easily measure how
much the rotor rotates, and turn it off as it approaches the desired angle of rotation. So, the total
displacement and the speed of rotation of servomotors can both be specified and controlled. But
the main point of this discussion is that the back-emf continues to play a pivotal role in the way a
motor responds to always-varying loads and the way it is controlled. Without the controller, the
desired speeds of motors must be maintained manually.
6.13 ADVANTAGES AND DISADVANTAGES OF DIFFERENT
MOTORS
Different types of motors have different characteristics, advantages, and disadvantages that make
them unique in their applications and utility. In this section we will look at each type and learn
about their characteristics. The major issues related to motors are heat rejection, reliability and
life expectancy, torque rating, ability to reverse direction, and control of displacement and speed.
Heat rejection is an important issue in motors. As we discussed before, when a current flows
through a conductor, due to the ever-presence of some resistance in the wires, heat is generated
according to Equation (6.5). This heat increases as the current or the resistance increases. If not
rejected or dissipated, the heat can severely damage the motor. Heat production is more prominent
when the motor is under load. As we discussed, when the load on a motor increases, it slows down,
the back-emf is lower, and the effective current is higher to provide additional torque. But the
higher current also means higher heat production. The worst case is when the load is so high that
the motor eventually stops for lack of enough torque; with the rotor stopped, the back-emf is zero
and the current is at its maximum. This is called the stall condition, in which heat generation is at
its maximum. A stall condition can burn a motor if it is not prevented.
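A back-of-the-envelope calculation shows why stall is the worst case for heating. With the rotor stopped there is no back-emf, so the current is simply the supply voltage divided by the winding resistance, and the heat generated follows the current-squared-times-resistance relationship the text cites as Equation (6.5). The numbers in the sketch below are illustrative assumptions only.

# Illustrative stall-heating estimate for a small DC motor (made-up values).
V = 12.0    # supply voltage, V
R = 0.8     # armature winding resistance, ohm
k_e = 0.01  # back-emf constant, V per rad/s

for omega in (1000.0, 500.0, 0.0):     # lightly loaded, heavily loaded, stalled
    i = (V - k_e * omega) / R          # effective current after back-emf
    print("speed %6.1f rad/s -> current %5.1f A, heat %6.1f W" % (omega, i, i * i * R))

With these made-up values the heat rises from a few watts when running lightly loaded to well over a hundred watts at stall, which is why a stalled motor overheats so quickly.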
In DC motors the current flows through the rotor coils and heat is therefore accumulated
in the rotor. This heat has to flow from the rotor through the air gap into the stator, and through
the stator and the body of the motor, out to the environment either by radiation (if you are close
to a motor, you may feel its heat even if you do not touch it), convection (air circulating around
the motor’s body and taking the heat with it as it warms up), or in some cases by conduction
(when the motor is attached to something else and the heat flows through it). This is a long
path with high resistance provided by the air gap (air is a very good heat insulator compared to
metals). On the other hand, heat generation in AC motors is in the stator because the current
flows through the stator coil. Therefore, heat only has to flow through and out of the body. The
heat path is much shorter and simpler as shown in Figure 6.27. Therefore, AC-type construction
where the stator carries the current is more rugged, can withstand a much higher current, and
consequently, produce a larger torque. As a result, AC-type motors are generally more powerful
than their counterparts in DC. Notice that stepper motors have an AC-type construction where
the current flows through the stator although they operate on a DC current. Therefore, they can
generally handle larger currents.
Figure 6.27: Heat rejection path for AC- and DC-type construction.
Reliability and life expectancy result from simplicity in design and the use of fewer parts.
DC motors have more parts, including commutators, brushes, and springs. Of more concern are
the brushes that wear out and need replacement once in a while. As a result, AC-type motors,
stepper motors, and brushless DC motors that do not employ commutators and brushes are
generally more rugged and longer lasting.
Torque rating relates to the response of the motor in relation to the supplied current. The
generated torque of DC motors is nearly proportional to the supplied current. AC motors can
handle larger currents and as a result can be more powerful for the same coil sizes and dimensions.
However, stepper motors are generally the weakest. The largest torque they develop (called holding
torque) is when they do not rotate at all. As they start to rotate, their torque decreases rather rapidly
to the point that if they rotate fast, the torque becomes so small that the motor will miss steps.
Since steppers do not usually have feedback systems, the controller will not be aware of their
missed steps; this can have unacceptable consequences. The main reason for this behavior is that
since the fields in the stepper coils are turned on and off very rapidly as their speed increases, the
back-emf current fights the supplied current and severely affects the torque. There are remedies
to minimize the effects of back-emf (such as micro-stepping and the application of zener diodes,
etc.) at an added cost.
The ability to reverse direction is a major factor. In many cases there is no need to reverse
the direction of rotation of the motor (such as a fan). In that case, AC-type motors have many
advantages. But when direction control is important, the choice is either a reversible AC motor
or a DC motor (including universal motors). Because of this, most servomotors are of DC-type.
Displacement and speed control is another deciding factor in the choice of motors. For
example, stepper motors and brushless DC motors are run one step at a time, and consequently, it
is easy to count the number of steps (or actually command the controller to send a known number
of signals to the motor to move the rotor an exact number of degrees) and how fast the signals
are sent and therefore control the displacement and speed of the motor. Therefore, there is no
need for a controller system or feedback sensors to measure the motion of the motors. However,
these motors do require drive circuitry to operate, with an added cost. Additionally, if these motors
miss a step, there is no control system involved and no feedback system to determine the error
and correct the motor. As a result, they can only be used for situations where there is little chance
of missing a step or when they are reset often, like in a printer where the motor’s position is reset
at the end of every line.
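Because each step moves the rotor by a fixed, known angle, the controller can compute position and speed simply by counting the pulses it sends, with no sensor at all. The short sketch below assumes a hypothetical 1.8-degree-per-step motor (200 full steps per revolution, a common value but not one stated in the text); if the motor missed steps, this open-loop bookkeeping would silently drift from the true position, which is exactly the limitation discussed above.

STEP_ANGLE_DEG = 1.8          # assumed full-step angle (200 steps per revolution)

def commanded_motion(num_steps, step_rate_hz):
    """Open-loop estimate of rotation and speed from the pulses sent to a stepper."""
    angle_deg = num_steps * STEP_ANGLE_DEG
    speed_deg_per_s = step_rate_hz * STEP_ANGLE_DEG
    move_time_s = num_steps / step_rate_hz
    return angle_deg, speed_deg_per_s, move_time_s

# Example: command 400 steps (two full turns) at 500 steps per second.
angle, speed, duration = commanded_motion(400, 500.0)
print("commanded rotation: %.1f degrees" % angle)       # 720.0 degrees
print("rotation speed:     %.1f deg/s" % speed)         # 900.0 deg/s
print("move duration:      %.2f s" % duration)          # 0.80 s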
DC and AC motors simply rotate as long as there is a current, and therefore, there is no
control over their motion unless we employ a control system that, through the use of sensors,
measures both the rotational displacement and speed of the rotor and provides control over them
(namely a servomotor). In this case, there is no need for drive circuitry as in a stepper motor or
brushless DC motor, but there is a need for a control system. So depending on the application,
the users must select the best choice based on their needs.
These characteristics are also present in other types of motors. Depending on their char-
acteristics, you can determine their utility and application. For example, any motor that does not
have a rotor through which current flows will not have to deal with heat rejection issues. Un-
less other means are provided, AC currents from an outlet cannot be modified, and therefore,
AC-type motors cannot be reversed. So even if we have not discussed other motors here, you can
determine their characteristics by comparison with these fundamental types.
Next time you see a motor or a transformer, especially if you have the chance to open it up
and see its inside, see if you can determine what type of a motor it is and why it is used for that
particular purpose. Electromagnetism and electromotive force are an important part of our daily
lives in countless ways.
Author’s Biography
SAEED BENJAMIN NIKU
Saeed Benjamin Niku is a professor of mechanical engineering at California Polytechnic State
University (Cal Poly), San Luis Obispo, California. He has taught courses in mechanics, robotics,
design, and creativity, and has been involved in the design of many products, including many
assistive devices for the disabled and new robotic devices. Dr. Niku’s publications include a statics
workbook, an introduction to robotic analysis book (in its second edition), and a creative design of
products and systems book. He enjoys making furniture and utilitarian products as well as artistic
artifacts with wood, metals, glass, leather, and other materials.
He received a B.S. in mechanical engineering from Tehran Polytechnic in 1975, an M.S. in
mechanical engineering from Stanford University in 1976, and a Ph.D. in mechanical engineering
from the University of California, Davis in 1982.
Dr. Niku is a licensed professional engineer in the State of California.
Index
absolute zero, 7
AC current, 159, 164, 176, 181, 189
AC motor, 163, 188
acceleration, 25, 57, 62, 84, 91, 147
acceleration, angular, 148, 151
acceleration, centripetal, 28
aileron, 87
altitude, 57
amplitude modulation, AM, 45
angular acceleration, 148, 151
angular deflection, 137, 139
angular momentum, 83
Arctic Circle, 75
asynchronous motor, 165
auditory canal, 51
auditory nerve, 52
Autumnal Equinox, 75
back-emf, 155, 161, 182, 185, 188
basilar membrane, 52
Bernoulli, 85
Bismuth, 3
bottom-dead-center, 107, 110
brushless motor, 188
caliper, 174
Caloric intake, 6
Calorie, 8, 54
canstack stepper motor, 169
cantilevered beam, 32, 42
capacitance, 44
capacitor, 15, 43, 182
center-tapped, 165
centripetal acceleration, 28, 64, 71
centroid, 131
chemical energy, 7
coefficient of drag, 18
coefficient of friction, 156
cold working, 145
combined cycle, 124
commutator, 162
compression ratio, 108, 118
condenser, 9, 101
connecting rod, 106
conservation of energy, 85
convection, 72
Coriolis acceleration, 57, 69, 75, 91
crankshaft, 106
critical damping, 27
cross product, 59
current, AC, 14
current, DC, 14
damping, 23
damping ratio, 26
DC current, 159, 177, 181
DC motor, 188
deceleration, 63, 185
deflection, 128, 142
detonation, 107
Diesel, 118
dot product, 59
ear, 50
eccentric shaft, 121
eddy current, 183
Edison, Thomas, 180
efficiency, 5, 10, 16, 97, 117, 158, 173, 176
elastic energy, 24
elastic limit, 144
elastic strength, 144
electric car, 16
electric energy, 179
electric power, 160
electromotive force, 13, 155, 161
elongation, 142
EMF, 13, 18, 155
energy conversion, 11
entropy, 1, 97, 115
epitrochoid, 121
equilibrium, 23
Eustachian tube, 51
evaporator, 9
exergy, 11
Faraday’s Law, 160, 182
feedback control, 186
Ferrel cell, 74-81
filter, low pass, 47
first law, 7, 11
flux, 157, 176
flywheel, 90, 150
Fourier series, 53
Freon-12, 99
frequency modulation (FM), 45
friction, 11, 19, 25, 37, 97
fuel-air, 6, 107, 118
Gemini Agena, 92
generator, AC, 182
generator, DC, 181
drag, 86
gyroscope, 90
gyroscopic motion, 81
Hadley cell, 74-81
Halbach array magnet, 169
half-stepping, 168
hardening, 145
harmonics, 42
hearing, 23
heat exchanger, 10, 102
heat loss, 8
heat rejection, 186
helix angle, 40
hybrid, 12
hybrid stepper motor, 174
hydroelectric powerplant, 16
ideal gas, 96
inductance, 44
induction heating, 183
induction motor, 165
internal combustion engine, 12, 105, 151
Henry, Joseph, 160
Kelvin, 118
kinetic energy, 11, 15, 23, 31, 72, 120, 179
knock sensor, 109
knocking, 107
larynx, 42
latitude, 57
life expectancy, 188
longitude, 57
low-pass filter, 182
LVDT, 158
magnetic field, 157
magnetic field current, 13
magnification factor, 26
mass moment of inertia, 147, 151, 153
mechanical energy, 7
metabolic rate, 8
micro-stepping, 188
microprocessor, 172
modulus of elasticity, 33, 128, 141, 144
modulus of rigidity, 139
moment of inertia, 53, 61, 125
motor, AC, 163, 188
motor, asynchronous, 165
motor, brushless, 188
motor, canstack stepper, 169
motor, DC, 188
motor, hybrid stepper, 174
motor, induction, 165
motor, reversible AC, 165
motor, servo, 185, 188
motor, squirrel cage, 165
motor, stepper, 166
motor, synchronous, 165
motor, universal, 166
natural frequency, 23, 31, 42, 49, 53
nature, 6, 73, 133
neutral axis, 131, 146
non-interference engine, 113
North Pole, 57
nuclear powerplant, 16
Octane number, 108
ossicle, 51
oval window, 51
Ozone layer, 100
parallel axis theorem, 133, 153
pendulum, 31, 44, 53, 161
Petra Valley, 3
pinna, 50
pitch axis, 85
pitot tube, 85
plastic deformation, 144
PLC, 166
Polar cell, 74-81
polar moment of inertia, 137, 138
potential energy, 23, 31, 61, 179
power plant, 10, 16, 121, 179
prevailing winds, 57
programmable logic controller, 166
proportional stress limit, 143
radiation, 7
Rankine, 118
rectifier, 182
reliability, 188
reluctance, 167
residual stress, 12
resolver, 178
resultant, 58
reversible AC motor, 165
right-hand-rule, 60
robot, 68, 91
roll axis, 85
rotary engine, 105, 121
rotating frame, 65, 69
rotor, 162, 175, 183, 187
rudder, 89
scalar, 58
second law, 7, 10, 12
second moment of the area, 125, 146
self-locking, 41
servomotor, 185, 188
shear, 130, 146
Siberian Express, 57
slider crank mechanism, 106
solenoid, 158
sound energy, 51
spark-ignition, 105
specific gas constant, 96
specific volume, 96
specific weight, 149
spring constant, 23, 142
squirrel cage motor, 163, 165, 182, 187
stabilator, 88
stabilizer, 88
stainless steel, 4
stall condition, 187
stator, 162, 164, 175, 182, 187
stepper motor, 166
strain, 141
stress, 141, 146
Summer Solstice, 75
synchronous motor, 165
tachometer, 33
Tacoma Narrows bridge, 54
Tesla Motor, 16
thermal energy, 7, 12, 20
thermodynamics, 7, 11, 95, 116
thruster, 93
top-dead-center, 106, 110
torque, 61, 87, 139, 182, 188
torque, holding, 188
torsion, 137
transformer, 162, 176, 179, 183
translation, 68
tremolo, 36
tuning fork, 33
tympanic membrane, 51
ultimate strength, 145
ultra capacitor, 13
ultraviolet light, 4
universal motor, 166
vector, 58, 70
vehicle, battery, 13
vehicle, electric, 16
vehicle, plug-in hybrid, 16
vehicle, zero emission, 13
venturi, 85
Vernal Equinox, 75
vibration, 23
vocal cords, 42
Wankel engine, 105, 121
Westinghouse, George, 181
wind induced flutter, 54
Winter Solstice, 75
work, 7, 61, 96, 109
worm gear, 37
yaw axis, 85
yield strength, 144
zener diode, 188
zero emission, 13
zeroth law, 7
zone of mixing, 74
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=6813538.pdf&bkn=6813537&pdfType=book
SYNTHESIS LECTURES ON ENGINEERING
Series ISSN: 1939-5221
Systems Engineering
Building Successful Systems
Howard Eisner, The George Washington University
This book provides an overview of systems engineering, its important elements, and aspects of management
that will lead in the direction of building systems with a greater likelihood of success. Emphasis is placed
upon the following elements:
• How the systems approach is defined, and how it guides the systems engineering processes
• How systems thinking helps in combination with the systems approach and systems engineering
• Time lines that define the life cycle dimensions of a system
• System properties, attributes, features, measures and parameters
• Approaches to architecting systems
• Dealing with requirements, synthesis, analysis and cost effectiveness considerations
• Life cycle costing of systems
• Modeling, simulation and other analysis methods
• Technology and its interplay with risk and its management
• Systems acquisition and integration
• Systems of systems
• Thinking outside the box
• Success and failure factors
• Software engineering
• Standards
• Systems engineering management
Together, these top-level aspects of systems engineering need to be understood and mastered in order
to improve the way we build systems, as they typically become larger and more complex.
About SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis
Digital Library of Engineering and Computer Science. Synthesis Lectures
provide concise, original presentations of important research and development
topics, published quickly, in digital and print formats. For more information
visit www.morganclaypool.com
Morgan & Claypool Publishers
ISBN: 978-1-60845-701-4
www.morganclaypool.com
Morgan & Claypool Publishers
Systems Engineering
Building Successful Systems
Howard Eisner
SYNTHESIS LECTURES ON ENGINEERING
Systems Engineering:
Building Successful Systems
Copyright © 2011 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations in
printed reviews, without the prior permission of the publisher.
Systems Engineering: Building Successful Systems
Howard Eisner
www.morganclaypool.com
ISBN: 9781608457014 (paperback)
ISBN: 9781608457021 (ebook)
DOI 10.2200/S00349ED1V01Y201104ENG014
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING
Lecture #14
Series ISSN
Synthesis Lectures on Engineering
Print 1939-5221 Electronic 1939-523X
Synthesis Lectures on Engineering
Systems Engineering: Building Successful Systems
Howard Eisner
2011
Fin Shape Thermal Optimization Using Bejan’s Constructal Theory
Giulio Lorenzini, Simone Moretti, and Alessandra Conti
2011
Geometric Programming for Design and Cost Optimization (with illustrative case study
problems and solutions), Second Edition
Robert C. Creese
2010
Survive and Thrive: A Guide for Untenured Faculty
Wendy C. Crone
2010
Geometric Programming for Design and Cost Optimization (with Illustrative Case Study
Problems and Solutions)
Robert C. Creese
2009
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey and Jeffrey W. Holmes
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Analog Multimedia
Explorations
Lina J. Karam and Naji Mounsef
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Digital Multimedia and
Robotics Explorations
Lina J. Karam and Naji Mounsef
2008
CAD/CAM of Sculptured Surfaces on Multi-Axis NC Machine: The DG/K-Based
Approach
Stephen P. Radzevich
2008
iv
Tensor Properties of Solids, Part Two: Transport Properties of Solids
Richard F. Tinder
2007
Tensor Properties of Solids, Part One: Equilibrium Tensor Properties of Solids
Richard F. Tinder
2007
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
2007
Project Management for Engineering Design
Charles Lessard and Joseph Lessard
2007
Relativistic Flight Mechanics and Space Travel
Richard F. Tinder
2006
Systems Engineering:
Building Successful Systems
Howard Eisner
The George Washington University
SYNTHESIS LECTURES ON ENGINEERING #14
Morgan & Claypool Publishers
ABSTRACT
This book provides an overview of systems engineering, its important elements, and aspects of
management that will lead in the direction of building systems with a greater likelihood of success.
Emphasis is placed upon the following elements:
• How the systems approach is defined, and how it guides the systems engineering processes
• How systems thinking helps in combination with the systems approach and systems engineering
• Time lines that define the life cycle dimensions of a system
• System properties, attributes, features, measures and parameters
• Approaches to architecting systems
• Dealing with requirements, synthesis, analysis and cost effectiveness considerations
• Life cycle costing of systems
• Modeling, simulation and other analysis methods
• Technology and its interplay with risk and its management
• Systems acquisition and integration
• Systems of systems
• Thinking outside the box
• Success and failure factors
• Software engineering
• Standards
• Systems engineering management
Together, these top-level aspects of systems engineering need to be understood and mastered in
order to improve the way we build systems, as they typically become larger and more complex.
KEYWORDS
systems engineering, systems approach, systems life cycle, system measures, architecture,
synthesis, analysis, cost-effectiveness, system costing, technology, risk management,
software engineering, systems acquisition, integration
Contents
Preface . . . . . . . . . . . . . . . . . . . . xiii
1 Definitions and Background . . . . . . . . . . . . . . . . . . . . 1
1.1 Definitions and Difficulties . . . . . . . . . . . . . . . . . . . . 1
1.2 The Systems Engineer . . . . . . . . . . . . . . . . . . . . 2
1.3 Process . . . . . . . . . . . . . . . . . . . . 3
1.4 Director of Systems Engineering . . . . . . . . . . . . . . . . . . . . 3
References . . . . . . . . . . . . . . . . . . . . 4
2 The Systems Approach . . . . . . . . . . . . . . . . . . . . 5
2.1 Additional Related Factors . . . . . . . . . . . . . . . . . . . . 7
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3 Systems Thinking . . . . . . . . . . . . . . . . . . . . 9
3.1 The Fifth Discipline . . . . . . . . . . . . . . . . . . . . 9
3.2 Thinking in Systems . . . . . . . . . . . . . . . . . . . . 10
3.3 Systems Thinking and Heuristics . . . . . . . . . . . . . . . . . . . . 10
3.4 Systems Thinking and Special Topics . . . . . . . . . . . . . . . . . . . . 11
3.5 General Systems Theory . . . . . . . . . . . . . . . . . . . . 11
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
4 Key Elements of Systems Engineering . . . . . . . . . . . . . . . . . . . . 13
4.1 Other Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
5 The Life Cycle Dimension . . . . . . . . . . . . . . . . . . . . 17
5.1 Generic Life Cycle Phases . . . . . . . . . . . . . . . . . . . . 17
5.2 A DoD Example . . . . . . . . . . . . . . . . . . . . 18
5.3 A NASA Example . . . . . . . . . . . . . . . . . . . . 19
5.4 Systems Engineering Across the Life Cycle . . . . . . . . . . . . . . . . . . . . 20
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
6 System Properties, Attributes and Features (PAFs) . . . . . . . . . . . . . . . . . . . . 21
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
7 Measures and Parameters . . . . . . . . . . . . . . . . . . . . 24
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
8 Architecting . . . . . . . . . . . . . . . . . . . . 27
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
9 Functional Decomposition . . . . . . . . . . . . . . . . . . . . 31
9.1 A Simple Computer System . . . . . . . . . . . . . . . . . . . . 31
9.2 A C4ISR System . . . . . . . . . . . . . . . . . . . . 31
9.3 Earth-Observing System (EOSDIS) . . . . . . . . . . . . . . . . . . . . 32
9.4 FAA’s NextGen System . . . . . . . . . . . . . . . . . . . . 32
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
10 Requirements Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
10.1 Requirements Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
10.2 Derived Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
10.3 Some NASA Perspectives . . . . . . . . . . . . . . . . . . . . 36
10.4 Top Half Dozen Requirements Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . 37
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
11 Synthesis . . . . . . . . . . . . . . . . . . . . 40
11.1 Supporting Tables and Views . . . . . . . . . . . . . . . . . . . . 42
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
12 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
12.1 Deeper Levels of Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
12.2 Analysis of Alternatives (AoA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
13 Cost-Effectiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
13.1 Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
14 Life Cycle Costing . . . . . . . . . . . . . . . . . . . . 52
14.1 Life Cycle Cost Model Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
14.2 Bottoms-Up and Top Down Cost Estimation Notions . . . . . . . . . . . . . . . . . . . . . . 53
14.3 Price . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
14.4 NASA and Cost Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
15 Modeling and Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
15.1 Four Illustrative Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
15.2 Simulation . . . . . . . . . . . . . . . . . . . . 57
15.3 Domains of Interest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
15.4 Modeling and Simulation in the DoD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
16 Other Analysis Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
16.1 System Errors . . . . . . . . . . . . . . . . . . . . 60
16.2 Errors as Requirements or Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
16.3 Reliability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
16.4 Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
16.5 “Subjective” Analysis and Measurement . . . . . . . . . . . . . . . . . . . . 62
16.6 Other Topics of Interest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
17 The Role of Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
17.1 Office of Technology Assessment (OTA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
17.2 The Department of Defense (DoD) and Technology . . . . . . . . . . . . . . . . . . . . . . . . 64
17.3 Criticisms and Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
17.4 Technology Readiness Levels (TRLs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
17.5 The Technology Readiness Assessment (TRA) Deskbook . . . . . . . . . . . . . . . . . . . 66
17.6 A Closing List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
18 Risk Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
18.1 Basic Risk Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
18.2 Risk Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
18.3 NASA and Risk Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
18.4 Additional Risk Management Perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
19 Testing, Verification, and Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
19.1 Test and Evaluation (T & E) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
19.2 Verification and Validation (V & V) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
20 Integration . . . . . . . . . . . . . . . . . . . . 77
20.1 Brief Definition of Systems Integration . . . . . . . . . . . . . . . . . . . . 77
20.2 Systems Integration Core Competencies . . . . . . . . . . . . . . . . . . . . 77
20.3 The Stovepipe Issue . . . . . . . . . . . . . . . . . . . . 77
20.4 Evolutionary Development and Integration . . . . . . . . . . . . . . . . . . . . 78
20.5 Integration Readiness . . . . . . . . . . . . . . . . . . . . 79
20.6 Integrability? . . . . . . . . . . . . . . . . . . . . 79
20.7 The Bottom Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
21 Systems Engineering Management . . . . . . . . . . . . . . . . . . . . 81
21.1 The SEMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
21.2 The SEP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
22 Project Management . . . . . . . . . . . . . . . . . . . . 85
22.1 Goals, Objectives and Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
22.2 Task Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
22.3 Technical Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
22.4 Organization and Staffing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
22.5 Schedule . . . . . . . . . . . . . . . . . . . . 86
22.6 Budget . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
22.7 Risk Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
22.8 Earned Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
22.9 The Project Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
23 Software Engineering . . . . . . . . . . . . . . . . . . . . 90
23.1 Software Development Steps . . . . . . . . . . . . . . . . . . . . 90
23.2 The Capability Maturity Model . . . . . . . . . . . . . . . . . . . . 91
23.3 COCOMO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
23.4 Top Ten for Software Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
24 Systems Acquisition . . . . . . . . . . . . . . . . . . . . 96
24.1 The 5000 Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
24.2 Defense Acquisition Performance Assessment (DAPA) Report [4] . . . . . . . . . . . . 97
24.3 Weapon System Acquisition Reform Act of 2009 . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
24.4 Greater Efficiency and Productivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
24.5 Evolutionary Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
25 Systems of Systems . . . . . . . . . . . . . . . . . . . . 100
25.1 Some Perspectives Regarding Systems of Systems . . . . . . . . . . . . . . . . . . . . 100
25.2 Cost Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
25.3 The Ubiquitous Department of Defense (DoD) . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
26 Thinking Outside the Box . . . . . . . . . . . . . . . . . . . . 104
26.1 Inside the Box 1: Build systems so as to maximally integrate all stovepipes . . . . . . . . . . 104
26.2 Inside the Box 2: It’s not possible to make changes so as to achieve more than marginal improvements . . . . . . . . . . 104
26.3 Inside the Box 3: Requirements should be considered fixed and inviolate . . . . . . . . . . 105
26.4 Inside the Box 4: There is no silver bullet that can fix a poorly performing acquisition system . . . . . . . . . . 105
26.5 Nine Suggestions for Thinking Outside the Box . . . . . . . . . . 106
References . . . . . . . . . . 107
27 Ten Failure Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
27.1 One—Unrealistic Schedules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
27.2 Two—Unrealistic Budgets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
27.3 Three—Too Many Risks in the Performance Dimension . . . . . . . . . . . . . . . . . . . 110
27.4 Four—Lots of Risk Analysis, Not Enough Risk Mitigation . . . . . . . . . . . . . . . . . 110
27.5 Five—Lip Service to “The Learning Organization” . . . . . . . . . . . . . . . . . . . . 110
27.6 Six—Poor Requirements Engineering . . . . . . . . . . . . . . . . . . . . 111
27.7 Seven—Failure to Buy Into Evolutionary Development . . . . . . . . . . . . . . . . . . . . 111
27.8 Eight—Insufficient Communications and Teamwork . . . . . . . . . . . . . . . . . . . . . . 112
27.9 Nine—Slippage in the Practices of Systems Engineering . . . . . . . . . . . . . . . . . . . 112
27.10 Ten—We Know What to Do; Why Won’t We Do It? . . . . . . . . . . . . . . . . . . . . . . 112
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
28 A Success Audit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
29 Standards . . . . . . . . . . . . . . . . . . . . 118
29.1 Military Standard 499B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
29.2 IEEE P1200 . . . . . . . . . . . . . . . . . . . . 119
29.3 EIA 632 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
29.4 ISO/IEC 15288 . . . . . . . . . . . . . . . . . . . . 119
29.5 IEEE/EIA 12207 . . . . . . . . . . . . . . . . . . . . 120
29.6 IEEE P1471 . . . . . . . . . . . . . . . . . . . . 120
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Author’s Biography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Preface
This book basically provides a top-level overview of Systems Engineering. As such, it may be
considered a Primer on systems engineering, looking at some 30 main topics that tell a good part
of the story. It leans heavily upon the views of a few government agencies and how they perceive
and practice systems engineering. These agencies tend to drive as well as use systems engineering
on the numerous projects that they sponsor. They are major customers, and we note that listening
to customers is usually a very good idea.
These 30 main topics represent what the students and practitioners of systems engineering
need to know. It is expected that readers, after a quick perusal of the entire book, will want to “drill
down” into subjects of special interest. Many other sources are identified for these purposes, both at
the end of each chapter as well as in the Bibliography (Chapter 30).
It is further noted that the book’s subtitle is “Building Successful Systems”. In a real-world
context, that’s what systems engineering should be all about. There’s no one formula that will ensure
this outcome. Pointers in that direction appear throughout the book, and two chapters focus more
sharply on how we might be more successful. Respectively, they look at “doing it wrong”, and “doing
it right”. These chapters are the following:
• Chapter 27 - Ten Failure Factors
• Chapter 28 - A Success Audit
This book has been designed also for introductory courses in systems engineering at either the
undergraduate or graduate level. A follow-up, more intensive treatment, suitable for a second course in systems
engineering, may be found among the chapter references as well as the citations in the Bibliography
(Chapter 30).
I am appreciative of the fact that my Publisher, Joel Claypool, immediately saw the need for
this book, and its top-level approach. I am also appreciative of the constant encouragement from
my wife, June Linowitz, who has been supportive of all of my writing adventures in and around the
world of systems engineering and related disciplines.
A Good Day to All,
Howard Eisner
April 2011
CHAPTER 1
Definitions and Background
1.1 DEFINITIONS AND DIFFICULTIES
This is a book about building relatively large systems using a discipline known as systems engineering.
Many definitions of this discipline have been suggested, as for example:
• Systems engineering is “an interdisciplinary approach and means to enable the realization of
successful systems” [1].
• Systems engineering is “an iterative process of top-down synthesis, development, and operation
of a real-world system that satisfies, in a near-optimal manner, the full range of requirements
for the system” [2].
• Systems engineering is a “methodical, disciplined approach for the design, realization, technical
management, operations, and retirement of a system” [3], and a system is defined as “a construct
or collection of different elements that together produce results not obtainable by the elements
alone.”
• Systems engineering is “an interdisciplinary management process to evolve and verify an inte-
grated, life cycle balanced set of system solutions that satisfy customer needs” [4]. This source
also defines a system as “an integrated composite of people, products, and processes that provide
a capability to satisfy a stated need or objective.”
In this book, considerable attention is paid to elaborating upon these short-form definitions
and providing a rationale for the explanations.
We note, however, that despite the fact that we have considerable background on what systems
engineering is, and how it should be applied, we still experience great difficulties in building large
systems. A sense of some of these problem areas can be gleaned from looking at some of the reports
produced by such groups as the GAO (Government Accountability Office). Here are some of their
observations [5]:
1. We did not have sufficient technology maturity to justify moving forward.
2. Knowledge with respect to design and production, at important milestones, was lacking.
3. We were using high-risk contracting procedures with insufficient accountability.
4. We had poor cost estimating methods.
5. We had been experiencing unacceptable cost growth in too many of our important systems.
In addition, there are other reports that suggest there is considerable room for improvement.
A rather well-known analysis, the Standish Report, provides some data points for us to
consider [6]:
• Only about 16% of all information technology projects were concluded on time and within
the allocated budget (1999 data).
• About 30% of the above programs were canceled prior to their scheduled completion (1999
data).
• Results in 2009 revealed a decrease in the success rates of projects, with these rates in the
vicinity of 32% of all projects (on time, budget, and with proper features).
• The 2009 data indicated “the highest failure rate in over a decade.”
An internally-directed look at systems engineering problems from the NDIA (National De-
fense Industrial Association) gave us yet another perspective [7], summarized below.
• Urgent user demand requires fielding capabilities more rapidly than we are doing today.
• The systems engineering expertise is wanting in both quality and quantity.
• Practices known to be effective are not consistently applied.
• Technical decision makers do not have the proper information at the proper times.
• Poor impacts are resulting from lack of technical authority.
These issues are known to adversely affect our ability to build successful systems, especially in
the government and defense worlds.
So – we continue to look for better ways to understand and apply principles of systems
engineering to the systems we are building. If we are able to do so, we expect that we will be more
successful as we move forward, and that the systems themselves will operate more successfully.
1.2 THE SYSTEMS ENGINEER
The systems engineer is in considerable demand, and in a 2009 survey was cited as ranking first in
terms of the “best job in America” in the Information Technology Sector [8]. This source claimed
that demand was soaring, moving from a niche position in the aerospace and defense industries to
an expanding set of potential employers ranging from “medical device makers to corporations like
Xerox and BMW.” The median salary at that time for experienced people was $87,100, with a top
pay of some $130,000 per year. From these data alone, it is easy to see why many technical folks
would try to qualify as systems engineers.
We cite in Table 1.1 ten of the attributes that are sought in terms of a highly qualified systems
engineer.
Table 1.1: Selected Desirable Attributes of the Systems Engineer.
1. Broad technical skills – the ability to solve problems in several technical domains.
2. Open-minded – willing to change one’s mind and modify any pre-conceived notion.
3. Facilitator – noticeably assists in group problem-solving.
4. Excellent listener – is receptive to hearing the views of others.
5. Integrator – is able to bring ideas together to formulate new solutions.
6. Superior people skills – relates very well to all members of the team as well as bosses.
7. Inquisitive – often explores information at the “edges” of the immediate problem.
8. Analytical – thinks logically and with precision and persistence.
9. Team Player – willingly listens to and supports other team members.
10. Technical leader – is competent and able to take a lead position on technical matters and problem solving.
Although we may know all the details of the systems engineering processes, it is the competent
systems engineer who makes it all happen. Having said that, however, we must also recognize that
the individual systems engineer cannot do it alone. In today’s world of building large systems, the
engineers are part of a team, and it is the highly functional team that leads to success. Put another
way, if the competent systems engineer is embedded in a poor team, the results are not likely to
be acceptable. For that reason, we pay a great deal of attention to the matter of building high
performance teams (HPTs) in attempting to construct successful systems.
1.3 PROCESS
As we explore the various aspects of systems engineering in this book, we will note an emphasis
on “process.” For example, some of the cited standards have basically identified all the processes
that need to be properly executed in an attempt to build successful systems. Here, we acknowledge
process as critically important but claim that it is still a necessary but insufficient condition. In our
search for the latter, we look in the direction of subject matter expertise. We look for the systems
engineer who deeply understands systems engineering, but who also has the subject knowledge that
is critical to success in this world of large and complex systems.
1.4 DIRECTOR OF SYSTEMS ENGINEERING
In completing this first chapter, we note that there is a Director of Systems Engineering (DSE) that
deals with key issues within the Department of Defense (DoD) [9]. This office, as of this writing,
has adopted a set of priorities, as listed below:
Table 1.1: Selected Desirable Attributes of the Systems Engineer.
1. Broad technical skills – the ability to solve problems in several technical domains.
2. Open-minded – willing to change one’s mind and modify any pre-conceived notion.
3. Facilitator – noticeably assists in group problem-solving.
4. Excellent listener – is receptive to hearing the views of others.
5. Integrator – is able to bring ideas together to formulate new solutions.
6. Superior people skills – relates very well to all members of the team as well as bosses.
7. Inquisitive – often explores information at the “edges” of the immediate problem.
8. Analytical – thinks logically and with precision and persistence.
9. Team Player – willingly listens to and supports other team members.
10. Technical leader – is competent and able to take a lead position on technical matters and
problem solving.
• “Support the current fight; manage risk with discipline.
• Grow engineering capabilities to address emerging challenges.
• Support realistic program formulation through the application of development planning and
early systems engineering.
• Increase focus on reliability, affordability, and total ownership cost.
• Champion systems engineering as a tool to improve acquisition quality.
• Develop future technical leaders across the acquisition enterprise.”
This office has a significant influence in the world of systems engineering and is likely to
continue to play an important role in the years to come.
REFERENCES
[1] International Council on Systems Engineering (INCOSE), www.incose.org
[2] Eisner, H., Essentials of Project and Systems Engineering Management, 3rd Edition, John Wiley, 2008.
[3] “NASA Systems Engineering Handbook,” NASA/SP-2007-6105, NASA Headquarters, Washington, DC.
[4] “Systems Engineering Fundamentals,” Defense Systems Management College (DSMC), October 1999, Fort Belvoir, VA.
[5] GAO Highlights, “Assessment of Selected Major Weapon Programs,” March 2005 and March 2006; see also www.gao.gov
[6] Standish CHAOS Reports, standishgroup.com, 1999, 2010.
[7] www.ndia.org
[8] “Best Jobs in America,” Number 1: Systems Engineer, CNNMoney.com, money.cnn.com
[9] www.acq.osd.mil/se/
C H A P T E R 2
The Systems Approach
In this chapter, we define what is referred to as the “Systems Approach.” We will look at some
definitions in the literature but, ultimately, will accept and expand upon this author’s definition. In
this regard, the seven elements of the “systems approach” have been set forth as cited and discussed
below [1]:
1. Establish and follow a systematic and repeatable process.
A proven process is emphasized and must be one that can be carried out by different teams in differing environments. New processes can be introduced, but only after they have been established as being as close as possible to best practices.
2. Assure interoperability and harmonious system operation.
The internal components of a system must interoperate, and these must also interoperate with
other systems if they are designed to do so. We are striving for internal as well as external
harmonious operations such that stresses are minimized.
3. Be dedicated to the consideration of alternatives.
A central feature of the “systems approach” is that we construct and evaluate alternative approaches and designs whenever feasible. This notion applies at the architectural design level as well as at the detailed sub-system design level. Failure to do this can lead to trying to use yesterday’s
solutions to today’s problems.
4. Use iterations to refine and converge.
We recognize the complexity of building large systems and use iteration and recursion to allow
us to move forward, even when we do not have a solution immediately at hand. We are able
to use the “TBD” (to be determined) in a systematic way, and iterate to ultimately find what
we believe to be a best solution.
5. Create a robust and slow-die system.
We insist upon having our systems, as best we can, not be subject to single-point catastrophic
failures. As parts (e.g., components) of our systems fail, performance may be degraded, but a
little at a time.
6. Satisfy all agreed-upon user/customer requirements.
Requirements may change and “creep” during the development process, but when they become
stable and agreed-upon, there is an obligation to satisfy these requirements. Developers and
acquisition agents must find new and better ways to negotiate appropriate solutions to this
often controversial area.
7. Provide a cost-effective solution.
The over-riding consideration in the systems approach is to develop a cost-effective solution
to the customer’s problem. This usually involves the appropriate consideration of alternatives,
from which the most cost-effective solution is selected.
We will now expand the above seven features to a total of ten. On that basis, we add the
following three items, along with a short discussion of each:
8. Assure the system’s sustainability.
The new systems we are building must not lead to massive depletion of our resources. They
must be sustainable, in the long run, even though we may need to invest disproportionately in
the short run. This is more than a dollars-and-cents issue: it applies to natural as well as man-made resources, and to the rates at which we use them.
9. Utilize advanced technology, at appropriate levels of risk.
History has shown that many of our most important large-scale systems (e.g., telephone,
electrical power grid, transportation, defense) have moved forward only as a result of the
technology that we have been able to develop and apply. Technology is thus a basis for many
of our new systems, and we need to find the proper balance, every step of the way, between
advanced technology and level of risk.
10. Employ systems thinking.
“Systems thinking” is considering the development, operation, and maintenance of our systems
in a holistic sense. It is a perspective that allows us to go beyond the views of individual components and sub-systems, to total systems as well as systems-of-systems. Due to its importance,
and also its lack of intuitive precision and clarity, the next chapter is devoted to exploring this
topic in a more comprehensive manner.
The basic claim in this book is that the well-considered use of the systems approach will
help to increase the likelihood of success in building systems. The Department of Defense, in
its consideration of recommended acquisition practices, makes the point as below with their own
perspective as to what constitutes the total systems approach [2]:
“Total Systems Approach – The Project Manager (PM) shall be the single point of accountability for accomplishing program objectives for total life-cycle systems management, including
sustainment. The PM shall apply human systems integration to optimize total system performance
(hardware, software and human), operational effectiveness, and suitability, survivability, safety, and
affordability. PMs shall consider supportability, life cycle costs, performance, and schedule comparable in making program decisions. Planning for Operation and Support and the estimation of total
ownership costs shall begin as early as possible…”
Yet another view of the “systems approach” is provided by NASA in their systems engineering
handbook [3]:
• The systems approach is “the application of a systematic, disciplined engineering approach
that is quantifiable, recursive, iterative, and repeatable for the development, operation, and
maintenance of systems, integrated into a whole throughout the life cycle of a project or
program.”
We also have a quite interesting and coherent exposition on the systems approach from one
of our gurus in the overall field of systems engineering. This exposition took the form of a long
paper entitled “The Systems Approach,” and was co-authored by Simon Ramo [4], whose name
was included in the company title known as “TRW” (i.e.,Thompson-Ramo-Wooldridge). Dr. Ramo
makes many points, among them the following:
• The systems approach uses objectivity, logic, and automated common sense.
• It typically uses a team of experts, dignifying the problem and the implied methodology.
• A skillful team will zero in on the problem (and its solution).
• The systems approach has the potential for solving many of our most important and vexing
problems if we can find and develop the appropriate practitioners and have a forum for listening
and implementing solutions.
2.1 ADDITIONAL RELATED FACTORS
We cite now a few factors that relate to the systems approach and that may well become an integral part of it (i.e., part of our “top ten” list).
Stakeholders. As we try to find the best system, we must ask the question: best for whom? This
raises the notion that many of our systems have a large number of stakeholders, and the interests of
these stakeholders may be in conflict. The systems approach should be able to account for possible
disparate interests and nonetheless find a solution that most, if not all, will accept.
Tradeoffs at the Systems Level. Connected to the above idea is the notion that we need to
look at the various tradeoffs that exist at the top-most level in question. To illustrate - who gets the
service and who makes the profit are but two of many questions that need to be explored.
Architectures and Balance. Architectures are crucial features of our systems, and systems
approach considerations in that respect are “simplification, compromise and balance” [5]. How is
that to be achieved? See Eb Rechtin’s seminal book on systems architecting [5].
Design for Integration. If we are to be more successful in bringing whole systems together, we
need to do better in our designs to account for, and facilitate, the downstream integration. Conversely,
systems not designed to be integrated are not likely to be amenable to such considerations or goals.
This is a problem area that needs considerable work in the future, given our tendencies to look for
ways to integrate “stovepipes.”
REFERENCES
[1] Eisner, H., Essentials of Project and Systems Engineering Management, 3rd Edition, John Wiley, 2008.
[2] DoD Directive 5000.1, “The Defense Acquisition System,” May 12, 2003, Department of Defense, Washington, DC.
[3] NASA/SP-2007-6105, Rev. 1, “NASA Systems Engineering Handbook,” December 2007, NASA, Washington, DC, p. 276.
[4] Ramo, S., and Robin K. St. Clair, “The Systems Approach,” www.incose.org
[5] Rechtin, E., Systems Architecting, Prentice-Hall, 1991.
C H A P T E R 3
Systems Thinking
We recall, from the previous chapter, that “systems thinking” is listed as one of the features of the
systems approach. Thus, systems thinking helps us to carry out that approach as part of the overall
discipline of systems engineering.
In broad terms, systems thinking involves looking at the system we are building, or analyzing,
as a whole, rather than as an assemblage of parts. Systems thinking leads us to consider the behavior of the total system in its current environment or in a situation that may change from time
to time. Systems thinking takes us away from a reductionist viewpoint to one that is more holistic. In
this chapter, we look at what several students and teachers of systems thinking have espoused and
attempt to apply some of these notions to systems engineering.
3.1 THE FIFTH DISCIPLINE
There is no better place to begin to explore systems thinking than to go to the work of Peter Senge [1]
called “The Fifth Discipline.” The overall topic of the “art and practice of the learning organization”
is examined in some detail, with the conclusion that there are five disciplines that need to be mastered
in order to create and sustain “the learning organization,” namely:
1. Building a shared vision.
2. Personal mastery.
3. Mental models.
4. Team learning, and,
5. Systems thinking (the fifth discipline).
The nature of systems thinking is thus examined, with the following key observations:
• “Systems diagrams” represent and help us understand the behavior of systems.
• Reinforcing and balancing feedback and delays are among the building blocks of systems
thinking.
• A bottom line of systems thinking is the creation of leverage.
• Each of the five learning disciplines depends upon thinking at the levels of (a) practices, (b)
principles, and (c) essences.
• Essences depend upon holism and interconnectedness; principles depend upon structure, policy
and leverage; practices depend upon system archetypes and simulation.
Senge’s approach, in part, is based upon a well-known modeling and analysis procedure known
as System Dynamics, originally developed by Forrester [2]. This procedure is used to study and
characterize the behavior of organizations. It is also directly applicable to systems of all types. In the
latter sense, we may think of Forrester’s work as one of many tools that the systems engineer is able
to use to explore the behavior of a variety of systems.
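To give a flavor of the stock-and-flow style of model used in System Dynamics, the short Python sketch below simulates a single stock governed by a balancing feedback loop; the stock, flows, and parameter values are hypothetical illustrations and are not drawn from Forrester’s or Senge’s own models.

# Minimal stock-and-flow sketch in the spirit of System Dynamics.
# The stock (a backlog of open work items), its flows, and all parameter
# values are hypothetical and chosen only to illustrate balancing feedback.

def simulate_backlog(steps=24, dt=1.0):
    backlog = 100.0          # stock: open work items
    arrival_rate = 20.0      # inflow: new items per period
    capacity = 25.0          # upper limit on completions per period
    history = []
    for _ in range(steps):
        # Balancing feedback: completions rise with the backlog, capped by capacity,
        # which pulls the stock toward an equilibrium rather than letting it grow.
        completion_rate = min(capacity, 0.3 * backlog)
        backlog += (arrival_rate - completion_rate) * dt
        history.append(round(backlog, 1))
    return history

if __name__ == "__main__":
    print(simulate_backlog())  # the backlog settles near 67, where outflow matches inflow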
3.2 THINKING IN SYSTEMS
We see further systems thinking perspectives in the work of Donella Meadows [3], who along with
her husband Dennis, studied “limits to growth” and was a strong purveyor of the System Dynamics
approach to systems thinking and analysis. Some of the ideas set forth in her book on systems
thinking include the following:
• A system consists of three types of things: elements, interconnections and a function or purpose;
the latter often has the greatest influence on system behavior.
• A system, typically, is more than the simple sum of its parts.
• Information feedback is critical to the stability of systems.
• The resilience of systems is very important, and it has its limits.
• Models are our ways of thinking about and evaluating systems, but they are still models and
not reality.
• Most of our systems exhibit extreme non-linearities, which make them very difficult to analyze.
• One of our challenges is still to “go for the good of the whole.”
3.3 SYSTEMS THINKING AND HEURISTICS
A rich supply of systems thinking in relation to systems engineering and architecting can be found
embedded in the subject of system “heuristics.” And perhaps the most outstanding developer of
system heuristics was E. Rechtin [4]. In his classic book on system architecting, Dr. Rechtin gave
us the benefit of some of his systems thinking, represented in the short sample of heuristics cited
below:
1. “Except for good and sufficient reasons, functional and physical structuring should match;
2. No complex system can be optimum to all parties concerned, nor all functions optimized;
3. Build in and maintain options as long as possible in the design and implementation of complex
systems;
4. A model is not reality” (see a similar comment above from Meadows).
Undoubtedly, the above and other heuristics came forth from his keen observations about
systems over a period of some 50 years. And his “systems perspective” was likely derived from his
integration skills that allowed him to focus upon the top-level behavior of systems and the teams
that were building these systems.
3.4 SYSTEMS THINKING AND SPECIAL TOPICS
Over the years, systems thinking has gravitated to a series of topics that have been difficult to
contemplate and resolve. Researchers continue to make progress, but additional systems thinking
would be most welcome with respect to the partial list of such matters as cited below:
◦ system complexity,
◦ measures of interoperability and integrability,
◦ emergent properties,
◦ resilience,
◦ adaptation,
◦ self organization,
◦ reflexivity,
◦ transformative factors and features,
◦ total system models and simulations,
◦ a breakthrough general systems theory.
Having pointed to the above, however, we need to understand that our current state-of-the-art
in systems thinking is extensive, with several “sub-schools” of thought within the overall field.
3.5 GENERAL SYSTEMS THEORY
Many of the aspects of systems thinking are traceable back to yet another classic work, that of
Bertalanffy’s “General System Theory” [5]. Many of the researchers in general systems theory point back to Bertalanffy as the source of their thinking and inspiration. As we look at systems
theory and systems thinking from the perspective of systems engineering, we are aware of one
top-level aspiration:
• That if we had a stronger and more coherent base of systems theory and thinking, we would
do a better job at systems engineering, and thus would be able to be more successful in our
systems engineering undertakings.
Systems thinking and theory thus provide a challenge for the “systems” community. That
challenge is to put the pieces together such that all of the related fields, including systems engineering,
are illuminated.To that end, we provide here as the last reference a long list of researchers [6] that can
be accessed by the reader to explore their contributions, and the possibility that a totally integrative
treatment is within our grasp, not too far down the road.
REFERENCES
[1] Senge, Peter, The Fifth Discipline – The Art & Practice of the Learning Organization, Doubleday/Currency, 1990.
[2] Forrester, Jay, System Dynamics, Pegasus Communications, 1968.
[3] Meadows, D., Thinking in Systems, Chelsea Green Publishing, 2008.
[4] Rechtin, E., Systems Architecting, Prentice-Hall, 1991.
[5] Bertalanffy, Ludwig von, General System Theory, George Braziller, 1968.
[6] Systems Researchers: R. Ackoff, W. R. Ashby, B. Banathy, G. Bateson, K. Boulding, S. Beer, F. Capra, P. Checkland, C. W. Churchman, R. Flood, H. von Foerster, J. Gall, R. Hutchins, M. C. Jackson, G. Klir, E. Laszlo, I. Prigogine, A. Rapoport, A. Sage, L. Skyttner, F. Vester, J. von Neumann, M. Weber, G. Weinberg, N. Wiener, B. Wilson (see also chapter thirty).
C H A P T E R 4
Key Elements of Systems
Engineering
The key elements of systems engineering depend upon which approach one takes to the overall
application of systems engineering. Three such approaches can be identified:
1. The Process-Oriented Approach (POA).
2. The Model-Based Systems Engineering Approach (MBSE), and,
3. The Tailored Activity Approach (TAA).
In this chapter, we explore in some detail the last of these approaches, in which each activity represents a basic element of systems engineering. Less emphasis is placed upon the first two approaches.
An overview of this approach can be gained by defining four aspects of systems engineering that are broad and important in scope and that correlate with a system’s life cycle. These
four aspects are the following:
A. System Architecting.
B. Subsystem Design.
C. Construction, Test and Evaluation.
D. Operations, Maintenance and Reengineering.
A critical aspect of systems engineering is the first part of the design process known as System
Architecting. It is during this activity that the design team comes to terms with, and defines, the
overall system design. Errors here propagate throughout the design and can be fatal to the overall
effort. Once the architecture is formulated and agreed to, the Subsystem Design proceeds. These
first two aspects constitute the overall design activity for the system [1]. An accepted design, down to
the subsystem level, allows the team to build the system with specific implementations of hardware,
software, and the human element. At the end of this Construction process, Test and Evaluation
confirms that the physical system satisfies the stated requirements, and the system is able to move
forward into Operations, Maintenance, and Reengineering.
This overview becomes the background for the more formal definition of the elements of systems engineering, using the Tailored Activity Approach. The overall notion is depicted in Figure 4.1.
Table 4.1: Six Categories and Thirty Elements of Systems Engineering.
Category A: Developer Design-Related
1. System Architecture
2. Analysis and Evaluation of Alternatives
3. Technical Performance Measurement
4. Life Cycle Costing
5. Risk Analysis and Mitigation
6. Hardware, Software and Human Engineering
Category B: Developer Integration and Test
7. Integration
8. Verification and Validation
9. Test and Evaluation
Category C: Key Support Elements
10. Concurrent Engineering
11. Specification Development
12. Interface Control
13. Computer Tool Use
14. Technical Data Management and Documentation
15. Integrated Logistics Support and Sustainment
16. Reliability, Maintainability and Availability
17. Quality Assurance
18. Configuration Management
19. Specialty Engineering
20. Preplanned Product Improvement
Category D: Fielding, Operations and Support
21. Training
22. Production and Deployment
23. Operations and Maintenance
24. Operations Evaluation and Reengineering
25. System Disposal
Category E: Customer-Defined Elements
26. Needs, Goals and Objectives
27. Mission Definitions
28. Requirements
29. Functions
Category F: Overall Management
30. Management of All the Above Elements
[Figure 4.1 shows the four systems engineering aspects (System Architecting; Subsystem Design; Construction, Test and Evaluation; Operations, Maintenance and Reengineering) being implemented via the 30 elements, grouped into the six categories of Table 4.1.]
Figure 4.1: Aspects, Categories, and Elements of Systems Engineering.
The representation of the six categories, and the 30 elements, is shown in Table 4.1 [2].
The above construction of systems engineering is considered to be part of the Tailored Activity
Approach (TAA) since the amount of effort to be devoted to each element is to be determined
explicitly by the project management that is responsible for the system that is being designed and
built. This project management can be thought of as consisting of three people: the Project Manager
(PM), the Chief Systems Engineer (CSE), and the Project Controller [3]. Thus, it is both possible and
critical that this team decide which elements are funded, and by how much. Not all projects require
all of the elements, and depending upon size, complexity, and other factors, the project management
must tailor the systems engineering activities to the overall program needs and constraints.
4.1 OTHER APPROACHES
The above discussion leaves open the question as to what the elements are for the Process-Oriented
Approach (POA) and the Model-Based Systems Engineering (MBSE) approach. We will not give a
definitive answer to this question in this text. However, it is clear from the work of INCOSE [4] and
the ISO/IEC 15288 Standard [5] that the Process-Oriented approach is based upon twenty-five
specific processes in the categories of: agreements, enterprises, projects, and technical considerations.
With respect to the Model-Based Systems Engineering approach, we refer the reader to the work
of Friedenthal [6], Estefan [7], and others [8]. Some of the key ideas in the MBSE approach are
listed below:
- It represents a movement from a document-based method to a model-based method.
- It has the potential to improve the quality and uniformity of how systems engineering is
executed (e.g., in the flow-down of requirements).
- In applications to date, we see the use of the Systems Modeling Language, which itself represents an improvement over the Unified Modeling Language [6].
- There are several noteworthy MBSE methodologies [7].
Whichever approach is selected by the systems engineering team, we note strong support for
the use of systems engineering [9], and evidence that with its proper application, the likelihood of
success is increased [10].
REFERENCES
[1] Eisner, H., “System Design, Architecting and Heuristics,” International Conference on Industrial Engineering and Systems Management, IESM 2009, Montreal, Canada, May 13–15, 2009.
[2] Eisner, H., Managing Complex Systems: Thinking Outside the Box, John Wiley, 2005.
[3] Eisner, H., Essentials of Project and Systems Engineering Management, 3rd Edition, John Wiley, 2008.
[4] Systems Engineering Handbook, vers. 3.2, INCOSE, www.incose.org
[5] Systems Engineering – System Life Cycle Processes, ISO/IEC 15288, 2002, Geneva, Switzerland.
[6] Friedenthal, S., A. Moore, and R. Steiner, A Practical Guide to SysML: The Systems Modeling Language, Elsevier, 2008.
[7] Estefan, J., Survey of Model-Based Systems Engineering Methodologies, INCOSE MBSE Initiative, INCOSE, May 23, 2008, www.incose.org
[8] Wymore, A. W., Model-Based Systems Engineering, CRC Press, 1993.
[9] DoD Directive 5000.1, The Defense Acquisition System, Department of Defense (DoD), Washington, DC, May 12, 2003.
[10] www.ndia.org
C H A P T E R 5
The Life Cycle Dimension
Most systems are built and put into service on a well-defined time line. Most agencies and companies
have a well-defined understanding of the time lines they tend to use, and the various phases that are
part of the time lines. In this chapter, we look briefly at three of these time lines and phases.
5.1 GENERIC LIFE CYCLE PHASES
An example of what might be called a set of generic life cycle phases is displayed in Figure 5.1
below [1]:
[Figure 5.1 shows the generic life cycle phases in sequence: Need Development, Concept Definition, Concept Validation, Engineering Development, Production, and Operations.]
Figure 5.1: Generic Life Cycle Phases.
Here we see the system phases described as:
- Need Development.
- Concept Definition.
- Concept Validation.
- Engineering Development.
- Production, and
- Operations.
The program is formally approved after the need has been verified, at which time the team begins work on defining the concept in detail. During this phase, there is also an SRR, a system
requirements review. Proceeding into concept validation, there is a system design review (SDR).
After validation, we move into engineering development. During this key phase, there are two
reviews: the PDR (preliminary design review) and CDR (critical design review). Just about all life
cycle representations include formal reviews of critical importance to the success (or lack of it) in a
program or project. An initial operational capability (IOC) is typically achieved during production, with a full operational capability (FOC) available at the start of operations.
5.2 A DOD EXAMPLE
As one might expect, the DoD has been clear about its life cycle phases, which are defined as part
of its Defense Acquisition Management System [2]. A representation of that system is shown here
in Figure 5.2.
[Figure 5.2 shows the Defense Acquisition Management System: user needs and technology opportunities feed Materiel Solution Analysis, followed by Technology Development, Engineering and Manufacturing Development, Production and Deployment, and Operations and Support, with the Concept Decision, Milestones A, B, and C, and Program Initiation marking the transitions between phases.]
Figure 5.2: Life Cycle Phases With Emphasis on Technology [2].
The phases of interest here are:
- Materiel Solution Analysis.
- Technology Development.
- Engineering and Manufacturing Development.
- Production and Deployment, and
- Operations and Support (O & S).
The first phase accepts “user needs” and “technology opportunities” as inputs, explicitly stressing the importance of technology in our military systems. This point is reinforced by the second phase, which formalizes, and even develops, the technology needed for the system. The program cannot be initiated until it is completely clear that the selected technology is, or will be, available when
it is called for. The formal “systems acquisition” process starts with engineering and manufacturing
development and continues on into production and deployment. That is followed by the operations
and support (O & S) phase. One important difference between the DoD and the generic life cycle
phases can be seen by the inclusion of a “technology” emphasis in our defense world. This is entirely
appropriate as we must assure that our military systems have the state-of-the-art technologies for
the conflicts of the day.
As in the generic case, the DoD has many reviews during which progress can be assessed.
Listed below are eleven reviews that are part of the process:
- Initial Technical Review (ITR).
- Alternative Systems Review (ASR).
- System Requirements Review (SRR).
- System Functional Review (SFR).
- Preliminary Design Review (PDR).
- Critical Design Review (CDR).
- Test Readiness Review (TRR).
- System Verification Review (SVR).
- Production Readiness Review (PRR).
- Operational Test Readiness Review (OTRR).
- In-Service Review (ISR).
5.3 A NASA EXAMPLE
The NASA approach to defining life cycle phases is provided in their Systems Engineering Handbook [3], which shows a total of seven life cycle phases:
- Pre-Phase A: Concept Studies.
- Phase A: Concept and Technology Development.
- Phase B: Preliminary Design and Technology Completion.
- Phase C: Final Design and Fabrication.
- Phase D: Assembly, Integration and Test; Launch.
- Phase E: Operations and Sustainment.
- Phase F: Closeout.
A considerable amount of work is accomplished prior to formal “approval,” which is needed
before final design and fabrication. As with the DoD, we see a strong role for technology and the
need for its development and completion before approval. Another difference is the final phase for
program closeout. Despite some minor differences, we also see a considerable amount of “up front”
design work in an attempt to assure that the system is sound before we write software and “bend
metal.”
5.4 SYSTEMS ENGINEERING ACROSS THE LIFE CYCLE
From chapter four, we can see that the elements of systems engineering are needed throughout the
system’s life cycle. This is reinforced in one of the more important DoD acquisition Instructions [2],
as follows:
“Rigorous systems engineering discipline is necessary to ensure that the Department of Defense meets the challenge of developing and maintaining needed warfighting capability. Systems engineering provides the integrating technical processes to define and balance system performance, cost, schedule, and risk within a family-of-systems and systems-of-systems context. Systems engineering shall be embedded in program planning and be designed to support the entire acquisition life cycle” [2, enclosure 12].
REFERENCES
[1] Eisner, H., Computer-Aided Systems Engineering, Prentice-Hall, 1988.
[2] DoD Instruction 5000.02, “Operation of the Defense Acquisition System,” Dec. 8, 2008, Department of Defense, Washington, DC.
[3] “NASA Systems Engineering Handbook,” NASA/SP-2007-6105 Rev. 1, NASA Headquarters, Washington, DC, December 2007.
C H A P T E R 6
System Properties, Attributes
and Features (PAFs)
We tend to characterize systems by their properties, attributes and features. Some of these can be
quite general, like maintainability, and some tend to pertain to various kinds or types of systems.
To illustrate, Table 6.1 lists some of these PAFs for (a) an automobile, (b) a house, and (c) a radar
system.
We will generally treat properties, attributes and features as more-or-less the same. As can be seen from Table 6.1, users and purchasers of these systems have a strong interest in understanding the PAFs, often in quite a lot of detail. Failure to do so can easily lead to purchases that
are regretted from the point of view of buying an inferior system or buying the wrong system. All of
this points us in the direction of wanting to have more information about these PAFs. We cannot be
good users or purchasers when we don’t have the right kinds of information. And the providers of
our systems don’t necessarily work hard at supplying the appropriate information. Here are a couple
of examples in this respect.
You buy a TV and eventually lose the remote clicker for it. So you go out and buy another
generic remote. Now you have to program the new remote to a code for your TV. What is the code
number? Where is it written down? Who knows?
You buy a vacuum cleaner and a piece of it breaks. So you try to buy a replacement part. How
do you describe the part? What’s its name? What’s its part number? Who knows?
The information age is upon us and is manifested in all types of ways and for all kinds of
systems.
In the world of systems, we also find PAFs that are both interesting as well as possibly obscure.
Here are a few of them, along with a first-order description:
- System resilience: the ability of a system to recover to normal operation, or to a form of
degraded operation, after some type of significant stress has been placed upon the system.
- System complexity: a level of extreme complication represented by the system and its behavior.
- Emergent behavior: one or more new forms of behavior of the system under stress and/or
after the system has been in its operating environment for some period of time.
- Sustainability: the degree to which a system is able to continue to operate without massive
infusions of new types of energy that are not cost-effectively provided.
Table 6.1: Selected Properties, Attributes and Features of (a) automobile, (b) a house,
and (c) a radar.
(a) An automobile
Weight
Length
Color
Type (e.g., sedan, convertible)
Cost
Fuel Efficiency
Maintainability
Horsepower
Capacity (e.g., for people)
(b) A House
Type (e.g., colonial, ranch…)
Number of Rooms, by type
Number of Square Feet
Construction (e.g., brick, wood…)
Cost
Color
Age
Location (e.g., district, neighborhood…)
(c) A Radar
Transmitted Power
Location (ground, airborne…)
Frequency of Operation
Type (pulse, continuous…)
Directivity
Resolution
Bandwidth
Size of Antennae
Cost
- Self-Regulatory: the degree to which a system has the internal property to remain in a stable
domain of operation over the long haul.
- Vulnerability: the extent to which the system will fail to carry out its main functions when
subjected to severe stresses. Is this the approximate opposite of resilience?
This small sample will give the reader some idea as to the perplexities that face the systems
engineer from time to time.
Ultimately, the systems engineer would like to have a precise definition of these PAFs, and also
a means of measuring them. If the engineer and the customer can agree on such a precise definition,
there is a much greater chance that a system will be built (some day) such that both parties have
a win-win proposition and venture. That’s the “end game” - a system for which both parties have
done it correctly, and a “happy customer” (as well as a good reference) is the result. And as we saw
in an earlier chapter, this is by no means what is happening, with high likelihood, in that real world
out there.
So now we come directly to the matter of measurement.
When we see a set of PAFs, we try to find a way to measure them, to the maximum extent
possible or practicable.The easier and more widely accepted the method of measurement, the happier
we are. The more subjective, the unhappier. But we don’t leave it at that. There are times that we
almost insist upon “measuring the unmeasurable.” Other times, we rely on a research-oriented
program whose objective is to find an appropriate measure down-the-road. It’s all good, and often
it’s also quite complicated.
For example, if we look at a system, we still don’t have a good way of measuring its complexity.
So we are studying the matter. On the other hand, we do have a way of measuring the complexity
of software. Some years ago, in a seminal paper, Tom McCabe suggested a software measure that is now called McCabe’s Cyclomatic Complexity [1]. It was a brilliant contribution, and
it led to a quite successful company that Mr. McCabe built, all around the matter of measuring
the complexity (as well as other attributes) of software. Perhaps, in the near future, we will have a
comparable story to tell about systems. More about this matter of measurement, an important part
of systems engineering, is in the next chapter.
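To give a feel for the measure, the hedged Python sketch below approximates cyclomatic complexity for a fragment of source code as one plus the number of decision points; this simplified counting rule is only an illustration of McCabe’s idea, not a reproduction of his graph-theoretic definition or of any commercial tool.

# Rough sketch: cyclomatic complexity approximated as one plus the number of
# decision points found in a piece of Python source. Only a simplified set of
# branching constructs is counted, purely for illustration.

import ast

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            decisions += len(node.values) - 1  # each 'and'/'or' adds a path
    return decisions + 1

sample = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(sample))  # prints 3: two branch points plus one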
REFERENCES
[1] McCabe, T., “A Complexity Measure,” IEEE Transactions on Software Engineering, SE-2(4), December 1976.
C H A P T E R 7
Measures and Parameters
The last chapter cited a variety of PAFs (properties, attributes, features) that were of interest for three
types of systems. It was indicated that some of these are measurable and some are not, at least for
now. The systems engineer pays special attention to making measurements; this activity represents a
serious part of having the appropriate “tools of the trade.” We need to make these measurements, so
that we can be precise and correct about what it is that we are building. We need these measurements,
also, to prove to our customers (and ourselves) that we have satisfied their requirements.
We tend to give a special name to those measures that receive special attention, and they are
referred to as technical performance measures, or TPMs. The lists below illustrate TPMs for four
types of systems, (a) a transportation system, (b) a communication system, (c) an on-line transaction
processor, and (d) a radar system (Table 7.1).
These are all “technical” measures and therefore, by definition, do not refer to system costs.
Matters of cost will be considered in a later chapter. You may also note that there might be other
TPMs that come to mind as you read the above lists. Try adding two more to each category.
When the systems engineer has enumerated a number of TPMs for the system in question, it
is also usual to look at them more deeply and decide which of them can be further categorized as Key
Performance Parameters (KPPs) for the system. One of the important documents dealing with
the acquisition of systems says that we should identify a “minimum” set of KPPs for the system [1].
These are used as a basis for building and testing the system, and also tracking progress as to how we
are doing during the system’s development. We also note that the identification is for a “minimum”
set, not a maximum set. We want to continually try to manage these difficult programs and systems
by a sharp focus on what is most important, not on everything we can think of. That perspective, we
recognize, may make the difference between success and failure. In this connection, we can ponder
a quote from Dr. Eberhardt Rechtin as he constructed a set of useful “heuristics” for systems: “amid
a wash of paper, a small number of documents become critical pivots around which every project’s
management revolves…” [2].
We also recognize that having defined these important TPMs and KPPs, we need to then
address the matter of how we are going to calculate these measures and parameters. In some cases,
the issue is relatively straightforward. In others, we may have to build a model of the system to do
the calculation. That model might include the notion of simulation of prospective system behavior
on one or more computers. Two examples of relatively simple and well-accepted calculations are
Table 7.1: Illustrative Technical Performance Measures (TPMs).
(a) A Transportation System
Capacity
Trip Time
Frequency of Service
Energy Efficiency
Throughput
(b) A Communication System
Capacity
Signal-to-Noise Ratio
Bandwidth
Number of Users
Error Rate
Quality of Service
Speed of Service
(c) An On-Line Transaction Processor (OLTP)
Response Time
Accuracy
Number of Simultaneous Users
Security Level
(d) A Radar System
Range
Probability of Detection
False Alarm Probability
Transmitted Power
Frequency Band of Operation
Signal-to-Noise Ratio
those of system reliability and system availability, using the formulas below:
Reliability = R(t) = exp(−λt)
Availability = A = MTBF / (MTBF + MDT)
where λ is the failure rate in failures per unit of time (e.g., per hour),
MTBF = Mean Time Between Failures, in hours, and
MDT = Mean Down Time, in hours.
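A short Python sketch of these two calculations follows; the failure rate, mission time, MTBF, and MDT values are illustrative assumptions rather than data from the text.

# Hedged sketch: computing the reliability and availability formulas above.
# The numerical inputs are illustrative assumptions only.

import math

def reliability(failure_rate_per_hour: float, hours: float) -> float:
    """R(t) = exp(-lambda * t) for a constant failure rate."""
    return math.exp(-failure_rate_per_hour * hours)

def availability(mtbf_hours: float, mdt_hours: float) -> float:
    """A = MTBF / (MTBF + MDT)."""
    return mtbf_hours / (mtbf_hours + mdt_hours)

# Example: lambda = 0.001 failures/hour over a 100-hour mission,
# MTBF = 1000 hours, MDT = 10 hours.
print(round(reliability(0.001, 100), 4))     # ~0.9048
print(round(availability(1000.0, 10.0), 4))  # ~0.9901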
Much more complex relationships are brought into play for large and complex systems such
as air defense, air traffic control and networked communications systems.
There are times, for good and sufficient reason, when we attempt to “measure the unmeasurable,” as suggested in the last chapter. Some examples will illustrate the point without stretching the
overall credibility of the endeavor. In the first case, we have created a set of “Capability Maturity
Models” in the systems and software (and other) domains. In doing so, we identify key process
areas (KPAs), and we associate levels of achievement in these areas with specific numbers. So in the
original CMM for software, if an organization demonstrates competency in six important KPAs,
then it has achieved level 2 on the CMM scale [3]. In another example, we are estimating the level
of effort (in person months) required to carry out a software development program [4]. The overall
relationship can be simply stated:
PM (effort) = A × (size)^B.
Where PM is the effort measured in person-months, A is found by considering a set of “effort
multipliers” and B represents several “scale factors.” These scale factors, specifically, are the following:
1. Precedentedness.
2. Development flexibility.
3. Risk resolution.
4. Team cohesion.
5. Process maturity.
Each of these scale factors is evaluated on the basis of a six-level scale, namely: Very Low, Low,
Nominal, High, Very High and Extra High. When this is completed, a “table lookup” yields a value
for “B,” which lies between 1.01 and 1.26 for the COCOMO II early design and post-architecture
models. So in this way, we are able to move from a somewhat subjective set of estimates to a precise
value of the scale factor variable. This type of estimation process is not physics, but it does move us
a step on down the road in estimating effort levels for software, a quite valuable activity. Success in
constructing large scale systems, in fact, depends upon having these types of procedures available to
the systems engineering team.
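As a hedged illustration of the mechanics just described, the Python sketch below turns the five scale-factor ratings into an exponent B and applies the PM = A × (size)^B relationship; the constant A, the 1.01 base and 0.01 step for the exponent, and the rating-to-value mapping are stand-in assumptions for the published COCOMO II calibration tables, used only to show how subjective ratings become a numeric estimate.

# Simplified sketch of a COCOMO II-style effort estimate, PM = A * (size)^B.
# The rating-to-value table, the constant A, and the base/step for B are
# hypothetical stand-ins for the published calibration, shown only to
# illustrate the mechanics of the estimation process.

RATING_SCALE = {"very low": 5, "low": 4, "nominal": 3,
                "high": 2, "very high": 1, "extra high": 0}

def exponent_b(ratings, base=1.01, step=0.01):
    """Map the five scale-factor ratings to an exponent in the 1.01-1.26 range."""
    return base + step * sum(RATING_SCALE[r] for r in ratings)

def effort_person_months(ksloc, effort_multiplier_product, ratings, a=2.94):
    b = exponent_b(ratings)
    return a * effort_multiplier_product * (ksloc ** b)

# Example: a 50 KSLOC system, nominal effort multipliers (product = 1.0),
# with all five scale factors rated "nominal".
ratings = ["nominal"] * 5
print(round(effort_person_months(50, 1.0, ratings), 1))  # roughly 275 person-months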
REFERENCES
[1] DoD Instruction 5000.2, “Operation of the Defense Acquisition System,” May 12, 2003, Department of Defense (DoD), Washington, DC.
[2] Rechtin, E., Systems Architecting, Prentice-Hall, 1991.
[3] Paulk, M., B. Curtis, and M. B. Chrissis, “Capability Maturity Model for Software,” CMU/SEI-91-TR-24, Software Engineering Institute, Pittsburgh, 1991.
[4] Boehm, B., et al., Software Cost Estimation with COCOMO II, Prentice-Hall, 2000.
C H A P T E R 8
Architecting
The overall conception of a system and how it works is set forth in the system’s architecture. Architecting, a process that leads to the selection of a preferred architecture, is the “front-end” of the
system design activity. The “back-end” is simply called the subsystem design [1].
One way to understand the difference between the first step of design (the architecting) and the second step (subsystem design) is to draw an analogy with the tasks carried out, approximately, in an A & E (architect and engineer) firm. The architects do the front-end architecting,
which is then followed by the engineers who fill in the important engineering subsystems and details.
The latter is generally not undertaken until the overall architecture is well-defined and approved by
the client. Clearly, different skills and training are required to carry out these far-ranging activities.
Dr. Eberhardt Rechtin, a pioneer in examining the matter of system architecting, provided
his insights into the process of architecting by exploring such topics as [2]: who and what is the
system architect, an approach to building the system, acceptance testing, modeling and simulation,
tools that the architect uses, boundaries and interfaces, and a quite instructive as well as useful list
of “heuristics.”
Several top-level features will help to further explain the nature of an architecture. If we look
at a large-scale communications system, two distinguishing aspects would be whether the system
employs frequency division multiplexing (FDM) or time division multiplexing (TDM). These are
two significantly different features, and, typically, the architecture employs one or the other, but not
both. Other approaches in the multiplexing domain might also be alternatives, such as code division
multiplexing (CDM). Yet another broad “dichotomy” is centralized vs. decentralized command and
control. A choice of one or the other is typical although one can conceive of a “hybrid” approach. A
third dichotomy might well be an “open” or a “closed” (specialized) architecture. These are but three
examples of names that we give to different architectural approaches or alternatives.
In 1997, the Department of Defense (DoD) published the C4ISR Architectural Framework that
was eventually re-named the DoDAF [3]. At this time, the DoDAF remains the centerpiece of
the DoD approach to architecting. It is largely based upon the construction of the following three
“views” of an architecture:
1. The operational view.
2. The systems view, and
3. The technical view.
The DoD has also described their six aspects of actually developing an architecture [3], and they are
cited below:
1. Articulate the intended use of the architecture.
2. Establish the scope, context, environment, and any other assumptions of the architecture.
3. Determine which characteristics the architecture needs to capture.
4. Establish which architecture views and supporting products should be built.
5. Build the needed products.
6. Use the architecture for its intended purpose.
Although these are interesting and informative steps, they are not sufficiently definitive that one would expect two different people to actually follow the same process and produce similar products.
The DoDAF approach has become quite far-ranging, and the three “views” originally presented have been expanded and described in a great amount of detail. In the defense domain especially,
and as expected, the DoDAF procedure is widely accepted and understood.
Another architecting approach, set forth by this author, has been called the EAM (Eisner Architecting Method) [4]. This approach focuses on examining the cost-effectiveness of alternative
architectures. Explicit calculations are made of the cost and effectiveness levels of alternatives, and
the results are compared. The most cost-effective solution, given the situation at hand, is called the
“preferred” architecture.
There are four key steps that constitute the EAM approach, as listed below:
1. Functional decomposition and requirements allocation.
2. Synthesis.
3. Analysis.
4. Cost-effectiveness comparisons.
These steps have been defined, generally, such that the output of each step implies the process
yielding that output, and vice versa. The word used here to describe this type of situation is to say
that the steps and the outputs are “congruent.” This is considered to be a highly desirable feature of
this architecting procedure. Each of these four steps is also described in some detail in later chapters.
A very short citation of these steps follows.
Functional Decomposition and Requirements Allocation. In this step, the overall system
is decomposed according to the functions that the system is to perform. Each function is
typically also decomposed into sub-functions. There is no attempt during this step to define
the manner in which the functions and sub-functions are to be instantiated, i.e., implemented
in hardware, software and/or the human element. The system requirements are then allocated
to the functions and sub-functions to which they pertain.
Synthesis. In this step, design approaches for all sub-functions, and for several alternatives,
are defined. This is the essence of this architecting procedure. The precise manner in which this
is done is described in chapter eleven, a most important explanation of the overall technique.
Analysis. The essential purpose of this step is to evaluate the architectural alternatives from a
cost-effectiveness point of view. Thus, it is during this step that we actually calculate numerical
values for the cost and effectiveness of the various alternatives.
Cost-Effectiveness Comparisons. We enter this step, ideally, with measures of the cost and
effectiveness of the alternatives under consideration. We compare these data as well as examine
system constraints (like budget limitations) to try to find a preferred architecture. We look also
at sensitivities and tradeoffs that might lead to a preferred solution.
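To make the comparison step concrete, the Python sketch below scores two hypothetical architecture alternatives with a simple weighted-sum effectiveness model and compares effectiveness, cost, and a budget constraint; the criteria, weights, costs, and scores are illustrative assumptions and are not part of the EAM as published.

# Hedged sketch of a cost-effectiveness comparison between two candidate
# architectures. All criteria, weights, scores, costs, and the budget are
# hypothetical values chosen only to show the mechanics of the comparison.

def effectiveness(scores, weights):
    """Weighted-sum effectiveness on a 0-10 scale."""
    return sum(weights[c] * scores[c] for c in weights)

weights = {"performance": 0.4, "availability": 0.3, "interoperability": 0.3}

alternatives = {
    "Architecture A": {"cost_m": 30.0,
                       "scores": {"performance": 8, "availability": 7, "interoperability": 6}},
    "Architecture B": {"cost_m": 45.0,
                       "scores": {"performance": 9, "availability": 8, "interoperability": 9}},
}

budget_m = 40.0
for name, alt in alternatives.items():
    e = effectiveness(alt["scores"], weights)
    ratio = e / alt["cost_m"]
    within = "within budget" if alt["cost_m"] <= budget_m else "over budget"
    print(f"{name}: effectiveness={e:.1f}, cost=${alt['cost_m']}M, "
          f"effectiveness/cost={ratio:.3f} ({within})")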
Substantially more about these important steps is provided in chapters eleven through fourteen.
By way of an interim summary, we are now in a position to define what we mean by an
architecture. In this regard, we will use most of what this author constructed as a definition in a
previous work [5]:
An Architecture is a construction of a set of design choices for the various sub-functions
of the overall system where these choices are believed to be interoperable and also satisfy the
stated user requirements.
In addition to the above discussions of architecting and architectures, there are still other
approaches that the reader may wish to explore to round out his or her understanding of this
complex and important topic. These include [5] the following:
1. MoDAF.
2. Enterprise Architecture (EA).
3. Service-Oriented Architecture (SOA).
4. IEEE explanation of Architectures.
5. Zachman approach to architecting.
Architectures and architecting are considered to be critical aspects of building successful systems. If
you get it wrong, there, typically, is no end to the trouble you will encounter. Getting it right usually
means that the pieces tend to fall into place, leading to an often elusive but very welcome success
story.
REFERENCES
[1] Eisner, H., “System Design, Architecting and Heuristics,” International Conference on Industrial Engineering and Systems Management, IESM 2009, Montreal, Canada, May 13–15, 2009.
[2] Rechtin, E., Systems Architecting, Prentice-Hall, 1991.
[3] C4ISR Architectural Framework, version 2.0, U.S. Department of Defense (DoD), Washington, DC, Dec. 18, 1997.
[4] Eisner, H., “Eisner’s Architecting Method (EAM): Prescriptive Process and Products,” Tutorial, INCOSE 2003, Arlington, VA, 29 June – 3 July 2002.
C H A P T E R 9
Functional Decomposition
The system design process begins with functional decomposition. That is, all of the system’s functions are identified and further broken down into sub-functions. One is careful to deal only with functionality and not the suggested or implied instantiations of these functions. The functions are the “what” that is to be part of the system and not the “how” of building each function. This distinction
is very important. We do not want to pre-judge the technical solutions, which are to be defined later
in the design process. Some examples of functional decomposition follow.
9.1 A SIMPLE COMPUTER SYSTEM
Here is a short list of most of the important functions of a computer system:
1. Input.
2. Storage.
3. Processing.
4. Output.
5. Operating Software.
6. Applications Software.
7. Security.
8. Network Connectivity.
9. Power Supply.
10. Physical Structure.
Note the absence of even a hint as to how these functions are to be designed. That will be part of
the synthesis activity.
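As a small illustration of how such a decomposition can be recorded for the later allocation and synthesis steps, the Python sketch below captures functions and sub-functions as plain data; the sub-functions shown are hypothetical examples and are not part of the list above.

# Sketch of a functional decomposition captured as data, using a few of the
# computer-system functions listed above. The sub-functions are hypothetical
# illustrations added here; they imply nothing about instantiation.

decomposition = {
    "Input": ["keyboard entry", "pointing device", "external media"],
    "Storage": ["primary memory", "long-term storage"],
    "Processing": ["arithmetic/logic", "instruction control"],
    "Output": ["display", "printing"],
    "Security": ["user authentication", "data protection"],
}

for function, sub_functions in decomposition.items():
    print(function)
    for sub in sub_functions:
        print(f"  - {sub}")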
9.2 A C4ISR SYSTEM
Seven functions are immediately suggested from the title of this type of system:
1. Command.
2. Control.
3. Communications.
4. Computation.
5. Intelligence.
6. Surveillance.
7. Reconnaissance.
If we extend the system to air defense, for example, we may add some additional functions,
such as the following:
• Threat assessment.
• Scanning.
• Detection.
• Tracking.
• Missile/Target assignment.
• Launch.
• Target Kill.
• Damage assessment.
9.3 EARTH-OBSERVING SYSTEM (EOSDIS)
A system that NASA developed was initially structured into Segments and Functions [1]. These
can easily be reiterated in the form of functions and sub-functions, as illustrated in Table 9.1:
9.4 FAA’S NEXTGEN SYSTEM
The FAA (Federal Aviation Administration) has been planning the Next Generation System (a
system of systems) for some time [2]. Basic functions of NextGen have been described largely (but
not exclusively) in terms of “services,” as follows:
1. Interaction Services.
2. Mission Services.
3. Support Services.
Table 9.1: Functions and Subfunctions for the Earth Observing System (EOSDIS) [1].
Function 1—Flight Operations
Subfunction 1.1—Mission Control
Subfunction 1.2—Mission Planning and Scheduling
Subfunction 1.3—Instrument Command Support
Subfunction 1.4—Mission Operations
Function 2—Science Data Processing
Subfunction 2.1—Data Processing
Subfunction 2.2—Data Archiving
Subfunction 2.3—Data Distribution
Subfunction 2.4—Data Information Management
Subfunction 2.5—User Support for Data Information
Subfunction 2.6—User Support for Data Requests
Subfunction 2.7—User Support for Data Acquisition and Processing Requests
Function 3—Communications and System Management
Subfunction 3.1—Distribution of EOS Data to Nodes
Subfunction 3.2—Distribution of Data Among Active Archives
Subfunction 3.3—Interface with External Networks
Subfunction 3.4—Network/Comms Management and Services
Subfunction 3.5—Configuration Management
Subfunction 3.6—Site Processing Assignment and Scheduling
Subfunction 3.7—Performance, Fault and Security Management
Subfunction 3.8—Accounting and Billing
4. SOA Core Services.
5. Security Services.
6. Technical Infrastructure Services.
7. Administrative Services.
8. Services Provisioning Management.
9. Enterprise Governance.
10. SOA Governance.
As a point of information, the FAA cites these Services as part of the Systems View framework
that is recommended in the DoDAF approach to architecting [3]. In this case, the overall architecture
has been selected as the SOA, or service-oriented architecture.
Several additional points are made here with respect to functional decomposition. The first
is that the functions need to be defined as independently as possible from one another. The second
is that ultimately we will wish to know what the possible interactions might be between functions,
as in data/information flows. Another related point was suggested by Rechtin in his articulation of
system heuristics, namely [4, p. 313]:
“Do not partition by slicing through regions where high rates of information exchange are
required.”
Yet another point is that we will also be associating a set of requirements with each and every
function and subfunction, to the maximum extent possible, as part of the requirements engineering
process. Finally, there remains a question as to how many levels of decomposition are appropriate.
At this point, the answer is twofold: (a) it depends upon the size of the system, and (b), often, for
purposes of architecting a relatively small system, a breakdown to functions and subfunctions is
workable. This is where the experience factor comes into play when architecting a real world system.
REFERENCES
[1] Eisner, H., Essentials of Project and Systems Engineering Management, 3rd Edition, John Wiley, 2008.
[2] www.faa.gov/nextgen
[3] C4ISR Architectural Framework, version 2.0, U.S. Department of Defense (DoD), December 18, 1997.
[4] Rechtin, E., Systems Architecting, Prentice-Hall, 1991.
C H A P T E R 10
Requirements Engineering
The area of requirements has been a problem for many years. It seems to be one of the weak points
in the systems engineering process, and, collectively, we have had great difficulty in terms of finding
real solutions.
For example, the NDIA (National Defense Industrial Association) cites “Inadequate Requirements Engineering” as one of the top five systems engineering problems [1]. The GAO (Government Accountability Office) does very much the same in terms of reasons why our systems are not being built and delivered within acceptable cost, schedule and technical performance profiles [2]. The
DoD (Department of Defense, DUSD, 2006) claims that requirements engineering is one of our key
software issues [3]. Finally, in terms of this presentation, we have the DAPA (Defense Acquisition
Performance Assessment) report that confirms “requirements” as one of six broad problem areas [4].
Just about all of our systems engineering processes include the definition of requirements as an
early milestone. After all, how can we proceed with designing and building a system without a clear
set of requirements that have been defined “up front”? The basic answer is that we do an articulation
of requirements up front, but it is at that very time that we know the least about a system. However,
once these requirements have been written, they tend to be cast in stone, and many program managers are
very hesitant to change them, for a variety of reasons. Indeed, one of the issues with requirements,
viewed as one of the reasons we get into trouble, is known as “requirements creep.” So despite our
tendency to keep requirements fixed, they somehow creep, leading to difficulties.
This author takes the view that, in many cases, requirements need to be challenged and changed
as we become smarter about the system we are trying to build. Indeed, a good systems integration
company should recommend changes when they appear to be the correct course of action. Further,
if the customer agrees, it should be possible to make such changes in an expeditious manner. This
point is made by this author in a text dealing with these types of matters [5].
One of our leading software engineers, Barry Boehm, tells a story that provides a sharp focus
on this matter of reconsidering and changing requirements [6]. His company was building a “search”
system with a requirement to have a query response time of one second. After working on the system
and the one second response time matter for some time, the company concluded that in order to
meet this requirement, the system cost would have to increase from $30M to $100M. However, if
the response time requirement were changed to 4 seconds (90 percent of the queries were all right
with 4 seconds), the system could be delivered at the original budget of $30M. In other words,
moving the response time from one second to four seconds would “save” some $70M. This led to
changing the requirement, with all parties pleased with the ultimate decision.
10.1 REQUIREMENTS ALLOCATION
We wish to relate requirements engineering with the previous topic of functional decomposition.
After the latter is achieved, and a complete requirements document written, the next step is to allocate
each and every requirement to one or more of the system functions and/or sub-functions. This is a
crucial step, allowing one to proceed with the selection of design alternatives for the functions and
sub-functions. Without knowing the requirements for these functions, there is typically not a good
way to proceed. It is at this point, however, that we run into difficulties, generally of two types.
In one case, we wind up with functions or sub-functions for which no requirements have been
defined. Broadly speaking, there are two ways to solve this problem. The first is to select a design
approach that reflects the state-of-the-art. The second is to derive a requirement when none was
explicitly stated. Hence, we have the topic of derived requirements as a subject of interest, to be
referred to again below.
Another possibility is that we have requirements that appear to “have no home.” That is, we
cannot see exactly how to allocate these requirements to the functions and/or sub-functions. In such
a case, we typically need to re-structure the functional decomposition to accommodate the “extra”
requirements. Sometimes this step simply means that the requirement applies but at a lower and not
precisely defined part of the decomposition.
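A minimal sketch of this allocation bookkeeping, with hypothetical requirement and subfunction identifiers, is shown below. It flags both difficulties just described: subfunctions that have received no requirement (candidates for derived requirements) and requirements that have found no home.

    # Requirements allocation check (identifiers are illustrative assumptions only).
    subfunctions = ["F1.1", "F1.2", "F2.1", "F2.2", "F3.1"]

    # requirement id -> subfunctions to which it has been allocated (empty = no home yet)
    allocation = {
        "R-001": ["F1.1"],
        "R-002": ["F1.1", "F2.1"],
        "R-003": ["F2.2"],
        "R-004": [],
    }

    allocated_targets = {sf for targets in allocation.values() for sf in targets}
    uncovered = [sf for sf in subfunctions if sf not in allocated_targets]
    homeless = [rid for rid, targets in allocation.items() if not targets]

    print("Subfunctions with no requirements (derive requirements):", uncovered)
    print("Requirements with no home (restructure the decomposition):", homeless)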
10.2 DERIVED REQUIREMENTS
If the user/customer develops a set of requirements, and many functions and/or sub-functions have
no associated requirement, then it is usually the responsibility of the systems integrator to derive a
set of “missing” requirements. This set is then submitted for approval by the customer. When finally
approved, the process can continue such that we have a “complete” set of requirements.
This step is a critical one as can be seen especially when we are deriving an error budget for a
system in which errors need to be precisely defined as well as controlled. This can be illustrated by
reference to a simple error model in which there is an overall “pointing” error, T, which is one half
of one degree (the stated requirement as a standard deviation). Here is a simple set of derived errors
if we have four independent and additive errors that contribute to the overall error:
T = W + X + Y + Z (error variables are additive and independent)
Var(T) = Var(W) + Var(X) + Var(Y) + Var(Z) (variance of sum is sum of variances)
(0.5)^2 = 0.25 = 0.05 + 0.06 + 0.06 + 0.08 .
Deriving new requirements, not previously stated, is an important part of the responsibility
of the systems integrator.
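The arithmetic of such an error budget is easy to automate. The short Python sketch below uses the numbers from the example above, confirms that the four derived variances fit within the allowed total, and reports the standard deviation implied for each error source.

    import math

    # Overall pointing error requirement, stated as a standard deviation (degrees).
    total_sigma = 0.5
    allowed_variance = total_sigma ** 2            # 0.25

    # Derived variances for the four independent, additive error sources.
    derived = {"W": 0.05, "X": 0.06, "Y": 0.06, "Z": 0.08}

    allocated = sum(derived.values())
    print(f"Allowed variance:   {allowed_variance:.2f}")
    print(f"Allocated variance: {allocated:.2f}")
    print("Allocation fits the budget:", allocated <= allowed_variance + 1e-9)
    for name, var in derived.items():
        print(f"  sigma({name}) = {math.sqrt(var):.3f} degrees")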
10.3 SOME NASA PERSPECTIVES
NASA has been working with high-tech requirements for many years. Accordingly, they have a
rather well-defined set of procedures dealing with requirements as part of their system engineering
discipline [7]. As just one example, Table 10.1 shows a partial allocation and flow-down of science
pointing requirements. The flow-down implies an overall error model:
Table 10.1: Partial Allocation and Flow-down of Science Pointing Requirements [7].
SCIENCE POINTING REQUIREMENTS
1. Spacecraft Requirements
1.1 Attitude Determination Requirements
1.1.1 Total Gyro to Star Tracker Error
1.1.2 Attitude Estimation Error
1.2 Science Axis Knowledge Requirements
1.2.1 Instrument Boresight to Science Axis
1.2.2 Science Axis to Attitude Control System Reference
2. Ground Requirements
Here are four very specific requirements statements that NASA points to in relation to a
Thrust Vector Controller (TVC):
- The TVC shall gimbal the engine a maximum of 9 degrees, plus or minus 0.1 degrees.
- The TVC shall gimbal the engine at a maximum rate of 5 degrees/second, plus or minus
0.3 degrees per second.
- The TVC shall provide a force of 40,000 pounds, plus or minus 500 pounds.
- The TVC shall have a frequency response of 20 Hz, plus or minus 0.1 Hz.
Other illustrative numerical requirements that one might see in a requirements document
include the following (a small sketch of how such statements can be recorded and checked
appears after the list):
- The system shall have an overall availability of 0.98.
- The system shall have an overall mean-time between failures (MTBF) of 500 hours.
- The system shall have a response time of not more than 5 seconds, 90 percent of the time.
- The system shall have a bit error rate (BER) of no worse than 10^-12.
- The system shall have a probability of detection of at least 0.98.
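As a small, illustrative sketch (the measured values below are invented, and the nominal-plus-tolerance interpretation is a simplification of the TVC statements), such toleranced requirements can be recorded and checked mechanically:

    from dataclasses import dataclass

    @dataclass
    class TolerancedRequirement:
        """A "shall" statement of the form: nominal value, plus or minus a tolerance."""
        name: str
        nominal: float
        tolerance: float
        units: str

        def is_met(self, measured: float) -> bool:
            return abs(measured - self.nominal) <= self.tolerance

    requirements = [
        TolerancedRequirement("Gimbal angle", 9.0, 0.1, "degrees"),
        TolerancedRequirement("Gimbal rate", 5.0, 0.3, "degrees/second"),
        TolerancedRequirement("Force", 40000.0, 500.0, "pounds"),
        TolerancedRequirement("Frequency response", 20.0, 0.1, "Hz"),
    ]
    measurements = {"Gimbal angle": 9.05, "Gimbal rate": 4.6,
                    "Force": 39700.0, "Frequency response": 20.15}

    for req in requirements:
        value = measurements[req.name]
        status = "PASS" if req.is_met(value) else "FAIL"
        print(f"{req.name}: measured {value} {req.units} -> {status}")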
10.4 TOP HALF DOZEN REQUIREMENTS
RECOMMENDATIONS
We now suggest six notions having to do with the appropriate treatment of requirements.
1. Make sure to allocate requirements.
One should seek to allocate each and every requirement to the system functions and sub-
functions, as well as the overall system, as appropriate. This allows the synthesis process to
proceed such that all design engineers know what the requirements are.
2. Derive new requirements when appropriate.
New requirements need to be derived when there are functions and/or sub-functions that
have no requirements associated with them during the above allocation process. These derived
requirements need to be approved by the customer before they are fully accepted.
3. Locate and track all high risk requirements.
This necessitates a deep review of all requirements to see which ones are likely to lead to trouble
downstream.Those confirmed as high risk need to be questioned, and alternative requirements
suggested. We are trying to reduce risk, all the time.
4. Carry out a trade-off analysis of important and difficult requirements.
As changes in requirements may be suggested, we need to determine the possible impacts of
such changes. We are looking for changes in schedule, costs and/or performance. A previously
discussed situation in this chapter noted that an increase in system response time from 1 to 4
seconds resulted in a projected decrease in system cost of $70 million. This is a situation worth
noting!
5. Use the principle of “iteration” with respect to requirements.
The notion here is that when we don’t have a solid requirement, we insert and accept a
temporary “TBD” (to be determined). That is a place-holder that triggers a deeper look at
what the appropriate requirement should be. When that look is completed, we come back to
the TBD and put in the appropriate number.
6. Use “Requirements Tools” when warranted.
As systems have become larger and more complex, we need to use an automated “requirements
tool” to help us manage the overall requirements engineering process. The companies that provide
these tools have proven their value. Look for tools like “CORE” or “DOORS,” as well as others with
these kinds of capabilities.
REFERENCES
[1] www.ndia.org 35
[2] www.gao.gov 35
[3] Schaeffer, M. D. (2006). “DoD Systems and Software Engineering – Taking It to the Next
Level,” Systems and Software Engineering, Office of the Deputy Under Secretary of Defense
(A & T), October 25, 2006. 35
[4] Department of Defense (DoD), “Defense Acquisition Performance Assessment,” Kadish Re-
port, January 2006, Washington, DC. 35
[5] Eisner, H., Managing Complex Systems – Thinking Outside the Box, John Wiley, 2005. 35
[6] Boehm, B., “Unifying Systems and Software Engineering,” Computer Magazine, March 2002,
pp. 114–116. 35
[7] “NASA Systems Engineering Handbook,” NASA/SP-2007–6105 Rev. 1, NASA Headquar-
ters, Washington, DC, December 2007. 37
CHAPTER 11
Synthesis
This is a most crucial step in the process of constructing an architecture for the system. Both the
functional decomposition and the requirements engineering are preparatory steps. The essence of
synthesis is to develop a set of design alternatives for all of the sub-functions that have been defined for
the system. The simplest way to take this step is to envision and then fill out a table that shows these
design alternatives in a clearly understandable form. Such a table is illustrated below:
Table 11.1: Synthesis Construction.
FUNCTIONS   SUB-FUNCTIONS   “Low-End” System   Moderate Upgrade   Major Upgrade
F1          F1.1
            F1.2
            F1.3
F2          F2.1
            F2.2
FN          FN.1
            FN.2
            FN.3
            FN.4
We note that the suggested table has the functions and sub-functions listed as the rows, and
three alternative system architectures are listed as the columns. At this point, the alternatives can be
thought of as the following:
1. A “low-end” system.
2. A moderate upgrade beyond the “low-end” system.
3. A major upgrade above the “low-end” system.
Another way to envision these alternatives is explored in a later chapter. For now, we think in
terms of constructing three alternative architectures, with increasing performance or effectiveness.
All three alternatives satisfy the stated requirements for the system. An example of how the design
entries might be stated, for a simple computer system, is shown in Table 11.2.
Table 11.2: Illustrative Computer Design Alternatives as Part of the Synthesis Process.
FUNCTION     “Low-End” System   Moderate Upgrade    Major Upgrade
INPUT        Keyboard           Keyboard            Keyboard
             Mouse              Mouse               Mouse
             CD Drive           CD Drive            CD Drive
             USB                USB                 USB
             Telephone          Telephone           Telephone
                                Touchpad            Touchpad
                                Video               Video
                                Microphone          Microphone
                                                    Touch Screen
                                                    Voice Recognition
                                                    Voice Recording
                                                    Fiber Optic (new)
PROCESSING   X GHz              (X+Y) GHz           (X+Y+Z) GHz
MEMORY       R GB               (R+S) GB            (R+S+T) GB
Only three functions for the computer system are illustrated - the input, the processing, and
the memory. We develop the table by working on a row at a time, moving from left to right. Each
time this is done, we envision improvements in the performance, or effectiveness, of the approach.
After all rows have been completed, we take the columns, one at a time, and look at them from top
to bottom. In doing so, we are verifying that the entries are interoperable with one another. When
all this is completed, we have a synthesis “product” that is what we want and also implies the process
by which it was constructed. This means that when we see the completed table, it is easy to envision
the manner in which it was developed.
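One convenient, assumed way to capture such a synthesis table is as a small data structure, one entry per function and one column per alternative, so that completeness can be checked mechanically; the interoperability review of each column remains a human judgment. The sketch below mirrors the content of Table 11.2.

    # A machine-readable form of the synthesis table (content follows Table 11.2).
    ALTERNATIVES = ["Low-End", "Moderate Upgrade", "Major Upgrade"]

    synthesis_table = {
        "INPUT": {
            "Low-End": ["Keyboard", "Mouse", "CD Drive", "USB", "Telephone"],
            "Moderate Upgrade": ["Keyboard", "Mouse", "CD Drive", "USB", "Telephone",
                                 "Touchpad", "Video", "Microphone"],
            "Major Upgrade": ["Keyboard", "Mouse", "CD Drive", "USB", "Telephone",
                              "Touchpad", "Video", "Microphone", "Touch Screen",
                              "Voice Recognition", "Voice Recording", "Fiber Optic"],
        },
        "PROCESSING": {
            "Low-End": ["X GHz"],
            "Moderate Upgrade": ["(X+Y) GHz"],
            "Major Upgrade": ["(X+Y+Z) GHz"],
        },
        "MEMORY": {
            "Low-End": ["R GB"],
            "Moderate Upgrade": ["(R+S) GB"],
            "Major Upgrade": ["(R+S+T) GB"],
        },
    }

    # Completeness check: every function needs a design entry for every alternative.
    for function, columns in synthesis_table.items():
        missing = [alt for alt in ALTERNATIVES if not columns.get(alt)]
        if missing:
            print(f"{function}: missing design entries for {missing}")
        else:
            print(f"{function}: all {len(ALTERNATIVES)} alternatives defined")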
By definition, each of the three alternatives is an architecture, defining the principal ways
in which the functions and sub-functions are instantiated. If the design team wishes to consider
more than three alternatives, there is no a priori constraint against doing so. Also, the suggested
process is one in which the best design/architecture members of the team spend whatever time is
necessary, working together, to construct the table. It is truly a team approach to building the systems
architecture.
When the table is complete, for all system functions, the stated alternatives are the best that
the design team is able to construct. If some very good approach is not included, it is a failing of
the design team. If the PM and Chief Systems Engineer wish to guard against that possibility, the
synthesis step can be the following:
- Carried out by two parallel and independent teams, and/or
- Have the results reviewed in detail by a set of senior consultants.
Other variations on these themes can also be considered. The basic notion here is that we wish
to construct the best possible alternative architectures as part of our solution to the overall problem.
Making a serious mistake in architecting is very likely to be fatal to the overall success of the project
and system.
Another interesting question about this process: what happens if it is discovered that one or
more sub-functions have been missed? The basic answer is that at the point of discovery, the team
needs to go back and include the missed sub-function and its instantiations. A further question: can
some of the architectures include functions that the others do not include? The answer to that is,
“no,” since in such a case, we would be using different requirement sets for the different alternatives.
In the world of both mandatory and optional requirements, however, this result is an acceptable, and
even expected, possibility.
At this point, there has been no formal evaluation of the alternatives against one another. That
evaluation is reserved for the “analysis” step that follows, and it is explored in some detail in the next
chapter. However, if the formal analysis reveals serious flaws in the definition of the alternatives,
then one returns to the synthesis step and makes the suggested improvements. In this fashion, we
are using one of the key aspects of the systems approach, i.e., to make progress and improvements
through “iteration.” This is one of the important places in which iteration works for the project
team, and it is necessary to allow time for such a contingency. In other words, we have a type of loop
between synthesis and analysis that should be a natural part of how we build large complex systems.
11.1 SUPPORTING TABLES AND VIEWS
The synthesis chart is central to the design of alternative architectures [1, 2]. It is simple and easy to
understand. That is one of its virtues. However, it is also suggested that the design/synthesis team
consider adding additional information that provides support to the process. This information can
come in the form of tables and views that help to explain and document what is meant by each of
the alternatives. For example, the three DoDAF views (operational, systems, and technical) can be
added here, as can other recommended views. These are more explanatory at this point than they
are evaluative. We do not, in general, want to divert attention from the main tasks of synthesis and
analysis, as described here, in order to come up with a preferred architecture. The development of
additional information should flow from the need for such information, as perceived by the design
team. Now we move on to the next step, the formal process of “analysis.”
REFERENCES
[1] Eisner, H., “Eisner’s Architecting Method (EAM): Prescriptive Process and Products,” Tutorial,
INCOSE 2003, Arlington, VA, 29 June - 3 July 2003. 42
[2] Eisner, H., Essentials of Project and Systems Engineering Management, 3rd Edition, John Wiley,
2008. 42
CHAPTER 12
Analysis
The primary focus of the “analysis” activity discussed here is to evaluate the architecture alternatives
that are synthesized as part of the previous chapter’s design work. As such it is necessarily limited.
But at the same time, due to its narrow scope and perspective, the analysis can address the matter
of which architecture has the highest levels of performance and effectiveness. Ultimately, we are
interested in selecting a preferred architecture for the system under consideration. During this first
iteration of the analysis, our primary interest is in comparative (vs. absolute) results. In a second or
third iteration, we will be in a position to deepen our analysis since we generally will have more time,
and likely more resources to apply to the problem.
We take as a given input the three (or more) alternatives that we have constructed during the
synthesis step. These remain:
1. The “Low-End” System.
2. The Moderate Upgrade, and
3. The Major Upgrade.
The immediate consideration is to decide upon the evaluation criteria that we will use in
assessing the merits of the three alternatives. Criteria that we tend to use for generic classes of
transportation and communication systems are listed below in Table 12.1.
In the evaluation suggested here, which is one of the least complex assessment procedures, we
will basically look at a rating and weighting scheme for comparative analysis. The analysis takes the
form of Table 12.2 as below.
In Table 12.2, we are showing the three alternatives (columns) against a set of nine evaluation
criteria (rows). The rating scheme is on a scale of 0 − 10, and the weights are selected in the interval
0 − 1 (adding to unity). We evaluate each alternative on the given scale and multiply the ratings
by the weights to yield the comparative results in Table 12.2. We also interpret the “bottom-line”
numbers as measures of effectiveness (MOEs) of the alternatives. This represents the first-order
calculation of the effectiveness of each of the alternatives.
The numbers in Table 12.2 suggest that the moderate upgrade represents about a 12% im-
provement over the “low-end” system, and that the major upgrade is (a) about a 5% improvement
over the moderate upgrade and (b) about a 17% improvement over the “low-end” system. As a
ground-rule, all systems satisfy the designated requirements.
Table 12.1: Typical Evaluation Criteria for Transportation and Communications Systems.
TRANSPORTATION SYSTEM CRITERIA     COMMUNICATIONS SYSTEM CRITERIA
Capacity                           Capacity
Speed                              Bandwidth
Frequency of Service               Grade of Service
Risk                               Connectivity
Reliability                        Security
Safety                             Survivability
Trip Time                          Risk
Environmental Effects              Expandability
Growth Capability                  Availability
Table 12.2: An Evaluation Framework for Alternative Transportation Systems.
EVALUATION      “Low-End” System   Moderate Upgrade   Major Upgrade   Weights
CRITERIA        Rating/Product     Rating/Product     Rating/Product     (%)
Capacity        6/.9               7/1.05             9/1.35             15
Frequency       8/.8               8/.8               8/.8               10
Risk            9/1.8              8/1.6              7/1.4              20
Reliability     7/.7               8/.8               9/.9               10
Safety          7/1.05             8/1.2              9/1.35             15
Trip Time       7/.35              8/.4               8/.4                5
Environment     6/.6               9/.9               9/.9               10
Growth          6/.3               8/.4               8/.4                5
Availability    7/.7               9/.9               9/.9               10
SUMS            7.2                8.05               8.40              100
We note that this type of rating and weighting evaluation scheme:
a. Represents a preliminary assessment of the alternatives.
b. Will ultimately be followed by a more detailed quantitative procedure.
c. Needs to be subjected to a “sensitivity analysis,” e.g., seeing how the results change if the
weights are varied; seeing how the results change if a new group of evaluators provide the
ratings, etc.
d. Needs to be followed by a cost analysis of the alternatives, resulting in specific cost estimates.
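As a minimal illustration, the computation behind Table 12.2 can be reproduced in a few lines of Python, which also makes the sensitivity check in item (c) easy to exercise by perturbing the weights (the particular perturbation shown is an arbitrary assumption).

    # Ratings (0-10) and weights (summing to 1.0), taken from Table 12.2.
    criteria = ["Capacity", "Frequency", "Risk", "Reliability", "Safety",
                "Trip Time", "Environment", "Growth", "Availability"]
    weights = [0.15, 0.10, 0.20, 0.10, 0.15, 0.05, 0.10, 0.05, 0.10]

    ratings = {
        "Low-End System":   [6, 8, 9, 7, 7, 7, 6, 6, 7],
        "Moderate Upgrade": [7, 8, 8, 8, 8, 8, 9, 8, 9],
        "Major Upgrade":    [9, 8, 7, 9, 9, 8, 9, 8, 9],
    }

    def moe(row, w):
        """Weighted sum of ratings: the first-order measure of effectiveness."""
        return sum(r * wi for r, wi in zip(row, w))

    for alternative, row in ratings.items():
        print(f"{alternative}: MOE = {moe(row, weights):.2f}")

    # Simple sensitivity excursion: shift weight from Risk to Capacity and re-score.
    perturbed = weights.copy()
    perturbed[criteria.index("Risk")] -= 0.05
    perturbed[criteria.index("Capacity")] += 0.05
    for alternative, row in ratings.items():
        print(f"{alternative} (perturbed weights): MOE = {moe(row, perturbed):.2f}")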
12.1 DEEPER LEVELS OF ANALYSIS
The world of “systems analysis,” of which the above is just a narrow sliver, is wide and deep. Many
claim that it was born during the “Whiz Kids” regime under Robert McNamara in the Department
of Defense during the Kennedy administration [1]. Accordingly, there are many techniques and
approaches that can be said to be part of systems analysis [2, 3]. As a result, this author has reserved
two additional chapters in this book that will elaborate further on this topic, a critically important
one in the “systems” world. These chapters are the following:
- Chapter 15: Modeling and Simulation.
- Chapter 16: Other Analysis Relationships.
These treatments will give the reader a broader understanding of the “analysis” world and its con-
stituent elements.
12.2 ANALYSIS OF ALTERNATIVES (AOA)
The analysis of alternatives was implied by the various aspects of the “systems approach,” as presented
in chapter two of this book. To the maximum extent feasible, we are always asking the question: is
there a better approach or alternative? In this case, we are looking at alternative architectures and
asking this same question.
The notion of an “analysis of alternatives” (AoA) was made very concrete as the DoD set forth
its acquisition instructions and directives [4]. An AoA plan is required during the system refinement
activities, and it is used to guide the process. The preferred solution is derived from an AoA.
AoA precepts were incorporated in an “AoA Handbook,” produced by an Air Force group
dealing with aerospace studies [5]. Important aspects of this work included the following:
- An AoA definition: the evaluation of the performance, operational effectiveness, operational
suitability, and estimated costs of alternative systems to meet a mission capability.
- Key AoA components dealing with the operational environment, alternative developments,
effectiveness analysis, cost analysis, risk analysis and cost-effectiveness analysis.
- An articulation of analysis types, namely, analyses of (a) risk, (b) suitability, (c) sensitivity,
(d) risk, (e) trade space, and (f ) comparative analysis.
The above, and other AoA activities, especially in the federal government, are critically important
in bringing the consideration of alternatives into play to the maximum extent possible, within the
constraints of any program. This is absolutely necessary for success as we continue to build larger
and more complex systems.
REFERENCES
[1] Eisner, H., Essentials of Project and Systems Engineering Management, 3rd Edition, John Wiley,
2008. 46
[2] Gibson, J., W. Scherer and W. Gibson, How To Do Systems Analysis, John Wiley, 2007. 46
[3] Buede, D., The Engineering Design of Systems, John Wiley, 2000. 46
[4] DoD Instruction 5000.2, “Operation of the Defense Acquisition System,” May 12, 2003,
Department of Defense (DoD), Washington, DC. 46
[5] AoA Handbook, www.oas.kirtland.af.mil 46
CHAPTER 13
Cost-Effectiveness
Recall that we are attempting to construct a cost-effective solution to our client’s problem. Thus,
cost-effectiveness is a dominant consideration, as is the notion that we should be selecting a preferred
architecture from among a set of alternative architectures.
The cost-effectiveness perspective forms the basis for the recommended architecting approach
in this book. It is also reinforced by a simple statement in a key Department of Defense (DoD)
Directive, a part of the so called “5000 series” [1]. That statement says that those dealing with the
acquisition of large-scale systems “shall seek the most cost-effective solution over the system’s life
cycle.”
In general terms, we consider three overall domains for architecting, such domains having
the descriptive names:
(a) The low cost domain.
(b) The high effectiveness domain, and
(c) The best value domain.
These domains, looking at effectiveness plotted vs. system cost, are depicted in Figure 13.1.
[Figure 13.1 plots effectiveness against cost, with the low cost domain at the lower left, the best value domain at the knee of the curve, and the high effectiveness domain at the upper right.]
Figure 13.1: Low Cost, High Effectiveness and Best Value Domains.
In the low cost domain, we are specifically architecting a low-cost solution that still satisfies
the overall customer needs and requirements. In the high effectiveness domain, we are looking for
very high performance, with cost not representing a dominant factor. In the best-value domain, we
are searching for the “knee-of-the-curve” such that we obtain large increases in effectiveness per
dollar expended, to the point at which the curve starts to show diminishing returns.
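One simple way to make the knee-of-the-curve idea concrete, sketched below with purely illustrative cost and effectiveness numbers, is to compute the marginal effectiveness gained per additional dollar between successive design points and flag the point after which that marginal return drops sharply (the particular threshold used is an arbitrary assumption).

    # Illustrative (assumed) cost/effectiveness points, ordered by increasing cost.
    points = [(10, 3.0), (20, 5.5), (30, 7.2), (40, 8.0), (60, 8.4), (90, 8.6)]

    # Marginal effectiveness gained per additional $M on each segment of the curve.
    slopes = []
    for (c0, e0), (c1, e1) in zip(points, points[1:]):
        slope = (e1 - e0) / (c1 - c0)
        slopes.append(slope)
        print(f"${c0}M -> ${c1}M: {slope:.3f} effectiveness units per $M")

    # Crude knee heuristic: the last point reached before the marginal return falls
    # below one quarter of the initial slope.
    knee = points[0]
    for (c1, e1), slope in zip(points[1:], slopes):
        if slope >= 0.25 * slopes[0]:
            knee = (c1, e1)
        else:
            break
    print(f"Approximate best-value (knee) point: cost ${knee[0]}M, effectiveness {knee[1]}")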
Each of these domains has its place in the building of both government and commercial
systems. Here are some examples.
In the low-cost domain, the government is often looking for a “plain-vanilla” personnel track-
ing system that does not push the state-of-the-art, saving more substantial funding for less de-
manding applications. In the high effectiveness domain, the government might be looking for a
high performance fighter that has no equal anywhere in the world. In the best-value domain, the
government might be trying to obtain the “best bang for the buck” and can likely go beyond the
minimum cost system in order to get to the “knee-of-the-curve.”
The domain of interest, in any particular case, may ultimately depend upon the specific program
requirements and constraints. For example, a severely limited budget or a schedule constraint both
point toward a low cost domain solution. A need to have the best technological solution pushes
toward the high effectiveness domain, but the need must be accompanied by a supporting budget
and schedule. Without the latter, risk begins to skyrocket, and the program starts to be headed
toward failure.
It is recommended here that the “best-value” domain be used to the maximum extent possible.
This approach tends to be justified on its own merits, and it tends also to support technology and
system upgrades if such are part of the overall plan for the system.
Once the cost-effectiveness domain is established, the need remains for the consideration
of alternatives within that domain. For example, if one is budget constrained and therefore in the
low-cost domain, it is still both appropriate and important to consider two to three alternatives from
which to ultimately select a best or preferred architecture. This notion is accepted as a matter of
principle even at the subsystem design level.
As an example in the high-effectiveness domain, we ultimately held a competition for the Joint
Strike Fighter (JSF) between Boeing and Lockheed-Martin. The latter won that competition, but
both fighter designs were clearly in the high-effectiveness domain, for good and sufficient reasons.
The low-cost, knee-of-the-curve, and high effectiveness domains show themselves in terms
of how consumers buy automobiles, as an example. At the low-end, there are many cars that sell in
the $9,000 to $14,000 range, and many purchasers are completely comfortable with these low-cost
offerings. At the same time, there is a quite healthy domain that consists of automobiles at about
twice these prices. Many would identify that area as containing best-value solutions. Finally, although
most basic transportation needs can be satisfied by these two domains, there is also a clear market for
the $80-100K automobile and its associated high-effectiveness domain. Today’s systems engineer
must be prepared for projects and systems that can operate at any point on the cost-effectiveness
scale.
13.1 VIEWS
The cost-effectiveness graph in Figure 13.1 is the principal overview of the merits of each alternative.
Typically, however, we will want to develop other “views” of the alternatives, such that these views
provide insight into key system relationships, parameters and tradeoffs. Table 13.1 below provides
a list of other views that might be used in order to more clearly characterize the system alternatives
under consideration [1].
Table 13.1: Other Views That Might Be Used to Evaluate Alternative Systems and Architectures [2].
A. Requirements Satisfaction
B. Risk and Requirements
C. Cost by Function, and by Requirements
D. Effectiveness vs. Risk
E. Effectiveness vs. Human Factors
F. Sensitivity to Changes in Criteria Weights
G. Effectiveness vs. RMA
We note the tendency toward measurement in looking for other views. As best we can, we
wish to formulate an objective and quantitative understanding of the systems under examination.
Each view adds information that will help to quantify and also clarify the situation at hand.
As we complete this chapter, it is worth noting two key observations made by Eberhardt
Rechtin, cited earlier in this book (i.e., Chapters 2 and 3) in reference to the overall topic of archi-
tecting. These statements are as follows [3]:
- “no complex system can be optimized to all parties concerned,” and
- “the choice between architectures may well depend upon which set of drawbacks the client
can handle best.”
The factors that go into the calculation of effectiveness, and especially their weights, may well differ,
depending upon who is doing the architecting and who is acquiring the system. In short, various
stakeholders see the system differently. This point was reinforced on a consulting assignment of
this author some years ago. In support of nine members of the Aviation Advisory Commission
(AAC), the question was raised as to what weights they would give to a set of criteria associated with
alternative aviation systems of the future. Table 13.2 below shows the diverse views of the evaluators
in regard to each of the specific criteria [4]. We also note that the evaluators decided that investment
and operating costs had to be part of the process, whereas in many other frameworks, cost is taken
to be an independent variable.
Table 13.2: Weights (in %) Given to a Set of Criteria Used to Evaluate Alternative Aviation
Systems [4].
Criteria              E1%  E2%  E3%  E4%  E5%  E6%  E7%  E8%  E9%  Average
Social                  5   10   15   10   15    5   10   21    8     11.0
Environmental          20   40   10   15   20    5   15    8   12     16.1
Service Quality        20   10   10   15   20   15   15   19   18     15.8
System Capacity        10   10   10   15   20   20   15   15   13     14.2
Human Factors           5   10   10    5    5   10    5    1    6      6.3
Internat’l Economic     5   10    5   10    5   15   10   10   13      9.2
Investment Cost        15    5   20   15    5   15   15   12   14     12.9
Operating Cost         20    5   20   15   10   15   15   14   16     14.4
What all this means is that the systems engineer must not lose sight of the fact that various
measures of the system’s effectiveness will differ in their importance, depending upon the vantage
point of the observer/evaluator. The systems engineer must therefore always be prepared to exam-
ine the sensitivities of the solutions to these factors. At the same time, this does not change the
basic notion that the “bottom line” is to seek the most cost-effective solution from among a set of
alternatives.
REFERENCES
[1] DoD Directive 5000.1, “The Defense Acquisition System,” May 12, 2003, Department of
Defense (DoD), Washington, DC. 48, 50
[2] Eisner, H., “New Systems Architecture Views,” 25th National Conference of the American
Society of Engineering Management (ASEM), Alexandria, Va, Oct. 20–23, 2004. 50
[3] Rechtin, E., Systems Architecting, Prentice-Hall, 1991. 50
[4] Eisner, H., Computer-Aided Systems Engineering, Prentice-Hall, 1988. 51
CHAPTER 14
Life Cycle Costing
It is clear that we need to estimate the costs of systems in order to carry out the cost-effectiveness
analysis cited in Chapter 13. We also need to understand the trade-offs associated with various
design choices in terms of their costs. And the ultimate budgets for systems are based upon our
agreement, or not, that systems can be built for a designated set of costs.
There also has been great concern regarding the escalation of costs over time and, therefore,
the affordability of the systems that we build and buy. This implies that we need to know not only
what today’s costs are, but what tomorrow’s costs are likely to be. This might turn out to be one of
the most serious unknowns and challenges that need to be faced by a system’s project manager (PM)
and also the Chief Systems Engineer.
14.1 LIFE CYCLE COST MODEL STRUCTURE
The overall approach, in the context of systems engineering, is to take a “life cycle” view of the
issue of system costing. That is, we estimate the costs of systems over their entire life cycles, and we
compare alternatives on the same basis. We will use the “three by three” approach as designated by
this author and described below.
There are three categories of cost that we use in our life cycle cost model (LCCM), namely:
1. Research, development, test and evaluation (RDT&E).
2. Procurement (or acquisition).
3. Operations and Maintenance (or Support).
The first category deals with all the costs that are expended before we actually purchase or acquire
one or more copies of the system. The second includes the dollars spent for the acquisition. The
third funds the operations and support phase, usually the longest (and most expensive) part of the
life cycle. The details of subordinate costs can be found in an earlier text by this author [1] and in
a text by other authors [2].
The other aspects of the model are the three dimensions:
1. The three cost elements (as above).
2. The specific years of the system’s life cycle, and
3. The sub-systems of which the overall system is constructed.
These three dimensions can be seen to fit very well into the structure of a 3-dimensional spreadsheet
for which the rows are the cost elements, the columns are the years, and the “sheets” are the various
sub-systems. Thus, the “three by three” designation refers to the three categories of cost and also the
three dimensions of the overall life cycle cost model.
14.2 BOTTOMS-UP AND TOP DOWN COST ESTIMATION
NOTIONS
If we look at the above “three by three” model, we are able to get some idea as to the number of data
elements that might be part of such a model. If we assume 25 cost elements, 15 years of the system’s
life cycle, and 8 sub-systems, then we have (25)(15)(8) = 3,000 data “cells,” each containing a cost
estimate. Estimating such costs is not an impossible task, but it takes time and some “digging” to
develop these estimates. Following this approach is basically a “bottoms-up” procedure for building a
life cycle cost model. It is data-intensive, and we have a great deal of visibility into elements of cost.
But it also requires good databases that we can access in order to retrieve data from systems that are
similar to the one we are building.
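A minimal sketch of the “three by three” structure is shown below: a three-dimensional array indexed by cost element, life-cycle year, and subsystem, from which totals can be rolled up along any dimension. The dimensions and the single cost entry are assumptions for illustration, not data from the text.

    # Life cycle cost model skeleton: cost elements x years x subsystems.
    cost_elements = ["RDT&E", "Procurement", "O&M"]            # top-level categories
    years = list(range(1, 16))                                 # 15-year life cycle (assumed)
    subsystems = ["Radar", "Comms", "Displays", "Software"]    # assumed subsystems

    # lcc[element][year_index][subsystem_index] holds one cost "cell" in $M.
    lcc = [[[0.0 for _ in subsystems] for _ in years] for _ in cost_elements]

    # One assumed entry: O&M cost of the Software subsystem in year 6.
    lcc[cost_elements.index("O&M")][5][subsystems.index("Software")] = 2.4

    def total_by_element(element):
        """Roll up all years and subsystems for one cost category."""
        i = cost_elements.index(element)
        return sum(cell for year_row in lcc[i] for cell in year_row)

    def total_by_year(year):
        """Roll up all cost categories and subsystems for one life-cycle year."""
        j = years.index(year)
        return sum(lcc[i][j][k]
                   for i in range(len(cost_elements))
                   for k in range(len(subsystems)))

    print("Number of cost cells:", len(cost_elements) * len(years) * len(subsystems))
    print("Life cycle O&M total ($M):", total_by_element("O&M"))
    print("Total cost in year 6 ($M):", total_by_year(6))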
In addition to the above approach, there is a top-down approach. In such a case, we look at
similar systems, and we use real data from these systems in order to extrapolate to the system in
question. These extrapolations are generally called “cost-estimating relationships” (CERs), and they
allow for the estimation of only a few key parameters, or variables, in order to develop a cost estimate
at the “aggregated” or “system” level. Table 14.1 shows a few examples [1].
Table 14.1: Illustrative Cost Estimating Relationships [1].
Cost Estimated For      Primary Cost Drivers
A Computer              Speed, Storage Capacity
Aircraft Engine         Bypass Ratio, Thrust
A Radio                 Frequency, Power, Number of Channels
A Radar System          Frequency, Bandwidth, Weight, Output Power
An Antenna              Frequency, Size
Software                Delivered Source Instructions, Environment (see brief discussion of COCOMO below)
We can illustrate a well-known CER area by a brief examination of the Constructive Cost
Model known as COCOMO. This model focuses upon the costs of software and has taken the
simple form:
PM = A (KDSI)^B
Where PM = person-months, which is easily converted into a dollar amount, and
KDSI = thousands of delivered source instructions.
A is called an effort multiplier, and B is referred to as a scale factor [3, 4]. The latter is a function
of five variables, and the former is a function of seven or seventeen other variables, depending upon
the situation [5]. So we have a relatively simple way to estimate the costs of software, with “delivered
source instructions” remaining as the key independent variable. Extensions of this notion allow us to
further calculate the recommended development time, the productivity and the required manpower
levels.
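A minimal sketch of this calculation is given below. The default coefficient values are placeholders chosen only so that the example runs; calibrated values of A and B, and the associated cost drivers, should be taken from the COCOMO references [3, 4].

    def cocomo_effort(kdsi, a=2.94, b=1.10):
        """Person-months = A * (KDSI)**B.

        kdsi : thousands of delivered source instructions
        a, b : effort multiplier and scale factor; the defaults here are
               illustrative placeholders, not calibrated COCOMO values.
        """
        return a * kdsi ** b

    for size in (10, 50, 200):   # assumed project sizes in KDSI
        print(f"{size:>4} KDSI -> about {cocomo_effort(size):,.0f} person-months")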
14.3 PRICE
PRICE is a series of cost models that originated with the company RCA, and it survives as an
important independent source of cost information, based upon real world data and its analysis.
There are several PRICE models as cited below:
• PRICE H—focused upon the cost of hardware
• PRICE HL—expands to the hardware life cycle, including cost-effectiveness and system
availability
• PRICE M—oriented to the costs of electronic systems.
PRICE, as well as many other formal sources of parametric life cycle cost information, provides the
systems engineer with several alternatives in terms of data on real systems that would otherwise not
be readily accessible.
14.4 NASA AND COST MANAGEMENT
NASA has been very active over the years in dealing with cost estimation and management. Some
years ago, they started several new initiatives in order to “strengthen cost estimating and project
cost management”[6]. NASA has also produced a Cost Estimating Handbook [7], an excellent idea
that helps to define their approach to this important matter. Some of the main topics addressed by
NASA include the following:
Cost Estimating.
• The role of cost estimating.
• Cost estimating at NASA.
• The cost estimating process.
Cost Risk.
• Cost risk.
• Cost risk approaches.
• Model Summaries/Overview.
• Risk Handbook Summaries.
Economic and Supporting Analyses.
• Economic analysis.
• Other Cost Estimating considerations.
We note the scope of these topics as well as the expansion into economics. It is interesting also that
NASA highlighted “cost risk” in their overall deliberations. Both economics and risk are appropriately
part of the cost estimation and management issue.
REFERENCES
[1] Eisner, H., Computer-Aided Systems Engineering, Prentice-Hall, 1988. 52, 53
[2] Blanchard, B., and W. Fabrycky, Systems Engineering and Analysis, 5th Edition, Prentice-Hall,
2011. 52
[3] Boehm, B., Software Engineering Economics, Prentice-Hall, 1981. 54
[4] Boehm, B. et al, Software Cost Estimation with COCOMO II, Prentice-Hall, 2000. 54
[5] usc.sunset.edu 54
[6] Gregory, F., NASA Deputy Administrator, “Implementation of Cost Management References
Initiatives,” December 23, 2004, NASA, Washington, DC. 54
[7] NASA Cost Estimating Handbook, 2008, NASA Headquarters, Washington, DC. 54
CHAPTER 15
Modeling and Simulation
One of the principal reasons we use modeling and simulation (M & S) is to be able to predict the
performance of systems before we actually build these systems. The results, therefore, can strongly
influence and contribute to our design of these systems. Or at least, we hope so. To the extent that
this is not the case, M & S can still play a valuable role for systems in the process of being developed
and tested, and for systems that are being integrated.
The “modeling” part of M & S can range far and wide since there are numerous interpretations
of what a model is and what it is not. This author sees models as a way to represent and explore, in
one way or another, the behavior of the system that is to be built. So we produce a model, or a replica,
of the system, in one or more domains of the system. We then subject the model to trade-offs and
testing, believing that we are learning something about the real system. If we are good “modelers,”
then we are learning. Bad modelers, unfortunately, learn very little about the real system. Whether
good or bad, we need to keep in mind one of the heuristics suggested by Eberhardt Rechtin, one
of our best systems engineers [1]. He appropriately reminded us that “a model is not reality.” This
means we need to know what our model can do for us, and what it may not be able to do, i.e., its
limitations.
15.1 FOUR ILLUSTRATIVE MODELS
We will use four specific examples in order to illustrate the notion of a model. The first will be a
system “computational model,” as discussed below.
We have stressed the importance of being able to compute (calculate) the technical perfor-
mance measures (TPMs) as well as the key performance parameters (KPPs) of a system.To assist us in
this endeavor, we can construct what this author has called a “Parameter Dependency Diagram”[2].
Such a diagram represents a computational model of the system in question. One begins by identify-
ing the key parameters and measures that are of interest to compute. A long list is made of these and
prioritized. The highest priority parameters and measures are examined, and for each, the question is
raised: what does this parameter depend upon? One answers the question by deciding, for example,
that parameter A depends upon parameters X, Y, and Z (see Figure 15.1).
The same question is then asked about parameters X, Y, and Z, with the answers shown in
diagrammatic form, following Figure 15.1. This process is continued until the diagram displays the
dependencies between all important parameters and measures. The dependencies, ultimately, are
converted into quantitative relationships (i.e., formulas for computation).
[Figure 15.1 shows parameter A with arrows from parameters X, Y, and Z, indicating that A depends upon X, Y, and Z.]
Figure 15.1: A Step in Constructing a Parameter Dependency Diagram.
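A parameter dependency diagram of this kind maps naturally onto a small dependency graph in software, with a formula attached to each computed parameter. The sketch below is an assumed example (the leaf values and formulas are invented), not a model drawn from the text.

    import math

    # Leaf parameters with assumed numeric values.
    values = {"X": 4.0, "Y": 3.0, "Z": 2.0}

    # Derived parameters: name -> (dependencies, formula over those dependencies).
    derived = {
        "A": (["X", "Y", "Z"], lambda x, y, z: math.sqrt(x**2 + y**2 + z**2)),
        "B": (["A", "X"],      lambda a, x: a / x),
    }

    def evaluate(name):
        """Recursively evaluate a parameter from its dependencies."""
        if name in values:
            return values[name]
        deps, formula = derived[name]
        return formula(*(evaluate(d) for d in deps))

    print("A =", round(evaluate("A"), 3))   # depends directly upon X, Y and Z
    print("B =", round(evaluate("B"), 3))   # depends upon A, hence indirectly upon X, Y, Z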
For the information flow model, we focus initially upon the functional decomposition of the
system (see Chapter 9). We then ask the question: what is the nature of all information flows between
all functions and sub-functions? The results are then entered into a table that has these functions
and sub-functions as both its rows and columns. Several information flow descriptors can be used
as the cell entries (e.g., type, rate of flow, security level, etc.)
Yet another model is itself a descriptor of the systems engineering process. This model is called
the systems engineering “Vee” model [3]. It is formulated as a “V,” such that the downward stroke
of the “V” has the early systems engineering activities that move from requirements to design speci-
fications, and the upward stroke completes design engineering and moves through the construction
and integration of system configuration items.
A fourth model is called the “Spiral Model,” and it too depicts major parts of the systems
engineering process but in the form of a spiral [3, p. 18]. By means of its spiral structure, it demon-
strates the fact that we tend to iterate parts of the process rather than proceed in a totally linear
manner.
Numerous other “models” have been developed with specialized purposes that help us to build
better systems. Often, these have been based upon some type of diagram, for example: functional
flows and data descriptions, flow charts, state transition diagrams, IDEF charts, GERT charts, and
many others.
Another touchstone for modeling, especially relevant to the main subject of this book, is the
work on model-based systems engineering (see also Chapter 4). This is based upon the notion that
models form an excellent foundation for carrying out systems engineering. Two prominent aspects of
this approach include the UML (unified modeling language) and SysML (the systems modeling
language) [2].
15.2 SIMULATION
This descriptor is used when we attempt to actually simulate the behavior and operation of the system
in question. A simulation is a model, but a model is not necessarily a simulation. There are generally
two approaches in the simulation world. We can build a new simulation, writing all the software
“from scratch.” Or, we can use an existing software “package” that is provided by a commercial
vendor. As an example of the first approach, we might build our own simulation if we wanted to
capture the possible behavior of a new Metro system that we wanted to build for a city with such
a need. In the second case, numerous packages are available to us, depending upon the problem we
are trying to address. A book by this author [2] provided a list of such software, along with a set of
potential suppliers. The latter approach works very well when the user determines that the software
in question applies (for sure) to the problem at hand. The short Table 15.1 below illustrates the type
of software that is available, along with the problem domain and the name of the vendor.
Table 15.1: Selected Modeling or Simulation Packages Available From Suppliers.
Software Name    Problem Domain              Vendor
GPSS/H           General Purpose Systems     Wolverine
LANNET II.5      Local Area Network          CACI
PROMOD           General Systems             Promod
SIMFACTORY       Factory Operations          CACI
SLAM             General Systems             Pritsker
Mathematica 7    Mathematical Analyses       Wolfram
GAMS             Mathematical Programming    GAMS Dev. Corp.
SAS              Statistics & Optimization   SAS Institute
CORE             System Requirements         Vitech
15.3 DOMAINS OF INTEREST
Table 15.1 provides only a small sample of the domains that have been addressed by Modeling and
Simulation. Table 15.2 lists another dozen such domains in which a systems engineer can obtain
assistance in evaluating the performance of a variety of systems of interest.
Table 15.2: A Dozen Domains Which Have M & S Support Packages.
• Operations Research
• Control Systems Analysis
• Decision Analysis
• Queuing Theory
• Linear and Non-Linear Systems Analysis
• Reliability, Availability and Maintainability
• Probability and Statistics
• War Gaming
• Forecasting
• Risk Assessments
• System Dynamics
• Process Reengineering
15.4 MODELING AND SIMULATION IN THE DOD
The DoD has embraced M & S, finding it extremely useful with respect to building complex systems.
There is a Modeling and Simulation Information Analysis Center (MSIAC) that provides support
to developers and users. Their overall mission is to “access, acquire, collect, analyze, synthesize,
generate, and disseminate scientific, technical and operational support information in the area” [4].
The MSIAC is oriented to providing the above type of support to its customers.
M & S is considered to be a key enabler such that data and related services are accessible
across the various parts of the agency. The management of M & S is especially important in helping
the DoD achieve its goals by [5]:
• Leading investments in M & S.
• Helping with collaborative R & D.
• Maximizing commonality, reuse, effectiveness and efficiency.
The DoD also pays special attention to M & S verification, validation and accreditation (VV&A).
Many of our successes in M & S are traceable to the DoD. At the same time, it can be said that we
have a vibrant and productive commercial M & S industry that is very important to the world of
systems engineering.
REFERENCES
[1] Rechtin, E., Systems Architecting, Prentice-Hall, 1991. 56
[2] Eisner, H., Essentials of Project and Systems Engineering Management, 3rd Edition, John
Wiley, 2008. 56, 57, 58
[3] Buede, D., The Engineering Design of Systems, John Wiley, 2000, page 10. 57
[4] Modeling and Simulation Information Analysis Center, www.dod-msiac.org 58
[5] DoDD 5000.59, “DoD Modeling and Simulation (M & S) Management,” August 6, 2007,
Department of Defense, Washington, DC. 58
CHAPTER 16
Other Analysis Relationships
In this chapter, we explore a limited number of analysis relationships that are firmly “tools of the
trade” for the systems engineer. These relationships tend to be focused on calculations of the system
technical performance measures rather than on evaluating alternative systems architectures, as in the
analysis of Chapter 12. They also tend to relate to some of the modeling and simulation methods
examined in Chapter 15.
16.1 SYSTEM ERRORS
All systems are capable of making errors, and we pay a great deal of attention to understanding and
controlling system errors. There are two classic types of errors:
1. The system fails to do what it is designed (intended) to do.
2. The system does one or more things that it was not supposed to do.
We look at the former as a failure in the performance or effectiveness domain. The latter is usually
called a “false alarm.” A simple example is the automobile air bag system. It clearly can fail to “go off ”
when a bad accident is occurring (error 1 above). Or it can “go off ” when no accident has occurred
(error 2 above). Both are very bad news, with different consequences.
For many systems, we characterize, calculate and control the errors. A failure to do so may
well make the system inoperative or even dangerous to use. In doing so, we create a “model” of the
system errors, which is basically a roadmap for how to calculate and control them. We usually characterize
errors as the standard deviation (σ) of a probability distribution. Its square is the error expressed as a
variance (σ²). If the error model consists of a total error as the sum of a set of independent random
variables, then the defining total error variance is calculated:
σ²(T) = σ²(x) + σ²(y) + σ²(z)
where the overall error variable (T) is the sum of three other random variables, x, y, and z. Expressed
another way, the total error variance is the sum of the variances of the three subordinate and
independent error variables.
If the error variables are not independent but correlated in some way or another, the above
relationship does not hold. Instead, one needs to account for covariance terms in a formula that can
be found in most books on probability and statistics. Dealing with covariance terms is a well known
part of a well-developed theory.
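For three correlated error variables the relationship becomes Var(T) = Var(x) + Var(y) + Var(z) + 2Cov(x, y) + 2Cov(x, z) + 2Cov(y, z). The small sketch below, with assumed standard deviations and correlation coefficients, shows how a positive correlation between two sources inflates the total error relative to the independent case.

    import math

    # Standard deviations of three error sources (assumed values).
    sigma = [0.3, 0.4, 0.2]

    # Correlation coefficients between pairs (assumed); only the first pair is correlated.
    rho = {(0, 1): 0.6, (0, 2): 0.0, (1, 2): 0.0}

    # Independent case: total variance is the sum of the variances.
    var_independent = sum(s ** 2 for s in sigma)

    # Correlated case: add twice the covariance of every distinct pair.
    var_correlated = var_independent
    for (i, j), r in rho.items():
        var_correlated += 2.0 * r * sigma[i] * sigma[j]

    print(f"Total sigma, independent sources: {math.sqrt(var_independent):.3f}")
    print(f"Total sigma, correlated sources:  {math.sqrt(var_correlated):.3f}")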
16.2 ERRORS AS REQUIREMENTS OR SPECIFICATIONS
Two cases are cited here whereby errors are defined as requirements or specifications. The first is
the case of a radar system such that the detection probability is designated as 0.98, and the false
alarm probability is 10^-6. The first limits one error to (1 − 0.98), or 2 percent; thus, we fail to detect
a target when a target is present no more than 2 percent of the time. For the second error, the false
alarm rate is limited to 1 time in a million.
The second case is that of an on line transaction processor (OLTP). Here we can focus upon
the response time and specify that the response time be less than 15 seconds, for 90 percent of
the transactions. We can expand that, if we wish, to also say that the response time be less than
25 seconds, for 95 percent of the transactions. This also means that the response time may be greater
than 25 seconds, for only five percent of the time (one out of 20 transactions, on the average). So
there are many ways to express error bounds, and the systems engineering team must be prepared to
build a system that satisfies these types of statements.
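Percentile-style statements of this kind are straightforward to check against measured or simulated data, as in the sketch below; the sample of transaction times is invented for illustration.

    # Check percentile-style response time requirements against a sample of
    # transaction times (sample values are invented for illustration).
    response_times = [3.1, 4.8, 5.0, 6.2, 7.4, 8.1, 9.0, 10.5, 11.2, 12.0,
                      12.6, 13.1, 13.9, 14.2, 14.8, 16.3, 18.7, 21.4, 24.0, 27.5]

    def fraction_within(samples, limit):
        """Fraction of transactions completed within 'limit' seconds."""
        return sum(1 for t in samples if t <= limit) / len(samples)

    checks = [("15 seconds or less, 90 percent of the time", 15.0, 0.90),
              ("25 seconds or less, 95 percent of the time", 25.0, 0.95)]

    for label, limit, required in checks:
        observed = fraction_within(response_times, limit)
        status = "PASS" if observed >= required else "FAIL"
        print(f"{label}: observed {observed:.0%} -> {status}")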
16.3 RELIABILITY
The reliability of a system is defined as the probability that a system will operate without failure to
a specified time “t.” It is given by the formula:
R = exp(−λt) .
Where R is the reliability, “t” is the time in question, and λ is the system failure rate. The failure rate
is also the reciprocal of the system mean-time-between-failure (MTBF).
With the above straightforward equation, it is a simple matter to calculate the likelihood that
a system will operate successfully to its MTBF. The operation is shown below:
R = exp(−λt) = exp(−t/MTBF) = exp(−MTBF/MTBF) = exp(−1) = 0.368 .
It immediately follows that the probability that a system will operate without failure to twice its
MTBF is exp(−2) = 0.135.
The above formula holds when the system in question has independent subsystems and when
the failure rate is approximately constant (the system has “no memory”). Systems with demonstrable
wear-out violate this constraint, and one cannot use the exponential distribution in such a case. The
recommended approach in such a situation is to use the Weibull distribution.This is a two-parameter
distribution which has a density function of f(t) = αλt^(α−1) exp(−λt^α). When α = 1, this reduces
to the exponential case.
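Both forms are one-line functions to evaluate, as in the sketch below; the MTBF and the Weibull parameters are assumed, illustrative values.

    import math

    def reliability_exponential(t, mtbf):
        """Probability of operating without failure to time t, constant failure rate."""
        return math.exp(-t / mtbf)

    def reliability_weibull(t, lam, alpha):
        """R(t) = exp(-lam * t**alpha); alpha = 1 recovers the exponential case."""
        return math.exp(-lam * t ** alpha)

    mtbf = 500.0   # hours (assumed)
    print(f"R at MTBF:     {reliability_exponential(mtbf, mtbf):.3f}")        # about 0.368
    print(f"R at 2 x MTBF: {reliability_exponential(2 * mtbf, mtbf):.3f}")    # about 0.135

    # A wear-out case: alpha > 1 makes failure increasingly likely with age.
    lam, alpha = 1.0e-4, 1.5
    print(f"Weibull R at 100 hours: {reliability_weibull(100.0, lam, alpha):.3f}")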
We are able to improve or increase reliability by placing systems, or components of systems, in
a parallel redundancy configuration. This basically means that both of the components need to have
failed in order to have the overall system fail. For example, if you have a small business running out
of your home office, and you are concerned about having a computer working (along with a printer)
with high reliability, then it makes sense to buy two computers and two printers. One set is kept in
“parallel,” so that in the case of failure, a back-up is present. We are able to demonstrate how this
works with two simple formulas:
R(series) = R × R = R², where R is the reliability of a component
R(parallel) = 1 − (1 − R)(1 − R) .
If the basic reliability of one component is 0.9, the series and parallel reliabilities are found:
R (series) = (0.9)(0.9) = 0.81
R (parallel) = 1 − (1 − .9)(1 − .9) = 1 − (.1)(.1) = 1 − .01 = 0.99 .
The redundant arrangement has thus improved the reliability from 0.9 to 0.99, but it comes with a
price. This is the essence of a trade-off whereby we wish to improve a feature of the system, with a
cost that is acceptable.
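The trade just described is easy to tabulate, as in the short sketch below, which uses the component reliability from the example above.

    def series(r, n=2):
        """Reliability of n identical components that must all work."""
        return r ** n

    def parallel(r, n=2):
        """Reliability of n identical components of which at least one must work."""
        return 1.0 - (1.0 - r) ** n

    r = 0.9   # single-component reliability, from the home-office example
    print(f"Single component:  {r:.2f}")
    print(f"Two in series:     {series(r):.2f}")      # 0.81
    print(f"Two in parallel:   {parallel(r):.2f}")    # 0.99
    print(f"Three in parallel: {parallel(r, 3):.3f}")  # more redundancy, more cost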
16.4 AVAILABILITY
The Availability of a system is defined as the probability that a system will operate successfully when
called upon to do so. It is given by the formula:
A = MTBF/(MTBF + MDT) .
Where A is the availability, MTBF is the mean-time-between failure, and MDT is the system mean
down time. If, for example, the MTBF is 500 hours and the MDT is 4 hours, the availability is
calculated as:
A = 500/(500 + 4) = 500/504 = 0.992 .
Thus, we see that being able to detect a failure and then repair it quickly (low MDT) will improve
the system’s availability, which is what our intuition tells us about such a matter.
16.5 “SUBJECTIVE” ANALYSIS AND MEASUREMENT
In the world of building real systems, many “subjective” methods are used that lead to progress.
For example, the various forms of capability maturity models are largely based upon “subjective”
assessments that define levels of capability in several domains (e.g., systems engineering, software
engineering, etc.). Two additional areas include evaluating the extent to which systems are interop-
erable and integrable [1]. Progress is being made in terms of measuring and evaluating important
features such as system complexity, resilience and sustainability. Often, the early steps in such fields
start with a type of subjective analysis. This is also, at times, characterized as “soft” science, as
contrasted with “hard” science or their equivalent engineering forms.
16.6 OTHER TOPICS OF INTEREST
Under the overall topic of analysis, there is a large body of knowledge known as linear systems
theory. We know a great deal about how to analyze linear systems, and usually we use mean-square
error (variance) as a key parameter. However, when the system exhibits non-linear behavior, we have
greater difficulty with both the underlying theory as well as ease of calculation. In many cases, we
go to the world of modeling and simulation to find a solution, as discussed in an earlier chapter.
Another area of no small interest is the topic of System Dynamics, formulated by Jay For-
rester [2], and moved forward by many others. Subtopics include such features as stocks, flows, causal
loops and the necessary equations to tie all these together, supported by a diagrammatic representa-
tion. A related and well-known simulation package called DYNAMO has been available for users
from a variety of fields.
Finally, we are able to see analysis topics of interest as represented in one of our earliest treatises
on systems engineering [3]. These include:
• Design of experiments.
• System logic.
• Queuing theory.
• Game theory.
• Linear programming.
• Cybernetics.
• Information Theory.
• Servomechanism theory.
• Human engineering.
• Group dynamics.
Clearly, there are numerous dimensions to the world of analysis. And although the systems
engineer cannot be an expert in even a large proportion of these, it is necessary to know when and
where each of the analysis methods can be used to solve a systems problem.
REFERENCES
[1] Eisner, H., “Toward Measures of Interoperability and Integrability for System Architectures,”
2008 INFORMS Telecom Conference, University of Maryland, College Park, MD. 62
[2] Forrester, J., System Dynamics, Pegasus Communications, 1968. 63
[3] Goode, H. and R. Machol, System Engineering, McGraw-Hill, 1957. 63
C H A P T E R 17
The Role of Technology
Most large-scale systems today lean heavily upon technology for their performance, from automobiles
to air traffic control to the Internet to the telephone. Technology has made it possible to achieve our
high levels of productivity, as well as our overall quality of life. And technology is absolutely crucial
to our superiority in the defense and security of our nation.
In terms of building new systems, we look to technology to provide higher levels of perfor-
mance. These new levels keep us competitive. However, as we look at advanced technology we ask
at least these three questions:
• Is this technology necessary to meet the performance requirements of the system?
• Does this technology increase the level of risk in terms of cost and schedule constraints?
• Does this technology come to us at increasing, decreasing or the same costs, as compared to
other alternatives?
Thus, we see potential trade-offs between better technology, risk and the overall costs of that im-
proved technology. Project managers and chief engineers need to grapple with that problem every
time a higher performing technology is under serious consideration.
17.1 OFFICE OF TECHNOLOGY ASSESSMENT (OTA)
The importance of technology to this country is underscored by the fact that an Office of Technology
Assessment was operative over the time period 1972 to 1995. The purpose of that Office was to
look at all aspects of technology on behalf of Congress. Public Law 92-484 established the Office,
and it was extremely valuable in examining the advantages and disadvantages of various types of
technology. Many of the publications of the OTA have been made available through Princeton
University, and a form of technology assessment took root in Europe after the OTA closure in the
United States. Today, technology remains no less important, and it may turn out that the old OTA
is resurrected down the road, in one form or another.
17.2 THE DEPARTMENT OF DEFENSE (DOD) AND
TECHNOLOGY
We are easily reminded of the role of technology in terms of defense by looking at the latest DoD
system acquisition guidance. It is recalled that in Chapter 5, we briefly examined the system life
cycle and a DoD Instruction [1] that pertained to the phases of that life cycle. In particular, we note
that a “Technology Development Phase” (TDP) was inserted between the initial Materiel Solution
Analysis Phase and the Engineering and Manufacturing Development (EMD) Phase. This expresses
the important notion that technology is to be analyzed in great detail, and it needs to be confirmed
that the technology selected for large systems will be appropriate and available. The stated purpose
of the TDP is:
• To reduce technology risk, as well as determine and mature the appropriate set of technologies
to be integrated into a full system.
• To have a continuous technology discovery and development process that demonstrates close
collaboration between the S & T (science and technology) community, the user and the system
developer.
• To assure iteration to assess the viability of technologies while also refining user requirements.
The Instruction continues to describe the Phase in detail and cites the exit focus:
• Exit when an “affordable program or increment of militarily useful capability has been iden-
tified.”
This strengthens the accepted notion of providing a system in an evolutionary manner. More about
that later in the book.
17.3 CRITICISMS AND TECHNOLOGY
The Government Accountability Office (GAO), for several years, has been evaluating the practices of
various executive agencies, especially in regard to systems that they consider not to be successful. For
example, one report looked at 18 large-scale NASA projects with a total life cycle cost over $50 billion
and saw “significant cost and/or schedule growth”[2]. Problems were cited especially with respect to
developing new technologies or retrofitting old technologies, as well as understanding the risks tied
to these technologies. Another GAO report looked more specifically at “technology transition” processes on
the part of the DoD [2], concluding that they routinely accepted “high levels of technology risk,”
often in the form of including technologies before they were ready to be transitioned. Shortcomings
in the technology domain are considered to lead directly to certain “poor cost and schedule” outcomes.
In many of their assessments, the GAO examined programs from the points of view of
(a) technology maturity, (b) design, and (c) production. A relatively large number of problems in
these programs were attributed to lack of technology maturity. That leads us to looking more closely
at this topic and the closely related subject of “technology readiness.”
17.4 TECHNOLOGY READINESS LEVELS (TRLS)
Both the DoD and NASA (and others) have accepted the notion of “Technology Readiness Level”
as a way of specifically measuring the extent to which various technologies can be brought into the
systems they are building. Table 17.1 below briefly describes nine such levels [3].
Table 17.1: Short Descriptions of Nine Technology Readiness Levels [3].
Technology Readiness Level / Brief Description
1. Basic principles observed and reported.
   Scientific research begins to be translated into applied research and development.
2. Technology concept and/or application formulated.
   Invention begins. Application is speculative. Components not yet integrated.
3. Analytical and experimental critical function and/or characteristic proof of concept.
   Active research and development is initiated. Includes analytic and laboratory studies.
4. Component and/or breadboard validation in laboratory environment.
   Basic technological components are integrated. Low fidelity compared to eventual system.
5. Component and/or breadboard validation in relevant environment.
   Fidelity of breadboard technology increases significantly. Basic components integrated.
6. System/subsystem model or prototype demonstration in a relevant environment.
   Model tested in relevant environment.
7. System prototype demonstration in an operational environment.
   Prototype near or at planned operational system. Major step up from TRL 6.
8. Actual system completed and “flight qualified” through test and demonstration.
   Technology proven to work in its final form and under expected conditions. Represents end of true system development.
9. Actual system “flight proven” through successful mission operations.
   Actual application of technology in its final form and under mission conditions.
Having this set of definitions definitely helps in terms of understanding where a program
might stand in terms of incorporating various kinds of technology into real systems.
17.5 THE TECHNOLOGY READINESS ASSESSMENT (TRA)
DESKBOOK
The above Technology Readiness Levels can be used as a basis for a formal Technology Readiness
Assessment (TRA). Quoting the intention [4]:
• “A TRA is a formal, systematic, metrics-based process and accompanying report that assesses
the maturity of technologies called Critical Technology Elements (CTEs) to be used in sys-
tems. CTEs can be hardware or software.”
The way it works is that an independent review team (IRT) with the appropriate subject matter
expertise uses the TRLs as a metric to assess the maturity of the CTEs. All of this has the purpose
of making sure that the technologies selected for a system are sufficiently mature and that they are
ready to be part of that system, with minimum risk. A detailed explanation of how this works is
provided in the cited references dealing with technology readiness and assessment.
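The Deskbook itself prescribes the formal process; the short sketch below is not drawn from it, and only illustrates, with invented CTE names and an assumed threshold, how an independent review team’s TRL ratings might be compared against a required maturity level:

    # Hypothetical TRL ratings assigned by an independent review team (IRT)
    # to each Critical Technology Element (CTE); names and values are invented.
    cte_trl = {
        "phased-array antenna": 6,
        "onboard data-fusion software": 5,
        "high-density battery": 7,
    }

    REQUIRED_TRL = 6  # assumed maturity threshold for this milestone

    for cte, trl in cte_trl.items():
        verdict = "sufficiently mature" if trl >= REQUIRED_TRL else "NOT ready (risk item)"
        print(f"{cte}: TRL {trl} -> {verdict}")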
17.6 A CLOSING LIST
As we close this chapter, we take note of some every-day high-technology systems and components
that have come into the lives of most of us, and thereby influence us in a positive way as we go about
our chores at home and at work:
• The hybrid automobile.
• The GPS receivers and position trackers.
• The Internet.
• New arrays of phones.
• Very low price computers.
• Massive storage devices, like flash drives.
• Fiber-optic connectivity.
• Pods, Pads, Nooks and Kindles.
Try making a short list of those technology areas that tend to support your own life, both at
home and at work. New ones seem to be appearing almost every day.
REFERENCES
[1] Department of Defense (DoD) Instruction 5000.02, “Operation of the Defense Acquisition
System,” December 8, 2008, USD(AT&L). 65
[2] www.gao.gov 65
[3] DoD 5000.2-R, “Mandatory Procedures for Major Defense Acquisition Programs (MDAPS)
and Major Automated Information Systems (MAIS) Acquisition Programs,” April 5, 2002,
Department of Defense. 66
[4] “Technology Readiness Assessment (TRA)” Deskbook, Department of Defense (DoD), July
2009, Prepared by the Director, Research Directorate (DRD), Office of the Director, Defense
Research and Engineering (DDR&E). 66
C H A P T E R 18
Risk Management
The previous chapter briefly explored the role of technology in our systems, as well as the notion
that when technology is used in a system and that technology is not yet mature or ready, the risk to
all is increased. This “technology risk” leads to one or more of what we might call the four generic
risks in a system:
• Performance Risk.
• Cost Risk.
• Schedule Risk.
• Societal Risk.
In this chapter, we examine various approaches to limit these risks, otherwise known as risk man-
agement (RM). We also note that as part of the process, we almost always try to measure risk, one
way or another.
18.1 BASIC RISK PERSPECTIVE
A fundamental perspective regarding risk is that it consists of two major components:
• The likelihood (or probability) that an exceedingly bad event will occur, and
• The consequences of the occurrence of that event.
We have several notable examples to cite here but not examine in detail. The first is the BP oil
spill in the Gulf, which was very upsetting to many people and businesses in that area as well as
other impacted locales. It took a while to “cap” the leaking oil, with apprehension and anger growing
almost every day. It was a low probability event, with quite high consequences. Another well-known
set of events was that of losing manned spacecraft in both the Columbia and Challenger missions.
This was heart-wrenching as the nation mourned the loss of these very brave astronauts. In terms
of nuclear facilities, we have the Chernobyl incident in Ukraine as well as the Three Mile Island case
in the United States. We very seriously guard against nuclear problems of this type, yet they
occurred. Finally, you may remember that we lost a spacecraft on a mission to Mars (the Mars Climate
Orbiter) due to an inconsistency in the use of the metric vs. the English system of measurement.
We saw it happen, and we assume that measures were taken that will make such an event one of
very low likelihood for any and all future missions.
Looking at the above, and factoring in several experiences of this author, we pause here to
suggest four actions that are likely to improve any given “risk” situation:
1. Look in detail for cases in which a single point of failure will almost immediately lead to
mission failure.
2. Fix these cases (mostly through design changes) before the design is “frozen.”
3. Review the risk situation, in a systematic manner, every month.
4. Carry out the above with the very best and proactive systems/risk engineers.
18.2 RISK MATRIX
A convenient way to examine the probability-consequence approach is to construct a “risk matrix”
such as that shown below in Table 18.1 [1].
Table 18.1: Risk Matrix: Rows are Levels of Likelihood;
Columns are Levels of Consequence; Higher Numbers Are
Greater Likelihoods and Consequences.
        C1      C2      C3      C4      C5
L5    L5-C1   L5-C2     8       9      10
L4    L4-C1   L4-C2   L4-C3     8       9
L3    L3-C1   L3-C2   L3-C3   L3-C4     8
L2    L2-C1   L2-C2   L2-C3   L2-C4   L2-C5
L1    L1-C1   L1-C2   L1-C3   L1-C4   L1-C5
In the risk matrix, we see five levels of likelihood (rows) mapped against five levels of conse-
quences (columns). The six “cells” in the top right corner represent areas of greatest concern since
both the probabilities and the consequences are high. As shown, numbers can be assigned to these
cells as the sums of the likelihood and the consequence levels. For these six situations, therefore,
it is of utmost importance to dig deeply into ways to decrease both the probabilities as well as the
consequences, if possible. Often, the emphasis is to re-design the system so that the probabilities are
decreased and multiple failures are necessary before the system stops working.
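A small sketch of the scoring convention just described, where each cell’s number is the sum of its likelihood and consequence levels and cells at or above an assumed threshold of 8 (the six top-right cells of Table 18.1) are flagged:

    def risk_score(likelihood, consequence):
        """Both inputs are rated 1 (lowest) through 5 (highest)."""
        return likelihood + consequence

    THRESHOLD = 8  # the six top-right cells of Table 18.1 score 8, 9, or 10

    for likelihood in range(5, 0, -1):          # rows, highest likelihood first
        cells = []
        for consequence in range(1, 6):         # columns, left to right
            score = risk_score(likelihood, consequence)
            flag = "*" if score >= THRESHOLD else " "
            cells.append(f"{score:2d}{flag}")
        print(f"L{likelihood}: " + "  ".join(cells))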
18.3 NASA AND RISK MANAGEMENT
NASA has reacted well to the Challenger, Columbia and other risk events and has done extremely
well in terms of its programs and perspectives with regard to risk management. They continue to
define and refine risk management procedures [2] adopting what appears to be a continuous im-
provement approach. This approach leans heavily upon Risk-Informed Decision Making (RIDM)
and Continuous Risk Management (CRM). The former has three main parts, including (1) identi-
fication of alternatives, (2) risk analysis of alternatives, and (3) risk-informing alternative selection.
The latter has six elements:
1. Identify: what are the main contributors to risk?
2. Analyze: estimate the likelihoods and consequences.
3. Plan: the risk disposition, including mitigate plans.
4. Track: monitor progress in all of the above.
5. Control: verify effectiveness of mitigation plans and actions.
6. Communicate and Document: throughout the process.
NASA has recognized the role that knowledge management (KM) plays in the domain of risk man-
agement. In doing so, they have emphasized activities such as: (a) continuous risk management, (b) risk
management case studies, (c) knowledge capture and transfer, (d) wiki-enabled teams, (e) knowledge-
based risks, and (f ) decision support [3].
NASA has also championed a field known as “Quantitative Risk Assessment” (QRA) that
has carefully defined methods and procedures for numerical analyses of risks. This has involved,
among others, some of the notions and schema listed below:
1. The precise formulation of physical and functional hierarchies.
2. Separation of mission phases.
3. Event sequence diagramming.
4. Event and fault trees.
5. Common-cause failure modeling.
6. Construction of binary decision diagrams.
7. Probability modeling and analysis.
NASA’s approach is therefore both comprehensive and quantitative. It also takes account of the need
to convert qualitative and quantitative assessments to real-world decision making.
18.4 ADDITIONAL RISK MANAGEMENT PERSPECTIVES
Earlier discussions have indicated that the GAO, in some respects, has taken the DoD to task for
using “immature” technology and thus placed some programs at risk. Despite that, however, this
author believes that the DoD has an excellent perspective on the matter of risk and the management
thereof. Here, for example, are some key features of the DoD’s risk management process [4]:
• Continuous process over system life cycle.
• Organized methodology.
• Measuring unknowns and implementing risk mitigations.
• Continuous monitoring and assessments.
They have defined and implemented a “risk management process model” consisting of the following
important five steps:
1. Risk identification.
2. Risk analysis.
3. Risk mitigation planning.
4. Risk mitigation plan implementation.
5. Risk tracking.
Finally, the DoD has looked in considerable detail at what works and what does not in terms of an
effective risk management approach. Listed below are some of their suggestions [4]:
• A process integral to a sound acquisition approach.
• Complete risk analyses and follow-up mitigations.
• Continuous and iterative risk assessments.
• Well-defined thresholds and success criteria.
• Technical reviews as early as possible in the life cycle.
Risk management is a topic that we understand quite well, but one that we do not always implement
in accordance with what our plans and reviews are telling us. Too many of our failures in this domain,
unfortunately, can be attributed to human error. How do we reduce human error? The answer points
to two areas: better training, and more intense monitoring.
REFERENCES
[1] NASA Systems Engineering Handbook, NASA/SP-2007–6105 Rev 1, December 2007,
NASA Headquarters, Washington, DC. 69
[2] NASA Procedural Requirements, NPR 8000.4A, December 16, 2008, “Agency Risk Manage-
ment Procedural Requirements,” Office of Safety and Mission Assurance, NASA, Washington,
DC. 69
[3] “Integrated Risk and Knowledge Management Systems,” D. Lengyel, ESMD Risk and Knowl-
edge Management Officer, Exploration Systems Mission Directorate (ESMD), NASA Head-
quarters, Washington, DC. 70
[4] “Risk Management Guide for DoD Acquisition,” 6th Edition, August 2006, OUSD(AT&L),
The Department of Defense, Washington, DC. 70, 71
C H A P T E R 19
Testing, Verification, and
Validation
This chapter considers testing as well as verification and validation (V & V), three topics that are
critical to systems engineering and building successful systems. Testing will be dealt with in two
contexts that can be expressed as testing in the small and testing in the large. For the former, we
are concerned with integrating small units (components, configuration items) and then testing them
to see if they work after the integration. In that sense, we see many “integration and test” (I & T)
instances that are a natural part of building a system, piece by piece. Over the years, many have
adopted the view that the best way to handle this type of testing is to accept the notion: “build a
little, test a little.” This incremental approach is a good one that tends to lead to more than acceptable
results. However, one possible problem shows itself in this regard. In putting together the master
project schedule, not enough time is allocated to this multi-cycled integration and test set of activities.
So what often happens is there are failures that need to be fixed (the test fails), and these are not
really accounted for. It is rare to have many (I & T) cycles during which no failures are experienced.
This is easily remedied by using experience factors to leave more time for cases in which the tests
fail.
Testing in the large, for this text, will mean testing within the context of Test and Evaluation
(T & E). Some of the features of this very important topic of T & E will be explored below.
19.1 TEST AND EVALUATION (T & E)
Two critical aspects of T & E are Development T & E (DT&E) and Operational T & E (OT&E).
The overall strategy regarding T & E is formulated relatively early, i.e., during the Concept and
Technology Development Phase. The notion of early planning for T & E has now been widely
accepted, and doing so is an important part of overall project success. OT&E is the “last” key
milestone that needs to be achieved in order to have the overall system accepted. Thus, it is crucial
for both the “customer” as well as the “contractor.” There are special problems associated with OT&E
that may not be obvious. During this “exit” activity, which may last for quite a long time, the overall
system is being tested and evaluated in terms of satisfying operational requirements of the system
“in the field.” This means that one must either place the system “in the field” or try to simulate an
operational environment as best as possible. Further, in no small number of cases, demonstration of
operational performance is extremely difficult. For example, if a weapon system has a kill probability
of “X,” then it is practically impossible to run enough targets during the
testing to statistically make the case. So a procedure short of achieving absolute statistical validity
is needed. This requires considerable creativity, imagination and engineering acumen as well as
management judgment.
T & E has a lot of history in the Department of Defense (DoD). In that sense, one might say
that the DoD is an excellent source of good information about how to conduct T & E successfully.
There are lots of “lessons learned” that have been acknowledged and accepted by this important
Department that takes an integrated view, describing the purpose of T & E as “to provide knowledge
to assist in managing the risks involved in developing, producing, operating and sustaining systems
and capabilities”[1].
We note that this statement of purpose brings us to a connection with risk management, as
discussed in the previous chapter.
A prominent part of carrying out T & E is to produce a Test and Evaluation Master Plan
(TEMP), which: “shall describe planned development, operational, and live-fire testing, including
measures to evaluate the performance of the system during these test periods, an integrated test
schedule, and the resource requirements to accomplish the planned testing”[1].
It is to be noted that another aspect of T & E is to deal with matters of interoperability, a
key area that has given us trouble over many years. Interoperability requires that we understand the
systems with which we need to interoperate, which, in turn, means that our testing needs greater
integration and collaboration. This notion has been called “integration testing,” characterized as the
“collaborative planning and collaborative execution of test phases and events to provide shared data
in support of independent analysis, evaluation and reporting by all stakeholders, particularly the
developmental (contractor and government) and operational test and evaluation communities”[2].
The acting Director of the Developmental Test & Evaluation activity in the DoD has defined key
issues in three principal areas: (1) sharing and access to data, (2) control of test events, and (3) possible
overreaction to interim test results [3]. He also notes that solutions to these types of issues are mostly
cultural, and also solvable.
19.2 VERIFICATION AND VALIDATION (V & V)
Verification and Validation (V & V) has come to be a well-known method for assuring that a
system meets all aspects of its requirements. It has come upon the scene largely in the software arena
where we were (and still are) experiencing many development problems. Overview definitions of V
and V [4]:
Verification is a set of activities confirming that the products of a phase of development satisfy the
requirements from an earlier phase, and
Validation is directed toward confirming that various subsystems, or the overall system, complies
with the overall requirements.
Some have suggested that an appropriate way to look at V & V is to say that validation is
concerned with whether or not one is constructing the right product, whereas verification addresses
the matter of whether or not it is being constructed the right way.
The IEEE has been a great supporter of the need for V & V, especially with respect to
software [4]. An overview of their perspective:
Software verification and validation processes determine whether the development products
of a given activity conform to the requirements of that activity and whether the software satisfies its
intended use as well as established user needs.
When the V & V is carried out by an independent party, it is called IV&V. This is also a
well established practice whose basic purpose is to add a level of assurance and avoid self-serving
or conflict of interest situations. NASA has set up a separate IV&V facility that is focused upon
software [5]. Key activities at that facility include:
• Carrying out IV & V on safety and mission critical software.
• Providing software expertise to other groups at NASA.
• Reducing program and project risk as it relates to software.
• Carrying out research that can enhance the methods used in software assurance and software
IV & V in general.
The Argonne National Laboratory (ANL) has made IV & V a core competency and area of focus,
attempting to assure that the IV & V is flexible, applied at the systems level, tailored to the client’s
needs, and of the correct level of effort, especially when there are high-consequence projects to be
addressed. They define IV & V as a “systems engineering process used for evaluating the integrity
and quality of the process and products of a systems development effort”[6]. Their IV & V efforts
can be characterized as being rigorous, adaptive, integrated and self-improving. As the cornerstones
of their programs, they tend to focus on (a) integrity, (b) surety, and (c) competence, especially with
respect to requirements, development and testing.
A previous discussion (chapter fifteen) has dealt with modeling and simulation (M & S). The
need for V & V in that connection has also been well recognized. That is, we know it is critically
important to verify and validate the methods, assumptions and results of M & S so that we can use
them in an appropriate way. As an example, the Coast Guard has set forth an instruction dealing
with verification, validation and accreditation (VV & A) of models and simulations [7]. The DoD
is also well aware of such a need [8].
This chapter has briefly discussed testing and verification and validation. These are attempts
to assure that we have a quality system that satisfies the agreed-upon requirements. Both are also
connected to ways of managing risk. We have learned, at times rather painfully, that there is little
substitute for putting the real system through its paces in order to confirm that it works, and will
continue to work, over its full mission.
REFERENCES
[1] DoD Instruction 5000.02, “Operation of the Defense Acquisition System,” December 8, 2008,
Department of Defense (DoD), Washington, DC. 74
[2] Defense Acquisition Guidebook, Test and Evaluation (T & E) Chapter, Department of De-
fense (DoD), Washington, DC. 74
[3] DiPetto, C.,“Implementing Integrated Testing,” International Test and Evaluation Association
(ITEA) Journal, September 2009. 74
[4] IEEE 1012–2004, IEEE Standard for Software Verification and Validation, 8 June 2004,
IEEE, Piscataway, New Jersey. 74, 75
[5] NASA IV & V Facility, 100 University Drive, Fairmont, WV 26554, www.nasa.gov/
centers/ivv 75
[6] Argonne National Laboratory (ANL), U. S. Department of Energy Laboratory, managed by
UChicago Argonne, LLC, www.anl.gov 75
[7] Commandant Instruction 5200.40, Verification, Validation and Accreditation (VV&A) of
Models and Simulations, 22 December 2006, U. S. Coast Guard, Department of Homeland
Security, 2100 Second Street, SW, Washington, DC. 75
[8] DoD IV&V for M & S(Modeling and Simulation), DoD Instruction 5000.61, December 9,
2009, U. S. Department of Defense (DOD), Washington, DC. 75
C H A P T E R 20
Integration
As with testing, integration can be thought of in the “small” and in the “large.” For the former
situation, we are simply integrating components, assemblies, units, and the like, and then testing
them to see if the integrated products work. For the latter case, integration is part of “Systems
Integration,” and many of our largest companies present themselves as “Systems Integrators” (SIs).
This is entirely appropriate since they do in fact have the expertise to integrate all aspects of systems
about as well as such an activity can be accomplished. This expertise might also be thought of as a
set of core competencies. These are considered here, after a short definition of systems integration.
20.1 BRIEF DEFINITION OF SYSTEMS INTEGRATION
Systems Integration (SI) is a set of processes whereby subsystems are identified, developed, and
connected such that they inter-operate harmoniously and represent a cost-effective solution to the
problem and needs set forth by a customer.
20.2 SYSTEMS INTEGRATION CORE COMPETENCIES
In broad terms, systems integration requires the skills that are part of both systems engineering
and project management. Looking more closely, the following dozen areas are named here as the
critical core competencies:
1. Dealing constructively with the customer.
2. Requirements engineering.
3. System architecting.
4. Project and program management.
5. Life cycle costing.
6. Technical performance measurement.
7. Risk Management.
8. Test and Evaluation.
9. Interface Analysis and Control.
10. Software engineering.
11. Building highly productive teams.
12. Systems engineering management.
Other areas that may represent the next level of capability are (a) rapid prototyping and
deployment, (b) evolutionary design, and (c) architecting systems of systems.
20.3 THE STOVEPIPE ISSUE
There is considerable pressure in the U.S. for the Systems Integrator (SI) to build “fully integrated”
systems. This is often extended to situations where there are several “stovepipes.” The instruction to
the SI is to maximally integrate such stovepipes, with a goal of 90% integration, or more. In other
words, the more integration, the better.
For this author, this is not an appropriate perspective or instruction. What is the reason? For
some systems/stovepipes, integration is well-advised and feasible. For others, integration is virtually
impossible and therefore not well advised. The desirability of high levels of stovepipe integration
depends strongly upon the specific design features of the stovepipes. Only one general rule should
be applied to this “stovepipe integration issue:”
• Integrate stovepipes such that the resultant system represents the most cost-effective solution
to the customer’s problem.
In other words, maintain the architecting perspective that even in the case where there are existing
stovepipes, we are still seeking an overall system that is most cost-effective. If that overall system
has a minimal level of integration, so be it.
This conclusion has been reached with the aid of several personal experiences as well as
observations of the scene relative to stovepipe integration attempts. In one noteworthy experience,
an integration approach was abandoned after the program missed several key cost and schedule
milestones. It was just too hard to integrate the stovepipes, and stay within budget and schedule
constraints.
20.4 EVOLUTIONARY DEVELOPMENT AND INTEGRATION
Today’s approach to building large-scale systems has been called “evolutionary,” which is entirely
appropriate. The final overall system is successfully constructed through the planned integration
of several “builds,” such that these builds themselves represent a truly functioning capability. This
may be illustrated by a simplified example. Suppose an overall system has been designed to have
some nine functions, one through nine. The system architects ultimately decide upon an overall
architecture consisting of three builds:
• Build One provides functions 2, 5 and 6.
• Build Two provides functions 1, 4 and 7.
• Build Three provides functions 3, 8 and 9.
The overall architecture is considered to be the most cost-effective solution and the three builds
represent well-defined and needed capabilities that will ultimately be built, integrated and delivered.
The customer does not have to wait for deployment of the whole system; early builds are provided
that serve more immediate needs. Thus, evolutionary development and pre-planning for integration
go hand-in-hand. The system is designed such that the builds can be integrated at the appropriate
level of integration. Designing for integration, at whatever level, is generally a whole lot better than
trying to integrate after the fact.
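One simple way to record such a build plan, and to confirm that the nine functions are each assigned to exactly one build, is sketched below (the assignment mirrors the example above):

    # The three builds from the example, each providing a set of system functions.
    builds = {
        "Build One":   {2, 5, 6},
        "Build Two":   {1, 4, 7},
        "Build Three": {3, 8, 9},
    }

    all_functions = set(range(1, 10))
    assigned = [f for functions in builds.values() for f in functions]

    # Every function should appear once and only once across the builds.
    assert sorted(assigned) == sorted(all_functions), "coverage gap or duplicate"
    print("All nine functions are covered, each by exactly one build.")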
20.5 INTEGRATION READINESS
The notions of System Readiness as well as Integration Readiness have been explored by various
researchers in Systems Engineering, paving the way toward a better understanding of readiness,
in general. In moving from “TRL to SRL” (technology readiness level to systems readiness level),
several authors have defined an “Integration Readiness Level” (IRL) whose purpose is to evaluate
the risk of integration [1]. In this investigation, seven integration readiness levels are defined, with
the following key words for the various levels:
Level 7 – verified and validated integration of technologies.
Level 6 – technologies can accept, translate and structure information.
Level 5 – control between technologies.
Level 4 – detail in the quality and assurance of the integration.
Level 3 – compatibility between technologies.
Level 2 – specificity regarding interaction.
Level 1 – interface between technologies identified.
We note that the System Readiness is a function of both the Technology Readiness Level and the
Integration Readiness Level. Research of this type continues to shed light upon ways in which
technologies and integration inter-relate in terms of the readiness of the system in question.
20.6 INTEGRABILITY?
Another approach considers the formulation of a metric for the ways in which parts of a system
may integrate with other parts, or systems with systems [2]. That approach suggests specific ways
to measure what is called “integrability,” using subjective analytic techniques. An example is shown
that develops percent integrability measures for four stovepipes taken two at a time (six pairs). This
author suggests that we will eventually be formulating and using this type of metric to deal more
effectively with the pervasive issue of integrating stovepipes, or any sets of systems and subsystems.
This same paper also sets forth a method for developing interoperability indices that are relevant to
another integration problem, that is, the interoperability of systems and subsystems.
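The cited paper develops its own specific technique, which is not reproduced here; the sketch below only illustrates the general idea of turning subjective pairwise judgments into percent integrability measures, using invented stovepipe names and scores:

    from itertools import combinations

    # Invented subjective scores (0 to 10) for how readily each pair of
    # stovepipes could be integrated; higher means easier integration.
    pair_scores = {
        ("A", "B"): 8, ("A", "C"): 3, ("A", "D"): 6,
        ("B", "C"): 5, ("B", "D"): 7, ("C", "D"): 2,
    }

    for pair in combinations("ABCD", 2):
        percent = 100.0 * pair_scores[pair] / 10.0
        print(f"Stovepipes {pair[0]} and {pair[1]}: integrability {percent:.0f}%")

    overall = 100.0 * sum(pair_scores.values()) / (10.0 * len(pair_scores))
    print(f"Average over the six pairs: {overall:.0f}%")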
20.7 THE BOTTOM LINE
The bottom line here is that Systems Integration (SI) can be one of the most difficult aspects of
building successful systems. Forcing stovepipes, subsystems, COTS and GOTS together, when they
have not been designed to inter-relate, is basically not a good idea. Is there a better idea? The answer is
“yes,” and it lies in holding firmly to an over-arching design that is the most cost-effective solution,
chosen from among a set of well-defined alternatives. Despite all the rhetoric about integrated
systems and solutions, we would do better to design systems to be integrated rather than try to
“crunch” them together after the fact. Building successful systems may ultimately depend upon our
understanding of this notion.
REFERENCES
[1] Sauser, B., Dinesh Verma, J. Ramirez-Marquez, and R. Gove,“From TRL to SRL:The Concept
of Systems Readiness Levels,” Stevens Institute of Technology, Hoboken, New Jersey, Paper
#126. 79
[2] H. Eisner, “Toward Measures of Interoperability and Integrability for System Architectures,”
2008 INFORMS Telecom Conference, University of Maryland, College Park, MD. 79
C H A P T E R 21
Systems Engineering
Management
There is clearly a need to manage the varied activities that represent the elements of systems en-
gineering. Generically, this would be called Systems Engineering Management (SEM), and one of
the keys to success is to be able to carry out such management efficiently and effectively. Some of
the most important attributes that contribute positively to successful SEM include the following:
Ability to Communicate – This shows up on virtually every list. Management, at all levels,
is all about communicating, and in all directions. And, of course, we are talking about solid,
honest and inclusive communication that reaches people in a positive way.
Effective Team Building and Operation – Today’s manager needs to have the skills to build a
team that is highly functional, motivated and productive. Having an assemblage of very smart
people in a room is not necessarily constructing a team. A good team needs to be built, and it
does not magically appear just as a result of bringing a group of people to a problem-solving
session.
Creative and Adaptive – In the world of large and complex systems, many unforeseen hap-
penings occur. The systems engineering manager, in the midst of such happenings, must find
creative solutions, generally one at a time. And he or she must also be highly adaptable to new
situations and environments. If the previously planned path is clogged, for whatever reason, a
new path must be found.
Technical Understanding – The best managers of systems engineering have a solid technical
background and understanding. With such skills, they are usually held in high regard for both
their managerial as well as their technical contributions. This, in turn, makes for a more solid
systems engineering team.
Persistence and Dedication – A large and complex system has numerous activities that are
being executed simultaneously and in rapid fire. The manager needs to tackle all of the issues
that come to his or her desk one at a time, and with patience. This takes being persistent and
dedicated to a well-considered pace. Good decisions require time to collect data, think and
confer with others on the team.
Systematic and Organized – In the above scenario, it is clearly necessary to organize all the
pieces and deal with them in a systematic manner. A friend, as well as others, has suggested
the three “C” approach: remain calm, cool and collected.
This overall area, relative to the requisite skills of a manager, is explored further in the chap-
ter on Project Management. And beyond such management skills, there is a critical document in
Systems Engineering Management. Several years ago, that document was called a Systems Engi-
neering Management Plan (SEMP). Now, it is called the Systems Engineering Plan (SEP). Both
are discussed in some detail in the text that follows.
21.1 THE SEMP
The major components of the SEMP have been set forth [1]:
• The Systems Engineering Process.
• Systems Analysis and Control.
• Technology Transition.
• Technical Integration Teams.
Of special note are the last two items on this list. The next to last reinforces the need to consider
the technologies in the system, and how they are to be transitioned. The last item listed supports an
emphasis on two notions: teams and integration. We deeply understand that our systems must be
built by highly functional teams. We are not, however, always able to achieve this goal.
21.2 THE SEP
The SEP was basically mandated in February 2004 by the Deputy Under Secretary of Defense as a
crucial part of how Systems Engineering was to be carried out in the Department of Defense [2].
In that context, the plan was to do the following:
• Describe the program’s overall technical approach, including processes, resources, metrics, and
applicable performance incentives.
• Detail the timing, conduct and success criteria of technical reviews.
This same charter became a requirement in a key acquisition instruction [3].
Going beyond the above, the DoD suggested that the SEP be organized to cover five major
focus areas [4]:
• Program requirements.
• Technical staffing and organization planning.
• Technical baseline management.
• Technical review planning, and
• Integration with the overall management of the program.
The SEP is an important document that is updated appropriately depending upon which phase of
the program one is carrying out. The SEP is to be available for each milestone review and thereby
remains a most important way to try to manage the program. In addition, the discipline of systems
engineering is reinforced and embedded in the technical design of the system and the various
technical reviews that are so critical to program success. Another important part of the overall SEP
is the role of the Systems Engineering Working-level Integrated Product Team (SE WIPT) which
includes key systems engineers that bring their best ideas and talents to the system and program
in question. Finally, modeling and simulation (M & S) is recognized as a “key enabler throughout
the acquisition life cycle.” As considered in a previous chapter, M & S remains an integral part of
predicting the performance of complex systems even when no part of the system has actually been
built.
The SEP has an associated preparation guide that can be used by the program manager and
the chief systems engineer. Some of the more important notions cited in that guide [5]:
• The SEP is a blueprint for the overall management of the technical aspects of a program.
• The SEP has a preferred format, but is to be tailored.
• The SEP is to be continuously updated.
• The SEP is to identify linkages with other relevant technical and programmatic activities.
• A sound systems engineering strategy is crucial.
• The SEP details the critical technical issues as well as how they are to be addressed and solved.
The SEP remains one of the most critical documents for technical management and thus plays a
crucial role if one is to achieve program and system success.
REFERENCES
[1] “Systems Engineering,” Military Standard 499B (1971), U.S. Department of Defense (DoD),
Washington, DC. 82
[2] M. W. Wynne, Acting DUSD for AT&L, “Policy for Systems Engineering in DoD,” February
20, 2004. 82
[3] DODI 5000.02, “Operation of the Defense Acquisition System.” December 8, 2008, U. S.
Department of Defense (DoD), Washington, DC. 82
[4] ODUSD (A & T) “Systems and Software Engineering Enterprise Development,” Technical
Planning for Mission Success, version 2.01, U.S. Department of Defense (DoD), April 2008. 82
[5] SEP Preparation Guide, version 0.95, December 2004, Department of Defense (DoD), Wash-
ington, DC. 83
C H A P T E R 22
Project Management
Systems, large and small, are typically built as a part of a project. The project is run by a Project
Manager (PM). The Chief Systems Engineer as well as the Project Controller typically report to
the PM. In that sense, the PM, Chief Systems Engineer and the Project Controller can be thought
of as a triumvirate that manages all aspects of the project. The classical four elements of projects are
the following:
• Planning.
• Organizing.
• Directing, and
• Controlling.
At times, the “controlling” element is called “monitoring” since control can be achieved by combining
monitoring and directing. In the context of this book, the primary activity of the project is to build
a system for which the key discipline is systems engineering.
As implied above, the first need is to formulate a project plan. This document can be said
to contain seven parts, as (1) Goals, Objectives and Requirements (GORs), (2) Task Statements,
(3) Technical Approach, (4) Organization and Staffing, (5) Schedule, (6) Budget, and (7) Risk
Analysis. Each is briefly discussed below.
22.1 GOALS, OBJECTIVES AND REQUIREMENTS
Objectives are usually sub-sets of goals, as shown by the following example:
Goal 1—Assure the financial stability of the enterprise.
Objective 1.1: Build and install a financial management system.
Objective 1.2: Install a COTS project cost monitoring system.
Objective 1.3: Upgrade to a COTS accounting system.
Goal 2—Enhance the technical capabilities of the enterprise.
Objective 2.1: Establish an internal technical training program.
Objective 2.2: Provide and support an external training program.
Objective 2.3: Establish a technical mentoring and information exchange program.
The objectives can be made more quantitative by adding, for example, milestone dates. Re-
quirements are much more specific and relate directly to the functionally decomposed elements of a
system. See chapters nine and ten for more about this area.
22.2 TASK STATEMENTS
These define the work elements to be carried out. They can also be called parts of a work breakdown
structure (WBS). Here are some generic task statements relative to building a model:
Task One - Conceptual design of the model.
Task Two - Building the model.
Task Three - Validating the model.
Task Four - Installing the Model.
Task Five - Using the Validated Model.
22.3 TECHNICAL APPROACH
This section of a plan describes the main points in terms of approach to each of the task statements.
For the above statements, the key questions to be answered are the following:
- What is the suggested approach to conceptualizing the model?
- How is the model construction to be approached?
- How is the model to be validated?
- How is the model to be installed in all the offices?
- What are the plans for using the model?
22.4 ORGANIZATION AND STAFFING
This part of the plan typically has two parts: (1) what is the organization of the project, and (2) which
people are assigned to each task? The latter is often shown as a task responsibility matrix (TRM)
that lists the percent of time each person is assigned to each of the tasks. Additional information can
be appended, such as more detail about the tasks (such as sub-tasks) and the people (such as how
to contact, what is their home organization, etc.). Summations yield estimates of the overall effort
that is being applied to each of the tasks.
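A minimal sketch of a task responsibility matrix and the summations mentioned above, using invented names, tasks, and percentages:

    # Percent of time each (invented) person is assigned to each task.
    trm = {
        "Analyst A": {"Task One": 50, "Task Two": 25, "Task Three": 25},
        "Analyst B": {"Task Two": 60, "Task Four": 40},
        "Analyst C": {"Task Three": 30, "Task Five": 70},
    }

    # Sum down the columns: total effort on each task, in full-time equivalents.
    task_effort = {}
    for person, assignments in trm.items():
        for task, percent in assignments.items():
            task_effort[task] = task_effort.get(task, 0.0) + percent / 100.0

    for task in sorted(task_effort):
        print(f"{task}: {task_effort[task]:.2f} full-time equivalents")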
22.5 SCHEDULE
A key part of the plan is the project schedule. For a large and complex effort, a PERT diagram is
often used to capture the schedule. This approach has many excellent features, such as showing task
dependencies and also the critical path. For a relatively small project, the Gantt chart is still used.
Examples of both of these charting procedures are available in the references cited at the end of this
chapter.
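The cited references cover PERT and Gantt charting in full; the small sketch below, built on an invented task network, shows only the forward-pass calculation that yields the earliest finish times and hence the project duration (the length of the critical path):

    # Invented task network: each task has a duration (weeks) and predecessors.
    tasks = {
        "Concept":  (3, []),
        "Build":    (6, ["Concept"]),
        "Validate": (4, ["Build"]),
        "Install":  (2, ["Build"]),
        "Use":      (1, ["Validate", "Install"]),
    }

    earliest_finish = {}

    def finish(name):
        """Earliest finish time of a task (recursive forward pass)."""
        if name not in earliest_finish:
            duration, predecessors = tasks[name]
            start = max((finish(p) for p in predecessors), default=0)
            earliest_finish[name] = start + duration
        return earliest_finish[name]

    duration = max(finish(t) for t in tasks)
    print(f"Project duration: {duration} weeks")   # 3 + 6 + 4 + 1 = 14 weeks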
22.6 BUDGET
Every project carries with it a budget, which is the maximum expenditure that will be allowed.
The direct labor component can be constructed by starting with the task responsibility matrix. As
indicated above, total person-weeks is a typical output, leading directly to an estimate of labor costs.
An overhead cost is usually a percentage of that cost, including fringe benefit costs. Then we add
other costs such as expected travel costs, and apply appropriate G & A (general and administrative)
costs to that total. Then a fee, or profit cost, is added to obtain the overall project cost plus fee. That
number is usually called the price of the project.
The budget is typically shown by month, detailing the amount of money that is planned to be
spent every month. As the actual monies are spent, the remaining funds to be spent are re-allocated
by month.
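A rough sketch of the cost build-up just described; the labor figure and the overhead, G & A, and fee rates are illustrative assumptions, not recommended values:

    def project_price(direct_labor, overhead_rate, other_direct, ga_rate, fee_rate):
        """Roll direct labor up to a total price, following the sequence in the text."""
        burdened_labor = direct_labor * (1 + overhead_rate)  # overhead incl. fringe
        subtotal = burdened_labor + other_direct             # add travel, other directs
        cost = subtotal * (1 + ga_rate)                      # apply G & A
        return cost * (1 + fee_rate)                         # add fee (profit)

    # Illustrative inputs only.
    price = project_price(direct_labor=200_000, overhead_rate=0.90,
                          other_direct=15_000, ga_rate=0.12, fee_rate=0.08)
    print(f"Project price: ${price:,.0f}")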
22.7 RISK ANALYSIS
This section of the project plan identifies risks and also plans for mitigating such risks. Typical
risks fall into the categories of schedule risk, cost risk and performance risk. Experienced systems
engineers usually do well with being able to look realistically at such risks and putting into place
very specific mitigation strategies as well as tactics.
The above seven elements represent the overall plan for the project, and it should be up-to-
date and as short as necessary to do the job. New employees can be brought up to speed by reading
the plan, and management can obtain a top-level view of the key aspects of the project. Of course,
the plan is a very good way of keeping top management informed and apprised of progress.
22.8 EARNED VALUE
A well-accepted element of project management is the method of earned value analysis (EVA). The
basic idea of EVA can be illustrated by an example. Suppose a project is at the 6 month point of a
12 month overall schedule. One looks at the project cost reports and finds that half of the budgeted
funds have been spent. So we are half-way through the schedule and half-way through the available
dollars. One might conclude that all is well. However, has the project made the technical progress
that it was supposed to achieve? A deeper look at this matter reveals that only a quarter of the work
to be performed has actually been carried out. So the project is actually in trouble in terms of work
that was planned. The EVA quantifies this area, and it estimates where the project is likely to wind
up in terms of schedule, cost and work accomplished.
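Using the numbers in the example above, and an assumed budget at completion, the standard earned value indices can be computed as in the sketch below (the estimate-at-completion formula shown is one common simple form):

    budget_at_completion = 1_000_000               # assumed total budget
    planned_value = 0.50 * budget_at_completion    # half-way through the schedule
    actual_cost   = 0.50 * budget_at_completion    # half of the funds spent
    earned_value  = 0.25 * budget_at_completion    # only a quarter of the work done

    spi = earned_value / planned_value    # schedule performance index
    cpi = earned_value / actual_cost      # cost performance index
    estimate_at_completion = budget_at_completion / cpi   # one common simple formula

    print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}")                         # 0.50 and 0.50
    print(f"Estimate at completion = ${estimate_at_completion:,.0f}")  # $2,000,000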
22.9 THE PROJECT MANAGER
A great deal of a systems engineering program’s success, or lack of it, depends upon the Project
Manager (PM). The PM must have an appropriate set of skills to make it all work. We cite below
some nine characteristics of an effective PM, as set forth in an IEEE Engineering Management
Review [1].
1. Leads by Example. The members of a team are always watching the PM as to what will
happen next. The solid PM shows the way and, hopefully, sets a good example as to what to
do and what not to do.
2. Is a Visionary. Although this attribute tends to be applied more to leadership traits, it applies
as well at the project level. The good PM is constantly reflecting a positive vision as to where
the project is going and what it takes to be successful.
3. Is Technically Competent. A PM that knows the subject matter of the project is more likely
to be successful than the one who does not. Often, the project personnel will not respect a
leader (the PM) who is not technically competent.
4. Is Decisive. The successful PM knows when it is time to stop discussing and gathering data
(as well as opinions) and make a decision as to which way to go.
5. Communicates Well. This is on everyone’s list in terms of effective management as well as
leadership.
6. Motivates. The effective PM knows how to motivate his or her people, going beyond the
“standard tools” such as salary increases and bonuses. The centerpiece of this is often finding
the correct and elegant solution for the customer as well as pride in the team’s accomplishments.
7. Stands Up to Upper Management. The ability to do this, when necessary, is very important
in terms of the team’s solidarity as well as performance.
8. Supports Team Members. Every member of the team needs to feel valued and needs to be
productive. Support from the PM is an essential ingredient in both.
9. Encourages New Ideas. All the good ideas do not reside in the brain of the PM. It therefore
behooves the PM to understand that and to elicit the best ideas from all team members.
There are, of course, many other factors that go into being an excellent PM. The reader is encouraged
to go beyond the above listing and into the references that explore this interesting and important
topic in considerably more detail.
The first of the references cited below refers to the key attributes of a Project Manager.
The following references are recommended with respect to covering the scope and intricacies
of project management. There are, of course, many other worthy texts in this densely populated
field.
REFERENCES
[1] Zimmerman, T. and Yasin, M., “A Leadership Profile of American Project Managers,” IEEE
Engineering Management Review, Vol. 26, No. 4, Winter 1998, pp. 5–11. 88
[2] Kerzner, H., Project Management, 10th Edition, 2009, John Wiley.
[3] Forsberg, K., H. Mooz and H. Cotterman, Visualizing Project Management, John Wiley, 2000.
[4] Keszbom, D., D. Schilling and K. Edward, Dynamic Project Management, John Wiley, 1989.
[5] Eisner, H., Essentials of Project and Systems Engineering Management, 3rd Edition, John Wiley,
2008.
C H A P T E R 23
Software Engineering
In this book, as well as most others, software engineering is considered a part of (a subset of) the
overall topic of systems engineering. However, it occupies a critical space in systems engineering
due to the fact that (a) many of our systems are composed of large segments of software, and (b) a
relatively large number of problems are encountered in these software segments. As an example, we
see references to software problems and issues in many of the reports issued by the GAO (Government
Accountability Office), as cited below [1]:
• Defense attempting to address major software challenges.
• NASA’s software development approach increases safety and cost risk.
• Embedded computer systems: significant software problems need to be addressed.
• Immature software development processes increase risk.
In addition to the above sampling, from time to time, we see a particular system with software
difficulties singled out and reported in the daily media. Such was the case with the FBI’s Sentinel
System which caught the public’s attention with quotes like [2]:
• The system is approximately $100 million over budget and two years behind schedule.
• It could cost $350 million more and take an additional six years to complete.
• Some 90% of the $451 million budget for the entire system was already spent, with delivery
of only two of the program’s four phases.
• Much of the above is attributable to software issues and problems.
So – even though software is a very flexible building block of systems, it remains a key problem area
that still requires a lot of attention.
23.1 SOFTWARE DEVELOPMENT STEPS
The major steps that are needed to develop software are deceptively simple:
1. Formulating the system requirements.
2. Deriving the software requirements from the system requirements.
3. Constructing and validating the software architecture.
4. Building, integrating and testing to the confirmed architecture.
5. Passing the overall acceptance tests for the system.
6. Installing the system with the appropriate O & M training.
We have found that the so-called “waterfall model” for software development does not work, and
we have accepted the “spiral model” as largely developed by Barry Boehm [3]. That has been an
appropriate step forward but perhaps not well enough understood in all of its dimensions. Basically,
this model incorporates iterations to assure that the system is on track from an overall cost, schedule,
risk and performance point of view.
23.2 THE CAPABILITY MATURITY MODEL
One of the early steps taken to try to improve software development was to formulate a Capability
Maturity Model (CMM) [4]. This well accepted advance put forth the notion that we could measure
the extent to which an organization was capable of producing correct software. This model defined
5 levels of capability in the software domain, and set forth some 18 Key Process Areas (KPAs) that
were most important in demonstrating that capability. Since these early days, the basic model has
been expanded to what is known as the “integrated” model, the CMMI [5]. This model expanded
from software and on into other system development areas such as systems engineering and inte-
grated product and process development (SE/SW/IPPD). Version 1.0 had 186 specific practices,
54 goals and 24 process areas. The latter are shown below in Table 23.1. In today’s world, the soft-
ware developers need to demonstrate their capabilities in the larger integrated context. The mature
“systems” enterprises have been stepping up to that challenge.
23.3 COCOMO
Measuring important aspects of software has been one of the key elements of software engineering.
Key measures have included size, progress at milestones, resource utilization, amount of reuse, staffing
levels, numbers and types of problems (including defects), requirements creep, complexity and cost.
The latter measure has been the domain of a model known as COCOMO, the Constructive Cost
Model. It is so central to the matter of software engineering that a short explanation is set forth
here.
COCOMO has been described in two books [6, 7] as well as a USC website [8]. The simplest
form of COCOMO, known as COCOMO I, has taken the form of the following four equations:
PM = A(KDSI)^x.
TDEV = B(PM)^y.
PROD = DSI/PM.
FTES = PM/TDEV, where
Table 23.1: CMMI Twenty-Four Process Areas [5].
Category: Process Management
Organizational Process Definition
Organizational Process Focus
Organizational Training
Organizational Process Performance
Organizational Innovation and Deployment
Category: Engineering
Requirements Management
Requirements Development
Technical Solution
Product Integration
Verification
Validation
Category: Project Management
Project Planning
Project Monitoring and Control
Supplier Agreement Management
Integrated Project Management
Risk Management
Integrated Teaming
Quantitative Project Management
Category: Support
Configuration Management
Process and Product Quality Management
Measurement and Analysis
Decision Analysis and Resolution
Organizational Environment for Integration
Causal Analysis and Resolution
PM = person-months, KDSI = thousands of delivered source instructions, TDEV = development time, PROD = productivity, DSI = delivered source instructions, FTES = full-time equivalent staff,
A and B are multipliers, and x and y are scale factors. For a particular set of software development
conditions (the organic mode), we will take the following values for the multipliers and scale factors:
A = 2.4, B = 2.5, x = 1.05, and y = 0.38, and a project with DSI equal to 80,000, as below:
PM = 2.4(KDSI)^1.05 = 2.4(80)^1.05 = 2.4(99.6) = 239 person-months.
TDEV = 2.5(239)^0.38 = 2.5(8.01) = 20 months.
PROD = 80000/239 = 334.7 DSI per person-month.
FTES = 239/20 = 11.95 persons.
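These numbers are easy to reproduce. The following is a minimal sketch in Python of the same organic-mode calculation; the constants are the values quoted above, and the variable names are ours rather than part of the model:

# Basic COCOMO (organic mode): the four equations above, with A = 2.4, B = 2.5,
# x = 1.05, y = 0.38, and a project of 80,000 delivered source instructions.
A, B = 2.4, 2.5
x, y = 1.05, 0.38

dsi = 80_000              # delivered source instructions
kdsi = dsi / 1000.0       # thousands of DSI

pm = A * kdsi ** x        # effort, person-months (about 239)
tdev = B * pm ** y        # development time, months (about 20)
prod = dsi / pm           # productivity, DSI per person-month (about 335)
ftes = pm / tdev          # average full-time equivalent staff (about 12)

print(round(pm), round(tdev), round(prod, 1), round(ftes, 1))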
Additional information regarding COCOMO II has been provided in Chapter 14. It is clear that
COCOMO remains an important contribution to software engineering, especially with respect to
costs and the various factors that affect costs.
23.4 TOP TEN FOR SOFTWARE ENGINEERING
We close this chapter with the author’s “Top Ten” for the overall field of software engineering.
ONE—IDENTIFY AND TRACK PROGRESS ON ALL BUILDS EVERY WEEK,
WITH A PROMINENT DISPLAY
One of the more important actions that can be taken on a software development program is to
make sure that progress is sharply defined and measured, each and every week. This would seem like
overkill, but it generally is not, considering how fast one can fall behind. In addition, this suggestion
also calls for a prominent display (on a prominent wall) of current status as compared with planned
status. Problem areas are not likely to be overlooked, especially when they are staring at everyone in
red on an extremely large chart.
TWO—USE COCOMO AND OTHER CERS
As seen above, the use of COCOMO provides critical estimates of software cost that can be used independently and also compared with more conventional build-by-build estimates of cost. The most
advanced form of COCOMO should be used, consistent with the availability of input information
on the various effort multipliers and scale factors. Other cost-estimating relationships (CERs) are
recommended, depending upon the type of system one is working on. For example, function point
analysis is a CER that has proven to be useful on some programs.
THREE—ASSURE MULTIPLE AND INDEPENDENT ESTIMATES
All software related models (such as COCOMO or software reliability) require input estimates. A
critical input for COCOMO is an estimate of KDSI, or thousands of delivered source instructions.
For all such estimates, one should obtain multiple and independent values from different people on
your team. An example is to obtain three estimates of critical parameters from three groups who
are not to confer on the numbers or the process. We are trying to have the best possible inputs and
avoid the GIGO (garbage in–garbage out) problem.
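One simple way to fold such independent estimates together, offered here only as an illustration and not as a prescribed method, is to average them and flag a wide spread for reconciliation before the number is used:

# Illustrative only: combine three hypothetical, independently produced KDSI estimates.
estimates = [70.0, 85.0, 100.0]    # KDSI values from three groups that did not confer

mean_kdsi = sum(estimates) / len(estimates)
spread = (max(estimates) - min(estimates)) / mean_kdsi

print(f"combined KDSI estimate: {mean_kdsi:.1f}")
if spread > 0.25:                  # arbitrary threshold for "too much disagreement"
    print("estimates differ widely; reconcile assumptions before using the number")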
FOUR—ACCEPT THE CMMI NOTIONS AND DISCIPLINE
The CMMI principles are good ones and need to be followed by all serious software development
organizations. Tailored approaches are acceptable in order to match the processes used to the par-
ticulars of a given program, and avoid “overkill.” Successful team practices should be accepted and
used as models for all relevant parts of an organization. Trends toward agile development need to be
taken seriously. So do the process areas shown in Table 23.1 here.
FIVE—ASSURE A SOFTWARE “TEAM”
Software engineering, as is the case with systems engineering, is a team endeavor. The PM and lead
systems and software engineers need to build the appropriate teams that lead to high productivity
and a sense of accomplishment as well as commitment. People who refuse to join the team need to
be shown their errant ways, as well as the door.
SIX—KEEP TIME AND COST RESERVES IN CASE OF TROUBLE
Reserves should be SOP (standard operating practice) for difficult software developments. A risk
analysis will tend to reveal areas of probable troubles, but resources need to be set aside to deal with
such troubles. The PM will appreciate that kind of help, in distinction to “let’s all now go to a 60 hour
work week.”
SEVEN—IDENTIFY AND FOLLOW BEST PRACTICES
Each software development team should know what works for it and what does not, as best practices
in their particular environments. This does imply that various teams may have different best practices, and that is true. But tailored best practices are not another way of describing a free-for-all. There are
also many sources that can be accessed that will reveal recommended practices that have worked for
other teams and organizations.
EIGHT—LEAVE EXTRA TIME FOR INTEGRATION AND TEST (I & T) AND
ITERATIONS THEREOF
History on software programs suggests that not enough time is typically allocated to iterations of the
integration and test (I & T) cycles. The expectation is that most, if not all, tests will be positive. That
is rarely the case. Failures require changes in the original code, most of which were not predicted or
anticipated. These unexpected iterations can lead down some dark alleys so a good countermeasure
is simply to leave extra time to achieve the appropriate I & T results.
NINE—PAY SPECIAL ATTENTION TO THE HW-SW INTERFACES AND
TRADEOFFS
Systems tend to fail at the interfaces, and the hardware-software interface is one of the most impor-
tant. In addition, there are often trade-offs to be explored in terms of design choices as well as what
should be implemented in hardware and what in software. The answers here tend to not be obvious,
and they often require additional attention.
TEN—PROVIDE SPECIAL CARE (& FEEDING) OF YOUR BEST SW
ENGINEERS
While we are paying special attention, we must recognize that it is the software engineers that are
ultimately making all the good things happen. They often need special treatment and will deliver
for the project if this treatment is forthcoming. Such is life. Software is a domain that needs extra
attention, and so do the folks who find the right answers for the project.
REFERENCES
[1] www.gao.gov 90
[2] J. Stein, “Report Criticizes FBI on Computer Project,” Washington Post, October 21, 2010.
90
[3] Buede, D., The Engineering Design of Systems, John Wiley, 2009. 91
[4] Paulk, M., B. Curtis and M. B. Chrissis (1991), “Capability Maturity Model for Software,”
CMU/SEI-91-TR-24, Pittsburgh, Software Engineering Institute. 91
[5] Ahern, D., A. Clouse and R. Turner, CMMI Distilled, Addison-Wesley, 2001. 91, 92
[6] Boehm, B., Software Engineering Economics, Prentice-Hall, 1981. 91
[7] Boehm, B., et.al, Software Cost Estimation with COCOMO II, Prentice-Hall, 2000. 91
[8] www.sunset.usc 91
C H A P T E R 24
Systems Acquisition
Systems are acquired under various sets of rules. System acquirers need to know these rules and also
need to be improving the processes they use, whenever possible. The U. S. government, for obvious
reasons, documents its acquisition rules, often in great detail. In this chapter, we take an overview
look at some of these acquisition rules, perspectives, and concerns, especially those promulgated by
the Department of Defense (DoD).The systems engineering community, including the ever-present
systems integrators, need to know all the actual and implied acquisition guidance in order to do the
best job they can for their customers.
24.1 THE 5000 SERIES
The so-called “5000 series” represents a DoD Directive [1] as well as an Instruction [2]. In Tables 24.1
and 24.2 below, we list some of the more important points that are made in these two significant
documents.
Table 24.1: Key Points in the DoD Directive: The Defense Acquisition System [1].
• The primary objective is to acquire quality products that meet user needs at a reasonable
price.
• Use competition to innovate, reduce costs, and increase quality.
• Deal appropriately with system responsiveness, flexibility, discipline, information
assurance, interoperability, and integrated test and evaluation.
• Multiple concepts and alternative ways to meet user needs shall be considered.
• Seek the most cost-effective solution over the system’s life cycle.
• Apply a systems engineering approach that optimizes system performance and
minimizes total ownership costs.
• Utilize a “Total Systems Approach.”
Many additional points are made by the 2008 version (an eighty page document) of the
above Instruction [3] that the reader may wish to examine. These include further details regarding
integrated test and evaluation as well as systems engineering. The latter confirmed the importance
of systems engineering and its role in systems acquisition.
Table 24.2: Key Points in the DoD Instruction: Operation of the Defense Acquisition
System [2].
• Provide an analysis of alternatives (AoA) as the basis for the Technology
Development strategy.
• The Technology Development Phase will reduce technology risk.
• Approve a minimum set of key performance parameters (KPPs).
• Operations and Support (O & S) shall sustain the system in the most cost-effective
way over its full life cycle.
• Develop integrated plans and capability roadmaps.
• Maintain the continuous application of a robust systems engineering methodology.
24.2 DEFENSE ACQUISITION PERFORMANCE ASSESSMENT
(DAPA) REPORT [4]
Continuing concern regarding the acquisition process within the DoD was the basis for the 2006
DAPA report. This report looked at 42 important areas of concern and results were presented in the
following areas:
1. Organization.
2. Workforce.
3. Budget.
4. Requirements.
5. Acquisition.
6. Industry.
This report provided a clear indication of what needed to be improved and how to go about
doing so, as of that time. Despite this rather well-defined prescription, major concerns continued
to be expressed about the acquisition system. So shortly thereafter, we have the Weapon System
Acquisition Reform Act of 2009 (WSARA) [5].
24.3 WEAPON SYSTEM ACQUISITION REFORM ACT OF 2009
Many of the key aspects of the WSARA of 2009 can be gleaned from the subjects of some of the
sections under titles 1 and 2, as below in Table 24.3.
Each of the weapon systems produced and used by the DoD needs to be appropriately supported in the field. Such support hopefully leads to greater efficiency as well as more cost-effective readiness
outcomes. The DoD is looking for improved operations in these eight principal areas: (1) product
support business model, (2) industrial integration strategy, (3) supply chain operational strategy,
(4) governance, (5) metrics, (6) operating and support costs, (7) analytical tools, and (8) human
capital.
Table 24.3: Sections of Weapon System Acquisition Reform Act (WSARA) of 2009.
Title One – Acquisition Organization
Section 101 – Systems Engineering Capabilities
Section 102 – Developmental Testing
Section 103 – Technological Maturity Testing
Section 104 – Independent Cost Assessment
Section 105 – Combat Commanders
Title Two – Acquisition Policy
Section 201 – Trade-offs of Cost Schedule and Performance
Section 202 – Preliminary Design Review
Section 203 – Life-Cycle Competition
Section 204 – Nunn-McCurdy Breaches
Section 205 – Organizational Conflicts of Interest
Section 206 – Acquisition Excellence
24.4 GREATER EFFICIENCY AND PRODUCTIVITY
If we move into September of 2010, we see important messages conveyed to all acquisition profes-
sionals. One was from the DUSD (A, T & L) of the DoD, and it emphasized the need to "do more without more" [6]. At the same time, Secretary of Defense Gates issued a memorandum calling for new ways to increase efficiency and productivity [6]. Five specific areas are emphasized:
1. Affordability and control of cost growth.
2. Incentivizing productivity and innovation in industry.
3. Promoting real competition.
4. Improving tradecraft in services acquisition.
5. Reducing non-productive processes and bureaucracy.
This latter initiative from the Secretary of Defense has some 23 subordinate points within the above
five categories. Overall, the objective is to provide guidance as to how to achieve greater efficiency
and productivity, and thereby obtain better buying power.
24.5 EVOLUTIONARY ACQUISITION
From the above, one can see enormous efforts to reform the acquisition system so that greater performance can be achieved with reduced expenditures. As this chapter closes, however, one other important
perspective was accepted earlier as an important part of the systems acquisition process, namely,
evolutionary acquisition. This approach mandated that we were to build systems incrementally and
get these increments out to the war-fighter as soon as possible. This activity, coupled with the several
systemic changes in the acquisition system identified above, will likely tell much of the acquisition
“story” for years to come. Will it all lead to greater success in building our large systems? This author
believes that it will, but only if we actually implement a reasonable number of changes that have
been mandated and demanded by the Secretary of Defense and his Deputy Undersecretary for A,
T & L.
REFERENCES
[1] DoD Directive 5000.1, “The Defense Acquisition System,” May 12, 2003, Department of
Defense (DoD), Washington, DC. 96
[2] DoD Instruction 5000.2, “Operation of the Defense Acquisition System,” May 12, 2003,
Department of Defense (DoD), Washington, DC. 96, 97
[3] DoD Instruction 5000.02, “Operation of the Defense Acquisition System,” December 8, 2008,
Department of Defense (DoD), Washington, DC. 96
[4] “Defense Acquisition Performance Assessment (DAPA) Report,” Kadish Report, January
2006, Department of Defense (DoD), Washington, DC. xi, 97
[5] Weapon System Acquisition Reform Act (WSARA) of 2009, Public Law 111–23. May 22,
2009. 97
[6] Ashton Carter, “Memorandum for Acquisition Professionals,” September 14, 2010, DUSD
(A, T & L), The Pentagon, Department of Defense (DoD),Washington, DC. 98
C H A P T E R 25
Systems of Systems
As systems have become larger and more complex, there has been a formal acknowledgment that
true “systems of systems” have indeed come upon the scene. And the question immediately behind
that is: what does the systems engineering and integration community need to do to successfully
build and maintain these systems of systems?
There are numerous examples of systems of systems. One that is readily apparent is our
national air traffic control (ATC) system. This system is a central part of what might be called our
National Aviation System (NAS), and it clearly has subordinate systems dealing with navigation,
radars, aircraft landing, communications, and others, including massive display systems in the air
and on the ground. Other domains for systems of systems include our national telecommunications
system, our highway system, our train system, our defense system and our energy delivery system.
25.1 SOME PERSPECTIVES REGARDING SYSTEMS OF
SYSTEMS
In 2001, Sage and Cuppan examined the systems engineering and management of systems of systems
(SoS) and federations of systems (FoS) [1]. One result of this examination was the conclusion that
such systems and federations possessed the “characteristics of complex adaptive systems.” They also
reinforced points made about five characteristics of systems of systems, namely:
• Operational independence of the constituent systems.
• Managerial independence of the systems.
• Geographic dispersion.
• Emergent behavior.
• Evolutionary development.
The emergent behavior is of special interest, meaning that the overall system of systems can exhibit
patterns of behavior that are not present in any one of the individual systems. Also, the latter-listed
item implies that the system of systems may be continuously growing under general principles of
evolutionary development.
A couple of years later, a group of researchers published their work regarding “System of
Systems Engineering”[2]. One of the points of interest is their comparison between conventional
systems engineering and system of systems engineering (SoSE), in eight domains, i.e., (1) focus,
(2) objective, (3) approach, (4) expectation, (5) problem, (6) analysis, (7) goals, and (8) boundaries.
Their “bottom line” is that System of Systems Engineering (SoSE) may be defined as the following:
“The design, deployment, operation and transformation of higher level metasystems that
must function as an integrated complex system to produce desirable results. These metasystems are
themselves comprised of multiple autonomous embedded complex systems that can be diverse in
technology, context, operation, geography, and conceptual frame”[2].
The above paper also sets forth four unique and distinct system contexts that need to be
considered. These are the following: (1) new system design, (2) existing system transformation,
(3) operation and maintenance, and (4) evaluation and evolution. This is followed by suggestions
for research in the domain of system of systems engineering.
A third approach to matters concerning systems of systems, and the engineering thereof,
was set forth by this author, working with other colleagues in the industry. This approach was
documented, in summary form, in a textbook dealing with project management and systems engi-
neering [3]. The first area of focus was to define the “elements” of system of systems engineering,
under the following 3 major categories:
1. Integration Engineering.
2. Integration Management.
3. Transition Management.
A total of 15 subordinate elements were established under these three topics. The basic notion
was, and remains, that “integration” was a key issue under a System of Systems perspective, and it
had to be approached from both an engineering as well as a management point of view. Further,
“management” was also very important, showing up in “integration” as well as “transition” issues.
The 15 elements that become critical areas in SoSE focus upon the most important activities, given that each of the systems that are part of an SoS has a coherent systems engineering methodology that has already been applied. Stated another way, the matter of SoSE tends to zero in on "systems"
issues having to do primarily with integration and transition.
As part of the aforementioned third approach, the authors also formulated a facilitating
concept they called "Rapid Computer-Aided System of Systems Engineering" (RCASSE) [3]. As can be inferred from the title, the focus here was rapid or agile systems engineering, as applied to systems of systems, with special attention to how the computer could be used to make the entire process more efficient. The basic steps of RCASSE were defined as follows:
1. Mission engineering.
2. Baseline architecting.
3. Performance assessment.
4. Specialty engineering.
5. Interfaces/compatibility evaluation.
6. Software issues/sizing.
7. Risk definition/mitigation.
8. Scheduling.
9. Pre-planned product improvement.
10. Life-cycle cost issue assessment.
Following these steps is recommended in order to deal with a complex system of systems and
the often stringent demands of schedule. These demands are usually also coupled with cost con-
straints. The overall point is that RCASSE is an example of how conventional systems engineering
needs to evolve so as to be viable in the domain of complex systems of systems.
25.2 COST ESTIMATION
An important question is that of estimating the development effort and costs of a system of systems.
We have COCOMO (and other approaches, such as Function Points) for more conventional systems.
What might we have for systems of systems? As it turns out, the same COCOMO group, under
the direction of Barry Boehm, has tackled this problem, and offers an approach by the name of
COSOSIMO [4]. Key inputs are set forth as size drivers and scale factors. The former are a function of the following:
• Number of SoS interface protocols.
• Number of independent system component organizations.
Each of these is weighted in terms of expected complexity. Seven scale factors have been selected:
1. SoS architecture maturity.
2. Cost/schedule compression.
3. Integration risk resolution.
4. Component system maturity.
5. Component system readiness.
6. Integration team capability.
7. Maturity of the integration processes.
The current status of COSOSIMO can be ascertained by exploring the website for the CO-
COMO team [5].
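The published COSOSIMO equations are not reproduced in this chapter. Purely to illustrate how size drivers and scale factors typically combine in COCOMO-family parametric models, a hypothetical sketch might look like the following; every coefficient, weight, and threshold here is invented for illustration and should not be mistaken for the actual COSOSIMO calibration:

# Hypothetical COCOMO-family form, NOT the published COSOSIMO equations.
def sos_integration_effort(interface_protocols, component_orgs, scale_factors,
                           a=1.0, b=1.1):
    """Illustrative SoS integration effort (person-months); all constants invented."""
    size = 3.0 * interface_protocols + 2.0 * component_orgs   # weighted size drivers
    exponent = b + 0.01 * sum(scale_factors)                  # scale factors raise the exponent
    return a * size ** exponent

# Example: 12 interface protocols, 8 component-system organizations, and
# seven scale-factor ratings on a 0 (best) to 5 (worst) scale.
print(round(sos_integration_effort(12, 8, [2, 3, 1, 2, 2, 1, 3])))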
Implicit in the work on estimating costs for systems of systems is the notion of the SoS Lead
System Integrator (LSI), what the LSI does in terms of activities, and how this might differ from
more conventional systems engineering activities [6]. But a bottom line in this investigation is to
view SoS LSI teams as complex adaptive organizations and to derive from that a set of ways to
possibly improve success rates and efficiency with systems of systems. A sampling of these ways
includes [6]:
• Planning for risk-driven spiral processes.
• More up-front architecting and engineering.
• Support for innovation and learning.
• More mission-driven and change-tolerant approaches.
25.3 THE UBIQUITOUS DEPARTMENT OF DEFENSE (DOD)
We end this chapter on systems of systems by recognizing that the DoD, as in many areas of
systems engineering, was on the scene early to formulate some guidance and perspectives so as to
try to increase success rates. The DoD Systems Engineering Guide for Systems of Systems [7] is
an excellent source for additional reading. Its short roadmap provides, for example, pointers to the
following material:
• A description of types of SoS.
• A comparison of systems and SoS.
• A high level overview, as well as detailed description, of SoS systems engineering core elements.
• How the systems engineering processes support SoS core elements.
• A high level summary of the Guide.
It seems clear that we will see many papers, studies and case histories of how to build and manage
systems of systems as we continue on down the road in this complex area.
REFERENCES
[1] Sage, A. and C. Cuppan (2001), “On the Systems Engineering and Management of Systems of
Systems and Federations of Systems,” Information, Knowledge, and Systems Management, 2(4):
325–345. 100
[2] Keating, C., et al. (2003), “System of Systems Engineering,” Engineering Management Journal,
15(3). 100, 101
[3] Eisner, H., Essentials of Project and Systems Engineering Management, (1998, 2002, 2008), John
Wiley. 101
[4] Lane, J. A., “Estimating System-of-Systems Development Effort,” Software Tech News, DoD,
Vol. 9, No. 1, March 2006. 102
[5] sunset.usc.edu 102
[6] Lane, J. A. and B. Boehm, “Systems of Systems Lead System Integrators: Where Do They
Spend Their Time and What Makes Them More or Less Efficient?.” Systems Engineering,
Vol. 11, Number 1, Spring 2008. 102
[7] “Systems Engineering Guide for Systems of Systems,” version 1.0, August 2008, Director,
Systems and Software Engineering, DUSD (Acquisition and Technology), Department of
Defense, Washington, DC. 103
C H A P T E R 26
Thinking Outside the Box
The many problems we have had with building successful systems suggest that there are times when
we need to approach several of the issues of systems engineering by trying to “think outside the box.”
Often, this simply implies that, from time to time, it is necessary to question conventional wisdom.
Many have reported that they try doing exactly that, but that most of the time, the conventional
wisdom prevails. On the other hand, when new approaches are accepted, and prove to be better, we
are all encouraged to think even harder about more productive solutions. In many ways, that’s how
we make progress—lots of folks trying new ways (in the face of the naysayers) and making them
work.
To make these notions more concrete, we look below at a half dozen declarations of what
might be called “inside the box” thinking in some aspects of systems engineering.
26.1 INSIDE THE BOX 1: BUILD SYSTEMS SO AS TO
MAXIMALLY INTEGRATE ALL STOVEPIPES
Many managers in the federal government arena come into leadership positions with responsibility
for large systems that are mainly composed of stovepipes. In a rather plausible way that has a lot of
appeal, they then declare that the first priority is to integrate all the stovepipes. The organization
then follows that lead, perhaps not aware of the difficulties that might lie ahead. The problem that
may be lurking is simply that the stovepipes may not be “integrable” within any reasonable time and
cost boundaries. Most of the stovepipes in question were not designed to be integrated with the other stovepipes. So why expect that this approach is actually achievable? Is there another, more plausible approach?
It is suggested here that the “out of the box” approach simply be determined as follows:
integrate those stovepipes for which it is cost-effective to do so; otherwise, do not try to crunch
systems together that don’t go together. Why abandon our overall sensible approach to build and
deploy systems on the basis of their cost-effectiveness?
26.2 INSIDE THE BOX 2: IT’S NOT POSSIBLE TO MAKE
CHANGES SO AS TO ACHIEVE MORE THAN MARGINAL
IMPROVEMENTS
Marginal improvements can be thought of as in the 5-15% range, and according to conventional
wisdom, we should accept that type of target and expectation. However, upon more thought, we
might find that significant changes in the process lead naturally to improvements that could be as
much as ten times these amounts. What would you say to the approach Federal Express took by way
of changing the process? What would you say to the approach Xerox took to constructing a new type
of copying machine? What would you say to the business machines (i.e., typewriters, computers)
that were built by IBM in terms of improvements in productivity?
We build new large systems according to a time-honored set of procedures whereby most of
them are designed by means of a "clean sheet of paper" paradigm. What if we were, for certain classes of systems, to try to re-use entire best-of-breed systems? This author set forth such an approach, and showed that improvements in the 2500 percent (!) region were feasible [1]. The world awaits new ideas embedded in new processes, and two smart graduate students showed us the way with a little system called "Google."
26.3 INSIDE THE BOX 3: REQUIREMENTS SHOULD BE
CONSIDERED FIXED AND INVIOLATE
We are very nervous about the notion of changing requirements. Indeed, “creeping” requirements
are singled out as one of the reasons we fail in building new systems. However, it makes sense that we
should be changing requirements as we learn more and more about a system and what requirements
make sense and what do not. If a requirement does not make sense, we should be encouraged to
change it.
Barry Boehm, one of our software gurus, tells a true story about a 1-second response requirement that his company could not meet without overrunning the system's original budget of $30
million by an additional $70 million [2]. His company found that by changing the requirement to 4
seconds, which was all right for 90 percent of the cases, the original budget was achievable. That’s
the way to treat requirements. How sensitive are schedules and budgets to changes in requirements?
How much do we actually need the more stringent requirement in an operational setting?
26.4 INSIDE THE BOX 4: THERE IS NO SILVER BULLET THAT
CAN FIX A POORLY PERFORMING ACQUISITION
SYSTEM
We have been very concerned about our acquisition system, especially in the Department of Defense.
Back in 2006, we did a deep special study of this system, the result of which was the so-called DAPA
(Defense Acquisition Performance Assessment) report [3]. This report examined 42 issues and
defined a set of recommendations in six categories. In September of 2010, Defense Secretary Gates
set forth some 23 changes that were needed in the DoD [4]. Questions: Were the DAPA report
recommendations implemented? Will the SecDef Gates changes be implemented?
The inside the box perspective says that there is no silver bullet that will lead to a reformation
of our acquisition system. Yet, in reference to the overall problem, a clear-headed thinker by the
name of Norman Augustine gave us the silver bullet when he said [5]:
“the difficulty resides in having the will to do anything about these problems”
It is clear that we know what the problems are, and we also know what the plausible solutions
are. As Augustine has said, do we have the will to actually make the necessary changes? That’s the
silver bullet.
Inside the Box 5: Big systems are hard to build, so as our systems get larger, we should
expect more and more failures
This inside the box proposition can be translated into: let’s lower our expectations as we attempt
to build larger systems. Why not rather set out to (a) build the smarter “learning organization” [6],
(b) accept the principles of continuous improvement and six sigma, and then (c) have fewer failures as
we move on down the road? That’s what progress is all about, and that’s what we should be planning
for as well as achieving in the real world.
Inside the Box 6: Try as we may, we will never get away from major patterns of falling
behind in cost, schedule and performance
Yes, these do seem to be today’s patterns - many programs falling behind, almost at the outset,
in one or more of these critical dimensions. Is there another way? The answer is yes, but it likely
lies “outside the box”. What are its governing principles? The answers seem to lie in the direction of
(a) incentivizing to “tell the truth”, and (b) providing appropriate penalties when agreed-upon goals
are not met. Can we change our approach to a “getting ahead of the power curve” mentality? The
technical domain is not the problem. The problem lies in the management domain, and in taking a
closer look at Augustine’s wise words, as stated above.
26.5 NINE SUGGESTIONS FOR THINKING OUTSIDE THE
BOX
In 2005, this author published a book that set forth nine perspectives for “thinking outside the
box” [7]. A very short overview of these perspectives completes this chapter.
1. Broaden and Generalize
Open your field of view to admit new strategies and possible solutions. A narrow view will
limit your perspective and creativity.
2. Crossover
Build a system for one customer and then sell it to many customers. It’s reuse taken to its
natural edges, and gaining as much leverage as possible.
3. Question Conventional Wisdom
Conventional wisdom changes with the times, so try getting a bit ahead of the pack. If you
have a new idea, press it into service.
4. Back of the Envelope
A new solution usually does not require the longest equation ever written. Give your intuitive
side a chance to converse with your experienced side to produce a new answer on one sheet of
paper.
5. Expand the Dimensions
Problems as well as solutions have many dimensions. Here’s a case whereby just recognizing
the dimensions allows a solution to step forward.
6. Obversity
The negative side of a positive proposition is what you’re searching for. Why? It may cast a
new light on what makes sense and what may not.
7. Remove Constraints
Often, perceived constraints turn out to not be real constraints. Remove them, one by one,
and a new solution may then become obvious.
8. Thinking With Pictures
We know that visual perception is a cognitive activity. Collect, draw and view the pictures to
allow the right insights to come forth.
9. Systems Approach
There are some seven to ten elements of the systems approach (see chapter two). Try as many
as you need to find the right solution. Keep the alternatives on the table to help with the
process.
REFERENCES
[1] Eisner, H., (1995), Reengineering the Software Acquisition Process Using Developer-Off-the-
Shelf Systems (DOTSS), IEEE International Conference on Systems, Man and Cybernetics,
Vancouver, British Columbia, Canada, October 22–25; see also reference [7] 105
[2] Boehm, B., (2002), “Unifying Software Engineering and Systems Engineering”, Computer
Magazine, (March), pp. 114–116. 105
[3] “Defense Acquisition Performance Assessment”, DAPA Report, U. S. Department of Defense
(DoD), January 2006, Washington, DC. 105
[4] Carter, A., “Memorandum for Acquisition Professionals”, DUSD (A, T & L), September 14,
2010, Department of Defense (DoD), Washington, DC. 105
[5] Eisner, H., Essentials of Project and Systems Engineering Management, 2008, John Wiley, page
415. 105
[6] Senge, P., The Fifth Discipline – The Art & Practice of the Learning Organization, Double-
day/Currency, 1990. 106
[7] Eisner, H., Managing Complex Systems – Thinking Outside the Box, John Wiley, 2005. 106, 107
C H A P T E R 27
Ten Failure Factors
This is a book about attempting to build successful systems. We have seen, from various sections in
several chapters, that this is not a simple thing to do. There are many hurdles, obstacles and potholes
that lie ahead as one seeks a success pathway. And we would be remiss here not to try to cite some
of the ways that managers and programs have basically failed, the notion being that we should try
to keep away from these actions and scenarios.
27.1 ONE—UNREALISTIC SCHEDULES
Many of the “failure stories” report that this and that program is months, or even years, behind
the original schedule. How can this be? Are we literally that bad at the age-old art, and science, of
scheduling?
The fact is that we know how to schedule and can do so in great detail and with great precision.
However, schedule problems still arise because we have a tendency to accept unrealistic schedules.
The pressure to do so can be intense, and it is difficult to say no to your boss who says—“The
schedule came to us from on high and it is our job to deliver to that schedule. If you can’t do it, let
me know now and I’ll find someone (other than you) who can!”. So your dilemma is clear. If you
say no, the path to promotion may be blocked forever. If you say yes, and then fail, you’re then a bad
manager who couldn’t step up to a challenge.
Is there another way? Is there a different “solution”? This author believes there is. The essence
of it lies in de-scoping the system, thereby defining what you truly believe can actually be delivered
to the unrealistic schedule. The de-scoped system is then fully documented along with a cover letter
saying: “This is the system I’ve signed up to deliver within the given schedule”. The usual question
at that point is then—“OK, what will it take to deliver the original system, not a de-scoped system?”
Then you need to be prepared to reveal your fully studied plan for a new and realistic schedule. The
two alternatives, and the two schedules, are at that point very clear, and you are off the horns of the
dilemma.
27.2 TWO—UNREALISTIC BUDGETS
As you might expect, the countermeasure here is much the same as that for the unrealistic schedule.
Typically, one can save money by de-scoping and relaxing some of the more difficult and stringent
system requirements. Typically, one can save money by “simplifying” the design to a sturdy but not as
far-reaching approach. These are the preferred answers. And behind them is the ultimate question—
what will it take, by way of budgeted dollars, to meet the original requirements? Be prepared to reveal
your well thought-out answer to that question, as well.
27.3 THREE—TOO MANY RISKS IN THE PERFORMANCE
DIMENSION
The three classical risk areas for systems include (1) schedule risk, (2) cost risk, and (3) performance
risk. If a system is pushing the state-of-the-art in terms of required performance, we too often
and rather cavalierly accept performance risk without the necessary challenges and without the
analysis of what it really takes to reduce the risk. The probability of success takes a nosedive as more
performance risks are not dealt with. There is considerable evidence that we have tried to fill the gap
with “immature” technology. That has basically not worked. So where might a potential answer lie?
In this regard, it would seem that E. Rechtin pointed us in the right direction. In his well-defined heuristics [1], he made it clear that after all his years of building and evaluating systems, a viable strategy, in many situations, was still the K.I.S.S. approach.
27.4 FOUR—LOTS OF RISK ANALYSIS, NOT ENOUGH RISK
MITIGATION
While we’re on the subject of risk, we need to re-emphasize that knowing about a problem, in great
detail, is not the same as actually fixing the problem. If risk analysis is to have an appropriate payoff,
we must do it early and often, always with an eye toward true mitigation. That can mean changing our design (heaven forbid), backtracking for a while, and running new tests that were not originally planned. We do not wish risks away; we make them disappear.
Our level of understanding and commitment to risk mitigation is always enhanced by even a
quick review of the “Challenger” incident. This was the well-publicized “O”-ring problem that led
to a space shuttle mission failure in January of 1986. None of the seven-person crew survived, and
this major failure was soon investigated by the Rogers Commission. Apparently, many knew about
the risk attendant to the O-ring design, but the culture and decision-making processes did not lead
to appropriate before-the-fact action. So the bottom line is the following: it’s not what you know
about high risks, it’s what you do about what you know.
27.5 FIVE—LIP SERVICE TO “THE LEARNING
ORGANIZATION”
The PM (Project Manager) and the CSE (Chief Systems Engineer) are not supposed to carry the
organization of which they are a part on their backs. Although their efforts are often Superman-like,
we cannot count on it. The organization is supposed to help, support, and facilitate. And when it
does not, potential successes can be turned into failures.
Very clear guidance has been given to us by Peter Senge in his “The Fifth Discipline” [2]. In
that exposition, he highlighted five critical disciplines (listed below) and also called for building and
sustaining a “learning” organization.
1. Building a Shared Vision
2. Personal Mastery
3. Mental Models
4. Team Learning
5. Systems Thinking (The Fifth Discipline)
Table 1—Senge’s Five Disciplines [2]
A learning organization tends to not repeat errors and mistakes, and it constantly moves in
a “continuous improvement” direction. The learning organization tends to keep away from failure
scenarios by its very nature.
27.6 SIX—POOR REQUIREMENTS ENGINEERING
Almost every list of “problems” in building systems has this item on the list. Perhaps the simplest
way to describe a major part of the problem is to say that (a) we do not know, well enough, how
to write an exemplary set of requirements at the beginning of a program, and (b) we do not know,
well enough, how to improve poor requirements, or just plain get rid of them. The latter is probably
more significant, and it implies a well defined and accepted way of changing requirements when it
makes good sense to do so. Poor requirements should not be set in stone but rather should be subject
to challenge, negotiations, trade-off analysis and change. New ways of advancing this notion need
to be worked on and used to improve a situation that has been in need of a new approach for many
years.
27.7 SEVEN—FAILURE TO BUY INTO EVOLUTIONARY
DEVELOPMENT
We have basically accepted the notion of evolutionary development as the appropriate way to build
large-scale systems. But still there are many systems that are being constructed on the rejected
“grand design” (some say grandiose) idea. Yes, we need to have an overall design concept for the
whole system. But no, we don’t have to build and acquire the system as if it’s all or nothing. The
evolutionary (some say incremental) approach is the right answer, and the increments need to be
useful, functional, and acceptable as separate elements (or subsystems) of the overall system. The
documented acquisition plan for all large systems must pass this elemental approach.
27.8 EIGHT—INSUFFICIENT COMMUNICATIONS AND
TEAMWORK
The failure of many projects can be traced, ultimately, to poor communications. Each project
needs to work on and establish the best communications that are possible under the circumstances.
And it starts with regular and well designed project meetings chaired by the PM who understands
how to build the appropriate communications discipline. This, as it turns out, is also a key link in
establishing a true team. Although we know the importance of Integrated Product Teams (IPTs),
we still have a long way to go in actually achieving the same. Formal and informal training would
also help.
27.9 NINE—SLIPPAGE IN THE PRACTICES OF SYSTEMS
ENGINEERING
Although what it takes to establish a solid systems engineering program for the building of systems
is well known and well documented, there remains considerable slippage between what we need and
what we have. Some aspects of systems engineering need to be tailored to the situation at hand, and
many organizations do not know how to do that in an agile way. And some organizations simply
do not have the appropriate and integrated mix of domain knowledge and systems engineering
expertise.
On this overall point, we can consider the following five systems engineering issues that were
defined by the NDIA (National Defense Industrial Association [3]):
1. Lack of awareness of the importance of systems engineering.
2. Inadequate qualified resources.
3. Insufficient tools and environments for systems engineering execution.
4. Inadequate requirements engineering.
5. Poor initial program formulation.
This is but one of many sources that lead us to a deeper understanding of what might need to be
improved in order to build successful systems.
27.10 TEN—WE KNOW WHAT TO DO; WHY WON’T WE DO
IT?
Many study groups, from the Defense Science Board to various industry associations such as the
NDIA as cited above, have issued reports on the problems we have in building large systems. In
essentially all of these reports, there is a long list of what to do to fix these problems. So—in a
very real sense—we know what the problems are, and we know how to fix them. So what then is
the problem? As eloquently stated by one of our best systems engineers and executives, Norman Augustine [4], the problem lies in our failure to have the will to take the needed actions, in the real world.
REFERENCES
[1] Rechtin, E., System Architecting, Prentice-Hall, 1991. 110
[2] Senge, P., The Fifth Discipline – The Art & Practice of the Learning Organization, Double-
day/Currency, 1990. 111
[3] www.ndia.org 112
[4] Eisner, H., Essentials of Project and Systems Engineering Management, 3rd edition, John Wiley,
page 415. 113
C H A P T E R 28
A Success Audit
The sub-title of this book is “Building Successful Systems.” So now is the time to summarize the
key factors of success in building systems, using the overall disciplines of systems engineering and
project management. This will be approached by a “success audit,” using a series of questions, which
follow:
Are you-
1. Using the systems approach?
The elements of the systems approach were discussed very early in this book. Each and every
one of them is important in terms of the overall goal of building a successful system.
2. Considering an appropriate set of alternatives?
This notion must be called out separately even though it is one of the elements of the systems
approach. Mature development teams are always aware of the need to look at other possibilities.
Failure to do so opens the door to one or more of your competitors.
3. Doing the risk analyses and mitigation actions that are called for?
This cites both the analyses and the follow-up actions. Many programs fail since they do not
give priority to the actions that might be required to mitigate high risk scenarios.
4. Not accepting low-probability-of-success scenarios and constraints (i.e., impossible schedules, budgets, and technical performance goals)? (A short calculation illustrating this point appears after the list.)
If your success probabilities were 0.8 each for cost, schedule and performance, then the likeli-
hood that you would fail in at least one of these areas (assuming independence) is around fifty
percent. That’s still not good enough. The three success probabilities need to increase if you
are looking for overall success (which you are).
5. Challenging high risk requirements?
If you are really committed to success, you must say “no” to high risk areas that become system
requirements. Look for ways to back up on these requirements.
6. Looking at sensible ways to simplify?
This book suggests that trying to simplify (remember the KISS notion) is a positive attribute
even as systems are being put forth that have higher and higher performance levels. Can your
team find the simpler, more elegant, solution? Is it looking for it?
7. Using the appropriate architecting process and validating the results?
There are several architecting processes that you can choose from. A recommended process
is suggested here, but “hybrid” approaches are also available. Pick the one that best fits your
program needs, and figure out how to validate the answers. Ways to accomplish the latter
include modeling and simulation.
8. Able to confirm the cost-effectiveness of your preferred architecture?
As reiterated several times, the basis for selecting a preferred architecture is its cost-
effectiveness, as compared with alternatives. Further, we have specific ways that we use to
compute the effectiveness, as well as the costs, of the various alternatives. It is generally not a
good idea to suddenly change the basis for selection.
9. Confirming that you have a “robust” design?
A key feature of a robust design is that the system be able to carry out its primary mission
despite the existence of various failures in the system. A robust system is also a “slow die”
system. In general, we should not allow a single point failure to bring down the entire system.
10. Assuring the system’s interoperability (internal and external)?
All systems have internal as well as external interfaces, by design. A critical part of both types
of interfaces is interoperability. We should also be better at measuring the degree or level of
interoperability that we have achieved, as compared with our goal in this respect.
11. Establishing the maturity of the selected technologies?
Using technology in a system when that technology is not fully mature is a clear risk. It is
necessary to confirm that our selections of technologies are sound in terms of capabilities (they
satisfy the requirements) as well as availability (they are there when we need them). In short,
the selected technologies need to be “demonstrated.”
12. Using the evolutionary development approach?
This approach means that we are able to construct parts of the system and bring them into
operation, while keeping other parts on track for a later implementation. It is a “chunking”
approach that has proven to be successful, especially for large and complex systems.
13. Challenging unrealistic stovepipe integration notions?
In general, we are not striving to maximize the number of stovepipes that are integrated.
Rather, we are trying to find cost-effective solutions to the problem at hand. If this means very
little stovepipe integration, so be it.
14. Assuring the technical competence of every member of your team?
You need to have a contribution from every member of the team, which, in turn, means that
they need to have the skills in order to make that contribution. Anything less jeopardizes the success of the project and the system.
15. Able to confirm that you have a highly functional and communicative team?
Having skilled personnel is a necessary, but not a sufficient condition for success. These folks
need to be molded into a highly functional team, which depends upon attitude and high levels
of honest informative communications. Divisive team members should not be tolerated.
16. Assuring that you have the appropriate project support from your company or organization?
This includes such items as support from the IT people, contracts, accounting and, most important of all, your management chain.
17. Mobilizing an excellent test and evaluation (T & E) capability?
Test and evaluation represents the sign-off milestone for the system. Great T & E people are
worth their weight in gold. Get them going as early as possible.
18. Assuring the support of a mature verification and validation (V & V) group?
Having this support can set the stage for a successful T & E, as well as building incremental
confidence in the system and its performance against requirements.
19. Using “thinking outside the box” principles?
Nine ideas for thinking outside the box have been presented here. Are you able to use some
of them? Which ones, and in what context?
20. Invoking project management principles in a rigorous manner? This refers to all aspects of the
classical tasks of project management—planning, organizing, directing and controlling.
21. Able to use appropriate “progress tracking” systems?
We are always measuring the progress we are making against our plans, even as our plans are
being updated. These tracking systems need to provide timely and accurate information.
22. Obtaining inputs from the key members of your team, and using them in making decisions
(e.g., multiple independent estimates on cost and schedule)?
23. Maintaining appropriate contingencies and reserves?
These generally take the form of budget dollars as well as schedule. Success often depends
upon having some types of reserves for problems that were not foreseen.
24. Living up to each and every one of your commitments?
The project/systems engineering team, including of course its leadership, needs to maintain
the position that all commitments are real, and the team will, in fact, do what it has said it
would do. This “attitude” is part of a success scenario.
25. Building and maintaining a constructive relationship with your customer?
Even though this is the last question on our list, we must not and will not forget our customer.
This is not to be an adversarial relationship. Rather, it’s one in which both you and your customer,
with honesty and trust, solve the many problems attendant to building today’s large-scale high-tech
systems.
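To make the arithmetic behind question 4 concrete, here is a short sketch of the calculation, assuming (as the question does) that the three dimensions are independent:

# Probability of succeeding in cost, schedule, and performance simultaneously.
p_cost, p_schedule, p_performance = 0.8, 0.8, 0.8

p_all_succeed = p_cost * p_schedule * p_performance       # 0.512
p_fail_somewhere = 1.0 - p_all_succeed                     # 0.488, roughly fifty percent

print(f"P(all three succeed)    = {p_all_succeed:.3f}")
print(f"P(fail in at least one) = {p_fail_somewhere:.3f}")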
If we wish to try going a step further with the above 25 questions, assign a number to each in
terms of your experience with a specific program. Here are some numbers to use:
4 = a strong “yes.”
3 = mostly/often.
2 = under consideration.
1 = mostly “no.”
0 = not at all/off the screen.
Now add all your assigned numbers. If you get less than 75 there is trouble ahead, depending upon
which questions got the lowest score. Now look at the questions that got “1” or “0” and see what you
might want to do about them.
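The same tally is easily automated. The scores below are hypothetical, and the 75-point threshold is the one suggested above:

# Success-audit tally: 25 answers, each scored 0 (not at all) through 4 (a strong "yes").
scores = [4, 3, 3, 2, 4, 3, 2, 3, 4, 3, 2, 3, 3, 2, 4, 3, 3, 2, 1, 3, 3, 4, 2, 3, 4]
assert len(scores) == 25 and all(0 <= s <= 4 for s in scores)

total = sum(scores)
print(f"audit score: {total} out of 100")
if total < 75:
    print("trouble ahead -- revisit the questions scored 0 or 1")
weak_spots = [i + 1 for i, s in enumerate(scores) if s <= 1]
print("questions to work on:", weak_spots)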
Success in building large-scale systems requires paying attention, every day, to the types of
questions posed here. There is no single “silver bullet;” the closest thing to it is having the will and
determination to do what is necessary over a long period of time.
C H A P T E R 29
Standards
Systems engineering has several standards that have contributed to its development and understand-
ing. In this chapter, we briefly examine four standards that apply specifically to systems engineering and two that apply to software engineering. Other documents are cited, as well, that
relate to the field’s overall body of knowledge (BoK).
29.1 MILITARY STANDARD 499B
A convenient starting point for examining standards in systems engineering is the well-known
“499B”[1]. This document goes back to the year 1971 and sets forth some of the important early
concepts of systems engineering. Among these is what they called the overall systems engineering
process, characterized by the diagram below (Figure 29.1).
Figure 29.1: Four Key Elements of Systems Engineering [1].
The above chart focuses upon four key elements of systems engineering, namely:
1. Requirements Analysis.
2. Functional Analysis/Allocation.
3. Synthesis.
4. Systems Analysis and Control.
Upon more detailed scrutiny, we see these elements as central to system architecting rather than
representing the overall systems engineering process. However, the clear articulation of these “top
four” notions advanced the thinking in the overall systems engineering community.
Another important aspect of 499B was its definition of the SEMP—the Systems Engineering
Management Plan. The overall SEMP, as therein defined, contained the following key topics:
1. Systems Engineering Process.
2. Systems Analysis and Control.
- Systems Analysis.
- Technical Performance Measurement.
- Technical Reviews.
3. Technology Transition.
4. Technical Integration Teams.
Even today, there is much wisdom in paying special attention to these aspects of systems engineering.
Of special note is the critical need for performance measurement, technology, and teams. These needs
remain true today, in a quite fundamental way.
29.2 IEEE P1200
This IEEE standard directly focuses upon systems engineering [2]. In doing so, it accepted the
above four elements from 499B (Figure 29.1) and added a fifth element, namely, verification and
validation. The fact that the IEEE decided to include systems engineering within its field of view
helped to support the notion that today’s engineers, whatever their specialty discipline, needed to
learn about and be able to apply the principles of systems engineering.
29.3 EIA 632
This standard deals with “processes for engineering a system” [3]. The nature of the standard is
revealed by its 13 processes, aligned into five categories, as below (Table 29.1).
We note the emphasis on “process,” and the idea that the entire scope of systems engineering
can be addressed by means of the 13 defined processes.
29.4 ISO/IEC 15288
This is an especially important standard that deals with systems engineering in the context of system
life cycle processes [4]. The key word, again, is “process.” This standard defines some 25 processes
that fall into four categories, namely, agreements, enterprises, projects, and technical matters. This
standard has been widely accepted and served as the basis for an important update of the INCOSE
Table 29.1: Categories and Processes in EIA 632 Standard [3].
A. Acquisition and Supply
1. Supply Process
2. Acquisition Process
B. Technical Management
3. Planning Process
4. Assessment Process
5. Control Process
C. System Design
6. Requirements Definition Process
7. Solution Definition Process
D. Product Realization
8. Implementation Process
9. Transition to Use Process
E. Technical Evaluation
10. Systems Analysis Process
11. Requirements Validation Process
12. System Verification Process
13. End Products Validation Process
systems engineering handbook. This update, in turn, plays a central role in the INCOSE certification
of systems engineers. In other words, to be certified by INCOSE, it is important that the candidate
know and understand their handbook. The full details, including processes, cannot be included in
this document. The reader is encouraged to look into the matter further through either INCOSE
or the ISO/IEC representatives.
29.5 IEEE/EIA 12207
This is a software standard that is based upon life cycle processes in three domains: (1) primary areas,
(2) supporting areas, and (3) organizational areas [5]. A total of some seventeen processes cover all
these areas. The processes that are part of the primary area include the following: (1) acquisition,
(2) supply, (3) development, (4) operation, and (5) maintenance. Even though we are dealing with
software here, we notice some similarities to the 632 systems engineering standard briefly cited
above.
29.6 IEEE P1471
This software standard moves directly into the matter of software architectures and descriptions
thereof [6]. The standard sets forth a recommended practice for architectural description (AD),
and it acknowledges that there is not a reliable consensus on the precise meaning of a software
architecture. At the same time, it states that architectural descriptions, or views, are very important
as we proceed with our software designs and implementations. A rather complete list of the uses of
ADs is provided.
Standards in systems and software engineering are extremely useful in terms of developing
common understandings and trying to embrace best practices. The standards cited in this short
chapter give us some sense of what has been documented, and also what directions have been
established as we move forward in these important fields.
REFERENCES
[1] “Engineering Management,” Military Standard 499B (1971), Department of Defense (DoD),
Washington, DC. 118
[2] IEEE P1200, “Standard for Systems Engineering,” April 7, 1994, IEEE, 445 Hoes Way,
Piscataway, New Jersey. 119
[3] “Processes for Engineering a System,” EIA Standard 632 (1998), Washington, DC. 119, 120
[4] ISO/IEC 15288, “Systems Engineering – System Life Cycle Processes,” 2003. 119
[5] IEEE/EIA 12207, “Software Life Cycle Processes,” April 1998, IEEE, 445 Hoes Way, Pis-
cataway, New Jersey. 120
[6] IEEE P1471, “Recommended Practice for Architectural Description” (AD), December 1999,
IEEE, 445 Hoes Way, Piscataway, New Jersey. 120
References
This bibliography cites a number of important books in the fields of systems and software
engineering. In general, papers are cited at the end of each of the relevant chapters. The books are
in alphabetic order with respect to the first-named author.
REFERENCES
[1] Ahern, D., A. Clouse and R. Turner, CMMI Distilled, Addison-Wesley, 2001.
[2] Augustine, N., Augustine’s Laws, 6th edition, AIAA, Reston, VA, 1982.
[3] Beam, W., Systems Engineering, McGraw-Hill, 1990.
[4] Bertalanffy, L. von, General System Theory, George Braziller, 1968.
[5] Boardman, J. and B. Sauser, Systems Thinking: Coping with 21st Century Problems, CRC Press,
2008.
[6] Boehm, B., Software Engineering Economics, Prentice-Hall, 1981.
[7] Boehm, B. et al., Software Cost Estimation with COCOMO II, Prentice-Hall, 2000.
[8] Blanchard, B., Systems Engineering Management, John Wiley, 1998.
[9] Blanchard, B. and W. Fabrycky, Systems Engineering and Analysis, 5th edition, Prentice-Hall,
2011.
[10] Buede, D., The Engineering Design of Systems, John Wiley, 2000.
[11] Cleland, D. and L. Ireland, Project Management, 5th edition, McGraw-Hill, 2007.
[12] Eisner, H., Essentials of Project and Systems Engineering Management, 3rd edition, John Wiley,
2008.
[13] Eisner, H., Managing Complex Systems – Thinking Outside the Box, John Wiley, 2005.
[14] Eisner, H., Computer-Aided Systems Engineering, Prentice-Hall, 1988.
[15] Eisner, H., Reengineering Yourself and Your Company, Artech House, 2000.
[16] Forrester, J., System Dynamics, Pegasus Communications, 1968.
[17] Forsberg, K., H. Mooz and H. Cotterman, Visualizing Project Management, John Wiley, 2000.
[18] Friedenthal, S., A. Moore and R. Steiner, A Practical Guide to SysML: The Systems Modeling
Language, Elsevier, 2008.
[19] Gibson, J., W. Scherer and W. Gibson, How To Do Systems Analysis, John Wiley, 2007.
[20] Goode, H. and R. Machol, System Engineering, McGraw-Hill, 1957.
[21] Hall, C., The Age of Synthesis, Peter Lang, 1995.
[22] Kerzner, H., Project Management, John Wiley, 2009.
[23] Keszbom, D., D. Schilling and K. Edward, Dynamic Project Management, John Wiley, 1989.
[24] Kossiakoff, A. and W. Sweet, Systems Engineering Principles and Practice, John Wiley, 2003.
[25] Laszlo, The Systems View of the World, George Braziller, 1972.
[26] Meadows, D., Thinking in Systems, Chelsea Green Publishing, 2008.
[27] Martin, James N., Systems Engineering Guidebook, CRC Press, 1997.
[28] “NASA Systems Engineering Handbook,” NASA/SP-2007–6105 Rev. 1, NASA Headquar-
ters, Washington, DC, December 2007.
[29] Parnell, G., P. Driscoll and H. Henderson, Decision Making on Systems Engineering and Management, John Wiley, 2008.
[30] Rechtin, E., System Architecting – Creating and Building Complex Systems, Prentice-Hall, 1991.
[31] Rechtin, E. and M. Maier, The Art of Systems Architecting, CRC Press, 1997.
[32] Rechtin, E., Systems Architecting of Organizations, CRC Press, 2000.
[33] Sage, A. and J. Palmer, Systems and Software Engineering, John Wiley, 1990.
[34] Sage, A., Systems Management for Information Technology and Systems Engineering, John Wiley,
1995.
[35] Sage, A., Introduction to Systems Engineering, John Wiley, 2000.
[36] Sage, A. and J. Armstrong, Introduction to Systems Engineering, John Wiley, 2000.
[37] Sage, A. and W. Rouse, Handbook of Systems Engineering and Management, John Wiley, 1999.
[38] Senge, P., The Fifth Discipline – The Art & Practice of the Learning Organization, Double-
day/Currency, 1990.
[39] “Systems Engineering Handbook,” vers. 3.2, International Council on Systems Engineering
(INCOSE), 2010.
[40] Wasson, C., Systems Analysis, Design and Development, John Wiley, 2006.
[41] Weinberg, G., Rethinking Systems Analysis and Design, Dorset House, 1988.
[42] Westerman, H. R., Systems Engineering Principles and Practice, Artech House, 2001.
[43] Wymore, A. W., Model-Based Systems Engineering, CRC Press, 1993.
Author’s Biography
HOWARD EISNER
Howard Eisner came to The George Washington University
(GWU) as a full professor in 1989 after thirty years as an ex-
ecutive and research engineer with ORI, Inc. and the Atlantic
Research Corporation (ARC). In addition to these positions, he
served as president of two systems engineering companies, the In-
tercon Systems Corporation and the Atlantic Research Services
Company. Dr. Eisner has written four books that relate directly to
systems engineering, its management, and connected disciplines.
He was trained as an engineer and spent much of his career in the
command, control, communications and intelligence arena. He is
a Life Fellow of the IEEE (Institute of Electrical and Electronics Engineers) and a Fellow of IN-
COSE (International Council on Systems Engineering) and The New York Academy of Sciences.
He is also a member of Tau Beta Pi, Eta Kappa Nu, Sigma Xi and Omega Rho, various engineering
and research honor societies. In 1994 he was given the Outstanding Achievement Award from the
GWU Engineering Alumni. He holds BEE, MS and Doctor of Science degrees from the City
College of New York, Columbia University, and The George Washington University, respectively.
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=6812861.pdf&bkn=6812860&pdfType=book
|
Project Management for
Engineering Design
Copyright © 2007 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or
transmitted in any form or by any means—electronic, mechanical, photocopy, recording, or any
other except for brief quotations in printed reviews, without the prior permission of the publisher.
Project Management for Engineering Design
Charles Lessard and Joseph Lessard
www.morganclaypool.com
ISBN-10: 1598291742 paperback
ISBN-13: 9781598291742 paperback
ISBN-10: 1598291750 ebook
ISBN-13: 9781598291759 ebook
DOI 10.2200/S00075ED1V01Y200612ENG002
A Publication in the Morgan & Claypool Publishers Series
SYNTHESIS LECTURES ON ENGINEERING #2
Lecture #2
Series ISSN: 1559-811X print
Series ISSN: 1559-8128 electronic
First Edition
10 9 8 7 6 5 4 3 2 1
Printed in the United States of America
Project Management for
Engineering Design
Charles Lessard
Texas A&M University
Joseph Lessard
Globeleq Inc.
SYNTHESIS LECTURES ON ENGINEERING #2
M&C M o r g a n & C l a y p o o l P u b l i s h e r s
ABSTRACT
This lecture book is an introduction to project management. It will be of use for engineering
students working on project design in all engineering disciplines and will also be of high value to
practicing engineers in the work force. Few engineering programs prepare students in methods
of project design and configuration management used within industry and government.
This book emphasizes teams throughout and includes coverage of an introduction to
project management, project definition, researching intellectual property (patent search), project
scope, idealizing & conceptualizing a design, converting product requirements to engineering
specifications, project integration, project communications management, and conducting design
reviews. The overall objectives of the book are for the readers to understand and manage their
project by employing the good engineering practice used by medical and other industries in
design and development of medical devices, engineered products and systems. The goal is for
the engineer and student to work well on large projects requiring a team environment, and to
effectively communicate technical matters in both written documents and oral presentations.
KEYWORDS
Engineering Design, Teams, Conflict Resolution, Decision Making, Project Management,
Time Management, Cost Management, Risk Management, Earned Value Analysis.
Contents
1. Introduction to Engineering Design ..... 1
   1.1 Teams ..... 1
   1.2 Defining the Project or Problem ..... 4
   1.3 Background ..... 4
   1.4 Design Phases ..... 5
   1.5 Summary ..... 7
   References ..... 7
2. Project Management Overview ..... 9
   2.1 Project Management Knowledge Areas ..... 10
   2.2 Project Life Cycles and Project Phases ..... 11
   2.3 Product Life Cycles ..... 11
   2.4 Organizational Structures ..... 12
   2.5 Project Management Job Functions ..... 13
   References ..... 17
3. Project Integration Management ..... 19
   3.1 Project Plan Development ..... 19
   3.2 Project Plan Execution ..... 20
   3.3 Project Controlling Process and Change Control ..... 22
   3.4 Change Control System ..... 22
   3.5 Configuration Management ..... 23
   3.6 Need for Top Management Commitment ..... 23
   References ..... 24
4. Project Scope Management ..... 25
   4.1 Project Scope Management Processes ..... 25
   4.2 Selecting Projects ..... 26
   4.3 Weighted Scoring Model ..... 27
   4.4 Project Charters ..... 27
   4.5 Work Breakdown Structure ..... 27
   4.6 Approaches to Developing Work Breakdown Structures (WBSs) ..... 28
   References ..... 30
5. Personal and Project Time Management ..... 31
   5.1 Personal Time Management ..... 31
   5.2 "Work Smarter, Not Harder" [1] ..... 32
   5.3 Project Time Management ..... 32
   5.4 Project Time Management Processes ..... 33
   5.5 Project Network Diagrams ..... 33
   5.6 Precedence Diagramming Method (PDM) ..... 34
   5.7 Estimation of Activity Times (Duration) ..... 34
   5.8 Schedule Development ..... 35
   5.9 Critical Path Method (CPM) ..... 35
   5.10 Program Evaluation and Review Technique (PERT) ..... 36
   5.11 Summary ..... 36
   References ..... 37
6. Project Cost Management ..... 39
   6.1 Project Cost Management Processes ..... 39
   6.2 Resource Planning ..... 40
   6.3 Cost Estimating ..... 40
   6.4 Cost Estimation Techniques ..... 40
   6.5 Problems in Cost Estimation ..... 41
   6.6 Cost Budgeting ..... 42
   6.7 Guidelines for Preparing Budget ..... 42
   6.8 Cost Control ..... 43
   Reference ..... 43
7. Earned Value Analysis ..... 45
   7.1 Work Breakdown Structure (WBS) ..... 45
   7.2 Calculating Earned Value ..... 46
   7.3 Earned Value Management System (EVMS) ..... 47
   7.4 Tools and Techniques ..... 48
   7.5 Summary ..... 49
   Reference ..... 49
8. Project Quality Management ..... 51
   8.1 Project Quality Management Processes ..... 51
   8.2 Quality Planning ..... 51
   8.3 Quality Assurance ..... 52
   8.4 Quality Control ..... 52
       8.4.1 Pareto Analysis ..... 52
       8.4.2 Quality Control Charts ..... 53
       8.4.3 Statistical Sampling and Standard Deviation ..... 53
       8.4.4 Basic Statistical Measures ..... 53
       8.4.5 Testing ..... 55
   8.5 Improving Project Quality ..... 55
       8.5.1 Maturity Models ..... 55
   8.6 Cost of Quality ..... 56
   8.7 International Organization for Standardization ..... 57
   8.8 Good Manufacturing Practice ..... 57
   8.9 Summary ..... 57
   Reference ..... 57
9. Project Procurement Management ..... 59
   9.1 Project Procurement Management Processes ..... 59
   9.2 Procurement Planning ..... 60
       9.2.1 Types of Contracts ..... 60
   9.3 Solicitation Planning ..... 61
       9.3.1 Statement of Work (SOW) ..... 62
   9.4 Solicitation ..... 62
   9.5 Source Selection ..... 63
   9.6 Contract Administration ..... 63
   9.7 Contract Closeout ..... 63
   References ..... 64
10. Project Human Resource Management ..... 65
    10.1 Managing People ..... 66
    10.2 Improving Effectiveness: Covey's Seven Habits ..... 67
        10.2.1 The Seven Habits ..... 67
        10.2.2 Personality and Behavioral Tools ..... 69
    10.3 Summary ..... 70
    References ..... 70
11. Project Communications Management ..... 73
    11.1 Project Communications Management Processes ..... 73
    11.2 Communications Planning ..... 73
    11.3 Information Distribution ..... 74
    11.4 Span of Control ..... 75
    11.5 Performance Reporting ..... 75
        11.5.1 Template for Weekly Progress Report ..... 76
    Reference ..... 76
12. Project Risk Management ..... 77
    12.1 Project Risk Management ..... 77
    12.2 Types of Project Risks ..... 78
    12.3 Risk Quantification ..... 78
    12.4 Risk Responses ..... 80
    12.5 Causes of Risk ..... 80
    12.6 Risk Management Plans ..... 80
    12.7 Risk Response Control ..... 81
    12.8 Summary ..... 82
    References ..... 82
13. Project Closeout ..... 83
    13.1 Closing Processes and Outputs ..... 83
        13.1.1 Administrative Closure ..... 84
        13.1.2 Approval Verification ..... 84
        13.1.3 Procurement Contract Closure ..... 84
    13.2 Outcomes Assessment Meeting ..... 85
    13.3 Outcomes Assessment Report ..... 86
    13.4 Transition Planning ..... 86
    13.5 Project Documents to be Archived ..... 87
    13.6 Critical Success Factors ..... 87
    13.7 Summary ..... 87
    References ..... 88
14. Project Design Reviews ..... 89
    14.1 Prelude to Conducting a Design Review Meeting ..... 89
    14.2 Entry Criteria for Design Review ..... 90
    14.3 Conducting the Design Review ..... 93
    14.4 Design Review Output ..... 94
    14.5 Exit Criteria ..... 95
    References ..... 95
15. Making Technical Decisions ..... 97
    15.1 Group Decision-Making Process ..... 97
        15.1.1 The Rational Model ..... 98
        15.1.2 The Political Model ..... 98
        15.1.3 The Process Model ..... 98
        15.1.4 The Garbage Can Model ..... 98
    15.2 U.S. Navy Executive Decision-Making Framework ..... 99
    15.3 Decision Matrix or Utility Function ..... 100
        15.3.1 Weighted Function Evaluation ..... 100
        15.3.2 Authors' Recommendations ..... 100
        15.3.3 Utility Function ..... 100
    15.4 Factor Weights ..... 101
    15.5 Grading Scale ..... 101
    15.6 Summary ..... 102
    References ..... 102
16. Management of Team Conflict ..... 103
    16.1 Giving Feedback ..... 105
    16.2 Conflict Resolution Methods ..... 105
    16.3 Summary ..... 106
    References ..... 107
Author Biography ..... 109
CHAPTER 1
Introduction to Engineering Design
The engineering design process is similar to the problem-solving processes taught in engineering
colleges. Most commercial products begin with a novel "idea" or with the identification of a need
in some commercial market. Hence, the first steps in a product design may include
1. a novel idea, or potential market needs,
2. forming a team, since most complex problems or products are not developed by a single individual,
3. development of generalized, broad product requirements:
   a. Generally, user requirements are not technical; e.g., a doctor wants to monitor temperature,
      or a veterinarian wants to remove the purring sound from a stethoscope so he can hear a
      cat's heart valve sounds. What are the technical specifications?
4. definition of required inputs and outputs,
5. definition of the product outcome:
   a. What is it used for, and by whom? What are the inputs?
   b. How do we measure the desired outcome?
   c. How do we test the desired outcome?
   d. What are the criteria for acceptance?
1.1 TEAMS
Webster [1] defines the word “team—a noun” in various ways:
1. Two or more draft animals harnessed to a vehicle or farm implement.
2. A vehicle along with the animal or animals harnessed to it.
3. A group of animals exhibited or performing together.
4. A group of players on the same side in a game.
5. Any group organized to work together.
Engineering has expanded the definition to “A team is a small group of people with complementary
skills who are committed to a common purpose, performance goals, and approach for which they
hold themselves mutually accountable” [2].
You might wonder: why work in teams? One might cite that "team skills are valued
by industry," or that no one person has in-depth knowledge of all the disciplines and
functions of engineering needed to design and produce a spacecraft or a space station. It requires a
"team" of people with diverse backgrounds in the necessary technical areas to produce the
product for space. Simply put, teams need engineers with a broad set of talents, i.e.,
1. technical knowledge
2. creativity
3. people skills
4. planning ability
5. management skills.
Diverse abilities and diverse ways of thinking and viewing problems are the strength of teams
that function well together. Because teams are important in the success of projects, project
managers must understand teams and consider five issues in team building.
1. Interdependence: It is the issue of how each member's outcomes are determined, at least
in part, by the actions of the other members. Functioning independently of one another
or competing with your teammates may lead to suboptimal or disastrous outcomes for
both the entire team and the project.
2. Goal specification: It is very important for team members to have common goals for
team achievement, as well as to communicate clearly individual goals that members
may have.
3. Cohesiveness: It refers to the attractiveness of the team membership. Teams are cohesive
to the extent that membership in them is positively valued, that is, members are drawn
toward the team. Patterns of interpersonal attraction within a team are a very prominent
concern. Task cohesiveness refers to the way in which skills and abilities of the team
members mesh to allow effective performance.
4. Roles and norms: All teams need to develop a set of roles and norms over time.
a. Roles: For a student team, the role structure will enable the team to cope more
effectively with the requirements of a given task. The roles may be rotated so that all
team members experience, and learn from, the various positions held. It is extremely
important that the roles are understood and accepted by team members.
b. Norms: For a student team, norms are the rules governing the behavior of team mem-
bers, and include the rewards for behaving in accord with normative requirements,
as well as the sanctions for norm violations. It is not uncommon for a set of norms
to develop between team members that are never actively discussed. However, it is
always better to have interaction rules appear in the form of a written document,
such as in a “code of cooperation”: the agreed upon rules governing the behavior of
team members, as well as any appropriate rewards and sanctions. The team code of
cooperation sets a norm for acceptable behavior for each team member and repre-
sents how the team members will interact with one another. It should be developed,
adopted, improved, and/or modified by all team members on a continuous basis;
and copies should be easily accessible by team members.
5. Communication: Effective interpersonal communication is vital for the smooth func-
tioning of any task team. It is also important for a team to develop an effective com-
munication network: who communicates to whom; is there anybody “out of the loop?”
Norms governing communication will develop. Do those norms encourage every-
one to participate, or do they allow one or two dominant members to claim all the “air
time?” [3].
Key team roles include the following:
1. Meeting coordinator: Coordinates and prepares the agenda (i.e., what needs to be ac-
complished, establishes a process, etc.); coordinates time, date, and place of meetings;
ensures all necessary resources are available for the meetings; keeper of the code of
cooperation (to be discussed); monitors the decision-making process; coordinates the
process check. However, this person is not the “boss.”
2. Recorder: The recorder is the person responsible for doing the writing of the team
whenever group work is being done, which should maximize participation by the rest
of the team, since no one else needs to worry about it. If required, the recorder also
ensures that the process(es) being used by the team is (are) documented and/or prepares
an “action list” to keep a record of the assigned actions. In addition, the recorder makes
sure that copies of their work are provided to the rest of the team.
3. Time keeper: The time keeper has the responsibility of keeping track of time, as well as
keeping the team moving so that they can finish the task at hand.
4. Encourager/gatekeeper: The encourager/gatekeeper has the task of giving encouragement
to all the other team members. The person also has the responsibility of maintaining
a balanced level of participation for all the members. They will encourage the silent
members and try to hold back the verbose, dominant members.
5. Devil’s advocate: The devil’s advocate takes a position opposite to that held by the
team to ensure that all sides of an issue are considered. This responsibility should be
undertaken by all team members [4].
In a learning environment such as student teams, the roles should rotate among team members.
In summary, effective teams include the use of roles, the development of a code of cooperation,
the use of the check for understanding to make sure everybody is “on the same page,” the
development of effective listening skills, the ability to give and take effective constructive
feedback, the use of agendas for planning meetings, the use of contact before work to provide
time for nontask-related discussions, the definition of decision-making processes to be included
in the agenda, the use of the issue bin to provide time for discussion of items not in the agenda,
the use of action lists to keep a record of assigned actions, the use of a process check for
continuous improvement, and a commitment from all the members of the team. Once the
team is established, the purpose and objectives of the team should be defined and documented.
Subsequently, the project must be adequately defined.
1.2 DEFINING THE PROJECT OR PROBLEM
One of the most important aspects in product development and engineering design is to ad-
equately define the scope of the problem. Often, the problem is stated initially in terms of
vague project requirements. The team must redefine the product requirements in terms of in-
puts, outputs, and appearance, then convert and link requirements to "technical specifications,"
e.g., performance, accuracy, tolerances, etc. One should keep in mind that “all specifications
must be tested.” Additionally, the team must develop and document “pass or fail acceptance
criteria” for each specification, as well as goals or criteria for success and constraints (part of
scope). Typical goals or criteria for success include aesthetics, performance, quality, human
factors, costs (“initial capital” and “life cycle” costs), safety, operating environment, interface
with other systems, effects on surroundings, logistics, reliability, maintainability (preventive
and corrective maintenance), serviceability, and availability. Constraints usually include the
following factors: budget, time, personnel, legal, material properties, availability of materials,
off-the-shelf purchase versus fabrication/construction, competition, and manufacturability (can
it be manufactured?) [5].
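
To make this concrete, the short sketch below (an illustration only; the specification names and limits are hypothetical, not taken from the text) records each technical specification together with a numeric pass/fail acceptance criterion and checks a measured value against it:

from dataclasses import dataclass

@dataclass
class Specification:
    name: str      # e.g., "temperature error"
    unit: str      # engineering unit of the measured quantity
    lower: float   # lowest acceptable measured value
    upper: float   # highest acceptable measured value

    def passes(self, measured: float) -> bool:
        # Pass/fail acceptance criterion for one verification measurement.
        return self.lower <= measured <= self.upper

# Hypothetical specifications for the temperature-monitor example above.
specs = [
    Specification("temperature error", "deg C", -0.2, 0.2),
    Specification("response time", "s", 0.0, 2.0),
]
measured = {"temperature error": 0.15, "response time": 2.4}

for spec in specs:
    result = "PASS" if spec.passes(measured[spec.name]) else "FAIL"
    print(spec.name, measured[spec.name], spec.unit, result)

Writing the criteria down in a form like this before testing begins makes it straightforward to confirm later, during verification and validation, that every specification has in fact been tested.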
1.3 BACKGROUND
Investors or a company marketing survey may have expressed the need for a new or a better
product as an idea or in very general terms, but engineers working on the project design require
greater background knowledge. Knowledge in the form of publications on other, similar designs
may be found in
1. library literature searches,
2. Web-based searches, or
3. patent searches.
Patent searches are a key element in products being designed and developed for the commercial
sector, since patent infringement may lead to lawsuits. Without doubt, searches take a lot of
time, and results of the searches must be analyzed, documented, and reported in the “back-
ground” section of the proposal, such as a small business incentive research (SBIR) proposal.
This step also takes a lot of time, often months; it may be continued throughout the product
development period.
1.4 DESIGN PHASES
Most student teams believe design is a single-step process, which it is not. Designing in
industry is a multipass process that includes various phases: the conceptualization phase, feasibility
study phase, preliminary design phase, and detailed design phase. In the feasibility study phase,
conceptualization and “brainstorming” ideas are roughed out.
In searching for a solution, selection of the most feasible ideas for refinement is a team
decision. The following heuristics are often applied in the search for a solution:
1. challenge basic assumptions
2. employ analogies
3. identify critical parameters
4. switch functions
5. alter sequence of steps
6. reverse the problem
7. separate or combine functions
8. use vision
9. employ basic engineering principles.
In the preliminary design phase, the most promising ideas are explored and analyzed in
more detail. Finally, in the detailed prototype design phase, the team develops highly detailed
[FIGURE 1.1: The design methodology of IWT, based on "real-world" experience. The phases shown are Feasibility, Design, Implementation, Integration, Alpha Test, and Beta Test, leading to the Final Product.]
drawings with final specifications prepared for the “best design option.” In all design phases,
the following four steps are repeated:
1. analyzing each potential solution,
2. choosing the best results or solutions,
3. documenting the results or solutions,
4. communicating results or solutions to management.
A detailed design phase is when companies usually construct or fabricate prototypes. Marketing
strategies would be in development in parallel about this time. Once the prototype is finished,
it undergoes “verification” or “alpha” testing and evaluation of design specifications, perfor-
mance parameters, safety, etc. The results are compared against the acceptance criteria. If the
specification fails the acceptance criteria, the prototype may undergo a redesign. If all tests are
acceptable, the product may undergo validation or “beta” testing, which means the product will
be sent to various users to test the product with noncompany subjects.
Innovative Wireless Technologies (IWT) structured their strict design methodology on
“real-world” experience (Fig. 1.1). Their feasibility phase includes requirements, project budget,
project schedule, and design specifications. In their design phase, IWT includes technical
patent search, functional design specs (FDS), FDS design review, and product verification
plan. The implementation phase includes prototype development, integration test plan, patent
review, factory prototype review, and product launch. The integration phase includes plans
and documentation for factory prototype development, integration testing, and environmental
testing. In the alpha test phase (verification), test engineers conduct alpha testing, preseries
production, release the design, and develop the beta test plan. Validation test phase includes
beta testing, release customer documentation, initial series production, yield analysis, and
training [6].
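
The IWT phases can also be viewed as a set of phase gates. The sketch below simply restates the deliverables listed above as a machine-readable checklist (it is illustrative only, not IWT's actual tooling), with a helper that reports whether a phase's deliverables are complete before the project moves on:

PHASE_GATES = {
    "feasibility": ["requirements", "project budget", "project schedule",
                    "design specifications"],
    "design": ["technical patent search", "functional design specs (FDS)",
               "FDS design review", "product verification plan"],
    "implementation": ["prototype development", "integration test plan",
                       "patent review", "factory prototype review", "product launch"],
    "integration": ["factory prototype development", "integration testing",
                    "environmental testing"],
    "alpha test": ["alpha testing", "preseries production", "design release",
                   "beta test plan"],
    "beta test": ["beta testing", "customer documentation", "initial series production",
                  "yield analysis", "training"],
}

def gate_open(phase, completed):
    # A phase exit is allowed only when every required deliverable is done.
    return all(item in completed for item in PHASE_GATES[phase])

done = {"requirements", "project budget", "project schedule"}
print(gate_open("feasibility", done))   # False: design specifications still missing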
Testing in industry is not the same as a test conducted by a student in academic lab-
oratories. The team must develop test plans and test procedures before the breadboard and
prototype system is developed and tested. In the verification testing, the prototype (hardware
and software) is also verified in an integrated test environment with all necessary test equipment
for EMI/EMC and pre-FCC verification.
1.5 SUMMARY
Project managers and their teams should focus their attention and efforts on meeting project
objectives and producing positive results. It is recommended that instead of blaming team
members, the focus should be turned to fixing the problem. Project managers should establish
regular, effective meetings with agendas and openness. Part of the program manager's tasks is
nurturing team members, encouraging them to help each other, and acknowledging individual
and group accomplishments in public. Additional information on teams may be
found in [7, 8].
REFERENCES
[1] Webster’s New Collegiate Dictionary, G&C Merriam Co., Springfield, MA, 1973.
[2] J. R. Katzenbach and D. K. Smith, The Wisdom of Teams. Boston, MA: Harvard Business School Press, 1993.
[3] Surviving the Group Project: A Note on Working in Teams, 2005. [Online]. Available: http://web.cba.neu.edu/∼ewertheim/teams/ovrvw2.htm
[4] D. A. Nadler, Designing Effective Work Teams. New York: Delta Consulting Group, 1985.
[5] G. P. Shea and R. A. Guzzo, "Group effectiveness: What really matters," Sloan Manage. Rev., vol. 3, pp. 25–31, 1987.
[6] Innovative Wireless Technologies (IWT) Design Methodology, 2004. [Online]. Available: http://www.iwtwireless.com
[7] V. R. Johnson. (2005, Mar.). Understanding and assessing team dynamics. IEEE-USA Today's Eng. [Online]. Available: http://www.todaysengineer.org/2005/Mar/team dynamics.asp
[8] V. R. Johnson. (2004, Nov.). Understanding and assessing team dynamics. IEEE-USA Today's Eng. [Online]. Available: http://www.todaysengineer.org/2004/Nov/self-assessment.asp
CHAPTER 2
Project Management Overview
Before diving into project management, let us begin with a simple question. What is a “project”?
A project is a temporary endeavor undertaken to accomplish a unique purpose. I have managed
projects ranging from simple ones, like the development of an automated EEG analyzer,
to complex, high-dollar-value efforts, such as the installation of Spain's Air Defense System. Yet,
regardless of complexity, all projects have similar attributes:
1. Each project has its own “unique purpose.”
2. Projects are “temporary” with time constraints.
3. Projects require resources (manpower, funding, and materials), often from various
areas.
4. Commercial projects should, and usually do, have a primary sponsor and/or customer.
5. All projects involve uncertainty.
6. Every project is constrained in different ways by its
a. scope goals
b. time goals
c. cost goals.
It is the project manager’s responsibility to balance these three competing goals. So, what is
project management? Project management is “The application of knowledge, skills, tools, and
techniques to project activities in order to meet or exceed stakeholder needs and expectations
from a project” [1].
In the definition of project management, the term "stakeholders" is often misinterpreted to mean
investors with stock in the company ("stockholders"); rather, stakeholders are the people
involved in or affected by project activities. Thus, "stakeholders" include the project sponsor and
all members of the project team, support staff, customers, users, suppliers, and even opponents
to the project.
2.1 PROJECT MANAGEMENT KNOWLEDGE AREAS
Knowledge areas describe the nine key competencies that project managers must develop.
There are four core knowledge areas that lead to specific project objectives (scope, time, cost,
and quality). There are also four facilitating knowledge areas that are the means through
which the project objectives are achieved (human resources, communication management, risk
management, and procurement management). The final knowledge area (project integration
management) affects and is affected by all of the other eight knowledge areas. Although much
of the knowledge needed to manage the projects is unique to project management, nevertheless,
project managers must also have knowledge and experience in “general management” and in
the application area of the project. Ultimately, project managers must focus on meeting specific
project objectives. This book will develop and elaborate on each of the nine knowledge areas in
separate chapters.
There are several project management tools and techniques that assist project managers
and their teams in various aspects of project management. Some specific tools include
1. project charter
2. work breakdown structure (WBS) or scope
3. Gantt charts, PERT charts, critical path analysis (time)
4. cost estimates and earned value analysis (cost).
Most of these tools are developed with software programs, such as Microsoft Project
2003.
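
To illustrate the critical path analysis mentioned in item 3, the toy sketch below (the activities, dependencies, and durations are made up for illustration, not taken from the text) performs the CPM forward pass: each activity's earliest finish is its duration added to the latest earliest finish of its predecessors, and the project duration is the largest of these values:

durations = {"A": 3, "B": 5, "C": 2, "D": 4}              # days (hypothetical)
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
earliest_finish = {}

def finish(act):
    # Earliest finish = earliest start (latest predecessor finish) + duration.
    if act not in earliest_finish:
        start = max((finish(p) for p in preds[act]), default=0)
        earliest_finish[act] = start + durations[act]
    return earliest_finish[act]

project_duration = max(finish(a) for a in durations)

# Walk backward from the last-finishing activity to recover the critical path.
path, current = [], max(durations, key=finish)
while True:
    path.append(current)
    if not preds[current]:
        break
    current = max(preds[current], key=finish)

print(project_duration, "-".join(reversed(path)))          # 12 A-B-D

A full critical path treatment would also run a backward pass to compute the slack (float) of each noncritical activity; scheduling tools such as Microsoft Project perform both passes automatically.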
So what is the advantage of implementing project management on any project? The
advantages that project management offers begin with the fact that bosses, customers, and other
stakeholders do not like surprises, especially "bad news surprises." Good project management
provides assurance and reduces the risk of project failure or large cost overrun. Project man-
agement provides the tools and environment to plan, monitor, track, and manage schedules,
resources, costs, and quality of the product (project). Project management also provides a his-
tory or metrics base for future planning as well as good documentation, which is required by
the Food and Drug Administration (FDA) and Good Manufacturing Practice. Perhaps for the
students, the greatest advantage is that project team members learn and grow by working in a
cross-functional team environment.
Some books contend that “modern project management” began with the Manhattan
Project, which the U.S. military led to develop the atomic bomb. Yet, some may argue that
it was not until the systems approach, which described a more analytical approach to management
and problem solving, emerged in the 1950s that modern project management really began.
The systems approach to project management includes three parts:
1. Systems philosophy: Project managers should view projects and things as systems, inter-
acting components working within an environment to fulfill some purpose.
2. Systems analysis: Project managers should use a problem-solving approach, which en-
gineering students are taught.
3. Systems management: Project managers should address business, technological, and
organizational issues before making changes to systems.
Project managers need to take a holistic or systems view of a project and understand how it
is situated within the larger organization, since projects developed must operate in a broad
organizational environment; meaning, “projects cannot be run in isolation.”
2.2 PROJECT LIFE CYCLES AND PROJECT PHASES
A project life cycle is a collection of project phases, which vary with the project or industry.
Table 2.1 shows some general phases that include concept, development, implementation, and
support.
2.3 PRODUCT LIFE CYCLES
Products also have life cycles. The systems development life cycle (SDLC) is a framework for
describing the phases involved in developing and maintaining information systems. Typical
SDLC phases include planning, analysis, design, implementation, and support. There are
TABLE 2.1: Phases of the Project Life Cycle
Project feasibility:
  Concept: management plan; preliminary cost estimates; 3-level WBS.
  Development: project plan; budgetary cost estimates; 6+-level WBS.
Project acquisition:
  Implementation: last work package; definitive cost estimates; bulk of time spent in this phase.
  Closeout: completed work; lessons learned; customer acceptance.
several SDLC models such as
1. the waterfall model that has well-defined, linear stages of systems development and support,
2. the spiral model which shows that products are developed using an iterative approach rather than a linear approach.
In addition, there are the incremental release model and the prototyping model, which is used
for developing prototypes to clarify the user requirements.
Project life cycle applies to all projects, regardless of the products being produced, and
product life cycle models vary considerably based on the nature of the product. Most large
projects are developed as a series of smaller projects, and then integrated. Project management
activities are done through the entire product life cycle phases. A project should successfully
pass through each of the project phases in order to continue on to the next phase of the life
cycle. To verify that all the requirements of a phase were completed satisfactorily, the program
manager should conduct project reviews (also called project management review or program
management review) at preset project milestones. Management reviews (often called phase exits
or kill points) should occur after each phase to evaluate the project’s progress, likely success,
and continued compatibility with organizational goals.
2.4 ORGANIZATIONAL STRUCTURES
To understand how the various organizational structures and frames can help or impede the
program manager in product development, one needs to understand organizations. There are
four basic organizational frames:
1. The structural frame that focuses on roles and responsibilities, coordination and control.
Organization charts help define this frame.
2. The political frame that assumes organizations are coalitions composed of varied indi-
viduals and interest groups. Conflict and power are key issues within this frame.
3. The human resources frame that focuses on providing harmony between needs of the
organization and needs of the people.
4. The symbolic frame that focuses on symbols and meanings related to events. In this
frame, culture is important.
Most managers and people understand what organizational charts are; yet, many new managers
try to change organizational structure rather than concentrating on other changes that are
really needed. There are three basic organization structures: functional, project, and matrix,
as shown in Table 2.2. The first column in Table 2.2 lists project characteristics, and their
influences on the project, based on the type of organizational structure, are compared in the
rows. The table also indicates that the project-oriented organizational structure provides the
project manager with the highest level of authority, personnel and administrative staff that are
assign “full-time” to work on the project, and that his role and title are “full-time” and “project
manager,” respectively. Although the organizational structure influences the project manager’s
authority, project managers also need to remember and address the human resources, political,
and symbolic frames.
Recall that project stakeholders are the people involved in or affected by project activities;
hence, project managers must take time to identify, understand, and manage relationships with
all project stakeholders. Using the four frames of organizations can help meet stakeholder needs
and expectations.
2.5 PROJECT MANAGEMENT JOB FUNCTIONS
At this point, you (the reader) may still be asking, “But what does the project manager do?”
Most organizations establish job positions with a description of the responsibilities and func-
tions of the position. The job description for the position of project manager usually requires
that the project manager define the scope of the project, form a team, identify stakeholders, identify
decision-makers, and establish escalation procedures should the project encounter major prob-
lems requiring a higher-level decision. He is also responsible for the development of a detailed
task list or work breakdown structures for the project. Additionally, he is responsible for the
estimation of the time requirements not only for the project, but also for each task in the
work breakdown structure. The project manager is responsible for the development of the initial
project management flow chart and identification of required resources with budget estimates.
He evaluates the project requirements, identifies and evaluates the risks, and is responsible for
preparing contingency plans.
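
As a simple illustration of such a task list (the WBS elements and durations below are hypothetical, not from the text), a work breakdown structure with per-task time estimates can be kept as a nested structure so that estimates roll up to every level:

wbs = {
    "1 Prototype": {
        "1.1 Requirements definition": 10,   # estimated days
        "1.2 Circuit design": 15,
        "1.3 Firmware": {
            "1.3.1 Coding": 20,
            "1.3.2 Unit test": 5,
        },
    },
    "2 Verification testing": 12,
}

def rollup(element):
    # Total estimated days for a WBS element (leaf task or sub-tree).
    if isinstance(element, dict):
        return sum(rollup(child) for child in element.values())
    return element

print(rollup(wbs))                      # 62 days for the whole project
print(rollup(wbs["1 Prototype"]))       # 50 days for WBS element 1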
The program manager must identify interdependencies within and outside of the orga-
nization. He is required to identify and track critical milestones, and conduct or participate in
project progress and phase reviews. He has the responsibility of securing the needed resources
in a timely manner. Additionally, the program manager is responsible for the management
of the change control process, which may require establishment of a “change control board”
to administer the handling of product configuration and changes to the baseline configura-
tion. The final job function of project managers is the collection of information and prepa-
ration of project status reports in documents and in presentation at higher level “program
reviews” [2].
TABLE 2.2: Influences of Organization Structure on Projects
(Organization types compared: functional, weak matrix, balanced matrix, strong matrix, and projectized.)
Project manager's authority: functional, little or none; weak matrix, limited; balanced matrix, low to moderate; strong matrix, moderate to high; projectized, high to almost total.
Percent of the performing organization's personnel assigned full-time to project work: functional, virtually none; weak matrix, 0–25%; balanced matrix, 15–60%; strong matrix, 50–95%; projectized, 85–100%.
Project manager's role: functional, part-time; weak matrix, part-time; balanced matrix, full-time; strong matrix, full-time; projectized, full-time.
Common title for the project manager's role: functional, project coordinator/project leader; weak matrix, project coordinator/project leader; balanced matrix, project manager/project leader; strong matrix, project manager/program manager; projectized, project manager/program manager.
Project management administrative staff: functional, part-time; weak matrix, part-time; balanced matrix, part-time; strong matrix, full-time; projectized, full-time.
TABLE 2.3: Comparison of Characteristics of Effective and Ineffective Project Managers

EFFECTIVE PROJECT MANAGER
Leadership by example
Visionary
Technically competent
Decisive
Good communicator
Good motivator
Stands up to upper management when necessary
Supports team members
Encourages new ideas

INEFFECTIVE PROJECT MANAGER
Sets bad example
Not self-assured
Lacks technical expertise
Indecisive
Poor communicator
Poor motivator
It is strongly suggested by the authors that project managers develop the following
skills:
1. Communication skills: listening and persuading. From elementary grades, we were taught
how to speak, read, and write, but we were not taught how to listen. Communication
theory requires three elements for effective communication: a transmitter (the speaker),
a common medium (the language), and the receiver (the listener).
2. Organizational skills: planning, goal setting, and analyzing.
3. Team building skills: people skills, empathy, motivation, and esprit de corps.
4. Leadership skills: sets example, energetic, vision (big picture), delegates, and positive.
5. Coping skills: flexibility, creativity, patience, and persistence.
6. Technological skills: technical knowledge, project knowledge, and experience.
Table 2.3 compares the most common characteristics found amongst “effective” and
“ineffective” project managers.
In summary, project management may be viewed as a number of interlinked management
processes that include initiating processes, planning processes, executing processes, controlling
processes, and closing processes. Table 2.4 shows the relationship between the project manager’s
knowledge areas, project processes, and the activities required in project management. From
TABLE 2.4: Relationships Among Project Processes, Activities, and Knowledge Areas

KNOWLEDGE AREA: activities by project process (initiating, planning, executing, controlling, closing)
Integration: planning (project plan development); executing (project plan execution); controlling (overall change control)
Scope: initiating (initiation); planning (scope planning, scope definition); controlling (scope verification, scope change control)
Time: planning (activity definition, activity sequencing, activity duration estimation, schedule development); controlling (schedule control)
Cost: planning (resource planning, cost estimating, cost budgeting); controlling (cost control)
Quality: planning (quality planning); executing (quality assurance); controlling (quality control)
Human resources: planning (organizational planning, staff acquisition); executing (team development)
Communications: planning (communications planning); executing (information distribution); controlling (performance reporting); closing (administrative closure)
Risk: planning (risk identification, risk quantification, risk response development); controlling (risk response control)
Procurement: planning (procurement planning, solicitation planning); executing (solicitation, source selection, contract administration); closing (contract closeout)
the table, one can surmise that the project manager will use all knowledge areas in the planning
phase, but must apply integration, scope, quality, communications, and procurement knowledge
areas throughout the project processes or “life cycle” [3]. On the other hand, team members
will spend the majority of their time in the “execution” phase of the project life cycle.
REFERENCES
[1] Project Management Body of Knowledge (PMBOK Guide), Project Management Institute, Newtown Square, PA, 1996, p. 6.
[2] Building a Foundation for Tomorrow: Skills Standards for Information Technology, Northwest Center for Emerging Technologies, Bellevue, WA, 1997.
[3] Project Management, Univ. Washington [Online]. Available: http://www.washington.edu/computing/pm, 2003.
CHAPTER 3
Project Integration Management
Project managers who tend to focus on technical issues or on too many details have trouble keeping
in mind and visualizing the “big picture.” As stated in the previous chapter, project managers
must coordinate all of the other knowledge areas throughout a project’s life cycle [1]. Table 2.4
(Chapter 2) shows that project plan development is a part of the “integration” knowledge area
that takes place during the project “planning process.” Project plan development entails taking
the results of other planning processes and putting them into a consistent, coherent document
(the project plan, also referred to as the program manager’s project plan). Integration during the
“executing process” entails carrying out the project plan (project plan execution), and during
the “controlling process,” the program manager oversees the overall change control process
and ensures coordination of changes across the entire project. Interface management involves
identifying and managing the points of interaction between various elements of the project.
Project managers must establish and maintain good communication and relationships
across organizational interfaces.
3.1 PROJECT PLAN DEVELOPMENT
A project plan is a document used to coordinate all project planning documents. The main
purpose of project plans is to “guide project execution,” and to assist the project manager in
leading the project team and assessing project status. Project plans are unique just as projects
are unique. The main attributes of project plans are as follows:
1. They should be dynamic.
2. They should be flexible.
3. They should be updated as changes occur.
4. They should first and foremost guide project execution.
Let us reemphasize the fourth attribute. Project plans are guides that may be changed; they are
not rigid regulations or laws.
The most common elements of a project management plan include an introduction or
an overview of the project, a section describing how the project is organized, a section on
management and technical processes used on the project, a section describing the work to
be done, and a section containing the work schedule, with details on the work breakdown
structure (WBS) and budget information. Table 3.1 contains a sample outline for a project management
plan (PMP). The introduction section should start with a statement of the problem as the very
first sentence of the section followed by the rationale for working on the project. The section
should contain full sentences and paragraphs on the project overview, background on previous
or similar projects, any reference materials, the expected outputs or deliverables, and a
glossary of definitions and/or acronyms [2]. The project organization section should contain
a paragraph describing the process model to be used with justification, paragraphs discussing
the organizational structure, its boundaries, and interfaces. Project responsibilities should be
clearly assigned and described. The section on managerial process should contain paragraphs
detailing the managerial objectives, priorities, assumptions, dependencies, and constraints, and
the monitoring and controlling mechanisms to be used. In addition, the section should contain
detailed descriptions of the staffing plan and of the risk management process. The technical
process section should contain detailed information on the method, tools, and techniques
in the product development, as well as documentation on hardware and software design,
drawings, operation description, and maintenance. The section should discuss project support
functions. The last section contains descriptions of the work packages, dependencies, resource
requirements with justifications, budget and resource allocations, and the project schedule.
3.2 PROJECT PLAN EXECUTION
Project plan execution involves managing and performing the work described in the project
plan. In general, the majority of time and money is usually spent on the execution phase. The
application area or the project directly affects project execution, because the products of the
project are produced during execution. Project managers will use the following skills during the
execution phase:
1. General management skills, which include leadership, communication, and political
skills.
2. Product skills and knowledge.
In addition, the execution phase requires skills in the use of specialized tools and techniques.
Techniques to assist the project manager during project execution include the work
authorization system that provides a method for ensuring that qualified people do work at the
right time and in the proper sequence, and conducting “status review meetings” on a regular
(weekly or monthly) schedule to exchange project information and evaluate project progress.
The authors prefer weekly face-to-face meetings with monthly written status (progress) reports.
TABLE 3.1: Sample Outline for a Project Management Plan (PMP)

INTRODUCTION: project overview; deliverables; evolution of the PMP; reference materials; definitions and acronyms
PROJECT ORGANIZATION: process model; organizational structures, boundaries, and interfaces; project responsibilities
MANAGERIAL PROCESS: management objectives and priorities; assumptions, dependencies, and constraints; risk management; monitoring and controlling mechanisms; staffing plan
TECHNICAL PROCESS: methods, tools, and techniques; software and hardware documentation; project support functions
WORK PACKAGES, SCHEDULE, AND BUDGET: work packages; dependencies; resource requirements; budget and resource allocations; schedule
To assist project managers in managing projects, there is specialized project management software,
e.g., Microsoft Office Project. The final project process requiring the “integration” knowledge
area is the “project controlling process,” which requires the project manager to oversee and
control product or project changes. For the medical device industry, the FDA requires a paper trail
of documents on any changes to a product.
3.3 PROJECT CONTROLLING PROCESS AND CHANGE CONTROL
Overall change control involves identifying, evaluating, and managing changes throughout the
project life cycle. Three main objectives of change control are to
1. influence the factors that create changes to ensure they are beneficial,
2. determine that a change has occurred, and
3. manage actual changes when and as they occur.
Changes to a project or product may result from the need to take some corrective action, change
requests, or from the reviews of status and progress reports. Starting with the established baseline
plan or documents, status and progress reports are compared to the baselines. If it is determined
that changes have occurred or should (need) to occur, then some corrective action or actions
must be taken, which requires updating and documenting the update in the modified project
plans. The updated project plans become the current established baseline. The process is not
as simple as it may appear, since government regulations mandate a formal “change control
system.”
3.4 CHANGE CONTROL SYSTEM
A change control system is a formal, documented process that describes when and how the
official project documents and work may be changed. The change control system describes
who is authorized to make changes and how to make the changes. Thus, the project manager
must establish a change control board (CCB); develop a configuration management office with
personnel and a configuration management plan; and establish a process for communicating
changes to all stakeholders. The CCB is a formal group of people responsible for approving
or rejecting changes on a project. The CCB provides guidelines for preparing change
requests, evaluates all change requests, and manages the implementation of approved changes. The
board should include stakeholders from the entire organization.
Since some CCBs only meet occasionally, it may take too long for changes to occur in a
timely manner. Therefore, some organizations have policies in place for time-sensitive changes.
The “48-hour policy” permits project team members to make decisions; they then have an
additional 48 hours to reverse the decision pending senior management approval.
3.5 CONFIGURATION MANAGEMENT
Configuration management ensures that the products and their descriptions are correct and
complete. It concentrates on the management of technology by identifying and controlling the
functional and physical design characteristics of products.
Configuration management specialists identify and document configuration require-
ments, control changes, record and report changes, and audit the products to verify confor-
mance to requirements. Because of its importance, the authors suggest that project managers
view project management as a process of constant communication and negotiation. Managers
should plan for change and, therefore, establish a formal change control system, including a
CCB. The manager should oversee the use of good configuration management with defined
procedures for making timely decisions on smaller changes. Managers should not rely solely
on verbal communications, but should use written and oral performance reports to help iden-
tify and manage change. Project managers should learn to use project management and other
software to help manage and communicate changes.
3.6 NEED FOR TOP MANAGEMENT COMMITMENT
Several studies cite top management commitment as one of the key factors associated with
project success. A study by Pinto and Slevin in 1987 lists the key factors as
1. top management support
2. clear project mission
3. good project schedule/plan
4. good client consultation.
The Standish Group study (1995), on the other hand, lists the key factors as
1. executive management support
2. clear statement of requirements
3. proper planning
4. user involvement.
Top management can help project managers secure adequate resources, get approval for unique
project needs in a timely manner, receive cooperation from people throughout the organization,
and learn how to be better leaders. Project managers should meet the need for organizational
standards. Senior management should encourage the use of standard forms and software for
project management, the development and use of guidelines for writing project plans or provid-
ing status information, and the creation of a project management office or center of excellence.
It is a well-tried and proven fact that standards and guidelines help project managers to be more
effective.
REFERENCES
[1] Project Integration, Project Management, University of Washington [Online]. Available: http://www.washington.edu/computing/pm/plan/integration.html, 2003.
[2] Project Plan, Management, University of Washington [Online]. Available: http://www.washington.edu/computing/pm/plan, 2003.
CHAPTER 4
Project Scope Management
Studies in the mid-1990s cite a clear project mission with a clear statement of requirements
as being important for project success; e.g., the Keller Graduate School of Management cites
the lack of proper project definition and scope as the main reasons for project failure. Defining the project,
product, or problem should not be limited to what is to be accomplished, but should also include
what will not be accomplished; i.e., setting boundaries as to what the product is expected to do,
and what the product is not being designed to do.
So what is project scope management? Project scope refers to all the work involved in
creating the products of the project and the processes used to create them. It is important that
project scope management include the processes involved in defining and controlling “what is”
and/or “what is not” included in the project. Therefore, it is essential that the project team and
stakeholders must have the same understanding of what products will be produced as a result
of the project, and what processes will be used in producing them.
4.1 PROJECT SCOPE MANAGEMENT PROCESSES
Recall from Table 2.4 of Chapter 2 that project managers will use the project scope knowledge
area throughout the project processes. Project initiation process occurs at the beginning of a
project or when the project continues from a completed phase to the next phase.
The first step in initiating the projects is to look at the big picture or strategic plan of
an organization. Strategic planning involves determining long-term business objectives, and
it is the project manager's responsibility to make sure that projects support strategic and financial business
objectives.
Many organizations follow a planning process for selecting projects. The first step is to
develop a strategic plan based on the organization’s overall strategic plan. The second step is
to perform a business area analysis, and then potential projects, project scope, benefits, and
constraints are defined. The final step is to select the most viable projects and assign resources.
During the planning process, the project manager develops project scope planning documents
to provide the basis for future project decisions. It is during this process that the project
manager develops the scope definition, subdividing the major project deliverables into smaller,
more manageable components. During the executing process, the project scope verification
documentation is developed and executed to formalize acceptance of the project scope.
Project scope change control documents are developed and used during the controlling process
to ensure compliance in controlling changes to project scope.
4.2 SELECTING PROJECTS
There are usually more projects than the available time and resources to implement them;
therefore, it is important to follow a logical process in selecting projects to work on. Methods
for selecting projects to work on include focusing on broad needs, categorizing projects, financial
methods, and weighted scoring models.
It is often difficult to provide strong justification for many projects, even though everyone
agrees they have a high value. The “focusing on broad organizational needs” approach is based
on meeting three important criteria for projects:
1. There must be a need for the project.
2. Funds must be available for the project.
3. There must be a strong will to make the project succeed.
The “categorizing projects” approach is based on the following categories:
1. What does the project address?
a. a problem,
b. an opportunity, or
c. a directive from higher management.
2. How long will the project take, and when is it needed?
3. What is the overall priority of the project within the organization?
The “financial analysis of projects” approach is based on the premise that financial measures
are an important consideration in selecting projects. Three primary methods for determining the
projected financial value of projects are net present value (NPV) analysis, return on investment
(ROI), and payback analysis. NPV analysis is a method of calculating the expected net monetary
gain or loss from a project by discounting all the expected future cash inflows and outflows to
the present point in time. If financial value is a key criterion, then projects with a positive NPV
should be considered: the higher the NPV, the better.
ROI is the income divided by investment, as shown in Eq. (4.1):
ROI = (total discounted benefits − total discounted costs) / discounted costs    (4.1)
Most organizations have a required rate of return or minimum acceptable rate of return on
investment for projects; thus, the higher the ROI, the better.
Another important financial consideration is “payback analysis.” The payback period is
the amount of time it will take to recoup, in the form of net cash inflows, the net dollars invested
in a project. Payback occurs when the net cumulative discounted benefits (discounted benefits minus discounted costs) become greater than
zero. Many organizations want projects to have a fairly short payback period.
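To make these three financial measures concrete, here is a minimal Python sketch; the yearly cash flows and the 8% discount rate are entirely hypothetical illustration values, not figures from the text. It discounts the benefits and costs, then reports NPV, ROI per Eq. (4.1), and the discounted payback year.

```python
# Minimal sketch of NPV, ROI (Eq. 4.1), and payback analysis for project selection.
# All cash flows and the discount rate below are hypothetical illustration values.

def discount(value, rate, year):
    """Discount a single cash flow back to present value."""
    return value / (1.0 + rate) ** year

def evaluate_project(benefits, costs, rate):
    """benefits/costs: lists of yearly amounts, index 0 = year 0."""
    disc_benefits = [discount(b, rate, t) for t, b in enumerate(benefits)]
    disc_costs = [discount(c, rate, t) for t, c in enumerate(costs)]

    npv = sum(disc_benefits) - sum(disc_costs)
    roi = (sum(disc_benefits) - sum(disc_costs)) / sum(disc_costs)  # Eq. (4.1)

    # Payback: first year in which cumulative discounted benefits minus costs turns positive.
    cumulative, payback_year = 0.0, None
    for t, (b, c) in enumerate(zip(disc_benefits, disc_costs)):
        cumulative += b - c
        if cumulative > 0 and payback_year is None:
            payback_year = t
    return npv, roi, payback_year

if __name__ == "__main__":
    benefits = [0, 60_000, 70_000, 80_000]      # hypothetical yearly benefits
    costs = [100_000, 10_000, 10_000, 10_000]   # hypothetical yearly costs
    npv, roi, payback = evaluate_project(benefits, costs, rate=0.08)
    print(f"NPV = ${npv:,.0f}, ROI = {roi:.1%}, payback in year {payback}")
```

A positive NPV, a high ROI, and a short payback period would all argue for selecting the project.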
4.3 WEIGHTED SCORING MODEL
A weighted scoring model is a tool that provides a systematic process for selecting projects
based on many criteria. The first step in the weighted scoring model is to identify the criteria
important for the project selection process. The second step is to assign weights (percentages)
to each criterion so that the total weights add up to 100%. The next step is to assemble an
evaluation team, and have each member evaluate and assign scores to each criterion for each
project. In the last step, the scores are multiplied by the weights and the resulting products
are summed to get the total weighted scores. Projects with higher weighted scores are the best
options for selection, since “the higher the weighted score, the better.”
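The four steps above can be sketched in a few lines of Python; the criteria, weights, and scores below are hypothetical illustrations, not values taken from the text.

```python
# Weighted scoring model: weights are percentages that must sum to 100%;
# scores are the evaluation team's ratings of each project against each criterion.
criteria_weights = {            # hypothetical weights, must total 1.0 (100%)
    "supports business objectives": 0.25,
    "strong internal sponsor": 0.15,
    "realistic technology": 0.10,
    "positive NPV": 0.30,
    "low risk": 0.20,
}

project_scores = {              # hypothetical 0-100 scores per criterion
    "Project A": [90, 70, 50, 25, 20],
    "Project B": [60, 90, 90, 80, 70],
    "Project C": [50, 50, 50, 80, 90],
}

assert abs(sum(criteria_weights.values()) - 1.0) < 1e-9  # weights add up to 100%

weights = list(criteria_weights.values())
for name, scores in project_scores.items():
    total = sum(w * s for w, s in zip(weights, scores))
    print(f"{name}: weighted score = {total:.1f}")

# The project with the highest weighted score is the best candidate for selection.
```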
4.4 PROJECT CHARTERS
After an organization or the program manager has decided what project to work on, it is
important to formalize projects with official documents. A project charter is a document that
formally recognizes the existence of a project and provides direction on the project’s objectives
and management. It is important to have key project stakeholders and senior leadership (man-
agement) sign a project charter to acknowledge the agreement on the need and intent of the
project. Either the project charter or the project management plan should contain a formal
scope statement. A scope statement is a document used to develop and confirm a common
understanding of the project scope, and it should include the following sections: a project justi-
fication, a brief description of the project’s products, a summary of all project deliverables, and
a statement of what determines project success (What are the criteria for the project’s success?).
4.5 WORK BREAKDOWN STRUCTURE
After completing scope planning, the next step is to further define the work by breaking it
into manageable pieces. Good scope definition helps improve the accuracy of time, cost, and
resource estimates, defines a baseline for performance measurement and project control, and
aids in communicating clear work responsibilities. A WBS is an outcome-oriented analysis of
the work involved in a project that defines the total scope of the project.
TABLE 4.1: Example of WBS in Tabular Form
1.0 Concept
1.1 Evaluate current systems
1.2 Define requirements
1.2.1 Define user requirements
1.2.2 Define content requirements
1.2.3 Define product requirements
1.3 Define specific functionality
1.4 Define risks and risk management approach
1.5 Develop project plan
1.6 Brief web development team
2.0 Product design
3.0 Product development
3.1 Product testing
4.0 Roll out
5.0 Support
6.0 Deliverables
It is a foundation document in project management, because it provides the basis for
planning and managing project schedules, costs, and changes. An example of a WBS is given
in Table 4.1.
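For readers who keep project data in software, the tabular WBS of Table 4.1 maps naturally onto a nested data structure. The sketch below mirrors the table's tasks and numbering; the representation itself is only an illustration, not a prescribed format.

```python
# A WBS as a nested list of (task, subtasks) pairs, mirroring Table 4.1.
wbs = [
    ("Concept", [
        ("Evaluate current systems", []),
        ("Define requirements", [
            ("Define user requirements", []),
            ("Define content requirements", []),
            ("Define product requirements", []),
        ]),
        ("Define specific functionality", []),
        ("Define risks and risk management approach", []),
        ("Develop project plan", []),
        ("Brief web development team", []),
    ]),
    ("Product design", []),
    ("Product development", [("Product testing", [])]),
    ("Roll out", []),
    ("Support", []),
    ("Deliverables", []),
]

def print_wbs(items, prefix=""):
    """Recursively print WBS items with hierarchical numbering (1.0, 1.1, 1.2.1, ...)."""
    for i, (task, subtasks) in enumerate(items, start=1):
        number = f"{prefix}{i}" if prefix else f"{i}.0"
        print(f"{number} {task}")
        child_prefix = f"{number}." if prefix else f"{i}."
        print_wbs(subtasks, child_prefix)

print_wbs(wbs)
```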
4.6 APPROACHES TO DEVELOPING WORK BREAKDOWN STRUCTURES (WBSs)
There are four basic approaches to developing WBSs:
1. The use guidelines approach: Some organizations, like the Department of Defense
(DOD), provide guidelines for preparing WBSs.
2. The analogy approach: It often helps to review WBSs of similar projects.
3. The top-down approach: Start with the largest items of the project and keep breaking
them down.
4. The bottom-up approach: Start with the detailed tasks and roll them up.
Most project managers will use the top-down approach and may continue to break tasks down
further as the need arises.
Here are some basic principles for creating WBSs [1]:
1. A unit of work should appear at only one place in the WBS.
2. The work content of a WBS item is the sum of the WBS items below it.
3. A WBS item is the responsibility of only one individual, even though many people
may be working on it.
4. The WBS must be consistent with the way in which work is actually going to be
performed; it should serve the project team first and other purposes only if practical.
5. Project team members should be involved in developing the WBS to ensure consistency
and buy-in.
6. Each WBS item must be documented to ensure an accurate understanding of the scope
of work included and not included in that item.
7. The WBS must be a flexible tool to accommodate inevitable changes while properly
maintaining control of the work content in the project according to the scope statement.
It is very difficult to create a good scope statement and WBS for a project, and it is even more
difficult to verify the project scope and minimize scope changes. Many projects suffer from what
is referred to as “scope creep” and poor scope verification. Scope creep occurs when additional
requirements (specifications or configuration changes) are added to the project without going
through an official configuration control process. Johnson [2] published a list of the top 10
factors causing project problems. The list is given in Table 4.2. The factors ranked second and
third deal with inadequate scope definition and scope creep, respectively.
TABLE 4.2: Top 10 Factors Causing Project Problems

RANK  FACTOR
1     Lack of user input
2     Incomplete requirements and specifications
3     Changing requirements and specifications
4     Lack of executive support
5     Technology incompetence
6     Lack of resources
7     Unrealistic expectations
8     Unclear objectives
9     Unrealistic time frames
10    New technology
The following suggestions are offered for reducing incomplete and changing requirements:
1. Develop and follow a requirements management process.
2. Employ techniques such as prototyping, use case modeling, and joint application design
to thoroughly understand the user requirements.
3. Put all requirements in writing and create a requirements management database.
4. Use a process for reviewing requested changes from a systems perspective.
5. Provide adequate testing and emphasize completion dates.
REFERENCES
[1] D. I. Cleland, Project Management: Strategic Design and Implementation. New York:
[2]
McGraw-Hill, 1994.
J. Johnson. (1995, Jan.). CHAOS: The dollar drain of IT project failures. Appl. Dev.
Trends [Online]. Available: www.stadishgroup.com/chaos.html.
CHAPTER 5
Personal and Project Time Management
5.1 PERSONAL TIME MANAGEMENT
Project time management is similar to personal time management. Not many young individuals
(students) are very good at managing their time; yet it is a skill that, once acquired, may become a habit. I
wrote a small booklet on how to manage one’s time in college [1], which was used by the College
of Engineering for freshmen. To compare the similarities between personal time management
and project time management, let us start with the former:
To accomplish anything, one must first have a goal; however, a goal is no more than a
dream, unless you plan to accomplish the desired goal!
Setting goals should not be taken lightly, since those goals may impact one’s future career and
life. Therefore, be careful in determining goals and in planning how to achieve your goals. Start
with an outline of what you wish to happen, and be sure to set both short-range goals and
long-range goals. Then determine
1. Why are those goals necessary?
2. What are the benefits and consequences of each goal?
3. How can each goal be accomplished? (Planning)
In developing goals, be sure to make goals that are realistic, action-oriented, measurable, and
include “time limits” for accomplishment of each goal. Last but not least, “prioritize the list of
personal goals.”
When asked, “What leads to success in achieving personal goals?” the response is,
“Planning what needs to be done in order to achieve desired goals, and control, which means
using time efficiently and effectively.”
The next task is to create a general schedule for the week, which includes determining
tasks for the week, setting priorities with “due dates,” determining when to devote time to
the tasks, and determining the time limits (the amount of time for each task). The first step
is to fill in the working information needed for the “worksheet,” then fill in all the “time
blocks” scheduled for tasks (classes and labs). Next, fill in the pre-class and post-class (special
study) times, and any essential times, e.g., meals, sleep, etc. Is that all? No! There is the task
of managing “daily” activities. If a daily planner or organizer is used, it should be reviewed
regularly, completed tasks should be crossed out, and tasks that were not completed during
the week should be carried over to the next week. The final step is “implementation” of the
schedule: follow the schedule and do not procrastinate.
5.2 “WORK SMARTER, NOT HARDER” [1]
In summary, to successfully manage your personal time, determine requirements for the coming
week. Set priorities and goals for the week. Make a daily “do-list,” and determine daily priority
tasks. Review your daily “do-lists” in the morning before work and in the evening. Postpone
unnecessary activities/tasks, and do not spread yourself too thin (learn to say, “no!”). When
working, do only one task at a time.
5.3 PROJECT TIME MANAGEMENT
As noted in personal time management, schedules are important, and they are even more important
to project managers, who cite delivering projects on time as one of their biggest
challenges, since time schedules often have the least amount of flexibility. Unlike fictional
movies, time cannot be replayed in real life. Schedule issues are the main reason for delays
or conflicts on projects, especially during the execution phase of projects. In trying
to meet a schedule, products are often introduced before all “bugs” or problems with the
product have been resolved. It is not surprising to read that the average time overrun on projects
exceeds 200%. Recently, Chip Reid, NBC Nightly News, Washington, DC (June 6, 2006)
exceeds 200%. Recently, Chip Reid, NBC Nightly News, Washington, DC (June 6, 2006)
reported
A vital $7 billion program, now approaching $11 billion, with nowhere to go, critics say,
but up. A government investigation says the new polar satellite program is more than $3
billion over budget and as much as three years behind schedule. Why? The report blames
“poor management oversight” by government agencies [2].
A Congressional hearing [3] on the same project was held on June 8, 2006, and was reported in
the New York Times (June 9, 2006) [4]. Additionally, for those who may be interested, refer to
the full audit report on the polar satellite audit by the U.S. Department of Commerce, Office
of Inspector General, May 2006 [5].
5.4 PROJECT TIME MANAGEMENT PROCESSES
Project time management involves processes required to ensure timely completion of a project.
The processes include
1. activity definition
2. activity sequencing
3. activity duration estimating
4. schedule development
5. schedule control.
Project schedules are developed from the basic documents that initiate a project; for example,
the project charter includes the start and end dates of the project along with some budget information,
e.g., a budget ceiling not to exceed some target amount. The scope statement and work
breakdown structure (WBS) help define what will be done.
Activity definition involves developing a more detailed WBS and supporting explanations
to understand all the work to be done; whereas, activity sequencing involves reviewing activities
and determining the type of dependencies. Mandatory dependencies are inherent in the nature
of the work, which are considered as hard logic; on the other hand, discretionary dependencies
are defined by the project team and are considered as soft logic. External dependencies involve
relationships between project and nonproject activities. In order to use critical path analysis,
program managers must first determine dependencies.
5.5 PROJECT NETWORK DIAGRAMS
A project network diagram is one technique for showing activity sequencing. It is a schematic
display of the logical relationships among project activities and/or the sequencing of project
activities. In the “arrow diagramming method,” also called activity-on-arrow (AOA) project
network diagrams, activities are represented by arrows, and nodes or circles are the starting and
ending points of activities. A limitation of the arrow diagramming method is that it can show
only finish-to-start dependencies.
The steps in creating AOA diagrams are as follows:
1. Find all of the activities that start at the first node (node #1). Draw their finish nodes
and draw arrows between node #1 and those finish nodes. Put the activity letter or
name and duration estimate on the associated arrow.
2. Continue drawing the network diagram, working from left to right. Look for bursts
and merges. Bursts occur when a single node is followed by two or more activities. A
merge occurs when two or more nodes precede a single node.
3. Continue drawing the project network diagram until all activities that have dependencies
are included on the diagram.
4. As a rule of thumb, all arrowheads should face toward the right, and no arrows should
cross on an AOA network diagram.
5.6 PRECEDENCE DIAGRAMMING METHOD (PDM)
Instead of using arrows to represent activities, the precedence diagramming method uses boxes
to represent activities (tasks), and arrows show relationships between activities. Many project
managers will use software like Microsoft Project because of its visual approach in showing
the different types of dependencies. Figure 5.1 shows how the four types of dependencies are
presented in Microsoft Project.
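As a rough illustration of how these dependency types drive scheduling, the following sketch computes a successor task's earliest start date under each of the four types. The function name, the date arithmetic, and the example dates are assumptions made for illustration; they are not part of any particular tool's API.

```python
from datetime import date, timedelta

def successor_start(dep_type, pred_start, pred_finish, succ_duration_days, lag_days=0):
    """Earliest start date of a successor task under the four dependency types.
    A same-day handoff is assumed for simplicity."""
    lag = timedelta(days=lag_days)
    duration = timedelta(days=succ_duration_days)
    if dep_type == "FS":   # finish-to-start: successor starts when the predecessor finishes
        return pred_finish + lag
    if dep_type == "SS":   # start-to-start: successor starts when the predecessor starts
        return pred_start + lag
    if dep_type == "FF":   # finish-to-finish: successor must finish when the predecessor finishes
        return pred_finish + lag - duration
    if dep_type == "SF":   # start-to-finish: successor must finish when the predecessor starts
        return pred_start + lag - duration
    raise ValueError(f"unknown dependency type: {dep_type}")

# Hypothetical predecessor running June 1-10 and a 4-day successor task.
pred_start, pred_finish = date(2006, 6, 1), date(2006, 6, 10)
for dep in ("FS", "SS", "FF", "SF"):
    print(dep, successor_start(dep, pred_start, pred_finish, succ_duration_days=4))
```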
5.7 ESTIMATION OF ACTIVITY TIMES (DURATION)
After defining activities and determining their sequence, the next step in time management is
to estimate duration time for each activity. It is important to get the individuals who will be
doing the actual work to help project managers create the activity estimates, and then have an
expert in this area review the results.
FIGURE 5.1: Activity or task dependencies. There are four types of activity dependencies: finish-to-
start (FS), start-to-start (SS), finish-to-finish (FF), and start-to-finish (SF)
FIGURE 5.2: Example of a Gantt chart. Note that horizontal bars denote tasks and the arrows show
the dependencies between the tasks
5.8 SCHEDULE DEVELOPMENT
Schedule development uses results of the other time management processes to determine the
start and end dates of the project and its activities. A key challenge of project management is
the creation of realistic schedules, and subsequently, to implement and stick to the schedule.
The ultimate goal is to create a realistic project schedule that provides a basis for monitoring
project progress for the time dimension of the project.
Important tools and techniques to assist the project manager include Gantt charts, PERT
analysis, and critical path analysis. The Gantt chart was developed in 1917 by Henry Gantt as
a tool for scheduling work. The Gantt chart provides a standard format for displaying project
schedule information by listing project activities with corresponding start and finish dates in a
calendar format. Figure 5.2 is an example of a Gantt chart. Note that horizontal bars denote
tasks, and that the arrows show the dependencies between tasks. Task name and duration are
shown in columns 3 and 4, where the start and finish dates are given in columns 5 and 6.
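A Gantt chart can be approximated even in plain text. The sketch below draws a handful of tasks as bars against a day axis; the task names, start days, and durations are invented for illustration and do not correspond to Figure 5.2.

```python
# Tiny text-based Gantt chart: each task is drawn as a bar of '#' characters
# spanning its start day to its finish day. Task data are hypothetical.
tasks = [
    ("Define requirements", 0, 4),   # (name, start day, duration in days)
    ("Design",              4, 6),
    ("Build prototype",     8, 5),
    ("Test",               13, 3),
]

width = max(start + dur for _, start, dur in tasks)
for name, start, dur in tasks:
    bar = " " * start + "#" * dur + " " * (width - start - dur)
    print(f"{name:20s}|{bar}|")
```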
5.9 CRITICAL PATH METHOD (CPM)
The critical path method (CPM) is a project network analysis technique used to predict total
project duration. A critical path for a project is the series of activities that determines the
earliest time by which the project can be completed; it is the longest path through the network
diagram and has the least amount of slack time. If any activity on the critical path takes
longer than planned, then the project schedule will slip unless corrective action is taken. There
are several misconceptions about the critical path. First, the critical path is not the path that
accounts for all the critical activities, since the critical path only accounts for time. Second,
there may be more than one critical path in a project, if the lengths of time of more than one
path are the same. Finally, the critical path is not fixed or rigid; the critical path can change
as the project progresses. It is important for project managers to frequently update the project
schedule information, since the critical path may change as actual start and finish dates are
entered.
If the project schedule slips, then the project manager will take corrective action by
applying one of the techniques for shortening a project schedule. One method of shortening the
durations of critical tasks is to add more resources (man-hours and workers) or to change the scope
of the task (this may require a formal scope change approved by the CCB). Another method, called “crashing,”
is to compress the schedule as much as possible for the least amount of incremental cost. If
it is possible, the project manager may “fast track” tasks by working the tasks in parallel or
overlapping them. However, if it is known that the project completion date will slip, project
managers must inform stakeholders and company executives, and negotiate with the project sponsor
over additional project time and perhaps a cost overrun.
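To make the forward-pass idea behind CPM concrete, here is a minimal Python sketch over a small, hypothetical activity network; the activities, durations, and dependencies are invented, not taken from the text. It computes earliest finish times and traces the longest (critical) path.

```python
# Minimal critical path computation by a forward pass over a small,
# hypothetical activity network: activity -> (duration, predecessors).
activities = {
    "A": (2, []),
    "B": (4, ["A"]),
    "C": (3, ["A"]),
    "D": (5, ["B", "C"]),
    "E": (1, ["D"]),
}

earliest_finish = {}
best_predecessor = {}

def finish(act):
    """Earliest finish time of an activity (memoized forward pass)."""
    if act in earliest_finish:
        return earliest_finish[act]
    duration, preds = activities[act]
    start = 0
    for p in preds:
        if finish(p) > start:
            start = finish(p)
            best_predecessor[act] = p
    earliest_finish[act] = start + duration
    return earliest_finish[act]

project_duration = max(finish(a) for a in activities)

# Trace the critical path back from the activity that finishes last.
last = max(activities, key=lambda a: earliest_finish[a])
path = [last]
while path[-1] in best_predecessor:
    path.append(best_predecessor[path[-1]])
print("Critical path:", " -> ".join(reversed(path)), f"(duration {project_duration})")
```

If any duration on the reported path grows, the project duration grows with it, which is exactly why the critical path deserves the project manager's attention.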
5.10 PROGRAM EVALUATION AND REVIEW TECHNIQUE (PERT)
PERT charts were developed by the Navy in 1958; however, it was not until the 1970s
that the military began using project management software. PERT is a network analysis
technique used to estimate project duration when there is a high degree of uncertainty about
the individual activity duration estimates. PERT uses probabilistic time estimates based on
using optimistic, most likely, and pessimistic estimates of activity durations. PERT uses a basic
statistical weighted average formula, given by
(optimistic time + 4 × most likely time + pessimistic time) / 6    (5.1)
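A small sketch of Eq. (5.1), applied to one activity with hypothetical optimistic, most likely, and pessimistic estimates; the standard-deviation line uses the common (pessimistic − optimistic)/6 rule, which is an added assumption rather than a formula stated in the text.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Weighted-average duration estimate per Eq. (5.1)."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical three-point estimates (in workdays) for one activity.
o, m, p = 8, 10, 24
print(f"PERT estimate: {pert_estimate(o, m, p):.1f} days")
# Common companion rule (an assumption here, not from the text):
print(f"Approximate standard deviation: {(p - o) / 6:.1f} days")
```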
5.11 SUMMARY
In summary, there are several hints for controlling changes to the project schedule. Project man-
agers should perform reality checks on schedules on a regular basis and allow for contingencies.
Few things work exactly as they are planned. Managers should not plan for everyone to work
at 100% capacity all the time; we all need some break time. It is highly recommended that
program managers hold regularly scheduled progress meetings with stakeholders, and be clear
and honest in communicating schedule or project issues. In dealing with people issues, keep in
mind that strong (good) leadership helps projects succeed more than all the good Gantt and/or
PERT charts. Many project managers misuse project management software tools, because they
do not understand important concepts of the tools and/or they have not had good training in
the use of the tools or in leadership. Project managers should have good interpersonal (people)
skills and should not rely solely on software tools in managing time.
REFERENCES
[1] C. Lessard, How to Succeed in Today’s TAMU. McGraw-Hill Primis, 2001, ISBN 0-390-
1-245-8.
[2] C. Reid, NOAA Weather Satellite Program, TV Broadcast, NBC Nightly News,
Washington, DC, June 6, 2006.
[3] NOAA Weather Satellite Program Capitol Hill Hearing Testimony, Federal Document
Clearing House, Congressional Quarterly, June 8, 2006.
[4] K. Chang, Officials Report Progress in Weather Satellite Effort, New York Times, Section A;
Column 5; National Desk; The New York Times Company, June 9, 2006, p. 25.
[5] Poor Management Oversight and Ineffective Incentives Leave NPOESS Program Well Over
Budget and Behind Schedule, U.S. Department of Commerce, Office of Inspector General,
Audit Report no. OIG-17794-6-0001, May 2006.
CHAPTER 6
Project Cost Management
Why is project cost management so important? Technical projects in the United States have an
extremely poor performance record for meeting cost goals: in the mid-1990s, the average cost
overrun exceeded 189% of the original estimates, and the cancellation of technical projects cost
the United States over $200 billion.
So, what is cost and project cost management? It is well known that to produce a product,
resources in terms of personnel, materials, and finances are necessary. Cost is a resource, usually
measured in monetary units like dollars, used to achieve a specific objective or given up in
exchange for something. Project cost management includes the processes required to ensure
that the project is completed within an approved budget (in monetary units).
6.1 PROJECT COST MANAGEMENT PROCESSES
From Table 2.4 in Chapter 2, it is noted that project cost management takes place during the
planning and controlling phases of project processes. Project cost management includes the
following knowledge areas during the planning process:
1. Resource planning requires the project manager to determine what resources, and in
what amounts, are necessary for the project.
2. Cost estimation requires the project manager to develop an estimate of the costs for
the resources needed to complete a project.
3. Cost budgeting requires allocating the overall cost estimate to each individual work
activity in order to establish a baseline for measuring performance.
Cost control is the knowledge area that occurs in the controlling process. Cost control requires
the project manager to control changes to the project budget.
Since most chief executive officers (CEOs) and company board members may know a lot
more about finance than do young engineers or project managers, it is highly recommended that
one at least learn to speak their (the CEO’s and company officers’) language or attend formal
business courses in economics, accounting, and finance. Here are some simplified definitions
of financial terms:
1. Profits are revenues minus expenses.
2. Project life cycle costing means determining and developing estimates for the cost of a
project over its entire life.
3. Cash flow analysis means determining and developing the estimated annual costs and
benefits for a project.
4. Benefits and costs can be tangible or intangible, direct or indirect.
5. Sunk cost should not be a criterion in project selection.
6.2 RESOURCE PLANNING
Since resource planning may be affected by the nature of the project and/or the organization,
project managers should take into consideration the following questions:
1. How difficult will it be to accomplish specific tasks on the project?
2. What, if anything, is there in the project’s scope statement that may affect resources?
3. What is the organization’s past history in accomplishing similar projects or tasks?
4. Does the organization have in place the people, equipment, and materials that are
capable and available for performing the work?
5. If not, can the organization acquire the necessary resources in a timely manner so as
not to delay the project or task start times?
6.3 COST ESTIMATING
Project managers should realize that the important output of project cost management is a good
cost estimate; additionally, it is important to develop a cost management plan that describes how
cost variances will be managed on the project. To assist project managers in this endeavor, there
are several tools and three types of cost estimates (rough order of magnitude (ROM), budgetary,
and definitive) to help create cost estimates. Table 6.1 may help in determining when each
type of estimate is applied in the development of a project or product.
6.4 COST ESTIMATION TECHNIQUES
Cost estimation techniques include the “top-down” or “analogous” approach that depends on
using the actual cost of a previous and similar project as the basis for the new estimate. This
TABLE 6.1: Types of Cost Estimates

TYPE OF ESTIMATE | WHEN? | WHY? | ACCURACY
Rough order of magnitude (ROM) | Very early, in the planning phase | Rough estimate for decision selection | ±25%
Budgetary | Early, during budget planning in the planning phase | Puts dollars into budget plans | ±10–20%
Definitive | Later in the project, in the execution phase | Actual cost; detail for purchasing | ±5–10%
approach depends on the availability of organization archives (files) or personnel that were
involved in the previous project. The “bottom-up” technique requires project managers or the
team to arrive at individual work item estimates without previous examples and to sum the task
estimates to obtain a total cost estimate. Additionally, there is the “parametric” technique that
uses project characteristics in a mathematical model to estimate costs, e.g., the constructive cost
model (COCOMO).
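As a sketch of what a parametric model looks like, the following uses the basic COCOMO effort relation, effort = a × (KLOC)^b person-months. The coefficients shown are the commonly published basic-COCOMO constants, and the 30 KLOC project size is a hypothetical input; treat the block as illustrative rather than as the exact model the text refers to.

```python
# Basic COCOMO effort model: effort (person-months) = a * (KLOC ** b).
# Coefficients are the commonly published basic-COCOMO values; the project
# size below is a hypothetical input, not a figure from this text.
COCOMO_MODES = {
    "organic":      (2.4, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (3.6, 1.20),
}

def cocomo_effort(kloc, mode="organic"):
    a, b = COCOMO_MODES[mode]
    return a * kloc ** b

size_kloc = 30  # hypothetical project size: 30,000 lines of code
for mode in COCOMO_MODES:
    print(f"{mode:12s}: {cocomo_effort(size_kloc, mode):6.1f} person-months")
```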
The developers of COCOMO contend that the parametric model is the only technique
that does not suffer from the limits of human decision-making. From experience, computer
decision-making is only as good as the programming logic developed by humans. Flights to the
moon have shown that human decision-making could not be replaced by any automated
computer program. There are computerized tools, such as spreadsheets, project management
software, or other software to help project managers estimate costs.
6.5 PROBLEMS IN COST ESTIMATION
As stated previously, project cost estimates are done at various stages of the project. Developing
cost estimates for a large project is a complex task requiring a significant amount of effort.
A potential problem arises from the lack of experience of individuals attempting to develop project
and/or task estimates; if this is the case, the project manager should provide cost estimation
training and mentor individuals working on estimations. Another shortcoming is that most
individuals have a bias toward underestimation; therefore, outside experts should be contracted
to review estimates and ask important questions to make sure estimates are not biased. In many
organizations, upper management wants a number for a bid, whether the final cost estimate is
or is not a real estimate. The authors have experienced companies that have underestimated
costs in order to obtain a contract. Project managers must negotiate with project sponsors to
create “realistic cost estimates.”
6.6 COST BUDGETING
Cost budgeting involves allocating the project cost estimate to individual work activities and
providing a cost baseline. Most organizations have developed a standardized format to be
used in developing a project budget. Deviations from the format may make it difficult to
understand, evaluate, and/or compare the budget request with competing applications. The
budget should contain justification stating why and how funds in each budget category are to
be used. Justifications need not be elaborate, but must present a clear rationale for the use of
the requested funds [1].
6.7 GUIDELINES FOR PREPARING BUDGET
Company guidelines are based on management policies found in the respective company’s
project manager’s program guide. Guidelines should be followed when preparing the budgets
and justifications. The following sections are generally included in most budgets:
A. Personnel
(1) Direct labor (salaries): List separately by name and program title for each person
to be supported by budget, list annual salary, percent of time, and the number of
months to be supported.
(2) Benefits: Fringe benefits are to be included in project budget requests, and the fringe
benefit rate is to be presented as a part of the budget.
(3) Temporary help: Fees for clerical or other staff, who are engaged on a short-term
hourly basis, should be projected. List hourly rate and total hours.
(4) Consultants: If the name of the consultant is known, show name and title. Indicate
fees by the number of days and daily rate.
B. Supply and expense
(1) General expenses: Include program and office-related expenses (e.g., photocopy ex-
penses such as paper, copier rental, service contract, etc.).
(2) Communications expenses: Include telephone and postage expenses.
(3) Publication costs: Include, but do not limit to, newsletters, continuing education
calendars, announcements, and educational materials you will publish or cause to be
published.
(4) Other expenses: Include subscriptions, books, audiovisuals, and miscellaneous ex-
penses not covered in any of the above three categories.
C. Rental
(1) Justification is necessary for each rental required to support the project; present the
monthly cost and the number of months rented.
D. Meeting expenses
(1) Meeting funds may support planning and development of continuing professional
education. Allowable costs include, but are not limited to, meeting room rental and
room use charges and equipment use charge for meetings.
E. Travel
(1) Display number of trips, origin and destination, and round trip rate for airfare.
Automobile usage should display total mileage and per mile rate. If per diem is
requested, show the number of days and per diem rate.
F. Equipment
(1) If property is to be acquired on this grant, show each item separately, indicating the
manufacturer or seller of the equipment, brand name, model number, and cost.
6.8 COST CONTROL
Project cost control is performed during the controlling phase of the project. As the term cost
control implies, it requires the project manager to monitor “cost performance” and to ensure that
only appropriate project changes are included in any necessary revision of the “cost baseline.”
Should there be any delays in the schedule or changes in configuration, the project manager
needs to inform the project stakeholders of “authorized” changes to the project that will affect
costs.
The “earned value analysis (EVA)” is an important tool used by project managers for
cost control. EVA is a project performance measurement technique that integrates scope, time,
and cost data [1]. Given the original planned baseline cost plus approved changes, a project
manager can determine how well the project is meeting its goals: scope, time, and cost. EVA
is explained in greater detail in Chapter 7.
REFERENCE
[1] Office of Management and Budget. (2006). Preparation and Submission of Budget Estimates, OMB Circular no. A-11 [Online]. Available: http://www.whitehouse.gov/omb/circulars/a11/current year/guide.pdf
CHAPTER 7
Earned Value Analysis
Earned value analysis (EVA) is an industry standard method of measuring a project’s progress at
any given point of time, forecasting its completion date and final cost, and analyzing variances in
the schedule and budget as the project proceeds. It compares the planned amount of work with
what has actually been completed, to determine if the cost, schedule, and work accomplished
are progressing in accordance with the plan. As work is completed, it is considered “earned.”
The Office of Management and Budget prescribed in Circular A-11, Part 7, that EVA is
required on construction projects:
Agencies must use a performance-based acquisition management system, based on
ANSI/EIA Standard 748, to measure achievement of the cost, schedule and performance
goals [1].
EVA is a snapshot in time, which can be used as a management tool and an early warning
system to detect deficient or endangered progress. It ensures a clear definition of work prior to
beginning that work. It provides an objective measure of accomplishments, and an early and
accurate picture of the project status. It can be as simple as tracking an elemental cost estimate
breakdown as a design progresses from concept through to 100% construction documents, or
it can be calculated and tracked using a series of mathematical formulae (see below). In either
case, it provides a basis for course correction. It answers two key questions:
1. At the end of the project, is it likely that the cost will be less than, equal to, or greater
than the original estimate?
2. Will the project likely be completed on time?
7.1 WORK BREAKDOWN STRUCTURE (WBS)
EVA works most effectively when it is compartmentalized, i.e., when the project is broken
down into an organized work breakdown structure (WBS). The WBS is used as the basic
building block for the planning of the project. It is a product-oriented division of project tasks
that ensures the entire scope of work is captured, and allows for the integration of technical,
schedule, and cost information. It breaks down all the work scope into appropriate elements
for planning, budgeting, scheduling, cost accounting, work authorization, progress measuring,
and management control. The two most common WBS systems are the Construction Speci-
fications Institute (CSI) [2] format and the Uniformat II [3]. Often at the preliminary stages
of design, the Uniformat II lends a better understanding of the cost centers, and at final bid
level of documents, often the CSI format is used. The indirect costs of design, oversight, and
management must be included in the WBS to reflect the full budget.
7.2 CALCULATING EARNED VALUE
Earned value management measures progress against a baseline. It involves calculating three
key values for each activity in the work breakdown structure (WBS):
1. The planned value (PV), formerly known as the “budgeted cost of work scheduled”
(BCWS) or simply called the “budget,” is that portion of the approved cost estimate
planned to be spent on the given activity during a given period.
2. The actual cost (AC), formerly known as the “actual cost of work performed” (ACWP)
is the total of the costs incurred in accomplishing work on the activity in a given
period. Actual cost must correspond to whatever activities or tasks were budgeted for
the planned value and the earned value, e.g., all labor, material, equipment, and indirect
costs.
3. The earned value (EV), formerly known as the “budgeted cost of work performed”
(BCWP), is the value of the work actually completed.
These three values are combined to determine at that point of time whether or not work is
being accomplished as planned. The most commonly used measures are the cost variance (CV),
which is the difference between EV and AC, and is given by
CV = EV − AC    (7.1)
and the schedule variance (SV), which is the difference between EV and PV or budget, is
calculated as
SV = EV − PV    (7.2)
These two values can be converted to efficiency indicators to reflect the cost and schedule
performance of the project. The most commonly used cost-efficiency indicator is the cost
performance index (CPI), which is the ratio of EV to AC, and is calculated as
CPI = EV / AC    (7.3)
The sum of all individual EV budgets divided by the sum of all individual ACs is known as
the cumulative cost performance index (CCPI) and is generally used to forecast the cost to
complete a project.
The schedule performance index (SPI) is the ratio of EV to PV, and is calculated as
SPI = EV / PV    (7.4)
SPI is often used with the CPI to forecast overall project completion estimates. The general
rules in interpreting EVA numbers are as follows:
1. Negative numbers for cost and schedule variance indicate problems in those respective
areas.
2. A negative SV calculated at a given point of time means the project is behind schedule,
while a negative CV means the project is over budget.
3. CPI and SPI less than 100% indicate problems.
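As a hedged illustration of Eqs. (7.1)–(7.4), the Python sketch below computes CV, SV, CPI, and SPI from assumed PV, EV, and AC figures; it also shows one common forecasting use of the cost performance index (estimate at completion = budget at completion / CPI). All of the numbers are invented for the example.

# Illustrative EVA calculation; PV, EV, AC, and BAC values are assumed.
PV = 100_000   # planned value (budgeted cost of work scheduled)
EV = 90_000    # earned value (budgeted cost of work performed)
AC = 110_000   # actual cost (actual cost of work performed)
BAC = 500_000  # budget at completion for the whole project

CV = EV - AC    # Eq. (7.1): negative -> over budget
SV = EV - PV    # Eq. (7.2): negative -> behind schedule
CPI = EV / AC   # Eq. (7.3): less than 1.0 indicates a cost problem
SPI = EV / PV   # Eq. (7.4): less than 1.0 indicates a schedule problem

# One common forecast (several variants exist): estimate at completion and
# variance at completion, assuming current cost efficiency continues.
EAC = BAC / CPI
VAC = BAC - EAC

print(f"CV = {CV:,.0f}  SV = {SV:,.0f}  CPI = {CPI:.2f}  SPI = {SPI:.2f}")
print(f"EAC = {EAC:,.0f}  VAC = {VAC:,.0f}")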
7.3 EARNED VALUE MANAGEMENT SYSTEM (EVMS)
OMB Circular A-11, Part 7 [1], requires the use of an earned value management
system (EVMS) that complies with ANSI Standard 748 [4]. A list of guidelines is provided
that covers areas such as planning, scheduling and budgeting, accounting issues, management
reports, and so forth; however, there are no “approved” systems identified.
The basics of any EVMS are
1. a methodical, organized, thorough, and complete WBS,
2. a baseline schedule,
3. a baseline budget, organized into control accounts,
4. measurement of the work by control account (e.g., $, units in place, man hours, etc.).
Scheduling the authorized work is no different than in any large construction project—it is
a necessary activity for the success of the project. However, in an EVMS, the schedule will
integrate all of the technical, cost, and schedule aspects of the work, resulting in the expected
sequence of work. Interdependencies are established that result in the total work time and reveal
the critical path, which determines the shortest possible project duration.
Within each task, it is then necessary to identify objective interim measures to allow for
accurate performance assessment each month. A sufficient number of these interim measures
will be defined after the detailed schedule is established to ensure the performance is measured
as accurately as possible.
book
Mobk076
April 2, 2007
18:9
48 PROJECT MANAGEMENT FOR ENGINEERING DESIGN
A time-phased budget baseline, at the control account level, must also be established
and maintained. The assignment of budgets to work activities or tasks results in a plan against
which actual performance can be measured. This is referred to as the performance measurement
baseline (PMB), and it should be established as early as possible after a notice to proceed has
been issued. The PMB includes direct hours/dollars, direct material dollars, equipment and any
other direct costs, and any indirect costs for the agreed scope. The indirect costs associated with
design, oversight, and management must also be included. Essentially, the PMB represents the
formal plan for the project manager to accomplish all the work required in the time allotted
and within the budget provided.
ANSI 748 also requires
On at least a monthly basis, generate schedule variance data that provide visibility into
root causes and establish actions to achieve project completion. The first intent of this
criterion is to establish the fact that analysis, to remain viable, must be accomplished on
a regular, periodic basis. The second intent is to foster analyses and identification of root
cause and resulting impacts at the control account level.
The monthly performance report must include
1. budget, earned value, and actual costs (reconcilable with accounting system),
2. CV,
3. SV,
4. variance at completion (VAC),
5. a variance analysis narrative (root causes, impacts at completion, and management
actions).
7.4 TOOLS AND TECHNIQUES
Spreadsheets are a common tool for resource planning, cost estimating, cost budgeting, and
cost control. Many organizations prefer to use more sophisticated and centralized financial
applications software for cost information. Additionally, there are several software packages in
the market to help the project managers prepare EVA, e.g.
1. Schedulemaker
2. Planisware OPX2
3. Risk Trak
4. Winsight
5. Primavera.
7.5 SUMMARY
Since EVA is an industry standard method of measuring a project’s progress, project managers
should be skilled in applying and interpreting EVA values. Project managers should not be
afraid of measuring a project’s progress and performance on a regular basis. Team members
should be taught to use EVA and to report the performance of their respective activities with it.
Information on EVM systems is available at the Web site www.acq.osd.mil/pm.
REFERENCES
[1] Office of Management and Budgets. (2006). Preparation and Submission of Bud-
get Estimates, OMB Circular no. A-11 [Online]. Available: http://www.whitehouse.
gov/omb/circulars/a11/current year/guide.pdf.
[2] Construction Specifications Institute (CSI). WBS Format [Online]. Available:
http://www.csinet.org, 2004.
[3] Uniformat II. (1998, May). New building design management tools for project man-
agers. Project Manager [Online]. Available: http://www.uniformat.com/building-design-
management.html.
[4] Earned Value Management Systems, American National Standards Institute (ANSI)/
Electronic Industries Alliance (EIA) Standard 748-1998, May 19, 1998.
C H A P T E R 8
Project Quality Management
In the past couple of decades, many articles have appeared in newspapers and on TV related
to quality problems in U.S. products, not to mention the numerous jokes around the country
about the poor quality of U.S. cars and computer software. Since the public seems to accept
systems being down occasionally or needing repairs, a basic question is whether we should
accept lower quality from newer, more innovative products. If so, watch out for those new
futuristic cars or airplanes.
Quality is defined by the International Organization for Standardization (ISO) as all
the characteristics of an entity that bear on its ability to satisfy stated or implied needs. Other
organizations define quality as conformity to requirements in meeting written specifications
and ensuring that a product is fit to use as it was intended.
8.1 PROJECT QUALITY MANAGEMENT PROCESSES
Project quality management processes take place during the planning, execution, and control
phases of project management, as shown in Chapter 2 (Fig. 2.4). The quality planning process,
which takes place during the planning phase, identifies which quality standards are relevant to
the project and how to satisfy them. Quality assurance is done throughout the execution phase
to evaluate the overall project performance, and to ensure the project (product) satisfies the
applicable quality standards while identifying ways to improve overall quality. Quality control
is accomplished during the controlling phase by monitoring specific project (product) results to
ensure that they comply with the relevant quality standards.
The basic requirements or objectives of quality management include
1. the requirement for customer satisfaction,
2. preference for prevention over inspection, and
3. recognizing that management has the responsibility for quality.
8.2 QUALITY PLANNING
Project managers should recognize the importance of considering quality in the very early stages
of a product design and in communicating important factors that directly contribute to meeting
the customer’s requirements. Often during the feasibility phase, project teams may have to
design experiments that can help in identifying which variables have the most influence on
the overall outcome of a process. The project manager should keep in mind that many scope
aspects of projects may affect quality, i.e., functionality, features, system outputs, performance,
reliability, and maintainability.
8.3 QUALITY ASSURANCE
Quality assurance includes all the activities related to satisfying the relevant quality standards for
a project; however, another goal of quality assurance is to provide continuous quality improve-
ment. For example, benchmarking can be used to generate ideas for quality improvements, and
quality audits can help identify lessons learned that may be used to improve performance on
current or future projects.
8.4 QUALITY CONTROL
Quality control in essence requires testing or monitoring a specific product to ensure compliance
with quality standards. The main outputs of quality control include making an acceptance
decision on whether or not the product met the required specifications and standards. If the
product does not meet the quality standards, then the product must be rejected for “rework”
and/or the production process must be reviewed and perhaps adjusted.
Some tools and techniques used in quality control include
1. Pareto analysis,
2. statistical sampling,
3. quality control charts, and
4. testing.
8.4.1 Pareto Analysis
Pareto analysis involves identifying the principal factors that account for the most quality
problems in a system. Pareto analysis is also called the 80–20 rule, which means that 80% of
problems are often due to 20% of the causes (factors). To help identify and prioritize problem
areas in a system, Pareto diagrams or histograms are used by management personnel. Dr. Joseph
Juran expanded the Pareto principle to quality issues, which is also known as the “vital few and
the trivial many,” thus implying that the remaining 80% of the causes should not be totally
ignored [1].
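A minimal sketch of a Pareto ranking is shown below, assuming a hypothetical tally of defect counts by cause; it sorts the causes and reports the cumulative share each contributes, which is the information normally plotted in a Pareto diagram.

# Hypothetical defect counts by cause; the data are invented for illustration.
defects = {
    "Solder joints": 120, "Misaligned parts": 45, "Scratched housing": 20,
    "Wrong fastener": 10, "Missing label": 5,
}

total = sum(defects.values())
cumulative = 0
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    # The cumulative percentage identifies the "vital few" causes.
    print(f"{cause:20s} {count:4d}  {100 * cumulative / total:5.1f}% cumulative")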
TABLE 8.1: Common Certainty Factors

DESIRED           CERTAINTY    SIGNIFICANCE    SAMPLE
CERTAINTY (%)     FACTOR       LEVEL (α)       SIZE (N)
95                1.960        0.05            384
90                1.645        0.10            68
80                1.281        0.20            10
8.4.2 Quality Control Charts
Quality control charts graphically display quality data to show the results of a process over
time, thus helping to determine if the process is in control or out of control, and to prevent
product defects. One of the quality control charts examines the process for nonrandom problems
through the application of the “Seven Run Rule,” which states, “If seven data points in a row
are all below the mean, above the mean, increasing, or decreasing, then the process needs to be
examined for non-random problems.”
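The Seven Run Rule lends itself to a simple check. The sketch below flags a run of seven points all on one side of the mean or seven consecutively increasing or decreasing points; it is a plain reading of the rule as stated above, not a full control-charting package, and the sample data are made up.

def seven_run_rule(points, mean):
    """Return True if any run of 7 points is all above the mean, all below it,
    strictly increasing, or strictly decreasing."""
    for i in range(len(points) - 6):
        window = points[i:i + 7]
        if all(p > mean for p in window) or all(p < mean for p in window):
            return True
        diffs = [b - a for a, b in zip(window, window[1:])]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            return True
    return False

# Example with made-up measurements and a nominal mean of 10.0.
data = [10.1, 9.8, 10.2, 10.3, 10.4, 10.5, 10.6, 10.7, 10.8, 10.9]
print(seven_run_rule(data, mean=10.0))   # True: a run of seven increasing points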
8.4.3 Statistical Sampling and Standard Deviation
Statistical sampling involves choosing a part (N samples) of a population of interest
for inspection. The size of the sample (N) depends on the desired level of certainty.
The formula for calculating the sample size is given by
Sample size: N = 0.25 (certainty factor / acceptable error)²    (8.1)
Table 8.1 shows several common certainty factors used in calculating the sample size. It should
be noted that the significance level or acceptable error (α) is equal to one minus the desired
certainty expressed in decimal format: for example, α = 0.05 = 1 − 0.95.
Equation (8.2) is an example for calculating the sample size when the desired certainty is
95%:
Sample size (N) = (0.25)(1.960/0.05)² = 384    (8.2)
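The sample-size formula (8.1) is easy to reproduce in code. The sketch below reimplements it for the three certainty factors listed in Table 8.1; rounding to the nearest whole sample reproduces the tabulated sizes of 384, 68, and 10.

# Reimplementation of Eq. (8.1): N = 0.25 * (certainty factor / acceptable error)^2.
# The (certainty, certainty factor, significance level) triples are those of Table 8.1.
cases = [(0.95, 1.960, 0.05), (0.90, 1.645, 0.10), (0.80, 1.281, 0.20)]

for certainty, factor, alpha in cases:
    n = 0.25 * (factor / alpha) ** 2
    # Rounding to the nearest whole sample reproduces Table 8.1 (384, 68, 10).
    print(f"{certainty:.0%} certainty -> N = {round(n)}")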
8.4.4 Basic Statistical Measures
Most college students are familiar with basic descriptive statistics, e.g., mean, median, mode,
variance, and standard deviation. The standard deviation (σ ) is a measure of how much variation
exists in a distribution of data. A small standard deviation means that data are clustered closely
around the middle of a distribution and that there is little variability among the data. The
standard normal distribution, often referred to by students as the “bell curve,” is symmetrical
FIGURE 8.1: The standard normal distribution
about the mean or average value of a population or sample (Fig. 8.1). One should know that
68.3% of the samples lie within ±1σ of the mean, 95.5% of the samples lie within ±2σ, and
99.7% of the samples lie within ±3σ.
Quality control personnel use the terminology of “four sigma” (±4σ ) to indicate how
good the quality of their product is. Four sigma means that only 0.0063% of the products do
not meet the required standards and are therefore rejected. Table 8.2 shows the relationship
between sigma and the number of defective units based on 1 billion units.
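The entries in Table 8.2 can be checked with the standard normal distribution: the fraction of a population within ±kσ is erf(k/√2), and the complement, scaled to one billion units, gives the defective counts. The short sketch below does this; small differences from the table reflect rounding in the printed figures.

import math

# Fraction of a normal population within +/- k sigma, and the implied number
# of out-of-spec units per billion. Compare with Table 8.2 (printed figures rounded).
for k in range(1, 7):
    within = math.erf(k / math.sqrt(2))
    defective_per_billion = (1.0 - within) * 1e9
    print(f"±{k}σ: {100 * within:.7f}% within, "
          f"{defective_per_billion:,.0f} defective per billion")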
Greenberg and Hemphill [2] in a white paper contend that the main shortcoming of
current quality control systems has been their inability to provide effective links to integrate
with enterprise management systems.
TABLE 8.2: Relationship Between Sigma and the Number of Defective Units

SIGNIFICANCE    POPULATION WITHIN    DEFECTIVE UNITS
RANGE (±σ)      RANGE (%)            PER BILLION
±1              68.27                317,300,000
±2              95.45                45,400,000
±3              99.73                2,700,000
±4              99.9937              63,000
±5              99.999943            57
±6              99.9999998           2
Quality managers have long been utilizing quality control systems, including statistical
process control, production part approval process, failure mode effects analysis, gage
calibration and document control. But these systems traditionally have been stand-alone
applications. Although these individual applications have been touted as complete quality
management systems, they cannot meet all quality objectives for data collection and
information sharing required for today’s complex manufacturing processes [2].
8.4.5 Testing
Testing is one of the quality management functions that should be done during almost every
phase of the product development life cycle; even though some managers prefer to think of
testing as a stage that follows at the end of the product development process.
Test requirements and test plans are developed in the planning process (Chapter 2, Table
2.4) with the development of the quality plan well before the product is produced. The program
manager should make sure that every specification is tested and that the criteria for acceptance
or rejection (pass or fail) are included in the test plan.
One may think that testing is testing, but in the development of a product, there are
four basic types of tests. In “unit testing,” every individual component of a product is tested to
ensure that the component or unit is free of defects. “Integration testing” groups components
and tests the units for functionality. “Integration testing” occurs between unit testing and system
testing (or “verification testing”); then the entire system as one entity is tested in “system testing.” The last
set of tests is the “user acceptance testing” that has end users independently perform “validation
tests” prior to accepting the delivered system.
8.5 IMPROVING PROJECT QUALITY
A goal of quality assurance is to provide continuous quality improvement. Various suggestions
have been offered for improving the quality of projects. They include that program managers
and organizational leadership should understand the cost of quality and promote a quality
environment and frame of mind. The suggestions also include focusing on organizational influences
and workplace factors that may affect product quality. Additionally, project managers should
use some kind of maturity model to improve product quality.
8.5.1 Maturity Models
Maturity models are frameworks for helping organizations improve their processes and systems;
there are several categories describing the type of project management maturity model. For
example, the “ad hoc” maturity model refers to a project management process that may be
described as disorganized, and occasionally even chaotic, since the organization has not defined
systems and processes, and project success depends on individual effort. Additionally, projects
will have a history of chronic cost and schedule problems. With the “abbreviated” maturity
model, there are some project management processes and systems in place to track cost, schedule,
and scope; however, success of the project is unpredictable with cost and schedule problems
becoming the norm. In the “organized” maturity model, there are standardized, documented
project management processes and systems that are integrated into the rest of the organization.
Project success with the organized type of maturity model is more predictable with improved
cost and schedule performance. In the “managed” maturity model, management collects and
uses detailed measures of the effectiveness of project management; hence, the project success is
more uniform with cost and schedule performance conforming to plan. Lastly, in the “adaptive”
maturity model, feedback from the project management process, and from innovative pilot ideas
and/or technologies enables continuous improvement to the project processes. Project success
is the norm, with continuous improvements in cost and schedule performance.
Dr. Joseph Juran, well known as a business and industrial quality “guru” (known worldwide
as one of the most important twentieth century thinkers in quality management), wrote several
books including the Quality Control Handbook, in which he outlined 10 steps for quality
improvement. Juran wrote
It is most important that top management be quality-minded. In the absence of sincere
manifestation of interest at the top, little will happen below [4].
8.6 COST OF QUALITY
The cost of quality is often misunderstood by those who insist that it includes only the cost of
conformance, i.e., the cost of delivering products that meet requirements and are fit for use; such
individuals have not accounted for the cost of nonconformance, i.e., the cost of taking responsibility
for failures or for not meeting quality expectations. When examining the cost categories related
to quality, five types of cost are generally cited:
1. Prevention cost, which includes the cost of planning and executing a project so that it
is error-free and/or within an acceptable error range.
2. Appraisal cost, which takes into account the cost of evaluating processes and their
outputs to ensure quality.
3. Internal failure cost, which is the cost incurred to correct an identified defect before
the customer receives the product.
4. External failure cost, which relates to all errors not detected and corrected before
delivery to the customer. Keep in mind that these costs may include recalls and liability
suits.
5. Measurement and test equipment costs, which include the capital cost of equipment
used to perform prevention and appraisal activities.
8.7 INTERNATIONAL ORGANIZATION FOR STANDARDIZATION
The ISO created the Quality Management System (QMS) standards in 1987. Modified in
subsequent years, the ISO 9004:2000 document gives guidelines for performance improvement
over and above the basic standard. The QMS is a system that outlines the policies and procedures
necessary to improve and control the various processes that will ultimately lead to improved
business performance and better quality control in manufacturing.
8.8 GOOD MANUFACTURING PRACTICE
According to the current Good Manufacturing Practice (GMP), medical device manufacturers
should use good judgment when developing their quality system, and apply those sections of
the Food and Drug Administration (FDA) Quality System (QS) Regulation that are applicable
to their specific products and operations (21 CFR 820.5 of the QS regulation). The regulation
makes clear that it is the responsibility of each manufacturer to establish requirements for each
type or family of devices that will result in devices that are safe and effective. Additionally, the
manufacturer is responsible for establishing methods and procedures to design, produce, and
distribute devices that meet the quality system requirements [5, 6].
8.9 SUMMARY
In summary, project quality management processes take place during the planning, execution,
and control phases of project management. It is not only important that the project man-
ager continuously assess project and product quality, but it is even more important that top
management be quality-minded.
REFERENCES
[1] J. M. Juran. Pareto Principle [Online]. Available: http://en.wikipedia.org/wiki/
Dr. Joseph Moses Juran#Pareto principle, 2006.
[2] N. Greenberg and L. Hemphill. A New Approach: Enterprisewide Quality Manage-
ment Systems, White Paper, DATANET Quality Systems [Online]. Available: http://
www.winspc.com/whitepapers.htm, 2005.
[3] DATANET Quality Systems. Quality Digest Magazine [Online]. Available: http://
www.winspc.com/quality-management-systems.htm, 2005.
[4] J. M. Juran. (1945). Quality Control Handbook [Online]. Available: http://en.
wikipedia.org/wiki/Dr. Joseph Moses Juran.
[5] Quality Management Wikipedia. The Free Encyclopedia [Online]. Available: http://
en.wikipedia.org/wiki/Quality management, 2006.
[6] Quality Management Wikipedia. The Free Encyclopedia [Online]. Available: http://
en.wikipedia.org/wiki/Quality Management System, 2006.
C H A P T E R 9
Project Procurement Management
So far, Chapters 4 through 8 have covered the four core knowledge areas: scope, time, cost,
and quality management, respectively; Chapters 9 through 12 will cover the four facilitat-
ing knowledge areas: procurement management, human resources, communication, and risk.
This chapter focuses on project procurement management. The term “procurement” generally
means acquiring goods and/or services from an outside source; however, other terms such as
“purchasing” or “outsourcing” are often used interchangeably to mean procurement. Often an
organization will outsource or purchase components or subunits from another source (company)
for various reasons, such as possible reduction in fixed and recurrent costs, to increase ac-
countability, to provide the flexibility for the organization to focus on its core business, or to
gain access to technical skills, expertise, and/or technologies that the organization does not
possess [1].
9.1 PROJECT PROCUREMENT MANAGEMENT PROCESSES
The project procurement management processes, as shown in Table 9.1, include the follow-
ing:
1. Procurement planning that takes place during the “project planning process” to deter-
mine what part or systems to procure and when to make the purchases.
2. Solicitation planning that takes place during the “project planning process” to document
product requirements (materials, parts, components, etc.) and to identify potential
sources (vendors) for procurement of required parts, etc.
3. Solicitation, which usually occurs during the “project executing process,” to obtain
quotations, bids, offers, or proposals as appropriate from the vendors.
4. Source selection that takes place after solicitation during the “project executing process”
to select the best offer for the best product from potential vendors.
5. Contract administration to manage the contractual relationship with the
vendor. Keep in mind that engineers generally are not trained in contractual and legal
matters. “Leave contractual matters to qualified contract administrators!”
TABLE 9.1: Project Procurement Processes (Excerpt from Chapter 2, Table 2.4)

KNOWLEDGE      INITIAL-   PLANNING        EXECUTING       CONTROLLING       CLOSING
AREA           IZING
Procurement               Procurement     Solicitation,   Contract          Contract
                          planning,       source          administration    closeout
                          solicitation    selection
                          planning
6. The final project procurement process, contract closeout, which usually occurs during
the “project closing process” as a formal process at the completion and settlement of
the contract.
9.2 PROCUREMENT PLANNING
Procurement planning involves identifying which project needs can be best met by using
products or services outside the organization. Thus, the project manager and his team must
evaluate various alternatives and decide whether to make or buy. “Make-or-buy analysis” is a
process used to determine whether a particular product or service should be made inside the
organization or purchased from some source outside of the organization. Often, the make-or-
buy analysis involves doing some financial analysis. Additionally, they must decide on “what,”
“how much,” and “when” to procure or purchase. The purchasing decision must include “when”
to procure so that the required materials or parts are on hand so as not to cause delays in the
project schedule. From experience, I was told by the manufacturer of a part that they had a
52-week backlog; meaning that the project would be delayed about a year if I insisted on that
part. The only available option at that time was to redesign the product with an equivalent
component. If members of the project team are not familiar with the procurement process, the
project manager should seek experts within the organization and/or from consultants outside
of the organization that can provide valuable and experienced inputs in procurement decisions.
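Make-or-buy analysis often reduces to a simple cost comparison. The sketch below, with entirely hypothetical cost figures, compares an in-house (“make”) option carrying a fixed setup cost plus a per-unit cost against a purchased (“buy”) option priced per unit, and reports the break-even quantity.

# Hypothetical make-or-buy comparison; all cost figures are invented.
make_fixed = 50_000      # tooling / setup cost to make in house
make_unit = 12.00        # incremental cost per unit made
buy_unit = 20.00         # purchase price per unit from a vendor

def total_cost(quantity):
    """Return (make cost, buy cost) for a given quantity."""
    return make_fixed + make_unit * quantity, buy_unit * quantity

# Break-even quantity: where making and buying cost the same.
break_even = make_fixed / (buy_unit - make_unit)
print(f"Break-even at about {break_even:,.0f} units")

for q in (2_000, 6_250, 10_000):
    make, buy = total_cost(q)
    choice = "make" if make < buy else "buy"
    print(f"{q:>6} units: make ${make:,.0f}, buy ${buy:,.0f} -> {choice}")

A full make-or-buy decision would, of course, also weigh the nonfinancial factors listed above, such as access to skills and the wish to focus on the core business.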
9.2.1 Types of Contracts
There are three basic types of contracts: fixed price, cost reimbursable, and unit price contract [2]:
1. The “fixed price contract,” also referred to as the “lump sum contract,” means that the
contract has a fixed total price for a well-defined product or service. A “fixed price
incentive” contract is similar to the fixed price contract with the exception that the
buyer will pay some incentive or bonus if the seller performs better than that written
into the contract (especially with regard to time, i.e., if the end product is produced
and delivered earlier than scheduled).
2. The “cost reimbursable contract” involves payment to the seller for direct and indi-
rect costs. There are three different types of contracts within the cost reimbursable
framework: cost plus incentive fee (CPIF), cost plus fixed fee (CPFF), and cost plus
percentage of costs (CPPC):
a. With a CPIF contract, the buyer pays the seller for allowable performance costs plus
a predetermined fee and an incentive bonus.
b. With the CPFF contract, the buyer pays the seller for allowable performance costs
plus a fixed fee payment that is usually based on a percentage of estimated costs.
c. With the CPPC contract, the buyer pays the seller for allowable performance costs
plus a predetermined percentage based on total costs.
3. The unit price contract requires the buyer to pay the seller a predetermined amount
per unit of service.
All contracts involve some risks for both the vendor (seller) and the buyer. Table 9.2
summarizes the risks associated with each type of contract. Note that the contract type with
the lowest risk to the buyer is the fixed price contract; however, the vendor may be reluctant to
negotiate this type of contract, since the fixed price contract presents the highest risk to the
vendor.

TABLE 9.2: Contract Types and Associated Risk

BUYER RISK     TYPE OF CONTRACT                        VENDOR RISK
Low            Fixed price                             High
Medium low     Fixed price incentive (FPI)             Medium high
Medium         Cost plus incentive fee (CPIF)          Medium
Medium high    Cost plus fixed fee (CPFF)              Medium low
High           Cost plus percentage of costs (CPPC)    Low
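To illustrate why the risk allocation in Table 9.2 looks the way it does, the sketch below computes what a buyer would pay under a few of the contract forms for the same job when actual costs overrun the estimate; the fee percentages, fixed price, and cost figures are assumed for the example.

# Buyer's payment under different contract types as actual cost varies.
# All figures (estimated cost, fees, fixed price) are assumed for illustration.
estimated_cost = 100_000
fixed_price = 115_000               # lump-sum price agreed up front
fixed_fee = 0.10 * estimated_cost   # CPFF fee, set from the estimate
cppc_rate = 0.10                    # CPPC fee as a percentage of actual cost

def buyer_pays(actual_cost):
    return {
        "Fixed price": fixed_price,             # overrun absorbed by the seller
        "CPFF": actual_cost + fixed_fee,        # overrun passed to the buyer
        "CPPC": actual_cost * (1 + cppc_rate),  # overrun increases the fee too
    }

for actual in (90_000, 100_000, 130_000):
    payments = ", ".join(f"{k}: ${v:,.0f}" for k, v in buyer_pays(actual).items())
    print(f"actual cost ${actual:,}: {payments}")

In the overrun case the fixed price contract leaves the buyer's payment unchanged while the CPPC contract grows fastest, which matches the ordering of buyer risk in Table 9.2.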
9.3 SOLICITATION PLANNING
In solicitation planning, several documents must be prepared by the procurement team. The
first document, called the “request for proposals (RFP),” is used to solicit proposals from
prospective sellers when there are several possible ways to meet the buyer’s needs. On the other
hand, “requests for quotes (RFQ)” are used to solicit quotes for well-defined procurements.
Invitations for bid or negotiation, in which initial contractor responses are obtained, are also
part of solicitation planning.
An RFP usually includes sections on the purpose of the RFP, the organization’s
background, requirements, environments, statement of work (SOW) with schedule, and re-
quired deliverables with schedule. Almost all mutually binding agreements or contracts in-
clude a SOW. Additionally, the contracting office will add boilerplate information required
by law.
9.3.1 Statement of Work (SOW)
The SOW is usually developed by the engineering members of the procurement team. It is
a description of the work required for the procurement; hence, a good SOW gives bidders a
better understanding of the buyer’s expectations. The general format for a SOW includes sections
on the scope and location of work, and the period of work to be performed (usually, end dates) with
scheduled deliverables.
The scope of work should describe in as much detail as possible the exact nature of work
to be accomplished by the contractor. Not only should the hardware and software be specified,
but also the required tolerances and/or industry or ISO standards to be met. If the work must
be performed in a specific location, such as a designated standard clean room where employees
must perform the work on hardware or a secure location for work, then the environment and
location of work must be described in the SOW. Contracts often specify within the SOW
when the work is expected to start and end, working hours, the number of hours that can be billed
per week, where the work must be performed, and related schedule information. The deliverables
schedule lists specific deliverables, describes in detail what is to be delivered, and states when the
deliverables are due.
9.4 SOLICITATION
Solicitation is a function that occurs in the executing process, and involves obtaining proposals
or bids from prospective sellers in response to an RFP or RFQ. Organizations can advertise
to procure goods and services in several ways: advertising to anyone that may be interested via
some publication or announcement; for example, government agencies place their solicitation
in the commerce daily bulletin or on their respective Web sites. Formal evaluation procedures
for selecting vendors should be developed and documented before solicitation. Depending on
the price and total spending, some organizations may approach several potential vendors for
quotations or the purchasing agency may only approach preferred vendors, often referred to
as the “buyer’s short list.” It is not unusual in large, complex procurements to host a bidders’
conference to help clarify the buyer’s expectations, thus reducing the number of nonresponsive
responses.
Responses to RFPs or RFQs always have a cutoff date, after which time any responses
arriving are considered “nonresponsive” and are not evaluated.
9.5 SOURCE SELECTION
After receiving bidders’ responses, procurement must assemble a source selection committee.
Source selection involves evaluation of bidders’ proposals and selection of the best proposal.
From this point on, the purchasing department, with its contract specialist (contract administrator),
negotiates the contract terms and conditions and awards the contract.
9.6 CONTRACT ADMINISTRATION
Contract administration ensures that the seller’s performance meets the contractual require-
ments. All contracts are legal relationships between purchasing organizations and selling or
service organizations, so it is important that legal and contracting professionals be involved in
writing and administering contracts. Project managers and members of the design team should
not make comments that may be misinterpreted as redirection of the contract terms or SOW.
On the other hand, those project managers who ignore contractual issues may find that those
issues result in serious problems. Project managers should be aware that changes to any part
of the project including contract changes need to be reviewed, approved, and documented in
the same way that the original part of the plan was approved. Evaluation of any changes to the
project should include an impact analysis, “How will the change affect the scope, time, cost,
and quality of the goods or services being provided?” Along with documenting in writing any
changes, project team members must document all important meetings and telephone
calls that deal with project matters.
9.7 CONTRACT CLOSEOUT
The final process in project procurement management is the formal closing of contracts.
Contract closeout includes verification to determine if all work was completed correctly and
satisfactorily. If there were any discrepancies or deficiencies, they have to be dealt with by
either correcting or waiving them. Contract administration is required to
update records of administrative activities to reflect final contract results and to archive contract
information for future use. Usually, procurement audits are performed to identify “lessons
learned” in the procurement process.
REFERENCES
[1] Procurement, Wikipedia. The Free Encyclopedia [Online]. Available: http://en.
wikipedia.org/wiki/Procurement, 2005.
[2] J. Bronzino, Assessment and Acquisition, Management of Medical Technology. London:
Butterworth–Heinemann, 1992, ch. 4, pp. 111–152.
C H A P T E R 10
Project Human Resource
Management
So, what is project human resource management and why is this area important to project
managers? Project human resource management may be defined as the processes of making
the most effective use of the people involved with a project. Basically, any human resource
management implies having “the right people to the right place at the right time,” which
requires organizational planning as to the type of personnel (engineers, support staff, etc.) that
must be recruited or reassigned from within the organization and/or hired if the required talents
or skills are not within the organization to work on the project. Subsequently, those resources
(humans) must be developed or molded into an effective project team [1].
Having served on the Institute of Electrical and Electronics Engineers (IEEE) Workforce
Committee and the American Association of Engineers Workforce Commission for a decade,
the author can assert, “people determine the success and failure of projects and organizations.”
The IEEE and the Bureau of Labor Statistics have cited for the past decade shortages
of trained engineers to fill between one-fourth and one-third of a million job openings in
engineering, which makes human resource management even more challenging for projects.
Many CEOs listed the lack of highly skilled, trained workers as the primary barrier to growth;
hence, they lobby the U.S. Congress to increase the annual H1B immigration quota to over
one-third of a million foreign immigrant workers.
Congress, universities, and many technical societies have wrestled with the problem of
“How to increase the U.S. engineering labor pool.” The consensus of the various organizations
points to the undesirable stereotyping of engineers as “nerds” as a factor in keeping the U.S.
students away from the engineering career field. Further stereotyping portrays engineering as
hard work requiring much higher-level math than simple addition and subtraction, long work
hours, and the constant need to stay abreast of changes in the field. Stereotyping
engineering disciplines as male-dominated tends to keep women from entering the engineering
career field. Problems of shortages and reduced numbers of young engineers entering into the
human resource pool mean that there is a need for better human resource management within
organizations.
10.1 MANAGING PEOPLE
Project managers should have not only some formal training in managing people, but also
field experience in managing people at work. Important knowledge areas related to project
management include
1. motivation
2. influence and power
3. effectiveness.
Maslow developed a theory that people’s behaviors are guided by a sequence of needs, and he
argued that humans possess unique qualities that enable them to make independent choices,
thus giving them control of their destiny [2]. Maslow’s Hierarchy of Needs starts with the need
to satisfy physiological needs as the lowest motivator. One may think of these needs as the
survival mode, where satisfying hunger and thirst to survive is the paramount need. From the
lowest motivator to the highest, the needs to satisfy are physiological, safety, social, esteem,
and self-actualization. Maslow contends that growth motives (being motives) are relatively
independent of the environment and are unique to the individual [3]. He states that
“The esteem needs usually act as motivators only if the three lower types have been satisfied
to some degree.”
Maslow cautions that true self-esteem is based on real competence and significant achievement,
rather than on external fame. The highest form of motivation is the need for self-actualization.
Herzberg [4] distinguishes between “motivational factors” and “hygiene factors.” Motivational
factors include achievement, recognition, the work itself, responsibility, advancement,
and growth, which produce job satisfaction. Hygiene factors, on the other hand, such as salary,
the level of supervision, and the attractiveness of the work environment, cause dissatisfaction
if not present, but do not by themselves motivate workers to do more.
Thamhain and Wilemon [5] list nine ways in which project managers can influence
projects; these ways or methods include the following:
1. Authority: The project manager’s legitimate hierarchical right to issue orders.
2. Assignment: The project manager’s perceived ability to influence a worker’s later work
assignments.
3. Budget: The project manager’s perceived ability to authorize others to use discretionary
funds.
4. Promotion: The project manager’s ability to improve a worker’s position.
5. Money: The project manager’s ability to increase a worker’s pay and benefits.
6. Penalty: The project manager’s perceived ability to dispense or cause punishment.
7. Work challenge: The project manager’s ability to assign work that capitalizes on a
worker’s enjoyment of doing a particular task.
8. Expertise: The project manager’s perceived special knowledge that others deem impor-
tant.
9. Friendship: The project manager’s ability to establish friendly personal relationships
between the project manager and others.
One should keep in mind that projects are more likely to succeed when project managers
influence with expertise and work challenges; whereas projects are more likely to fail when
project managers rely too heavily on authority, money, and penalty [6].
Power is defined as the potential ability to influence behavior to get people to do things
they would not otherwise do. There are several types of power, including legitimate, expert,
reward, coercive (with the threat of penalty or punishment), and referent power (which is based
on others’ desire to identify with the person who holds the power).
10.2 IMPROVING EFFECTIVENESS: COVEY’S SEVEN HABITS
Covey first published The Seven Habits of Highly Effective People in 1989 and the 15th anniversary
edition in 2004. The book lists seven principles that, if established as habits, Covey contends,
are supposed to help a person achieve true interdependent “effectiveness” [7].
10.2.1 The Seven Habits
Covey presents his teachings in a series of habits—a progression from dependence, to indepen-
dence, to interdependence. The seven habits are as follows:
Habit 1: Be Proactive: Principles of Personal Vision
Habit 2: Begin with the End in Mind: Principles of Personal Leadership
Habit 3: Put First Things First: Principles of Personal Management
Habit 4: Think Win/Win: Principles of Interpersonal Leadership
Habit 5: Seek First to Understand, Then to be Understood: Principles of Empathetic Com-
munication
Habit 6: Synergize: Principles of Creative Communication
Habit 7: Sharpen the Saw: Principles of Balanced Self-Renewal
Expansions of Covey’s habits are quoted from the Wikipedia Web site (http://en.wikipedia.
org/wiki/Stephen Covey) [8].
1. Be Proactive. Here, Covey emphasizes the original sense of the term “proactive” as
coined by Viktor Frankl. Being “proactive” means taking responsibility for everything
in life, rather than blaming other people and circumstances for obstacles or problems.
Initiative and taking action will then follow (the authors of this book also extend the
meaning to include thinking of potential problem areas before they occur and planning
alternative solutions prior to the occurrence of the bad event.) [8].
2. Begin with the End in Mind, which deals with setting long-term goals based on “true-
north principles.” Covey recommends formulating a “personal mission statement” to
document one’s perception of one’s own purpose in life. He sees visualization as an
important tool to develop this. He also deals with organizational mission statements,
which he claims to be more effective if developed and supported by all members of an
organization, rather than being prescribed [8].
3. Put “First Things First”. Covey describes a framework for prioritizing work that is
aimed at long-term goals, at the expense of tasks that appear to be urgent, but are in
fact less important. Delegation is presented as an important part of time management.
Successful delegation, according to Covey, focuses on results and benchmarks that are
to be agreed in advance, rather than on prescribing detailed work plans [8].
4. Think Win/Win describes an attitude whereby mutually beneficial solutions are sought,
that satisfy the needs of oneself as well as others, or, in the case of a conflict, both parties
involved [8].
5. Seek First to Understand, then to be Understood. Covey warns that giving out advice
before having empathetically understood a person and their situation will likely result in
that advice being rejected. Thoroughly listening to another person’s concerns instead of
reading out your own autobiography is purported to increase the chance of establishing
a working communication [8].
Good project managers are empathic listeners; they listen with the intent to understand.
Before one can communicate with others, a rapport with the other individual should
be established, e.g., a social gathering or a meal, so as to get to know the other person
in a nonbusiness, more personal manner. Sometimes mirroring is used as a technique to
help establish rapport. Project managers need to develop empathic listening and other
people skills to improve relationships with users and other stakeholders.
6. Synergize describes a way of working in teams. Apply effective problem solving.
Apply collaborative decision making. Value differences. Build on divergent strengths.
Leverage creative collaboration. Embrace and leverage innovation. It is put forth that,
when this is pursued as a habit, the result of the teamwork will exceed the sum of what
each of the members could have achieved on their own. The whole is greater than the
sum of its parts [8].
7. Sharpen the saw focuses on balanced self-renewal. Regaining what Covey calls “pro-
ductive capacity” by engaging in carefully selected recreational activities [8].
10.2.2 Personality and Behavioral Tools
There are several personality and behavioral tools that help human resource managers, and
could help project managers in developing an effective working team. The Myers–Briggs
type indicator (MBTI) is a popular tool for determining personality preferences and helping
teammates understand each other. The MBTI has four dimensions in which individuals are
classified as either an extrovert (E) or introvert (I), as sensation (S) or intuition (N), as thinking
(T) or feeling (F), and as using judgment (J) or perception (P). Most professionals seem to fall
within the category of intuition and thinking (NTs), also described as “rationals.”
Based on the work of William Moulton Marston (1928), the Texas A&M University Employee
Assistance Program Office developed a Behavior Profile tool, which they called “DISC” [9].
The acronym DISC stands for
1. dominance, which pertains to how individuals respond to problems or challenges,
2. influence, which is defined as how individuals influence others by bringing them around
to their own point of view,
3. steadiness, which deals with how consistently individuals respond to the pace of the
environment,
4. compliance, which addresses the issue of constraints, or how individuals respond to
rules and procedures set by others.
The DISC sectors denoting behavioral quadrants are shown in Fig. 10.1 [9]. The DISC
behavior profiles with comments are shown in Fig. 10.2.
FIGURE 10.1: DISC. Quadrants show the dominant behavior profiles (Compliance, Dominance, Steadiness, Influence)
FIGURE 10.2: Behavior Profile tool, called the “DISC,” listing the characteristic traits and fears associated with each of the four quadrants [9]
10.3 SUMMARY
Repeating the advice given in Chapter 1, project managers and their teams should focus their
primary attention and efforts on meeting project objectives and producing positive results. It
is recommended that instead of blaming team members, the focus should be turned to fixing
the problem. Project managers should establish regular, effective meetings with agenda and
openness. Part of the program managers tasks include nurturing team members, encouraging
them to help each other, and to acknowledge in public individual and group accomplishments.
Project managers should remember to have a team-based reward and recognition systems,
since such consideration of others can promote teamwork. Also give some thought on rewarding
teams for achieving specific goals. Additionally, project managers should allow time for team
members to mentor and help each other meet project goals and develop interpersonal skills.
REFERENCES
[1] Resource Assessment, Program Management, University of Washington [Online]. Available:
http://www.washington.edu/computing/pm/plan/resource.html, 2005.
[2] A. H. Maslow (2003). Self-actualization theory (II). An Introduction to Theories of Per-
sonality (6th ed.), R. B. Ewen, Ed. Houston, TX: Questia Media America, Inc. [Online].
ch. 10. Available: www.questia.com.
[3] A. H. Maslow, Motivation and Personality, 2nd ed. New York: Harper and Row, 1970.
[4] F. Herzberg, B. Mausner, and B. B. Snyderman, The Motivation to Work, 2nd ed. New
York: Wiley, 1959.
[5] H. Thamhain and D. L. Wilemon, “One more time: How do you motivate employees?,”
Harvard Business Rev., pp. 51–62, 1968.
[6] B. Nath. CSS, University of Melbourne [Online]. Available: http://www.cs.mu.oz.au/443/slides/443Lec12.pdf.
[7] S. R. Covey. (1989 and 2004). The Seven Habits of Highly Effective People (Paper-
back Publication, ISBN 0-671-70863-5) [Online]. Available: http://en.wikipedia.org/
wiki/The Seven Habits of Highly Effective People.
[8] The 7 Habits of Highly Effective People [Online]. Available: http://en.wikipedia.
org/wiki/Stephen Covey, 2005.
[9] DISC Behavior Profile, Texas A&M University Employee Assistance Program Office,
Texas A&M University, College Station, TX, 1997.
C H A P T E R 11
Project Communications
Management
It is interesting to note that our society and culture do not portray engineers as good
communicators. Both the New York Times and the Washington Post have printed front-page
articles on the poor communication skills of engineers. Without a doubt, history has shown
that engineers must be able to communicate effectively to succeed in their positions, since
strong verbal skills are a key factor in career advancement for engineering professionals.
IEEE-USA Professional Activities newsletters have noted that being technically qualified is
insufficient to maintain employment; the engineer, and especially the engineering manager, must
also possess interpersonal and communications skills. Project managers are taught that one of the
greatest threats to many projects is a failure to communicate.
11.1 PROJECT COMMUNICATIONS MANAGEMENT PROCESSES
As shown in Table 11.1, the project communications management processes occur throughout
the project development phases, from planning through closing. During the project planning
phase, the planning teams must determine the information and communications needs of
the stakeholders, then develop and document a communications plan. During the project
executing process, information on the project is collected and distributed; making sure that
the necessary or needed information is made available to stakeholders in a timely manner. In
the controlling process of the project, performance information is collected, analyzed, and
disseminated in performance reports. Administrative closure, which occurs during the closing
process, consists of generating, gathering, and disseminating information to formalize phase or
project completion.
11.2 COMMUNICATIONS PLANNING
The communications management plan is a document that guides project communications. As
an aid in communications planning, the project manager should create a stakeholder analysis
TABLE 11.1: Project Communications Management Processes

KNOWLEDGE        INITIAL-   PLANNING         EXECUTING       CONTROLLING    CLOSING
AREA             IZING
Communications              Communications   Information     Performance    Administrative
                            planning         distribution    reporting      closure
TABLE 11.2: Example of Stakeholder Analysis for Project Communications

STAKEHOLDER     DOCUMENT         FORMAT         CONTACT PERSON    DUE
Customer        Monthly status   E-mail and     List name and     First of
management      report           hard copy      phone number      month
for project communications. It should go without saying that every project should include some
type of communications management plan. Communications management plans usually contain
1. a description of a collection and filing structure for gathering and storing various types
of information,
2. a distribution structure describing what information goes to whom, when, and how,
3. a format for communicating key project information; formats facilitate a better under-
standing of the communications,
4. a project schedule for producing the information,
5. access methods for obtaining desired information,
6. a method for updating the communications management plans as the project progresses
and develops, and
7. last but not least, a stakeholder communications analysis [1].
Table 11.2 is a short example of a stakeholder communications analysis format. The table rows
describe the required documents, format, contact person, and due date for each stakeholder.
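Because a stakeholder communications analysis is essentially structured data, it can be kept in code or a spreadsheet and used to generate the matrix or reminders from it. The sketch below holds the single row of Table 11.2 plus one invented entry; the field names are simply taken from the table’s column headings.

from dataclasses import dataclass

@dataclass
class CommsItem:
    stakeholder: str
    document: str
    fmt: str
    contact: str
    due: str

# The first entry mirrors Table 11.2; the second is an invented example.
plan = [
    CommsItem("Customer management", "Monthly status report",
              "E-mail and hard copy", "name / phone number", "First of month"),
    CommsItem("Project team", "Weekly progress report",
              "E-mail", "project manager", "Every Friday"),
]

for item in plan:
    print(f"{item.stakeholder:20s} {item.document:25s} "
          f"{item.fmt:22s} due: {item.due}")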
11.3 INFORMATION DISTRIBUTION
Getting the right information to the right people at the right time and in a useful format is just
as important as developing the information in the first place. Project managers should consider
the importance of using current electronic communication technologies to enhance informa-
tion distribution, as well as considering different formal and informal methods for distributing
information. Project teams should know and understand the organization’s communications
infrastructure, which is a set of tools, techniques, and principles that provide a foundation for
the effective transfer of information. The tools may include e-mail, project management soft-
ware, groupware, fax machines, telephones, teleconferencing systems, document management
systems, and word processors. The techniques may include reporting guidelines and templates,
meeting ground rules and procedures, decision-making processes, problem-solving approaches,
and conflict resolution and negotiation techniques. Additionally, the principles that provide the
foundation for the effective transfer of information should include an agreed-upon work ethic and the use
of free, honest, and open dialog.
11.4 SPAN OF CONTROL
Most managers know how difficult it is to control a large group of individuals. Training in
military leadership and experiences in managing projects have taught the authors that for good
management of people, the span of control should lie between a minimum of 5 and a maximum
of 12 individuals. Managing more than 12 requires the manager to delegate responsibilities
and authority (full power) to submanagers, so they may be in full control of their responsible
areas.
11.5 PERFORMANCE REPORTING
Performance reporting keeps higher level management and stakeholders informed about how
resources are being used to achieve project objectives. “Status” reports describe where the project
stands at a specific point in time, whereas “progress” reports describe what the project team has
accomplished during a certain period of time. Performance reporting should include earned
value analysis (EVA) and project forecasting to predict what the project status and/or progress
will be in the future based on past information and trends analysis. Project managers should hold
status review meetings often (weekly or monthly) and include performance reporting (EVA) in
all reports. The final process in communication management occurs during the project closure
phase, and is referred to as administrative closure, which produces the following documentation:
1. project archives
2.
3.
formal acceptance
lessons learned.
In summary, a template for a team’s “weekly progress report” is given below.
11.5.1 Template for Weekly Progress Report
1. Accomplishments for past week (include appropriate dates):
a. Detailed description of accomplishments. Who did the work? Relate accomplish-
ments to project’s Gantt chart by task number (refer to task number).
b. If any issues were resolved from a previous report, list them as accomplishments.
c. Write in full English sentences/paragraphs, not just bullets.
2. Plans for coming week (include appropriate dates):
a. Detailed description of planned work task items to be accomplished in the next
week.
b. Again, who is going to do the work and relate to the project’s Gantt chart by task?
c. Describe any other unplanned items to accomplish, which are not on the Gantt
tasks.
d. Write in full English sentences/paragraphs, not just bullets.
3.
Issues:
a. Discuss issues (problems encountered) that surfaced or are still important.
b. If problem encountered, discuss impact on schedule (time line), funding/cost, and
proposed possible remedy (how does the team plan to get back on schedule?).
c. Managers do not like surprises, so be sure to discuss issues in detail.
4. Project changes (date and description):
a. List any approved or requested changes to the project.
b. Include the date of the change and a detailed description with the reason (why?).
c. Include proposed changes in a new Gantt chart and schedule.
5. Attachments:
a. Previous Gantt Chart (before Update); Previous Budget (before Update),
b. Updated Gantt Chart (Time Line Update); Updated Budget,
c. Excel Chart of Time Sheet (Hours worked).
REFERENCE
[1] Communication Plan, Project Management, University of Washington [Online]. Available:
http://www.washington.edu/computing/pm/plan/communication.html, 2002.
C H A P T E R 12
Project Risk Management
Risk is defined in the dictionary [1] as, “The possibility of loss or injury” and often expressed in
terms of severity and probability. However, Project Risk Management is defined as the art and
science of identifying, assigning, and responding to risk throughout the life of a project and in
the best interests of meeting project objectives. Project risk involves understanding potential
problems that might occur on the project and how those problems could impede project success.
Risk management is often overlooked on projects by executive managers, but it can help project
managers improve project success by helping select good projects, determining project scope,
and developing realistic estimates. Thus, risk management may be considered as a form of an
investment or insurance.
Government organizations define the terms in a slightly different manner. For example:
1. Hazard is defined as a condition with the potential to cause personal injury or death,
property damage, or degradation of performance.
2. Risk is an expression of possible loss in terms of severity and probability.
3. Severity is the worst credible consequence that can occur as a result of a hazard.
4. Probability is the likelihood that a hazard will result in a mishap or loss.
5. Risk Assessment is the process of detecting hazards and assessing associated risks.
6. Control is a method for reducing risk for an identified hazard by lowering the probability
of occurrence, decreasing potential severity, or both.
Risk management in the Department of Defense is referred to as Operations Risk
management and is defined as the process of dealing with risks associated with daily operations,
which includes risk assessment, risk decision-making, and implementation of effective risk
controls [2].
12.1 PROJECT RISK MANAGEMENT
Project Risk Management is the process of identifying, assigning, and responding to risks asso-
ciated with a project, rather than operations. One may question why organizations would want
to venture into a risky project, and the response would be that the opportunities outweigh the risk. So, "What is Project Risk Management?" The goal of project risk management is to
minimize potential risks while maximizing potential opportunities. Major risk management
processes include:
1. Risk identification, which is the process of determining risks that are likely to affect a
project.
2. Risk quantification, which requires evaluating risks to assess the range of possible
project outcomes.
3. Risk response development, which includes taking steps to enhance opportunities and
developing responses to each threat.
4. Risk response control is the process of responding to risks over the course of the
project.
Risk identification is the process of understanding what potential unsatisfactory outcomes
are associated with a particular project.
12.2 TYPES OF PROJECT RISKS
There are three types of risks associated with commercial ventures in developing a new product: market risk, financial risk, and technology risk. Market risk is associated with determining if the new product will be useful to the organization or marketable to others and if users will accept and use the product or service. Financial risk is associated with determining if the organization can afford to undertake the project and if this project is the best way to use the company's financial resources. Technology risk is associated with determining if the project is technically feasible and if the technology could become obsolete before a useful product can be produced. Table 12.1 is a short list of the potential risk conditions that are associated with each
of the knowledge areas.
12.3 RISK QUANTIFICATION
Risk quantification or risk analysis is the process of evaluating risks to assess the range of
possible project outcomes. The first step in the risk analysis process is to determine the risk
probability of occurrence and its impact (consequences) to the project if the risk does occur.
Risk quantification techniques include expected monetary value analysis (EVA), calculation
of risk factors, PERT estimations, simulations, and expert judgment. Risk simulation uses a
representation or model of a system to analyze the expected behavior or performance of the
system. The Monte Carlo analysis simulates a model’s outcome many times in order to provide
TABLE 12.1: Potential Risk Conditions Associated with Knowledge Areas

Integration: Inadequate planning; poor resource allocation; poor integration management; lack of postproject review
Scope: Poor definition of scope or work packages; incomplete definition of quality management; inadequate scope control
Time: Errors in estimating time or resource availability; poor allocation and management of float; early release of competitive products
Cost: Estimating errors; inadequate productivity, cost, change, or contingency control; poor maintenance, security, purchasing, etc.
Quality: Poor attitude toward quality; substandard design/materials/workmanship; inadequate quality assurance program
Human resources: Poor conflict management; poor project organization and definition of responsibilities; absence of leadership
Communications: Carelessness in planning or communicating; lack of consultation with key stakeholders
Risk: Ignoring risk; unclear assignment of risk; poor insurance management
Procurement: Unenforceable conditions or contract clauses; adversarial relations
a statistical distribution of the calculated results. Some organizations rely on the experience of
experts to help identify potential project risks. If the organization uses a number of experts,
then the Delphi method is used to derive a consensus among a panel of experts in deriving
predictions about future developments [3].
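To make the risk quantification ideas above concrete, here is a minimal Python sketch, under invented assumptions, that combines an expected-monetary-value calculation with a Monte Carlo simulation of total project cost; the task names, three-point cost estimates, risk probability, impact, and trial count are all hypothetical and are not taken from the text.

```python
import random

# Hypothetical three-point (optimistic, most likely, pessimistic) cost estimates in $1000s.
tasks = {
    "design":      (40, 55, 90),
    "prototype":   (25, 30, 60),
    "integration": (15, 20, 45),
}

# Expected monetary value of a discrete risk event: probability times impact.
risk_probability = 0.20      # assumed chance that a key supplier slips
risk_impact = 35             # assumed added cost ($1000s) if the risk occurs
expected_monetary_value = risk_probability * risk_impact

def simulate_total_cost():
    """One Monte Carlo trial: sample each task cost from a triangular distribution."""
    total = 0.0
    for optimistic, most_likely, pessimistic in tasks.values():
        total += random.triangular(optimistic, pessimistic, most_likely)
    if random.random() < risk_probability:   # include the discrete risk event
        total += risk_impact
    return total

trials = sorted(simulate_total_cost() for _ in range(10000))

print(f"EMV of supplier risk:  ${expected_monetary_value:.1f}k")
print(f"Mean total cost:       ${sum(trials) / len(trials):.1f}k")
print(f"80th-percentile cost:  ${trials[int(0.8 * len(trials))]:.1f}k")
```

The sorted trial results approximate the statistical distribution of outcomes mentioned above; the 80th-percentile figure, for instance, gives management a concrete probability of finishing within a given budget.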
12.4 RISK RESPONSES
When faced with a hazard or potential risk, individuals and organizations will respond in one
of three ways.
1. Risk avoidance response tries to eliminate a specific threat or risk, by eliminating its
causes. The approach may take a lot of time, since finding the cause will require an
in-depth look at the process with a detailed analysis. The cause should subsequently
be fixed. Dorfman [4] defines avoidance as not performing an activity that could
carry risk. Avoidance may seem the answer to all risks, but avoiding risks also means
losing out on the potential gain that accepting (retaining) the risk may have allowed.
Not entering a business to avoid the risk of loss also avoids the possibility of earning
profits.
2. Risk acceptance response means accepting the consequences should a risk occur.
3. Risk mitigation response attempts to reduce the impact of a risk event by reducing the
probability of its occurrence.
12.5 CAUSES OF RISK
It would seem to some individuals that risk avoidance should be the only response to a hazard
or a risk; so, let us examine the causes. Change is considered the “Mother” of Risk, meaning
that changes in any project will increase the risk and in some cases significantly. To change,
one can add development of new technology, complexity of the system, resource constraints,
organizational constraints, speed or tempo of work, environmental influences, work requiring
high energy levels, and human nature [2]. The changes are not given in any order, but keep in
mind that two or more causes may combine or result in one occurrence of a hazard or a risk.
Table 12.2 shows strategies used by organizations to mitigate the various types of risks. To read the table, start with a type of risk, then read the optional mitigation strategies listed under it.
12.6 RISK MANAGEMENT PLANS
Project managers should develop three different risk management plans. The “Risk Manage-
ment Plan” documents the procedures for managing risk throughout the project, but the project
manager should also develop “Contingency Plans” that predefine actions the project team will
take if an identified risk event occurs. Additionally, the project manager should have “Contin-
gency Reserves,” which are provisions held by the project sponsor for possible changes in project
scope or quality that can be used to mitigate cost and/or schedule risk. Always plan ahead, be
TABLE 12.2: Risk Mitigation Strategies for Technical, Cost, and Schedule Risks

TECHNICAL RISKS:
- Emphasize team support and avoid stand-alone project structure
- Increase project manager authority
- Improve problem handling and communication
- Increase the frequency of project monitoring
- Use WBS and PERT/CPM

COST RISKS:
- Increase the frequency of project monitoring
- Use WBS and PERT/CPM
- Improve communication, project goals understanding, and team support
- Increase project manager authority

SCHEDULE RISKS:
- Increase the frequency of project monitoring
- Use WBS and PERT/CPM
- Select the most experienced project manager
proactive in risk management. Questions that need to be addressed in a Risk Management Plan
include:
1. Why is it important to take or not take this risk in relation to the project objectives?
2. What specifically is the risk and what are the risk mitigation deliverables?
3. How is the risk going to be mitigated?
4. What risk mitigation approach should be used?
5. Who are the individuals responsible for implementing the risk management plan?
6. When will the milestones associated with the mitigation approach occur?
7. What resources are required to mitigate the risk [5], [6]?
12.7 RISK RESPONSE CONTROL
Risk response control involves executing the risk management processes and the risk manage-
ment plan to respond to risk events. Risks must be monitored based on defined milestones
and decisions made regarding risks and mitigation strategies. If there are no contingency plans
in place, then workarounds or unplanned responses to risk events are necessary. A tool for
maintaining an awareness of risk throughout the life of a project is the tracking of the project’s
Top 10 risk items. The list of the Top 10 project risk items should be reviewed periodically and
modified as the risk ranks change. Hence, a listing of the current ranking, previous ranking,
number of times the risk appears on the list over a period of time, and a summary of progress
made in resolving the risk item should be developed. Project managers may use Project Risk
Management software (databases or spreadsheets) to assist in keeping track of risks and quan-
tifying risks. There are several more sophisticated risk management software available in the
market that may help the project manager in developing models and/or simulations to analyze
and respond to various project risks.
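As a sketch of what such a tracking aid might look like, assuming nothing more elaborate than a spreadsheet-style record (the risk names, rankings, and progress notes below are invented), a simple Python structure could be:

```python
from dataclasses import dataclass

@dataclass
class RiskItem:
    name: str
    current_rank: int
    previous_rank: int
    months_on_list: int
    progress: str            # summary of progress made in resolving the risk item

# Hypothetical Top 10 entries (only three shown).
top_risks = [
    RiskItem("Key part lead time slipping", 1, 3, 4, "second supplier qualified"),
    RiskItem("Firmware defect rate", 2, 1, 6, "code reviews added; trend improving"),
    RiskItem("Test chamber availability", 3, 2, 2, "external lab booked as backup"),
]

def periodic_review(risks):
    """Print the periodic review: current versus previous rank, age, and progress."""
    for r in sorted(risks, key=lambda item: item.current_rank):
        trend = ("up" if r.current_rank < r.previous_rank
                 else "down" if r.current_rank > r.previous_rank
                 else "steady")
        print(f"#{r.current_rank} ({trend}, {r.months_on_list} months on list) "
              f"{r.name}: {r.progress}")

periodic_review(top_risks)
```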
12.8 SUMMARY
The Risk Management Processes include identifying the risk or hazard, assessing or analyzing
all risks and/or hazards, making risk response decisions, implementing controls to mitigate the
risk, and supervising or overseeing the implementation of the responses or corrective actions.
Unlike crisis management, good project risk management often goes unnoticed, because well-
run projects appear to be almost effortless, but, in reality, a lot of work has gone into running
the project well. Hence, project managers should strive to make their jobs look easy; thus,
reflecting the results of well-run projects.
REFERENCES
[1] Risk Management. Wikipedia. Encyclopedia. Available: http://en.wikipedia.org, 2005.
[2] D. Faherty, “U.S. Navy, operations risk management,” presented at the 2nd Workshop on
Risk Analysis and Safety Performance Measurements in Aviation, FAA, Atlantic City,
NJ, Aug. 2000.
[3] RISK + C/S Solutions Newsletter. Available: http://www.cssi.com/
[4] M. S. Dorfman, Introduction to Risk Management and Insurance, 6th ed. Englewood Cliffs,
NJ: Prentice-Hall, 1997, ISBN 0-13-752106-5.
[5] B. C. Chadbourne, “To the heart of risk management: Teaching project teams to com-
bat risk,” in Proc. 30th Annu. Project Manage. Inst. Semin. Symp., Philadelphia, PA,
Oct. 10–16, 1999.
[6] A. Jaafari, “Management of risks, uncertainties, and opportunities on projects: Time
for a fundamental shift,” Int. J. Project Manage., vol. 19, no. 2, pp. 89–101, Feb. 2001.
doi:10.1016/S0263-7863(99)00047-2
C H A P T E R 13
Project Closeout
Project Closeout is the last major stage of a project’s life cycle and is completed when all defined
project tasks and milestones have been completed, and the customer has accepted the project’s
deliverables.
So, what is involved in closing projects [1]? Project Closeout includes the following
actions:
1. First and foremost is the gaining of stakeholder acceptance of the final product,
2. Verification of formal acceptance by stakeholders and steering committee,
3. Bringing the project and all its phases to an orderly end,
4. Verifying that all of the deliverables have been completed and delivered,
5. Completing a Project Audit (usually internal audit),
6. Redistributing resources; i.e., staff, facilities, equipment, and automated systems,
7. Closing out all financial issues such as labor charge codes and contract closure,
8. Documenting the successes, problems, and issues experienced during the project,
9. Documenting “Lessons Learned,”
10. Producing an Outcomes Assessment Report,
11. Completing, collecting, and archiving Project Records,
12. And finally, it is recommended that the Project Team celebrate project success.
13.1 CLOSING PROCESSES AND OUTPUTS
Most of the closing processes involve the communications and procurement knowledge areas
as shown in Fig. 13.1.
The process of “Administrative Closure” involves collecting project records, verifying
and documenting project results to formalize acceptance of the products produced, analyzing
whether the project was successful and effective, ensuring products meet specifications, and
archiving project information for future use. Table 13.1 shows the outputs that result from administrative closure.
TABLE 13.1: Closing Processes and Outputs

KNOWLEDGE AREA: Communications
PROCESS: Administrative closure
OUTPUTS: 1. Project archives; 2. Formal acceptance; 3. Lessons learned

KNOWLEDGE AREA: Procurement
PROCESS: Contract close-out
OUTPUTS: 1. Contract files; 2. Formal acceptance; 3. Formal closure
13.1.1 Administrative Closure
The issue of primary importance with project closure is the acceptance of the product or project
deliverables by the customer for which they were created [2]. During the administrative closure,
the project manager should conduct a formal “Final Acceptance Meeting.” The best way is to
convene a final meeting with stakeholders to review the product delivered against the baseline
requirements and specifications. It is always a good policy to make the stakeholders aware of the
baseline deviations, with justifications for the deviations, and of future action plans to correct
or to waive the deviations. Deviations from the established baseline should be documented and
approved at the committee for subsequent signatures by responsible executive managers of the
organization. Open Action Items or program level issues can be officially closed or reassigned
to the support organization. Drawing the stakeholders together in a single meeting helps avoid
clearing up open issues on an individual basis.
13.1.2 Approval Verification
Approval is verified via the signature of a project closure document by the stakeholders who
signed the original project baseline documentation (i.e., the Project Plan). The acceptance document should be customized to the particular project to include:
1. Pertinent deliverables,
2. Key features, and
3. Information about final product delivery.
13.1.3 Procurement Contract Closure
Contract closure is the process of terminating contracts with outside organizations or businesses.
Contracts may be for providing technical support, consulting, or any services. Contracts are
usually brought to closure upon contract completion or early termination for cause, such as failure
to perform. Closing a contract usually requires assistance from the Contracts Administrator,
since close attention must be paid to ensure all obligations of the contract have been met or
formally waived and to prevent any liability for the organization. Normally, procurement will
conduct the “Final Contract Review Meeting.” Project managers should make a checklist of all
items that must be addressed during contract closure, such as:
1. Review contract and related documents,
2. Validate that the contractor has met all of its contractual requirements,
3. Document any contractor variances,
4. Resolve contractor variances and issues,
5. Validate that the organization has met all of its contractual requirements,
6. Document the organization's variances and issues,
7. Resolve the organization's variances,
8. Ensure that all vendor responsibilities have been transferred to the organization or
another vendor,
9. Terminate current contract, and
10. Verify that all contractual obligations have been met or formally waived.
13.2 OUTCOMES ASSESSMENT MEETING
“Another meeting,” you ask? Well, so far, only two have been covered. Do not be surprised
if there are more. In conducting “Outcomes Assessment Meetings,” project managers provide
a forum for discussing the various aspects of the project with the primary focus on project
successes, problems, and issues, “Lessons Learned,” and recommendations for future process
improvements. Program managers should use the information and documentation from the
“Final System Acceptance Meeting” as a basis for the Outcomes Assessment Meeting discus-
sions. Outcomes Assessment Meetings are usually attended by the project manager as chairman
or moderator, all members of the Project Team, along with representation from Stakeholders,
Executive Management, Maintenance, and Operations Staff [3]. It is always wise to include
some oversight members that are external to the project and even the organization.
Typical questions that should be addressed in the Outcomes Assessment meeting include
the following:
1. To what extent did the delivered product meet the specified requirements and goals of
the project?
2. Was the customer satisfied with the end product?
3. Were cost budgets met?
4. Was the schedule met?
5. Were risks identified and mitigated?
6. Did the project management methodology work?
7. What could be done to improve the process?
13.3 OUTCOMES ASSESSMENT REPORT
After the meeting, the project manager and project team must generate an Outcomes As-
sessment Report that documents the successes and failures of the project. The Outcomes
Assessment Report provides an historical record of the planned and actual budget and
schedule. The report should include description with rationale of selected metrics that were
collected on the project and were based on documented procedures. The report should also
contain recommendations for future projects of similar size and scope [4]. Outcome Assessment
Reports should contain the following information:
1. Project sign-off,
2. Staffing and skills,
3. Project organizational structure,
4. Schedule management,
5. Cost management,
6. Risk management,
7. Quality management,
8. Configuration management,
9. Customer expectations management,
10. Lessons learned, and
11. Recommendations for process improvement.
13.4 TRANSITION PLANNING
Before projects are closed, it is important for organizations to plan for and execute a smooth
transition of the project into the normal operations of the company. Most projects produce re-
sults (resources) that are integrated into the existing organizational structure; some may require modification of the organizational structures, whereas other projects are terminated before
completion. Additionally, the organization must develop a plan on how to redistribute resources;
i.e., project team members, support staff, materials, facilities, equipment, and automated
systems, before projects are closed or cancelled. The Project Manager is responsible for turning
over to the operations and maintenance organizations all documentation that has anything to
do with the product including design documents, schematics, and technical manuals.
13.5 PROJECT DOCUMENTS TO BE ARCHIVED
Some of the typical project documents to be archived include:
1. Project Business Case,
2. Project Plan, including Project Charter, Project Scope Statement, Risk Assessment,
Risk Mitigation,
3. Management Plan, Communications Plan, Quality Assurance Plan, etc.,
4. Financial Records,
5. All correspondence on project matters,
6. Meeting Notes,
7. Status/Progress Reports,
8. Procurements and Contract File,
9. Test Plans and Results,
10. Technical Documents,
11. Files, Programs, Tools, etc., placed under Configuration Management, and
12. All other documents and information pertaining to the project.
13.6 CRITICAL SUCCESS FACTORS
The most critical factors used to measure project closeout success are first and foremost ac-
ceptance by the end-user, followed by having achieved the business objectives and anticipated
benefits. Next factors are the achievement of project objectives and knowledge transfer. The
final factor is archiving of all project materials.
13.7 SUMMARY
Generally, Project Closeouts include the following key elements:
1. Verification of formal acceptance by Stakeholders and Steering Committee,
2. Redistributing resources; i.e., staff, facilities, equipment, and automated systems,
3. Closing out any financial issues such as labor charge codes and contract closure,
4. Documenting the successes, problems, and issues of the project,
5. Documenting “Lessons Learned,”
6. Producing an Outcomes Assessment Report,
7. Completing, collecting, and archiving project records, and
8. Celebrating Project Success: “End it with a Bang!”
REFERENCES
[1] Project Close Out Check List. Project Management. University of Washington, Seattle, WA. [Online]. Available: http://www.washington.edu/computing/pm/end/closeout.html, 2002.
[2] Close the Project. Project Management. University of Washington, Seattle, WA.
[Online]. Available: http://www.washington.edu/computing/pm/end, 2002.
[3] Close/Audit Phase. Trainers Direct. [Online]. Available: http://www.trainersdirect.com/
resources/Project%20Management/CloseoutPhase.htm, 2005.
[4] Project Closeout Report. Document 06-114. (2006, Apr.). History. Texas Depart-
ment of Information Resources, Austin, TX. [Online]. Available: http://www.dir.
state.tx.us/pubs/pfr/06-114/instruction.pdf.
C H A P T E R 14
Project Design Reviews
Part of this chapter is based on excerpts from a slide presentation given at the Naval Air Warfare
Center in 2000. Why should companies conduct design review? The main reason may be that
design reviews are required by some “Regulation” in all government departments or agencies
dealing with commercial or military products, e.g., Food and Drug Administration, Federal
Aviation Administration, Department of Commerce, National Institute of Standards and
Technology, or Department of Defense Regulations. Following these regulations and guidelines is of interest to those companies or industries that propose to enter the U.S. commercial market.
Not following the regulations could result in product “Liability” issues. Most court rulings
are based on the engineering practice of following “Good Common Practice” (standards and
regulations) and abiding by professional ethical codes that hold the “Health and Welfare of
the Public” as paramount. The purpose of including this chapter is to provide students and
new employees the guidance necessary in the preparation and conduct of Preliminary Design
Reviews (PDR) and Critical Design Reviews (CDR).
14.1 PRELUDE TO CONDUCTING A DESIGN REVIEW MEETING
The primary objective in conducting design review processes is to ensure that the design ful-
fills the performance requirements. In conducting design reviews, the program manager or his
designated chairperson must first identify the design review objectives, list the entry and exit
requirements for design reviews, and state the responsibilities for the committee conducting
design reviews. The Design Review Committee Chairperson must form the committee with
project team members, stakeholders, sponsors, technical area experts, and independent mem-
bers; and coordinate availability to ensure participation by essential members in the Design
Review. Additionally, a “Meeting Agenda” is prepared, coordinated with the committee mem-
bers, and subsequently accepted prior to the meeting. Before entering into the design review,
the chairperson must ensure that committee members have necessary documents, such as:
1. Requirements Traceability to Specifications Matrix, which is sufficient for the prelim-
inary design review.
2. Math Model Report, which is sufficient for the preliminary design review.
3. Any design documents; i.e., AutoCAD drawings, circuit drawings, or layouts.
4. Risk assessments and risk mitigation plans that are to be formally addressed.
When asked, "Is there a format or any one format for the design review?" the response is, "Not really, because design reviews are carried out at various intervals (phases) during the
development of a product.” Hence, the design reviews may address specific points or all of the
major concerns in a phase. Table 14.1 contains an example of a design review agenda. During
a test phase prior to any tests being conducted, the project manager may conduct a review of
the test plan and testing procedures, verify the availability of test personnel, and confirm that all necessary test equipment is in place, functioning, and calibrated to a standards lab with current
calibration stickers. After the tests are conducted and analyzed, the project manager would
conduct another review of the test results, listing all deficiencies, and perhaps formulate options
for correction of the deficiencies.
14.2 ENTRY CRITERIA FOR DESIGN REVIEW
Entry Criteria are the minimum essential items necessary to enter into a design review. If the
design review is not a preliminary review, then one of the most important criteria for entry
is that there are “No outstanding prereview action items.” The project manager or the review
committee chairperson should define the design baseline and provide the framework for the
design review, including specific items in the Work Breakdown Structure (WBS), requirements
traceability matrix, specifications, and items describing the design. Prior to the design review
meeting, the chairperson should submit the meeting agenda with specified items to members
for review giving ample time for members to make corrections or comments.
The Requirements Traceability Matrix (RTM) provides the information-linking require-
ments to all of the design documentation, and is the tool that enables a company to verify that all
of the design requirements are being addressed. If the product includes software, then the Soft-
ware Design Documentation should adequately disclose software design approach information.
Typical software design documents include:
1. Math calculations and model simulation reports,
2. Software Design Description, requirements, and specifications,
3. Interface Design Description, requirements/specifications,
4. Database Design Description,
5. Software Test Plans, and
6. All Software Development Folders (flow charts, source code listings, etc.).
TABLE 14.1: Example of a Design Review Agenda (Preliminary Design Review) © 2002 DRM Associates

Review topic: Project Definition
- Customer changes to the program since last review (if any)
Required outputs: Program/Team Charter; Program Requirements/Deliverables; Budget & Schedule Changes

Review topic: Concept Approach Changes Since Last Review (if any)
- Changes to customer requirements & specifications since last review (if any)
- Specification issues (if any)
- Changes to system architecture and concept approach for the system (if any)
- Changes to product concept design (if any)
Required outputs: Product Specifications; Concept Design

Review topic: Product Design
- Review of design concept for each product
- Product structure walk-through
- Schematic and functional design review (if applicable)
- Assembly drawing/model review
- Component drawing/model review
- Part specifications, significant characteristics, and tolerances
- Design for manufacturability, design for assembly, and mistake-proofing review
- Design and drawing/modeling standards compliance
- Technical issues and risks
Required outputs: Component Drawings/CAD Models; Assembly Drawings/CAD Models; Schematic/Net List; PCB Layout; Product Bill of Material

Review topic: Product Testing and Verification
- Test requirements and plan
- Test results
- Issues in meeting specifications
Required outputs: Test Requirements; Test Plan; Test Results; Build Report

Review topic: Preliminary Process Design
- Process approach and operation flow
- Feedback from engineering model build
- Tooling, fixture, production equipment, and test equipment requirements
- Tooling and fixture design
- Tooling, fixture, production equipment, and test equipment cost estimates/quotes
Required outputs: Process Flow Diagram; Tooling and Fixture Design; Tooling & Fixture Cost Estimates

Review topic: Quality Planning
- Quality issues on similar components and countermeasures taken
- Design FMEA, reliability issues, and mitigation steps
- Process FMEA, reliability issues, and mitigation steps
- Control Plan for design validation build
Required outputs: Design FMEA; Process FMEA; Control Plan

Review topic: Supplier Sourcing and Status
- Supplier selection and capability for each component
- Production, capability, quality, lead-time, and cost issues for each supplier and component
- Inbound packaging requirements or specifications
Required outputs: Supplier Plan

Review topic: Product Cost and Profitability
- Current cost estimate compared with target cost
- Product profitability
Required outputs: Target Cost Worksheet

Review topic: Program Plan and Management
- Project plan
- Schedule issues
- Resource issues
- Process deviations
Required outputs: Project Plan; Project Budget

Review topic: Review of Issues and Follow-up Actions
Required outputs: Open Issues List
Permissions to use the tables were granted by DRM Associates, November 8, 2006.
Note that the tables were primarily designed for the automotive industry; however, they contain general information that can be tailored to specific design reviews.
Hardware Design Documentation should adequately disclose all hardware design infor-
mation. Typical hardware design documents include:
1. Theoretical math calculations, Models, and/or Simulation Reports,
2. Hardware requirements and specification,
3. Hardware Interface requirements and specification,
4. Circuit and/or AutoCAD drawings with parts listings,
5. Hardware Test Plans and Test Results Documents,
6. Training Plan and Training Manuals, and
7. Risk Assessments and Mitigation Plans, which must be formally addressed within the design documentation.
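Because the Requirements Traceability Matrix described earlier in this section is essentially tabular data linking each requirement to the specification paragraph, design document, and test that address it, a minimal sketch of one in Python might look like the following; the requirement IDs, document names, and status values are hypothetical.

```python
# Hypothetical Requirements Traceability Matrix: requirement ID -> traceability record.
rtm = {
    "REQ-001": {"spec_paragraph": "3.2.1", "design_doc": "SDD section 4",
                "test": "TP-07", "status": "verified"},
    "REQ-002": {"spec_paragraph": "3.2.4", "design_doc": "Schematic SCH-12",
                "test": "TP-09", "status": "open"},
    "REQ-003": {"spec_paragraph": "3.3.1", "design_doc": None,
                "test": None, "status": "open"},
}

def unaddressed(matrix):
    """Requirement IDs that no design document yet addresses."""
    return [req for req, rec in matrix.items() if rec["design_doc"] is None]

def unverified(matrix):
    """Requirement IDs that have not yet been verified by a test."""
    return [req for req, rec in matrix.items() if rec["status"] != "verified"]

print("Not addressed by design:", unaddressed(rtm))
print("Not yet verified:", unverified(rtm))
```

Queries such as these are what let a review committee confirm at a glance that every design requirement is being addressed.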
14.3 CONDUCTING THE DESIGN REVIEW
Normally, the Design Review Chairperson is responsible for conducting the design review;
however, for student design, the Team Chairperson is responsible for conducting the design
review. In most organizations, it is essential that customer representatives and users participate
in design reviews; however, for student teams, it is essential that the industry sponsors partici-
pate in the review. For either commercial companies or student teams, it is essential to include
a representation from appropriate specialties; e.g., hardware, systems, software, human factors
integration, facilities engineering, etc. It is recommended that student teams hold their review
at the sponsor’s facility. Chairpersons should make sure that all participants have ample oppor-
tunity to address questions, issues, and concerns. Typically, the design reviews take the form
of formal presentations by the design team to the full Review Committee. The presentations
should begin with a brief overview of the overall program (scope, deliverables, and milestone
schedules) to set the stage for the design briefs and an overall systems perspective reflecting
the major subsystems and how they interface to comprise the total system. The briefing format
should reveal the following:
1. Identification of each requirement referenced to the appropriate specification paragraph and/or work task.
2. The design approach for preliminary design review (PDR) or critical design review (CDR) for each requirement. Illustrations should be included wherever feasible.
3. Risk assessment for each requirement and the risk mitigation techniques employed to
manage the risk.
4. Risk management plan including who is responsible for carrying out the risk mitigation
strategies.
5. Safety and Human factors.
14.4 DESIGN REVIEW OUTPUT
At the conclusion of the design review, minutes of the meeting or a Summary Report of the
meeting must be generated. Documents to be submitted with the Summary Report include
all the design documents used during the course of the design review. As a minimum, the
following documents should be attached or forwarded with the report:
1. Contractor’s Proposal or copy of signed contract,
2. SOW (Tasking with Gantt Chart),
3. Specifications,
4. Requirements Traceability Matrix (RTM),
5. All Circuit, AutoCAD Drawings,
6. All Design Documents, and
7. Requests for Action (RFA).
Requests for Action (RFA) are formal forms generated to document questions, issues, and
concerns that surface during the design review. It is essential that suspense dates and responsi-
bilities for resolving the RFAs be assigned before completion of the design review. The student
design teams must be provided some form of RFA as shown in the Appendix of the Design
Review Report.
14.5 EXIT CRITERIA
Exit criteria are the minimum essential items necessary to successfully complete a design
review before proceeding into the next phase. Therefore, project managers should verify that the items specified in the statement of work, the specifications, and the requisite items describing the design have been successfully resolved and that all action items are closed. They should also ensure
acceptance of required items and acceptance of the design review minutes. Is it over now? It
is not over until management has made the determination from the Exit Documents as to
whether or not the program/project/design is ready to proceed into the next phase based upon
successful completion of the exit criteria (Signatures are required).
REFERENCES
[1] Technical Design Reviews, Naval Air Warfare Center, Training Systems Division,
2002.
[2] DRM Associates. (2002). Example of a Design Review Agenda (Preliminary Design Review)
[Online]. Available: http://www.npd-solutions.com/designreview.html
C H A P T E R 15
Making Technical Decisions
This chapter is added to this book on project management to help student teams learn to
make rational informed decisions during their senior design projects. Even though everyone
makes daily decisions, not many of those decisions are associated with project management and
technical matters. The psychology of decision-making varies among individuals. Comedians
poke fun at the decision process between men and women when they draw analogies to shopping
differences. Women spend hours in a store buying one item, because they search for all sorts
of alternatives with the lowest price being one of the factors; whereas, men go in, see what
they came for, get it, and they are out of there in a couple of minutes. Comedians propose that
the basis for the difference in decision behavioral patterns goes way back to our cave dwelling
ancestors when women would go “berry picking” and they were very picky about their berries.
This one is good, no good, ok, bad; hence cavewomen would spend hours picking berries. Cave
men, on the other hand, were hunters. “There is the rabbit, kill it!” Off goes the arrow; “got it,
now time to go home and eat! Ugh!” Truly, this caveman approach is not the way of modern
decision-making with today’s technological advances.
Webster’s dictionary [1] defines “Decision” as the act of making up one’s mind; the result
or conclusion arrived at by deciding. “Decision-Making” is defined in Webster’s dictionary as
the process by which decisions are made. The Center for the Study of Work Teams (CSWT) at
the University of North Texas [2] defined “Group Decision-Making” as the process of arriving
at a judgment based upon information and the feedback of multiple individuals.
15.1 GROUP DECISION-MAKING PROCESS
Various organizations use different Decision-Making Models to establish a systematic means
of developing effective group decision-making. Since a multiplicity of models exists, only the
four basic “Group Decision-Making Models” will be discussed:
1. Rational Model,
2. Political Model,
3. Process Model, and
4. Garbage Can Model.
15.1.1 The Rational Model
The Rational Model is based on an economic view of decision-making and grounded on
goals/objectives, alternatives, consequences, benefits or opportunities, and optimization.
The Rational Model assumes that complete information regarding the decision to be
made is available; thus, decision-makers consistently assess advantages and disadvantages of
alternatives with goals and objectives in mind. Additionally, they will evaluate the consequences
of selecting or not selecting each alternative. Finally, the alternative that provides the maximum
utility (i.e., the optimal choice) will be selected as the best choice or solution. The rational
model is often used as the baseline against which other models are compared. With the
Rational Model, decisions are made deductively by determining goals and objectives to be
obtained, evaluating the potential alternatives based on the information at hand and choosing
the optimal alternative.
The advantage of the Rational Model is that it uses a logical and sequential approach.
The disadvantage of the model is that it assumes there are no intrinsic biases in the decision-making process.
15.1.2 The Political Model
With the Political Model, groups or individuals do not decide through rational choice with
regard to objectives; instead, the decision-makers are motivated by and act on their own needs
and perceptions. The model involves bargaining among the decision-makers as each one tries
to get his/her perspective to be the one of choice and does not involve or require making full
information available. The only advantage of the Political Model is that it emulates how the real
world operates (i.e., bargaining related to personal agendas). The greatest disadvantage of the
model is that the best solution or decision may not be selected; for example, decision-making
in the “U.S. CONGRESS” is based along party or constituency lines rather than on rational
goal-oriented or technical merits [2].
15.1.3 The Process Model
Process Model decisions are based on standard operating procedures, or pre-established guide-
lines within the organization in which conformity to past and present organizational rules
and standards is an integral part. Conformity relates to the fact that reasoning for the deci-
sion is based on the predetermined guidelines: REGULATIONS! Again, large government
organizations too often tend to quote regulations [2].
15.1.4 The Garbage Can Model
The last decision model is the Garbage Can Model, which is used to make judgments or decisions
on tasks within organizations where the technologies are not clear. In the Garbage Can Model,
the involvement of participants as well as the amount of time and effort given to the decision
process fluctuates such that choices are usually inconsistent and not well defined. However, the
model provides a real-world representation of the nonrational manner in which decisions are
often made by individuals and within some organizations. “Ad Hoc” decisions made by “Flying
by the seat of the pants!” are not the most efficient means of making a decision. Let us hope
that student design teams avoid and never use this model in making technical (engineering)
decisions [2].
15.2 U.S. NAVY EXECUTIVE DECISION-MAKING FRAMEWORK
The U.S. Navy includes definition, analysis, decision, reconciliation, and execution phases in
their Decision-Making Framework [3]. The definition phase requires describing in detail the
following:
1. Problem Statement,
2. Decision Objectives,
3. Context,
4. Boundaries or Limits, and
5. Analytic Objectives.
The analysis phase requires development of decision criteria based on examining the
validity, reliability, practicality of the solution, the uncertainty and risks, the analytical method,
sensitivity analysis, the decision model, and alternatives.
The decision phase requires taking time to review the entire decision process for timing,
any spillover effects, organizational issues, political issues, evaluating the internal decision, and
performing a reality check on the process and solution.
The reconciliation phase may be thought of as “Smoothing ruffled feathers,” that is
removing all negative effects on participants. One may also consider this phase as conflict
resolution among participants. What strategies or techniques to use in conflict resolution, i.e.,
win–win compromise with mutual gains or zero-sum on the scorecards?
The last phase is execution of the decision. Implementation of any decision or solution
requires planning on how to carry out the decision and verifying that the decision is being carried
out correctly. The plan must detail who (Which individual or organization is responsible for
implementing the decision?), how (How is the decision to be executed?), and what controls?
The execution of the decision is not simply, “Here is a memo, go do it!” Implementation of
the decision should be “verified” by those making and issuing the implementation plans or
directives. Verification requires measurement of some metric, some feedback mechanism on
the progress; i.e., EVA, QA, etc., and a mechanism for adjustments to the implementation.
15.3 DECISION MATRIX OR UTILITY FUNCTION
Lessard [4] and Bronzino [5] used a method for making technical decisions based on the Rational Model, which is referred to by various terms, i.e., Decision Matrix [6], Weighted Function Evaluation [5], and Utility Function [4].
15.3.1 Weighted Function Evaluation
Weighted function evaluation requires assigning a weighting factor (usually 0–10) to denote the relative importance of each of the desired attributes. In collaboration with the other evaluators, each evaluator determines how well each device meets each attribute with individually assigned scores (between 0 and 10), and then multiplies the scores by the respective weighting factors to determine a weighted score for each attribute for each device. The total weighted scores (averaged over multiple evaluators) are used to determine the overall rating for each device. The technical decision uses these ratings to determine the relative ability of each device to meet the specified requirements [4].
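To make the arithmetic concrete, the following Python sketch carries out a weighted function evaluation for two candidate devices scored by two evaluators; the attribute names, weighting factors, and scores are invented for illustration only.

```python
# Hypothetical attributes with weighting factors (0-10) reflecting relative importance.
weights = {"accuracy": 9, "cost": 6, "ease_of_use": 4}

# Each evaluator assigns a 0-10 score per attribute for each device.
scores = {
    "Device A": {"evaluator_1": {"accuracy": 8, "cost": 5, "ease_of_use": 7},
                 "evaluator_2": {"accuracy": 7, "cost": 6, "ease_of_use": 8}},
    "Device B": {"evaluator_1": {"accuracy": 6, "cost": 9, "ease_of_use": 5},
                 "evaluator_2": {"accuracy": 5, "cost": 8, "ease_of_use": 6}},
}

def weighted_rating(per_evaluator):
    """Average the evaluators' scores per attribute, then apply the weighting factors."""
    total = 0.0
    for attribute, weight in weights.items():
        average = sum(ev[attribute] for ev in per_evaluator.values()) / len(per_evaluator)
        total += weight * average
    return total

for device, per_evaluator in scores.items():
    print(f"{device}: overall weighted rating = {weighted_rating(per_evaluator):.1f}")
```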
15.3.2 Authors’ Recommendations
The authors recommend that student teams use the “Utility Function Method” for evaluation
and decision-making. Teams should start by evaluating product objectives and user require-
ments, and translate requirements into engineering system or device specifications. Lastly, apply
the “Utility Function” in making technical or engineering decisions.
15.3.3 Utility Function
The Utility Function is a quantitative methodology appropriate to assess the relative merits
of the available methods, systems, or devices. The first step is to determine essential variables
and their respective weights. Variables are defined as those factors necessary to evaluate with
some figure of merit the most useful system. A limitation of the utility function analysis is that
the outcome of the evaluation may not be the same for other applications, e.g., purchasing a
medical device for use in a medical evacuation aircraft or a large well-equipped hospital. Some
consider it a limitation that the function is neither unique nor inclusive of all possible variables; in fact, very seldom are all the critical factors known and included.
The selection of factors is a commonsense approach in which it is necessary to evaluate
the importance of what is being measured; e.g., in the determination of vital life signs. How the
measurement is obtained is not as important as the need to evaluate how accurately the system measures an essential vital sign. The next step is to prioritize or assign Weighting Factors
by order of importance.
15.4 FACTOR WEIGHTS
Factor weights are the coefficients or multiplying factors by which the variables (factors) are
multiplied. The magnitudes or values assigned to the factor weights are not unique. One
method of obtaining the weights is to conduct a survey by having a large number of engineers
or specialists in the area assign values based on some guidelines and criteria. Surveys require an
extended period of time to collect and analyze. One should always question, “How dependable
and reliable will the final results be?” The answer is that the results will depend on how well
the selection of factors and assignment of weights describe or represent the usefulness of the
criteria for the specified conditions.
15.5 GRADING SCALE
The next step is to select a scale for the factor weights, for example, one may use a scale from 5
to 0 in three discrete levels to represent:
1. Most desirable (5 points),
2. Acceptable (3 or 2 points), or
3. Unacceptable (1 or 0 points).
A scale may be doubled for the very important factors; thus, factors are given weights based on their relative importance. For example, suppose it is determined that the type of electrodes used is very important and is weighted 10 points, with three levels and points:
1. Noncontact—10 points.
2. Noninvasive without media—5 points.
3. Noninvasive with media (i.e., gel)—0 point.
In evaluating, the relative merit and/or utility value of each candidate system is calculated after
considering all the factors in the model [Eq. (15.1)]:
(cid:1)
=
yn
ai j xi
(15.1)
where
xi is the ith factor,
ai j is the weight of the ith factor with j degrees, and
yn is the utility value of the nth system.
Since the factors are descriptors, ai j xi is not a product, but rather a designation of points that
are summed to yield a utility value.
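A short worked example of Eq. (15.1), using the electrode grading scale above together with one additional invented factor and two hypothetical candidate systems, might look like this in Python:

```python
# Points (a_ij) assigned to each level of each factor, per the grading scale above.
electrode_points = {"noncontact": 10, "noninvasive_no_media": 5, "noninvasive_with_media": 0}
accuracy_points = {"most_desirable": 5, "acceptable": 3, "unacceptable": 0}  # invented factor

# Each hypothetical candidate system is described by the level it achieves on each factor.
candidates = {
    "System 1": {"electrodes": "noncontact", "accuracy": "acceptable"},
    "System 2": {"electrodes": "noninvasive_with_media", "accuracy": "most_desirable"},
}

def utility(levels):
    """Sum the points designated for each factor level (y_n in Eq. (15.1))."""
    return electrode_points[levels["electrodes"]] + accuracy_points[levels["accuracy"]]

for name, levels in candidates.items():
    print(f"{name}: utility value y = {utility(levels)}")
```

Under this set of factors and point levels, the system with the larger utility value would be the preferred choice.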
15.6 SUMMARY
In making technical decisions, teams should evaluate objectives and user or system requirements.
Then, convert requirements into engineering specifications of a system or device. Be sure to
use the Utility Function Method for evaluation of alternatives leading to a rational, systematic
“Group Decision” based on the average of individual analyses.
REFERENCES
[1] Webster’s New Collegiate Dictionary. Springfield, MA: G. & C. Merriam, 1973.
[2] Center for the Study of Work Teams, Group Decision Making Within the Organization:
Can Models Help? Denton, TX: University of North Texas, 1996.
[3] US Navy Command and Staff School. (1987). Navy Web Site. [Online].
[4] C. S. Lessard and W. C. Wong, “Evaluation of noninvasive measurement methods and
systems for application in vital signs detection,” USAFSAM-TR-85-44-PT-1, 1983.
[5] J. Bronzino, Management of Medical Technology. London, U.K.: Butterworth, 1992.
[6] Executive Decision. New York: McGraw-Hill, 1964.
C H A P T E R 16
Management of Team Conflict
This chapter is presented in part from an article by Vern R. Johnson in IEEE-USA Today's Engineer, with the permission of IEEE-USA Publishing [1].
Conflict is defined in Webster’s dictionary [2] as a noun meaning argument, dispute, or
controversy; and, conflict is also defined as a verb meaning to disagree, to differ, to vary, or to
disturb.
Disagreements and conflicts are often described as the heat generated by change. Some
individuals claim they can “feel” when conflict exists; whereas, others believe the conflicts
“. . . are unavoidable.” Nevertheless, “A Conflict will arise!” whenever individuals disagree in
a heated verbal exchange about what direction to take when changes occur or are necessary.
Usually, changes due to business directions, customers, and technology within a project may
require realigning strategies and goals around the new direction with team members’ agreement
or compromise about the realignment [1].
Three basic causes of conflict include [1]:
1. Information: Refers to the conflict that is based on incorrect information or the lack
of information. For example, if someone does not know when or where the team is to
meet and does not attend the meeting, that individual’s absence will limit the team’s
progress and may create “Conflict” between members.
2. Goals/Roles: Refers to the conflict that will arise if some team members do not under-
stand the team’s task, or if members do not know specifically what their assignments
are, those members cannot align themselves with the project or the team.
3. Values: Refers to the conflict that will be present if team members do not share values
relative to the task and the approach used in the tasks.
The ability of individuals and/or teams to accomplish tasks has been correlated directly
with the team’s ability to handle conflict. Since it is well known that conflict interferes with
team productivity, when a team experiences conflict, it is essential that its members resolve the
conflict before moving forward [1].
TABLE 16.1: Communication Model [1]

STEP    INDIVIDUAL A       INDIVIDUAL B
1       Data               Listen
2       Interpret          Listen
3       Feelings           Listen
4       Needed action      Listen
5       Listen             Echo
6       Listen             Decide
Often, what appears to be a conflict may simply be a misunderstanding. One approach
to determine if a conflict exists is to use the communication model in Table 16.1.
Individual A does the following four things while Individual B listens [1]:
1. Give the data as an objective statement, without making a judgment or offering feelings.
For example, “I see that you . . . ” or “I noticed that . . . .”
2. Make an interpretation. Individual A shares his/her judgment of what the data means.
“I interpret this to mean . . . .”
3. Identify the feelings that result from interpretation. There are always "feelings"; i.e.,
variations of anger, sorrow, joy, or fear associated with conflict. The individual A should
make a simple statement that recognizes those feelings, such as, “I feel . . . about it” or
“. . . and it makes me angry.”
4. State the need to be filled or the required action. For example, “I would like you to . . . ,”
or “I want you to . . . as a demonstration that you are still part of the team.”
Then, individual A should stop talking and listen while individual B responds (steps 5
and 6).
5. Echo the expected action. Individual B should parrot back what was just heard to
validate understanding of the message. For example, “I see that you are angry and you
don’t think I care,” or “You want me to . . . as a demonstration that I am part of the
team.”
6. Decide what you are willing to do about person A’s concern and respond accordingly;
e.g., “To prove myself, I will . . . ,” or “I had no idea you would interpret my actions
that way. In the future, I will . . . .”
If individual B’s answer is, “I will cooperate with your request,” then the situation resolves
itself. However, if individual B’s answer is, “I will ignore your need or requested action,” then
there is a conflict that must be resolved before individuals A and B can move forward and be
productive.
Without a doubt, most individuals prefer to give advice or feedback rather than to receive it. However, sometimes it may be necessary to notify team members that some
of their actions are not acceptable, often resulting from oversight regarding ground rules, shared
responsibilities, or personal behaviors. Being invited to examine your behavior is receiving a
“wake-up call” from one’s peers that may be an emotional, awkward, or uninvited experience. It
is critical that the person who approaches a team member to give appropriate feedback knows
how to give feedback without antagonizing the team member or members. Therefore, it is
important for both the individual giving feedback and the individual receiving feedback to
express appropriate attitudes during any discussion [1].
16.1 GIVING FEEDBACK
In giving feedback, keep the meeting cordial; individuals should give appropriate feedback in a simple, straightforward way, but with some degree of care. It is not a time for team members
to gang up on the person or to express a litany of concerns. However, do not withhold concerns
until they have become overwhelming, because the sooner the problems are approached, the
easier it will be to resolve them. The focus should be on the individual’s behaviors and the results
of those behaviors, or on the technical merits of an approach, rather than on the individual’s
personality. Cite a specific situation as an example of unacceptable behavior and describe the
change that may need to be made; however, when expressing concerns, avoid giving advice and
then allow time for the individual to respond [1].
It is just as important for people receiving corrective feedback to know how to respond.
Here are some basic guidelines for receiving feedback. Since there is a problem that needs to be understood, listen with an open mind. Make sure to understand what teammates are saying, and do not hesitate to ask questions for clarification if necessary. Do not overreact, agree with, or reject the confrontation; the initial objective at this point is to gather information. Express appreciation for the information, since teammates are
taking a risk by trying to help [1].
16.2 CONFLICT RESOLUTION METHODS
There are four conflict resolution methods [1]:
1. Avoidance: Avoidance is the general method used when it has been determined that
the relationship is not important enough to save. For example, “I can’t handle this. I’m
out of here.” or, “The cost of complying with your request is just too high. Let’s call
the whole thing off.”
2. Exercise power: When power is exercised, an individual takes an assertive position based
on power or position; e.g., “I am in control.” or “I am the boss.”
3. Who is right? This method requires a third party to mediate the conflict. If the conflict
is based on flawed information or confusion over goals or roles, it may help to go to an
expert or consult a reference book to find an acceptable solution.
4. Interest-based: When a conflict exists between individuals, each takes a position, they
anchor themselves to their positions, and they become entrenched with a barrier be-
tween them.
What should a project manager do when there appear to be conflicts among team
members? First, find out “why” the individuals took the position they did. What is behind it?
From the answers, determine what interest the individuals have in common. The objective is
to have the individuals concentrate on the common interest rather than on their differences.
Common interests can lead to compromise, which, in turn, helps those in conflict to relax from
their entrenched positions [1].
16.3 SUMMARY
A simple three-step process that should be used by the project manager includes:
1. Achieve contact
a. Validate the feelings of other people.
b. Learn why they have taken a position.
c. Understand them.
2. Boil down the problem
a. Ask clarifying questions about the issues that appear to exist.
b. Prioritize these issues.
3. Choice making
a. Attempt to identify alternatives that can be chosen to provide an appropriate com-
promise.
b. Protect the common interest.
All members must take responsibility for implementation, and once implemented, the
conflict will recede. If accountability for implementation is not verified, the conflict can return
without warning. As a final note, after a conflict is resolved, it is amazing how effective a “Thank
You” is at bringing goodwill back into the relationship [1].
REFERENCES
[1] V. R. Johnson. (2005, Jan.) Managing conflict in a small team setting. IEEE-USA
Today’s Eng. [Online]. Copyright IEEE 2006. Available: http://www.todaysengineer.
org/2005/Jan/conflict.asp.
[2] Webster’s New Collegiate Dictionary. Springfield, MA: G. & C. Merriam, 1973.
Author Biography
Charles S. Lessard, Ph.D., Lt Colonel, United States Air Force (Retired), is an Associate Pro-
fessor in the Department of Biomedical Engineering at Texas A&M University. His areas of
specialization include Physiological Signal Processing, Design of Virtual Medical Instrumenta-
tion, Noninvasive Physiological Measurements, Vital Signs, Nystagmus, Sleep & Performance
Decrement, Spatial Disorientation, G-induced Loss of Consciousness (G-LOC), and Neural
Network Analysis. Dr. Lessard received a B.S. in Electrical Engineering from Texas A&M (1958),
an M.S. from the U.S. Air Force Institute of Technology (1965), and a Ph.D. from Marquette
University (1972).
As an officer in the U.S. Air Force, Lessard was a pilot of F86L Interceptors and
B-52G Strategic Bombers. He also served as Research Scientist and Chief of Biomedical
Engineering Research for the Aerospace Medical Division of the School of Aerospace Medicine,
at Brooks Air Force Base, Texas. In this capacity he planned and directed efforts in biomedical
projects associated with the Manned Orbiting Laboratory Program (MOL), developed medical
instrumentation (EEG Analyzer), conducted research on computer analysis of sleep
brainwaves and cardiac signals, and studied the effects of zero-gravity (0-G) on the cardiac response
during Valsalva maneuvers. At the U.S. Air Force Medical Research Laboratories, Wright-Patterson
AFB, Lessard worked with the Biocybernetics Engineering Wing on neural networks, self-
organizing controls (SOC), and remotely piloted vehicles. He was the Field Office Director and
Program Manager with the Electronics Systems Division of the Air Force Systems Command
during the installation and testing of Spain's Automated Air Defense System as a part of the
Treaty of Friendship and Cooperation between the US and Spain. Dr. Lessard retired from the
U.S. Air Force in 1981 after serving as the Deputy Director of the Bioengineering and Biodynamics
Division at the Aerospace Medical Research Laboratory (AMRL), Wright-Patterson Air Force Base.
He began his academic career with Texas A&M University in 1981. His program management
experiences are applied in his two Senior Design Courses.
Charles Lessard was a Senior Scientist for Veridian Inc. at Brooks Air Force Base and lead
scientist for KRUG Life Sciences, Inc., working on the psychological and neurophysiological
manifestations of spatial orientation, the mechanisms of spatial orientation, and countermeasures
against spatial disorientation. Additionally, he was responsible for developing and conducting research in spa-
tial disorientation and high acceleration (Gz forces) induced loss of consciousness (G-LOC).
He was a science and engineering expert for the USAF, Air Force Research Laboratories and
Wyle Laboratories, Inc. on joint military (Air Force and Navy) G-LOC studies performing
analysis of physiological data, i.e., Auditory Evoked Responses (AER), electroencephalograms
(EEG), electrocardiograms (ECG), electro-oculograms (EOG), Oxygen saturation (SaO2),
and Tracking Tasks Performance data.
Joseph P. Lessard is the Vice President of Americas for Globeleq, Inc. Globeleq, Inc. is a global
owner and operator of power assets focused on the emerging markets. He is responsible for all
business development activities in Latin America and the Caribbean. Mr. Lessard received a
B.S. in Electrical Engineering in 1987 and an M.B.A. in 1994 from Texas A&M.
As an officer in the U.S. Navy, Mr. Lessard trained in nuclear power and served on the
ballistic missile submarine USS Alabama.
Mr. Lessard entered the private power industry in 1994 as a project manager for the
Coastal Power Company. The following year he was named Regional Managing Director
with responsibility for business activities in Southeast Asia. In 1997, Mr. Lessard shifted his
attention to the U.S. power market as Managing Director of the Northeast United States. He
had profit and loss responsibility for three power plants and led a successful acquisition of a
fourth power plant.
Mr. Lessard left Coastal Power Company in 1999 to form Hart Energy International,
a start-up power company focused on the aggregation of power asset investments in Latin
America. Hart Energy’s successful direction of two acquisitions – EGE Haina in the Dominican
Republic and Entergy’s Latin American portfolio – led to the launch of Globeleq, Inc. in June
2002.
At Globeleq, Mr. Lessard has led the company’s efforts in Latin America including the
acquisition of a 200MW hydroelectric company, the divestiture of two non-strategic assets,
the development of a greenfield thermal power plant, and the placement of two local bond
issues. He is currently directing three greenfield development projects in Guatemala, Panama
and Peru.
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=6812953.pdf&bkn=6812952&pdfType=book
|
Series ISSN: 1939-5221
SYNTHESIS LECTURES ON ENGINEERING
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo, Bucknell University
In recent years, the life sciences have embraced simulation as an important tool in biomedical research.
Engineers are also using simulation as a powerful step in the design process. In both arenas, Matlab has
become the gold standard. It is easy to learn, flexible, and has a large and growing userbase. MATLAB
for Engineering and the Life Sciences is a self-guided tour of the basic functionality of Matlab along
with the functions that are most commonly used in biomedical engineering and other life sciences.
Although the text is written for undergraduates, graduate students and academics, those in industry may
also find value in learning Matlab through biologically inspired examples. For instructors, the book is
intended to take the emphasis off of learning syntax so that the course can focus more on algorithmic
thinking. Although it is not assumed that the reader has taken differential equations or a linear algebra
class, there are short introductions to many of these concepts. Following a short history of computing,
the Matlab environment is introduced. Next, vectors and matrices are discussed, followed by matrix-vector
operations. The core programming elements of Matlab are introduced in three successive chapters on
scripts, loops, and conditional logic. The last three chapters outline how to manage the input and output
of data, create professional quality graphics and find and use Matlab toolboxes. Throughout, biomedical
examples are used to illustrate Matlab's capabilities.
About SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis
Digital Library of Engineering and Computer Science. Synthesis Lectures
provide concise, original presentations of important research and development
topics, published quickly, in digital and print formats. For more information
visit www.morganclaypool.com
Morgan & Claypool Publishers
ISBN: 978-1-60845-710-6
www.morganclaypool.com
MATLAB for Engineering and the Life Sciences
Synthesis Lectures on Engineering
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo
2011
Systems Engineering: Building Successful Systems
Howard Eisner
2011
Fin Shape Thermal Optimization Using Bejan’s Constructal Theory
Giulio Lorenzini, Simone Moretti, and Alessandra Conti
2011
Geometric Programming for Design and Cost Optimization (with illustrative case study
problems and solutions), Second Edition
Robert C. Creese
2010
Survive and Thrive: A Guide for Untenured Faculty
Wendy C. Crone
2010
Geometric Programming for Design and Cost Optimization (with Illustrative Case Study
Problems and Solutions)
Robert C. Creese
2009
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey and Jeffrey W. Holmes
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Analog Multimedia
Explorations
Lina J. Karam and Naji Mounsef
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Digital Multimedia and
Robotics Explorations
Lina J. Karam and Naji Mounsef
2008
CAD/CAM of Sculptured Surfaces on Multi-Axis NC Machine: The DG/K-Based
Approach
Stephen P. Radzevich
2008
Tensor Properties of Solids, Part Two: Transport Properties of Solids
Richard F. Tinder
2007
Tensor Properties of Solids, Part One: Equilibrium Tensor Properties of Solids
Richard F. Tinder
2007
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
2007
Project Management for Engineering Design
Charles Lessard and Joseph Lessard
2007
Relativistic Flight Mechanics and Space Travel
Richard F. Tinder
2006
Copyright © 2011 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations in
printed reviews, without the prior permission of the publisher.
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo
www.morganclaypool.com
ISBN: 9781608457106 paperback
ISBN: 9781608457113 ebook
DOI 10.2200/S00375ED1V01Y201107ENG015
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING
Lecture #15
Series ISSN
Synthesis Lectures on Engineering
Print 1939-5221 Electronic 1939-523X
MATLAB for Engineering
and the Life Sciences
Joseph V. Tranquillo
Bucknell University
SYNTHESIS LECTURES ON ENGINEERING #15
Morgan & Claypool Publishers
ABSTRACT
In recent years, the life sciences have embraced simulation as an important tool in biomedical research.
Engineers are also using simulation as a powerful step in the design process. In both arenas, Matlab
has become the gold standard. It is easy to learn, flexible, and has a large and growing userbase.
MATLAB for Engineering and the Life Sciences is a self-guided tour of the basic functionality of
Matlab along with the functions that are most commonly used in biomedical engineering and other
life sciences. Although the text is written for undergraduates, graduate students and academics,
those in industry may also find value in learning Matlab through biologically inspired examples. For
instructors, the book is intended to take the emphasis off of learning syntax so that the course can
focus more on algorithmic thinking. Although it is not assumed that the reader has taken differential
equations or a linear algebra class, there are short introductions to many of these concepts. Following
a short history of computing, the Matlab environment is introduced. Next, vectors and matrices
are discussed, followed by matrix-vector operations. The core programming elements of Matlab
are introduced in three successive chapters on scripts, loops, and conditional logic. The last three
chapters outline how to manage the input and output of data, create professional quality graphics
and find and use Matlab toolboxes. Throughout, biomedical examples are used to illustrate Matlab’s
capabilities.
KEYWORDS
computing, MATLAB, matrix, vector, loops, scripting, conditional logic, biological
computing, programming, simulation
Contents
Preface . . . . xiii
1 Introduction . . . . 1
  1.1 Introduction . . . . 1
  1.2 A Short History of Computing . . . . 1
    1.2.1 The Pre-history of Computing . . . . 1
    1.2.2 The Early History of Digital Computing . . . . 2
    1.2.3 Modern Computing . . . . 4
  1.3 A History of Matlab . . . . 5
  1.4 Why Matlab? . . . . 6
2 Matlab Programming Environment . . . . 9
  2.1 Introduction . . . . 9
  2.2 The Matlab Environment . . . . 9
  2.3 The Diary Command . . . . 9
  2.4 An Introduction to Scalars . . . . 11
  2.5 Basic Arithmetic . . . . 12
    2.5.1 Priority of Commands . . . . 12
    2.5.2 Reissuing Previous Commands . . . . 12
    2.5.3 Built-in Constants . . . . 13
    2.5.4 Finding Unknown Commands . . . . 13
  2.6 The Logistic Equation . . . . 14
  2.7 Clearing Variables and Quitting Matlab . . . . 15
  2.8 Examples . . . . 15
3 Vectors . . . . 17
  3.1 Introduction . . . . 17
  3.2 Vectors in Matlab . . . . 17
    3.2.1 Creating Vectors in Matlab . . . . 17
    3.2.2 Creating Regular Vectors . . . . 18
    3.2.3 Special Vectors and Memory Allocation . . . . 18
  3.3 Vector Indices . . . . 18
  3.4 Strings as Vectors . . . . 20
  3.5 Saving Your Workspace . . . . 20
  3.6 Graphical Representation of Vectors . . . . 21
    3.6.1 Polynomials . . . . 21
  3.7 Exercises . . . . 22
4 Matrices . . . . 25
  4.1 Introduction . . . . 25
  4.2 Creating a Matrix and Indexing . . . . 25
    4.2.1 Simplified Methods of Creating Matrices . . . . 25
    4.2.2 Sparse Matrices . . . . 26
  4.3 Indexing a Matrix . . . . 27
    4.3.1 Higher Dimensional Matrices . . . . 27
  4.4 Simple Matrix Routines . . . . 27
  4.5 Visualizing a Matrix . . . . 28
    4.5.1 Spy . . . . 28
    4.5.2 Imagesc and Print . . . . 28
  4.6 More Complex Data Structures . . . . 29
    4.6.1 Structures . . . . 29
    4.6.2 Cell Arrays . . . . 30
  4.7 Exercises . . . . 30
5 Matrix – Vector Operations . . . . 33
  5.1 Introduction . . . . 33
  5.2 Basic Vector Operations . . . . 34
    5.2.1 Vector Arithmetic . . . . 35
    5.2.2 Vector Transpose . . . . 35
    5.2.3 Vector - Vector Operations . . . . 35
  5.3 Basic Matrix Operations . . . . 36
    5.3.1 Simple Matrix Functions . . . . 37
  5.4 Matrix-Vector Operations . . . . 38
    5.4.1 Outer Products . . . . 38
    5.4.2 Matrix Inverse . . . . 38
  5.5 Other Linear Algebra Functions . . . . 39
  5.6 Matrix Condition . . . . 40
  5.7 Exercises . . . . 41
6 Scripts and Functions . . . . 43
  6.1 Introduction . . . . 43
  6.2 Scripts . . . . 43
  6.3 Good Programming Habits . . . . 44
    6.3.1 Comments and Variables . . . . 44
    6.3.2 Catching Errors and Displaying Text . . . . 45
  6.4 Script Example - The Random Walk . . . . 45
  6.5 Functions . . . . 46
    6.5.1 Input-Output . . . . 47
    6.5.2 Inline Functions . . . . 48
    6.5.3 The Matlab Path . . . . 48
    6.5.4 Function Size . . . . 49
  6.6 Debugging . . . . 49
  6.7 User Input . . . . 49
    6.7.1 input . . . . 50
    6.7.2 ginput . . . . 50
  6.8 Function Example . . . . 50
  6.9 Exercises . . . . 52
7 Loops . . . . 55
  7.1 Introduction . . . . 55
  7.2 The For Loop . . . . 55
    7.2.1 For Loops Over Non-Integers . . . . 56
    7.2.2 Variable Coding . . . . 56
    7.2.3 For Loops Over an Array . . . . 57
    7.2.4 Storing Results in a Vector . . . . 57
  7.3 Euler Integration Method . . . . 58
    7.3.1 Numerical Integration of Protein Expression . . . . 58
  7.4 The Logistic Equation Revisited . . . . 60
  7.5 The While Loop . . . . 61
  7.6 Nested Loops . . . . 61
    7.6.1 Looping over Matrices . . . . 61
    7.6.2 Parameter Variation . . . . 63
  7.7 Exercises . . . . 64
8 Conditional Logic . . . . 67
  8.1 Introduction . . . . 67
  8.2 Logical Operators . . . . 67
    8.2.1 Random Booleans . . . . 68
    8.2.2 Logical Operations on Strings . . . . 68
    8.2.3 Logic and the Find Command . . . . 68
  8.3 If, elseif and else . . . . 69
    8.3.1 The Integrate and Fire Neuron . . . . 69
    8.3.2 Catching Errors . . . . 70
    8.3.3 Function Flexibility . . . . 71
    8.3.4 While Loops . . . . 71
    8.3.5 Steady-State of Differential Equations . . . . 71
    8.3.6 Breaking a Loop . . . . 73
    8.3.7 Killing Runaway Jobs . . . . 73
  8.4 Switch Statements . . . . 73
  8.5 Exercises . . . . 74
9 Data In, Data Out . . . . 79
  9.1 Introduction . . . . 79
  9.2 Built In Readers and Writers . . . . 79
  9.3 Writing Arrays and Vectors . . . . 80
    9.3.1 Diffusion Matrices . . . . 80
    9.3.2 Excitable Membrane Propagation . . . . 84
  9.4 Reading in Arrays and Vectors . . . . 86
    9.4.1 Irregular Text Files . . . . 87
  9.5 Reading and Writing Movies and Sounds . . . . 87
    9.5.1 Sounds . . . . 89
    9.5.2 Reading in Images . . . . 89
  9.6 Binary Files . . . . 90
    9.6.1 Writing Binary Files . . . . 90
    9.6.2 Reading Binary Files . . . . 91
    9.6.3 Headers . . . . 91
  9.7 Exercises . . . . 92
10 Graphics . . . . 93
  10.1 Introduction . . . . 93
  10.2 Displaying 2D Data . . . . 93
    10.2.1 Figure Numbers and Saving Figures . . . . 95
    10.2.2 Velocity Maps . . . . 97
    10.2.3 Log and Semi-Log Plots . . . . 97
    10.2.4 Images . . . . 98
    10.2.5 Other 2D Plots . . . . 99
    10.2.6 Subplots . . . . 100
  10.3 Figure Handles . . . . 100
    10.3.1 The Hierarchy of Figure Handles . . . . 101
    10.3.2 Generating Publication Quality Figures . . . . 102
  10.4 Displaying 3D Data . . . . 103
  10.5 Exercises . . . . 104
11 Toolboxes . . . . 107
  11.1 Introduction . . . . 107
  11.2 Statistical Analysis and Curve Fitting . . . . 108
    11.2.1 Data Fits to Nonlinear Function . . . . 108
    11.2.2 Interpolation and Splines . . . . 110
  11.3 Differential and Integral Equations . . . . 111
    11.3.1 Integrals and Quadrature . . . . 113
  11.4 Signal Processing Toolbox . . . . 114
  11.5 Imaging Processing Toolbox . . . . 115
  11.6 Symbolic Solver . . . . 116
  11.7 Additional Toolboxes and Resources . . . . 117
    11.7.1 Matlab Central and Other Online Help . . . . 119
Author's Biography . . . . 121
Preface
In 2004, Joel Cohen published a paper in the Public Library of Science (PLoS) Biology, titled
“Mathematics is Biology’s Next Microscope, only Better; Biology is Mathematics’ Next Physics,
Only Better”. The premise of the article was that in the near future there will be an explosion in
both math and biology as the two develop a similar synergistic relationship to the one that exists
between math and physics. The article goes on to hint that the computer will play a very large role
in this revolution, pushing mathematicians to confront the complexity and unpredictable nature of
biology, and pushing biologists to become more comfortable with the rigor of mathematics. To quote
directly,
The four main points of the applied mathematical landscape are data structures, algorithms,
theories and models (including all pure mathematics), and computers and software. Data
structures are ways to organize data, such as the matrix used above to describe the biological
landscape. Algorithms are procedures for manipulating symbols. Some algorithms are used to
analyze data, others to analyze models. Theories and models, including the theories of pure
mathematics, are used to analyze both data and ideas. Mathematics and mathematical theories
provide a testing ground for ideas in which the strength of competing theories can be measured.
Computers and software are an important, and frequently the most visible, vertex of the applied
mathematical landscape.
If you are going to work in the life sciences in the coming decades, it will be necessary for you to
master the rigor of algorithmic thinking, ways of storing and manipulating data, and the simulation
of biological models.
Engineers have been using simulation as a tool for many decades. It has been incorporated
into nearly every phase of the design process and is a core tool of engineering science. As such,
an amazing array of specialized computing languages has cropped up, each tailored to particular
needs. As learning to program can be a significant investment, you may wonder which tool you
should learn. It should be a tool that is easy to learn and useful right away. A good first language
should introduce you to the main elements of every programming language so that you can easily
learn a more specific language later. It should also be a language with a large enough userbase
that you can share code and algorithms. As evidenced by the number of courses taught at the
undergraduate level, Matlab fits the bill on all counts.
No one source could possibly do justice to the enormous capabilities of Matlab. You can
think of this text as the survival version of Matlab. As such, it is written to be a breadth-first
approach, from which the reader can jump off to other sources to go into more depth. Given the
outstanding world-wide support for Matlab, after reading this text, you should be able to find what
you need online.
For the student, it is important to understand that coding is like learning to ride a bike -
you can only learn through concentrated, hands-on experience. And, like learning to ride one bike
will make learning to ride another bike easier, once you learn how to program in Matlab, it will
be much easier to pick up another programming language. A common problem with learning a
language from a programming manual is that the language is divorced from actual problems. In this
text, an effort was made to tie programming techniques and concepts to biomedical or biological
applications. As such, there is a bias toward simulating classic models from theoretical biology and
biophysics. At the end of most chapters are a series of exercises. It is important to complete them
because it is here that additional Matlab commands and concepts are introduced. You will also find
that you will be typing in much of the code that appears in the text by hand. There is a reason why
it is important to type in the code yourself - in doing so, you will have time to question the purpose
of the line.
For the instructor, the intent of this text is to take the emphasis off of learning the syntax
of Matlab so that your class and lab time can focus on algorithmic thinking, mathematical routines,
and other higher-level topics that are difficult to learn from a text. A command line approach is used
rather than relying on Matlab's many built-in graphical user interfaces, because the command line is
backward compatible and will work on different computing architectures. The author has written a
conference proceeding for the American Society of Engineering Education (2011), “A Semester-
Long Student-Driven Computational Project” (download from www.asee.org or contact the author
at [email protected]), that details how the text was incorporated into a course. In particular,
the paper advocates the idea of “Coding to Think”, the engineering science equivalent of “Writing
to Think”. The article also contains a template for a semester-long project, ideas for games that
can teach algorithmic thinking, as well as a number of references to other computing education papers.
This text would not have been possible without the support of several important groups.
First, I would like to thank the Biomedical Engineering Department at Bucknell University, most
especially the Class of 2012 who used the first version of this text. Second, I would like to thank
a number of faculty colleagues, most especially Jim Maneval, Ryan Snyder, Jim Baish, Donna
Ebenstein and Josh Steinhurst, for their helpful conversation and comments. Last, I wish to thank
my family for their patience and for keeping me focused on what is really important in life.
Joseph V. Tranquillo
Lewisburg, Pennsylvania
C H A P T E R 1
Introduction
1.1 INTRODUCTION
Learning to program can be compared to learning to ride a bike - you can’t really learn it from a
book, but once you do learn you will never forget how. The reason is that learning to program is
really learning a thought process.
This text is not meant to be a supplement for a rigorous approach to Matlab. It is meant to
explain why Matlab is an important tool to learn as an undergraduate and to highlight the portions
of Matlab that are used on a daily basis. Unfortunately, you will not find the coolest, fanciest or
even the best parts of Matlab here, but rather a biased view of the most useful parts. You can think
of this as survival Matlab.
1.2 A SHORT HISTORY OF COMPUTING
Matlab is in some sense a blip in the overall history of computing. To provide some context, below
is an abbreviated history of computing.
1.2.1 THE PRE-HISTORY OF COMPUTING
Any history of computing must start with the logic of Aristotle. He was responsible for what in
computing has become known as conditional logic (what Aristotle called a Syllogism and later was
called deductive reasoning). For example, “If all men are mortal and Socrates is a man, then Socrates
is mortal”. Aristotle went on to categorize various types of conditionals, including the logical ideas
of AND, OR and NOT.
The next great computational hurdle occurred with the publication of An Investigation of the
Laws of Thought, on Which are Founded the Mathematical Theories of Logic and Probabilities in 1854 by
George Boole. In that work, Boole laid out a method of transforming Aristotle’s logical statements
into formal mathematical calculus. The key insight was that just as there are formal operations that
work on numbers, e.g., addition and division, there also exist formal operations that work on logical
statements. More work by Boole and Augustus De Morgan advocated the position that logical
human thought was simply computation following mathematical rules, and that a machine could
in principle perform the same functions.
An attempt to build such a machine was carried out by Charles Babbage, an English math-
ematician and mechanical engineer, even before Boole had published his work. Babbage laid out
plans for what was called an analytical engine, a mechanical device that would realize the type of
general computing outlined by Boole. As in most practical applications of theory, there were a
number of technical obstacles to overcome. Perhaps the greatest was that Babbage could not secure
funding to build his device. It was in fact never built until the London Science Museum used
Babbage’s original plans to make a perfectly working analytical engine in 1991. What is amazing is
that Babbage foresaw the type of architecture that would be necessary to make a working computer,
including a CPU with some sort of programming language, a data bus, memory and even a type of
printer and screen to visualize outputs.
As luck would have it the daughter of Lord Byron, Ada Lovelace, helped to translate Bab-
bage’s work into French. In her translation, she added in many of her own ideas which came to the
attention of Babbage. As a result of those notes, Babbage realized that for his general computing
device (hardware) to perform specific functions, a programming language (software) would be
necessary. While Babbage focused on the mechanics of the device, Lovelace began to write the first
computer algorithms. The first computer program ever written was by Ada and computed Bernoulli
numbers. It is amazing that she was writing algorithms for a machine that didn’t even exist! One of
Lovelace’s major advancements was to show how data and the computer instructions for operating
on that data could be saved in the same memory. She also was the first to recognize that a computer
could do more than act as a fancy calculator.
While computing is now often regarded as a practical pursuit, there are some who have
gone down the more philosophical road outlined by Aristotle and Boole. For example, Steven
Wolfram, the creator of Mathematica, published a book called A New Kind of Science in 2002
that advocated the idea that all of reality (including time and space) are the result of a giant
algorithm. Others in the artificial intelligence and cognitive science fields have used the computer
as a metaphor (either for or against) the human mind as nothing but a very good computer. There
is now even a field called experimental mathematics which attempts to prove true mathematical
statements with a new criterion for proof that uses numerical approximations on a computer.
1.2.2 THE EARLY HISTORY OF DIGITAL COMPUTING
The history of digital computing in the form we know it today began with a series of seminal
papers in the 1930s by Kurt Godel, Alonzo Church, Emil Post and Alan Turing. All helped to put
down a mathematical formulation for what it means to compute something. With an eye toward
practical applications, they also outlined how it might be possible to build a device that could
perform automatic computations. In a convergence of ideas, all of their proposals were found to be
equivalent ways of computing.
The difference between the 20th century and 19th century approaches to computing was
that the 20th century approach was based upon electricity, not mechanics. As such, switching
between states was faster and more functions could be performed in a given amount of time. With
the advent of World War II, both the Americans and English attempted to use computers to
perform tasks such as computing navigation instructions for ships and trajectories of ballistics. In
one of the most celebrated moments in computing history, an algorithm developed by Alan Turing
cracked the German Enigma cipher, enabling the English to have detailed knowledge of German
plans.
These first computers used vacuum tubes as the way to transition between states. In 1947,
Bell Labs created the first transistor, a tiny electrical switch that relied on quantum mechanics.
The development, and subsequent miniaturization of the transistor, decreased the power and size
of the computer hardware, while at the same time increasing switching speed. The trend to make
transistors smaller and more efficient has continued to push the entire electronics industry to greater
and greater feats of engineering.
Following the hardware/software split of Babbage and Lovelace, some worked on comput-
ing hardware while others worked on developing algorithms. It was during 1940-1960 that many
of the most important computer algorithms were developed, including the Monte Carlo method
( John von Neumann, Stan Ulam and Nick Metropolis), the simplex method of linear programming
(George Dantzig), the Householder matrix decomposition (Alston Householder), the quicksort
method of sorting (Tony Hoare) and Fast Fourier Transform ( James Cooley and John Tukey).
In the early history of computing, programmers were forced to speak the same language as
the computers (1s and 0s). This was very tedious and soon was replaced with a somewhat more
convenient type of programming language called assembly. Assembly languages allow programmers
to avoid using 1s and 0s but programming was still very cryptic. In 1957, John Backus led a team at
IBM to create FORTRAN, the first high-level programming language. It introduced plain English
keywords into programming, e.g., if, while, for and read, that made code easier to write, read and
share. Other high-level languages followed, such as LISP, COBOL, ALGOL and C. The major
advantage of these high level languages was that they were not specific to particular computer
hardware. As such, users could share programs more easily. The breakthrough made by Backus and
his team was to create a compiler. The purpose of a compiler is to make the translation from the
human text to an executable, the particular language of 1s and 0s that can be read by that particular
computer. It is important to realize that while the text of the program is not unique, the compiler
and the executable is unique to a computer’s type of hardware. For example, if C code is compiled
on a PC, it will run on the PC but not on a MAC. The actual code, however, could be written and
tested on either machine.
1.2.3 MODERN COMPUTING
While there was a clear dividing line between the pre-history of computing and early computing,
no one event signaled the modern era of computing. In fact, there were several events, most of
which enabled non-computer users to begin using a computer. Below we discuss a few of the more
recent advances which are part of modern computing.
In the early days of computing, only one program was allowed to run on the hardware at a
time. With the advent of operating systems, computers gained the ability to multitask. An operating
system is a master program that directs which programs can run and for how long. To be clear, the
computer is still doing one thing at a time, but now it can switch back and forth between tasks. If
the switching is fast enough, it can give the user the illusion that the computer is multitasking. This
is the reason why you can surf the web, listen to music and work on your Matlab homework all at
the same time. The three most common operating systems are Microsoft Windows, Mac OS and
various flavors of UNIX (including Linux).
The machine language of 1s and 0s is sometimes called the first generation of computing
languages. Assembly and high-level languages are the second and third generation. The theme is
that each generation built upon the generations that came before. As such, there are two new types
of languages that can be called the fourth generation of programming. They are the interpreted
and object-oriented languages. Interpreted languages can be contrasted with compiled languages.
In a compiled language, an executable is created by the user that will only run on a particular
computer. If the user wants to modify their code or move it to another type of computer, they
must create a new executable. To overcome these limitations, the user could use an interpreted
language. Here the computer has a program called an interpreter which will read the lines of text
one by one and transform them into 1s and 0s on-the-fly. This has several advantages. First, the
user now can modify code and run it without needing to compile the code. Second, it is possible
to move code from one type of machine to another as long as both have the right interpreter. The
disadvantage of an interpreter is that it has more overhead in terms of memory and operations, and
so an interpreted program will run much slower than a compiled program. Examples of interpreted
languages are Perl, Python and Matlab. Object-oriented languages focus on allowing data to be
handled in a more natural way. It is often the case that what we call data is not simply one number,
but it is a series of objects. These objects may be numbers, text or other values. As a medical example,
imagine that we wish to create a database of patient records. In an object oriented language we
would create a data structure (called a class) which contains all of the information for a generic
patient. Then for each individual patient we would create a particular instance of the class with the
appropriate attributes. Although object-oriented programming can be traced back to the 1960s,
the ideas did not propagate out of the database and artificial intelligence communities until the
1980s. Now, nearly every programming language has some support for object oriented programming.
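As a rough sketch of that idea (this code is illustrative rather than taken from the text, and the field names are invented), a Matlab struct can play the role of such a patient record, and an array of structs can act as a small database; Matlab also has full object-oriented classes via classdef, but a struct is enough to show the concept.
% A minimal, hypothetical patient "record" built from a Matlab struct.
patient1.name      = 'A. Smith';
patient1.age       = 54;
patient1.heartRate = [72 75 71 80];   % beats per minute, sampled over time

patient2.name      = 'B. Jones';
patient2.age       = 61;
patient2.heartRate = [88 91 85 90];

patients = [patient1, patient2];      % collect individual records into a simple "database"

% Report the mean heart rate for each patient in the database
for k = 1:length(patients)
    fprintf('%s: mean HR = %.1f bpm\n', patients(k).name, mean(patients(k).heartRate));
end
Each instance carries the same attributes, so code written against the generic record works for every individual patient.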
The advent of the personal computer occurred in the 1980s, followed quickly afterward by
the laptop revolution. Prior to that time, most computer users required access to a shared computer
mainframe. You would submit your job into a queue and then wait until it was your turn.
Unfortunately, you had no way to test your program on your own. It might be the case that you
made a very small error in your program and had to wait for the output to find out where you
went wrong. And then after you had corrected your error, you would still need to resubmit your
job to the queue. The personal computer changed all of that, but it also had another side effect.
Personal computing opened up an enormous business niche because, where before there were
only a few computer users at university and government labs, now nearly everyone owned a computer.
The last modern change to computing has been the creation of networks of computers. The
first and widest reaching is also the most obvious - the internet. One way to think of the internet
is as a giant computer that connects people through the computers they use. It has given rise to
ideas such as cloud computing and dynamic web pages. Another version of network computing is
parallel computing and distributed computing. Both are used to solve very big or very hard problems
with many computers at once. Although there is not a clear dividing line between the two, usually
parallel computing is when processors communicate on some dedicated network, while distributed
processing uses a non-dedicated network, such as the internet.
1.3 A HISTORY OF MATLAB
In the previous section, high-level computer languages and algorithm development were discussed.
One problem was that many algorithms were written but all in different languages and for different
computers. The US government recognized this problem and so Argonne National Labs took
up the task of writing a standard set of numerical algorithms that would be freely available (and
still are at www.netlib.org). They were the BLAS (Basic Linear Algebra Subroutines), LINPACK
(Linear Algebra Subroutines for Vector-Matrix Operations), EISPACK (To compute eigenvalues
and eigenvectors) and other general purpose libraries of algorithms. Cleve Moler was one of the
programmers who helped write the LINPACK and EISPACK libraries. He later went on to
teach at the University of New Mexico. While there, he realized that the Argonne libraries were
cumbersome for those not already firmly grounded in scientific computing. Moler recognized in
1983 that the libraries could reach a much wider audience if a nicer interface was written. And so
he created Matlab, the “MATrix LABoratory” in 1983 using FORTRAN. The first version was in
fact used by Moler to teach undergraduates the basics of programming. Over the next two years,
Moler visited a number of universities. At each university he would give a talk and leave a copy of
Matlab installed on the university computer system. The new program quickly became a sort of
“cult” phenomenon on campuses around the US.
In 1984, Jack Little and Steve Bangert were so impressed with the new programming envi-
ronment that they rewrote many of the Matlab functions in C allowing it to run on the new
IBM PC. They also added some built in functions that allowed Matlab to have its own native
programming language, better graphics and a Matlab interpreter. Initially, this version was free, but
Moler, Little and Bangert soon banded together to form Mathworks. At the end of 1984 they
sold their first product to Professor Nick Trefethen at MIT.
In the intervening years, Matlab has become the standard for engineering computing, with
millions of users around the world. While the original version of Matlab had only 80 built-in
functions, it now has thousands, written not only by Mathworks employees, but by users. Mathworks
itself has undergone a transformation from selling only the standard Matlab distribution to also offering a variety
of toolboxes that are of interest to particular disciplines. Matlab also has the capability (like Maple and
Mathematica) to perform symbolic mathematics and contains a system simulation program called Simulink.
1.4 WHY MATLAB?
At various times during my career I have been asked why I have chosen Matlab as a programming
language, and I have given various answers ranging from “it is what I
know best” to “it does what I want”. What is important is that both answers are probably not
your answers right now, and they may never be. Below are some reasons why you should take
learning Matlab seriously. First, engineers often need to perform tedious calculation tasks over
and over again. The calculations might range from something simple, e.g., taking the average of a
list of numbers, to something more complex, e.g., simulating how an ecosystem will react to the
introduction of a non-native species. Here most any type of computing language can help. But,
some languages are easier to learn and others are more flexible. Unfortunately it seems to be the
trend that the most powerful languages are also the most difficult to learn. Matlab strikes a good
balance between being easily learned and flexible. For example, Matlab is an interpreted language
(see the advantages noted above) but can also be compiled. Second, engineers
often make figures to represent large quantities of data. Matlab is one of the few programming
languages that has graphics capabilities built-in. It is important to say a word here about Excel.
It may be very tempting to default to Excel. After all, it can perform calculations and does have
graphics capabilities. The problem is that unless you are willing to learn Visual Basic and write
macros, Excel is very limited in its computational abilities. Furthermore, Visual Basic tends to be
more difficult to learn than Matlab. Matlab also wins out in that it was designed for engineers,
whereas Excel was not. As such, Matlab is a common programming language spoken by nearly all
engineers, regardless of their training. Third, because there is a large community of Matlab users,
there are many great Matlab books and online resources. It is often the case that a problem you
must solve has already been solved by another Matlab user and they have posted their code online
at Matlab Central (http://www.mathworks.com/matlabcentral/).
There is no claim that Matlab is the “best”. If you are looking to write a database, perform
high-end scientific computing on huge supercomputers, or do any type of natural language
processing, some other language may be what you want. But as a language that is easy to learn,
designed for engineers, and the common computing language of engineering, you can’t do better
than Matlab.
CHAPTER 2
Matlab Programming Environment
2.1 INTRODUCTION
The purpose of this chapter is to introduce you to the basic functions and environment of Matlab.
Most every other capability within Matlab is built from these basic concepts, so it is important to
have a good understanding of what follows.
2.2 THE MATLAB ENVIRONMENT
Although Matlab goes by one name, it is in reality a full service computational engine composed
of many integrated parts. There is the core of the program that contains the most basic operations.
These functions are often very deeply compiled and cannot be changed. There are graphical libraries
that allow users to create plots and images as well as graphical user interfaces for interacting
with programs. Users can create their own functions or take advantage of the additional libraries
contained in toolboxes. Luckily, Matlab provides an environment, shown in Figure 2.1, that ties
everything together.
At the very top of the window there are a series of toolbars. It is here that you may obtain
useful information, such as the built-in help. Below the toolbar is a large window that is mostly
blank. It contains what is known as a command prompt, designated by >>. It is to the command
prompt that you will type commands that will be executed by Matlab. In the top right there
is a Workspace window that lists all of the variables that you currently have in memory. In
the bottom right there is a Command History window that lists commands that have been is-
sued to the command line. On the left side is a list of all of the files in your current working directory.
Matlab has many tricks that involve the graphical interface. Versions of Matlab, however, do
vary, and Matlab is not the same on every computing architecture. To allow the most flexibility, we
will use the common element of all Matlab environments: the command line.
2.3 THE DIARY COMMAND
In the following sections, you will be asked to enter commands on the command line and then
observe the results. As a portion of your assignment (and other assignments to follow), you will
Figure 2.1: The Matlab Command Window
demonstrate that you have followed the tutorial below by keeping a record of your commands.
Matlab has a command called diary that will create a text file containing a record of your commands.
Throughout this text, italics are used to indicate a built-in Matlab function.
To begin, create a folder on your desktop and then navigate your current path (at the top of
the toolbar) to the folder. Next, enter the following at the command prompt
>> diary Problem1.diary
This command will record all subsequent commands into a file called “Problem1.diary”. As outlined
in a separate document, all diary files should end with “.diary”. After you issue the command, you
should see the file in the Current Directory window. When you wish to turn off the diary command
you can enter
>> diary off
You should wait to turn off the diary until after you have finished the exercises below.
2.4 AN INTRODUCTION TO SCALARS
In the following section, we will go on a self-guided tour of some basic Matlab functions. To begin,
type the following at the command prompt.
>> r = 5;
Notice that the variable r now appears in the workspace window. Now type in the following command
which is similar to the above but without the semicolon
>> a = 25
You should see that the Matlab command line echos back what you typed, verifying that the variable
a is set to a value of 25. This is your first introduction to a feature of Matlab. Adding a semicolon to
a line will suppress any output to the command line, while no semicolon will print out the value of
the variable. Next type the following on the command line
>> whos
The whos command will print out all of the variables in your workspace to the command line.
You should notice that there is additional information given. For example, you will see in the row
corresponding to the variable a
Name      Size            Bytes  Class     Attributes
a         1x1                 8  double
Here we can see that a is of size 1×1, meaning that it is a scalar. Later we will discuss variables of
other sizes, e.g., vectors and matrices. We also can see that a is stored in 8 bytes as a double class.
There are in fact a number of different types of data which can be stored in Matlab. For example, to
learn more about the data type single you can type the following on the command line.
>> help single
First, you should note that we used a new command, help. This is very useful when you know the
name of a command but do not know how to use it. By typing “help single”, you learn that the
command single will convert a variable to type “single”. You will also see in the second line how to
use the command, e.g., Y = SINGLE(X), along with what options are available. Try typing
>> b = single(a)
You should also note that Matlab is sensitive to case
>> r = 3;
>> R = 11;
will create two different variables, r and R.
Returning to the single command, there are a few things to note about its help entry. First, Matlab’s help uses all capitals to
designate functions, e.g., Y=SINGLE(X), but they are not entered this way on the command line.
Second, anything to the right of the equal sign in parentheses is considered an input. Anything to the
left of the equal sign is considered an output. There are some functions that take more than one
input and can send out multiple outputs. Lastly, if you now type ‘whos’ on the command line, you
will see that ‘a’ is a double (8 bytes), but ‘b’ is a single (4 bytes).
As an exercise, investigate the sin and log commands. Does sin accept inputs in degrees or
radians? Does log take the natural log or the log base ten? Note that both commands have some
related functions listed under “See also”.
2.5 BASIC ARITHMETIC
Matlab has all of the functions of your calculator (and many more as you will see in future chapters).
In this section, we will investigate some of the basic functions that are most likely present on your
calculator. We can add two numbers
>> 4+5
or multiply numbers
>> 5*6
and Matlab supports subtraction
>> 4-5
and division
>> 5/6
2.5.1 PRIORITY OF COMMANDS
One problem with basic arithmetic is that it is not always clear what order the operations should be
performed in. For example, try entering
>> 4+5*6
It should be clear from the result that the multiplication was performed first and then the addition.
This is because in Matlab multiplication has a higher precedence than addition. But what if we wanted
to perform the addition first and then the multiplication? In this case, we can use parentheses to
make the order clear
>> (4+5)*6
Although Matlab has a built-in ordering of precedence, it is generally helpful for debugging to use
parentheses to make the intended ordering more clear.
2.5.2 REISSUING PREVIOUS COMMANDS
There are many times in Matlab when you may wish to either repeat a command over again, or enter
a command that is similar to one previously issued. Matlab has three nice features that can save you
some typing time. First, in the example above, you may wish to reissue the command 4 + 5 ∗ 6. If
you double click on this command in the Command History window (lower right), it will copy and
paste the command and execute it for you on the command line. Although this may not seem like a
great savings in time, in the future you may have issued so many commands that the one you want is
not easy to find. The second helpful feature is the up and down arrows. Try using the up arrow and
you will find that Matlab will move through previous commands. If you hit the enter key, the current
command will be executed again. Likewise, you can use the down arrow key if you have moved past
a command. The third feature is known as completion. If you start typing a command and then hit
the up arrow, it will find the last time you typed a command that began that way. As an example, in
an earlier exercise you set the variable r to a value of 5. If you wish to change that value, you could
use the arrow key. But, a faster way is to begin entering the command
>> r =
and then hit the up arrow. You could then change r to 4 and hit enter to update the value.
2.5.3 BUILT-IN CONSTANTS
Let us assume for the moment that r is actually the radius of a circle and we would like to find the
circumference, e.g., 2π r. In Matlab, we could define a constant with the value of π and then perform
the multiplication. Matlab, however, already has built in a number of predefined constants, such as
pi for π . On the command line enter
>> Circ = 2*pi*r;
Because the semicolon was used, Matlab did not print out the result to the command line. It did,
however, store the value in the variable Circ. To see the value, simply type ‘Circ’ on the command
line and press enter. If you wanted to find the area, e.g., πr^2, try typing
>> Area = pi*r ^ 2
Note that the symbol ^ is used for powers and roots. So if you wanted to find the cube root of 6.4, you would enter
>> 6.4 ^ (1/3)
Note that parentheses were used to indicate the desired order of precedence. The symbol ^ can also
be useful for specifying large and small numbers, for example
>> 10 ^ -6
>> 10 ^ 8
A second very common constant is the imaginary number i.
>> c = 4 + 3i
In this example, the variable c is a complex number. You can verify this by issuing the whos command.
You can also view the help for two other constants within Matlab: inf and nan.
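As a quick illustration (the expressions here are chosen only for demonstration), these constants also appear as the result of certain operations:
>> 1/0
>> 0/0
>> help nan
The first command returns Inf and the second returns NaN (“not a number”).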
2.5.4 FINDING UNKNOWN COMMANDS
Because Matlab commands are not on buttons you can see (like on your calculator), it can sometimes
be difficult to know the name of what you want. To this purpose, Matlab has a function called lookfor
that will search all of the help files for your entry. For example, if you know that you would like to
find the hyperbolic tangent function but don’t know its name in Matlab, try typing
>> lookfor tangent
You will get a list of the functions that use the word “tangent” in their help files. Although the list
may be long, you can scan it and find that tanh is the function you want.
2.6 THE LOGISTIC EQUATION
One of the more famous equations in mathematics is the logistic equation. The connection to
biomedical research is that the logistic equation was created to study the growth of a population
of reproducing species. The catch is that most environments can only sustain so many individuals
(usually because of some finite resource like food). So, after an initial explosion in growth, the
population size will settle down to some reasonable number. This equation is sometimes used as a
simple model of bacterial infection. The logistic equation can be written in the form of a difference
equation
z_{n+1} = r * z_n * [1 − z_n]    (2.1)
where z_n is the current value of z, z_{n+1} is the next value of z and r is a constant. It is important to
realize that the logistic equation is scaled, e.g., normalized, so that the maximum population is 1.
Using what you already know, set r = 1.5 and the initial population value to z = 0.5, e.g., half of
the maximum. Then issue the following command
>> z = r*z*(1-z)
Because the variable z appears on the left and right side of the equation, it is important to understand
how Matlab processes a command of this type. Remember that anything to the right is an input
and anything to the left is an output. So, the old value of z is used on the right side, e.g., z_n, but then
that value of z is overwritten in memory to form the output, e.g., z_{n+1}. The reason for writing out
commands in this way is that we can issue it once and then use the up arrow to keep issuing it again
and again. What is happening here is that you are using the current value to find the next value. You
should try this to see what happens to the value of z (you should see it decrease, headed for 0.0).
We will return to this example in future chapters to show how using some simple programming we
can make this type of iterative operation much more efficient.
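Putting the pieces together, the iteration described above might look like this (a minimal sketch, using the values suggested earlier):
>> r = 1.5;
>> z = 0.5;
>> z = r*z*(1-z)
>> z = r*z*(1-z)    % press the up arrow and enter to repeat this line again and again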
To give a bit of a flavor for some strange things that can happen using the Logistic equa-
tion, try repeating the above (remember to reset the initial value of z to 0.5 for each trial) but with
the following values for r
r = 3.2
r = 3.45
r = 3.6
At this point, you should turn off the diary command (“diary off ”). Then using a text edi-
tor, you can open the diary file. At the end of the text file, you must write a brief description of what
you found for the different values of r. What would this mean for a real population of bacteria?
2.7 CLEARING VARIABLES AND QUITTING MATLAB
A useful (but also very dangerous) command is clear. If you issue the following to the command line
>> clear
everything in Matlab’s memory will be erased. There are times when this is very useful, but it should
be used carefully. An alternative is to only clear specific variables, for example
>> a = 34.5;
>> b = 27.4;
>> whos
>> clear a
>> whos
Note that in this example we only cleared the variable a.
Matlab is very simple to quit. Simply type quit on the command line. It is important to
know that when Matlab is quit, all variables that were created will be lost.
2.8 EXAMPLES
Each chapter will contain a series of problems that will help you strengthen your understanding of
the material presented. For all problems below, you should begin a diary entry, complete the problem
and then end the diary entry.
1. As problem 1 was following along with the chapter, your first exercise is simply to rename your
diary file “Problem1.diary”.
2. Use the lookfor command to find the name of Matlab’s function for taking the exponential of a
number. Then use the help function to find out how it works. Then demonstrate that you can
find the value of e3.4. Do the same for the operation of a factorial and then report the value
for 18!
3. Create two complex variables g = 25 + 2i and h = 1 − 4i and then multiply them in Matlab
and store the result in a variable k. Demonstrate that you can use the real and imag commands
in Matlab to find the real and complex parts of k.
4. When a fluid is placed in a thin tube, it will rise up that tube, against gravity, due to capillary
forces (what causes dyed water to be sucked up the xylem of a celery stalk). It can be calculated
analytically how high a liquid would rise (h) in a tube due to the capillary effect
h = 2σ cos(φ) / (rρg)    (2.2)
where σ = 0.0728 J/m^2 is the surface tension, φ = 0.35 radians is the contact angle, r =
0.001 m is the tube radius, ρ = 1000 kg/m^3 is the fluid density and g = 9.8 m/s^2 is the acceleration
due to gravity. Using these values, compute the rise of the fluid. First, declare all of the variables
and then find the answer using an equation with only variable names and the constant 2. Then
change r = 0.002m and recompute the rise.
CHAPTER 3
Vectors
3.1 INTRODUCTION
In this chapter and the next, we will discuss the core of what makes Matlab special - the ability
to efficiently operate on vectors and matrices. Although Matlab has come a long way since its
humble beginnings, vectors and matrices remain the core of nearly everything that happens in Matlab.
You will want to turn on the diary, as your first exercise is to follow along with the tutorial
below.
3.2 VECTORS IN MATLAB
Whereas previous chapters considered single numbers, called scalars, a vector is simply a fancy name for a
list. It could be a to-do list, a list of fruits, or for most scientific and engineering applications, some
sort of numerical data.
3.2.1 CREATING VECTORS IN MATLAB
A vector is created by listing its values, separated by spaces, inside square brackets:
>> a = [1 1 2 3 5 8 13];
>> whos
This command will simply list the number of elements in a and their type.
We can perform operations on the vector a. Matlab has many built-in functions. Some
work on scalars, as in the previous chapter, but others work on vectors. For example,
>> b = sum(a);
will sum up all of the values of a and report the outcome in b. In this example, we turned a vector,
a, into a scalar, b. A similar command, prod, will return the product of all values in a vector. Another
useful function is the length command which gives the number of elements in a vector
>> NumElements = length(a)
Matlab also can perform operations that transform one vector into another vector. For example, in
the previous chapter, you learned that the hyperbolic tangent function is tanh. If you enter on the
command line
>> b = tanh(a)
b will be a vector that contains the hyperbolic tangent of each entry of a.
3.2.2 CREATING REGULAR VECTORS
You could simply enter in each value of a vector, as shown above. Sometimes, however, you may want
to create a vector with a more regular pattern. For example,
>> c = 1:21;
will create a vector, c, that has 21 entries, where each entry is one bigger than the last, e.g., a list
from 1 to 21. You can verify this by typing c on the command line and pressing enter (without
a semi-colon). What is more, you can use another feature of the colon notation to skip values. For
example,
>> d = 1:2:21
will generate a vector, d, that starts at 1 and then skips every other number until reaching 21. You
should verify this by entering d on the command line. The colon notation is always start:skip:stop, and
can be very useful for creating patterned vectors. For example, you might want to have all numbers
that are multiples of 10 (skip) between 50 (start) and 200 (stop)
>> e = 50:10:200;
3.2.3 SPECIAL VECTORS AND MEMORY ALLOCATION
Many other programming languages, especially those that are compiled, require that the user specify
up front exactly what variables will be used throughout the program. The reason is that memory must
be reserved for their storage. The big advantage of allocating memory up front is that it is easier to
write (and read) to a vector if it is all in the same place in the computer’s memory. As an interpreted
language, Matlab does not require you to specify variables up front. This can be a very nice feature,
but it can also cause Matlab to operate slowly at times. To overcome this limitation, Matlab will
allow the user to allocate space for a vector. The most usual way to do so is using the zeros command
>> f = zeros(25,1);
that will create a vector, f , with zeros in all 25 locations. A second useful feature of Matlab is the
ones function
>> g = ones(45,1);
that will create a vector of length 45 where every entry is a 1. The ones command is useful for the
following
>> h = 0.255*ones(18,1)
In your diary, explain the output of the command above.
3.3 VECTOR INDICES
You can think of a vector as a series of bins, one for each element. What makes vectors powerful is
that it is not only the value that is saved but also a type of address, called an index. For example, if
we would like to get the fourth entry of a, e.g., the value 3, we can get only that value by typing
>> a(4)
What is more, Matlab uses the same colon notation explained above to index vectors. For example,
>> a(3:5)
will return the values of a at indices 3,4 and 5. So the output will be a subset of the entire array, here
the values 2, 3 and 5. Also as above, the colon notation can be used to skip values. For example,
>> a(1:2:7)
will start with the first value and then skip every other value until getting to the 7th value. So the
output will be 1, 2, 5, and 13. This type of regular skipping can be very useful in large vectors, for
example, if you wanted only every 10th value.
There is an additional function in Matlab that can be very useful.
>> a(end)
will return the very last value of the vector. The advantage here is that you can use end with the colon
notation
>> a(1:2:end)
>> a(1:2:end-1)
Explain in your diary why the two commands above give different answers. A hint is that end-1 is
the second to last entry of the vector a.
When dealing with very large vectors, it can be helpful to use the find command in Matlab.
For example,
>> j = find(a==8)
In this statement, we are using find to search the vector a for the value 8. If a value of 8 is found,
find will return the index where the value was found. In this case, j =6, because the value 8 is in the
6th location of the vector a. In the future, you will learn about conditional statements which will
allow you to find entries that are, for example, greater than some number.
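As a small preview (using the vector a defined earlier in this chapter):
>> find(a > 4)
This returns the indices 5, 6 and 7, since those locations of a hold the values 5, 8 and 13.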
One last example will demonstrate the power of indexing in Matlab. Imagine that you would like
to enter some values into the vector f , which was created above. More specifically you would like
to place the values 3, 25.7 and 91.6 in the 4th, 9th and 25th locations of f . You could enter the
following commands
>> indices = [4 9 25];
>> values = [3 25.7 91.6];
>> f(indices) = values;
>> f
3.4 STRINGS AS VECTORS
In Matlab, a vector can be composed of any type of data. For example, vectors could have complex
values, e.g., contain real and imaginary parts, integers or even characters. In Matlab, a vector of
characters is called a string.
>> s = 'Matlab is Great!';
s is a vector of characters, or a string. Note that strings in Matlab are enclosed in single quotes. You
can verify that this is indeed stored in the same way as a usual vector by entering
>> s(3)
>> s(10:end)
Strings stored as a vector of characters can be very important in cases such as creating file names.
Let us assume that you have a series of 100 files that contain data on patients 1 through 200. A
common operation might be to open up each file to search for some important characteristic, e.g.,
blood pressure. Although you will learn how to perform the details of this task in later chapters, a
first step is generating each unique filename. Rather than entering them all in by hand, we can specify
a variable
>> x = 1:2:200;
>> ThisPatient = x(29);
>> PatientFilename = ['Patient' num2str(ThisPatient) 'Data.dat'];
You will want to begin by understanding the first two lines. First, a vector is created that ranges from
1 to 200, skipping every other number. Then we will pick off the number in the 29th spot and store
it in a scalar called ThisPatient. In the last line, we build up a vector of characters by starting with
the string “Patient”. The num2str command is used because we cannot mix different data types
within a single vector. So, the numerical value of ThisPatient must be converted to a string. You
should see the help file for num2str for more information. Lastly, we add “Data.dat” onto the end
of the file. You should verify that your commands have worked by typing in
>> PatientFilename
3.5 SAVING YOUR WORKSPACE
There are times when it is helpful to save all of the data in Matlab’s memory. For example, there are
some large projects where you may want to quickly be able to pick up where you left off.
>> save MyFirstMatlabWorkspace
The save command will save all of the variables in your workspace to a file called “MyFirstMatlab-
Workspace.mat” in your current working directory. The “.mat” file extension indicates a Matlab data
file. The advantage of a Matlab data file is that it will work on any version of Matlab - PC, MAC
or UNIX. To return the variables to Matlab’s memory
>> clear
>> whos
>> load MyFirstMatlabWorkspace
Note that the load command would work even if you had quit Matlab and then wanted to reload
the variables in your workspace. You may have noticed that the save command saves your entire
workspace. If you are only interested in the two vectors a and f , you can enter
>> save MySecondMatlabWorkspace a f
You will be using the save command in some of your homework exercises.
3.6 GRAPHICAL REPRESENTATION OF VECTORS
With short vectors, such as those created so far, it is possible to view our data on the command line
(by simply leaving off a semi-colon). But when vectors become long, it can be very useful to take
advantage of the graphic capabilities of Matlab. Consider the following commands.
>> x = 0:0.01:10;
>> y = sin(x);
>> length(y)
The first command creates a vector, x that starts at 0 and ends at 10, but in increments of 0.01,
creating a vector with 1001 entries. The second line creates a vector, y that applies the sine function
to the vector x. Therefore, y will also have 1001 entries. To plot the two vectors against one another
>> plot(x,y)
It is also possible to plot a vector without specifying an x-axis vector
>> plot(y)
where it is assumed that the x-axis is incremented by one for each value of y. In future chapters, we
will focus more on how to fine tune Matlab’s graphics.
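As a small preview (the label text below is arbitrary), a plot can be annotated with a few additional commands:
>> xlabel('x')
>> ylabel('sin(x)')
>> title('A simple sine wave')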
3.6.1 POLYNOMIALS
Matlab stores a number of useful structures as vectors. For example, polynomials can be manipulated
as a vector of coefficients. The polynomial
x^3 + 2x^2 − 4x + 8    (3.1)
can be represented in Matlab as
>> u = [1 2 -4 8];
which is simply the vector of coefficients, ordered from the x^3 term down to the constant. Note that a third
order polynomial must contain four entries. A zero entry may also be used as a place holder. So
>> w = [-1 0 -1 4];
represents
−x^3 − x + 4    (3.2)
Finding the zero crossings, e.g., roots, of a polynomial is useful in a number of situations. For second
order equations, it is simple to use the quadratic equation. For higher order polynomials, it becomes
practical to use a numerical method, e.g., Newton’s Method. Matlab has a built-in root finder called
roots.
>> roots(u)
>> roots(w)
Note how, in the last command, Matlab will report complex numbers.
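As an optional check (assuming the vector u defined above), Matlab also has a command, poly, that rebuilds a coefficient vector from a set of roots, so applying it to the output of roots should recover u up to small rounding errors:
>> rts = roots(u);
>> poly(rts)    % returns approximately [1 2 -4 8]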
3.7 EXERCISES
1. Turn in the diary of your commands for this chapter. You must also include the two “.mat”
files you created in Section 3.5 and answer the questions in Sections 3.2.3 and 3.3.
2. Sine waves are used in many life science and engineering applications to drive everything
from the varying magnetic fields of MRI machines to the supply of nutrients to tissue being
grown in a bioreactor. For this problem, you must show your commands in a diary. The general
equation for a sine wave is
y = A · sin[2πωt + φ]    (3.3)
where A is the amplitude, ω is the frequency in Hertz (1/seconds) and φ is an angle in radians.
First create a vector for time (t) that starts at 0, ends at 5 and skips in increments of 0.001.
Next, define A = 1, ω = 2 Hz and φ = π. Using the parameters and time vector, create a
vector y and then plot t against y.
In the next part of this problem, you will create a second sine wave (in a variable z)
z = A · sin[2πωt + φ]    (3.4)
with A = 0.5, ω = 1 Hz and φ = π/2. Note that you can use the same time vector to create
z. You will next plot z against t, but on the same figure as your previous sine wave. To plot two
data sets on the same figure, you must use the hold on command. In general, plotting two data
sets will be achieved by
>> plot(t,y)
>> hold on
>> plot(t,z)
3. One important use for computing is to test out algorithms that will find some important
characteristic of a data set, e.g., frequency spectrum, time to reach a maximum. The problem
with biological data is that it often contains noise. Therefore, many algorithms designed to
be used on biological data must work even in the presence of noise. Matlab has a built in
command, rand, that can be used to generate random numbers. For example,
>> r = rand(1000,1);
will create 1000 random numbers between 0 and 1. Use the help function (hint: look at the
examples in the help file) to create 2000 random numbers that range between -1 and 2 and
store them in a variable, RandomNumbers. It may be helpful to plot RandomNumbers to
check that you have created what you intended. Use the save command to save the numbers
into a file, “RandomNumbers.mat”.
4. A Holter Monitor is a device that is used to continuously record ECG, even when the patient
may be at home. It is often the case that a patient will allow the device to record for some period
of time and then upload the data (using the internet or a phone line) back to the physician.
Create a character string, called FirstName, with your first name. Create a character string,
called LastName, with your last name. Then create a filename (another string) with the
following format
LastName,FirstName-HolterMonitor6.2.2011.dat
In this way, the physician will know exactly what data he/she is looking at. Enter your com-
mands for creating the filename in a diary file.
CHAPTER 4
Matrices
4.1 INTRODUCTION
While vectors are useful for storing lists, matrices are good for storing lists of lists. As such, you can
think of a matrix as a group of vectors. Although the data can be of any type, e.g., doubles, characters,
integers, all of the vectors that make up a matrix must have the same length.
4.2 CREATING A MATRIX AND INDEXING
Creating a matrix in Matlab is not all that different from creating a vector. For example, to create
the matrix
[  1   2   3    4   5
   0   4   6    8   6
   7   0   3   12   7
   2   5   0    4   8  ]
you would enter
>> A = [1 2 3 4 5; 0 4 6 8 6; 7 0 3 12 7; 2 5 0 4 8];
>> whos
You should see that matrix A has the dimensions 4×5 and consists of doubles (Matlab’s default
format). Rows are separated by semi-colons. It is also convention that matrices in Matlab use
capital letters, while scalars and vectors are lower case. This is not necessary but can help distinguish
between the two types of data.
There are some occasions when you may want to know the dimension of a matrix. Similar
to the length command for vectors, you can use the size command for matrices
>> [NumRows NumCol] = size(A);
where the variables NumRows and NumCol will contain the number of rows and columns of the
matrix.
4.2.1 SIMPLIFIED METHODS OF CREATING MATRICES
There are several additional ways that a matrix could be created.
>> B = zeros(5,6);
will create a matrix with 5 rows and 6 columns where all the entries are zeros. This is a generalization
of the zeros command and is a good way to allocate memory for a matrix. Similarly,
>> C = ones(4,3);
will create a 4×3 matrix of ones. Again, as in the vector version of the ones command, we can scale
the entire matrix.
>> C = 23*ones(4,3);
A very common matrix to create is one that has values only along a diagonal, and zeros everywhere
else. For example,
>> diagvector = ones(7,1);
>> D = diag(diagvector)
The first command creates a vector of ones that is of length 7. Then this vector is placed on the
diagonal of the matrix D. In using the diag command, Matlab automatically fills in the necessary
zeros. In fact, the example above is a matrix that is used very often in linear algebra called the Identity
Matrix. Matlab has a command, eye, which creates any N ×N identity matrix. The diag function,
however, can be used to do much more. Try to interpret the following sequence of commands
>> DiagVector = [1 2 3 4 5 6];
>> UpDiagVector = [7 8 9 10 11];
>> DownDiagVector = [12 13 14 15 16];
>> E = diag(DiagVector) + diag(UpDiagVector,1) + diag(DownDiagVector,-1)
Notice how the diag command can be used to put values above (indicated with a 1) or below
(indicated with a -1) the diagonal, but that the vector used must be the proper length. For example,
in a 6×6 matrix, you would place a vector of length 5 above or below the diagonal. If at some later
time you wanted to add another diagonal 2 off the main diagonal,
>> E = E + diag([17 18 19 20], 2)
There are also some occasions where you may want to create a random matrix.
>> F = rand(5,6);
creates a 5×6 matrix with random values that range between 0 and 1.
4.2.2 SPARSE MATRICES
So far we have created matrices by specifying every value in the matrix, even values that are
zero. But zeros take up space in memory. Matlab has a way to compress a matrix in such a
way that any zeros are not stored. This is especially useful for very large matrices that contain mostly
zeros. In scientific computing these types of matrices are known as sparse.
>> G = sparse(E);
>> whos
You should notice that the matrix G is now a sparse matrix and takes up fewer bytes in memory
than the original matrix E. In small matrices the savings is usually not worth it. But, try issuing the
following commands and observe the difference in memory used by matrices H and J .
>> H = diag(ones(500,1));
>> J = sparse(H);
>> whos
4.3 INDEXING A MATRIX
Values can be read from a matrix in much the same way as they are read from a vector. For example,
>> E(1,2)
will retrieve the value of 7 in the first row and second column. Try getting the value of 15 in the
fifth row and fourth column. You can also get entire columns or rows from a matrix using the colon
notation.
>> E(2,:)
will get the entire second row of E.
>> ColThree = E(:,3)
will get the entire third column of E and store it in a vector called ColT hree. What is more, we can
get any submatrix of a larger matrix.
>> SubMatrixOfE = E(2:5,3:6)
will get rows 2 through 5 and columns 3 through 6. Note that this submatrix will be a 4×4 matrix
of its own. How might you get the bottom right 3×3 submatrix? Note that you can even use the end
command introduced in the previous chapter.
4.3.1 HIGHER DIMENSIONAL MATRICES
Although Matlab is designed to handle two-dimensional matrices, there is some support for higher
dimensional matrices.
>> P = zeros(2,3,5);
>> Q = rand(3,4,5);
>> whos
You can read from higher dimensional matrices in the same way as other matrices
>> R = Q(1,2:3,2:4)
>> whos
4.4 SIMPLE MATRIX ROUTINES
There are a number of matrix routines that can be useful in real problems. First, there are occasions
where we may want to transform a matrix into a vector using the reshape command
>> k = reshape(E,36,1)
Here the vector k contains the same values as the matrix E, where each of the columns have been
stacked on top of one another. But we can use reshape to do some other interesting things as well.
>> L = reshape(E,9,4)
changes the 6×6 E matrix into a 9×4 matrix, again wrapping through the columns.
Another command that can be useful is the squeeze command. You may have noticed that
the R matrix above is a 1×2×3 which is really a 2×3 matrix. To have Matlab compress any
dimensions of 1
>> S = squeeze(R)
>> whos
4.5 VISUALIZING A MATRIX
4.5.1 SPY
It is often useful to be able to visualize the entries of a matrix. For example,
>> A = diag(ones(100,1))+diag(-3*ones(99,1),1)+diag(25*ones(99,1),-1);
First make sure that you understand how this matrix was built up using diag and ones commands.
In some applications it is useful to know where any non-zero entries exist in the matrix.
>> spy(A)
will bring up a graph showing the location of all non-zero entries. At the bottom of the graph you
will also find the number of non-zero entries in the matrix. In the toolbar (at the top of the graph)
you will see a small magnifying glass with a ‘+’ sign. Click on this symbol and select an area that
contains some dots. You can use this zoom function on any type of plot within Matlab. To zoom
back to the original view simply double click anywhere on the plot. Note that next to the positive
magnifying glass is a negative magnifying glass which can be used to zoom out.
4.5.2 IMAGESC AND PRINT
There are many situations where a matrix will be used to store data. In these situations, we might
want to view all of the data at once.
>> B = rand(100,100);
>> B(25:50,50:75) = 1.5;
>> imagesc(B)
>> colorbar
creates a random 100×100 matrix (with values ranging from 0 to 1, representing some data
we have collected from an experiment). The second command fills a block of the matrix with
the value 1.5. The next two commands create a color plot of the values in B and then adds a color bar.
There are times when it is helpful to create a jpeg of an image so it can be imported into
other applications, e.g., PowerPoint, Word. The Matlab print command can be used to create an
external file
>> print('-djpeg','MyFirstImageFile.jpeg');
If you look at the help file for print you will notice that the first entry specifies the file type (in this
case a jpeg). You should notice that there are many other types of image files that you could create.
The second entry specifies the name of the file. You should try to open “MyFirstImageFile.jpeg” to
be sure it has been created properly.
4.6 MORE COMPLEX DATA STRUCTURES
Standard matrices and vectors must contain all of the same type of data. For example, we may have
an entire vector of characters or an entire matrix of integers. Although there are cases where we
want to mix types of data, matrices and vectors will not help us. But what if we want to have data
for a patient that contains a combination of names and medications (strings), heart rate and blood
pressure (doubles) and number of times a nurse has checked in today (integer). Matlab does have
two ways to handle this sort of problem.
4.6.1 STRUCTURES
The idea of a structure is that data types can be mixed, but then referenced easily. Enter the following
commands
>> P(1).Name = 'John Doe';
>> P(1).HeartRate = 70.5;
>> P(1).Bloodpressure = [120 80];
>> P(1).Medication = 'Penicillin';
>> P(1).TimesChecked = int16(4);
>> who
>> P
Above we have created a data structure, P , for the patient “John Doe”. You will note that P is now
of type struct and that when you type P on the command line it will tell you which data are in P .
To retrieve a value from P
>> P(1).Name
>> P(1).Bloodpressure
You may wonder why we needed to reference the first value of P , e.g., P(1). The reason is that now
we can easily enter another patient
>> P(2).Name = 'Jane Doe';
>> P(2).HeartRate = 91.3;
>> P(2).Bloodpressure = [150 100];
>> P(2).Medication = 'Coffee';
>> P(2).TimesChecked = int16(2);
>> who
>> P
It should be noted that you can have fields within fields too
>> P(2).Family.Father = 'Jerry Doe';
>> P(2).Family.Mother = 'Julia Doe';
>> P
>> P.Family
4.6.2 CELL ARRAYS
While a structure must be referenced by a known name, a cell is simply a matrix of other data
structures, where the data structures do not need to match.
>> T = {rand(3,3), 5.3, 'BMEG 220'; int16(27), [24.5 37.8], 'Matlab'};
>> who
>> T
Here T is a matrix, but each element can take the form of any other data structure. For example,
>> T{1,3}
will retrieve the text string in the first row and third column. Notice that for cells you index using
curly braces, not parentheses. How can you retrieve the 3×3 random matrix? How about the value 27?
One last example will demonstrate the power of cells.
>> U = {T 5; 'Computing' [39.2 47]};
>> U{1,1}
>> U{1,1}{2,3}
Here the cell array U has been embedded within it another cell array T . The second command
retrieves the T cell and the third command will retrieve a specific entry within T . You should save
only U and P in a Matlab file called “MatrixStructures.mat”. For help on saving a Matlab data file,
see the previous chapter.
4.7 EXERCISES
1. Turn in the diary for this chapter along with the image file created in Section 4.5.2 and .mat
file in Section 4.6.2.
2. Matrices can be a very good way to visualize distributions in space. For example, in modeling
the spread of a disease throughout a population, it is important to know who is infected, who is
immune and who is susceptible to infection. Let us assign anyone infected a value of 1, anyone
immune a value of 2 and anyone susceptible a value of 3. We can visualize what is happening at a
particular time with a 2D plot, where each coordinate represents a portion of a college campus.
Start by creating a 40×40 matrix that contains all 3’s (hint: use the ones command).
Next, create two small regions of infections, one bounded by x=3, x=7, y=8, y=11, and another
bounded by x=35, x=39, y=27, y=32. Next, create a line of immunization (students vaccinated)
that ranges from x=2 to x=38 and y=20. Use the imagesc and print commands to visualize the
distribution of infected and not infected students and then print the image to a jpeg. You
should include a colorbar.
3. A matrix can be very useful for storing the structure of a network. In biological applications,
a network may be the connections between cardiac or neural cells, the interactions between
different genes or even the relationships between species in an ecosystem. Typically, each
“player” (also called an “agent” or “unit”) is assigned an integer that is in the range 1 to N
(where N is the total number of players in the network). Each row of our matrix will correspond
to a player. For example, row one is dedicated to player 1, row two is dedicated to player 2 and
so on. In each row (corresponding to the current player) we place a 1 in the location of any
player with whom it can interact. For example, if row 8 is
[0 0 1 0 0 0 1 0 0 0 0 1]    (4.1)
it means that player 8 has some direct relationship with players 3, 7 and 12. This type of
matrix is called an adjacency matrix.
A very common and simple type of network structure is a one-dimensional ring. In a
ring, each player is connected to the player before and after. For example, player 4 would be
connected to players 3 and 5. Because of the ring structure, player 1 is also connected to the
very last player, completing the ring.
Create an adjacency matrix, A, that describes a ring with 10 players. Remember that
player 1 is connected to player 10 (and vice-versa), completing the ring. You should first write
out the matrix by hand so you understand the basic structure. One method of creating the
matrix would then be to simply program in the entire matrix by hand. For this exercise you
must use the diag command to create the matrix. Note that you may need to add in a few
extra commands (using what you know about indexing a matrix) to complete the loop. You
should be able to create the entire matrix in 3 commands which you show in a diary file.
Because the matrix is mostly made of zeros, create a new sparse matrix, B. Then save
A and B into a “.mat” file. Lastly, use the spy command on matrix B (note that spy works on
sparse matrices too). Save the figure in a jpeg file.
4. Bacteria, and other micro-organisms, navigate throughout space using a variety of sensors. For
example, they may simultaneously detect pH, glucose concentration gradient, light direction
and temperature. To move in a particular direction a bacterium must decide which stimulus to
move toward (or away from), but it may sometimes have conflicting “motivations”, e.g., moving
toward warmth may move it away from glucose. To model a large population of bacteria we
would need to keep track of the state of each bacteria. Let’s assume that pH is a number,
glucose concentration gradient is a 2×2 matrix (defines 2 unit vectors), light direction is three
numbers (angles in degrees) and temperature is a qualitative string (hot, just right and cold).
In this exercise, you will create a Matlab structure, Bac, to store the data for 3 bacteria. The
names of the fields should be pH, Glucose, Light and Temp. You should be able to
index data using Bac(1), Bac(2) and Bac(3). For example, Bac(1).pH should return a single
number, Bac(2).Temp should return a string, and Bac(3).Glucose should return a matrix.
You can make up any values you want for the entries for each bacteria. What is important is
the makeup of the structure. When you are finished, save only the Bacteria Structure, Bac,
into a “.mat” file.
CHAPTER 5
Matrix – Vector Operations
5.1 INTRODUCTION
The original intent of Matlab was to provide an all purpose platform for performing operations on
matrices and vectors. Although Matlab has progressed much farther since these early ambitions,
matrix-vector operations are still the core of the environment. In this chapter, we explore the basic
functions Matlab has for performing operations on matrices and vectors.
This text is not meant to teach you about linear algebra. But there is a small amount of lin-
ear algebra that is necessary to understand before we move on to Matlab’s matrix-vector operations.
Perhaps the most important concept is multiplication of a matrix and a vector. To demonstrate the
concept, we will work with a 3×3 matrix and a 3×1 vector.
[ a  b  c     [ A       [ ?
  d  e  f       B    =    ?
  g  h  i ]     C ]       ? ]
To perform this multiplication we first multiply the values across the top row of the matrix by the
values down the vector. The result will be
a*A + b*B + c*C    (5.1)
This term will form the first value in the resulting vector
[ a  b  c     [ A       [ a*A + b*B + c*C
  d  e  f       B    =    ?
  g  h  i ]     C ]       ? ]
We then move down one row of the matrix and multiply again by the elements in the vector
[ a  b  c     [ A       [ a*A + b*B + c*C
  d  e  f       B    =    d*A + e*B + f*C
  g  h  i ]     C ]       ? ]
and again for the last row of the matrix
[ a  b  c     [ A       [ a*A + b*B + c*C
  d  e  f       B    =    d*A + e*B + f*C
  g  h  i ]     C ]       g*A + h*B + i*C ]
Note that the mechanics of matrix-vector multiplication is to go across the rows of the matrix and
down the column of the vector. To make the above more concrete, try to follow the example below
[ 1  2  3     [ 10       [ 1*10 + 2*11 + 3*12       [  68
  4  5  6       11    =    4*10 + 5*11 + 6*12    =    167
  7  8  9 ]     12 ]       7*10 + 8*11 + 9*12 ]       266 ]
Two points are important about matrix-vector multiplication. First, the number of columns in the
matrix must be the same as the number of rows in the vector. Otherwise, the multiplication of matrix
rows against the vector column will not work out right. The implication is that we need a matrix that
is N × M and a vector that is M × 1 where M and N are any integers. Second, it is often a convention
to use bold capital letters to denote matrices and bold lower case letters for vectors. We can therefore
rewrite a matrix-vector multiplication as
Ax = b
(5.2)
where A is the matrix, x is the vector and b is the result of the multiplication.
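To tie this back to the numerical example above, the same multiplication can be carried out directly in Matlab (a minimal sketch, with variable names simply following Equation 5.2):
>> A = [1 2 3; 4 5 6; 7 8 9];
>> x = [10; 11; 12];
>> b = A*x    % returns the column vector [68; 167; 266]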
5.2 BASIC VECTOR OPERATIONS
We have already explored a few operations on vectors. For example, in Chapter 3 we used the sum
command. There are a number of additional commands that work on vectors. For a small sample,
try issuing the commands below. You may also want to view the help files for these commands to
better understand how they work.
>> v = [8 -1 3 -12 37 54.3];
>> mean(v)
>> std(v)
>> sort(v)
>> sin(v)
>> log(v)
>> max(v)
>> min(v)
>> i = find(v==-12)
It is important to note that some of the above commands will output only a single number, while
others will apply a function to each element of the vector.
5.2.1 VECTOR ARITHMETIC
There are a number of very simple arithmetic operations which work on vectors. The most simple
is to multiply all values by some constant value. For example,
>> y = [1 3 5 7];
>> x = 2*y
Note that the * in Matlab denotes multiplication. You can also add a constant value to all elements
of a vector.
>> z = y + 2
You should try subtracting (use “-” symbol) a constant or dividing (use “/” symbol) by a constant.
5.2.2 VECTOR TRANSPOSE
Above we created a vector y that has the dimensions 1×4. The problem is that we might eventually
want to multiply a matrix by this vector and it is in the wrong format. To transform y into a 4×1
vector we can simply use the transpose command.
>> w = y'
>> whos
5.2.3 VECTOR - VECTOR OPERATIONS
Matlab can also perform operations on two vectors. Try the four operations below and do not worry
if you get errors.
>> l = [1 4 9 10];
>> m = [-1 35 2 0.5];
>> m-l
>> m+l
>> m/l
>> m*l
You will notice that the operations of addition and subtraction act the way you would expect. The
division and multiplication, however, do not do what we expect. For the division we would expect
to simply divide each element of vector m by each element of vector l and return a vector of the
results. Instead, a single number was returned. In the multiplication example, Matlab would not even
perform the operation. To understand the problem we will present two different scenarios. First,
[ 1  4  9  10 ]   [ -1
                     35    =   [ 1*-1 + 4*35 + 9*2 + 10*0.5 ]
                      2
                    0.5 ]
Here we are simply using the type of multiplication used in the first section of this chapter for
matrices and vectors. The only exception is that we are multiplying a 1×4 matrix by a 4×1 vector.
The result is a 1×1 scalar. To perform this operation in Matlab
>> m*l’
Here the transpose command was used to turn the 1×4 l vector into a 4×1 vector. In fact, this
type of vector-vector multiplication has the name of “dot product” or “inner product” in linear
algebra. Matlab has a command, dot for taking dot products between two vectors, regardless of
their dimensions.
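For example, using the vectors m and l defined above:
>> dot(m,l)    % the same scalar as m*l', here 1*-1 + 4*35 + 9*2 + 10*0.5 = 162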
The dot product, however, was not what we originally intended. What we expected was to
multiply each of the elements of m by each of the elements of l and then place them in a new vector.
To make this operation distinct from the dot product, Matlab has a special command.
>> m.*l
will perform the desired operation.The “.*” command is shorthand for an operation that is performed
on each element. Try the following to see how element division works.
>> m./l
This still leaves the question of why the division command in the original example yielded any answer at
all. The answer is that “/” means something in Matlab that relates to matrix-vector division. It will
be discussed below.
5.3 BASIC MATRIX OPERATIONS
Many of the basic commands we have already discussed work on matrices. First, we will use a built-in
command to generate a matrix
>> B = magic(4)    % You may want to look at the help for 'magic'
First, you should notice that % is a special symbol that denotes a comment in Matlab. So, any
text that appears after % will not be sent to Matlab. Below, comments will be used to help you
understand the commands. You do not need to enter comments on your command line.
You may also have noticed in some of the help files that many commands that work on
scalars also work on vectors and matrices. For example,
>> sum(B)                % sum of each column
>> sum(B,2)              % sum of each row
>> mean(B)               % mean of each column
>> min(B)                % minimum of each column
>> max(B)                % maximum of each column
>> min(min(B))           % find the absolute minimum of B
>> exp(B)                % take the exponential of each element of B
>> [i j] = find(B==10)   % find row (i) and column (j) of value 10
This is a strength of Matlab - most operations are defined for scalars, vectors and matrices. By
default, Matlab performs operations on a matrix one column at a time. The "sum(B)"
command thus returns a vector containing the sum of each column. The sum command can total up
the rows instead with "sum(B,2)". You may also have noticed that you can embed commands within
one another. For example, "min(min(B))" will first create a vector with the minimum value of each
column (with the inner min command), but then take the minimum of that vector (with the outer min
command). The result will be the minimum value in the entire matrix.
5.3.1 SIMPLE MATRIX FUNCTIONS
One of the simplest matrix functions is the transpose
>> B'
The meaning of a matrix transpose is that the first row becomes the first column. The second row
becomes the second column and so on. Although the example above is a 4×4 (or square matrix) it
should be easy to check that the transpose of an N × M matrix is an M × N matrix.
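For example (the 2×5 matrix here is arbitrary):
>> M = rand(2,5);
>> size(M)     % returns 2  5
>> size(M')    % returns 5  2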
Like vectors, we can perform basic arithmetic operations on matrices.
>> C = [1 2 3; 4 5 6; 7 8 9];
>> D = [-3 -2 -1; -6 -5 -4; -9 -8 -7];
>> C-D      % subtract the elements of D from C
>> C+D      % add the elements of C and D
>> C.*D     % multiply the elements of C and D
>> C./D     % divide the elements of C by the elements of D
>> C.^3     % cube each element in C
Each of the operations above is performed element-wise (because + and - are that way naturally, and
we used “.*”, “./” and “.^”). It is possible, however, to multiply one matrix by another matrix.
>> C*D      % perform a full matrix multiply
where the pattern used for matrix-vector multiplication is used to create the entries of a new matrix.
For example,
[ C11 C12 C13     [ D11 D12 D13
  C21 C22 C23       D21 D22 D23   =
  C31 C32 C33 ]     D31 D32 D33 ]

[ C11*D11 + C12*D21 + C13*D31   C11*D12 + C12*D22 + C13*D32   C11*D13 + C12*D23 + C13*D33
  C21*D11 + C22*D21 + C23*D31   C21*D12 + C22*D22 + C23*D32   C21*D13 + C22*D23 + C23*D33
  C31*D11 + C32*D21 + C33*D31   C31*D12 + C32*D22 + C33*D32   C31*D13 + C32*D23 + C33*D33 ]
Again, multiplication is carried out by moving across the rows of the first matrix and down
the columns of the second matrix. Another way to think about matrix-matrix multiplication
is that if we want to know the entry at any row and column of the resulting matrix, we sim-
ply take the dot product of that row of the first matrix (C) with the column of the second matrix (D).
Using the idea of matrix-matrix multiplication we can also perform matrix exponentiation
>> C^4
which is the same as the matrix multiplication C*C*C*C.
5.4 MATRIX-VECTOR OPERATIONS
As the core of Matlab is vector and matrix operations, it is not surprising that there are many
functions for such operations. Below we explore only a few of the most important.
5.4.1 OUTER PRODUCTS
The inner product is an operation on two vectors that results in a scalar. The outer product is an
operation on two vectors that results in a matrix. The key to understanding the difference is in the
dimensions of the two vectors.
[ a                     [ a*s   a*t
  b                       b*s   b*t
  c      [ s  t ]   =     c*s   c*t
  d ]                     d*s   d*t ]
You should note that this operation is the same as the matrix-matrix multiplication but now with
two vectors. In general, the outer product of an N × 1 vector and a 1 × M vector is an N × M matrix.
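As a minimal sketch in Matlab (the vectors here are made up purely for illustration):
>> col = [1; 2; 3; 4];    % a 4x1 column vector
>> row = [5 6];           % a 1x2 row vector
>> col*row                % a 4x2 matrix, the outer product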
5.4.2 MATRIX INVERSE
Earlier we introduced the matrix-vector equation
Ax = b
(5.3)
Many problems in engineering can be put in these terms, where we know A and b and would like
to find x. Using some ideas from linear algebra, we could rearrange our equation.
Ax = b               (5.4)
A^-1(Ax) = A^-1 b    (5.5)
(A^-1 A)x = A^-1 b   (5.6)
(I)x = A^-1 b        (5.7)
x = A^-1 b           (5.8)
where A^-1 is called the inverse of A. By definition
A^-1 A = I    (5.9)
where I is the identity matrix, e.g., ones along the diagonal and zeros elsewhere.
There are two problems with inverses. The first is mathematical: not every
matrix has an inverse. The criterion to have an inverse is that the matrix must be square, e.g., have the
same number of rows and columns, and must have a determinant (det command in Matlab) that
is non-zero. A matrix with these properties is called invertible or non-singular. A matrix that
doesn’t have an inverse is called singular. Second is a more practical concern. Even for a matrix
that is invertible, there are a number of numerical methods for finding A^-1 given A. Matlab has
a command (inv) for just this sort of operation, but due to the limits of numerical accuracy, it
sometimes will not be able to return an inverse.
>> A = [3 2 -5; 1 -3 2; 5 -1 4];
>> det(A)                %Check to make sure an inverse will work
>> Ainverse = inv(A);
Now if we define a vector b we can try to solve for x
>> b = [12; -13; 10];
Note that b is a 3×1 vector. This situation corresponds to the following set of linear equations.
$$
\begin{bmatrix} 3 & 2 & -5 \\ 1 & -3 & 2 \\ 5 & -1 & 4 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
=
\begin{bmatrix} 3x_1 + 2x_2 - 5x_3 \\ 1x_1 - 3x_2 + 2x_3 \\ 5x_1 - 1x_2 + 4x_3 \end{bmatrix}
=
\begin{bmatrix} 12 \\ -13 \\ 10 \end{bmatrix}
$$

where the vector x is what we want to solve for. In Matlab there are two ways to solve for x. The
first is to compute the inverse and then solve.
>> x = Ainverse*b
The second reveals the special role played by the \ symbol in Matlab
>> x = A\b
And you can verify that x really is the solution by
>> A*x
It is important to note that Matlab uses different numerical methods when the inverse is taken
directly and when the \ symbol is used. The \ symbol is nearly always better for both accuracy and
time.
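A quick check that the two approaches agree (norm measures the size of the difference between the two answers):
>> x1 = inv(A)*b;
>> x2 = A\b;
>> norm(x1 - x2)     % should be zero, or extremely close to it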
5.5 OTHER LINEAR ALGEBRA FUNCTIONS
Some other commands that have a direct tie to linear algebra are rank, trace, det (determinant) and
eig (eigenvalues and eigenvectors).
>> Rand = rand(5,5);
>> rank(Rand)
>> trace(Rand)
>> [V,D] = eig(Rand)
>> det(Rand)
You may wish to view the help for these functions or consult a linear algebra textbook.
5.6 MATRIX CONDITION
There are some instances when a set of equations is encountered that in theory has a solution,
but for which it is difficult to compute an inverse numerically. Consider the following system of equations:
$$a + b = 2 \quad (5.10)$$
$$a + (1 + \epsilon)b = 3 \quad (5.11)$$

where ε is some small number. We can solve this system by subtracting Equation 5.11 from
Equation 5.10. The result is

$$b = \frac{1}{\epsilon} \quad (5.12)$$
so the smaller ε the larger b will be, and with it, a will become large and negative. Matlab typically
does not do well when numbers become too small or too large.
We can recast our problem in Ax = b form
$$
\begin{bmatrix} 1 & 1 \\ 1 & 1+\epsilon \end{bmatrix}
\begin{bmatrix} a \\ b \end{bmatrix}
=
\begin{bmatrix} 2 \\ 3 \end{bmatrix}
$$
Note that there is nothing special about x or b, but A is where the problem lies. The idea of matrix
condition is a way to quantify how much numerical error will be associated with inverting a particular
matrix. The Matlab condition command is cond. To expose this problem in Matlab, try the following
commands, where ε = 1e-20
>> A = [1 1; 1 1+1e-20];
>> inv(A)
>> cond(A)
Although in theory, you should be able to take the inverse of A, Matlab will give an error that the
matrix is singular to working precision.
5.7 EXERCISES
1. Turn in the diary for this chapter.
2. Create a random 25×1 vector, r, using the rand command. Sort this vector using the sort
command in DESCENDING order. Turn in your sorted vector in a “.mat” file.
3. Nearly every ECG device has an automated heart beat detector (it detects the peak of the R-
wave) and can count the number of beats since the device was turned on. The problem is that
these cumulative counts must be turned into a heart rate over time. The data are stored in a
vector with one entry per minute, but each entry is the running total of beats counted since the
recording began, not the number of beats within that minute. Below is data from the first 7
minutes of recording.
a = [0 64 137 188 260 328 397 464]
Use the diff command to take the difference between each value. Note that the length of the
difference is one less than the length of the original vector. Find the average heart rate over
these 7 minutes. Show your commands in a diary file.
4. The Nernst Equation is used to compute the voltage across a cellular membrane given a
difference in ionic concentration
$$E_k = \frac{RT}{F}\ln\!\left(\frac{[K]_e}{[K]_i}\right) \quad (5.13)$$

where $E_k$ is the Nernst Potential in mV, R is the Ideal Gas constant (1.98 cal/(K·mol)), F is
Faraday’s constant (96480 C/mol), and $[K]_i$ and $[K]_e$ are the concentrations of intracellular and
extracellular Potassium in mM respectively. While working in a lab you recognize that in
performing repeated cellular experiments, you must know the value of the Potassium Nernst
potential. In the experiment, you can control the temperature and the extracellular concentration
of Potassium. To avoid needing to compute the Nernst Potential each time, you decide to
create a lookup table, i.e., a matrix, containing the Nernst Potentials, thus avoiding the need
to perform a new calculation for every experiment.
Begin by typing in the following commands
>> Temp = [280:5:320];
>> Ke = [0.5:1:29.5];
>> R = 1.98;
>> F = 96480;
>> Ki = 140;
You then recognize that you can form Ek from two vectors
$$A = \frac{RT}{F}, \qquad B = \ln\!\left(\frac{[K]_e}{[K]_i}\right), \qquad E_k = A*B$$
where A*B is an outer product. Note that you will need to use the log command in Matlab and
make sure that your vectors are the appropriate dimensions for an outer product. Store your
result in the matrix Ek as a “.mat” file.
5. As a biomedical engineer you may be interested in designing a bioreactor for cell culture. A
typical chemical formula for the aerobic growth of a microorganism is
C2H5OH + aO2 + bNH3 → cCH1.7N0.15O0.4 + dH2O + eCO2
where the term CH1.7N0.15O0.4 represents the metabolism of the microorganism. The ratio of moles
of CO2 produced per mole of O2 consumed is called the respiratory quotient, RQ, which can
be determined experimentally. Given this ratio, we have four constants, a–d, that are unknown.
We can perform a mass balance on each of the four key elements
$$
\begin{aligned}
\text{Carbon:}\quad & 2 = c + (RQ)\,a \\
\text{Hydrogen:}\quad & 6 + 3b = 1.7c + 2d \\
\text{Oxygen:}\quad & 1 + 2a = 0.4c + d + 2(RQ)\,a \\
\text{Nitrogen:}\quad & b = 0.15c
\end{aligned}
$$
in matrix notation
$$
\begin{bmatrix}
RQ & 0 & 1 & 0 \\
0 & -3 & 1.7 & 2 \\
2RQ - 2 & 0 & 0.4 & 1 \\
0 & -1 & 0.15 & 0
\end{bmatrix}
\begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix}
=
\begin{bmatrix} 2 \\ 6 \\ 1 \\ 0 \end{bmatrix}
$$
Let RQ = 0.8 and then find the vector [a b c d] using Matlab’s matrix solve. Turn in a diary
and “.mat” file with the final vector.
CHAPTER 6

Scripts and Functions
6.1 INTRODUCTION
In this chapter, we will learn about two related ideas that allow Matlab to be a very powerful all-
purpose engineering application. The first is the idea of a script and the second is the concept of a
function. It will often be the case that future homework problems will require you to write either a
script or a function.
6.2 SCRIPTS
You may have found in previous sections that you would often type in a number of statements on
the command line in some sequence. For example, imagine trying to set up the following matrix.
$$
\begin{bmatrix}
1 & 2 & 0 & 0 & 0 \\
0 & 4 & 6 & 8 & 0 \\
0 & 0 & 3 & 12 & 0 \\
0 & 0 & 0 & 4 & 8 \\
0 & 0 & 0 & 0 & 10
\end{bmatrix}
$$
One strategy would be to enter the entire matrix on one line. A second, more reasonable strategy
may be to recognize that you might create a few diagonal vectors using the diag command. The
problem is that you will need to issue a sequence of statements to the command line in the right
order. For a 5×5 matrix, this may not be all that bad, but imagine if this were a 200×200 matrix.
To make matters even more difficult, there are many computer algorithms that would require
thousands of lines to be typed in one by one. One mistake might mean reentering all commands
from the beginning.
To overcome this problem, Matlab makes it possible to create a script - a text file that con-
tains all of the commands to be entered in sequence. We will use the matrix above to motivate the
creation of our first script.
>> edit MyFirstScript
The edit command in Matlab opens up the Matlab text editor. The second argument is the name
of a file where the script will be saved. In fact, the file will be called ‘MyFirstScript.m’ where the
‘.m’ designates that the file is a Matlab script. It is important to know that any text editor will do
(a script file is simply a text file), but Matlab’s editor has some built in functions that can help you
check for errors. Enter the following commands into the script file
DiagVec = [1 4 3 4 10];                  %Create Diagonal Vector
UpDiagVec = [2 6 12 8];                  %Create Upper Diagonal Vector
A = diag(DiagVec) + diag(UpDiagVec,1);   %Create Matrix
A(2,4) = 8;                              %Add in one more value in the second row, fourth column
Note that you can include % followed by comments. Matlab will not execute anything on the line
that follows a %. Now save the file and close the editor. You should see the file MyFirstScript.m in
your folder directory window. To issue the commands in your script to the Matlab command line,
you simply enter
>> MyFirstScript
>> whos
>> A
Matlab will literally enter the commands one at a time in sequence. This method of computing is
called interpreted programming or scripting and is the default in Matlab. You should notice that from
this point onward, any command that contains >> is meant to be entered on the command line. If
no >> is included, it means you should include the command in a script.
6.3 GOOD PROGRAMMING HABITS
It was mentioned above that some programs could contain thousands (maybe even millions) of lines
of code. In such a long program, it would be very easy to become lost. Below we will discuss some
ways to make your code easier to read.
6.3.1 COMMENTS AND VARIABLES
The first script created above was relatively simple and an experienced Matlab programmer could
easily decode its intent. As you become more versed in Matlab, your scripts will become more
complex and much more difficult to understand. Comments are the key to explaining complex code.
Imagine two scenarios when you are including comments: 1) If you handed your code to another
Matlab programmer, could they follow what you were trying to do? 2) If you came across your own
code five years from now, would you remember what you were doing and why?
There are some standard ways of commenting that can be very helpful. First is to include a
purpose at the top of your script. It is helpful to include in this statement a date or version number.
In this way, if you create a newer version you will not confuse the two. If you are writing code which
may someday be shared with another programmer, you should include your name as the author.
Second, it is very helpful to add comments to any variables you define, along with units and a brief
note on the relationship to your problem. For example, if you are solving a circuit problem, you may
initialize a vector, v, at the top of your script that will contain a vector of voltages at each node
>> v = zeros(4,1);    %Voltage vector at nodes in mV
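Putting the first two habits together, the top of a script might look like the following sketch (the file name, date, author and the variable R are only placeholders):
%MyFirstScript.m -- build the coefficient matrix for a small circuit problem
%Version 1.1, revised 14 March
%Author: Your Name Here
v = zeros(4,1);      %Voltage vector at nodes in mV
R = 470;             %Branch resistance in ohms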
Third, there are often blocks of code (ranging from a few lines to hundreds) which together perform
some subprocess of the entire script. In these cases, it is helpful to have a header at the beginning of
each block of code that explains the lines below. For example, the matrix, A above may have been
created as the resistance between nodes, i.e., the coefficients of simultaneous equations. Furthermore,
you should add the following line to “MyFirstScript.m”.
%%--------------Create Coefficient Matrix for Circuit---------------
The double percent sign is a feature of Matlab 7 or later and allows what is known as Cell Mode. In
Cell Mode, regions of code will be blocked together and easy to identify as a functional unit in the
code.
Fourth, there may be Matlab commands which require some explanation (you do not need
to enter the command below)
A(g(l),h(m)) = sum(min(B(g(i)-10:g(i)+10, h(i)-10:h(i)+10)));
Although a command such as the one above may make sense at the time, it has so many embedded
functions and indices that it will be difficult to understand what is going on. A comment can help
in these cases.
Lastly, it is often helpful to use indentation to mark the beginning and end of a functional
block of code. This concept will become more clear in Chapter 7.
6.3.2 CATCHING ERRORS AND DISPLAYING TEXT
There are two commands in Matlab that can allow you to be alerted to errors and display information.
Open up the file “MyFirstScript.m” and add the following line between the third and fourth line
disp('Diagonals created, now entering off diagonal entries');
In more advanced programs you may also want to be able to report out any errors. To view how this
will work, enter the following on the last line of your first script
error('Check to make sure matrix is non-singular first!');
Try running the script with these two additions and observe the results. You may want to view the
help files to understand more about the features of disp and error. Do not be alarmed when the error
command sends text to your screen in red.
The difference between disp and error is that disp will simply send text to the command
line and continue on with your script. On the other hand, error will display text in red and then
terminate your program.
6.4 SCRIPT EXAMPLE - THE RANDOM WALK
A bacterium placed in a uniform concentration of nutrients will follow what is known as a random
walk. There has been much written on random walks, in fields as diverse as thermodynamics,
economics and history. In a biological context, a random walk can be thought of as a search process,
e.g., for food, light or a potential mate, when there are no outside cues.
To program a random walk, open a script file called “RandomWalk.m” and enter the initial
x − y position on the first line.
C = [1,1];              %Initial Coordinate
Next, we define a random distance to move in the x and y directions
Move = randn(1,2);      % get next move
and, lastly, move to a new spot
C = [C; C+Move];        % make next move relative to current position
The line above requires some explanation. In the first line of our script we created a vector [1 1]. In
the second line we created a random move vector, Move. In the third line, we are turning C
into a matrix. The first row of the matrix will contain the first x-y coordinate and the second row
will contain the second x-y coordinate (C+Move). To make another move
Move = randn(1,2);      % get next move
C = [C; C(end,:)+Move]; % make next move relative to current position
Here the general idea is the same as above: we are adding a third location to the bottom of the
matrix C, which is now 3×2. The seemingly strange indexing (C(end, :)) is to keep the dimensions
uniform. Remember that Move is a 1×2 vector and can only be added to another 1×2 vector, in
this case the current last row of C. Try running your script to be sure that it works. Then try adding
8 more lines of the Move and C commands to your script. At the bottom of your script include the
following line
plot(C(:,1),C(:,2))     %Plot the random path traversed
You should see that the x-y path of your bacterium has been plotted. An enormous advantage
of a script is that you can run your script again very easily without needing to reenter each command
one at a time. And due to the nature of a random walk, you should get a different result each time.
6.5 FUNCTIONS
In 1962, Herbert Simon, a pioneer in the field of artificial intelligence and winner of the Nobel
Prize in Economics, wrote an article called “The Architecture of Complexity”. In it was a parable
of a watchmaker who is interrupted every few minutes to answer the phone. In frustration the
watchmaker develops a solution - make a number of very simple parts that take less time to
create, then assemble these simpler parts together into the watch. In this way, if interrupted, the
watchmaker will know where to pick up after an interruption. Simon’s conclusion was that any
system that is sufficiently complex will only become so, and continue to function, if it is arranged in
some sort of hierarchy. Simple operations are put together to make more complex functions and so
on up the hierarchy.
The computational equivalent of building a hierarchy is to split up a long and complex pro-
gramming assignment into smaller functions (also sometimes called subroutines). The simpler
functions can then be combined together to perform the more complex task.
The reason for programming this way is four-fold. First, and already mentioned above, is
clarity. Imagine that you write a long section of code, leave it for a year and then come back to it.
Will you know where you left off? This has echos of the watchmaker parable above. A function
can help make clear what has and has not already been finished. Second, it is much easier to fix
small sections of code as you go. This is the idea of debugging, a topic that will be discussed at the
end of this chapter. Third is reuse of code. There are often times when a particular function will be
used more than once in a single programming assignment. And there are many instances where a
particular function may be used again in a completely separate assignment. For example, a function
that builds an adjacency matrix is a task that might be reused by programs for cardiac, neural, gene,
social and ecological networks. Fourth is collaboration. Functions allow teams of programmers to
coordinate their work to solve very large problems.
6.5.1 INPUT-OUTPUT
The coordination that would be required to collaborate is contained largely in how a function will
transform inputs into outputs. Below is the general form that would allow a function to be called
from the Matlab command line
OUTPUTS = FunctionName(INPUTS);
It may come as no surprise that many of the operations you have been using in Matlab are really
just functions that are part of the Matlab distribution. To view how a built-in function is written in
Matlab, use the type command
>> type mean
which will display the actual Matlab code that is executed when you use the mean command. You
should first notice that the first line of mean contains a function declaration with the following form
function y = mean(x,dim)
The key word function tells Matlab the following file will be a function with the form OUTPUTS
= FunctionName(INPUTS). It is important that your “FunctionName” is also the name of the
script file you created. In our example, the text file that contains the mean function is called
“mean.m”. Therefore, if you create a function called “SquareTwoNumbers”, your text file would
have the name “SquareTwoNumbers.m” and would have a first line of function [xsquared,ysquared] =
SquareTwoNumbers(x,y).
Following the function declaration for mean is a long section of code that is commented
(%) out. This is in fact the help for the mean command. One very nice feature of Matlab is that
the code and help information are all in the same file. When you write a function, it is good
programming form to always include a help section. And you should follow the convention used in
Matlab of explaining the format for the inputs (are they scalars, vectors or matrices?) and outputs.
It is also helpful to have a few brief words on how the function works.
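Putting these pieces together, the hypothetical SquareTwoNumbers function mentioned above might look like the sketch below (the help text and comments are only an illustration of the conventions, not a built-in Matlab function):
function [xsquared, ysquared] = SquareTwoNumbers(x, y)
%SQUARETWONUMBERS  Square two numbers.
%   [XSQUARED,YSQUARED] = SQUARETWONUMBERS(X,Y) returns the element-wise
%   squares of the inputs X and Y, which may be scalars, vectors or matrices.
%   The outputs have the same dimensions as the corresponding inputs.
xsquared = x.^2;     %element-wise square of the first input
ysquared = y.^2;     %element-wise square of the second input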
You should note that not all built-in Matlab functions are “.m” files.
>> type sum
will return “sum is a built-in function”. There are a number of core Matlab functions that have been
compiled. The reason to compile a function (as opposed to running it as an interpreted script) is
that it will run much faster. Although we will not discuss them here, Matlab has a way of deeply
compiling your own functions. You can find out more about this by typing “help mex”.
6.5.2 INLINE FUNCTIONS
Matlab does allow users to create what are called inline functions. These are functions that appear
in the same text file. Although they can be very useful in some situations, we will not discuss them
here.
6.5.3 THE MATLAB PATH
You have already seen that Matlab has built-in functions, some compiled and some as scripts. But
all of these scripts are stored in directories and chances are very good that you are not in those
directories. Matlab needs to know how to find “mean.m” when you use that particular function. For
this reason, Matlab contains a list of directories to search for “mean.m”. You can see this list by typing
in
>> path
The advantage of the path is that you do not need to tell Matlab where to look for built-in functions.
The disadvantage is that if you write a new function, and are not in the directory that contains your
new function, Matlab will not know where to look for it. There are two potential solutions. First,
you could simply move into the directory that contains your function. For simple programming
exercises this is your easiest solution. In a large programming assignment, however, you may wish to
have several directories to help organize your functions. In this case, you will need to add a search
path to Matlab. You can view the help for path to see examples for your particular operating system.
You may also use the addpath command.
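For example (the directory name below is only a placeholder for wherever you keep your own functions):
>> addpath('C:\MyMatlabFunctions')   %add a folder to the Matlab search path
>> path                              %confirm that it now appears in the list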
One additional consideration when naming functions is that Matlab already has a number
of functions. If you name your new function the same as a built-in function, Matlab may become
confused as to which function to use. It is a good idea to use unique names for your functions.
6.5.4 FUNCTION SIZE
So how big should your function be? The answer will depend upon the nature of your problem. In
general, a function is meant to achieve one task. It should be something that a user could pick up
and have a good idea of how it will work. On the other hand, the task should not be trivial. The
balance of these two will change as you become a better programmer. For now, you should focus
on writing short, simple functions and nest them in a hierarchy - i.e., use simple functions to build
functions of medium complexity and then use the medium complexity functions to create more
complex functions, and so on.
6.6 DEBUGGING
In writing both scripts and functions it is very helpful to have some techniques for debugging your
code. First, Matlab does have a built in debugging tool. Although we will not discuss the debugging
tool here, you can find out more information by viewing the help for the debug command. Second,
trying to write all of your code at once may help you organize your ideas, but you should not expect it
to run properly. Rather it is good programming practice to write your code in small sections, testing
them out as you go. In this way, you can add a line or two to your program and then test them out to
be sure they work. If there is an error, you will know that it has occurred in those lines. Third, you can
take the semi-colon off of any line in a Matlab script and the output will be sent to the command-line
as the script runs. The advantage is that you can check to make sure that scalars, vectors or matrices
look right. The one danger in this method of debugging is that you might overload the command
line, e.g., sending a 1000×1000 matrix to the command line is not a good idea. Rather, you may
create a new variable that will give you the information you need to know. For example, to perform
a quick check on a 1000×1000 matrix you might simply include the following line
[N, M] = size(A)
which will allow you to see the size of your matrix on the command line. Alternatively, it might be
the sum or spy command that you may want to use to help you debug.
When it comes to writing functions, it is often a good idea to write your desired function
as a script first. In this way, you can simply put the inputs and outputs at the top of the file and
change them by hand. Once you become confident that your code is working the way you intended,
you can turn it into a function.
6.7 USER INPUT
Earlier it was mentioned that a good Matlab function has defined inputs and outputs. In a large
program, one function may call another, which called another, and so the output of one function
may become the input to another function. But often, we also want our program to be interactive,
meaning that the user can change inputs that impact the flow of the program.
6.7.1 INPUT
There are times when you may wish to allow the user to enter some input to a script or function.
The input may be to make a decision about how to proceed, e.g., perform algorithm 1 or algorithm
2, change a parameter value, e.g., coefficient or an initial condition, or even terminate the program.
The command in Matlab to use in these instances is input.
result = input('The rate constant is-->');
When placed in a script (or function), the text “The rate constant is-->” will be displayed on the
command line. Matlab will then wait for the user to enter a command. Once the user enters a number
and presses enter, that number will be stored in the variable result and will then be available to the
rest of the script. The input command can be used to return any Matlab data type.
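For example, input can return a character string if you pass 's' as a second argument (the prompt text here is arbitrary):
name = input('Enter the patient name --> ', 's');   %the typed text is stored in name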
6.7.2 GINPUT
You will create a new script that contains the following commands. Name your script file “MySec-
ondScript”. First, create the matrix A in Matlab and then use the imagesc command to visualize it
in a figure.
$$
A = \begin{bmatrix}
1 & 2 & 0 & 6 & 1 \\
0 & 4 & 6 & 8 & 0 \\
2 & 0 & 3 & 12 & 2 \\
8 & 2 & 0 & 4 & 8 \\
7 & 0 & 5 & 0 & 10
\end{bmatrix}
$$
Next, you should also add a color bar using the colorbar command. On the next line you should
include the following line
[x, y] = ginput(2);
The meaning of the line above is to bring up a cursor which you can move around with your mouse.
When you click the mouse, the x-y coordinate of that first click will be stored in the variables
x and y. But the argument to ginput is a 2, meaning that the command will wait for 2 clicks of
the mouse. Save and run your script and then try clicking two different points on your figure. View
the 2×1 vectors x and y to verify that they contain the coordinates of your mouse clicks.
6.8 FUNCTION EXAMPLE
The random walk script created above was able to simulate the random movement of a bacterium. The
script was limited in at least two ways. First, there was no way for the user to specify parameters from
the outside. Second, two lines were repeated over and over again. In a preview of the next chapter,
we will begin by tackling this second problem. Open up a script called “RandomWalkReprise.m”
and enter
figure(1)
hold on
C = [1,1];                          %Initial Coordinate
for i=1:10
    Move = randn(1,2);              % get next move
    C = [C; C(end,:)+Move];         % make next move relative to current position
    plot(C(end-1:end,1),C(end-1:end,2));
    pause(1)
end
plot(C(:,1),C(:,2))
The first two lines create a blank figure and then make sure that every future plot command will
be sent to Figure 1. The next line creates the initial coordinate. The following line contains what
is known as a for loop. You will learn much more about loops in the next chapter. For now, all you
need to know is that Matlab will execute the commands between the for and end ten times, e.g.,
1:10. The first two lines inside the for loop are familiar. The plot command should also be familiar,
but you should make sure that you understand the indexing (hint: we are drawing a line from the
previous point to the current point each time through the loop). The last line in the for loop will
pause for one second to give you a chance to see what is happening. Run the script and observe the
results.
Now that the script is working, we can begin to make it more general. Copy RandomWalkReprise.m
to RandomWalkReprise2.m and modify the new script to match the code below
Initialx = [2,1];                   %Initial coordinate
NumLoops = 50;                      %Number of steps to take
pausetime = 0.2;                    %Duration of pause
figure(2)                           %Change to not confuse with previous script
hold on
C = Initialx;                       %Initial Coordinate
for i=1:NumLoops
    Move = randn(1,2);              % get next move
    C = [C; C(end,:)+Move];         % make next move relative to current position
    plot(C(end-1:end,1),C(end-1:end,2));
    pause(pausetime)
end
plot(C(:,1),C(:,2))
In this version of the script, all of the values which we want to be user defined have been placed at
the top of the script. The last step is to turn Initialx, NumLoops and pausetime into inputs to a
function, called “RandomWalker”, with the matrix C as the output
function C = RandomWalker(Initialx, NumLoops, pausetime)
figure(2)                           %Change to not confuse with previous script
hold on
C = Initialx;                       %Initial Coordinate
for i=1:NumLoops
    Move = randn(1,2);              % get next move
    C = [C; C(end,:)+Move];         % make next move relative to current position
    plot(C(end-1:end,1),C(end-1:end,2));
    pause(pausetime)
end
Then on the command line try
>> clear
>> C = RandomWalker([3,4], 400, 0.01);
>> whos
The last important note about functions is that when they are completed, Matlab’s memory will
contain only the inputs and outputs. For example, Move is a variable that is used only internal to
the function and therefore does not show up in Matlab’s memory after the function has completed.
6.9 EXERCISES
1. Turn in the “.m” files MyFirstScript.m, MySecondScript.m, RandomWalk.m, Ran-
domWalkReprise.m, RandomWalkReprise2.m and RandomWalker.m.
2. Epidemics occur when an infection appears seemingly out of nowhere, spreads throughout
a population, sometimes disappearing just as suddenly. The Kermack-McKendrick model
describes the number of susceptible, xn, the number of infected, yn, and the number of removals
(quarantined, immunized), zn. For a constant population we can assume xn + yn + zn = N.
Once N is set, we therefore only need to know two of the variables to know the remaining
variable. The dynamics of Kermack-McKendrick model are
$$x_{n+1} = x_n e^{-a y_n} \quad (6.1)$$
$$y_{n+1} = \left(1 - e^{-a y_n}\right)x_n + b\,y_n \quad (6.2)$$
where e can be represented by the exp function in Matlab, and a and b are constants. First, create
a script called ‘KMmodelScript’ that has the following lines at the top
a = 0.02;
b = 0.0;
xn = 25;
yn = 5;
Then add two more lines
xn1 = ?
yn1 = ?
where you fill in ? with the Kermack-McKendrick model. You must turn in this script. Next,
create a function called “KMmodel” that will take as inputs a, b, xn and yn and output xn+1 and
yn+1. You then must create a script called ’KMmodelFunctionTest’ that contains the following
lines
a = 0.02;
b = 0.0;
xn = 25;
yn = 5;
[xn1,yn1] = KMmodel(a,b,xn,yn);
Turn in KMmodel.m and KMmodelFunctionTest.m.
3. One way to model the dynamics of a population of a particular species is to think of it as a
series of generational groups, for example, infants, juveniles, mature and post-reproduction. In
all cases, as the years go by, some animals will move from one category to another, some will
die, others will be born as infants. We can define a vector
$$
x_n = \begin{bmatrix} x_n^{\,\mathrm{infant}} \\ x_n^{\,\mathrm{juvenile}} \\ x_n^{\,\mathrm{mature}} \\ x_n^{\,\mathrm{old}} \end{bmatrix}
$$
Then we can hypothesize that there is a matrix, A, which will transform xn to xn+1
$$x_{n+1} = A x_n \quad (6.3)$$
Such a matrix is called a stage structure matrix. From 1973 to 1987 Brault and Caswell studied
killer whale populations and developed the following A matrix
$$
A = \begin{bmatrix}
0 & 0.0043 & 0.1132 & 0 \\
0.9775 & 0.9111 & 0 & 0 \\
0 & 0.0736 & 0.9534 & 0 \\
0 & 0 & 0.0452 & 0.9804
\end{bmatrix}
$$
Moving the current population forward one year therefore requires multiplication by A. Mov-
ing the population forward by two years requires two multiplications by A
$$
\begin{aligned}
x_{n+1} &= A x_n &\quad&(6.4)\\
x_{n+2} &= A x_{n+1} &&(6.5)
\end{aligned}
$$

or

$$
\begin{aligned}
x_{n+2} &= AA\,x_n &\quad&(6.6)\\
x_{n+2} &= A^2 x_n &&(6.7)
\end{aligned}
$$
Therefore, if we want to predict the population at any year in the future we can use the idea
of matrix exponentiation
$$
\begin{aligned}
x_{n+2} &= AA\,x_n &\quad&(6.9)\\
x_{n+y} &= A^y x_n &&(6.10)
\end{aligned}
$$
where the exponent y is the years in the future.
Write a function (“PopulationStage”) that will take in any current population vector x, a stage
structure matrix A and predict the population at any given year y. Write a script to test your new
function with the A as defined above and a current population vector x = [10, 60, 110, 70].
Show that you can predict the population 50 years into the future.
CHAPTER 7

Loops
7.1 INTRODUCTION
One of the prime purposes of using a computer is to automate a task that would be very tedious to
perform by hand (either with pencil and paper or on a calculator). The usual implication is that some
task is to be performed over and over again in some systematic way. This chapter will be concerned
with the programming concept of a loop, a feature that is at the heart of nearly every computer
algorithm. Perhaps the most important concept to understand about loops is the least intuitive: how
to get them to stop. In fact, the method of stopping a loop is often how they are classified. In this
chapter, we will explore the for loop in detail and wait until the next chapter to explore the while
loop.
7.2 THE FOR LOOP
There are often algorithms where you know ahead of time exactly how many times an operation
must be performed before stopping the loop. For example, if we must perform some data analysis
and know that there are 1037 points, there should be some way to move from point 1 (perform the
analysis) to point 2 (perform the analysis) to point 3, and so on. In Matlab, and nearly all other
programming languages, this type of operation is performed by a for loop.
In Matlab, the most basic type of for loop is the following (do not enter the following
commands)
for i = 1:200
COMMANDS HERE
end
There are four parts to any for loop. The first is a variable (in this case, i) that will be used
to keep track of the number of times through the loop. The second is the values the variable
i will take on as it repeats. In our example, i will start by taking the value 1, then 2, then 3
and so on until reaching 200. Third is to place a bound on what will be looped over, denoted
in Matlab by the keyword end. Fourth are the commands that will be executed (there can be
as many as needed) between the for and the end lines. Note that this is a good example of us-
ing indentation to make clear where the loop starts and ends, as well as the commands to be executed.
To better understand how loops work, analyze the following code.
x = 1;
for i = 1:5
x = x + 1;
end
In this code, the variable x will start at a value of 1. Then the loop will begin with i = 1. On the
first pass through the loop, x will be increased by 1 (e.g., x = 2). On the second pass through the
loop (i = 2), x will be increased again by 1, e.g., x = 3, and so on until i = 5; when the loop finishes, x = 6.
It should be noted that there is no reason why you must start a loop at 1. The following
will all yield valid loops (do not enter on the command line, simply view the lines below):
for i = 89:108
for i = -27:38
for i = 234:-1:20
for i = 23.5:57.5
You should see that you can start at any number (positive, negative or even a non-integer) and then
loop up or down.
7.2.1 FOR LOOPS OVER NON-INTEGERS
You probably noticed that the colon notation was used to specify that Matlab should loop from
some number to another number in increments of 1. There are often cases where we would like to
increment by some other value. For example, imagine that you would like to assign a value at points
along a line of length 1cm every 0.01cm.
dx = 0.01;
for x = 0:dx:1
    v = x^3 + x^2 + 3
end
Upon executing the commands, you should see the values of v on the command line.
7.2.2 VARIABLE CODING
Above we defined a variable dx to specify how much to increment each time through the loop.
There is no need to do this, e.g., for x = 0:0.01:1 would have worked, but it demonstrates the good
programming practice of variable coding as opposed to hard coding. In hard coding, we simply write
out all of the numbers in our program, e.g., type in 0.01 for dx everywhere in the code. In variable
coding, we define a variable, e.g., dx, and then use that variable throughout the program. There are
two reasons why it is good practice to use variable coding. First, by defining a variable dx, we can
signal that there is some actual meaning (here a spatial size) to the variable. Second, if at any point
in time you would like to change dx, you can make the change in one place, eliminating the need to
search for all places where a step size is needed.
7.2.3 FOR LOOPS OVER AN ARRAY
A second trick that is often helpful is to loop over some array that has already been created
>> a = [2 50 34.5 27 91];
>> for i = 1:length(a)
>>     2*a(i)+100       %Any command here that requires a(i)
>> end
Here a vector a is created before the loop. The goal is to progress through the entire vector one
element at a time, e.g., a(i), performing operations on that element. Note that, in this case, the
variable i is playing two roles. It is the loop variable, but it is also the index to the vector, a. This is
one of the most powerful aspects of having a loop variable.
7.2.4 STORING RESULTS IN A VECTOR
There are two problems with the above script. First, sending the output to the command line makes
it difficult to interpret trends in the results. Second, we do not have any record of the resulting
calculation. To fix both problems, we can use the loop variable i. Enter the following into a script.
dx = 0.01;               % spatial step in cm
space = 0:dx:2;          % create space vector

%Allocate memory for a Concentration vector
Concentration = zeros(length(space),1);

for i = 1:length(space)
    Concentration(i) = space(i)^2;   %Any command here that requires space(i)
end
plot(space,Concentration)
In this example, a space vector is created. Then the vector Concentration is built up one element
at a time by looping over the entire space vector. The catch is that the space vector is also used to
create the Concentration vector. As shown in the plot, the above commands create a concentration
gradient in space.
A careful reader may realize that the above section of code could have been created much
more efficiently without any loops, by using a simple element-wise vector multiply
dx = 0.01;                   % spatial step in cm
space = 0:dx:2;              % create space vector
Concentration = space.^2;    % simple element-wise operation
plot(space,Concentration)
In fact, if faced with this type of problem you should choose the above solution, as loops are generally
much slower than matrix-vector operations. There are, however, many instances where a problem
can not be decomposed into matrix-vector operations. One such case is when you must know one
result to compute another result, as in the Kermack-McKendrick difference equation in the previous
chapter. A second example is the numerical integration of a differential equation.
7.3 EULER INTEGRATION METHOD
Great emphasis is often placed on solving differential equations, meaning that a closed form
analytical solution can be written down. The vast majority of differential equations, however,
can not be solved analytically. This is especially true for biological systems, where the equations
that govern how quantities change are almost always non-linear. In cases where a differen-
tial equation can not be solved analytically, we can approximate a solution numerically. There
are a number of numerical integration methods, but the simplest to understand is the Euler Method.
At the heart of the Euler Method is knowing where you are (current state) and where you
are headed (slope). In Figure 7.1, we define the current state as $V^t$ and the current slope as $\dot{V}^t$. We
can then approximate the slope as

$$\frac{dV}{dt} = \dot{V}^t = \frac{\Delta V^t}{\Delta t} \quad (7.1)$$

If we now pick a $\Delta t$ we can predict where V will be at time $= t + \Delta t$

$$V^{t+\Delta t} = V^t + \Delta V \quad (7.2)$$
$$V^{t+\Delta t} = V^t + \Delta t \cdot \dot{V}^t \quad (7.3)$$
Note that on the left-hand side is the prediction at time $= t + \Delta t$ and on the right-hand side are
all terms at time $= t$. In the figure, the predicted $V^{t+\Delta t}$ is not the exact solution, i.e., it does not fall
on the solid line. But, as $\Delta t$ is made small, the approximation will become better and better.
The generic approach of moving step-by-step through a differential equation is called nu-
merical integration. The ability to integrate a differential equation (even a non-linear one) is a
very important role for computer programming. And the Euler Method will work even if the
independent variable is something other than time. You should also note that there are many more
numerical integration methods that we will touch upon in future chapters.
7.3.1 NUMERICAL INTEGRATION OF PROTEIN EXPRESSION
To demonstrate how the Euler Method works in a real situation, we will integrate the differential
equation for protein expression.
Figure 7.1: Demonstration of Euler Method. The dotted line indicates the slope of V at time t. The
solid line indicates an analytical solution.
$$\frac{dY}{dt} = f(X) - \alpha Y \quad (7.4)$$
$$f(X) = \frac{\beta X}{K^n + X^n} \quad (7.5)$$
The differential equation above explains how a protein X can promote the upregulation of a gene
that produces protein Y . The term −αY expresses how over time the concentration of Y will fall
due to protein degradation and cell division. The constant term α describes the rate at which this
loss of Y will occur. The term f (X) governs the upregulation of Y , e.g., increase in concentration
of Y , and is dependent upon the concentration of X. The form of f (X) can vary, but we have
chosen to use the Hill equation for a promotor protein. The Hill equation has three parameters.
K is the activation coefficient and governs how much of X must be present to cause Y to be
expressed half as much as the maximum expression rate. β is the maximum expression, and the ex-
ponent n governs how quickly a promotor can switch from on to off as a function of X concentration.
Using some fancy analytical technique, we may be able to solve the equation analytically,
but the Euler method and some basic programming skills will allow us to investigate the meaning
of the parameters much more easily.
alpha = 1.0;              %Degradation of Y
beta = 1000.0;            %Hill Maximal Expression Level
K = 20;                   %Hill Half-Maximal Activation
n = 3;                    %Hill Exponent

%Create time vector
dt = 0.01;
EndTime = 20;
time = 0:dt:EndTime;

%Find Rate of Y production given concentration of X
x = 10;                   %Concentration of Promotor x
fx = beta*x/(K^n+x^n);    %rate of Y production

%Initialize Y concentration vector
Y = zeros(length(time),1);
Y(1) = 0.0;               %set to 0 but could be changed

%Loop over time using Euler Method
for i = 2:length(time)
    Y(i) = Y(i-1) + dt*(fx-alpha*Y(i-1));
end

%Plot out expression of Y over time
plot(time,Y);
Running the script above should result in a plot of the time course of Y expression. You will revisit
this example in an exercise below, so save your script as “ProteinExpressionScript.m”.
7.4 THE LOGISTIC EQUATION REVISITED
In Section 2.6, the logistic equation was described as
$$z_{n+1} = r z_n \left[1 - z_n\right] \quad (7.6)$$
where zn is the current value of z, zn+1 is the next value of z and r is a constant. For example,
we will set r = 3.2 and the initial population value to z = 0.5, e.g., half of the maximum. In our
previous use of the logistic equation, we simply reentered the command over and over again. Now
with scripting and loops, we can automate the process. Enter the following commands into a script
called “Logistic.m”.
r = 3.2;                     %constant for logistic equation
Initialz = 0.5;              %initial condition
N = 100;                     %number of iterations to perform
z = zeros(N,1);              %Create vector to store z
z(1) = Initialz;             %Set first value to initial condition
for i=2:N                    %Start at 2 because we already have z(1)
    z(i) = r*z(i-1)*(1-z(i-1));   %calculate next value
end
plot(z)
The script above will iterate through the logistic equation 100 times and then plot the results.
7.5 THE WHILE LOOP
There are times when it is not possible to know exactly how many times a loop should be performed.
It is still very important, however, to have some way to stop the loop from iterating forever. In
these types of situations, you may use a while loop. Because while loops require checking a logical
condition, e.g., is some variable larger than some other variable, we will not discuss while loops until
the next chapter.
7.6 NESTED LOOPS
Loops are assigned a variable for two purposes. The first is so that the variable can be used to perform
some useful function, e.g., as an index to an array or matrix. The second is because loops may exist
within other loops, and we need variables to make it clear where in the iteration sequence we are.
Do not enter the commands below.
for i=1:N
for j=1:M
COMMANDS HERE
end
end
In the template code above i is set to 1, then j is looped from 1 to M. Then i is set to 2 and j is
again looped from 1 to M and so on until i = N. You are certainly not limited to two nested loops,
and you can mix and match for and while loops as needed. Below we discuss two situations where
this type of nested loop structure can be very useful.
7.6.1 LOOPING OVER MATRICES
There are situations where it may be helpful to move systematically through a matrix. For example,
consider that a bacterium is capable of swimming up a sucrose gradient to reach a plentiful supply
of nutrients. If we were to simulate a bacterium swimming in such an environment, one of our first
steps would be to create a concentration gradient. Let us assume that we will create a gradient that
is low in the middle of a 200×200 grid and increases as we radiate outward from the center. The
problem is that we can not simply issue a few diag (or other Matlab) commands to create our matrix.
We must move through each point on the grid and compute its distance from the center.
C = zeros(200,200);
CenterPoint = [100 100];          %[xcoordinate ycoordinate]
dx = 0.01;                        %spatial step size

%compute real physical location of center
CenterLocation = CenterPoint*dx;

a = 2.5;                          %scale factor for distance-concentration

for i = 1:200                     %x loop
    for j = 1:200                 %y loop
        XL = dx*i;                %x location
        YL = dx*j;                %y location
        DistanceFromCenter = sqrt((XL-CenterLocation(1))^2+ ...
            (YL-CenterLocation(2))^2);
        C(i,j) = a*DistanceFromCenter;
    end
end
imagesc(C);
colorbar
In the line that created DistanceFromCenter we used another feature of Matlab. There are some
cases where a line must be very long, and therefore can be difficult to read. The “...” means to
continue the command onto the next line.
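For example, the following two statements are equivalent (the numbers are arbitrary):
total = 1 + 2 + 3 + 4;
total = 1 + 2 + ...
        3 + 4;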
It is important to note that we could have easily created a more general script by not as-
suming the grid would be 200 x 200. We also have the flexibility to make the center point
anywhere on the grid we would like. Set CenterPoint to [50, 75] and create a figure called
“SucroseGradient.jpeg” showing the result. Can you see how you might turn a script such as the
one above into a general function? Can you also see how you could create three nested loops to
create a 3D gradient? There are other situations where you may need to use nested loops to move
through a data structure, but you should use it only as a last resort. If you can find a way to perform
an operation without loops, it will generally execute much faster.
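As an illustration, the same sucrose gradient can be built with no loops at all by generating every x and y location at once with ndgrid (a sketch using the same values as the script above; ndgrid is used so that rows correspond to x and columns to y, matching C(i,j) in the loop version):
dx = 0.01;                                   %spatial step size
CenterLocation = [100 100]*dx;               %physical location of the center point
a = 2.5;                                     %scale factor for distance-concentration
[XL, YL] = ndgrid(dx*(1:200), dx*(1:200));   %matrices of all x and y locations
C = a*sqrt((XL-CenterLocation(1)).^2 + (YL-CenterLocation(2)).^2);
imagesc(C);
colorbar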
7.6.2 PARAMETER VARIATION
In Section 7.4, a script was created to iterate through the logistic equation. In this section we will
explore how the dynamics of the logistic equation change as the parameter r is varied. Copy the file
“Logistic.m” to a new script called “LogisticBifurcation.m”.
r = [2:0.001:4];                  % create vector with r values
figure(1)
hold on
for j = 1:length(r)               % loop over the r vector
    Initialz = 0.5;
    N = 1000;                     %CHANGED TO 1000
    z = zeros(N,1);
    z(1) = Initialz;
    for i=2:N
        %calculate next value of z using current r value
        z(i) = r(j)*z(i-1)*(1-z(i-1));
    end
    %create a vector with all values of r(j)
    rvec = r(j)*ones(500,1);
    %only use the last 500 values of z
    truncatedz = z(501:end);
    %plot points, but do not connect lines
    plot(rvec,truncatedz,'.');
end                               %End r loop
In the script above, one loop is used to systematically change r and a second loop is used to perform
the iteration of the logistic equation. You should notice that some of the skills learned in the sections
above have been used here, for example, creating the r vector first and then looping through it
with the loop variable j. You should also notice that we created two temporary variables, rvec and
truncatedz, for the purposes of removing any of the initial dynamics (transients) of the logistic
iteration. Here we are only interested in the long-term behavior. Save your bifurcation plot in a
figure called “LogisticBif.jpeg”.
We can now turn to the meaning of the plot. On the x-axis is the parameter r that we var-
ied from experiment to experiment. On the y-axis we have done something a bit unusual -
we have plotted the entire time series on top of one another. To gain insight into the logistic
behavior at a particular value of r, we can simply draw a vertical line upward to see where it
intersects values for z. If z has reached a steady-state, it will appear as only one point because
every point is the same. On the other hand, if z oscillates between two values, our vertical line
will intersect at 2 points. You should notice in your plot that as r is increased, we move from
steady-state to oscillating between 2 z values, to oscillating between 4 z values, to 8 z values and so on.
There are two points about the plot you have created that are important in many biological
systems. The first is that as r is changed, the system moves from one type of behavior to another, e.g.,
from steady-state to oscillating. What is most striking is that the transition is abrupt and occurs at a
particular value of r. These types of abrupt transitions are called bifurcations and are found in many
biological systems. The parameter r is called the bifurcation parameter. In the logistic equation,
there is only one parameter, r, that needs to be varied to drive the system through a bifurcation. In
many biomedical problems, however, it is some combination of variables that will move the system
through a bifurcation. Systems with bifurcations are excellent candidates for parametric studies and
highlight one of the strengths of using computation to study biological systems.
The second important point is that for some values of r, the behavior of the logistic equa-
tion becomes chaotic. It is easy to see from the plot that as r is increased, the period of oscillation
doubles, then doubles again, then again, and so on. The meaning is that the time series will visit
first 1, then 2, then 4, then 8, and then 16 unique values of z. This doubling continues until
the period is infinite. The meaning of an infinite number of possible values for z is that there
is no longer any defined period, e.g., the behavior never repeats. The logistic equation demon-
strates one of the most studied pathways to a chaotic system - a series of period doubling bifurcations.
Now that you have a bifurcation map and understand its meaning, you should copy “Logis-
tic.m” to “LogisticTest.m” and then try different values of r. You should be able to predict when r
will result in steady-state, periodic or chaotic behavior from the bifurcation plot. Create a plot of a
chaotic time series of the logistic equation and save it in a figure called “LogisticChaos.jpeg”.
7.7 EXERCISES
1. Turn in “Logistic.m”, “LogisticBifurcation.m”, “ProteinExpressionScript.m”, “LogisticBif.jpeg”,
“SucroseGradient.jpeg”,“LogisticChaos.jpeg”.
2. In Section 7.3.1, you created a script to compute expression of protein Y over time as a function
of a promotor protein X. In this exercise, you will modify your script to evaluate how Y expression
changes as X is changed. To do so, you will treat X as a parameter that will vary from 0 to 50
in increments of 1. The goal is to record the steady-state, i.e., t → ∞, value of Y. Below is a
portion of a script that should help you organize your script, “Assignment6-Problem2.m”.
figure(1)
hold on
%Find Rate of Y production given concentration of X
x = [0:1:50];
SteadyStateY = zeros(length(x),1);
for j = 1:length(x)
fx = beta*x(j)/(K^n+x(j)^n); %rate of Y production
%PLACE APPROPRIATE COMMANDS HERE
%Plot out expression of Y over time
plot(time,Y);
pause(0.2)
SteadyStateY(j) = Y(end);
end
figure(2)
plot(x,SteadyStateY);
In Figure 1, we will plot the time courses of Y for each value of x. The pause command is
used to clarify the trends as x is increased. In Figure 2, we are plotting only the very last
(steady-state) value for Y as a function of x.
3. Many biological systems can not be characterized by a single differential equation. In these
cases, we can think of the system as a set of coupled differential equations. For example,
$$
\begin{aligned}
\frac{dx}{dt} &= f(x, y, z) &\quad&(7.7)\\
\frac{dy}{dt} &= g(x, y, z) &&(7.8)\\
\frac{dz}{dt} &= h(x, y, z) &&(7.9)
\end{aligned}
$$
notice that functions f , g and h are functions of all three variables, i.e., the equations are
coupled. Also notice that f, g and h could take any form and will most likely not be linear.
For example, the FitzHugh-Nagumo model of a neuron is
$$
\begin{aligned}
\frac{dV}{dt} &= V - \frac{V^3}{3} - W + I &\quad&(7.11)\\
\frac{dW}{dt} &= a \cdot (V + b - cW) &&(7.12)
\end{aligned}
$$
where V is the cell membrane potential, W is a recovery variable and I is a stimulus current.
We will assume the constants are a = 0.08, b = 0.7 and c = 0.8. Because these equations are
non-linear, we cannot transform them to the form
$$\frac{d}{dt}\mathbf{x} = A\mathbf{x} \quad (7.13)$$
and therefore need to use a numerical integration technique. You can assume that the initial
values for V and W are both 0. Integrate the equations using the Euler Method with a
Δt = 0.01 from time = 0 to time = 500. You should create a script called “FHN.m” that
will allow you to change parameters. At the end of the script, you should plot the membrane
voltage, V , versus time. The parameter to vary is the stimulus current I . First try, I = 0. Next
try I = 1.0. Hint: You should see two very different behaviors. The stimulus current is in fact
a bifurcation parameter of the system. Find the value of I at which the bifurcation occurs.
Place this value in a comment at the bottom of your script “FHN.m”.
CHAPTER 8

Conditional Logic
8.1 INTRODUCTION
Conditional logic is the use of true and false statements in programming. When given a statement
it will either definitely be true or definitely be false. In computing terms, we can assign a “1” to any
statement that is true and a “0” to any statement which is false. The two types of states, true or false,
also go by the name of Boolean logic.
The purpose of using Boolean logic is that it can be used to alter the flow of a program.
Here we introduce the idea of the state of the program, which is simply the values of all of the
variables up to that point that are available in memory. Using this state, we can send the program
down different pathways. Therefore, conditional logic allows a programmer to write a much more
sophisticated and flexible program.
8.2 LOGICAL OPERATORS
To begin understanding how logical operators work, enter the following commands.
>> a = 5;
>> b = -1;
>> c = 0;
>> d = 3;
>> e = 3;
You can now think of the state as the variables a − e in memory. Next enter the following commands
one at a time to the command line. For more help, you may wish to type “help relop”.
>> logical(a)    %true (1) because a has a value other than 0
>> logical(c)    %false (0) because c has a numerical value of 0
>> a==b          %false because a is not equal to b
>> b~=c          %true because b is not equal to c
>> d>e           %false because d is equal to e
>> d>=e          %true because d is equal to e
>> d~=b          %true because d is not equal to b
>> d<a           %true because d is less than a
And, logical statements can be combined together using && (logical AND) and || (logical OR)
>> (a==b)||(d==e)    %True because d is equal to e
>> (a==b)&&(d==e)    %False because a is not equal to b
>> (a >= d)&&(c)     %false because c is logically 0
8.2.1 RANDOM BOOLEANS
Logical operations can be used to create a number of interesting data structures. One that has been
studied extensively by mathematicians, and has applications in systems biology, is a description of a
randomly connected network. In Chapter 4, we created an adjacency matrix for a ring. Using logical
operations, we can create the adjacency matrix for randomly connected nodes.
>> N = 100;           %Number of Nodes
>> Temp = rand(N,N);  %Temp filled with numbers between 0 and 1
>> A = Temp>0.5;      %Any value >0.5 becomes 1, any value <0.5 becomes 0
>> spy(A)
Because of the random distribution between 0 and 1, and the threshold value of 0.5, the matrix will be half
filled with 0s and half filled with 1s. Remembering that a 1 indicates a connection, this means that
each node is connected to, on average, half of the other nodes. To change the number of connections,
all that is necessary is to change the line “A = Temp>0.5”.
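For example, to make the connections sparser (roughly a 20% chance of any particular connection), you might use:
>> A = Temp>0.8;     %only values above 0.8 become 1, so about 20% of entries are connections
>> spy(A)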
8.2.2 LOGICAL OPERATIONS ON STRINGS
A second situation where logical operations can be helpful is in checking if two character strings
are the same. For example, such an operation may be very useful in searching or sorting the patient
database created in Section 4.6.1. Matlab has a number of commands specifically for comparing strings; one of the most useful is strcmp.
>>PatientName1 = 'John Doe';
>>PatientName2 = 'Jane Doe';
>>strcmp(PatientName1,PatientName2)
>>strcmp(PatientName1,'John Doe');
For more logical operations on strings, view the help for strcmp.
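As a minimal sketch of how strcmp might be used in a search (the cell array of names below is hypothetical and only stands in for the patient database of Section 4.6.1):
%Sketch: searching a (hypothetical) list of patient names
PatientList = {'John Doe','Jane Doe','Jim Doe'};   %hypothetical data
SearchName = 'Jane Doe';
for i = 1:length(PatientList)
    if strcmp(PatientList{i},SearchName)
        disp(['Found ' SearchName ' at position ' num2str(i)]);
    end
end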
8.2.3 LOGIC AND THE FIND COMMAND
In an exercise in Chapter 5, you created a vector that contained the first 7 minutes of heart rate
recordings.
>>HeartRateData = [0 64 137 188 260 328 397 464];
Now you would like to find the minutes when the heart rate went above 70.
>> highHRminute = find(diff(HeartRateData)>70)
In one command, using some logic and the diff and find commands, we can identify the minutes
where the heart rate went above 70.
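To make the single one-line command more concrete, here is the same computation broken into steps (the intermediate variable names are illustrative only):
beatDiff = diff(HeartRateData);   %beats in each minute: [64 73 51 72 68 69 67]
above70 = beatDiff > 70;          %logical vector:        [0 1 0 1 0 0 0]
highHRminute = find(above70)      %returns [2 4], i.e., minutes 2 and 4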
8.3 IF, ELSEIF AND ELSE
Logical commands can be used to control the flow of a program using the if, elseif and else structures
in Matlab. Do not enter the commands below.
x = 2;
if (x>1)
    COMMANDS HERE FOR x GREATER THAN 1
elseif (x==1)
    COMMANDS HERE FOR x EQUAL TO 1
else
    COMMANDS HERE FOR x LESS THAN 1
end
In this template code, we would execute the commands in “COMMANDS HERE FOR x GREATER
THAN 1” because x = 2. It is important to note that each branch can contain any number of commands.
8.3.1 THE INTEGRATE AND FIRE NEURON
One of the simplest models of an excitable cell, e.g., a neuron or muscle cell, is known as the
integrate and fire model. The model has two phases: 1) a period where it will integrate any electrical
input and charge up the cell membrane, and 2) a period when the cell produces a spike in membrane
potential and then resets back to rest. During the charging phase, we can have the cell obey a simple
differential equation for an RC circuit.
dV/dt = I − V/(R·C)    (8.1)
where R and C are the membrane resistance and capacitance. I is the current entering the cell.
By looking at the equation, if I is a constant current input, V (t) will rise up to some steady-state
value (exactly like the charging of an RC circuit). To switch to the second phase, we must define
a threshold voltage, Vt . When the cell membrane voltage reaches Vt , the program will stop using
the differential equation and then do two things. First, a “spike” will be issued. The meaning of the
spike is to set V to some constant value Vpeak for only that time step. Second, on the following time
step, V will be reset back to an initial value of 0. After the reset, the cell membrane is ready to be
charged again. These ideas can be captured in the following code called, “IAF.m”.
dt = 0.01;
EndTime = 50.0;
time = 0:dt:EndTime;
Vt = 5;        % Threshold voltage
R = 1;         % Set to 1 for simplicity
C = 2;         % Set to 2 for simplicity
I = 3;
Vpeak = 100;   % Set membrane voltage of spikes
V = zeros(length(time),1);
V(1) = 0;
for i=2:length(time)
    if (V(i-1)>Vt)
        V(i-1) = Vpeak;
        V(i) = 0;
    else
        V(i) = V(i-1) + dt*(I-V(i-1)/(R*C));
    end
end
plot(time,V);
You should notice in your plot that the membrane charges to V = 5, generates a spike and then
returns to 0. Upon returning to zero, it will begin charging again. The lines
if (V(i-1)>Vt)
V(i-1) = Vpeak;
V(i) = 0;
else
require some explanation because a trick was used here. At a particular iteration through the loop,
the loop variable has the value of i. When we perform the logical statement V (i − 1) > Vt , we
are really checking if the previous value was above threshold. If the statement is true we want the
previous value to instead be replaced with a spike, thus the statement V (i − 1) = Vpeak. Then we
want the current value of V to be reset (V(i)=0). This appears strange because we are going back to
a previous value of V and then changing it. There are other ways to write this section of code, but
this is much more efficient.
As a test of your script, you should slowly increase I , observing how the pattern of spikes
changes. You should notice that as I increases, the rate of firing increases. This is exactly what
occurs in most neurons in the brain. If you then decrease I below some level, no firing will occur at
all. This behavior is also observed in neurons.
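A minimal sketch of this test is given below; it assumes the variables dt, time, EndTime, Vt, R, C and Vpeak defined in “IAF.m” above are still in memory, and the range of currents tried is an arbitrary choice.
%Sketch: firing rate as a function of the stimulus current I
Ivalues = 0:0.5:10;
FiringRate = zeros(size(Ivalues));
for n = 1:length(Ivalues)
    I = Ivalues(n);
    V = zeros(length(time),1);
    for i = 2:length(time)
        if (V(i-1) > Vt)
            V(i-1) = Vpeak;
            V(i) = 0;
        else
            V(i) = V(i-1) + dt*(I - V(i-1)/(R*C));
        end
    end
    FiringRate(n) = sum(V==Vpeak)/EndTime;   %spikes per unit time
end
plot(Ivalues,FiringRate)
xlabel('Stimulus current I')
ylabel('Firing rate')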
8.3.2 CATCHING ERRORS
In Section 6.3.2, we learned how to create an error in Matlab, which will display red text and
terminate the script. The combination of error and if-else logic can be used to catch problems in a
function and report the problem to the user. For example,
x = 1;
a = 5;
if (a>0)
    b = a/x;
else
    error('Divide by Negative Number Not Allowed');
end
8.3.3 FUNCTION FLEXIBILITY
You may have noticed that some Matlab functions can take a number of different arguments. For
example, you can call the function mean in two different ways.
>> mean(A);
>> mean(A,2);
Inside the function (mean.m), the program must first determine whether there are 1 or 2 input arguments.
At that point, it will determine how to proceed. In every function, there is a built-in variable named
nargin (look at the help for nargin for other useful function commands) that contains the
number of input arguments. You should try type mean.m to see how the function was written to take
into account the two different ways of calling mean. A minimal sketch of the idea is given below.
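This sketch is illustrative only (Matlab's own mean.m is organized differently); it simply shows how nargin can supply a default value for a missing argument.
function [m] = MyMean(A,dim)
%MyMean - sketch of using nargin to supply a default argument
if (nargin < 2)
    dim = 1;              %default: average down the columns
end
m = sum(A,dim)./size(A,dim);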
8.3.4 WHILE LOOPS
In the previous chapter, the while loop was introduced as a way to begin an iteration without knowing
exactly how many times it should cycle through. As pointed out, however, every loop must have some
way to terminate. In the case of the while loop, we will continue to iterate until some logical condition
is met:
counter = 1;
while (counter<=25)
counter = counter+1
end
In the above code, we set counter equal to 1. On the first time through the while loop, we check if
counter is less than or equal to 25. If it is, then we proceed through the next iteration. But, within
each iteration, counter is being incremented by 1. Again, at the beginning of each iteration, the
condition counter <= 25 is checked. When counter becomes 26, the condition is not met and the
while loop terminates.
8.3.5 STEADY-STATE OF DIFFERENTIAL EQUATIONS
The above section demonstrated how a while loop uses logical statements to terminate a loop. It
should be noted that any logical statement can be used, and that the loop will terminate when
the statement becomes false. To demonstrate a more practical reason to use a while loop we will
consider iterating through the Kermack and McKendrick differential equation model for the spread
of a disease.
dS/dt = −βSI    (8.2)
dI/dt = βSI − γI    (8.3)
dR/dt = γI    (8.4)
where S is the population of Susceptible, I is the population of infected and R is the population
of recovered. β is a constant that reflects the number of susceptible that become infected each time
step. γ is a constant that reflects the number of infected which recover each time step. The script,
“SIR.m” below, uses a while loop to check when the variable S does not change.
dt = 0.01;
change = 0.001; %how to define stop condition
beta = 0.003;
gamma = 0.01;
%initial values
S = [0 100];
I = [0 2];
R = [0 0];
i = 2;
while (abs(S(i-1)-S(i))>change)
i = i+1;
S(i) = S(i-1) + dt*(-beta*S(i-1)*I(i-1));
I(i) = I(i-1) + dt*(beta*S(i-1)*I(i-1)-gamma*I(i-1));
R(i) = R(i-1) + dt*(gamma*I(i-1));
end
figure(1)
hold on
time = dt*[1:length(S)];
plot(time,S)
plot(time,I,'k');
plot(time,R,'r');
The reason for the while loop only becomes apparent as the values for β and γ are changed. First,
change “figure(1)” to “figure(2)” so that you can view both figures side-by-side. Then change β = 0.3
and rerun “SIR.m”. You should notice that the span of the time axis is drastically different in the two
figures. If we used a for loop, we would not know ahead of time how many iterations to execute
before S reached a constant value. But, with a while loop, we can explicitly check when S reaches a
steady-state.
8.3.6 BREAKING A LOOP
A for loop should be used when it is known how many times an operation should be performed. A
while loop should be used when an operation should be repeated until some condition is met. There
are times, however, when a loop is necessary that does not fit easily into either of these predetermined
types. To help gain more flexibility, Matlab has a break command. Note that the break command
typically would be placed inside of an if statement within a loop.
counter = 1;
for i=1:100
counter = counter+1;
if(counter>=76)
break;
end
end
counter
The code above will terminate the loop early when counter reaches 76.
8.3.7 KILLING RUNAWAY JOBS
Matlab can sometimes get carried away with an operation, usually if a command in a loop uses too
much memory or a loop becomes infinite. In these instances, it is helpful to be able to kill whatever
Matlab is doing. Although any work that has been performed will be lost, you may find that
this is useful in some cases. To interrupt a running command, press Ctrl+C.
8.4 SWITCH STATEMENTS
The if, elseif, else format is very good to use if you need to direct your program in only a few
directions. Here you can imagine the flow of your program proceeding down a main trunk and
then reaching a branch point. If there are only two or three branches, you can easily use if-else
statements. If the branch that occurs in the code must go in more than three directions, there is
another type of logical structure that can be very helpful - the switch structure.
Let us assume that you wish to study how neurons might synchronize to one another, and
how hypersynchronization may lead to epilepsy. You have the dynamics of the individual neurons
already coded, probably in the form of a difference or differential equation. Now you want to test out
how different networks of neurons might synchronize. The task of the study is to determine if some
networks are easier (or harder) to synchronize. For this study, you will need to create a variety of
different networks. You would like to try a one-dimensional ring (see Chapter 4), a two-dimensional
grid with Von Neumann connections, randomly connected neurons (see Section 8.2.1) and what is
known as a small world network. Here you could define a variable NetworkType that contains
a character string. Then you could include a switch statement that will send the code in different
directions, depending upon the variable NetworkType. You do not need to enter the commands
below.
NetworkType = '2DGrid';
switch(NetworkType)
    case {'1DRing'}
        N = input('Enter Size of Ring:');
        Network = %Create Ring Network Adjacency Matrix Here
    case {'2DGrid'}
        [Nx,Ny] = input('Enter number of rows and columns');
        Network = %Create 2D Diffusion-type Adjacency Matrix Here
    case {'Random'}
        N = input('Enter number of random points:');
        Network = %Create Random 2D Adjacency Matrix Here
    case {'SmallWorld'}
        N = input('Enter number of nodes in small world');
        Network = %Create Small World Network Here
    otherwise
        disp('Please Enter a Valid Network Type');
end
spy(Network)
In this instance, you used a character string as the variable that controls switching. But you could
use any data type in Matlab, as sketched below. You could also go back to this code at some later time and easily add
another type of network, e.g., a Moore neighborhood for the 2D grid.
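As a minimal sketch of switching on a numerical variable (the stimulus-protocol example and its values are illustrative, not part of the study described above):
%Sketch: a switch statement controlled by a numerical variable
StimulusProtocol = 2;
switch (StimulusProtocol)
    case {1}
        I = 0.0;          %no stimulus
    case {2}
        I = 0.6;          %constant stimulus
    case {3}
        I = 0.6*rand;     %randomly scaled stimulus
    otherwise
        error('Unknown stimulus protocol');
end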
8.5 EXERCISES
1. Turn in “IAF.m”, “SIR.m”.
2. A great deal of study has gone into how patterns are generated in nature, for example, the
spots on a leopard, stripes on a Zebra or more intricate patterns on some sea shells. It would be
tempting to attribute these patterns to genetics, but experiments have shown that the patterns
are very sensitive to environmental conditions present at specific stages of development. For
example, a mollusk that is placed in a colder environment may display a very different pattern.
As such, theoretical biologists proposed that the genetics lay down some very simple rules
that in slightly different situations can lead to drastically different patterns.
One of the simplest pattern generating mechanisms was discovered in the early 1980s
by Stephen Wolfram. It is a subclass of mathematical systems called cellular automata. Imagine
a one-dimensional line composed of a number of individual elements. Each element can
be either on (logical 1) or off (logical 0). We can then make a second one-dimensional line
below the first line. But, we can make the on and off pattern of this second line dependent
upon what is on or off in the first line. Then we can add a third line, with values depen-
dent upon the second line. We can continue this pattern indefinitely. In this case, the first
line is like a boundary condition, and then we simply follow some rules to continue adding lines.
Figure 8.1 shows the Wolfram rules that look to only three elements of the previous
line: the element directly above you, the element above and to the left, and the element above
and to the right. You should note that, in the figure, the 8 combinations in the top row are
the only possible combinations of three elements. These combinations are always given in
the same order. The rule shown is [ 0 1 1 0 1 1 1 0], which completely specifies the result
for all possible combinations of the three values in the previous one-dimensional line. A
bit of calculation will show that there are 256 (2^8) possible rules. Wolfram explored all 256
and showed that some are boring, some interesting, and others are very unexpected. Below
is code, which you should enter into a script called, “WolframCA.m”, that implements the
[0 1 1 0 1 1 1 0] rule.
L = 300;          %length of one dimensional line
T = 300;          %number of one dimensional lines to add

A = zeros(T,L);   %Initialize A matrix to store CA
A(1,L) = 1;       %Initial Condition

for i = 2:T
    for j = 2:L-1            %Don't loop over the two end points
        l = A(i-1,j-1);      %left at previous time
        m = A(i-1,j);        %this node at previous time
        r = A(i-1,j+1);      %right at previous time

        % Use logic to go through all 8 cases
        if (l && m && r)
            A(i,j) = 0;
        end
        if (l && m && ~r)
            A(i,j) = 1;
        end
        if (l && ~m && r)
            A(i,j) = 1;
        end
        if (l && ~m && ~r)
            A(i,j) = 0;
        end
        if (~l && m && r)
            A(i,j) = 1;
        end
        if (~l && m && ~r)
            A(i,j) = 1;
        end
        if (~l && ~m && r)
            A(i,j) = 1;
        end
        if (~l && ~m && ~r)
            A(i,j) = 0;
        end
    end
end
colormap(gray(2));
image(2-A);
Explore different rules by changing the values assigned in the A(i,j) = statements. You should note that even
one change can completely alter the patterns generated. At the bottom of your script, you must
report (in comments) the types of behavior you observed.
Figure 8.1: Example of Wolfram Cellular Automaton Rules.
CHAPTER 9
Data In, Data Out
9.1 INTRODUCTION
In previous sections, it was shown how, using the save and load commands, we could easily store
and then restore anything in Matlab's memory. But most powerful programming languages have
some mechanism for reading in data from other applications, as well as some way to write out data
that can be read by other applications. For example, you may wish to create geometries (adjacency
matrices) or initial conditions in Matlab, but then send these files to another program that will run
on a supercomputer, i.e., on many computers in parallel. In this way, you could run an enormous
biomedical simulation with millions (or maybe even billions) of points in a compiled, i.e., faster than
Matlab, program. Alternatively, you may receive data from a large simulation and need to read the data
into Matlab to analyze it. In this chapter, you will learn how to read in and write out data to and
from Matlab.
9.2 BUILT IN READERS AND WRITERS
The problem with the “.mat” file format is that you cannot open the file in anything other than
matlab. Matlab has a number of functions that allow for files to be read and written in other formats.
For example, there are two commands, xlsread and xlswrite, that allow matlab to share files with
Excel.
A = rand(20,30);
xlswrite('FirstExcelExample.xls',A);
There are many options to use with xlswrite, and you should view them if you need to perform a
more sophisticated function. Matlab also has a method of reading from an excel file.
[Numeric,Txt,Raw]=xlsread('FirstExcelExample.xls');
It should be noted that for communication with external programs, it is important to have the
proper version of matlab. It is possible that your version may not support all of the input and output
formats explained in this chapter.
An even more basic file is known as a flat text file. Here the data is simply arranged in a
text file in columns and rows. Enter the following numbers into a file called “MyFirstTextFile.txt”.
1 2 3 4 5
6 7 8 9 10
11 12 13 14 15
To load this file into matlab, you would type
[B] = load('MyFirstTextFile.txt');
In the file above, matlab used spaces (or tabs) to mark where the breaks are between numbers.
Here the spaces (or tabs) were used as delimiters. A delimiter is simply any character that
separates other characters. For many reasons, it may have been clearer for the file to have had
the format
1:2:3:4:5
6:7:8:9:10
11:12:13:14:15
where “:” is being used as the delimiter. Matlab has a variety of commands for reading (dlmread
and textscan) and writing (dlmwrite) these types of files.
For a more complete list of all the types of file formats supported in matlab, view the help
for fileformats.
9.3 WRITING ARRAYS AND VECTORS
In the above example, we created a relatively small matrix (A) that could easily be stored in memory.
Then we could simply send the entire matrix to a file. In the context of a simulation, however, we
might be generating the values as we progress throughout a simulation, and we generally do not
need to know all values at all times. In these situations, there is no need to store all of the data in
memory. Rather, we can send the data to a file on the hard drive as it is generated. We will now take
a bit of a detour to create a simulation that generates more data than can be stored in memory, and
therefore requires the creation of an external data file.
9.3.1 DIFFUSION MATRICES
To demonstrate a very generic situation where writing to a file is very helpful, we will examine an
important concept in the movement of any group of particles that is conserved as it flows through
a two-dimensional grid. The particles could be ions, animals of a species, a volume of fluid, or even
something less tangible such as heat energy. The key is that given a type of particle, q, we can define
a flow rate, dq/dt. In electrical circuits the conserved quantity is charge (q) and the flow is current, I.
I = dq/dt    (9.1)
Because the particles (charged ions in this case) must be conserved, we can total up all of the charge
entering a point and all the charge leaving a point, and they must be equal.
I_in = I_out    (9.2)
0 = I_in − I_out    (9.3)
Figure 9.1: Two dimensional resistor grid, showing the neighbors of a generic node i
Figure 9.1 shows a small portion of a large two-dimensional grid. Each point in the grid is connected
to its left, right, up and down neighbors and may share charge with only those nodes. Therefore,
current may flow left, right, up or down through some resistance, and if we total up the currents
entering and leaving any node, they should sum to zero. Given the directions of our arrows, which
are arbitrary but should be consistent for every node, we can derive
I_1 + I_2 − I_3 − I_4 = 0    (9.4)
Using Ohm’s Law, we can reexpress the above equation in terms of the voltages at the neighboring
nodes (V ) and resistances between nodes (R)
(V_{i−1} − V_i)/R − (V_i − V_{i+1})/R + (V_{i−Ny} − V_i)/R − (V_i − V_{i+Ny})/R = 0    (9.5)
(V_{i−1} − 2V_i + V_{i+1})/R + (V_{i−Ny} − 2V_i + V_{i+Ny})/R = 0    (9.6)
(V_{i−Ny} + V_{i−1} − 4V_i + V_{i+1} + V_{i+Ny})/R = 0    (9.7)
where subscripts denote the node numbers. We have used i as the node number to indicate that our
analysis will work for any node in the grid. The left and right neighbors therefore have subscripts
i − 1 and i + 1. The node numbers for the up and down neighbors are offset by the number of
nodes in each row of the grid (denoted Ny in the equations above).
If we were to write out an equation for each node in the grid, we would have a very similar
form to the equation above. The node itself would have a coefficient of −4/R, and the neighbors
would each have coefficients of 1/R. The nodes at the corners and at edges will have slightly
different equations, but they could easily be derived by the same conservation of current laws. In
fact, the voltages and coefficients can be decoupled and take the form of
Dv    (9.8)
where D contains the coefficients and v contains a vector of the voltages. Nearly all physical
quantities have some sort of conservation law, and therefore our analysis applies to much more than
electronic circuits.
The code below creates a diffusion matrix, T , for a grid that is M × N. You should save the
file as “FHNPropagate.m”. The code is admittedly long. You may wish to read the text after the
code to gain some overall understanding of how it works. Then, as you write each line, you should
be thinking about how it fits into the overall context of the code.
M = 5;   %Number of rows
N = 5;   %Number of columns
T = zeros(N*M,N*M);
for i = 1:M     %Loop over rows
for j=1:N       %Loop over columns
%Compute node number, assuming numbering
%starts going across rows
Node = (i-1)*N+j;
%All Interior Nodes
if ((i>1) && (j>1) &&(i<M) &&(j<N))
T(Node,Node-1) = 1;
T(Node,Node+1) = 1;
T(Node,Node+N) = 1;
T(Node,Node-N) = 1;
T(Node,Node) = -1.0*sum(T(Node,:));
end
%Top Boundary
if(i==1)
if (j==1)
%Upper left corner
T(Node,Node+1) = 2;
T(Node,Node+N) = 2;
T(Node,Node) = -1.0*sum(T(Node,:));
elseif(j==N) %Upper right corner
T(Node,Node-1) = 2;
T(Node,Node+N) = 2;
T(Node,Node) = -1.0*sum(T(Node,:));
else
%Top edge
T(Node,Node+1) = 2;
T(Node,Node-1) = 1;
T(Node,Node+N) = 1;
T(Node,Node) = -1.0*sum(T(Node,:));
end
end
%Bottom Boundary
if (i==M)
if(j==1)
%Lower left corner
T(Node,Node+1) = 2;
T(Node,Node-N) = 2;
T(Node,Node) = -1.0*sum(T(Node,:));
elseif(j==N)    %Lower right corner
T(Node,Node-1) = 2;
T(Node,Node-N) = 2;
T(Node,Node) = -1.0*sum(T(Node,:));
else
%bottom edge
T(Node,Node-1) = 1;
T(Node,Node+1) = 1;
T(Node,Node-N) = 2;
T(Node,Node) = -1.0*sum(T(Node,:));
end
end
%Left Boundary
if((j==1)&&(i~=1)&&(i~=M))
T(Node,Node+1) = 2;
T(Node,Node-N) = 1;
T(Node,Node+N) = 1;
T(Node,Node) = -1.0*sum(T(Node,:));
end
%RightBoundary
if((j==N)&&(i~=1)&&(i~=M))
T(Node,Node-1) = 2;
T(Node,Node+N) = 1;
T(Node,Node-N) = 1;
T(Node,Node) = -1.0*sum(T(Node,:));
end
end
end
spy(T)
There are a few points that are important to note. First, we have created a 5×5 grid in this example
but could easily change M and N. Second, the numbering of nodes is to start with 1 and move
across a row until reaching Node = 5. Then the numbering starts up again on the next row and
continues until Node = 10. Although there are nested loops (loop over rows and then columns),
we can, for any i and j , compute the node number (stored in Node). It then makes it much easier
to reference any other node to the current node. For example, the node to the left will be Node − 1,
the node to the right will be Node + 1, the node above would be Node + N and the node below
will be Node − N. Inside the loops are a series of if statements that handle interior, edge and corner
nodes. Although we will not go into the theoretical rationale, it is convention for the coefficient
to be “2” when a neighbor does not have a pair. What we mean by a pair is best illustrated by an
example. For nodes on the left edge, there is no connection between Node and Node − 1 (because
it is on the other side of the grid). In this case, we then make Node + 1 count twice. You may notice
that the first if statement handles all of the interior nodes, while all the other if statements handle
the edge and corner nodes. The last command (spy(T )) will allow you to visualize the entries. You
should note that a diffusion matrix is a special type of adjacency matrix.
The last point is that from the spy command, T is sparse and all entries are along 5 diago-
nal lines. For this reason, the above code could have been greatly condensed, i.e., optimized, by
removing the if commands and replacing them with the appropriate diag commands. The code
was created as above for illustration purposes. A further reduction in memory could be achieved by
using the sparse command, as introduced in Section 4.2.2.
9.3.2 EXCITABLE MEMBRANE PROPAGATION
The heart is often thought of as a mechanical pump. But the control and coordination of the pump
is largely achieved through electrical communication between neighboring cells. In fact, we can use a
diffusion matrix to explain how ionic currents are shared between cells. In this section, we will com-
bine the diffusion matrix above and the excitable FitzHugh-Nagumo model introduced in Chapter 7.
You should add the following lines to “FHNPropagate.m”.
dt = 0.01;
EndTime = 500;
time = 0:dt:EndTime;
a = 0.08;
b = 0.7;
c = 0.8;
I = zeros(M*N,1);      %Stimulus Current Vector
I(1) = 0.6;            %Stimulate only Node 1
V = zeros(M*N,1);      %Set up Vector to hold V
W = zeros(M*N,1);      %Set up Vector to hold W
VOld = zeros(M*N,1);   %Set up Vector to hold the previous V
WOld = zeros(M*N,1);   %Set up Vector to hold the previous W
fid = fopen('FHNProp.txt','w');
for i=2:length(time)
V = VOld + dt.*(VOld-(VOld.^3)/3-WOld+I - T*V);
W = WOld + dt.*(a.*(VOld+b-c.*WOld));
fprintf(fid,'%f\t',time(i));   %Print out time
for k = 1:M*N                  %loop over V values
fprintf(fid,'%f\t',V(k));
end
fprintf(fid,'\n');             %Go to next line
VOld = V;
WOld = W;
end
fclose(fid);
You should run the code above. After completion, you should see a file “FHNProp.txt” in your
directory. We will first discuss how this implementation of the FitzHugh-Nagumo model is
different from the one in Chapter 7. The major difference is that rather than simulate one
FitzHugh-Nagumo cell, we are simulating one at every grid point, e.g., N × M = 25. For this
reason, we have assigned each cell a number, corresponding to the Node number discussed above.
In numbering each cell, we can create a vector V and a vector W that hold the values at all 25 cells
at any one time. You will also note that we created two additional vectors, VOld and WOld,
which are temporary vectors that will be used in calculations. Within the time loop, we now
perform an update on the differential equations for V and W for all 25 nodes in one step. This
is an excellent example of how Matlab's matrix and vector operations can help make code easier
to write and read, and run more efficiently. You should note that the left side contains all V terms
and the right side contains all VOld terms. This is done so that we do not mix up previous
and current values. For example, we want to use the same VOld in the updating of V as we
use in the updating of W. You will also note that a new term, −T*V, has appeared in the update
for V. This is a vector that describes the current entering (or leaving) nodes from their neighbors.
You should note that the methods used above could be used to describe nearly any interacting
nodes, described by any adjacency matrix. In this way, we can decouple the dynamics at any
node from the way nodes are connected together in a network.
We are not saving the value of V in memory at every node at every time step. Rather, we
are generating a new value of V and then sending it to a file using the fprintf command. First,
before the time loop begins, we open a file using the fopen command. The input to the command is
the filename with the option “w”, meaning that this file is being opened for writing. Within the
loop, the fprintf command is used to send data to the file, and then after the loop is over, the file is
closed using the fclose command. The variable fid is called a file identifier. This file identifier can
become very important if more than one file is open at any one time, as it will make clear which
data is going to which file.
The fprintf command has a very particular format. The first input is the file identifier. The
second input is the format of the text that will be sent to the data file. The third input is the value
of any variables. You should view the help for fprintf for details. In the example above, before the
k-loop, the value of time is printed. The format is “%f\t”, meaning that a numerical double, %f,
will be printed, followed by a tab (\t). The third argument is what places the value of time into the
place of %f. The loop then moves through the V vector, printing a particular value of V followed by a
tab. After the loop, a return character, \n, is printed. Therefore, on each iteration through
the time loop, a new line is created in the open file that contains all of the values of V, separated by tabs.
The command fprintf is the most general and powerful way to print out data, but there are
other commands, e.g., sprintf, that can be useful and more efficient.
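As a small sketch of the most common format specifiers (printing to file identifier 1, which sends the output to the command window rather than to a file):
%Sketch: common fprintf formats (file identifier 1 prints to the screen)
t = 1.5;
V1 = 0.123456;
fprintf(1,'%f\t%f\n',t,V1);    %two doubles separated by a tab, then a new line
fprintf(1,'%8.3f\n',V1);       %fixed width with 3 decimal places
fprintf(1,'%e\n',V1);          %scientific notation
fprintf(1,'%d nodes\n',25);    %an integer inside a text string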
9.4 READING IN ARRAYS AND VECTORS
The load command introduced earlier can be used to read in saved matlab workspaces. It can also
be used to read in files that were created in text format.
Y = load('FHNProp.txt');
will load the entire data file “FHNProp.txt” into the matrix Y . We could then plot values against
one another. For example,
plot(Y(:,1),Y(:,2))
will plot the time vector, Y(:,1), against the first node, Y(:,2). The load command is very limited, and
if a more complex text file must be loaded in, e.g., a delimited text file, textscan is the command
to use.
9.4.1 IRREGULAR TEXT FILES
In some situations, a file does not have a regular layout. Enter the following into a text file, called
“IrregularText.txt”, using a text editor or Matlab’s editor.
FHN Parameters
0.01 500
0.08 0.7 0.8
0.6
To read in the text above, enter the following code into a script called “IrregularTextReader.m”
fid = fopen('IrregularText.txt','r');
FirstLine = fscanf(fid,'%s %s',2);
Temp = fscanf(fid,'%f %d',2);
dt = Temp(1);
TimeSteps = Temp(2);
Temp = fscanf(fid,'%f %f %f',3);
a = Temp(1);
b = Temp(2);
c = Temp(3);
I = fscanf(fid,'%f',1);
fclose(fid)
The fscanf command reads values from the file according to the format specified. You should view the help
file for more details.
9.5 READING AND WRITING MOVIES AND SOUNDS
Movies are simply a series of images played in sequence. In previous chapters, we have created a
simple type of movie using a loop and the pause command. But, matlab also has support for creating
standalone movies that can be run outside of matlab, e.g., in a presentation. To demonstrate the
ability to create movies, we will create a script that simulates John Conway’s Game of Life. The
game of life is played on a grid where each element in the grid has eight neighbors, the usual up,
down, left and right, but also the four diagonals. Each element can be on (1) or off (0) at any one
time. We begin with an initial condition (pattern of 1s and 0s on the grid). Then to get to the next
time step, we apply two simple rules to each element. If an element is on and either 2 or 3 of the
neighbors are also on, that element will be on the next step. Otherwise, the node is turned off. If an
element is off but exactly 3 of its neighbors are on, it will be on the next step. Otherwise, the node
remains off on the next step.
Conway’s rules are called the game of life because we can think of the rules as correspond-
ing to some real situations. If a node is “alive”, it could die by two different mechanisms. First, it
could die of “loneliness” if not enough of its neighbors are on. Second, it could die if it is crowded
out. It will only survive if 2 or 3 of its neighbors are on (not lonely but not overcrowded either). On
the other hand, 3 neighbors can “reproduce” and give rise to a new alive element. Below is code
which will play the Game of Life on a 50×50 grid for 100 iterations. The initial condition is set
up such that activity will not simply die out. There are many other patterns that could be created,
and you could experiment with different initial conditions as an exercise. Below is a script called
“GameOfLife.m”.
% GameOfLife.m - Conway’s Game of Life
%Set up Parameters of Simulation
Dim=50;
T = 100;
Delay = 0.1;
%Initial Conditions
GRID = zeros(Dim,Dim);
GRID(10,10:11)=1;
GRID(11,11)=1;
GRID(11,15:17)=1;
GRID(9,16)=1;
%the world is round
up=[2:Dim 1];        %Establish pattern 1 counting up
down=[Dim 1:Dim-1];  %Establish pattern 2 counting down

for i=1:T
    neighbours=GRID(up,:)+GRID(down,:)+GRID(:,up)+GRID(:,down)+...
        GRID(up,up)+GRID(up,down)+GRID(down,up)+GRID(down,down);
    GRID = (neighbours==3) | (GRID & neighbours==2);
    imagesc(GRID);
    M(i,:) = getframe;   %M(i) = getframe; may be needed for some Matlab versions
    pause(Delay);
end
The purpose of the script above is to demonstrate how to capture the dynamics in a movie. You will
notice that after the imagesc command displays GRID there is a command M(i,:) = getframe. The
getframe command will store the current figure in a special movie frame structure. If you run the
script and type whos, you will notice that a very large structure, M, has been created. To play the
movie in matlab
>> movie(M)
You can view the movie command to see other options for how you can play back the movie you
have created. The movie command, however, will only work within matlab. To export the movie so
it can be played outside of matlab
>> movie2avi(M,'CATestMovie.avi');
which will create an “avi” file that can be played with many external movie players, including inside a
power point presentation. Depending upon how matlab was set up, you may get a warning when you
issue this command. You should still see the file “CATestMovie.avi” in your directory. Try to play it
using RealPlayer or Windows Media Player. You should view the help for the movie2avi command
to learn how to change options such as frame rate, compression and quality. There is also an “mpeg”
writer that can be downloaded from the Mathworks website.
9.5.1 SOUNDS
Matlab also has support for recording sounds, e.g., wavwrite and wavrecord, and playing them, e.g.,
sound, wavplay, and wavread. To test your system, try entering
>> load handel
>> sound(y,Fs);
To learn more about the support for movies and other audio/visual commands, view the help for
audiovideo.
9.5.2 READING IN IMAGES
You have already learned how to write images out using the print command. Matlab also has
commands for reading in images. First, we will create a “jpeg” image from one of Matlab’s built in
images.
>> load cape
>> imagesc(X)
>> colormap(map)
>> print('-djpeg','CapeCod.jpeg');
>> close all
You should note that the command close all will close all open figures. To load in the jpeg you created
>> A = imread('CapeCod.jpeg','jpeg');
>> imagesc(A)
The imread command can be used to load many different types of image files into a matrix. Once
in Matlab’s memory, the image can be displayed in the same way as any other matrix.
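One point worth noting (a minimal sketch, assuming the “CapeCod.jpeg” file created above): imread typically returns 8-bit integer (uint8) data, so it is often convenient to convert the image to double before doing arithmetic on it.
%Sketch: image data usually comes in as uint8
A = imread('CapeCod.jpeg','jpeg');
whos A                  %note the uint8 class
B = double(A);          %convert so ordinary numerical operations apply
imagesc(mean(B,3));     %e.g., average the color channels to make a gray image
colormap(gray)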
9.6 BINARY FILES
It is very simple to write out data to a text file using fprintf or other such commands. The problem is
that the size of the text file can become very large because when a number, for example “0.0012345”,
is sent to the file, it is typically stored as a character string. Then when it is read back in, it is
converted back to a numerical value. The problem should become apparent if you issue the following
commands
>> a = ' 0.00123456789';   %character string
>> b = 0.00123456789;      %numerical value
>> whos
Storing the number as a character string requires 26 bytes, whereas a floating point number requires only 8.
So, there can be a very large savings in hard drive space if numbers are stored as numbers. This format is
typically called binary because the numbers are stored in machine language (1s and 0s).
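A quick way to see the difference is to write the same vector both ways and compare the file sizes (a minimal sketch; the file names are illustrative):
%Sketch: comparing text and binary storage of the same data
x = rand(10000,1);
fid = fopen('TextVersion.txt','w');
fprintf(fid,'%f\n',x);
fclose(fid);
fidb = fopen('BinaryVersion.bin','wb');
fwrite(fidb,x,'double');
fclose(fidb);
dir('TextVersion.txt')     %lists the file with its size in bytes
dir('BinaryVersion.bin')   %should be 8 bytes per number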
9.6.1 WRITING BINARY FILES
To read and write in binary requires a few different commands in matlab, for example fwrite.
Copy “FHNPropagate.m” to “FHNPropagateBinaryOutput.m”. Then change the line fid =
fopen('FHNProp.txt','w'); to
fidb = fopen('FHNProp.bin','wb');
The option “wb” specifies that the file is writable and binary. Next replace
fprintf(fid,'%f\t',time(i));   %Print out time
for k = 1:M*N                  %loop over V values
fprintf(fid,'%f\t',V(k));
end
fprintf(fid,'\n');             %Go to next line
with
fwrite(fidb,V,'double');
Lastly, change fclose(fid); to
fclose(fidb);
If you run “FHNPropagateBinaryOutput.m”, a file “FHNProp.bin” will be created. In addition to
the file being somewhat smaller, you will note that there is an added benefit to writing out the data
in binary - you no longer need the k loop. Rather, you can simply have matlab send the entire vector
V to the file.
9.6.2 READING BINARY FILES
load and other commands will read in files saved in text format. Matlab also has commands to read
in binary files. We will not expand upon these commands here, but you can analyze the following
commands that read in a particular time step of “FHNProp.bin”.
TimeStep = 200;
fid = fopen('FHNProp.bin','rb');
%Move from the beginning of the file to the timestep
fseek(fid,TimeStep*8*M*N,-1);
%Read in one time step worth of data in double format
Data = fread(fid,M*N,'double');
%Reshape Data to be an MxN matrix
Data = reshape(Data,M,N);
%display the image at timestep 200
imagesc(Data)
fclose(fid)
Of course, it is possible to read in the entire data file by placing the commands above in a loop that
iterates through TimeStep.
9.6.3 HEADERS
Nearly all files will be stored in one of two ways - text or binary. The key is to understand the
format so that you can write a script to read the data. The format should be contained within the
documentation for the program. A good file format will contain what is known as a header. The
header holds important information that is helpful when reading in the data. For example, in our
previous example of storing binary data, it would be helpful to know the dimensions M and N, the
number of timesteps and the type of data. It would therefore have been helpful to first print out a
line of text at the top of the binary data file.
1000 500 200 double
BINARY DATA
The above file header would let a user know that the binary data that follows is 200 blocks of
1000×500 values stored in double format. In reading or writing such a header, you will mix Matlab’s text and
binary reading capabilities, as in the sketch below.
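A minimal sketch of this idea follows (the file name and sizes are illustrative; the header is written as one line of text, and the body with fwrite):
%Sketch: writing a one-line text header followed by binary data
M = 5; N = 5; T = 200;
Data = rand(M*N,T);
fid = fopen('WithHeader.bin','wb');
fprintf(fid,'%d %d %d double\n',M,N,T);   %text header
fwrite(fid,Data,'double');                %binary body
fclose(fid);

%Reading it back: parse the header, then read the binary block
fid = fopen('WithHeader.bin','rb');
Header = fgetl(fid);                      %reads the text up to the newline
Sizes = sscanf(Header,'%d %d %d');
Data2 = fread(fid,Sizes(1)*Sizes(2)*Sizes(3),'double');
Data2 = reshape(Data2,Sizes(1)*Sizes(2),Sizes(3));
fclose(fid);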
9.7 EXERCISES
1. Turn in “FirstExcelExample.xls”, “FHNPropagate.m”, “IrregularText.txt”, “IrregularTex-
tReader.m”, “GameOfLife.m”, “CATestMovie.avi”, “FHNPropagateBinaryOutput.m”.
2. Create a function that will compute the Fibonacci series starting with any two numbers.
Remember that the series is defined by
F_n = F_{n−1} + F_{n−2}    (9.9)
Your function should be of the form
function [Series]=FSeries(Num1,Num2,NumIterations);
3. Create a script,“WriteOutFibonacci.m” that generates a file with the following format
Fibonacci Series Starting with Num1 and ending with Num2
NumIterations
BINARY DATA HERE
Num1, Num2 and NumIterations should be numerical values. Note that to create the text
strings you may need to use the num2str command learned in Section 3.4. Note that in your
script you must pick values for Num1, Num2 and NumIterations, then call the FSeries
function to generate the vector Series. Then you can generate the header and write Series
as binary data. Note that you should have Num1 and Num2 be some value other than 1, and
NumIterations should be at least 100.
4. Create a function, “ReadInFibonacci” that will read in any file generated by “WriteOutFi-
bonacci.m”. The function should be of the form
function [Series]=ReadInFibonacci(filename);
CHAPTER 10
Graphics
10.1 INTRODUCTION
Built-in graphics is one of the key features of Matlab. In previous chapters, the plot and imagesc
commands were introduced as ways of graphically displaying data. In this chapter, we will introduce
more graphical options as well as explain some of the tools in Matlab for fine tuning graphics.
10.2 DISPLAYING 2D DATA
Two dimensional data is very often a simple line plot of an independent variable versus a dependent
variable. For example, Schnakenberg (1979) gives a set of differential equations that describes an
oscillating chemical reaction
dx/dt = x^2 y − x    (10.1)
dy/dt = a − x^2 y    (10.2)
where x and y are the chemicals and a is a constant parameter. The two equations above are non-
linear, and so we could use the Euler Method to solve them. An alternative is to use a graphical
non-linear dynamics technique where we plot all of the values for x and y that cause dx/dt = 0. In
other words, we wish to plot the function
0 = x^2 y − x    (10.3)
on an x-y axis for the first equation. We can do the same for the second differential equation. After
a bit of algebra, we find that we must plot the following two functions
y = 1/x    (10.4)
y = a/x^2    (10.5)
In non-linear dynamics, these two curves are called nullclines. y = 1/x is the nullcline of x, e.g., the
set of points where dx/dt = 0, and y = a/x^2 is the nullcline of y, e.g., the set of points where
dy/dt = 0. To make these plots, we can issue the following commands
a = 1;
x = -0.5:0.01:0.5;     %define a range for x
yxnull = 1./x;         %create x nullcline
yynull = a./(x.^2);    %create y nullcline
plot(x,yxnull,'r');    %plot x null in red
hold on
plot(x,yynull,'g');    %plot y null in green
You should first notice that the colors of the plots can be changed using an option to the plot
command (view the help for plot for more color options). There are a few issues with the plot that
is generated. First, there is a problem with the scales of the plots because some parts go to infinity.
To rescale the axes, the following commands can be used
>> axis([-0.5 0.25 -10 100])
where axis takes a vector of the form [xmin, xmax, ymin, ymax]. You can therefore zoom in
to any portion of your data to take a closer look.
There are times when you must distinguish your plot in some way other than to use colors.
The plot command has a number of options for that as well.
a = 1;
x = -0.5:0.01:0.5;
yxnull = 1./x;
yynull = a./(x.^2);
plot(x,yxnull,'k');      %Make solid line
hold on
plot(x,yynull,'k-.');    %Make dash-dot line
axis([-0.5 0.5 -10 100])
Now both plots are in black, but the x nullcline is a solid line, and the y nullcline is a dash-dot line.
You should view the help for plot to see the other options.
It is also important to add appropriate axis labels and a title for your graph by issuing the
following commands
xlabel('x variable');
ylabel('y variable');
title('X and Y Nullclines for Schnakenberg (1979) Model');
There are times when it is useful to add grid lines
grid on
We will discuss in a future section how to have more control over the scale of the grid. To turn off a
grid you can simply execute
grid off
You can do the same with the upper and right bounding boxes on the plots using the command box.
By default Matlab has the box on, so
box off
will create a plot with the usual x-y axes. You may also want to adjust the aspect ratio of your plot,
e.g., relative lengths of the x-y axes. For example,
axis square
will create equal sizes for the x and y axes. See the axis command for more options.
10.2.1 FIGURE NUMBERS AND SAVING FIGURES
You may have noticed that if you simply issue a plot command, Matlab will automatically start with
“Figure 1”. If another plot command is issued, it will simply write over Figure 1. To start a new figure
>> figure(2)
A second problem is that we may want to create a figure but then add to it later. To illustrate how
this can be accomplished, we will examine the Hindmarsh-Rose model for cortical neurons.
dx/dt = −ax^3 + bx^2 + y − z + I_0    (10.6)
dy/dt = −dx^2 − y + c    (10.7)
dz/dt = rsx − rz − rsx_1    (10.8)
where the values of a, b, c, d, r, s and x1 are constants and I0 is an externally applied current. We
will assume that r is small and so z adapts slowly enough that we can treat it as a constant, e.g.,
z = 0. Furthermore, we can assume there is no external stimulus, e.g., I0 = 0. Therefore, our set of
equations becomes
dx/dt = −ax^3 + bx^2 + y    (10.9)
dy/dt = −dx^2 − y + c    (10.10)
and we can then plot the x and y nullclines using the following equations
0 = −ax^3 + bx^2 + y    (10.11)
0 = −dx^2 − y + c    (10.12)
and therefore we must plot the functions
y = ax^3 − bx^2    (10.13)
y = −dx^2 + c    (10.14)
So that we do not conflict with what has already been plotted in Figure 1, type the following into a script
called “HMRModel.m”.
figure(2)
which will open up a new (and blank) figure. Then type the following commands into the script
a = 0.3;
b = 2.5;
c = -1.;
d = 1.0;
x = -5:0.01:10;                 %define a range for x
yxnull = a.*x.^3 - b.*x.^2;     %create x nullcline
yynull = -d.*x.^2 + c;          %create y nullcline
plot(x,yxnull,'k');
hold on
plot(x,yynull,'k-.');
xlabel('x variable')
ylabel('y variable');
title('X and Y Nullclines for Hindmarsh-Rose Model')
axis([-5 10 -70 10])
You should note that, unlike the previous model, in the Hindmarsh-Rose model there are three
points where the nullclines intersect. The meaning of these intersections is that both dx/dt = 0 and
dy/dt = 0, and therefore the x-y values for these intersections are equilibrium points. We do not
know from the plot if they are stable or unstable, but from the plot, we do know that they exist even
without solving the equations numerically.
To finish off our plot, it would be good to add a legend. Add the following command to
the end of your script.
legend('x nullcline','y nullcline');
The last step is to save your plot in a file on your hard drive. In previous sections, you created a “jpeg”
file using the print command. You could easily change the options to create other file formats. But,
Matlab has its own special file format (“.fig”) for images.
saveas(gcf,'HMR.fig');
The saveas command will save the current figure (specified by gcf, meaning “get current figure”) into
a file “HMR.fig”. You should then close your figure. But you have saved all of the information in
“HMR.fig”. To bring the figure back into the Matlab environment
open 'HMR.fig'
will reopen the figure. You will see in later sections that you can continue to modify the figure. In
this way, you can save work on any graphic and then reopen it at a later time.
10.2.2 VELOCITY MAPS
Non-linear differential equations, such as the Hindmarsh-Rose model, may be solved numerically
using integration methods such as the Euler Method. An alternative was provided by plotting the
nullclines in state space. But there is another way to gain a more intuitive understanding of the
dynamics of differential equations.
a = 0.3;
b = 2.5;
c = -1.;
d = 1.0;
[X,Y] = meshgrid(-5:10,-70:10:10);   %Make a grid over x and y
xvec = -a.*X.^3 + b.*X.^2 + Y;       %create x vector
yvec = -d.*X.^2 - Y + c;             %create y vector
quiver(X,Y,xvec,yvec,0.3);           %Plot Velocity Vector Field
The command quiver generates an arrow at each point in the grid specified by X and Y that has a
length of xvec in the x direction and yvec in the y direction. The extra value of 0.3 simply scales
the vectors. The command meshgrid simply creates the X and Y matrices.
The meaning of a velocity map is that if we start the system, i.e., initial condition, at any
x-y point, the system will follow the velocity vectors. In fact, the length of the vectors even gives
us some indication of how fast the system will move from one point to another. For this reason,
velocity maps are sometimes called flow maps - we can think of an initial condition flowing around
the state space.
10.2.3 LOG AND SEMI-LOG PLOTS
In 1935, George Zipf, a linguistics professor at Harvard, made an incredible claim. If you analyze a
large volume written in English and keep track of the words used, a pattern emerges. We can rank
the word used most often as 1, second as 2, third as 3 and so on. We can then count the number of
times each word is used. If we plot the word rank, x, against the number of times used, f (x), the
data surprisingly fits
f(x) = a x^{−k}    (10.18)
where a and k are constants that can be determined from experimental data. What is more, Zipf
and others found that it was not only English that followed this pattern, but nearly all other
languages. US City populations were found to follow the same equation. So was the distribution
of wealth in many different economies. The size of rivers, newspaper distributions, and even
computer memory were found to follow the same trend. Another name for Zipf ’s Law is a Power
Law. It was only a matter of time before biologists began to test if Zipf ’s Law applied to living systems.
Let us assume that we would like to plot a function that represents the ranking of the size
of blood vessels (cm) in the brain
a = 2;
k = 1.4;
x = logspace(1,3,100);
fx = a.*x.^(-k);
figure(1)
plot(x,fx)
figure(2)
loglog(x,fx)
The command logspace generates 100 points between 10^1 and 10^3. The command loglog plots log(fx)
against log(x). You can verify with a bit of analysis that the log-log plot should be a straight line, as shown below.
Matlab also has support for semi-log plots in the commands semilogx and semilogy.
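The straight-line behavior follows directly from taking the logarithm of both sides of Zipf's relation:

\log f(x) = \log\left(a x^{-k}\right) = \log a - k \log x

so on log-log axes the data fall on a line with slope −k and intercept log a.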
10.2.4 IMAGES
In an earlier section, the commands imagesc and colorbar were introduced. There are some additional
commands that can allow for more flexibility.
minc = -2;   %minimum value
maxc = 5;    %maximum value
%random numbers between minc and maxc
X = minc + (maxc-minc).*rand(100,100);
X(50,50) = 100; %make one value very large
imagesc(X)
colorbar
The problem with these commands is that all of the values are between -2 and 5, except for the one
value in the center. When imagesc is used, it automatically scales all of the values. To fix this problem,
add the following command to the end of the script above
caxis([-2 5])
The last command, caxis, will rescale the color map from -2 to 5. The value of 100 will be treated as
a value of 5, i.e., it will appear red.
In our examples so far, we have used the default colormap. You should view the help for col-
ormap and then issue the commands below to gain some idea of the types of colormaps that are
available.
load flujet
imagesc(X)
colormap(winter)
and
load spine
image(X)
colormap bone
10.2.5 OTHER 2D PLOTS
Matlab has support for a wide range of other two-dimensional plots. Below we explore only a few.
Enter the commands below to view each type of graphic. You can always view the help to learn more
about any of the functions introduced below.
>> pie([2 4 9 10],{'Mutation 1','Mutation 2','Mutation 3','Mutation 4'});
>> pie3([2 4 9 10],{'Mutation 1','Mutation 2','Mutation 3','Mutation 4'});
>> load carsmall          %Loads in premade data on cars
>> boxplot(MPG, Origin)   % Note the error bars
Sometimes we also wish to display discrete data, i.e., points are not connected together with lines.
>> x = 0:0.01:1;                             %Generate x vector
>> y = 5*sin(2*pi.*x) + rand(1,length(x));   %Generate sinusoid with random noise
>> stairs(y)
>> stem(y)
At other times, you may wish to create a histogram that shows the frequencies of different types
of events. Below is code to generate a random 1000 x 1000 adjacency matrix where each node is
connected to 10% of the network.
>> A = rand(1000,1000);
>> A = A<0.1;
>> NumConnections = sum(A,2);
>> hist(NumConnections,20)
The histogram shows that most nodes are connected to about 10% of the network (100 other nodes), but
because the connections are random, there is a distribution.
Lastly, Matlab will allow you to create bar charts that graphically compare many different
types of data.
B = rand(10,5);
bar(B)
xlabel('Gene Mutation')
ylabel('Frequency')
legend('American','Canadian','Mexican','English','German')
The chart above may have been a comparison of the frequency of 10 different gene mutations in 5
different populations.
10.2.6 SUBPLOTS
There are occasions when it is convenient to display several plots side-by-side. In these instances,
you will want to use the subplot command.
%Generate x vector
x = 0:0.01:1;
%Generate 4 noisy sinusoids
y1 = 5*sin(2*pi.*x) + rand(1,length(x));
y2 = 5*sin(2*2*pi.*x) + rand(1,length(x));
y3 = 5*sin(3*2*pi.*x) + rand(1,length(x));
y4 = 5*sin(4*2*pi.*x) + rand(1,length(x));
figure(1)
subplot(2,2,1)
plot(x,y1)
subplot(2,2,2)
plot(x,y2)
subplot(2,2,3)
plot(x,y3)
subplot(2,2,4)
plot(x,y4)
The help for the subplot command has more information about how to create more complex multi-
panel figures.
10.3 FIGURE HANDLES
In previous sections, we generated a number of plots and learned to alter some aspects of the figure.
To gain more flexibility over the appearance of figures, Matlab allows the user to output a figure
handle. First, we must clear everything in memory and close all figures.
clear all    %clears everything in matlab's memory
close all    %closes all open figures
Next, we will create a figure that contains a sinusoid
x = 0:0.001:1;
h = plot(x,sin(2*pi*x));
where h is the figure handle.
get(h)
The command get will display all of the options in the figure handle. To get a particular option, you
can issue
get(h,'LineStyle')
We will not discuss all of the options, but there are some that are useful to know how to change. To
change an option, you can use the set command
set(h,'LineWidth',3);
which will change the line width from the default of 0.5 to 3 (make the line thicker).
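A few other commonly changed line properties are sketched below (Color, LineStyle and Marker are standard line properties; the specific values chosen here are only examples):
%Sketch: a few other line properties that can be changed through the handle
set(h,'Color',[0 0.5 0]);   %an RGB triple (dark green)
set(h,'LineStyle','--');    %dashed line
set(h,'Marker','o');        %circle marker at each data point
get(h,'Color')              %confirm the change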
10.3.1 THE HIERARCHY OF FIGURE HANDLES
In the above section, we only showed some of the basics of getting and setting figure options. But the
figure you created contains much more information. The highest level handle is gcf which is equal
to 1.
gcf
get(gcf)
is a command that will show all of the main figure options. But in these figure options are Children,
other handles to various parts of the figure. For example, if we want the handle to the axis
AxisHandle = get(gcf,'Children')
The axis is given a special handle in Matlab called gca. If you type
gca
on the command line if should give you the same answer. The axis handle has its own Children that
contain the plot you created.
PlotHandle = get(AxisHandle,'Children')
h
get(gca,'Children')
and you can verify that all three commands should give the same figure handle.
This may all be a bit confusing, but it allows the user to keep track of the various parts of
complex plots, e.g., subplots. For example, we can now set the line width of our plot with
set(h,'LineWidth',3);
Alternatively, we could have issued
set(PlotHandle,'LineWidth',3);
Now we can add a second plot
hold on
g = plot(x,sin(1.5*2*pi*x),'r');
We have explicitly created a figure handle (g) to make it easier to change this plot. But try the
following command
l = get(gca,'Children');
You should notice that now there are two figure handles, and we could treat each differently. In fact,
l(1) is our most recently created figure handle and l(2) is our previous handle. So, if we wanted to
view the options for our recently created figure, we could type
get(g)
or
get(l(1))
Now let’s suppose we wish to change the line thickness and the line type from solid to dash-dot.
set(l(1),'LineWidth',3)
set(l(1),'LineStyle','-.');
Figure handles can become very complex, but you should remember that they are nested in hierarchies
with gcf at the top and gca one step down.
10.3.2 GENERATING PUBLICATION QUALITY FIGURES
Given the flexibility in how aspects of figures can be changed, it should not come as a surprise
that many engineers and scientists create images for their journals and other technical writings in
Matlab. Below is a template that demonstrates how to generate a publication quality figure.
Brain waves are often thought of as being composed of multiple frequency bands. For ex-
ample, Delta waves range from 0-4Hz, Theta waves range from 4-7Hz, Alpha waves range from
8-12Hz, Beta waves range from 12-30Hz and Gamma waves are above 30Hz. The Electroen-
cephalogram (EEG) can be scored by a clinical neurologist based upon the strength, i.e., amplitude,
of the frequencies present. In fact, they can get an accurate picture of the state of the patient (awake,
sleeping, dreaming, thinking), just by looking at the EEG. Below is a section of code that will create
a theoretical EEG. Following the creation of the code are a series of commands that change shapes,
sizes, colors and even the aspect ratio of the figure. The last line creates an encapsulated postscript
file with high resolution (600 dots per inch).
x = 0:0.001:2;
%Create EEG signal from various sinusoids
%Amplitudes reflect signal contribution
EEG = 1*sin(2*2*pi*x);           %2Hz Delta
EEG = EEG + 1*sin(6*2*pi*x);     %6Hz Theta
EEG = EEG + 1*sin(10*2*pi*x);    %10Hz Alpha
EEG = EEG + 4*sin(20*2*pi*x);    %20Hz Beta
EEG = EEG + 2*sin(50*2*pi*x);    %50Hz Gamma
h = plot(x,EEG,'k');
xlabel('Time (s)','FontSize',20)
ylabel('EEG (microV)','FontSize',20,'Rotation',90)
axis([0 2 -8 8])
set(gca,'Box','off')
set(gca,'TickDir','Out')
set(gca,'XTick',[0:0.25:2])
set(gca,'YTick',[-8:4:8])
set(gca,'FontSize',20)
set(gca,'LineWidth',2)
set(gcf,'Color','white')
set(gcf,'Position',[400 400 800 400])
set(gcf,'PaperPosition',[4 4 8 4])
print('-depsc','FrequencyAnalysis.eps','-r600');
You should also note that Matlab allows for inline text, arrows and other simple annotations to be added
to the plot. These annotations, however, are often better added in PowerPoint or another graphics
package.
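If you do want to add a label or an arrow directly in Matlab, a minimal sketch (assuming the EEG figure above is still the current figure; the coordinates and label text below are arbitrary choices for illustration) is
text(1.0,6.5,'Beta burst','FontSize',14)        %label placed at (x,y) in data units
annotation('arrow',[0.55 0.60],[0.80 0.70])     %arrow specified in normalized figure units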
10.4 DISPLAYING 3D DATA
There are occasions where it is helpful to visualize data in three dimensions. Matlab has a number
of commands designed for 3D plots. Below are some lines which demonstrate some of Matlab’s
capabilities
figure(1)
t = 0:pi/50:10*pi;
plot3(sin(t),cos(t),t)
xlabel('sin(t)')
ylabel('cos(t)')
zlabel('t')
You can actively move around these data by clicking the rotation tool on the toolbar (next to the
hand tool). Try holding down the left mouse button and then dragging. This should rotate the figure.
figure(2)
[X,Y,Z] = peaks(30);
surfc(X,Y,Z)
figure(3)
contour(Z)
figure(4)
mesh(X,Y,Z)
You should view the help for these functions to learn about additional 3D plotting functions. You may
also wish to explore the help for view, a command that allows the user to set the three-dimensional
view point.
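For example, a minimal sketch using the peaks data already created (the azimuth and elevation values are arbitrary) is
figure(5)
surf(X,Y,Z)
view(45,30)       %azimuth of 45 degrees, elevation of 30 degrees
view(2)           %standard top-down (2D) view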
10.5 EXERCISES
1. Turn in “HMRModel.m”, “HMR.fig”, “FrequencyAnalysis.eps”.
2. A common device used to grow cells and bacteria is a chemostat. The idea is that as cells
metabolize they require a constant supply of nutrients but also need some way to eliminate
waste products. A Chemostat is a chamber that has a constant influx of nutrients, while at
the same time having a constant efflux of solution in the chamber. In such a situation, we can
imagine keeping track of both the cell population (N) and the concentration of the nutrient
(P ). Below are two differential equations that describe this situation.
\frac{dN}{dt} = N\left(\frac{K_{max}C}{K_n + C}\right) - \frac{FN}{V}    (10.19)

\frac{dC}{dt} = \frac{F}{V}(C_o - C) - \alpha N\left(\frac{K_{max}C}{K_n + C}\right)    (10.20)
where Co is the concentration of the supply, V is the volume of the growth chamber, F is the
input and output flow rate, α is a yield constant, Kmax is the maximum growth rate of cells
and Kn is the nutrient concentration at which the growth rate is half of its maximum.
The two equations above can be rearranged using some algebra to find equations for
the N and C nullclines.
C = \frac{(F/V)K_n}{K_{max} - F/V}    (10.21)

N = \frac{F(C_o - C)(K_n + C)}{V\alpha K_{max}}    (10.22)
Therefore, the C nullcline is a quadratic and the N nullcline is a simple line of constant C. Create a script that
will plot both nullclines in a N-C phase space. The following code will create the nullclines
for specific values of the constants
alpha = 3.0;     %Yield Constant
F = 1.0;         %in and out flow of Chemostat
V = 1.0;         %Chemostat Volume
Kmax = 6;        %Maximum Growth Rate
Kn = 50.0;       %Half Max of Growth Rate
Co = 200.0;      %Concentration of Supply
N = 0:0.1:1000;
C = 0:0.1:200;
Cvec = (F/V).*Kn./(Kmax-(F/V));               %For N Nullcline
Nvec = F.*(Co-C).*(Kn+C)./(V*alpha*Kmax);     %For C Nullcline
figure(1)
plot(N,Cvec*ones(size(N)))     %N nullcline: constant C plotted for every N
hold on
plot(Nvec,C)                   %C nullcline
Create an image called “ChemostatNullclines.jpeg” that is of publication quality. You should
use your best judgment and what you have learned in this chapter as a guide.
CHAPTER 11
Toolboxes
11.1 INTRODUCTION
As mentioned in chapter 1, Matlab is a number of environments all rolled into one. As originally
envisioned it is a programming environment, scripting language and a graphics package. Since the
early days of Matlab, however, many outside of Mathworks have contributed. Often these new
contributors are a group of researchers or industrial scientists writing a series of Matlab scripts (“.m”
files) that perform related functions.These suites of functions, after an evaluation process, sometimes
become part of Matlab. Matlab calls these packages toolboxes and allows users to purchase them as
products separate from the standard Matlab distribution. To check the packages in your version of
Matlab
>> ver
will display the current version of Matlab and any installed toolboxes. There are hundreds of
Matlab toolboxes that can be purchased. Some researchers also offer their own Matlab toolboxes
free of charge, although with no guarantees of proper functionality or optimization from Mathworks.
There are three ways to get help for toolboxes. First, Matlab has a website at www.mathworks.com
for each toolbox under the Support tab. The website gives a brief overview of the functionality of
the toolbox along with any new versions. Second, Matlab maintains a user’s guide that can be either
downloaded as a pdf or viewed online. It is here that all of the functions (“.m” files) will be listed
along with how to call them. This feature is nice because it allows a user to read a manual before
purchasing the toolbox. You can also find examples of how to use the toolbox and demonstrations.
Once a toolbox is installed, the third option is to view the built-in help.
>> help
will display the high-level help that is available in Matlab. Of these functions, some are the help for
the toolboxes listed in ver. For example,
>> help symbolic
will display the help for the Symbolic Math Toolbox.
In this chapter, we only cover a few of the toolboxes which are of most interest to Biomed-
ical Engineers and are readily available at most institutions.
11.2 STATISTICAL ANALYSIS AND CURVE FITTING
Much of this text has focused on the simulation of mathematical models. But much biological and
biomedical research exists in the experimental realm. Here Matlab can be useful in the analysis of
data, specifically to perform statistical analysis and best fits. The Matlab statistics toolbox contains
functions for descriptive statistics for vectors and matrices of data, e.g., skewness for Skew, cov for
covariance, as well as sophisticated random number and probability distribution generators. It also
contains linear (e.g., anova1 and anova2 for one- and two-way analysis of variance, lscov for least-
squares) and non-linear (e.g., nlinfit) data fitting. There are even commands to help design experiments
and specialized graphic utilities. See
>> help stats
for more information.
11.2.1 DATA FITS TO NONLINEAR FUNCTION
It is often the case that an engineer or scientist will collect a series of discrete data points and
then need to move from the data to an analytical model, e.g., mathematical function. Typically, an
experimentalist will collect data as a series of points as a function of some independent variable
that can either be controlled or observed, e.g., time, space, concentration, current. For example, the
impedance of a biological material is the resistance (R) to current (I ) as a function of the frequency
(ω) of a sinusoidal forcing function. The experiment would send in an alternating current with a
particular frequency ω and then record the amplitude of the resulting voltage (also a sinewave). The
experimenter would then record the peak-to-peak amplitude of the voltage sinewave as a function
of the frequency.
>> omega = [0:10:100];                                  %frequency in Hz
>> SineWaveAmplitude = [2 5 10 37 59 41 12 4 3 1 0.1];
>> plot(omega,SineWaveAmplitude,'*');
>> hold on                                              %You will add to this plot later
Rearranging Ohm’s Law (V = IR) to R = V/I and assuming I is held constant, the measurement
of V is proportional to the impedance, R. From the data, it is then clear that the impedance is
highest around 40Hz, so this material passes current best at low and high frequencies.
In many biological applications, to begin creating a model, we need to fit a function to re-
sults of an experiment. In the experiment above, we may guess that the data is best fit to a
bell-shaped curve, known as a Gaussian Function.
G = \frac{A}{\sqrt{2\pi\sigma^2}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}}    (11.1)
What we need to estimate are the parameters μ (mean), σ (standard deviation) and A (area under
the curve). To perform this fit we will use the nonlinear least-squares fit, nlinfit, function in Matlab.
The function has the following command line call
Beta = nlinfit(x,y,Model,beta0);
where x and y are the independent and dependent variables of the real data. beta0 is the initial
guess for the parameters and Model is the name of a function that contains the guessed function.
You should note that the Model function and parameters, beta0, must match. The function returns
the best fit, Beta, for the parameters.
For initial guesses, we can assume the mean is at 40. We can also guess that our standard
deviation is approximately 15. Lastly, we can estimate the area as 1000.
>> beta0 = [40 15 1000];
The last step is creating the function to pass to nlinfit. Open up a script called “Gauss.m” and enter
the following
function [G] = Gauss(beta,x);
mu = beta(1);           %define mean
variance = beta(2)^2;   %define standard deviation (squared to give the variance)
area = beta(3);         %define area
G = (area/sqrt(2*pi*variance)).*exp(-((x-mu).^2)./(2*variance));
To test this function
>> x = 0:1:100;
>> G = Gauss(beta0,x);
>> plot(x,G,'r')
We are now ready to tune the parameters
>> Beta = nlinfit(omega,SineWaveAmplitude,@Gauss,beta0);
>> Beta
You should note that the @ symbol is used to reference a function. After many iterations, you will
have a best fit to the Gaussian parameters contained within Beta. You can then check the fit
>> NewG = Gauss(Beta,x);
>> plot(x,NewG,'g');
You should view the help for nlinfit to see how to measure the quality of the fit, as well as how to
bound certain parameters such as the maximum number of iterations to take in searching for a best fit.
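As one possible sketch (the extra outputs of nlinfit and the MaxIter option of statset are standard, but you should confirm the exact syntax in the help for your version), the residuals can be requested and the iteration limit raised with
>> opts = statset('MaxIter',500);                                    %raise the iteration limit
>> [Beta,Resid] = nlinfit(omega,SineWaveAmplitude,@Gauss,beta0,opts);
>> SSE = sum(Resid.^2)                                               %sum of squared residuals as a crude measure of fit quality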
Some other common functions to fit are linear and exponentially increasing or decreasing.
There are two other functions that deserve mention in the context of biomedical engineering
because they appear often. The first is the monotonically increasing or decreasing sigmoid.
S = \frac{1}{1 + e^{ax}}    (11.2)
where x is the independent variable. The sign of a (+ or -) will determine whether the function
increases or decreases and the magnitude of a will determine the rate of increase or decrease. You
may wish to generate a few lines of Matlab code to plot the sigmoid function - it should be “S” shaped.
A more general form of the Sigmoid function is the Boltzmann distribution
B = \frac{1}{1 + e^{(x-x_o)/k}}    (11.3)
where k is the inverse of a, and therefore controls the slope and increasing or decreasing trend. You
may have noticed that the sigmoid was centered around zero. The term xo is an offset and controls
what is known as the half-max crossing point and will therefore translate the sigmoid. Note that
both functions range from 0 to 1, making scaling to fit experimental data a simple task.
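A minimal sketch that plots both functions (the values of a, xo and k below are arbitrary choices for illustration) is
>> x = -10:0.1:10;
>> S = 1./(1+exp(1*x));              %sigmoid with a = 1, so it decreases with x
>> B = 1./(1+exp((x-2)/-1.5));       %Boltzmann with xo = 2 and k = -1.5, so it increases and is shifted
>> plot(x,S,'b',x,B,'r');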
Note that Matlab also has a series of commands for fitting data to surfaces. The idea is the
same, but now there are two independent variables and one dependent variable (often thought of
as the height of the surface).
11.2.2 INTERPOLATION AND SPLINES
Two very useful operations that follow directly from curve fitting are interpolation and extrapolation.
In the impedance example above, we may wish to estimate the voltage output every 1Hz even though
you only measured it every 10Hz. The function interp1 will allow you to interpolate values.
>> figure(2)
>> omega = [0:10:100];                                  %frequency in Hz
>> SineWaveAmplitude = [2 5 10 37 59 41 12 4 3 1 0.1];
>> plot(omega,SineWaveAmplitude,'*');
>> DesiredOmega = [0:1:100];
>> NewV = interp1(omega,SineWaveAmplitude,DesiredOmega);
>> hold on
>> plot(DesiredOmega,NewV,'r');
The inputs to interp1 are the original independent and dependent variables, along with the
desired independent variable. The output, NewV, is the new vector of voltages that correspond to
DesiredOmega. Finally, we plot the interpolated data over the original data.
With no options interp1 will default to a “linear” interpolation, meaning that all data points
in SineWaveAmplitude will be connected by straight lines. Interpolation can become much
more sophisticated by using higher order fits between data points. The most important options for
interp1 are “spline” and “cubic”. For example,
>> NewVSpline=interp1(omega,SineWaveAmplitude,DesiredOmega,'spline');
>> plot(DesiredOmega,NewVSpline,'g');
You should note that a spline is simply a local polynomial fit to data. In other words, a low order
polynomial is fit to only a few local points. A different fit is then created for other local points.
You should be very careful not to confuse NewV with the real measured experimental data or with
the analytical function created above. To get an analytical function we must assume a form for the
equation and then fit its parameters, and that fit was performed on the entire data set. With NewV,
the interpolation was created without assuming any overall function; only very simple functions,
e.g., polynomials, span the gaps between neighboring points.
In our example, we used all evenly spaced data points, but this does not need to be the
case. In other words, the independent variables omega and DesiredOmega could have been an
irregularly spaced vector. You should view the help for interp1 for more details. You can also view
the functions interp2, interp3 and interpn which perform 2, 3 and N dimensional interpolation.
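For example, a minimal sketch with made-up, irregularly spaced sample values (not real measurements) is
>> omegaIrr = [0 5 15 40 70 100];                          %irregularly spaced frequencies (hypothetical data)
>> VIrr = [2 8 25 59 6 0.1];                               %hypothetical amplitudes
>> NewVIrr = interp1(omegaIrr,VIrr,DesiredOmega,'spline');
>> plot(omegaIrr,VIrr,'o',DesiredOmega,NewVIrr,'m');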
The spline toolbox also has tools for extrapolation, confidence intervals, tools to remove
outliers and fitting to surfaces. You should type
>> help splines
for more information.
11.3 DIFFERENTIAL AND INTEGRAL EQUATIONS
In Section 7.3, we introduced Euler’s Method for the numerical integration of a differential equa-
tion. And an example in chapter 7 demonstrated how to use Euler’s Method for multiple coupled
differential equations. Although the Euler Method is easy to understand and program, it is limited
in that it is only first order accurate. The idea of the order of the method is one that comes up in
many numerical methods, and it signifies how well of an approximation the method will yield. In the
case of differential equations, we can write the solution to a differential equation as a Taylor series.
Then the order of the numerical integration technique is given as the number of terms in the Taylor
series that are included in the approximation. In the case of Euler, only the first term is included.
There are other numerical integration methods that use more terms to improve accuracy.
The most popular is a family of methods, available at any desired order, known as the Runge-Kutta methods.
Although, in principle, the order can be increased to obtain more and more accurate solutions,
the computational cost (memory and time) increases as the order increases. For most applications,
a 4th-5th order method is where the gain in numerical accuracy balances the computational cost.
To show how the built-in solvers can be used, we will solve the same FitzHugh-Nagumo
model of a neuron as in chapter 7
\frac{dV}{dt} = V - \frac{V^3}{3} - W + I    (11.4)

\frac{dW}{dt} = a(V + b - cW)    (11.5)
where V is the cell membrane potential, W is a recovery variable and I is a stimulus current. We will
assume the constants are a = 0.08, b = 0.7 and c = 0.8. We will use the ode45 solver that has the
following command line call
[t,y]=ode45(odefun,tspan,y0);
where odefun is a function that will evaluate the derivatives given the variables in the vector y.
y0 are the initial conditions and tspan is a vector with two elements, [T0, Tfinal]. First, open a
Matlab function “FHNFunction.m” and enter the following text
function dy = FHNFunction(t,y)
a = 0.08;
b = 0.7;
c = 0.8;
I = 0.556;      %Stimulus Current
V = y(1);
W = y(2);
dy = zeros(2,1);
dy(1) = V-(V^3)/3-W+I;
dy(2) = a*(V+b-c*W);
In this function, all that is reported is how to compute the right-hand side of the differential
equations. This is done because there is no assumption made about how the time step (Δt) will be
picked or how the solution will be advanced forward. Next, enter the following on the command
line.
>> tspan = [0 100];
>> y0 = [0 0];
>> [t,y]=ode45(@FHNFunction,tspan,y0);
>> plot(t,y);
You should note that the output t contains a vector of the times and y contains both the V and W
vectors.
One reason to use one of Matlab’s built-in solvers is that they very often have what are
known as adaptive time steppers. To view how Matlab has changed Δt during the simulation
>> plot(diff(t));
You should note that unlike the Euler method used in chapter 7, the time step changes. In general,
Δt becomes large when the solution is not changing much and becomes small when the solution is
changing rapidly.
The help for ode45 contains a list of the other solvers that can be used, functions for evalu-
ating the accuracy of the solution (e.g., deval ) as well as some demonstrations. You should also note
that there is a partial differential equation toolbox for handling cases where dynamics are occurring
in both time and space.
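For example, a minimal sketch of deval (assuming ode45 is called with a single output argument, in which case it returns a solution structure that deval can evaluate) is
>> sol = ode45(@FHNFunction,[0 100],[0 0]);      %solution returned as a structure
>> tFine = 0:0.1:100;
>> yFine = deval(sol,tFine);                     %evaluate the solution on a uniform time grid
>> plot(tFine,yFine);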
11.3.1 INTEGRALS AND QUADRATURE
Finding the area under a curve can often yield valuable insight into a biological problem. For example,
we may need to find the area under a curve to compute total charge as a current that flows over time
Q = \int_{t_1}^{t_2} I(t)\,dt    (11.6)

or mechanical work as the integral of force applied over some distance.

W = \int_{x_1}^{x_2} F(x)\,dx    (11.7)
The problem is that rather than have the analytic functions I (t) or F (x), we have discrete points in
the vectors I or F . There are various methods for numerically computing areas under curves formed
by discrete points, and they all fall under the general category of Quadrature. For example,
>> x = 0:0.01:1;
>> y1 = 20*exp(x);
>> y2 = 10*rand(length(x),1);
>> Int1 = trapz(x,y1);
>> Int2 = trapz(x,y2);
And it is a simple matter to then get the area between the two curves as
>> Int1-Int2
The above example uses the trapezoidal approximation. Matlab has many other quadrature methods,
each with their advantages and disadvantages. As with most numerical methods, there is a trade off
between accuracy on the one hand and computing cost on the other. Some quadrature methods can
be found in the command quad. You should note that you can also evaluate double (quad2d ) and
triple (triplequad ) integrals.
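For example, a minimal sketch using anonymous functions (the integrands below are arbitrary choices for illustration) is
>> Int1q = quad(@(x) 20*exp(x),0,1)              %adaptive quadrature of 20*exp(x) from 0 to 1
>> Int2dim = quad2d(@(x,y) x.*y,0,1,0,1)         %double integral of x*y over the unit square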
11.4 SIGNAL PROCESSING TOOLBOX
Biological signals are often noisy, low in amplitude and composed of many superimposed streams of
information. Signal processing is necessary to isolate features of interest. For example, when
analyzing an EEG signal from the scalp, it may be important to determine the relative contribution
of a particular frequency band to the signal. Alternatively, it may be important to look for what is
known as a complex - a series of spikes and waves that are signatures of specific events in the brain.
Using these two types of information, a clinical neurologist can gain a great deal of information
about the healthy or abnormal function of a patient’s brain.
The type of operation to perform is nearly always some sort of filter, and these are typically
either in the time or frequency domain. In the time domain, the two most useful are moving
averages and correlations. Moving averages are important because a serious problem with most
experimental data is that it is noisy. The result is that an upward or downward trend can often be
lost when curve fitting or looking for trends. One often used solution is to smooth the data before
fitting to a function. The key is to average nearby points, but the weighting of those points may
vary.
>> w1 = hamming(64);
>> wvtool(w1)
>> Sig = rand(1000,1);
>> SigWin = conv(Sig,w1,'same');
will create and then display the Hamming Window with 64 samples. The idea is that this window
will be moved over the entire dataset, and each new point will then be a weighted average of the
64 points around it. The window is then moved one step forward and the averaging occurs again. For
example,
>> t = 0:0.01:1;
>> Sig = sin(2*pi*t);
>> Sig = Sig + rand(1,length(t));     %noisy sinusoid
>> plot(t,Sig);
>> hold on
>> SigWin = conv(Sig,w1,'same');      %smoothed sinusoid
>> plot(t,SigWin,'r');
Note that the general shape of the sine wave has been recovered, but the amplitude is off. More
processing would be needed to correct this. Also note that the convolving function (conv) was used
to apply the filter to all points. Other useful windows can be found in the help for window.
Another very common operation is to gain some quantitative measure of how similar two
signals are to one another. Here we are looking for correlations. Matlab has a number of functions,
e.g., conv, cov, corrcoef, for performing these types of operations. The theory behind these operations
will not be covered here, but you can view the help within Matlab and online for more information.
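For example, a minimal sketch comparing the noisy and smoothed signals created above is
>> R = corrcoef(Sig,SigWin)     %2x2 matrix; the off-diagonal entry is the correlation between the two signals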
In the frequency domain, we typically think of designing a filter that will eliminate some
frequencies but keep others. For example, a bandpass frequency filter may be needed to analyze
the results of fatigue testing on a muscle, but a low pass filter may be desirable for isolating low
frequencies in an EEG. To gain more appreciation for the options in the signal processing tool box,
you should open up an internet browser and navigate to www.mathworks.com. Click on Products
and Services and then on Product List. You should see a full list of the Matlab toolboxes. Scroll
down to the Signal Processing Toolbox and click on the link. On the left-hand menu, you will find
a link to Demos and Webinars. You should watch the short Introduction demo. On the left-hand
side, you will also find a description of the toolbox along with a complete list of all of the functions.
You should note that nearly all toolboxes in Matlab have good tutorials to help you get started.
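As a quick taste of frequency-domain filtering, a minimal sketch applying a lowpass Butterworth filter to the noisy sinusoid from above (butter and filtfilt are standard Signal Processing Toolbox functions, but check the help for the exact options in your version) is
>> Fs = 100;                             %sampling rate implied by t = 0:0.01:1
>> [b,a] = butter(4,10/(Fs/2),'low');    %4th-order lowpass filter with a 10 Hz cutoff
>> SigLow = filtfilt(b,a,Sig);           %zero-phase filtering of the noisy signal
>> plot(t,SigLow,'k');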
11.5 IMAGE PROCESSING TOOLBOX
Biomedical engineers often generate images or display their data as images. In both cases, the raw
images, like biological signals, are typically noisy and low contrast. The image processing toolbox
contains algorithms for bringing features of interest to the forefront. There are also times when a
user may be looking for correlations between different parts of an image, for example to track a cell
as it crawls across a series of images that comprise a movie (known as image registration). Again,
the image processing toolbox contains routines for just such a task. Another important feature of
the image processing toolbox is edge detection. Below is a very brief demonstration.
>> IMAGE = imread('circuit.tif');
>> figure(1); imshow(IMAGE);
>> IMAGE2 = edge(IMAGE,'prewitt');
>> figure(2); imshow(IMAGE2);
>> IMAGE3 = edge(IMAGE,'canny');
>> figure(3); imshow(IMAGE3);
The above lines highlight three commands. The first is imread which is a general purpose reader for
many different image formats. Second is imshow which is a general function for displaying grayscale
images. Last is the edge command which finds edges of an image in the regions where there is a
large contrast in grayscale. edge contains many different methods and options, and you may want to
view the help file.
There are a number of other helpful commands, such as imresize, imrotate, imcrop and im-
transform as well as a number of sample images to test image processing algorithms, e.g., “board.tif ”,
“trees.tif ”,“cameraman.tif ”.
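For example, a minimal sketch combining two of these commands with the circuit image loaded above is
>> IMAGE4 = imrotate(imresize(IMAGE,0.5),45);    %shrink the image by half, then rotate it 45 degrees
>> figure(4); imshow(IMAGE4);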
You may wish to view some of the built-in demonstrations that will give you some sense of the
power of the image processing toolbox.
>> iptdemos
11.6 SYMBOLIC SOLVER
Thus far, we have focused on programming techniques and toolboxes which perform numerical
approximations. Matlab also has a toolbox for performing symbolic math, allowing for direct
analytical solutions. The functions in the Symbolic Toolbox are very similar to two other well-
known symbolic math processors, Mathematica (Wolfram Research) and Maple (MapleSoft). It
should be noted that Mathematica and Maple both allow algorithmic computing and graphics,
similar to the commands in the first 10 chapters, but their focus is on symbolic math. Matlab, on
the other hand, is designed for numerical approximations and has the symbolic toolbox as an addition.
In keeping with the theme of only showing the surface level of each toolbox, we will show
a few examples where the Symbolic Toolbox can be useful. An important distinction between
symbolic and numerical (algorithmic) solutions must be made first. Consider the following equations that
describe a compartment model of a drug in the body.
\frac{dA}{dt} = -k_o A    (11.8)

\frac{dB}{dt} = k_o A - k_1 B    (11.9)

\frac{dE}{dt} = k_1 B    (11.10)
where A is concentration at the absorption site, B is the concentration in the body and E is the
concentration being eliminated. We have already learned a variety of ways to solve this problem given
the initial conditions A(0) = Ao, B(0) = 0 and E(0) = 0. These range from creating a matrix, to
finding eigenvalues to solving the equations numerically using the toolboxes described above. In all
cases, Matlab is using some form of a numerical method, e.g., a matrix solve, numerical integration.
Below are the commands for solving the equations above
>> % To be entered on the same line
>> [A,B,E]=dsolve('DA=-k0*A','DB=k0*A-k1*B','DE =k1*B',
>> 'A(0)=A0','B(0)=0','E(0)=0');
>> A = simplify(A);
>> B = simplify(B);
>> E = simplify(E);
Note that the solution is not a vector of values, but rather an equation. This is what makes symbolic
math different from numerical methods.
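For example, you can display one of the symbolic solutions in a more readable, typeset-like form and confirm its class with
>> pretty(B)     %readable display of the symbolic expression
>> class(B)      %should report sym, the symbolic object class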
The symbolic math toolbox can also perform differentiation (diff) and integration (int). It
may seem strange that the diff command could either be used to find the difference between
elements in a vector or perform symbolic differentiation. This is an example where Matlab has what
are called overloaded methods. If you type
>> help diff
you will notice that at the bottom of the help is a section on Overload Methods, and one of the
listed methods is sym/diff. If you click on this link, you will be taken to the help for the symbolic
differentiation.
This still does not explain how Matlab seems to know which version of diff to use. The an-
swer is in the values that are passed in. You should note that both commands take one input, but
the numerical diff takes a vector whereas the symbolic diff takes a symbolic expression.
>> x = sym('x'); % create a symbolic variable called x
>> t = sym('t'); % create a symbolic variable called t
>> diff(sin(x^2))     %analytic result
>> diff(t^6,6)        %evaluate analytic result
The same logic applies to int, which can be used either in a numerical or analytic way in Matlab.
You should note that the sym command was used to create a symbolic variable. You should view the
Matlab workspace to verify that x and t are in fact of the class sym (symbolic). Likewise, you may
want to force one function to be substituted into another using the subs command.
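A minimal sketch of the symbolic int and subs commands (using the symbolic variables x and t created above) is
>> int(sin(x),x)        %indefinite symbolic integral
>> int(x^2,0,1)         %definite integral from 0 to 1
>> f = x^2 + 1;
>> subs(f,x,t+1)        %substitute t+1 for x in f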
Above, a series of differential equations was solved, but a simpler case is solving simultaneous
algebraic equations
>> [x,y] = solve('x^2 + x*y + y = 3','x^2 - 4*x + 3 = 0')
You can even perform more advanced analytical processes such as a Taylor series expansion.
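For example, a minimal sketch of a Taylor expansion (the older taylor(f,n) syntax is shown; newer releases use an 'Order' option, so check the help for your version) is
>> taylor(exp(x))        %Maclaurin series of exp(x) to the default order
>> taylor(sin(x),8)      %terms up to order 8 (syntax may differ by release)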
11.7 ADDITIONAL TOOLBOXES AND RESOURCES
There are a wide range of additional toolboxes available through Matlab. Below are some that have
direct ties to biology and biomedical engineering.
1. Simulink is a graphical programming interface for simulating systems. It consists of a series
of graphical blocks that can be connected by “wires”. In this way, the flow of the program is
visibly apparent and does not require the writing of a script.
2. Graphical User Interfaces (GUIs) allow a user to interact with programs (similar to those
described in Section 6.7), but in a custom designed window. Many of the same web-like inputs
are supported, including radio buttons and text boxes, but Matlab also supports sliders, dials
and the ability to embed custom graphics into the window, e.g., two and three-dimensional
plots, subplots. GUIs are a very powerful way to make a complex program more user friendly.
For more information, there are a number of good web resources as well as a built-in GUI
creator called guide.
3. Neural Networks are an abstraction of how real neurons connect to one another and perform
pattern recognition functions on data. They typically need to be trained on some data set,
where the answer is known. Parameters of the network are adjusted to give the minimum error
between the actual output of the network and the desired output of the network. In this way,
neural networks are very similar to filters, but they are adaptable and tunable.
4. Genetic Algorithms are an abstraction of how evolution is thought to use random variation,
mutation, selection and mating to produce good (fit in the language of genetic algorithms)
solutions to a problem. For some problems, the possible solution is not obvious. Even worse,
because there may be many parameters involved, it would take an enormous amount of time to
perform a parametric study. In these situations, we can generate a random sampling of possible
solutions that must compete with one another for some resource. The most fit solutions will
outcompete the less fit. These most fit solutions will have the opportunity to “breed” with
other fit solutions in the hope of creating an even more fit solution. Superimposed on these
dynamics may also be mutation, which will widen the exploration of the solution space.
5. Control Systems is a diverse field that spans many engineering disciplines and is largely about
the idea of self-regulation and system characterization. In biological systems, this is similar
to the idea of homeostasis, and there is an entire field called systems physiology that studies
biological function from a quantitative, systems point of view. Likewise, a biomedical engineer
often must create devices that compensate for some poorly functioning part of the body. The
control systems toolbox contains basic routines for simulating and characterizing systems as
well as special graphics routines.
6. SimBiology is a graphical interface for modeling systems in biology and pharmacokinetics. It
is similar in many ways to Simulink and also provides the user with some unique solvers and
analysis tools.
7. MEX, short for Matlab executable, allows users to write a function in a compiled language,
e.g., C, C++, FORTRAN, and then use that function in Matlab. MEX code is not written
in Matlab and therefore requires a user to have knowledge of another computing language.
MEX functions are very useful when an algorithm cannot make good use of matrix-vector
operations, i.e., it contains many loops. These functions will appear as built-in functions in
Matlab - recall the attempt in an earlier chapter to view the “.m” file for sum.
8. The Matlab Compiler allows a user to compile their Matlab code. There are at least two
reasons to compile code. First is to speed up simulation time. Remember that Matlab is an
interpreted scripting language, meaning that it is flexible but slow. Compiling could greatly
increase the speed of the code. Second is if you wish to share the function of your code, without
sharing the code itself. This can be useful if you work for a company and do not wish to share
the algorithms that are used.
9. Mathworks recently added a Parallel Computing toolbox to allow users with networked
computers to break up a large computational task into smaller tasks that can then be sent
to individual computers. It should be noted that some algorithms lend themselves to easy
adaptation to parallel computing, whereas others do not.
11.7.1 MATLAB CENTRAL AND OTHER ONLINE HELP
There are a number of other very helpful online resources. The most important is Matlab Central,
a place for Matlab users to ask for help, post help and post new code. It is located at
http://www.mathworks.com/matlabcentral/ and should be the first place you look if you have an
algorithm to write. It is accepted practice in the coding world to use the code of others and to cite
them appropriately. You may also find code by using a search engine to find the websites of others.
Both can be excellent sources of code, but you should remember that Mathworks does not verify the
code from outside parties.
Author’s Biography
JOSEPH V. TRANQUILLO
Joseph Tranquillo is an associate professor of biomedical engineering at Bucknell University
where he has been a faculty member since 2005. He received his Doctor of Philosophy degree in
biomedical engineering from Duke University (Durham, NC) and Bachelor of Science degree in
engineering from Trinity College (Hartford, CT).
His teaching interests are in biomedical signals and systems, neural and cardiac electrophys-
iology, and medical device design. Nationally Joe has published or presented over 40 peer reviewed
or invited works in the field of engineering education. He was the founder and inaugural chair of
the Undergraduate Research Track at the Biomedical Engineering Society (BMES) conference,
co-organized the Biomedical Engineering Body-Of-Knowledge Summit and currently serves
on the board of the Biomedical Engineering Division of the American Society of Engineering
Education (ASEE). He is the winner of the 2010 National ASEE Biomedical Engineering
Teaching Award.
His technical research interests are in non-linear dynamics in the heart and brain. He has
over 50 publications and presentations, and he has authored a textbook, Quantitative Neurophys-
iology. He is a member of the Biomedical Engineering Society, IEEE Engineering in Medicine
and Biology Society, American Physical Society and is an elected member of Sigma Xi and Heart
Rhythm.
When not teaching or doing research, he enjoys improvisational dance and music, running
trail marathons, backpacking, brewing Belgian beers, and raising his two children Laura and Paul.
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=8379400.pdf&bkn=8379399&pdfType=book
|
Series ISSN 1939-5221
Strategic Cost Fundamentals
for Designers, Engineers, Technologists, Estimators, Project Managers, and Financial Analysts
Robert C. Creese, West Virginia University (Emeritus)
This book is designed to introduce designers, engineers, technologists, estimators, project managers,
and financial analysts as well as students in engineering and business to strategic cost tools for project
cost evaluations. The three main sections are as follows. (1) Cost Relationships, Financial Statements,
and Performance Measures–This section describes the relationships between cash flows and profits; the
relationships between financial statements and the Purcell Diagram; and the issues of cost estimating,
time-based breakeven analysis and time-based earned schedule. (2) Tools for Economic Evaluations–
This section considers the basic mathematical relations used behind the economic equations and factors;
discrete and continuous interest; depreciation terms and methods; and the Present Value of Principal
Approach for evaluating loans. (3) Methods for Project Evaluation and Risk Analysis–This section
considers payback periods, present worth analysis, return on investment, internal rate of return, benefit/
cost ratios and positive-negative project balances; risk techniques of sensitivity analysis, optimistic-
pessimistic analysis, discrete probability examples, and continuous probability models using the normal
and triangular distributions.
ABOUT SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis Digital Library of Engineering and
Computer Science. Synthesis Lectures provide concise original presentations of important research and
development topics, published quickly in digital and print formats. For more information, visit our website:
http://store.morganclaypool.com
Strategic Cost Fundamentals
for Designers, Engineers, Technologists, Estimators,
Project Managers, and Financial Analysts
Synthesis Lectures on
Engineering
Each book in the series is written by a well known expert in the field. Most titles cover subjects
such as professional development, education, and study skills, as well as basic introductory
undergraduate material and other topics appropriate for a broader and less technical audience.
In addition, the series includes several titles written on very specific topics not covered
elsewhere in the Synthesis Digital Library.
Strategic Cost Fundamentals: for Designers, Engineers, Technologists, Estimators,
Project Managers, and Financial Analysts
Robert C. Creese
2018
Empowering Professional Teaching in Engineering: Sustaining the Scholarship of
Teaching
John Heywood
2018
The Human Side of Engineering
John Heywood
2017
Geometric Programming for Design Equation Development and Cost/Profit
Optimizaton, Third Edition
Robert C. Creese
2016
Engineering Principles in Everyday Life for Non-Engineers
Saeed Benjamin Niku
2016
A, B, See... in 3D: A Workbook to Improve 3-D Visualization Skills
Dan G. Dimitriu
2015
The Captains of Energy: Systems Dynamics from an Energy Perspective
Vincent C. Prantil and Timothy Decker
2015
Lying by Approximation: The Truth about Finite Element Analysis
Vincent C. Prantil, Christopher Papadopoulos, and Paul D. Gessler
2013
Simplified Models for Assessing Heat and Mass Transfer in Evaporative Towers
Alessandra De Angelis, Onorio Saro, Giulio Lorenzini, Stefano D’Elia, and Marco Medici
2013
The Engineering Design Challenge: A Creative Process
Charles W. Dolan
2013
The Making of Green Engineers: Sustainable Development and the Hybrid Imagination
Andrew Jamison
2013
Crafting Your Research Future: A Guide to Successful Master’s and Ph.D. Degrees in
Science & Engineering
Charles X. Ling and Qiang Yang
2012
Fundamentals of Engineering Economics and Decision Analysis
David L. Whitman and Ronald E. Terry
2012
A Little Book on Teaching: A Beginner’s Guide for Educators of Engineering and
Applied Science
Steven F. Barrett
2012
Engineering Thermodynamics and 21st Century Energy Problems: A Textbook
Companion for Student Engagement
Donna Riley
2011
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo
2011
Systems Engineering: Building Successful Systems
Howard Eisner
2011
Fin Shape Thermal Optimization Using Bejan’s Constructal Theory
Giulio Lorenzini, Simone Moretti, and Alessandra Conti
2011
Geometric Programming for Design and Cost Optimization (with illustrative case study
problems and solutions), Second Edition
Robert C. Creese
2010
Survive and Thrive: A Guide for Untenured Faculty
Wendy C. Crone
2010
Geometric Programming for Design and Cost Optimization (with Illustrative Case Study
Problems and Solutions)
Robert C. Creese
2009
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey and Jeffrey W. Holmes
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Analog Multimedia
Explorations
Lina J. Karam and Naji Mounsef
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Digital Multimedia and
Robotics Explorations
Lina J. Karam and Naji Mounsef
2008
CAD/CAM of Sculptured Surfaces on Multi-Axis NC Machine: The DG/K-Based
Approach
Stephen P. Radzevich
2008
Tensor Properties of Solids, Part Two: Transport Properties of Solids
Richard F. Tinder
2007
Tensor Properties of Solids, Part One: Equilibrium Tensor Properties of Solids
Richard F. Tinder
2007
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
2007
Project Management for Engineering Design
Charles Lessard and Joseph Lessard
2007
Relativistic Flight Mechanics and Space Travel
Richard F. Tinder
2006
Copyright © 2018 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations
in printed reviews, without the prior permission of the publisher.
Strategic Cost Fundamentals: for Designers, Engineers, Technologists, Estimators, Project Managers,
and Financial Analysts
Robert C. Creese
www.morganclaypool.com
ISBN: 9781681733524 paperback
ISBN: 9781681733531 ebook
ISBN: 9781681733548 hardcover
DOI 10.2200/S00846ED1V01Y201804ENG032
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING
Series ISSN
Print 1939-5221 Electronic 1939-523X
Strategic Cost Fundamentals
for Designers, Engineers, Technologists, Estimators,
Project Managers, and Financial Analysts
Robert C. Creese
SYNTHESIS LECTURES ON ENGINEERING #32
ABSTRACT
This book is designed to introduce designers, engineers, technologists, estimators, project man-
agers, and financial analysts as well as students in engineering and business to strategic cost
tools for project cost evaluations. The three main sections are as follows. (1) Cost Relationships,
Financial Statements, and Performance Measures—This section describes the relationships be-
tween cash flows and profits; the relationships between financial statements and the Purcell Di-
agram; and the issues of cost estimating, time-based breakeven analysis and time-based earned
schedule. (2) Tools for Economic Evaluations—This section considers the basic mathematical
relations used behind the economic equations and factors; discrete and continuous interest; de-
preciation terms and methods; and the Present Value of Principal Approach for evaluating loans.
(3) Methods for Project Evaluation and Risk Analysis—This section considers payback peri-
ods, present worth analysis, return on investment, internal rate of return, benefit/cost ratios and
positive-negative project balances; risk techniques of sensitivity analysis, optimistic-pessimistic
analysis, discrete probability examples, and continuous probability models using the normal and
triangular distributions.
KEYWORDS
risk analysis, project evaluation, loans, Purcell diagram, engineering economic ex-
pressions, breakeven analysis, cost estimating and profit calculations, depreciation
methods, earned value management
Contents
PART I   Cost Relationships, Financial Statements, and Performance Measures . . . . . 1
1   Fundamental Terms and Concepts . . . . . 3
1.1   Introduction . . . . . 3
1.2   Basic Relationships between Cash Flows, Profits, Depreciation, and Taxes . . . . . 3
1.3   Cash Flow and Profit Example . . . . . 5
1.4   Cash Flow Diagrams . . . . . 8
1.5   Summary . . . . . 9
1.6   References . . . . . 10
1.7   Evaluative Questions . . . . . 10
2   Financial Statements and the Purcell Diagram . . . . . 15
2.1   Introduction . . . . . 15
2.2   Financial Statements . . . . . 15
2.3   The Purcell Diagram . . . . . 17
2.4   Summary . . . . . 19
2.5   References . . . . . 20
2.6   Evaluative Questions . . . . . 20
3   Costs and Cost Estimating . . . . . 25
3.1   Introduction . . . . . 25
3.2   Cost Components for Estimates . . . . . 28
3.2.1   Basic Components . . . . . 28
3.2.2   Traditional and ABC Overhead Allocation Methods . . . . . 28
3.2.3   Profit Calculations . . . . . 36
3.3   Cost Estimation Accuracy . . . . . 37
3.4   Summary . . . . . 39
3.5   References . . . . . 40
3.6   Evaluative Questions . . . . . 40
4   Breakeven Analysis . . . . . 43
4.1   Introduction . . . . . 43
4.2   Breakeven Model Basics . . . . . 43
4.3   Breakeven Points . . . . . 44
4.3.1   Categories and Typical Examples for the Production Quantity-Based System . . . . . 44
4.3.2   Categories and Typical Examples for the Production Time-Based System . . . . . 45
4.4   Production Quantity-Based Breakeven Example . . . . . 46
4.5   Production Time-Based Breakeven Example . . . . . 51
4.6   Summary . . . . . 56
4.7   References . . . . . 56
4.8   Evaluative Questions . . . . . 56
5   Earned Value Management . . . . . 59
5.1   Introduction . . . . . 59
5.2   Earned Value Management Performance Parameters . . . . . 59
5.3   Example Problem Using Traditional Earned Value Management . . . . . 62
5.4   Example Problem Using Earned Schedule in Earned Value Management . . . . . 63
5.5   Summary . . . . . 67
5.6   References . . . . . 70
5.7   Evaluative Questions . . . . . 71
PART II Tools for Economic Evaluations . . . . . . . . . . . 75
6   Fundamental Definitions, Terms, and Concepts for Technical Economic Evaluations . . . . . 77
6.1   Introduction . . . . . 77
6.2   Fundamental Terms Related to Interest Calculations . . . . . 77
6.2.1   Interest and Interest Rate . . . . . 77
6.3   Actual, Compound, Nominal, and Effective Annual Interest Rates . . . . . 81
6.4   Factors in Determining Interest Rates . . . . . 82
6.5   Inflation-Free Interest Rates, Constant Currency, and Actual Currency . . . . . 83
6.6   Currency Exchange Calculations . . . . . 84
6.7   Summary . . . . . 85
6.8   References . . . . . 85
6.9   Evaluative Questions . . . . . 85
7   Basic Mathematical Relationships for Economic Calculations . . . . . 87
7.1   Introduction . . . . . 87
7.2   Sums of Numbers . . . . . 87
7.3   Geometric Progression . . . . . 88
7.4   Infinite Limit . . . . . 89
7.5   Summary . . . . . 90
7.6   References . . . . . 91
7.7   Evaluative Questions . . . . . 91
8   Basic Economic Factors and Equations . . . . . 93
8.1   Introduction . . . . . 93
8.2   Single Payment Discrete Interest Factors . . . . . 93
8.2.1   Discrete Interest Future Worth Factor (F/P, i, n) . . . . . 94
8.2.2   Discrete Interest Future Worth Example . . . . . 95
8.2.3   Discrete Interest Present Worth Factor (P/F, i, n) . . . . . 95
8.2.4   Discrete Present Worth Example . . . . . 96
8.3   Uniform Series Payments Discrete Interest Factors . . . . . 96
8.3.1   Uniform Series Discrete Interest Future Worth Factor (F/A, i, n) . . . . . 97
8.3.2   Uniform Series Discrete Interest Future Worth Example . . . . . 98
8.3.3   Sinking Fund Discrete Interest Factor (A/F, i, n) . . . . . 98
8.3.4   Sinking Fund Discrete Interest Factor Example . . . . . 99
8.3.5   Uniform Series Discrete Interest Present Worth Factor (P/A, i, n) . . . . . 99
8.3.6   Uniform Series Discrete Interest Present Worth Example . . . . . 100
8.3.7   Capital Recovery Discrete Interest Factor (A/P, i, n) . . . . . 101
8.3.8   Capital Recovery Discrete Interest Factor Example . . . . . 101
8.4   Single Payment Continuous Interest Factors . . . . . 102
8.4.1   Continuous Interest Future Worth Single Payment Factor (F/P, r, n) . . . . . 102
8.4.2   Continuous Interest Future Worth Single Payment Example . . . . . 103
8.4.3   Continuous Interest Present Worth Single Payment Factor (P/F, r, n) . . . . . 103
8.4.4   Continuous Interest Present Worth Single Payment Example . . . . . 104
8.5   Uniform Series Payments Continuous Interest Factors . . . . . 104
8.5.1   Uniform Series Continuous Interest Factors–Future Worth, Sinking Fund, Present Worth, and Capital Recovery . . . . . 104
8.5.2   Uniform Series Continuous Interest Future Worth (F/A, r, n) Example . . . . . 105
8.5.3   Uniform Series Continuous Interest Sinking Fund (A/F, r, n) Example . . . . . 106
8.5.4   Uniform Series Continuous Interest Present Worth (P/A, r, n) Example . . . . . 107
8.5.5   Uniform Series Continuous Interest Capital Recovery Factor (A/P, r, n) Example . . . . . 107
8.6   Summary . . . . . 108
8.7   References . . . . . 109
8.8   Evaluative Questions . . . . . 109
9   Gradient Economic Factors and Equations . . . . . 113
9.1   Introduction . . . . . 113
9.2   Standard Uniform Gradient Discrete Interest . . . . . 113
9.2.1   Standard Uniform Gradient Discrete Interest Example . . . . . 116
9.3   Uniform Ramp Gradient Discrete Interest . . . . . 117
9.3.1   Uniform Ramp Gradient Discrete Interest Example . . . . . 119
9.4   Geometric Gradient Discrete Interest . . . . . 120
9.4.1   Geometric Gradient Discrete Interest Example . . . . . 123
9.5   Escalation Gradient Discrete Interest . . . . . 124
9.5.1   Escalation Gradient Discrete Interest Example . . . . . 126
9.6   Standard Uniform Gradient Continuous Interest Formulas . . . . . 129
9.6.1   Standard Uniform Gradient Continuous Interest Example . . . . . 129
9.7   Ramp Uniform Gradient Continuous Interest Formulas . . . . . 131
9.7.1   Uniform Ramp Gradient Continuous Interest Example . . . . . 132
9.8   Geometric Gradient Continuous Interest Formulas . . . . . 133
9.8.1   Geometric Gradient Continuous Interest Example . . . . . 134
9.9   Escalation Gradient Continuous Compounding Formulas . . . . . 135
9.9.1   Escalation Gradient Continuous Interest Example . . . . . 136
9.10   Summary of Gradient Expressions . . . . . 137
9.11   References . . . . . 139
9.12   Evaluative Questions . . . . . 139
10 Depreciation Terms, Methods, and Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
10.1.1 Cash Flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
10.2 Depreciation Terms and Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
10.2.1 Depreciation Classes of Property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
10.2.2 Recovery Period and Depreciation Life . . . . . . . . . . . . . . . . . . . . . . . . 147
10.2.3 Depreciation Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
10.3 Traditional Methods of Depreciation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
10.3.1 Straight Line Depreciation Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
10.3.2 Declining Balance Depreciation Method . . . . . . . . . . . . . . . . . . . . . . . 151
10.3.3 Depreciation Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
10.4 The MACRS Depreciation Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
10.4.1 MACRS-GDS Recovery Periods and Property Classes . . . . . . . . . . . 154
10.4.2 MACRS-ADS Recovery Periods and Property Classes . . . . . . . . . . . 155
10.4.3 MACRS-GDS Mid-Year Recovery Periods . . . . . . . . . . . . . . . . . . . . 156
10.4.4 MACRS-ADS Mid-Year Recovery Periods . . . . . . . . . . . . . . . . . . . . . 156
10.5 Other Depreciation Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
10.5.1 Section 179 Depreciation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
10.5.2 Production-Based Depreciation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
10.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
10.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
10.8 Evaluative Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
11 The Impact of Loans upon Cash Flows, Taxes, and Profits . . . . . . . . . . . . . . . . 167
11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
11.2 The Present Value of Principal Approach for Determining the Principal
and Interest Components of a Loan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
11.3 Example Problem of Loan Problem Using Present Value of Principal
Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
11.4 Loans with Cash Flows, Depreciation, Profits, and Taxes . . . . . . . . . . . . . . . 169
11.5 Example Problems of Loans with Cash Flows, Depreciation, Taxes, and
Profits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
11.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
11.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
11.8 Evaluative Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
PART III Methods for Project Evaluation and Risk
Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
12 Basic Project Evaluation Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
12.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
12.2 Payback Period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
12.2.1 Traditional Payback Period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
12.2.2 Discounted Payback Period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
12.3 Time Value of Money Analysis for Project Profit Evaluation . . . . . . . . . . . . 186
12.3.1 Present Worth Analysis of Profits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
12.3.2 Future Worth and Average Annual Worth of Profits . . . . . . . . . . . . . 187
12.4 Return of Original Investment (ROI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
12.4.1 ROI – Not Discounted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
12.4.2 ROI – Discounted (ROI-D) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
12.4.3 ROI Annual Worth – AW (ROI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
12.4.4 ROI Annual Worth (Base Time) – AW-b (ROI) . . . . . . . . . . . . . . . . 189
12.5 Return on Average Investment (RAI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
12.5.1 RAI – Not Discounted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
12.5.2 RAI – Discounted (RAI-D) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
12.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
12.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
12.8 Evaluative Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
13 Advanced Project Evaluation Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
13.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
13.2 Internal Rate of Return (IRR) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
13.3 Modified Internal Rate of Return (MIRR) . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
13.4 Benefit/Cost Ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
13.4.1 Conventional Benefit/Cost Ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
13.4.2 Traffic Intersection Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
13.5 Modified Benefit/Cost Ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
13.6 Positive and Negative Project Balances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
13.6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
13.6.2 Project A Example Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
13.6.3 Project Z Example Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
13.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
13.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
13.9 Evaluative Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
14 Introduction to Risk Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
14.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
14.1.1 Risk vs. Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
14.2 Sensitivity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
14.2.1 Innovative 3D Rapid Prototyping and Tooling Center Example
Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
14.2.2 Selling Price Sensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
14.2.3 Processing Capacity Sensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
14.2.4 Tax Rate Sensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
14.2.5 Investment Life Sensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
14.2.6 Required Rate of Return Sensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
14.2.7 Total Cost Sensitivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
14.3 Optimistic-Pessimistic Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
14.3.1 Innovative 3D Rapid Prototyping and Tooling Center Investor
Concerns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
14.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
14.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
14.6 Evaluative Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
15 Risk Analysis with Probability Considerations . . . . . . . . . . . . . . . . . . . . . . . . . 227
15.1 Probability Methods and Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
15.2 Discrete Probability Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
15.2.1 Donnie the Dealmaker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
15.2.2 The Innovative 3D Rapid Prototyping and Tooling Center . . . . . . . . 230
15.3 Continuous Probability Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
15.3.1 Normal Distribution Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
15.3.2 Triangular Distribution Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
15.4 Risk Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
15.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
15.6 Evaluative Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
A Discrete and Continuous Compounding Factors . . . . . . . . . . . . . . . . . . . . . . . 247
Author’s Biography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
PART I
Cost Relationships, Financial
Statements, and Performance
Measures
C H A P T E R 1
Fundamental Terms and Concepts

1.1 INTRODUCTION
Strategic cost management includes economic analysis (engineering economics) as well as cost
estimating (cost engineering), project management, and financial analysis. Cash flows and prof-
its are the two major strategic cost measures for evaluating the success of an enterprise and taxes,
loans, and depreciation have major influences upon the cash flows and profits. Positive cash flows
are necessary for the success of an enterprise and are similar to the water needed to keep a plant
alive. Positive profits are the rewards of the enterprise and are similar to the fruit of the plant. If
there are no positive cash flows, the enterprise will fail and die, just as a plant without water does.
If the enterprise does not produce positive profits, the enterprise fails because the investors will not
support it, just as plants that do not produce fruit are left to die.
Equipment is needed to produce products and the cost of the equipment must be recovered
and that is done through the concept of depreciation. Depreciation amounts are spread over
the predicted economic life of the investment and different approaches for determining the
depreciation amounts can be used. Equipment purchases often require loans to assist in the
purchase of equipment, and the loan interest is a tax-deductible expense. Taxes are assessed in a
variety of manners, such as on property values or on the amount of profits earned, and are used
to provide services for the community where the facility is located. Taxes and depreciation are
considered as expenses similar to the raw materials, labor, energy, and other items utilized to
produce the products that are sold to produce the profits and positive cash flows.
1.2 BASIC RELATIONSHIPS BETWEEN CASH FLOWS,
PROFITS, DEPRECIATION, AND TAXES
Cash flows represent the net monetary units, such as Dollars, Pounds, Euros, Yen, Won, Bit-
coins, Pesos, or other currency flowing into and out of a business financial venture and it is
desired to have a positive net cash flow. It represents the funds available for expenses and busi-
ness enterprises must have an adequacy of funds to pay for their expenses. Companies with
negative net cash flows will not survive very long, whereas companies with negative profits may
survive for several years if they have positive cash flows.
Net Cash Flow is the difference between total cash receipts (inflows) and total cash dis-
bursements (outflows) for a given period of time such as a month, quarter, or year. They repre-
sent the funds available to pay for expenses and savings for major future investments rather than
borrowing funds.
Profits represent the net revenues minus the expenses and companies must be profitable
to survive in the long-term. Short-term periods of losses often occur during the start-up of
businesses or during periods of economic recession, but long-term losses are not sustainable for
an enterprise. Net Profits are also referred to as Earnings and thus Net Profits per Share is the
same as Earnings per Share.
The following relations are utilizing the basic expressions without adjustments. More
items can be considered, but these are the primary relationships.
Gross Profits = Revenues − Costs − Depreciation (1.1)

Net Profits = Gross Profits − Taxes = Gross Profits × (1 − Tax Rate) (1.2)

Taxes = Gross Profits × Tax Rate (1.3)

Net Profits = (Revenues − Costs − Depreciation) × (1 − Tax Rate) (1.4)

Cash Flows = Revenues − Costs − Taxes (1.5)

Cash Flows = Revenues − Costs − Tax Rate × (Revenues − Costs − Depreciation) (1.6)

Cash Flows = (Revenues − Costs) × (1 − Tax Rate) + Tax Rate × Depreciation (1.7)

Net Profits = Cash Flows − Depreciation (1.8)

or, this can also be written as:

Cash Flows = Net Profits + Depreciation (1.9)
Note that the tax rate is expressed as a decimal in the formulas, thus a 10% tax rate is
expressed in decimal form as 0.10. It can be noted from Equations (1.8) and (1.9) that depreci-
ation has a positive effect on cash flows and a negative effect upon net profits. A decision must
be made to select either cash flows or net profits as the primary objective of the corporation and
thus one must also focus on the depreciation methods typically used.
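As a quick check of these relationships, the short sketch below (Python, which is not part of the original text; the helper name profit_and_cash_flow is ours) evaluates Equations (1.1), (1.2), (1.3), (1.5), and (1.9) with the same figures used in the accelerated-depreciation example of Section 1.3.

# Illustrative evaluation of Equations (1.1), (1.2), (1.3), (1.5), and (1.9).
def profit_and_cash_flow(revenues, costs, depreciation, tax_rate):
    gross_profits = revenues - costs - depreciation       # Eq. (1.1)
    taxes = gross_profits * tax_rate                       # Eq. (1.3)
    net_profits = gross_profits * (1 - tax_rate)           # Eq. (1.2)
    cash_flows = revenues - costs - taxes                  # Eq. (1.5)
    # Eq. (1.9): cash flows are also net profits plus depreciation
    assert abs(cash_flows - (net_profits + depreciation)) < 1e-6
    return gross_profits, taxes, net_profits, cash_flows

print(profit_and_cash_flow(1_000_000, 775_000, 50_000, 0.25))
# (175000, 43750.0, 131250.0, 181250.0)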
There are two major depreciation methods utilized which are the straight line method
which gives equal amounts of depreciation per year over the life of the investment, or the Modified
Accelerated Cost Recovery System (MACRS), referred to as the accelerated depreciation method
in this chapter, which gives higher depreciation amounts in the early years of the investment
life and lower amounts in the later years of the investment. There is an apparent dilemma as ac-
celerated depreciation would initially give higher cash flows and lower net profits than straight
line depreciation and that would not please stockholders and investors. So the business commu-
nity has decided to use both methods; they use accelerated depreciation methods to determine
the taxes they report to the government and use the straight line depreciation method to report
profits to the stockholders and investors. Thus, the reported profits to stockholders and investors
are not the actual profits, but are inflated by the differences in the depreciation methods used.
The difference in the depreciation methods adjusted by the tax rate is reported as deferred taxes
in the report to the stockholders, but the difference in the net profits is usually not presented in
the stockholders’ report. The purpose of the accelerated depreciation method was to encourage
companies to make investments to modernize and improve their production processes. Since
accelerated depreciation improved the cash flows and a straight line was used to report prof-
its to the stockholders, the decision was made to focus upon cash flows rather than profits in
evaluating projects and/or investment alternatives.
A second advantage of the focus on cash flows is that in many cases only cost data is
available and the prices for selling the products are not known, so the alternatives being
investigated can be compared on a minimum cost basis rather than a maximum profit basis, but
care must be taken in this type of analysis.
1.3 CASH FLOW AND PROFIT EXAMPLE
An example illustration will be presented to show the differences in cash flows and profits using
straight line depreciation and accelerated depreciation methods. The data presented in Table 1.1
is used in the formulas presented.
Table 1.1: Cash flow and depreciation data for example problem

Item                                  Amount ($)    Amount (%)
Revenue                               1,000,000
Costs                                   775,000
Tax Rate (%)                                              25
Depreciable Investment                  250,000
Depreciation—Straight Line               25,000           10
Depreciation—Accelerated (MACRS)         50,000           20
The first analysis will use the accelerated depreciation method, which is what the company will
use to report to the government, and it will be assumed that the company has 10,000 shares
of stock.
Gross Profits = Revenues − Costs − Depreciation (1.1)
Gross Profits = $1,000,000 − $775,000 − $50,000 = $175,000
Taxes = Tax Rate × Gross Profits (1.3)
Taxes = 0.25 × $175,000 = $43,750
Net Profits = Gross Profits − Taxes = Gross Profits × (1 − Tax Rate) (1.2)
Net Profits = $175,000 × (1 − 0.25) = $131,250
The earnings per share would be:
Earnings/share = $131,250/10,000 shares = 13.125 $/share.
Cash Flows = Revenues − Costs − Taxes (1.5)
Cash Flows = $1,000,000 − $775,000 − $43,750 = $181,250
or
Cash Flows = Net Profits + Depreciation (1.9)
Cash Flows = $131,250 + $50,000 = $181,250
The second analysis will be using the straight line depreciation method, which will be
reported to its stockholders, and it will also be assumed that the company has 10,000 shares of
stock.
Gross Profits = Revenues − Costs − Depreciation (1.1)
Gross Profits = $1,000,000 − $775,000 − $25,000 = $200,000
Taxes = Tax Rate × Gross Profits (1.3)
Taxes = 0.25 × $200,000 = $50,000
Net Profits = Gross Profits − Taxes = Gross Profits × (1 − Tax Rate) (1.2)
Net Profits = $200,000 × (1 − 0.25) = $150,000 instead of $131,250
The earnings per share would be:
Earnings/share (EPS) = $150,000/10,000 shares = 15.00 $/share instead of 13.125 $/share.
The difference in EPS = 15.00 $/share − 13.125 $/share = 1.875 $/share
Cash Flows = Revenues − Costs − Taxes (1.5)
Cash Flows = $1,000,000 − $775,000 − $50,000 = $175,000
or
Cash Flows = Net Profits + Depreciation (1.9)
Cash Flows = $150,000 + $25,000 = $175,000
Note that the difference in taxes, which is called deferred taxes, is:

Deferred Taxes = Straight Line Taxes − Accelerated Taxes (1.10)
Deferred Taxes = $50,000 − $43,750 = $6,250

The deferred taxes can be determined by the product of the depreciation differences times the
tax rate:

Deferred Taxes = (Accelerated Depreciation − Straight Line Depreciation) × (Tax Rate) (1.11)
Deferred Taxes = ($50,000 − $25,000) × (0.25) = $6,250
Cash Flow (CF) Difference = Accelerated Depreciation CF − Straight Line Depreciation CF (1.12)
Cash Flow (CF) Difference = $181,250 − $175,000 = $6,250
The earnings per share difference can be determined by:

Earnings/Share (Difference) = (1 − Tax Rate) × (Accelerated Depreciation − Straight Line Depreciation)/(Number of Shares) (1.13)
Earnings/Share (Difference) = (1 − 0.25) × ($50,000 − $25,000)/10,000 = 1.875 $/share
In summary, the accelerated depreciation results in lower taxes paid and higher cash flows
than the straight line depreciation method, but it also results in lower profits and lower share
earnings. Thus, they use the accelerated depreciation method for reporting to the government
and the straight line depreciation method for reporting to shareholders. Although this practice
is legal, it does raise concerns about it being ethical.
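The whole comparison can be scripted; the sketch below (illustrative Python; the helper name analyze is ours) recomputes both analyses from the Table 1.1 data and the differences given by Equations (1.10), (1.12), and (1.13).

# Recompute the Table 1.1 example for both depreciation methods (illustrative sketch).
def analyze(depreciation, revenues=1_000_000, costs=775_000, tax_rate=0.25, shares=10_000):
    gross = revenues - costs - depreciation
    taxes = tax_rate * gross
    net = gross - taxes
    cash_flow = revenues - costs - taxes
    return {"taxes": taxes, "net_profits": net, "cash_flow": cash_flow, "eps": net / shares}

accel = analyze(depreciation=50_000)      # MACRS (accelerated), reported to the government
straight = analyze(depreciation=25_000)   # straight line, reported to stockholders

deferred_taxes = straight["taxes"] - accel["taxes"]          # Eq. (1.10): 6,250
cf_difference = accel["cash_flow"] - straight["cash_flow"]   # Eq. (1.12): 6,250
eps_difference = straight["eps"] - accel["eps"]              # Eq. (1.13): 1.875 $/share
print(deferred_taxes, cf_difference, eps_difference)         # 6250.0 6250.0 1.875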
In some instances, a company may have one project with negative cash flows but a net
positive cash flow overall. This can occur when a company has several projects and a start-up
project may be negative in its initial stages and other projects in the company may have positive
cash flows resulting in a net positive cash flow. Construction projects often take long time periods
and they are not paid until certain goals have been reached and thus may have negative cash flows
until they are paid. A longer working capital cycle often results in negative cash flows for short
periods. For example, when a housing developer builds a home, the developer does not recover
the expenses until the home is sold and has a negative cash flow until payment is received for
the home.
Hence, a reasonable amount of positive cash flows from operations is significant for three
reasons [1].
1. Healthy cash flows can help a company meet its funding requirements internally rather
than borrowing in a high cost environment; but for major capital expenditures borrowing
is often necessary and advantageous for the company.
2. Having cash available permits the making of purchases more quickly and often at lower
costs.
3. A company’s ability to manage its debts indicates the efficiency and strength of the business
to its customers, stockholders, and employees.
1.4 CASH FLOW DIAGRAMS
Cash flow diagrams are diagrams of the revenues and expenses over time. The abscissa (x-axis)
represents time and the time between periods, which is usually constant such as one year, whereas
the ordinate (y-axis) represents the amount of cash flow and is usually different for each time
period. Cash flows are assumed to occur at the end of the period. There are three primary types
of cash flow diagrams: (1) the basic cash flow diagram where the net cash revenues and net
cash expenses are both shown; (2) the net cash flow diagram where the net incomes per period
and net expenses per period are combined into a net overall cash flow per period; and (3) the
cumulative cash flow diagram where the cumulative of the net cash flows are plotted. The most
common diagram is the net cash flow diagram and the second most utilized is the cumulative
cash flow diagram to illustrate the breakeven time. Since the cash flows are considered to occur
at the end of the period, the time values on the diagram represent the end of the period. One
should always make a cash flow diagram when possible in solving problems and in most cases
the flows will be different for the various time periods.
We will consider an example with an initial investment of $10 which results in a net
revenue stream of $6 for each time period (which could be the sum of two or more individual
revenue streams) and a net expense stream of $3 over 5 time periods. Figure 1.1 shows the basic
cash flow diagram, with the investment (an expense) occurring at time zero and the net revenues
(positive) and net expenses (negative) for each period of the study.
Figure 1.1: Basic cash flow diagram showing net revenue and net expense cash flows for each
time period.
Figure 1.2 shows the net cash flows, the difference between the net revenues and net
expenses. It is used as the starting point for calculations; somewhat like the use of the free-body
diagram for solving in statics courses.
Figure 1.2: Net cash flow diagram showing net cash flows for each time period.
Figure 1.3 shows the cumulative cash flows which gives an indication as to when the
product will become profitable, which is 4 years in this problem. If the cash flows were uniform
over the project period, the result could be and is often interpreted as being 3.33 years. However,
the cash flows are considered to occur at the end of the period and 4 years should be used if one
does not know that the cash flows are uniform over the project period.
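A minimal sketch (illustrative Python) of how the cumulative cash flows and the breakeven period of this example can be tabulated from the net cash flows:

# Net cash flows for the example: a $10 investment at time 0, then $6 - $3 = $3 per period.
net_cash_flows = [-10, 3, 3, 3, 3, 3]

cumulative = []
running_total = 0
for cash_flow in net_cash_flows:
    running_total += cash_flow
    cumulative.append(running_total)
print(cumulative)    # [-10, -7, -4, -1, 2, 5]

# With end-of-period cash flows, breakeven is the first period whose cumulative value reaches zero.
payback = next(period for period, total in enumerate(cumulative) if total >= 0)
print(payback)       # 4 periods (3.33 if the flows were known to be uniform within each period)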
Since all cash flows are assumed to be at the end of the period, the initial investment occurs
at time zero, which is the end of Period 0 and it is also the start of Period 1. Similarly, the end
of Period 1 is also the start of Period 2. As we get into more complex examples, the cash flow
per period will tend to be different for each period and the cash flow diagram helps in properly
formulating the problem. Some of the other items that will be considered are accounts payable,
accounts receivable, inventory changes, etc., so some of the formulas presented must be
adjusted for these items.
Figure 1.3: Cumulative cash flow diagram showing cumulative cash flow over each time period
which indicates project becomes positive between the end of time period 3 and the end of time
period 4.

1.5 SUMMARY
Cash flows and profits are the two primary measures of corporate projects, but cash flow is the
measure that is utilized for evaluating projects as positive cash flows are critical and profits are a
component of the cash flows. Profits using accelerated depreciation are lower than the straight
line depreciation, so accelerated depreciation is used for determining taxes and straight line
depreciation is used for reporting to stockholders. The three cash flow diagrams were presented
and the net cash flow diagram is used for most economic evaluations and the cumulative cash
flow diagram is used for payback period calculations.
1.6 REFERENCES
[1] Creese, Robert C. and Adithan, M., Strategic Cost Analysis for Project Managers and Engi-
neers, 2nd ed., New Academic Science Ltd., page 3, 2012. 8
1.7 EVALUATIVE QUESTIONS
1. Which depreciation method, accelerated or straight line, gives lower amounts of depreci-
ation during the last stages of the project life?
2. Depreciation is considered to be what type of item and what is the main purpose of de-
preciation?
3. Use the information in Table 1.2 to determine the items listed below for the Accelerated
Depreciation Method and for the Straight Line Depreciation Method. The company has
20,000 shares of stock.
Table 1.2: Cash flow and depreciation data for problem 3

Item                                  Amount ($)    Amount (%)
Revenue                               1,000,000
Costs                                   800,000
Tax Rate (%)                                              40
Depreciable Investment                  400,000
Depreciation—Straight Line               40,000           10
Depreciation—Accelerated (MACRS)         80,000           20

3.1 Use the Accelerated Depreciation method and determine:
(a) Gross Profits
(b) Net Profits
(c) Taxes
(d) Cash Flows
(e) Net Profits/share
3.2 Use the Straight Line Depreciation method and determine:
(a) Gross Profits
(b) Net Profits
(c) Taxes
(d) Cash Flows
(e) Net Profits/share
3.3 Determine the amount of deferred taxes and the difference in the net profits/share
and the difference in cash flows between the Accelerated Depreciation and Straight
Line Depreciation methods.
3.4 An error was made in determining the costs and instead of $800,000 they were
$940,000. What does this difference make in the results? Calculate the Cash Flows
and Net Profits/Share using both depreciation methods and compare the results.
4. Use the information in Table 1.3 to determine the items listed below for the Accelerated
Depreciation Method and for the Straight Line Depreciation Method. The company has
20,000 shares of stock.
4.1 Use the Accelerated Depreciation method and determine:
(a) Gross Profits
(b) Net Profits
(c) Taxes
(d) Cash Flows
(e) Net Profits/share
Table 1.3: Cash flow and depreciation data for problem 4

Item                                  Amount ($)    Amount (%)
Revenue                               1,000,000
Costs                                   800,000
Tax Rate (%)                                              20
Depreciable Investment                  500,000
Depreciation—Straight Line               50,000           10
Depreciation—Accelerated (MACRS)        100,000           20
4.2 Use the Straight Line Depreciation method and determine:
(a) Gross Profits
(b) Net Profits
(c) Taxes
(d) Cash Flows
(e) Net Profits/share
4.3 Determine the amount of deferred taxes and the difference in the net profits/share and
the difference in cash flows between the Accelerated and Straight Line Depreciation
methods.
5. The large companies use accelerated depreciation when reporting to the government and
straight line depreciation when reporting to stockholders.
5.1 This is legal, but is it ethical? Give your reasons for supporting your answer.
5.2 Should a law be made to permit the same depreciation method to both the stock-
holders and the government? Give your reasons for supporting your answer.
6. A company made an investment of $1,200,000 for a machine to manufacture a new prod-
uct. The sale of the product produced is expected to provide a uniform annual revenue
of $500,000 for 6 years. The annual operating, material, and maintenance expenses are
$250,000 and the salvage value of the machine at the end of the 6 years is $400,000. Draw
the cash flow diagram, the net cash flow diagram, and the cumulative cash flow diagram.
What is the breakeven payback period?
7. Explain the difference between cash flows and profits with respect to depreciation.
8.
(a) Given the following data in Table 1.4, create the basic cash flow diagram, the net
cash flow diagram, and the cumulative cash flow diagram.
(b) When is the breakeven or payback period?
(1) Assume end-of-year payments.
(2) Assume uniform payments throughout the year.
(c) What is the total profit for the project?
Table 1.4: Data for problem 8
PeriodInvestmentRevenueExpenses0181 546761223614477125610Totals186036

C H A P T E R 2
Financial Statements and the Purcell Diagram
2.1 INTRODUCTION
Financial statements are critical in measuring the performance of an enterprise, and the two
primary financial statements used are the Income Statement and the Balance Sheet. The Purcell
Diagram [1, 2], developed by W.R. Purcell, integrates these two financial statements into a
cash flow diagram. It indicates the overall cash flows for a company and illustrates the major
components. This is critical for financial analysts and it assists engineers in understanding the
importance of cash flows in the company picture and explains why project evaluations require a
cash flow analysis.
2.2 FINANCIAL STATEMENTS
The primary financial statements used for reporting are the Income Statement and the Balance
Sheet. The items such as long-term debt and short-term debt are not included to keep the
problem easy to visualize and to reduce the complexity as they are difficult to add to the basic
Purcell Diagram.
The income statement summarizes the revenues (sales), the major expenses (costs), the
depreciation, the taxes, and the profits. Some items have been added to the expenses on the
basic income statement to provide more detail on the total expenses. In the high technology
society, more money is being spent on Research and Development (R&D) than in the past and
these costs are significant. For example, as automobiles are switching from gasoline (fossil fuels)
engines to electric motor engines, the development of these electrical engine systems are being
done currently and the advanced models must undergo testing for performance and safety issues
before they can be marketed. This R&D must be expensed during the development stage and
the profits will occur only when the new technology is successfully marketed and those profits
will be used to support future R&D developments. More advanced production equipment is
being purchased to reduce labor and material costs and thus depreciation expenses will grow.
In addition to the federal government taxes, other taxes and fees have been increasing and are
added as a separate item. As labor costs are higher in the U.S., the top management costs have
also increased significantly in salaries and stock options and the legal and computer security
costs have also increased greatly due to cybersecurity threats. The example income statement for
Shawnee Corporation with some of these components is presented in Table 2.1.
Table 2.1: Income statement for Shawnee Corporation in 2020

Income Statement, Shawnee Corporation 2020 (End-of-Year)
Sales                                        620
Expenses
  Cost of Goods Sold               290
  Management Costs                  40
  R&D Expenses                      60
  Sales Expense                     70
  Other Taxes & Fees                20
  Depreciation                      40
  Total Expenses                             520
Profit Before Tax                            100
Taxes (Tax Rate 25%)                          25
Net Profit                                    75
The balance sheet indicates the financial positions of the company at the beginning and
at the end of the accounting period to show the progress during the year. The basic equation of
the balance sheet is that:
Assets = Liabilities + Equities (2.1)

The relationship between cash flows and profits developed in Chapter 1 was:

Cash Flows = Net Profits + Depreciation (2.2)

The more inclusive relationship between cash flows and profits is written as indicated by Equation (2.3) as:

Cash Flows = Net Profits + Depreciation + Adjustments (2.3)
The adjustments to the cash flows are the accounts receivable, accounts payable, new
equipment purchases, dividends paid, stock sales, and inventory changes and other items not
included in the profit calculations. These adjustments can have a major impact on the cash flows
and are considered in this chapter, but the long-term debt, short-term debt, principal, and in-
terest payments are not included at this level of development. However, these can be included
in the analysis in more advanced models. The balance sheet for Shawnee Corporation is shown
in Table 2.2.
Table 2.2: Balance sheet for Shawnee Corporation in 2020

Balance Sheet, Shawnee Corporation (Year 2020)
                                   Start 2020   End 2020
Assets
  Current Assets
    Cash                               260         290
    Accounts Receivables                 0          40
    Inventories
      Finished Goods                    50          25
      Work-in-Progress                  45          80
      Raw Materials                     30          30
  Fixed Assets
    Plant & Equipment                  100         110
  Totals                               485         575
Liabilities & Equities
  Current Liabilities
    Accounts Payable                     0          20
  Owner's Equity
    Common Stock                       485         555
  Totals                               485         575
2.3 THE PURCELL DIAGRAM
The Purcell Diagram combines the information of the Income Statement and the Balance Sheet
into a single representation of the cash flows [3]. The primary advantage of
the Purcell Diagram is that it shows how the cash is flowing through the company and is more
dynamic than the balance sheet and income statement. The Purcell diagram also uses more
detailed data, such as the Purchased Equipment ($50), Stock Sales ($5), and Dividends Paid
($10), and illustrates more details of the owner’s equity.
The Purcell Diagram shown in Figure 2.1 gives the cash flows in a much easier format
than the balance sheet and is recommended for use. If one rewrites Equation (2.3) to determine
the end-of-period cash flows, one has:
Cash Flows (end) − Cash Flows (start) = Net Profits + Depreciation + Adjustments (2.3)

or

Cash Flows (end) = Cash Flows (start) + Net Profits + Depreciation + Adjustments (2.4)
Figure 2.1: Purcell diagram for Shawnee Corporation in 2020.

From Tables 2.1, 2.2, and Figure 2.1, one can calculate the ending cash flows as:
D
Cash Flows (start)
Net Profits
D C
260
{
{
(cid:0)
(cid:0)
50
C
75
C
Purchased Equipment
C
C
5
Dividends Paid
C
Total Inventory Increase (
2.4. SUMMARY 19
Adjustments
Depreciation
40
C
Stock Sales
C
25
35
0)
C
C
(cid:0)
Accounts Payable
Accounts Receivable }
(cid:0)
10
(cid:0)
(cid:0)
40 }
(cid:0)
75
40
50
5
10
10
20
40
C
(cid:0)
115 (cash flow change via profits and depreciation)
C
C
(cid:0)
(cid:0)
(cid:0)
85 (total adjustments)
(cid:0)
(cid:0)
C
10
20
C
260
260
290
C
C
(cid:0)
D
D
D
Cash Flows (end)
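The same end-of-year cash calculation can be written as a short sketch (illustrative Python) of Equation (2.4) applied to the Shawnee Corporation figures:

# Equation (2.4) applied to the Shawnee Corporation 2020 data (illustrative sketch).
cash_start, net_profits, depreciation = 260, 75, 40
adjustments = (
    -50               # purchased equipment
    + 5               # stock sales
    - 10              # dividends paid to stockholders
    - (-25 + 35 + 0)  # total inventory increase (finished goods, WIP, raw materials)
    + 20              # increase in accounts payable
    - 40              # increase in accounts receivable
)
cash_end = cash_start + net_profits + depreciation + adjustments
print(adjustments, cash_end)   # -85 290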
Note that the capital equipment is not listed as an expense item on the Income Statement
and appears only on the Purcell diagram. Capital equipment must be depreciated over the life of
the equipment and the depreciation is listed as an expense on the Income Statement. However,
it does affect the cash flows and is presented as one of the adjustments to the cash flows. It is
included in the fixed assets as the increase in the fixed assets represents the difference between
the purchased equipment (an increase in assets) and the depreciation (a decrease in the assets).
This can also be used in another manner for calculating the ending value of a particular
activity. The ending values need to be calculated and the general equation used is:
Ending Value = Starting Value + Inputs to Activity − Outputs of Activity (2.5)
If one examines the cash flow activity in the Purcell Diagram, it is noted that:
290 (end) = 260 (start) + 580 (receipts from customers) − 555 (outgoing funds) + 5 (stock sales)
Equation (2.5) can be used to determine all the ending values for the balance sheet and
Purcell Diagram. The Purcell diagram shows the inputs and outputs for the various activities. The
Purcell diagram is an excellent tool in complementing the Income Statement and the Balance
Sheet.
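Equation (2.5) can also be checked activity by activity; a minimal sketch (illustrative Python; the helper name ending_value is ours) for the cash activity of Figure 2.1:

# Equation (2.5): Ending Value = Starting Value + Inputs to Activity - Outputs of Activity
def ending_value(starting_value, inputs, outputs):
    return starting_value + sum(inputs) - sum(outputs)

# Cash activity of the Shawnee Purcell diagram: receipts from customers and stock sales flow in,
# the outgoing funds flow out (2020 values).
print(ending_value(260, inputs=[580, 5], outputs=[555]))   # 290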
2.4 SUMMARY
The Purcell diagram ties together the Income Statement and Balance Sheet to illustrate how the
cash flows through the enterprise. It shows why cash flows are important and how they
are related to profits, owner's equity, fixed assets, and the other key financial items for monitoring
the performance of the enterprise. The focus of most of the remaining chapters will be
on the cash flow calculations and the Purcell Diagram indicates how they relate to the financial
statements of the Balance Sheet and Income Statement.
2.5 REFERENCES
[1] Purcell, W. R., Understanding a Company’s Finances—A Graphical Approach, Houghton
Mifflin Company, Boston, 1981. 15
[2] Purcell, W. R., Understanding a Company’s Finances: Look at Financial Reports, see a Finan-
cial Picture of the Business, July 25, 2009, Kindle eBook. 15
[3] Creese, Robert C. and Adithan, M., Strategic Cost Analysis for Project Managers and Engi-
neers, 2nd ed., New Academic Science Ltd., 2012. 17
2.6 EVALUATIVE QUESTIONS
1. Use the Income Statement and the Balance Sheet in Tables 2.3 and 2.4 to complete the
Purcell Diagram for the Financial Flows for the Shawnee Corporation in Figure 2.2. The
equipment purchased during the year was 70 and the labor was the same as 2020. Fill in
the blanks in Table 2.4 and in the Purcell Diagram of Figure 2.2.
Table 2.3: Income statement for Shawnee Corporation for 2021
Income Statement, Shawnee Corporation 2021 (End-of-Year)
Sales                                        655
Expenses
  Cost of Goods Sold               285
  Management Costs                  40
  R&D Expenses                      60
  Sales Expense                     70
  Other Taxes & Fees                20
  Depreciation                      60
  Total Expenses                             535
Profit Before Tax                            120
Taxes (Tax Rate 25%)                          30
Net Profit                                    90
Figure 2.2: Purcell diagram for Shawnee Corporation in 2021.
Table 2.4: Balance sheet for Shawnee Corporation for 2021

Balance Sheet, Shawnee Corporation (Year 2021)
                                   Start 2021   End 2021
Assets
  Current Assets
    Cash                               290        (1)
    Accounts Receivables                40         60
    Inventories
      Finished Goods                    25         30
      Work-in-Progress                  80         85
      Raw Materials                     30         40
  Fixed Assets
    Plant & Equipment                  110        (2)
  Totals                               575        (3)
Liabilities & Equities
  Current Liabilities
    Accounts Payable                    20         30
  Owner's Equity
    Common Stock                       (4)        (5)
  Totals                               575        (6)
2. Use the information in Tables 2.5 and 2.6 with the additional information and complete
the Purcell Diagram in Figure 2.3. The additional information given is for calculating some
of the adjustment terms in Equations (2.3) and (2.4).
Additional information for 2024 is:
Stock Sales 20
Dividends Paid 20
Labor Used: For Raw Materials 40, For WIP 35, For Finished Goods 55
Supplier Services 90
Equipment Purchased 90
Cash Received from Customers 780
Raw Materials to WIP 240
WIP to Finished Goods 325
Raw Materials for Prod. 200
WIP Services 45
Money Paid to Suppliers 325
Figure 2.3: Purcell diagram for NAC Corporation in 2024.
Table 2.5: Income statement for NAC Corporation for 2024

Income Statement, NAC Corporation 2024 (End-of-Year)
Sales                                        800
Expenses
  Cost of Goods Sold               400
  Management Costs                  40
  R&D Expenses                      60
  Sales Expense                     70
  Other Taxes & Fees                30
  Depreciation                      80
  Total Expenses                             680
Profit Before Tax                            120
Taxes (Tax Rate 25%)                          30
Net Profit                                    90

Table 2.6: Balance sheet for NAC Corporation for 2024

Balance Sheet, NAC Corporation (Year 2024)
                                   Start 2024   End 2024
Assets
  Current Assets
    Cash                               320        415
    Accounts Receivables                 0         20
    Inventories
      Finished Goods                    40         20
      Work-in-Progress                  90         85
      Raw Materials                     40         40
  Fixed Assets
    Plant & Equipment                  100        110
  Totals                               590        690
Liabilities & Equities
  Current Liabilities
    Accounts Payable                     0         10
  Owner's Equity
    Common Stock                       590        680
  Totals                               590        690

C H A P T E R 3
Costs and Cost Estimating
3.1 INTRODUCTION
Costs, interest payments, and cost estimating have been in existence since the first merchants
started buying and selling products. The focus in business today tends to be on profits and cash
flows, but the primary component in determining the amount of profits or cash flows are the
expenses or costs. Some Biblical verses related to interest and cost estimating are presented to
indicate that the issues of interest and cost estimating have existed for a long time; the verses are
paraphrased as:
1. Deuteronomy 23: 19–20 You must charge no interest on a loan made to your brother. On
a loan to a foreigner, you must charge interest…
2. Luke 14:28 For which of you, wanting to build a tower, does not first sit down to determine
the cost, to see if he has enough to complete it?
Although most religions now tend to charge interest to all people, a few coun-
tries/communities still tend to charge interest only to foreigners. Although costs, cost estimat-
ing, and interest payments are not new and have been in existence since pre-Biblical times, the
methods for evaluating them have changed considerably over the centuries.
Two terms that are related and represent the before and after of a project/product are
cost estimating and cost accounting. Cost estimating is done to determine if a project/product is
feasible and cost accounting is to determine the actual costs of the project/product for evaluating
the profits and determining the accuracy of the cost estimate. These two definitions are from
AACE International Recommended Practice No. 10S-90, Cost Engineering Terminology [1], as
revised October 30, 2017.
Cost Estimating: Cost estimating is the predictive process used to quantify, cost, and price
the resources required by the scope of an investment option, activity, or project. Cost estimating
is a process used to predict uncertain future costs. In that regard, a goal of cost estimating is
to minimize the uncertainty of the estimate given the level and quality of scope definition. The
outcome of cost estimating ideally includes both an expected cost and a probabilistic cost dis-
tribution. As a predictive process, historical reference cost data (where applicable) improve the
reliability of cost estimating. Cost estimating, by providing the basis for budgets, also shares a
goal with cost control of maximizing the probability of the actual cost outcome being the same
as predicted (November 2012).
Cost Accounting: The historical reporting of actual and/or committed disbursements (costs
and expenditures) on a project. Costs are denoted and segregated within cost codes that are
defined in a chart of accounts. In project control practice, cost accounting provides the mea-
sure of cost commitment and/or expenditure that can be compared to the measure of physical
completion (or earned value) of an account ( January 2003).
The calculation of profits requires the determination of both revenues and expenses/costs.
The expenses/costs represent the majority of the items on the Balance Sheet and management
has more control over the expenses/costs than on the revenues items. The revenues have more
outside factors affecting their control, such as customer demands, competition, and general eco-
nomic prosperity. Cash flows are closely related to profits and both are essential in determining
the success of a company.
To illustrate the influence of costs upon profits, Table 3.1 compares the effect of a 10%
change in increased price, increased sales volume, and cost reductions upon the gross profits [2].
It indicates that the price increase is the best option, but it is only slightly better than the 10%
cost reduction. It is much more likely for companies to achieve a cost reduction of 10% than
an increase of selling price of 10%. With this given scenario, the 10% price increase results in a
125% increase in total profits and a 10% cost reduction results in 115% increase in profits. This
is why companies focus on cost reductions when trying to increase profits as they have more
control on costs whereas price increases are difficult to achieve in a competitive market. When
there is a monopoly, outrageous price increases can occur, which has occurred in the medical
industry when only one supplier exists and when one's life is involved. This, however, is an
exception to usual business practices, but it illustrates that unethical practices are often not
illegal.
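The scenario comparison of Table 3.1 can be reproduced with a short sketch (illustrative Python); it assumes, as the table does, that variable costs follow the sales volume and that the 10% cost reduction applies to both variable and fixed costs.

# Two-product gross profit model for the Table 3.1 scenarios (illustrative sketch).
products = {"A": {"sales": 80, "variable": 57, "fixed": 19},
            "B": {"sales": 20, "variable": 12, "fixed": 4}}

def gross_profit(price_factor=1.0, volume_factor=1.0, cost_factor=1.0):
    total = 0.0
    for p in products.values():
        sales = p["sales"] * price_factor * volume_factor
        variable = p["variable"] * volume_factor * cost_factor  # variable costs follow volume
        fixed = p["fixed"] * cost_factor                        # fixed costs do not follow volume
        total += sales - variable - fixed
    return total

base = gross_profit()
print(base)                                              # 8.0 gross profit in the base case
print(round(gross_profit(price_factor=1.1) - base, 1))   # 10.0, the 125% increase of Table 3.1
print(round(gross_profit(volume_factor=1.1) - base, 1))  # 3.1
print(round(gross_profit(cost_factor=0.9) - base, 1))    # 9.2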
There are three basic approaches to cost estimating which are: Top-Down Estimating,
Bottom-Up Estimating, and Combined Top-Down and Bottom-Up Estimating. There are
other approaches to cost estimating, but these are the primary approaches utilized. Each of
the approaches will be illustrated by examples, but the traditional method of cost estimating has
been the bottom-up approach.
In the bottom-up estimating approach the complete details of the production process are
known and cost for each step of the process can be estimated. The cost of all the steps of the
process are estimated and the total cost of all the steps is the estimate for the product or project.
This is done by most companies when the new products are similar to existing products and
relatively small changes occur in the processes and the estimates tend to be accurate. Greater
changes in the new product or process tend to result in more error in the estimate accuracy as
items may be omitted in the estimate or more corrections will be needed in the manufacturing
process. A foundry producing cast iron six-cylinder engine blocks could use that as the basis for
making an estimate for a cast iron four-cylinder or an eight-cylinder engine block as most of
the steps in the process would be similar. However, the changing from a six-cylinder cast iron
engine to a six-cylinder aluminum would be more difficult as the process steps differences would
be greater.

Table 3.1: Gross profit improvement analysis for a two product system [2]

                            Current      10% Price    10% Sales    10% Cost
                            Practice     Increase     Increase     Reduction
Total Cost + Profit
  Product A                    80           88           88           80
  Product B                    20           22           22           20
  Total                       100          110          110          100
Variable Costs
  Product A                    57           57           62.7         51.3
  Product B                    12           12           13.2         10.8
Fixed Costs
  Product A                    19           19           19           17.1
  Product B                     4            4            4            3.6
Total Cost
  Product A                    76           76           81.7         68.4
  Product B                    16           16           17.2         14.4
Gross Profit
  Product A                     4           12            6.3         11.6
  Product B                     4            6            4.8          5.6
  Total                         8           18           11.1         17.2
Profit Increase, Amount (vs. Base Case)
  Product A                    —             8            2.3          7.6
  Product B                    —             2            0.8          1.6
  Total                        —            10            3.1          9.2
Profit Increase, Percent (vs. Base Case)
  Product A (%)                —           200           57.5        190
  Product B (%)                —            50           20           40
  Total (%)                    —           125           38.8        115
The traditional top-down estimating approach is usually based on Cost-Estimating Re-
lationships (CERs) using primary cost drivers as the actual process steps and design details may
not be known. These tend to focus on products with significant changes in design and where the
manufacturing steps and sequence are not known. This often occurs for new military equipment
to be manufactured and performance goals are specified, but the detailed product design has not
been completed.
A second approach to top-down estimating is Target Costing, which is done in industries
which purchase many of the components for their product. In the automobile industry, the
manufacturers must estimate the total cost of the model to assure that it will be competitive
in the market place. The automobile manufacturers produce the main components such as the
engine, power train, frame, and skin, but usually have suppliers produce the tires, wheels, seats,
wiring harnesses, glass, audio, etc. They need to set targets for these supplier components so that
they can meet their total cost target to be competitive in the market place.
The third cost approach is a combination of top-down and bottom-up costing. This is
done for projects of long duration, such as the production of an aircraft carrier. It takes several
years to make the product, such as 10 years, and technology will change significantly over that
time. The basic ship building costs can be estimated by bottom-up costing, but the cost for the
electronics such as radar systems and defensive systems that will not be designed until several
years in the future and thus these costs cannot be determined by bottom-up costing approaches,
but will need CERs to estimate the costs.
3.2 COST COMPONENTS FOR ESTIMATES
3.2.1 BASIC COMPONENTS
There are numerous cost components and they will vary by industry, company and product and
thus only a sample of the items commonly used in manufacturing are presented in Table 3.2.
Other companies, such as financial services, food services, medical services, hotel and motel
industries, and travel services will have similar components, perhaps with different names. In
activities such as R&D, the expenses of the R&D can be determined by direct cost methods,
but they must be allocated as an overhead to current products to recover the overhead costs.
3.2.2 TRADITIONAL AND ABC OVERHEAD ALLOCATION METHODS
The overhead costs are replacing direct labor costs as the largest cost component because direct
labor costs have decreased through automation replacing labor and a large increase has occurred
in overhead cost components such as R&D, legal, safety, and administrative management costs.
The material cost proportion can vary considerably depending upon the amount of purchased
materials involved. In the automotive manufacturing business, most auto companies purchase the
glass, the interior seats, the tires, the head and tail light assemblies, the electrical harnesses, etc.,
so the total material costs are high as the purchased materials are the finished products of the
suppliers. In companies that produce the glass for the auto windows, the raw materials are a
relatively small portion of the total costs, but the processing costs are very high.

Table 3.2: Cost components for estimating in manufacturing

A. Pre-production Costs (often in Corporate Overhead and not as separate costs)
   1. Research & Development
   2. Engineering Design
   3. Process Engineering Design (Tooling)
B. Production Costs (Direct)
   4. Direct Materials
   5. Direct Labor
   6. Other Direct Costs
      i. Tooling
      ii. Processing
   7. Contingency/Risk
      i. Product-Engineering Design Changes
      ii. Process-Tool Changes
C. Overhead Costs (Indirect)
   8. Overhead Costs
      i. Shop (Including Indirect Materials & Labor)
      ii. Plant
      iii. Corporate
      iv. Sales
   9. Adjustments to Costs
      i. Quantity Adjustments
      ii. Surcharges
         a. Special Testing
         b. Special Delivery
D. Total Costs
   10. Total Costs = Sum of 1 through 9 + Taxes
E. Profit
   11. Profit (Mark-up) + Tax Estimate on Profit
F. Selling Price
   12. Selling Price = Total Costs + Profit + Estimated Profit Tax
One approximate method for determining the selling price based on the material cost was
the 1-3-9 rule of Rondeau [3]. The 1 represented the material cost (or 1.2 including scrap and
tooling), the 3 representing the manufacturing cost, and the selling price represented by the 9.
Thus, for a product that used materials of $3, the manufacturing cost would be $9 and the selling
price would be $27. The manufacturing cost includes factory overhead including factory admin-
istration, product scheduling, quality control, material handling, shipping and receiving, as well
as direct labor. The selling price includes the administrative overhead, the R&D, information
systems and security (ISS), legal overheads, product testing, sales and marketing expenses, taxes,
and a mark-up of 10–25%. The ratio can be developed for specific companies or industries, but
the total overheads and other expenses are typically much greater than the manufacturing costs.
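A minimal sketch (illustrative Python) of the 1-3-9 rule; the ratios are the rule-of-thumb values and can be replaced by ratios developed for a specific company or industry.

# 1-3-9 rule of thumb: material cost -> manufacturing cost -> selling price (illustrative).
def one_three_nine(material_cost, manufacturing_ratio=3, selling_ratio=9):
    manufacturing_cost = material_cost * manufacturing_ratio
    selling_price = material_cost * selling_ratio
    return manufacturing_cost, selling_price

print(one_three_nine(3))        # (9, 27), the example in the text
print(one_three_nine(3 * 1.2))  # using 1.2 x materials to include scrap and tooling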
There are two major approaches to estimating of overhead costs. The traditional approach
uses one or more of the direct costs to estimate the overhead cost, such as direct labor dollars,
direct labor hours, or total direct costs. The Activity Based Costing (ABC) uses the measurement
of a major activity for a department, such as purchasing orders, and determines a unit cost for
that activity which can be assigned to a specific department or product. The determination of
specific activities for administration, R&D, sales, and marketing are difficult to develop and this
has been a problem with the traditional ABC.
The traditional allocation of overhead was based upon the amount of direct labor used in
the production, which could be measured in either hours or dollars. This was fairly accurate when
direct labor was the largest cost component, as in the early 1900s, but over the last 75 years companies
have modernized equipment and used more automated material handling and robotics, so the
direct labor cost has been reduced from 40–60% of the total costs to 10–30%.
New costs such as ISS, computer maintenance, and enhanced legal and safety expenses have
increased over the years so labor is no longer the dominant cost as the total support costs far
exceed the labor costs. Today direct labor costs are only 10–30% of the total costs and overhead
rates based only upon direct labor would result in percentages from 300–500%. In many cases,
these overheads are not even closely related to the amount of direct labor and this results in
poor overhead allocations. This has resulted in other variables and/or additional variables being
utilized to estimate the overhead costs. Direct costs, such as material cost, can be assigned easily
to the products, but the support costs often have little direct connection to the various individual
products.
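A short sketch (illustrative Python, with hypothetical department figures) of the traditional direct-labor-based allocation, showing how the rate becomes several hundred percent when direct labor is a small share of the total cost:

# Traditional overhead allocation based on direct labor dollars (hypothetical figures).
total_overhead = 450_000         # support, ISS, legal, safety, administration, ...
total_direct_labor = 150_000     # direct labor is now only a small share of total cost

overhead_rate = total_overhead / total_direct_labor        # 3.0, i.e., a 300% overhead rate
product_direct_labor = 1_200                               # direct labor dollars charged to one lot
allocated_overhead = overhead_rate * product_direct_labor  # overhead assigned to that lot
print(f"{overhead_rate:.0%}", allocated_overhead)          # 300% 3600.0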
ABC takes the cost of the overhead activity and divides by the performance measure for
that activity to determine the rate. The overhead charge to the consuming department is the
product of the rate determined and the amount of the activity consumed by the department. For
example, the purchasing department is a service activity and its expenses are overhead. Thus, the
expenses of the department divided by the number of purchase orders processed would be the
rate and the production department would be charged by the product of the number of purchase
orders submitted times the rate. This led to several problems in determining rates as one purchase
order may contain only one item and another purchase order would contain numerous items.
Also, many of the purchase orders were for other support groups and could not be related to
specific products.
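The purchasing-department example can be written as a small sketch (illustrative Python); the figures are the same ones used for the piston purchase-order cost later in this section.

# Activity Based Costing: rate = activity cost / activity volume; charge = rate x activity used.
def abc_rate(activity_cost, activity_volume):
    return activity_cost / activity_volume

purchase_order_rate = abc_rate(56_000, 1_000)   # $56 per purchase order
piston_charge = purchase_order_rate * 120       # 120 piston purchase orders
per_unit = piston_charge / 38_000               # spread over 38,000 pistons
print(purchase_order_rate, piston_charge, round(per_unit, 3))   # 56.0 6720.0 0.177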
The newer approach is to develop rates based on the time for the activity and this has been
used in both ABC and traditional overhead determination. Time is what is being consumed
and the specific time values closely related to production are the direct labor time and the time
machines are being used for production. The use of time has changed the traditional ABC to
Time-Based ABC with the use of activities being related to time. The overhead costs are a much
larger component of the total cost and they are different in their support of the product, so
different measures are needed and they can be measured in units, such as units of production or
in time units such as machine hours, direct labor hours, or the sum of machine and labor hours.
Eventually the overheads need to be assigned to specific products to determine the product cost
and a selling price for recovering the costs and providing an acceptable profit.
An example is presented in Table 3.3 of a metal casting facility which is a supplier to the
automotive industry. The traditional ABC will be used for two activities and the administrative
and facility overheads will be determined by using the traditional direct labor hours, the machine
hours and the sum of the direct labor and machine hours. There are four products—pistons,
crankshafts, axles, and exhaust manifolds—and the unit selling price, unit direct material costs,
direct labor hours, and machine hours, as well as the ABC data for purchase orders and material
handling are presented. The machine hours are used for both the depreciation unit and utility
unit charge determinations.
The quantities, selling prices, and direct costs are known, and the problem is to determine
how to allocate the factory and administrative overhead expenses of $900,000 and the other
overheads of $652,000 to determine the individual product profit/loss. The ABC method utilizes
two drivers in the determination of the overhead: the Number of Purchase Orders and the
Number of Moves. The Number of Machine Hours would be considered as a driver in either
the ABC or the Traditional Method. The Number of Machine Hours and the Number of Labor
Hours are used for direct costs as they can be directly associated with a product. The data of
Tables 3.3 and 3.4 are used to determine the unit cost values for the four products.
Table 3.3: Cost data for overhead evaluations for an automotive metal casting facility

Overhead Item/Activity   Cost Driver         Piston  Crankshaft  Rear Axle  Exhaust Manifold  Total Activity  Total OH Cost ($)  OH Rate ($/unit)
Purchase Orders          # of Orders            120         300        200               380           1,000           56,000                56
Material Handling        # of Moves             200         350        450               400           1,400           56,000                40
Depreciation Expense     # of Machine Hours   2,000       4,000      2,000             2,000          10,000          500,000                50
Utility Costs            # of Machine Hours   2,000       4,000      2,000             2,000          10,000           40,000                 4
Factory Expenses                                                                                                      500,000
Administrative Expenses                                                                                               400,000
Total Expenses                                                                                                      1,552,000

Production Data                              Piston  Crankshaft  Rear Axle  Exhaust Manifold  Total           Total Cost ($)     Rate ($/unit)
Production Units                             38,000       8,000      6,700            34,000
Selling Price ($/unit)                           10         132         75                23
Direct Labor (hours)                          1,000       5,000      2,000             5,000  13,000                 520,000                40
Utility Cost (machine hours)                  2,000       4,000      2,000             2,000  10,000                  40,000                 4

Table 3.4: Unit cost determination (direct unit costs and overhead activities directly related to products)

Production Costs               Piston  Crankshaft  Rear Axle  Exhaust Manifold
Direct Materials ($/unit)       4.000      32.000     19.000             6.000
Direct Labor ($/unit)           1.053      25.000     11.940             5.882
Purchase OH ($/unit)            0.177       2.100      1.672             0.626
Material Handling ($/unit)      0.211       1.750      2.687             0.471
Depreciation Expense ($/unit)   2.632      25.000     14.925             2.941
Machine Utility ($/unit)        0.211       2.000      1.194             0.235
Total Direct Costs ($/unit)     8.284      87.850     51.418            16.155

The calculations of the unit costs for the Pistons in Table 3.4 will now be illustrated.
The Direct Material Cost per unit is given as:

DMC = $4.00/unit

The Direct Labor Cost per unit is calculated by:

DLC = (direct labor overhead rate × piston direct labor hours)/# of pistons produced    (3.1)
    = (40 $/hr × 1,000 hrs)/38,000 pistons
    = $1.053/unit

The Purchase Order Cost per unit is calculated by:

POC = (# of Piston purchase orders × Purchase Order OH rate)/# of pistons produced    (3.2)
    = (120 purchase orders × $56/purchase order)/38,000 pistons
    = $0.177/unit

The Material Handling Cost per unit is calculated by:

MHC = (# of Piston material moves × Material Handling OH rate)/# of pistons produced    (3.3)
    = (200 Piston material moves × $40/move)/38,000 pistons
    = $0.211/unit

The Depreciation Expense Cost per unit can be calculated by:

DEC = (# of Piston machine hours × Depreciation Expense OH rate)/# of pistons produced    (3.4)
    = (2,000 Piston machine hours × $50/machine hour)/38,000 pistons
    = $2.632/unit

The Machine Utility Cost per unit is also based upon machine hours and can be calculated by:

MUC = (# of Piston machine hours × Utility Cost OH rate)/# of pistons produced    (3.5)
    = (2,000 Piston machine hours × $4/machine hour)/38,000 pistons
    = $0.211/unit
These costs are summed in Table 3.4; the Total Direct Assignable Cost (TDAC) is the sum of
the material cost and the five calculated overhead costs:

TDAC = DMC + DLC + POC + MHC + DEC + MUC    (3.6)
     = 4.000 + 1.053 + 0.177 + 0.211 + 2.632 + 0.211
     = 8.284 $/unit
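As a cross-check of Table 3.4, the ABC rates and unit costs can be computed directly from the Table 3.3 data. The following Python sketch reproduces the direct assignable unit costs for all four products; the variable names are illustrative only and are not part of any standard costing system.

    # Sketch: reproduce Table 3.4 from the Table 3.3 data (values taken from the text).
    products     = ["Piston", "Crankshaft", "Rear Axle", "Exhaust Manifold"]
    units        = [38000, 8000, 6700, 34000]          # production units
    material     = [4.00, 32.00, 19.00, 6.00]          # direct material, $/unit
    labor_hours  = [1000, 5000, 2000, 5000]            # direct labor hours
    machine_hrs  = [2000, 4000, 2000, 2000]            # machine hours
    purch_orders = [120, 300, 200, 380]                # purchase orders
    moves        = [200, 350, 450, 400]                # material-handling moves

    labor_rate = 520000 / sum(labor_hours)             # $40 per direct labor hour
    po_rate    = 56000 / sum(purch_orders)             # $56 per purchase order
    move_rate  = 56000 / sum(moves)                    # $40 per move
    depr_rate  = 500000 / sum(machine_hrs)             # $50 per machine hour
    util_rate  = 40000 / sum(machine_hrs)              # $4 per machine hour

    for i, name in enumerate(products):
        dlc = labor_rate * labor_hours[i] / units[i]   # direct labor, $/unit
        poc = po_rate * purch_orders[i] / units[i]     # purchase order OH, $/unit
        mhc = move_rate * moves[i] / units[i]          # material handling OH, $/unit
        dec = depr_rate * machine_hrs[i] / units[i]    # depreciation, $/unit
        muc = util_rate * machine_hrs[i] / units[i]    # machine utility, $/unit
        tdac = material[i] + dlc + poc + mhc + dec + muc
        print(f"{name:16s} TDAC = {tdac:7.3f} $/unit")
    # Expected: Piston 8.284, Crankshaft 87.850, Rear Axle 51.418, Exhaust Manifold 16.155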
The sum of these direct and overhead costs is $8.284, but this does not include the factory
overhead expenses and administrative expenses. The unit costs for the other three products—
Crankshaft, Rear Axle, and Exhaust Manifold—can be evaluated in a similar fashion to that
presented for the Pistons. The factory and administrative expenses are related to time as many
are salaries and benefits and the labor time and/or machine time are logical predictors for allo-
cating these overhead expenses. The considerations presented are labor time, machine time, and
the combined labor and machine time. Historical data should be used to select the best time base
to use to allocate the Factory Overhead, Administrative Overhead, and other overheads such
as R&D and Sales and Marketing. These overhead parameters may be different for the various
overheads, that is Factory Overhead may be related best with direct labor hours while Admin-
istrative overhead may best be related to total labor plus machine hours. Table 3.5 presents the
allocation of the Factory Overhead and Administrative Overhead first as a function of direct
labor hours, then with machine hours, and finally with the sum of labor and machine hours.
The factory and administrative overheads will be calculated for the direct labor hour basis to
illustrate the calculations involved for the Piston.
The administrative overhead (PAOH) for the Piston in $/piston is calculated as:

PAOH = Piston Administrative Overhead
PAOH = (Piston DL hours/Total DL hours) × (Total Administrative Expenses/Total Pistons)    (3.7)
     = (1,000 hrs/13,000 hrs) × ($400,000/38,000)
     = 0.8097

The factory overhead (PFOH) for the Piston in $/piston is calculated as:

PFOH = Piston Factory Overhead
PFOH = (Piston DL hours/Total DL hours) × (Total Factory Expenses/Total Pistons)    (3.8)
     = (1,000 hrs/13,000 hrs) × ($500,000/38,000)
     = 1.0121
Table 3.5: Total product cost using different time bases

Direct Labor Hour Basis for Factory and Administrative Overhead Unit Costs
                               Piston  Crankshaft  Rear Axle  Exhaust Manifold
Total Direct Costs ($/unit)     8.284      87.850     51.418            16.155
Factory OH ($/unit)             1.012      24.038     11.481             5.656
Administrative OH ($/unit)      0.810      19.231      9.185             4.525
Total Product Cost ($/unit)    10.106     131.119     72.084            26.336
Selling Price ($/unit)         10.000     132.000     75.000            23.000
Product Profit/Loss ($/unit)   -0.106       0.881      2.916            -3.336

Machine Hour Basis for Factory and Administrative Overhead Unit Costs
                               Piston  Crankshaft  Rear Axle  Exhaust Manifold
Total Direct Costs ($/unit)     8.284      87.850     51.418            16.155
Factory OH ($/unit)             2.632      25.000     14.925             2.941
Administrative OH ($/unit)      2.105      20.000     11.940             2.353
Total Product Cost ($/unit)    13.021     132.850     78.284            21.449
Selling Price ($/unit)         10.000     132.000     75.000            23.000
Product Profit/Loss ($/unit)   -3.021      -0.850     -3.284             1.551

Machine plus Labor Hour Basis for Factory and Administrative Overhead Unit Costs
                               Piston  Crankshaft  Rear Axle  Exhaust Manifold
Total Direct Costs ($/unit)     8.284      87.850     51.418            16.155
Factory OH ($/unit)             1.716      24.457     12.979             4.476
Administrative OH ($/unit)      1.373      19.565     10.383             3.581
Total Product Cost ($/unit)    11.373     131.872     74.779            24.212
Selling Price ($/unit)         10.000     132.000     75.000            23.000
Product Profit/Loss ($/unit)   -1.373       0.128      0.221            -1.212

The total unit product cost on the direct labor hour basis is:

Total Unit Piston Cost (TUPC) = Total Direct Assignable Piston Costs (TDAPC)
                                + Piston Administrative Overhead (PAOH)
                                + Piston Factory Overhead (PFOH)    (3.9)
                              = 8.284 + 0.810 + 1.012
                              = 10.106 $/piston
The total profit would be the unit selling price minus the unit total cost, which for the
piston is:

Total Profit for Piston = Piston Selling Price/unit - Piston Total Cost/unit    (3.10)
                        = 10.000 - 10.106
                        = -0.106 $/piston (loss)
This example problem is intended to show the methodology of the calculations. Note that the two
other methods of assigning overheads gave the pistons larger losses; the largest loss was $3.021/piston,
using machine hours as the base for the factory and administrative overheads. Pistons would
require some precision machining, and that would be a cause. This also suggests that the selling
price is too low. In the automotive industry, the competition is fierce, with the auto companies
driving supplier prices down, but they do realize that they need the suppliers because they cannot
make all the components at competitive costs.
Table 3.5 indicates that the rear axle has the highest profit amount using direct labor hours,
$2.916/axle, but when using the machine hour base it has the greatest loss, $3.284/axle.
This variation indicates why it is important to determine which allocation base is best for allo-
cating the factory expense and administrative overheads. One could also use different overhead
bases, such as using direct labor hours for the factory expense allocation and machine hours for
the administrative expense allocation. A previous work [4] has additional examples on overhead
allocation.
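The sensitivity of the product profit or loss to the allocation base can be checked with a short calculation. The sketch below, which continues the illustrative variable names used in the previous sketch, allocates the $500,000 factory and $400,000 administrative expenses on the three bases of Table 3.5 and reports the unit profit or loss for each product.

    # Sketch: allocate factory/administrative overhead on three time bases (Table 3.5).
    products = ["Piston", "Crankshaft", "Rear Axle", "Exhaust Manifold"]
    units    = [38000, 8000, 6700, 34000]
    tdac     = [8.284, 87.850, 51.418, 16.155]         # Table 3.4 totals, $/unit
    price    = [10.00, 132.00, 75.00, 23.00]           # selling price, $/unit
    dl_hours = [1000, 5000, 2000, 5000]                # direct labor hours
    mc_hours = [2000, 4000, 2000, 2000]                # machine hours
    factory, admin = 500000, 400000

    bases = {
        "Direct labor hours": dl_hours,
        "Machine hours": mc_hours,
        "Labor + machine hours": [d + m for d, m in zip(dl_hours, mc_hours)],
    }
    for base_name, hours in bases.items():
        total_hours = sum(hours)
        print(base_name)
        for i, name in enumerate(products):
            share = hours[i] / total_hours             # fraction of the overhead pools
            foh = share * factory / units[i]           # factory OH, $/unit
            aoh = share * admin / units[i]             # administrative OH, $/unit
            profit = price[i] - (tdac[i] + foh + aoh)  # unit profit (+) or loss (-)
            print(f"  {name:16s} profit/loss = {profit:7.3f} $/unit")

Running the sketch reproduces the profit/loss rows of Table 3.5, including the piston loss of about $0.106 on the direct labor hour basis and about $3.021 on the machine hour basis.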
3.2.3 PROFIT CALCULATIONS
If you don’t know the costs, it is difficult to determine a competitive price, and nearly impossible
to predict the profitability of the product. Price is based upon cost, but also includes other factors
such as market conditions and the value of the product to the customer. Profit should not be a
constant percentage or constant amount for all products, but should vary with respect to your
ability to produce products. If you are “the best” on certain products, the profit should be greater
on those products and lower on those products for which you are not “the best.”
There are two methods commonly used to calculate the amount of profit to include in the
total cost calculation: the percent of cost and the percent of selling price, and the difference
between them is significant. An example of a product costing $100 with a 25% mark-up for
profit illustrates both methods.
Percent of Cost Calculations

Selling Price = Product Cost + Product Cost × decimal percent mark-up    (3.11)
              = $100 + $100 × 0.25
              = $100 + $25
              = $125

The profit would be = $125 - $100 = $25
Percent of Selling Price Calculations

Selling Price = Product Cost/(1.0 - decimal percent mark-up)    (3.12)
              = $100/(1.00 - 0.25)
              = $100/(0.75)
              = $133.33

The profit would be = $133.33 - $100 = $33.33
Now suppose that, while making the 25% mark-up, a very good customer wants a 20% discount and the
seller still expects to make a 5% profit. The discount is applied to the selling price, not to
the cost. Therefore, using the percent of cost calculation, the discount would be
20% of $125, which is $25, and thus the seller would have zero profit instead of a 5% profit.
If one uses the percent of selling price, the discount would be 20% of $133.33, which is
$26.66, and the seller would still make a profit of $6.66 after the 20% discount, which is
5% of the $133.33. This is why the preferred mark-up approach is to base it upon selling price
rather than cost.
Profit can also be considered on an after-tax basis by including the estimated taxes as
part of the selling price. Consider the previous example with a product cost of $100, a desired
mark-up of 25%, and an expected tax rate of 20%. What should the selling price be to obtain the
desired mark-up of 25% after taxes using the Percent of Selling Price Method?

The profit after taxes as calculated above would be = $33.33
The total profit needed including the taxes (Gross Profit) = $33.33/(1 - 0.20) = $41.66
The selling price would be = $100 + 41.66 = $141.66
The taxes would be 20% of the gross profit = 0.20 × 41.66 = 8.33
The net profit is $141.66 - 100 - 8.33 = $33.33
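A few lines of arithmetic confirm the difference between the two mark-up conventions and the after-tax gross-up. The following sketch simply restates the $100 cost, 25% mark-up, 20% discount, and 20% tax-rate example above; the variable names are illustrative.

    # Sketch: percent-of-cost vs. percent-of-selling-price mark-up (values from the example).
    cost, markup, discount, tax = 100.0, 0.25, 0.20, 0.20

    price_on_cost  = cost * (1 + markup)          # $125.00
    price_on_price = cost / (1 - markup)          # $133.33

    # Apply a 20% discount to each selling price and see what profit survives.
    profit_on_cost  = price_on_cost  * (1 - discount) - cost   # $0.00
    profit_on_price = price_on_price * (1 - discount) - cost   # about $6.67 (~5% of price)

    # After-tax version of the percent-of-selling-price method:
    gross_profit    = (price_on_price - cost) / (1 - tax)       # about $41.67 before taxes
    price_after_tax = cost + gross_profit                       # about $141.67 selling price
    net_profit      = price_after_tax - cost - tax * gross_profit   # back to about $33.33

    print(price_on_cost, price_on_price, profit_on_cost, profit_on_price)
    print(gross_profit, price_after_tax, net_profit)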
3.3 COST ESTIMATION ACCURACY
The accuracy of estimates will vary considerably depending upon the amount and accuracy
of the information available and upon the complexity of the project and the knowledge and
experience of the estimators. AACE International has developed an estimating system [5] with
five classes. The system presented here uses that data but modifies it to give two accuracy ranges
based on two degrees of Difficulty/Complexity, expressed as Low and High. Easy (low-difficulty)
estimates will have a higher percentage of the total estimating information required than difficult,
complex project estimates (high difficulty), and their accuracy ranges will be narrower. However,
the total amount of information required for an easy estimate is much smaller than that required
for a complex, difficult project, so for every estimate class a complex project requires more
information than an easy one.
The primary factor is the known amount of information required to produce the product/project
vs. the total amount of information required. Table 3.6 presents the expected accuracy range for
an estimate based upon the estimate class, the percent of total estimating information required,
and the difficulty level of the estimate. For example, consider a Class 4 estimate where a sim-
ple project may require 10 items of information and a complex project may require 200 items
of information. If the simple project has one item of information and the complex project has
16 items of information, the percentages of total information required are 10% for the simple
project and only 8% for the complex project. Thus, simple projects have a higher percentage of
information required, but the complex projects require more information.
If a Class 2 estimate is made for a complex, difficult project and has an estimated value of
$3,000,000, what is the expected range of the total project cost?
For a Class 2 estimate of a project with a high degree of difficulty, the estimate range is
-15% to +20%. That implies the estimate could vary from 85–120% of the project estimate,
which is a range from $2,550,000–$3,600,000 for a total variation range of $1,050,000. Al-
though the percentages of the estimate range seem small, for expensive projects the range is
large in terms of dollars, and that is why accurate estimates are important. For a Class 2 project
with low difficulty, the range of the total project is -5% to +5%, which would result in a range
of $2,850,000–$3,150,000 and a total variation range of $300,000, which is less than 30% of the
high-difficulty project range.
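The range arithmetic generalizes directly. The sketch below types in the accuracy percentages of Table 3.6 (they are not available from any library) and returns the low and high project values for a given estimate class and difficulty level.

    # Sketch: expected cost range from the Table 3.6 accuracy percentages.
    ACCURACY = {  # class -> {difficulty: (low %, high %)}
        5: {"Low": (-20, 30), "High": (-50, 100)},
        4: {"Low": (-15, 20), "High": (-30, 50)},
        3: {"Low": (-10, 10), "High": (-20, 30)},
        2: {"Low": (-5, 5),   "High": (-15, 20)},
        1: {"Low": (-3, 3),   "High": (-10, 15)},
    }

    def estimate_range(estimate, est_class, difficulty):
        low_pct, high_pct = ACCURACY[est_class][difficulty]
        return estimate * (1 + low_pct / 100), estimate * (1 + high_pct / 100)

    low, high = estimate_range(3_000_000, 2, "High")
    print(low, high, high - low)   # 2,550,000  3,600,000  1,050,000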
Cost control is a strategic element in the success of an enterprise. Costs have a major im-
pact upon profits, and cost is the item over which management has the most control in determining
the profits earned. With the reduction in direct labor costs, overhead costs have increased sig-
nificantly; they are now a major element of the total costs and are more difficult to assign to products
or projects than direct costs. Accurate estimates are necessary to provide the information needed to deter-
mine the potential financial success of a product or project. Estimates for small, similar products
with minor differences have narrower accuracy ranges than those for complex projects such as a
Mars mission or newly designed complex products or projects. Cost engineers should monitor
their estimates to develop an accuracy distribution and range for their estimates, which may be
different from those presented.
Table 3.6: Cost estimate accuracy based upon information required [5]

Estimate Class   EIR (%)   Difficulty Level   Typical Application   Expected Accuracy Range (%)
Class 5          1–10      Low                Screening             -20 to +30
                 0–2       High               Concept               -50 to +100
Class 4          5–20      Low                Budget Prep.          -15 to +20
                 1–15      High               Feasibility           -30 to +50
Class 3          15–50     Low                Budget Approval       -10 to +10
                 10–40     High               Budget Preparation    -20 to +30
Class 2          40–80     Low                Control Budget        -5 to +5
                 30–70     High               Budget Approval       -15 to +20
Class 1          75–100    Low                Check Estimate        -3 to +3
                 70–100    High               Bid/Tender            -10 to +15

EIR = percent of total estimating information required; the accuracy range is based upon project difficulty/complexity.
*Table 3.6 is altered slightly from the original publication by AACE International [5] by rearranging the data of the last column into two rows according to project difficulty/complexity level. The original last column listed Low: -20% to -50%, -15% to -30%, -10% to -20%, -5% to -15%, -3% to -10%, and High: +30% to +100%, +20% to +70%, +10% to +30%, +5% to +20%, +3% to +15%.
3.4 SUMMARY
Costs and cost estimating are extremely important in the determination of cash flows and profits.
The two major components are direct costs and overhead costs. The overhead costs have increased
greatly due to safety, environmental, legal, and other new costs while the direct labor costs have
decreased significantly over the past decades as a percentage of the total cost. The approaches to
overhead allocation are the traditional methods which utilized percentage relationships of direct
costs and the ABC method. The newer approaches are relating the costs to a time basis in both
the traditional approach and Time-Based ABC method. If production time can be reduced,
the overhead charge per unit will be reduced. The method of determining the product mark-up
should be based upon a percentage of the selling price rather than a percentage of the total cost.
The estimating class system permits a calculation of the range in the estimate based upon the
complexity of the project and the experience of the estimators.
3.5 REFERENCES
[1] Cost Engineering Terminology AACE International Recommended Practice No. 10S-90,
pages 29, 32, 2017. Copyright© 2017 by AACE International: All rights reserved. (Rec-
ommended Practice No. 10S-90, 2017, Cost Engineering Terminology is a free download
to the public by visiting web.aacei.org) 25
Reprinted with the permission of AACE International, 1265 Suncrest Towne Centre Dr.,
Morgantown, WV, 26505, U.S. Phone 304-296-8444.
Internet: http://web.aacei.org
e-mail mailto:[email protected]
[2] Creese, Robert C., Adithan, M., and Pabla, B. S., Estimating and Costing for the Metal
Manufacturing Industries, Marcel Dekker, Inc., page 13, 1992. (Reprinted with Permission
of Taylor and Francis Group LLC Books.) 26, 27
[3] Rondeau, H. F., The 1-3-9 rule for product cost estimation, Machine Design, pages 50–53,
August 1975. 30
[4] Creese, Robert C., Adithan, M., and Pabla, B. S., Estimating and Costing for the Metal
Manufacturing Industries, Marcel Dekker, Inc., pages 35–49, 1992. 36
[5] Cost Estimate Classification System—As Applied in Engineering, Procurement, and Construc-
tion for the Process Industries, AACE International Recommended Practice No. 18R-97,
pages 2–3, 2011. 38, 39
Reprinted with the permission of AACE International, 1265 Suncrest Towne Centre Dr.,
Morgantown, WV, 26505, U.S., Phone 304-296-8444.
Internet: http://web.aacei.org
e-mail: mailto:[email protected]
3.6 EVALUATIVE QUESTIONS
1. Find two additional references to interest charges or cost estimating before time zero AD.
2. In competitive markets, why is cost control performed more by management than increas-
ing prices?
3. What are the major differences between cost estimating and cost accounting?
4. What are the three basic approaches to cost estimating?
5. Repeat the analysis of Table 3.1 for a 5% increase in price, a 5% increase in sales, and 5%
cost reduction.
6. Determine the new overhead allocations for the Purchase Orders if the total overhead
cost for the purchasing department is $60,000 and the orders for the products are: Piston-
300, Crankshaft-500, Rear Axle-100, and Exhaust Manifold-100. Determine the unit
purchasing OH cost for each of the four products produced and compare them with those
in Table 3.4. The total quantities are the same as those listed in Table 3.4.
7. Determine the unit costs and total unit cost for the crankshaft if 10,000 units were pro-
duced (a recording error was made and 10,000 were made instead of 8,000) and compare
the results (costs and profits) to the 8,000 total and unit costs of Tables 3.4 and 3.5. Assume
all other values remain the same as in Tables 3.3, 3.4, and 3.5.
8. A product has a total cost of $2,000 and the desired profit is 15%.
(a) Determine the selling price if the profit percent is based upon the cost.
(b) Determine the selling price if the profit percent is based upon the selling price.
(c) Determine the selling price if the profit percent is based upon the selling price and
the tax rate is 20% and the profit desired is after taxes.
9. A Class 4 estimate is to be prepared for a project estimate of $1,500,000 that is very similar
to previous work. What is the estimate range for this project?
10. Company JEN does complex projects and the estimated project cost is $5,000,000 for a
Class 1 Estimate. What is the estimate range for this project and what is the minimum
percentage of the total estimating information required?
CHAPTER 4

Breakeven Analysis

4.1 INTRODUCTION
The previous chapter indicated that the best approach for allocating overheads was the time-
based system for either the traditional direct cost system or the Time-Based ABC system.
Breakeven analysis has traditionally focused upon production quantity-based breakeven anal-
ysis and the cost breakeven point. This worked well for marketing, sales, and top-management
for planning goals, but it provided little assistance at the plant management level where the
production quantity is not a variable, but a quantity specified by the customer. The plant su-
perintendent, production manager, or manufacturing manager can control the time to produce
the orders but they cannot control the quantity. Thus, time-based breakeven analysis [1–5] is a
concept being considered for use at the operation levels of production.
In addition to the two approaches to breakeven analysis, one can also consider different
breakeven points and these will be presented in detail. The costs will be considered as fixed,
variable, and semi-variable, but they will be considered differently for the two models as an
item fixed in one approach may be variable in the other. For example, in the quantity-based
system materials would be considered as variable whereas in the time-based system they would
be considered as fixed as the quantity is fixed.
4.2 BREAKEVEN MODEL BASICS
The basis of the two models is the same equation presented by George Dieter in Volume 20 of
the ASM Handbook [6]. The base equation is:
Cu = Cm + Cc/n + Cl/pr,    (4.1)

where
Cu = unit cost, $/unit
Cm = unit material cost, $/unit
Cc = capital cost, $
Cl = labor and overhead cost, $/hr
n = production quantity, number of units
pr = production rate, units/hr
If one multiplies Equation (4.1) by the production quantity, n, to obtain the total cost, the
equation for total cost (CT) is:

CT = nCm + Cc + Cl n/pr.    (4.2)

Rewriting this equation with n being emphasized:

CT = n × (Cm + Cl/pr) + Cc.    (4.3)

If the production rate is considered to be a constant, the equation would be written as:

CT = n × (slope constant in $/unit) + intercept ($)

and this is the basis of the production quantity-based approach.
The basis of the time-based approach is also Equation (4.2):

CT = nCm + Cc + Cl n/pr.    (4.2)

If one recognizes that:

n/pr = the total production time, T, and
nCm = the total materials cost for the order, Cmt,

then with slight rearranging:

CT = Cl × T + (Cmt + Cc).    (4.4)

Thus,

CT = (slope constant in $/unit time) × T + intercept ($)

and this is the basis of the time-based approach.
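The two forms of Equation (4.2) can be written directly as functions of quantity and of time. A small sketch with illustrative symbol names shows that both forms return the same total cost; the numerical inputs are made up for the demonstration and are not from the text.

    # Sketch: total cost as a function of quantity (Eq. 4.3) and of time (Eq. 4.4).
    def total_cost_quantity(n, Cm, Cc, Cl, pr):
        # n*(slope in $/unit) + intercept, with slope = Cm + Cl/pr and intercept = Cc
        return n * (Cm + Cl / pr) + Cc

    def total_cost_time(T, Cl, Cmt, Cc):
        # T*(slope in $/hr) + intercept, with slope = Cl and intercept = Cmt + Cc
        return Cl * T + (Cmt + Cc)

    n, Cm, Cc, Cl, pr = 1000, 4.0, 2500.0, 60.0, 20.0   # assumed example values
    T, Cmt = n / pr, n * Cm                              # total time and total material cost
    assert abs(total_cost_quantity(n, Cm, Cc, Cl, pr) - total_cost_time(T, Cl, Cmt, Cc)) < 1e-9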
4.3 BREAKEVEN POINTS
The four breakeven points are defined so that they can be used in either the time-based system
or the quantity-based system. The costs will be defined in three categories as fixed costs, variable
costs and as semi-variable costs, but the specific costs may not be in the same category in the
two bases. The typical costs in three categories will be listed for each cost base.
4.3.1 CATEGORIES AND TYPICAL EXAMPLES FOR THE PRODUCTION
QUANTITY-BASED SYSTEM
In the production quantity-based system the overhead costs are not a separate category and are
typically included in the fixed costs or may be assessed to the variable direct labor and/or direct
material costs. The production costs will be considered in the three components of fixed costs,
variable costs, and semi-variable costs.
• Fixed Costs—those costs which are independent of the production quantity required to
make the product. The typical examples of fixed costs are property taxes, depreciation,
administrative salaries, and plant overhead. The fixed overhead costs are often converted
to be components of the direct labor cost.
• Variable Costs—those costs that are a direct function of production quantity to make the
product. Two typical examples of variable costs are direct labor and direct material costs.
• Semi-Variable Costs—those costs which are partially fixed and partially variable. Costs
such as maintenance expenses and inspection costs frequently are considered as semi-
variable costs.
4.3.2 CATEGORIES AND TYPICAL EXAMPLES FOR THE PRODUCTION
TIME-BASED SYSTEM
These costs are divided into two major groups as production costs and overhead costs. The pro-
duction costs are divided into the three groups of fixed costs, variable costs, and semi-variable
costs. Since this is a time-based system, the overhead costs can be considered as a separate cat-
egory as a variable component.
a. Production Costs
– Fixed Costs—those costs which are independent of the production time of the prod-
uct; these would include the material costs. Since the quantity is fixed, the total ma-
terial costs would be fixed.
– Variable Costs—those costs which are directly dependent upon the production time
of the product; these would include machine time costs, depreciation costs, plant
overhead, and direct labor costs.
– Semi-variable Costs—those costs which are partially fixed and partially variable; these
include maintenance costs and utility costs.
b. Overhead Costs
– Variable Costs—those costs which are dependent upon time but are not directly at-
tributable to a specific product; these include administrative costs, research and de-
velopment costs, etc.
Two example problems which have been presented previously [4] will be used to illustrate
the types of variables, calculations, results, and interpretation of the results.
4.4 PRODUCTION QUANTITY-BASED BREAKEVEN EXAMPLE
The production quantity-based method is illustrated first as this is the traditional approach to
breakeven analysis, but all four breakeven points will be illustrated. This is the same example
used at the ASEE [4] conference and one should notice the differences in the data used for the
two approaches. The data for the production quantity-based model is listed in Table 4.1.
Table 4.1: Data for production quantity-based breakeven analysis [1, 4]

Item                          $/Unit        $      Decimal
Sales Revenue                     20
Production Costs
  Fixed Costs                              2,400
  Variable Costs                   3
  Semi-variable Costs              2         600
Required Return (Profit)                     900
Tax Rate (40%)                                        0.40
The calculations for the four breakeven points are presented in a general form using the
data from Table 4.1.

Let x = the units of production.

(a) Shutdown Breakeven Level (SD)

Revenues = Variable Costs + Semi-variable Costs
20x = 3x + (600 + 2x)
20x = 5x + 600
15x = 600
x = 40 units

(b) Cost Breakeven Level (C)

Revenues = Total Costs
Revenues = Variable Costs + Semi-variable Costs + Fixed Costs
20x = 3x + (600 + 2x) + 2,400
20x = 5x + 3,000
15x = 3,000
x = 200 units

(c) Required Return Breakeven Level (RR)

Revenues = Total Costs + Required Return
20x = 5x + 3,000 + 900
20x = 5x + 3,900
15x = 3,900
x = 260 units

(d) Required Return After Taxes Breakeven Level (RRAT)

Revenues = Total Costs + Required Return After Taxes + Taxes on Total Required Return
Revenues = Total Costs + Required Return After Taxes/(1.0 - Tax Rate)
20x = 5x + 3,000 + 900/(1 - 0.4)
20x = 5x + 3,000 + 1,500
20x = 5x + 4,500
15x = 4,500
x = 300 units

Note that to obtain a required return of $900 after taxes one must earn $1,500, as the
40% taxes would be $600. To obtain the pre-tax required return, one can use the expression:

Required Return Including Taxes = Required Return After Taxes/(1.0 - Tax Rate)    (4.5)

The tax rate is expressed as a decimal.
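Each of the four quantity-based breakeven points reduces to one linear equation in x. The following sketch solves them for the Table 4.1 data; the variable names are illustrative.

    # Sketch: four production quantity-based breakeven points (Table 4.1 data).
    revenue_per_unit = 20.0          # $/unit
    variable_per_unit = 3.0          # $/unit
    semivar_per_unit, semivar_fixed = 2.0, 600.0
    fixed_costs = 2400.0
    required_return, tax_rate = 900.0, 0.40

    margin = revenue_per_unit - variable_per_unit - semivar_per_unit      # 15 $/unit

    shutdown = semivar_fixed / margin                                     # 40 units
    cost_be  = (semivar_fixed + fixed_costs) / margin                     # 200 units
    rr_be    = (semivar_fixed + fixed_costs + required_return) / margin   # 260 units
    rrat_be  = (semivar_fixed + fixed_costs
                + required_return / (1 - tax_rate)) / margin              # 300 units
    print(shutdown, cost_be, rr_be, rrat_be)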
Figures 4.1 and 4.2 show the breakeven points on plots of total costs vs. production quantity and
unit costs vs. production quantity.
The results and actions for the various breakeven points are presented in Table 4.2. The
Total Cost vs. Production Quantity plot (Figure 4.1) is the traditional breakeven figure, but the
Unit Cost vs. Production Quantity plot (Figure 4.2) shows the effects of revenue changes upon
the various breakeven points more easily.
Figure 4.1: Total revenue and costs for the production-based model [4, 5].
The problem is that the customer usually dictates the level of production and the pro-
duction department has little control over the quantity. The unit cost curve indicates what the
revenue is required to obtain the desired breakeven points.
[Figure 4.1 plot: Total Cost vs. Production Quantity; total revenue and cost items ($1,000) vs. production quantity, with breakeven points SD (40), C (200), RR (260), and RRAT (300).]
Figure 4.2: Unit costs vs. production quantity for the production-based model [4, 5].
[Figure 4.2 plot: Unit Cost vs. Production Quantity; costs ($/unit) vs. production quantity, with breakeven points SD (40), C (200), RR (260), and RRAT (300).]
Table 4.2: Shutdown points and actions/implications for production quantity-based model [4]
Production Level Range and Action/Implication:

1. Zero to Shutdown Level (SD) (0–40 units): Do not accept the order, as all of the out-of-pocket costs (variable and semi-variable costs) will not be recovered.
2. Shutdown Level (SD) to Cost Level (C) (40–200 units): Will recover the out-of-pocket costs, but not all of the fixed costs. Accept only if no better opportunities are available.
3. Cost Level (C) to Required Return Level (RR) (200–260 units): Will recover all costs, but will not obtain the desired level of required return. Accept if no better opportunities are available.
4. Required Return Level (RR) to Required Return After Taxes Level (RRAT) (260–300 units): Have succeeded in making the required return on a pre-tax basis, but not on an after-tax basis. Accept unless better opportunities are available.
5. Greater than Required Return After Taxes Level (RRAT) (>300 units): Will recover all costs and exceed the required return on an after-tax basis. Accept, as this is usually a rare and highly profitable event.
4.5 PRODUCTION TIME-BASED BREAKEVEN EXAMPLE
The production time-based method is newer and is used because overheads have become a major
component of costs and are typically allocated on a time basis. All four breakeven points will be illustrated,
and this is the same example used at the ASEE [4] conference. The data for the production
time-based model are listed in Table 4.3.
Table 4.3: Data for production time-based breakeven analysis [1]

Item                                $/Hour        $      Decimal
Sales Revenue                               13,000
Production (Manufacturing) Costs
  Fixed Costs                                2,000
  Variable Costs                      18
  Semi-variable Costs                  2     1,000
Overhead Costs                        20
Required Return (Profit)              10
Tax Rate (40%)                                          0.40
Notice that the overhead costs are a separate item whereas they were included in the
production costs in the quantity-based model. The required return can be used as an hourly rate
as illustrated in this example, but it can also be listed as a specific amount.
Let y = production hours; then the breakeven points can be calculated as follows.

(a) Shutdown Breakeven Point (SD)

Revenues = Production Costs
Revenues = Fixed Costs + Variable Costs + Semi-variable Costs
13,000 = 2,000 + 18y + (1,000 + 2y)
13,000 = 20y + 3,000
10,000 = 20y
y = 500 hours

(b) Cost Breakeven Point (C)

Revenues = Total Costs = Production Costs + Overhead Costs
13,000 = (20y + 3,000) + 20y
13,000 = 40y + 3,000
10,000 = 40y
y = 250 hours

(c) Required Return Breakeven Point (RR)

Revenues = Total Costs + Required Return
13,000 = 40y + 3,000 + 10y
13,000 = 50y + 3,000
10,000 = 50y
y = 200 hours

(d) Required Return After Taxes Breakeven Point (RRAT)

Revenues = Total Costs + Required Return After Taxes + Taxes on Total Required Return
Revenues = Total Costs + Required Return After Taxes/(1.0 - Tax Rate)
13,000 = 40y + 3,000 + 10y/(1.0 - 0.40)
13,000 = 40y + 3,000 + 16.66y
13,000 = 56.66y + 3,000
y = 176 hours (176.47)

Note that to obtain a required return of 10y after taxes one must earn 16.66y, as the 40%
taxes would be 6.66y. To obtain the pre-tax required return, one can use the expression:

Required Return Including Taxes = Required Return After Taxes/(1.0 - Tax Rate).    (4.5)

The tax rate is expressed as a decimal.
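The time-based points follow the same pattern, with production hours as the variable. The sketch below solves the four points for the Table 4.3 data; the variable names are illustrative.

    # Sketch: four production time-based breakeven points (Table 4.3 data).
    revenue = 13000.0                             # $ for the order
    fixed, semivar_fixed = 2000.0, 1000.0         # $ amounts
    variable_rate, semivar_rate = 18.0, 2.0       # $/hour
    overhead_rate, return_rate = 20.0, 10.0       # $/hour
    tax_rate = 0.40

    recoverable = revenue - (fixed + semivar_fixed)            # 10,000 $

    shutdown = recoverable / (variable_rate + semivar_rate)                        # 500 hours
    cost_be  = recoverable / (variable_rate + semivar_rate + overhead_rate)        # 250 hours
    rr_be    = recoverable / (variable_rate + semivar_rate + overhead_rate
                              + return_rate)                                       # 200 hours
    rrat_be  = recoverable / (variable_rate + semivar_rate + overhead_rate
                              + return_rate / (1 - tax_rate))                      # ~176.5 hours
    print(shutdown, cost_be, rr_be, rrat_be)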
The results and actions for the various breakeven points are presented in Table 4.4. The
Total Cost vs. Production Times in Figure 4.3 shows the breakeven points in a manner similar
to the production unit costs where the revenue is a horizontal line. The profitability curve in
Figure 4.4 highlights that lower production times have a major impact upon profitability.
The primary advantage of the profitability plot is that it easily shows the importance of
doing things faster and how that improves profitability. This also can be used to estimate the cost
of delay upon profitability such as a machine having down-time, delivery delay, weather delay,
etc.
Table 4.4: Shutdown points and actions/implications for the production time-based model [1, 5]
Production Level Range and Action/Implication:

1. Shutdown Level (SD) or higher (> 500 production hours): Do not accept the order, as all of the direct production costs will not be recovered and none of the overhead or return will be recovered.
2. Breakeven Cost Level (C) to Shutdown Level (SD) (250–500 production hours): Will recover all of the production costs and some of the overhead costs. None of the required return will be recovered.
3. Required Return Level (RR) to Breakeven Cost Level (C) (200–250 production hours): Will recover all costs, but will not obtain the desired level of required return before taxes. Accept unless better opportunities are available.
4. Required Return After Taxes Level (RRAT) to Required Return Level (RR) (176–200 production hours): Have succeeded in making the required return on a pre-tax basis, but not on an after-tax basis. Accept unless better opportunities are available.
5. Less than Required Return After Taxes Level (RRAT) (< 176 production hours): Will recover all costs and exceed the required return on an after-tax basis. Accept, as this is usually a highly profitable rare event.
Figure 4.3: Total revenues and costs vs. production time with breakeven points [1, 5].
[Figure 4.3 plot: Total Cost vs. Production Time; total revenue and cost items vs. production time (hours), with breakeven points RRAT (176), RR (200), C (250), and SD (500).]
Figure 4.4: Profitability vs. production time with breakeven points [1, 5].
[Figure 4.4 plot: Profitability Plot; total revenue ($1,000) vs. production time (hours), with breakeven points RRAT (176), RR (200), C (250), and SD (500).]
4.6 SUMMARY
The traditional breakeven charts usually have only one breakeven point at cost, but three other
points can be determined and they are the shut-down point, the breakeven point at required
return, and the breakeven point at required return after taxes. The time-based system is being
used for allocating overheads and can also be used for determining the four breakeven points
using hour units. The plot of total revenue and the various costs vs. production illustrates the
breakeven points on the production quantity basis, and the profitability plot of profitability vs.
production hours illustrates the breakeven points on the production time basis.
4.7 REFERENCES
[1] Creese, Robert C., Time-based breakeven analysis and costing, AACE International Trans-
actions, ABC.02, AACE International, Morgantown WV, pp. ABC.02.1–ABC.02.6,
1998. 43, 46, 51, 53, 54, 55
Reprinted with the permission of AACE International, 1265 Suncrest Towne Centre Dr.,
Morgantown, WV, 26505, Phone 304-296-8444.
Internet: http://web.aacei.org
[2] Creese, Robert C., AACE.02, Time-based breakeven analysis, Joint Cost Management So-
cieties Proceedings, pp. AACE 02.01–AACE 02.07, 1998.
[3] Creese, Robert C., A new breakeven analysis uses production versus quantity, Modern
Castings, pp. 52–53, March 1996.
[4] Creese, Robert C., Time-based versus quantity based breakeven analysis, Proc. of the Amer-
ican Society for Education Annual Conference and Exposition, pp. 9.1308.1–9.1308.17, 2004.
Selected items are Reprinted with permission of American Society for Engineering Edu-
cation. 45, 46, 48, 49, 50, 51
[5] Creese, Robert and Thiruvalam, Kedhar P., Power Point Presentation, “Breakeven Anal-
ysis,” prepared in 2009 for presentation at Metal Casting Seminar. 43, 48, 49, 53, 54,
55
[6] Dieter, George E., Costs and related aspects of materials selection, ASM Handbook Volume
20 Materials Selection and Design, ASM International, Metals Park, OH, pp. 248–249. 43
4.8 EVALUATIVE QUESTIONS
1. When plotting the production breakeven charts the fixed costs traditionally were plotted
first.
(a) What happens when that is done on the Total Cost vs. Production Quantity graph?
(b) What happens if the variable cost is plotted first on the Unit Cost vs. Production
Quantity graph?
2. What are the four breakeven points if the variable costs are 8 $/unit instead of 3 $/unit in
Table 4.1?
3. Plot both the total cost vs. production quantity breakeven chart similar to Figure 4.1 and
the unit cost breakeven vs. production quantity chart similar to Figure 4.2 when the vari-
able costs are 8 $/unit.
4. What are the four breakeven points if the Required Return in Table 4.1 was $1,200 instead
of $900?
5. What are the four breakeven points if the semi-variable costs in Table 4.1 were 7x + $900
instead of 2x + $600, where x is the production quantity?
6. What are the four breakeven points for the Production Time-Based Breakeven analysis if
the fixed costs in Table 4.3 were $5,000 instead of $2,000?
7. Make the Total Revenues and Costs vs. Production Time results similar to Figure 4.3
and the Profitability vs. Production Time results similar to Figure 4.4 showing the four
breakeven points on each of the graphs.
8. The data for a production quantity-based breakeven problem is in Table 4.5. Calculate the
following items:
(a) Shutdown breakeven point-units
(b) Cost breakeven point-units
(c) Required return breakeven point-units
(d) Required return after taxes breakeven point
(e) Draw the Total Revenue and Costs vs. the Production Quantity showing the four
breakeven points.
9. The data for a production time-based breakeven problem is in Table 4.6. Calculate the
following items:
(a) Shutdown breakeven point-hours
(b) Cost breakeven point-hours
(c) Required return breakeven point-hours
(d) Required return after taxes breakeven point
(e) Draw the Profitability Plot showing the four breakeven points.
Table 4.5: Data for production quantity-based breakeven analysis

Item                              $/Unit        $      Decimal
Sales Revenue                         20
Production (Manufacturing) Costs
  Fixed Costs                                  3,600
  Variable Costs                       3
  Semi-variable Costs                  2         900
Required Return (Profit)                       1,200
Tax Rate (20%)                                            0.20

Table 4.6: Data for production time-based breakeven analysis

Item                              $/Hour        $      Decimal
Sales Revenue                                 13,000
Production (Manufacturing) Costs
  Fixed Costs                                  2,000
  Variable Costs                      10
  Semi-variable Costs                  6       3,000
Overhead Costs                        10
Required Return (Profit)                       1,200
Tax Rate (20%)                                            0.20
CHAPTER 5

Earned Value Management

5.1 INTRODUCTION
Earned Value Management (EVM) is a tool used to measure project performance with respect
to time and cost. It is typically used to measure performance in project work which involves a
single highly complex item whereas breakeven analysis is used more frequently in manufacturing
and other areas where the output involves a large quantity of a single product or closely related
products. EVM focuses on schedule or time, and costs. EVM can be applied to any project
such as developing new products, research projects, equipment installation projects, and any
project that involves a schedule and costs. Typical EVM projects are large construction projects
(buildings, highways, bridges, or dams) and other very large undertakings such as aircraft carriers,
missions to the moon or Mars, and the development of innovative new products such as the
iPhone or self-driving vehicles. EVM can indicate performance problems such as cost overruns
and/or schedule delays during the project to warn management to take action so the project can
achieve a successful completion. When the project is large, EVM is used for measuring progress
in terms of cost and schedule to specific milestones of the project. Two sources [1, 2] of the
fundamental concepts of EVM are used in the development of the following relationships and
equations. The data used is from earlier examples from AACE International [1], which has
become a base model for comparing methods for evaluating the traditional EVM approach and
the newer time-based EVM approach developed by Lipke [3, 4]. The two approaches will be
illustrated to show the differences as the EVM method shows the schedule variance in terms of
hours or dollars and the time-based method calculates the schedule variance in terms of time
periods rather than dollars. The differences can become large at the end of the project when
projects have significant delays or early completion.
5.2 EARNED VALUE MANAGEMENT PERFORMANCE PARAMETERS
Three elements are used to measure project performances in EVM:
1. the Planned Value (PV) of the work scheduled or the Budgeted Cost of Work Scheduled
(BCWS);
2. the Earned Value (EV) of the work accomplished or the Budgeted Cost of Work Per-
formed (BCWP); and
3. the Actual Cost (AC) of the work accomplished or the Actual Cost of Work Performed
(ACWP).
The values are traditionally measured in monetary units, such as dollars, or measured in
work hours, and sometimes both are used. Work hours are often used at the work site and
converted to monetary units at the management level. The earned value is used in both the
schedule performance and in the cost performance measurements.
The traditional performance measures derived from these elements are the Schedule Vari-
ance (SV), the Cost Variance (CV), the Schedule Performance Index (SPI), and the Cost Per-
formance Index (CPI), which are presented in the following equation forms. The schedule values
will be presented first:
SV = EV - PV,    (5.1)

where
SV = schedule variance
EV = earned value
PV = planned value.

If the schedule variance is positive, the project is ahead of schedule whereas if the schedule
variance is negative, the project is behind schedule. This same information can be obtained from
the schedule performance index where:

SPI = EV/PV,    (5.2)

where
SPI = schedule performance index
EV = earned value
PV = planned value.
A schedule performance index of 1.0 indicates the project is on schedule, a schedule per-
formance index greater than 1.0 indicates the project is ahead of schedule, and a schedule per-
formance index less than 1.0 indicates the project is behind schedule.
The cost performance equations are similar to the schedule performance equations, but
use the actual cost and earned value. The equations are:
CV = EV - AC,    (5.3)

where
CV = cost variance
EV = earned value
AC = actual cost.
If the cost variance is positive, the project is under budget whereas if the cost variance
is negative, the project is over budget. This same information can be obtained from the cost
performance index where:
CPI = EV/AC,    (5.4)

where
CPI = cost performance index
EV = earned value
AC = actual cost.
A cost performance index of 1.0 indicates the project is on budget, a cost performance
index greater than 1.0 indicates the project is under budget, and a cost performance index less
than 1.0 indicates the project is over budget.
These values can be kept on a periodic basis, such as weekly, and on a cumulative basis for
monitoring the project performance and direction and can give indications as to what corrective
actions need to be taken. Note that Earned Value (EV) is used in both the cost and schedule
indices and variances.
Another item for consideration is estimating the completion of the project. The comple-
tion can be calculated for cost and for schedule. There are various methods, but the one most
often used assumes that the remaining work will be performed at the planned rate, which results in:

EAC(c) = PVct + (AC - EV),    (5.5)

where
EAC(c) = estimate at completion (cost)
AC = actual cost to date
EV = earned value to date
PVct = planned value for project completion cost.

The completion can also be predicted with respect to time, using the planned value, and is
expressed as:

EAC(t) = PVct + (PV - EV),    (5.6)

where
EAC(t) = estimate at completion (time)
PV = planned value to date
EV = earned value to date
PVct = planned value for project completion time.
5.3 EXAMPLE PROBLEM USING TRADITIONAL EARNED VALUE MANAGEMENT

An example problem, adapted from the AACE International Skills and Knowledge of Cost Engineering [1],
was expanded and is presented for a project with a delay of one week over the life of the project.
The information is presented in Table 5.1. The tables and figures in this chapter are from the
work of Yi Fang which are published in her thesis [5] and also in Cost Engineering [6] and are
published with permission of AACE International. To illustrate how the values are calculated,
Week 3 will be analyzed in detail for the variances and indexes. The planned time for the project
total completion or PV ct is 440 hours. For the period values for week three using Equations (5.1)
to (5.4), one obtains:
SV(3) = EV - PV = 65 - 45 = +20
SPI(3) = EV/PV = 65/45 = 1.44 > 1.00
CV(3) = EV - AC = 65 - 62 = +3.0
CPI(3) = EV/AC = 65/62 = 1.05 > 1.00

This indicates that the schedule and cost performance indices were both greater than 1.00
for Week 3, and thus the project was ahead of schedule and under budget for the week.
For the cumulative values, the symbol SVc3 implies the schedule variance for Week 3. The
cumulative values for week three are:

SVc3 = EVc3 - PVc3 = 110 - 120 = -10
SPIc3 = EVc3/PVc3 = 110/120 = 0.92 < 1.0
CVc3 = EVc3 - ACc3 = 110 - 109 = +1.0
CPIc3 = EVc3/ACc3 = 110/109 = 1.01 > 1.0

EAC(c) = PVct - CVc3 = PVct - (EVc3 - ACc3) = 440 - (110 - 109) = 439
EAC(t) = PVct - SVc3 = PVct - (EVc3 - PVc3) = 440 - (-10) = 450
Percent Completion = (EVc3/PVct) × 100 = (110/440) × 100 = 25%

This indicates that for the cumulative first three weeks, the project is behind schedule,
slightly under budget, and expected to be completed at 450 hours based upon schedule perfor-
mance (typically used) or at 439 hours based upon cost performance. The project is 25% complete
at the end of the third week. If the percent completion is known, it can be used to determine
the earned value. That is:

Earned Value = (Percent Completion as decimal) × Planned Value.    (5.7)
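The Week 3 values can be verified with a few lines of arithmetic. The sketch below computes the period and cumulative measures, the two estimates at completion, and the percent completion from the Table 5.1 data; the variable names are illustrative.

    # Sketch: traditional EVM measures for Week 3 (Table 5.1 data).
    pv, ac, ev = 45, 62, 65                 # period values for Week 3
    pvc, acc, evc = 120, 109, 110           # cumulative values through Week 3
    pv_total = 440                          # planned value at project completion (PVct)

    sv, spi = ev - pv, ev / pv              # +20, 1.44  (period, Eqs. 5.1-5.2)
    cv, cpi = ev - ac, ev / ac              # +3, 1.05   (period, Eqs. 5.3-5.4)

    svc, spic = evc - pvc, evc / pvc        # -10, 0.92  (cumulative)
    cvc, cpic = evc - acc, evc / acc        # +1, 1.01   (cumulative)

    eac_cost = pv_total + (acc - evc)       # 439 hours  (Eq. 5.5)
    eac_time = pv_total + (pvc - evc)       # 450 hours  (Eq. 5.6)
    pct_complete = 100 * evc / pv_total     # 25%
    print(sv, spi, cv, cpi, svc, spic, cvc, cpic, eac_cost, eac_time, pct_complete)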
The cumulative PV, AC, and EV values are plotted in Figure 5.1. Note that the period
values and cumulative values of the SPI and CPI can be quite different, as indicated by the
schedule parameters in Table 5.1. One problem with the schedule indices is that the SV tends
to 0 and the SPI goes to 1.0 regardless of whether or not the project is delayed. This occurs because
the planned and earned values will be equal when the project is completed, regardless of whether it is
early, on time, or late. This has led to a new calculation procedure for the Schedule Variance and
the Schedule Performance Index on a time-based analysis rather than on a cost-based (or work-
hour-based) analysis.
Figure 5.1: Planned value, earned value, and actual cost [6].
5.4 EXAMPLE PROBLEM USING EARNED SCHEDULE IN EARNED VALUE MANAGEMENT
Lipke [3, 4] introduced the concept of Earned Schedule (ESc) which is similar to the Earned
Value, but Earned Schedule is in time units rather than hours or dollars. Fang [5, 6] utilized
the concept and applied it to the AACE International data [1] in her M.S. thesis. The earned
schedule values are used for cumulative values, not period values. The following calculations are
utilizing the data in Table 5.2 which also contains the cumulative values of Table 5.1.
The cumulative time values use two PV values, for N and N + 1. The AT value is usually
greater than N, but if a project finishes ahead of schedule N can equal or be greater than AT.
Table 5.1: Period and cumulative data for earned value approach for delayed project [5, 6]

                    Period Values                              Cumulative Values
Week   PV    AC    EV    SV   SPI    CV   CPI      PVc   ACc   EVc  SPIc   SVc  CPIc    CVc
(AT) (BCWS)(ACWP)(BCWP)                          (BCWS)(ACWP)(BCWP)
  1    30    16    15   -15  0.50  -1.0  0.94       30    16    15  0.50   -15  0.94   -1.0
  2    45    31    30   -15  0.67  -1.0  0.97       75    47    45  0.60   -30  0.96   -2.0
  3    45    62    65   +20  1.44  +3.0  1.05      120   109   110  0.92   -10  1.01   +1.0
  4    80    78    85    +5  1.06  +7.0  1.09      200   187   195  0.98    -5  1.04   +8.0
  5    80    66    70   -10  0.88  +4.0  1.06      280   253   265  0.95   -15  1.05  +12.0
  6    50    51    55    +5  1.10  +4.0  1.08      330   304   320  0.97   -10  1.05  +16.0
  7    25    30    25     0  1.00  -5.0  0.83      355   334   345  0.97   -10  1.03  +11.0
  8    30    30    25    -5  0.83  -5.0  0.83      385   364   370  0.96   -15  1.02   +6.0
  9    30    33    30     0  1.00  -3.0  0.91      415   397   400  0.96   -15  1.01   +3.0
 10    25    28    25     0  1.00  -3.0  0.89      440   425   425  0.97   -15  1.00    0.0
 11     0    14    15   +15   N/A  +1.0  1.07      440   439   440  1.00     0  1.00   +1.0
The equation for ESc is mathematically expressed as:

ESc = N + (EVc - PVc(N)) / (PVc(N+1) - PVc(N)),    (5.8)

where
ESc = cumulative earned schedule at time period N
N = period where the cumulative PV is less than the current cumulative EV
EVc(AT) = cumulative earned value at actual time, AT
PVc(N) = cumulative planned value at period N
PVc(N+1) = cumulative planned value at next period, N + 1
AT = actual time or current time period.
For Period 3 (that is, where AT = 3 and N = 2),

ES3 = 2 + (110 - 75)/(120 - 75) = 2 + 0.777 = 2.78 periods (weeks).

If, at the start of the project, the PV is greater than the EV, then the ES will be less than
one and can be calculated by

ESc = (PV1 - EV1)/PV1.    (5.9)

Thus, for Period 1, where AT = 1,

ES1 = (30 - 15)/30 = 0.5 period (week).
This calculation can be repeated for the initial periods until EVc exceeds PV1; Equation (5.8)
is then used, which is when N is 1 or greater. The schedule variance, based upon time, is
calculated by:
SV(AT) = ES - AT,    (5.10)

where
SV(AT) = schedule variance based upon time AT
ES = earned schedule
AT = actual time or current time period

SV(3) = 2.78 - 3.00 = -0.22 periods (weeks)

which implies the project is 0.22 time periods behind schedule at the end of time period 3.
The schedule performance index based upon time is:

SPI(AT) = ES/AT.    (5.11)

If one examines Period 3, then

SPI(3) = 2.78/3.0 = 0.93.
The first cumulative value of PV below EV is PV = 30 with EV = 45, which occurs in period 2;
thus

AT = 2
EVc = 45
N = 1
PVc(N) = 30
PVc(N+1) = 75

and

ESc = ES(2) = N + (EVc - PVc(N))/(PVc(N+1) - PVc(N))
            = 1 + (45 - 30)/(75 - 30)
            = 1 + 0.3333 = 1.33 periods (weeks).

Thus, the time-based schedule variance using Equation (5.10) is:

SV(2) = 1.33 - 2.00 = -0.67 periods (weeks).

This implies that the project is 0.67 time periods behind schedule. The schedule performance
index based upon time using Equation (5.11) is:

SPI(2) = ES(2)/AT = 1.33/2.00 = 0.665 = 0.67.
Now examining the data for the last period, that is, period 11, note that

AT = 11
EV(AT) = 440.

The first cumulative value of PV below EV(11) is 415, which occurs in period N = 9; thus

N = 9
PVc(N) = 415
PVc(N+1) = 440

and

ESc = ES = N + (EVc - PVc(N))/(PVc(N+1) - PVc(N))
         = 9 + (440 - 415)/(440 - 415)
         = 9 + 1 = 10 periods (weeks).

Thus, the time-based schedule variance using Equation (5.10) is:

SV(AT) = SV(11) = ES - AT = 10 - 11 = -1 period (week).

This implies that the project is 1.0 time period behind schedule. The final schedule variance
is the difference between the time (AT) at which the cumulative planned value (PVc) reaches the project
completion value and the time (AT) at which the earned value first reaches the project
completion value. The schedule performance index based upon time using Equation (5.11) is:

SPI(11) = ES/AT = 10/11 = 0.909 = 0.91.
The time-based values are the actual values, that is, a schedule variance of -1.0 and a sched-
ule performance index of 0.909, instead of the traditional schedule variance of 0 and SPI of 1.0.
Table 5.2 indicates the cumulative-based and cumulative time-based values. The major differ-
ences are in the schedule variance and the schedule performance index. The schedule variance is
based upon dollars or work hours, whereas the earned schedule is based upon the amount of work
completed for that particular time period.
Figure 5.2 shows the comparison of the SPI values based upon the cumulative dollar (or
work-hour) base (SPI($)) vs. the SPI values based upon the cumulative project time (SPI(t))
over the project life for a delayed project. There would also be a difference if the project was
completed early. The differences result from the fact that the EVM work-hour approach must end with
SPI($) at 1.0, regardless of the actual completion time, whereas SPI(AT) ends at 0.91.
The major differences between the two SPI indices occur at the end of the project life. The
initial differences are primarily due to the difference in units, that is, dollars vs. the time estimate
calculation. The time-based calculation is the relative amount of work earned for the period vs.
the amount planned for the period. Table 5.3 is a combination of Tables 5.1 and 5.2 to indicate
all the information in a single table.
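Equation (5.8) can be applied to every period of the cumulative data at once. The sketch below computes ES, SV(AT), and SPI(AT) for the delayed project; as an assumption, it treats PVc(0) as zero so that the first period is handled by the same interpolation, which reproduces the 0.5 value of Table 5.2 for Week 1.

    # Sketch: earned schedule (ES), SV(AT), and SPI(AT) from cumulative PV and EV.
    pv_c = [30, 75, 120, 200, 280, 330, 355, 385, 415, 440, 440]   # cumulative PV, Table 5.1
    ev_c = [15, 45, 110, 195, 265, 320, 345, 370, 400, 425, 440]   # cumulative EV, Table 5.1

    def earned_schedule(ev, pv_cum):
        # N = last period whose cumulative PV is below the current cumulative EV;
        # interpolate within the next period (PVc(0) is taken as 0 - an assumption).
        n = 0
        while n < len(pv_cum) and pv_cum[n] < ev:
            n += 1
        prev = pv_cum[n - 1] if n > 0 else 0
        if n == len(pv_cum):                 # EV already at or past the final planned value
            return float(n)
        return n + (ev - prev) / (pv_cum[n] - prev)

    for week, ev in enumerate(ev_c, start=1):
        es = earned_schedule(ev, pv_c)
        print(week, round(es, 2), round(es - week, 2), round(es / week, 2))
    # Week 3 -> ES 2.78, SV(AT) -0.22, SPI(AT) 0.93; Week 11 -> ES 10.0, SV(AT) -1.0, SPI(AT) 0.91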
5.5 SUMMARY
The parameters used to measure project performance are the Planned Value, the Earned Value,
and the Actual Cost of the work performed. The performance measures evaluated were the
schedule variance, the schedule performance index, the cost variance, the cost performance in-
dex, the time estimated at completion, the time-based schedule variance, and the time-based
schedule performance index. Indices which are greater than unity are favorable (under budget
or ahead of schedule) and those less than unity are unfavorable (over budget or behind sched-
ule). The traditional schedule variance and schedule performance index calculations give less
accurate results near the project completion than the time-based schedule variance and schedule
performance index. These performance measures are critical for long-term project success so
that adjustments can be made to improve project performance during project execution and to
minimize project delays and cost overruns.
Table 5.2: Time-based calculations for earned schedule [5, 6]

         Earned Value Cumulative Values                 Earned Schedule Cumulative Time Values
Week   PVc   ACc   EVc   SVc  SPIc  CVc  CPIc        N  EVc(AT)  PVc(N)  PVc(N+1)    ES  SV(AT)  SPI(AT)
(AT)
  1     30    16    15   -15  0.50   -1  0.94      N/A     N/A     N/A      N/A     0.5   -0.5     0.5
  2     75    47    45   -30  0.60   -2  0.96        1      45      30       75    1.33  -0.67    0.67
  3    120   109   110   -10  0.92    1  1.01        2     110      75      120    2.78  -0.22    0.93
  4    200   187   195    -5  0.98    8  1.04        3     195     120      200    3.94  -0.06    0.98
  5    280   253   265   -15  0.95   12  1.05        4     265     200      280    4.81  -0.19    0.96
  6    330   304   320   -10  0.97   16  1.05        5     320     280      330    5.80  -0.20    0.97
  7    355   334   345   -10  0.97   11  1.03        6     345     330      355    6.60  -0.40    0.94
  8    385   364   370   -15  0.96    6  1.02        7     370     355      385    7.50  -0.50    0.94
  9    415   397   400   -15  0.96    3  1.01        8     400     385      415    8.50  -0.50    0.94
 10    440   425   425   -15  0.97    0  1.00        9     425     415      440    9.40  -0.60    0.94
 11    440   439   440     0  1.00    1  1.00       10     440     440      440    10.0  -1.00    0.91
]
6
,
5
[
s
e
u
l
a
v
d
e
s
a
b
-
e
m
i
t
d
n
a
,
s
e
u
l
a
v
e
v
i
t
a
l
u
m
u
c
,
s
e
u
l
a
v
d
o
i
r
e
P
:
3
.
5
e
l
b
a
T
Period(Week)(AT)Period ValuesCumulative ValuesCumulative Time-Based ValuesPV(BCWS)AC(ACWP)EV(BCWP)SVSPICVCPIPVc(BCWS)ACc(ACWP)EVc(BCWP)SPIcSVcCPIcCVcNEV(AT)PV(N)PV(N+1)ES(AT)SV(AT)SPI(AT)1301615-150.50-1.00.943016150.50-150.94-1.0NANANANA0.5-0.50.52453130-150.67-1.00.977547450.60-300.96-2.014530751.33-0.670.673456265+201.44+3.01.051201091100.92-101.01+1.02110751202.78-0.220934807885+51.06+7.01.092001871950.98-51.04+8.031951202003.94-0.060.985806670-100.88+4.01.062802532650.95-151.05+12.042652002804.81-0.190.966505155+51.10+4.01.083303043200.97-101.05+16.053202803305.8-0.20.97725302501.00-5.00.833553343450.97-101.03+11.063453303556.6-0.40.948303025-50.83-5.00.833853643700.96-151.02+6.073703553857.5-0.50.94930333001.00-3.00.914153974000.96-151.01+3.084003854158.5-0.50.941025282501.00-3.00.894404254250.97-151.000.094254154409.4-0.60.941101415+15N/A +1.01.074404394401.0001.00+1.01044044010-10.9170
Figure 5.2: Comparison of cost-based SPI($) and time-based SPI(t) [5, 6].
5.6 REFERENCES
[1] AACE International, Skills and Knowledge of Cost Engineering, 5th ed., Chapters 14, 15,
2004. 59, 63
[2] Creese, R. C. and Adithan, M., Earned value management concepts, Strategic Cost Anal-
ysis, New Academic Science Limited, Tunbridge Wells, Kent, UK, pp. 13–23, 2012. 59
[3] Lipke, W., A study of the normality of earned value management indicators, Measurable
News, pp. 1–16, 2002. 59, 63
[4] Lipke, W., Schedule is different, Measurable News, March 2003. 59, 63
[5] Fang, Y., Estimate at Completion for Construction Projects, Master’s Problem Report, Indus-
trial and Management Systems Engineering, West Virginia University, 2008. 62, 63, 64,
68, 69, 70
[6] Creese, R. C. and Fang, Yi, Time-based schedule performance index, Cost Engineering, Vol. 53, No. 3, pp. 18–20, March 2010. Reprinted with Permission of AACE International, 1265 Suncrest Towne Centre Drive, Morgantown, WV 26505, Phone (304) 296-8444. http://web.aacei.org 62, 63, 64, 68, 69, 70
5.7 EVALUATIVE QUESTIONS
1. Angel Construction Company was awarded a contract for $2,000,000 to build a cathedral with a budget of 200,000 hours (PVc). Management wants a progress update at the end of week six and the following data was obtained.
Planned Hours: 80,000
Actual Hours Spent: 75,000
Percent Completion (based on Earned Hours): 35%
(a) How many hours has the project earned?
(b) What is the cost variance (hrs)?
(c) What is the schedule variance (hrs)?
(d) What is the CPI?
(e) What is the SPI?
(f ) What is the estimated time for completion?
(g) What is the estimated cost at completion?
2. Pumpkin Research and Development was awarded a contract for $25,000,000 to build a machine to use supersonic sound waves to remove body fat at a rate of 5 kg/hour. The budget is estimated to take 50,000 hours (PVc). Management wants a progress update at the end of week six and the following data was obtained.
Planned Hours: 7,000
Actual Hours Spent: 8,000
Percent Completion (based on Earned Hours): 20%
(a) How many hours has the project earned?
(b) What is the cost variance (hrs)?
(c) What is the schedule variance (hrs)?
(d) What is the CPI?
(e) What is the SPI?
(f ) What is the estimated time for completion?
(g) What is the estimated cost at completion?
3. Use the data of PV, AC, and EV for time periods 6 and 7 in Table 5.1 and calculate the values for SV, SPI, CV, and CPI for the period and cumulative values to three decimals (for non-integer terms) and compare the values with those given in Table 5.1.

4. Calculate the earned value cumulative values of SVc, SPIc, CVc, CPIc, and the earned schedule cumulative time values of ES, SV(AT), and SPI(AT) to three decimals (for non-integer terms) for weeks 6 and 7 and compare the values with those in Table 5.2 for weeks 6 and 7.

5. Change the PV values for Periods 6, 7, and 8 from 50, 25, 30 to 45, 35, 25 and note the differences in the SV, SPI, CV, CPI, SPIc, SVc, CPIc, CVc, ES, SV(AT), and SPI(AT) values from those in Table 5.3 for those three periods.

6. Use the data in Table 5.4 and calculate the period and cumulative values of SV, SPI, CV, CPI, and the time-based values of SV(t) and SPI(t) for all the periods for the project manager of the Tibet Construction Company. Plot the SPI($) and SPI(t) values for the project over the five time periods. Discuss the differences between the SVc and SV(AT) and the SPIc and SPI(AT).
Table 5.4: Weekly data of the Tibet Construction Company on the West China windmill construction project. (The table provides the weekly PV (BCWS), AC (ACWP), and EV (BCWP) values for the five-period project; the period values (SV, SPI, CV, CPI), the cumulative values (PVc, ACc, EVc, SVc, SPIc, CVc, CPIc), and the cumulative time-based values (N, PV(N), PV(N+1), EV(AT), ES(AT), SV(AT), SPI(AT)) are to be calculated in Question 6.)

PART II
Tools for Economic Evaluations
C H A P T E R 6
Fundamental Definitions, Terms, and Concepts for Technical Economic Evaluations
6.1 INTRODUCTION
The previous chapters have focused on macro-concepts such as financial statements, profits and
cash flows, the Purcell Diagram, breakeven analysis, ABC and time-based evaluations, estimat-
ing ranges, and accuracies. Now the micro-concepts need to be presented in detail so that the
expressions developed and methods applied using these items in the following chapters will be
better understood. The primary focus of this chapter will be on interest, the various types of
interest, inflation, constant and current currency, and exchange rates. This material is available
in many references on engineering economy and much of this is based upon materials developed
for short courses given in the past [1–3] and a book [4] published based on the materials in these
short courses.
6.2 FUNDAMENTAL TERMS RELATED TO INTEREST CALCULATIONS
6.2.1 INTEREST AND INTEREST RATE
There are many types of interest and two primary definitions of interest are the rate charged for
the investment of capital and the return rate for the investment of capital.
(1) The cost for the use of capital which is also referred to as the time value of money. (This
is the view of the borrower who considers it as a cost or rate for the use of the capital
borrowed.)
(2) The monetary return or rate of return which is necessary to divert money into long-term
investments. (This is the view of the lender who considers it as a rate of return on the
investment.)
The interest rate is the ratio of the interest amount accrued in the time period to the
amount owed at the start of that period. There are two major types of interest commonly used
and they are simple interest and compound interest.
Simple Interest
Simple interest is interest computed only on the principal amount. Simple interest can be defined as follows.

(1) Interest charges are applied only to the principal at the start of the period and not to any additions or deletions made during the period.

(2) Interest is calculated on the investment at the end of the period, but it is not included as part of the investment for the following periods.
The interest calculations for simple interest problems are presented by first calculating the
total amount of interest and then the total amount due at the end of the period which includes
the principal. The total interest amount is:
I = niP, (6.1)

where
I = total interest amount due
i = interest rate per unit time period (frequently a year)
n = number of time periods
P = principal amount or initial investment amount at the beginning of the period.
An amount P will be invested for n time periods at an interest rate i, and the amount due at the end of the n periods will be the Principal (P) plus the total amount of Interest (I). This is usually referred to as the Future Worth (F), thus:
F = P + I = P + niP = P(1 + in). (6.2)
To illustrate the application of the formula, let

P = $500
n = 3
i = 10% or, as a decimal, 0.10.

Thus, the amount paid at the end of three years is calculated as:

F = 500(1 + 0.10 × 3) = 500(1.3) = $650
  = $500 Principal + $50 Interest for First Year + $50 Interest for Second Year + $50 Interest for Third Year.
Note that the principal does not change over the n periods and no interest is earned on the
accumulated interest.
Compound Interest
Compound interest is the interest which is most commonly applied. Compound interest is in-
terest on the principal amount and the interest on the previous amounts of interest earned. Two
definitions [4] of compound interest are as follows.
(1) The type of interest that is periodically added to the amount of principal, investment, or
loan so that the subsequent interest rate is based on the cumulative amount of principal
plus total interest.
(2) The type of interest that is charged on any previous interest earned in any time period as
well as on the principal.
When considering compound interest calculations, there are two approaches to calculating interest.

(1) Discrete Compound Interest Rate: the interest rate is applied at the end of each time period, and the amount determined is then included in the following time periods. This is the most common form of compound interest rate application.

(2) Continuous Compound Interest Rate: the interest rate is applied continuously during each time period. This is less commonly applied today, but with businesses being open 24-7, it may be applied more in the future. It is often applied on certificates of deposit.
An example of each approach will be presented, starting with the discrete compound rate
applications. The following table will illustrate the approach for determining the Future Amount
(F ) at the end of n periods starting with the initial Principal Amount (P ).
Time Period | Initial Amount (Beginning of Period) | Interest Amount | Total Amount (End of Period)
1 | P | + iP | = P(1 + i)
2 | P(1 + i) | + iP(1 + i) | = P(1 + i)^2
3 | P(1 + i)^2 | + iP(1 + i)^2 | = P(1 + i)^3
4 | P(1 + i)^3 | + iP(1 + i)^3 | = P(1 + i)^4

and in general for any value of n:

n | P(1 + i)^(n−1) | + iP(1 + i)^(n−1) | = P(1 + i)^n = F

Thus, the expression for discrete compound interest would be:

F = P(1 + i)^n. (6.3)
Using the data of the previous simple interest example and now applying the discrete
compounding interest formula, let:
P = $500
n = 3
i = 10% or, as a decimal, 0.10

Time Period | Initial Amount (Beginning of Period) | Interest Amount | Total Amount (End of Period)
1 | 500 | 50 | 550
2 | 550 | 55 | 605
3 | 605 | 60.50 | 665.50
This can be determined by Equation (6.3) as:

F = P(1 + i)^n = 500(1 + 0.10)^3 = $665.50.
Note that the total amount is $665.50 for discrete compounding vs. $650.00 for simple interest. The difference in total interest or return is $15.50, which is approximately a 2% difference over the 3-year period.
Continuous compounding is compounding throughout the period and not only at the end
of the period. The expression is obtained by letting the number of compounding periods go to
infinity and thus the interest is considered as the effective interest over the entire period and r
is the nominal interest per year:
i_eff = lim_{m→∞} [(1 + r/m)^m − 1] = lim_{m→∞} [{1 + 1/(m/r)}^{(m/r)r} − 1] = e^r − 1. (6.4)
Thus, for n periods [3] and r = 10% interest,

F = P(e^r − 1 + 1)^n = P(e^(rn)) (6.5)
F = P(i_eff + 1)^n = 500(e^(0.1 × 3)) = $674.93.
Equation (6.5) is also the form used for calculating the future worth for continuous compounding. Note that over the 3-year period, continuous compounding results in $24.93 (or nearly 4%) more than simple interest and earns $9.43 more than discrete compounding. The higher the interest rate, the greater the differences between the amounts of interest calculated by the methods, as the calculations are not linear. Also, the more time periods, the greater the differences between the calculated values.
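As a quick numerical check of the three conventions, the short Python sketch below (illustrative only; the function names are not from the text) reproduces the $500, 10%, 3-year comparison.

import math

def simple_future_worth(P, i, n):
    # Simple interest, Equation (6.2): F = P(1 + i*n); no interest on interest.
    return P * (1 + i * n)

def discrete_future_worth(P, i, n):
    # Discrete compounding, Equation (6.3): F = P(1 + i)^n.
    return P * (1 + i) ** n

def continuous_future_worth(P, r, n):
    # Continuous compounding, Equation (6.5): F = P * e^(r*n).
    return P * math.exp(r * n)

P, rate, years = 500.0, 0.10, 3
print(f"Simple:     ${simple_future_worth(P, rate, years):,.2f}")      # $650.00
print(f"Discrete:   ${discrete_future_worth(P, rate, years):,.2f}")    # $665.50
print(f"Continuous: ${continuous_future_worth(P, rate, years):,.2f}")  # $674.93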
6.3 ACTUAL, COMPOUND, NOMINAL, AND EFFECTIVE
ANNUAL INTEREST RATES
These four different interest rates and the differences between them will be discussed and illus-
trations of the differences will be presented.
The actual interest rate (i) represents the interest rate per compounding period. It is the
most common of the interest rates used in engineering calculations. The actual interest rates can
be expressed in different periods, such as:
12% per year
—interest would be compounded once per year
6% semi-annually —interest would be compounded 2 times per year
3% per quarter
—interest would be compounded 4 times per year
1% per month
—interest would be compounded 12 times per year
0.03288% per day —interest would be compounded 365 times per year
Nominal interest rate (r) represents the interest rate per year as obtained by multiplying
the interest rate per period by the number of compounding periods per year. It is commonly
known as the Annual Percentage Rate (APR) which is required for notifications on some loans.
Since the other methods involve compounding more frequently, annual compounding results
in the smallest value of the compound interest rates and simple interest is the lowest of all
compounding methods.
Compound interest rate can be either discrete compounding or continuous compounding. Continuous compounding tends to give the largest amounts of interest. If the compounding period is one year, the continuous compounding rate is known as the annual effective interest rate. If the compounding period is one year, the actual interest, nominal interest, and discrete compound interest will be the same as there is only one compounding period.
Let us consider a comparison of the interest rates for an interest rate of 3% per quarter for four periods. The values would be:

Actual interest: i = 3% = 0.03/quarter for each quarter
Nominal interest: r = 4 × 3% = 12% = 0.1200/yr
Compound (Discrete): i_eff(discrete) = [(1 + 0.03)^4 − 1] × 100 = 12.55% = 0.1255/yr
Compound (Continuous): i_eff(continuous) = [e^r − 1] × 100 = [e^0.12 − 1] × 100 = 12.75% = 0.1275/yr

The effective interest rate depends upon which of the compounding methods is used and is based upon a one-year period. Now we shall calculate the amounts obtained after 3% per quarter on an initial amount of $1,000 compounded for three years. This results in 12 compounding periods. The calculations that are used are simple interest, compound (discrete) interest, and compound (continuous) interest. The actual interest rate is used in the calculation of the compound
(discrete) multi-period calculations and the nominal period interest used in the calculation of
the compound (continuous) multi-period calculations.
From Equation (6.2) for simple interest:

F = P(1 + i × n) = 1,000(1 + 0.03 × 12) = $1,360.00 (Interest total is $360.00).
Using Equation (6.3) for compound (discrete) interest calculations:

F = P(1 + i)^n = 1,000(1 + 0.03)^12 = $1,425.76.

(Interest total is $425.76 and more than 18% greater than simple interest.)
Using Equation (6.5) for compound (continuous) interest calculations:

F = P(e^(rn)) = 1,000(e^(0.03 × 12)) = $1,433.33.

(Interest total is $433.33, which is more than 20% greater than simple interest.)
It is strongly advised to use the actual compounding period (a quarter or 3 months) and the corresponding interest (3%), as the annual interest rate is valid only for annual compounding periods. When other continuous expressions are used which have other factors such as (e^r − 1), the results would not be correct.
The difference in the values of the two compound interest methods, discrete and con-
tinuous compounding, is small compared to their differences with the simple interest method.
The more frequently the compounding, the smaller the difference between the two compound-
ing (discrete and continuous) methods, but the differences increase as the interest rate increases
and/or as the total investment time increases.
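The quarterly example can be verified with a few lines of Python; the sketch below (illustrative only, not part of the original text) computes the nominal and effective rates for 3% per quarter and the three future values of $1,000 after 12 quarters.

import math

i_quarter = 0.03                  # actual interest rate per quarter
m = 4                             # compounding periods per year
r = i_quarter * m                 # nominal annual rate (APR)

i_eff_discrete = (1 + i_quarter) ** m - 1   # effective annual rate, discrete
i_eff_continuous = math.exp(r) - 1          # effective annual rate, continuous
print(f"Nominal: {r:.2%}, discrete effective: {i_eff_discrete:.2%}, "
      f"continuous effective: {i_eff_continuous:.2%}")
# Nominal: 12.00%, discrete effective: 12.55%, continuous effective: 12.75%

P, n = 1000.0, 12                 # 12 quarterly periods = 3 years
print(f"Simple:     ${P * (1 + i_quarter * n):,.2f}")      # $1,360.00
print(f"Discrete:   ${P * (1 + i_quarter) ** n:,.2f}")     # $1,425.76
print(f"Continuous: ${P * math.exp(i_quarter * n):,.2f}")  # $1,433.33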
6.4 FACTORS IN DETERMINING INTEREST RATES
The interest rate considered as a basis for engineering and project calculations is the market interest rate. One interest rate is the prime interest rate, which is available to banks; it has been as low as 0.25% and is usually between 1% and 2% during normal times. The current rate paid by banks on deposits is between 0.5–2.0% but has been 3–5% in better economic times. The interest rate for automobile purchases is in the 3–6% range and had previously been as high as the 8–10% range. When companies are considering returns on their investments, they typically want a 10–20% return on average, as some projects may make a 30–40% return whereas other projects will lose money. Some of the major factors in considering an interest rate or rate of return are:
1. Administrative Expenses (1–5%);
2. Pure Gain or Profit (3–20%);
3. Risk of Inflation (1–200%); and
4. Risk of Loss (1–10%).
The risk of inflation is greater for investments when financial strife occurs in countries and
prices rise rapidly. Construction projects can have rising labor and material prices and the total
costs can go up rapidly. A major problem of inflation is that it is hard to predict long term and
this has resulted in the use of the inflation-free interest rate.
6.5 INFLATION-FREE INTEREST RATES, CONSTANT CURRENCY, AND ACTUAL CURRENCY
The interest rates that have been considered have inflation as a component in the interest rate.
The currency considered is the amount associated with a cash flow at the point of time at which
it occurs and this is referred to as actual currency or current currency. The term currency has
been used as dollars in the previous examples, but it could be the currency of any country.
Constant currency, or dollars, are dollars expressed in terms of the same purchasing power
relative to a specific point in time, usually a base year. They represent the hypothetical purchas-
ing power of future receipts and disbursements in terms of the purchasing power at the base
year. Constant dollars are referred to as inflation free dollars and are often used on construction
projects and government projects where the projects have a long life and estimating the inflation
rates over a long period of time is highly speculative. The relationship between constant and
actual currency is:
Constant Currency ($) = [Actual Currency at time n ($)]/(1 + f)^n, (6.6)

where
f = inflation rate (as a decimal) at time period n years in the future.
Constant currency is referenced to a base year, which is normally considered time zero or
the beginning of the investment. Other names are constant dollars, real currency, inflation-free
currency, and today’s currency. Constant currency is typically used in construction or govern-
ment projects having a project life 10 years or more and involving life cycle costs where the
maintenance, repair, and rehabilitation values are difficult to predict that far in the future. For
example, the life of a highway bridge can be 100 years and to predict the costs of a bridge deck
replacement 20, 40, 60, and 80 years in the future is extremely difficult with any degree of ac-
curacy, but the cost of a deck replacement today could be predicted with great accuracy. Thus,
using constant currency, the replacement costs would be considered as the same as that of today.
The interest rate used for discounting would not include the effects of inflation.
Although Equation (6.6) assumes the inflation will be constant over the n years of the
investment, it typically will be changing and either an average value will be assumed or the
inflation must be adjusted each year and would make the calculations more complex.
Actual currency or actual dollars are used in most applications, especially when the project
investment life is under 20 years. Other names of actual dollars are nominal currency or current
currency. The interest rate used is the effective or market interest which includes the effects of
inflation. For most concerns in manufacturing and commercial work, and for projects with short durations, actual currency or actual dollars are used.
The inflation-free interest rate, i_if, can be determined from the market interest rate and the inflation rate by:

(1 + i) = (1 + i_if)(1 + f)

or directly by

i_if = [(1 + i)/(1 + f)] − 1, (6.7)

where
i = market interest rate (decimal)
i_if = inflation-free interest rate (decimal)
f = inflation rate (decimal).
For example, if the market interest rate is 7.1% and the inflation rate was 2%, what would the inflation-free interest rate be? Using Equation (6.7), one obtains:

i_if = [(1 + 0.071)/(1 + 0.02)] − 1 = [1.05 − 1] = 0.05 or 5%.
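The same conversion is easily scripted; the short Python sketch below (the function names are illustrative assumptions) applies Equation (6.7) in both directions.

def inflation_free_rate(i_market, f_inflation):
    # Equation (6.7): i_if = (1 + i)/(1 + f) - 1
    return (1 + i_market) / (1 + f_inflation) - 1

def market_rate(i_if, f_inflation):
    # Rearranged form: (1 + i) = (1 + i_if)(1 + f)
    return (1 + i_if) * (1 + f_inflation) - 1

print(f"{inflation_free_rate(0.071, 0.02):.2%}")  # 5.00%, as in the example
print(f"{market_rate(0.05, 0.02):.2%}")           # 7.10%, recovering the market rate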
6.6 CURRENCY EXCHANGE CALCULATIONS
The world is an international market and global projects must consider exchange rates involv-
ing different currencies. A project may be estimated in one currency, but performed in another
country with a different currency. The fluctuations in currency rates can be large due to different
inflation rates in the two countries. The exchange rate is the amount of one country's currency that would purchase one unit of another country's currency.
For example, in the year 2015, when one U.S. dollar would purchase approximately 0.80 Euros, a U.S. investor invested $1,000 in Euros. In 2020, the investor decided to convert the Euros back to U.S. dollars and the exchange rate was one U.S. dollar to purchase 0.70 Euros. Consider Currency 1 as the original currency and the currency rate in terms of the amount of Currency 2 per unit of Currency 1. What did the investor receive?

Currency 1 (current value) = Currency 1 (original value) × [Currency Rate 2 (original)/Currency Rate 2 (now)]. (6.8)

Thus,

Currency (now) = $1,000 × [(0.8 Euro/$1)/(0.7 Euro/$1)] = $1,000 × [0.8/0.7] = $1,142.86.
If in the year 2015 one U.S. dollar purchased approximately 15 Pesos, a U.S. investor purchased $1,500 worth of Pesos. In 2020, the investor decided to convert the Pesos back to U.S. dollars and the exchange rate was 1 U.S. dollar to purchase 25 Pesos. What did the investor receive?

Currency (now) = $1,500 × [(15 pesos/$1)/(25 pesos/$1)] = $1,500 × [15/25] = $900.
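Equation (6.8) is straightforward to apply in code; the illustrative Python sketch below (the function name is an assumption) reproduces the Euro and Peso examples.

def converted_value(original_amount, rate_then, rate_now):
    # Equation (6.8): rates are units of Currency 2 per unit of Currency 1.
    # Convert out at the original rate and back at the current rate.
    return original_amount * rate_then / rate_now

print(f"${converted_value(1000, 0.80, 0.70):,.2f}")  # $1,142.86 (Euro example)
print(f"${converted_value(1500, 15.0, 25.0):,.2f}")  # $900.00   (Peso example)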
Thus, the effects of currency exchange rates can be rather large and must be considered in international projects. To reduce the problems, one usually does the cost and budget analysis in the currency with the lower inflation rate, whether that is the currency of the country where the project occurs or of the country where the project is managed and funded.
6.7 SUMMARY
Several types of interest rates have been presented. The interest rate most commonly applied
is the market interest rate, but for long-term projects the inflation-free interest rate is used.
The compound discrete interest rate is most frequently applied in calculations and is the effec-
tive interest most considered. However, the actual interest and the nominal interest rates are
the basis for determining the effective interest rates for discrete compounding and continuous
compounding. The actual dollars and market interest rate are used for most short-term projects
whereas the constant currency and inflation-free interest rate are used for long-term projects.
Currency evaluations are important when the project is being funded in one country and being
constructed in another country as the inflation rates can be quite different.
6.8 REFERENCES
[1] Creese, Robert C. and Kumar, Pradeep, Engineering Economy Basics, 1.5 CEU, p. 147, May 18–19, 2000. 77

[2] Creese, Robert C. and Kumar, Pradeep, Intermediate Engineering Economics, 1.5 CEU, p. 147, June 28–29, 2001.

[3] Creese, Robert C., Engineering economics for engineers, estimators, managers and project managers, AACE International Annual Meeting, 1.6 CEU, p. 94, June 28–29, 2008. 77, 80
[4] Creese, Robert C. and Adithan, M., Strategic Cost Analysis for Project Managers and Engi-
neers, New Academic Science Limited, Tunbridge Wells, UK, p. 187, 2012. 77, 79
6.9 EVALUATIVE QUESTIONS
1. What is the value of $100,000 invested at 10% simple interest per year for 10 years?
2. What is the value of $100,000 invested at 10% annual discrete compound interest rate per
year for 10 years?
3. What is the value of $100,000 invested at 10% annual continuous compound interest per
year for 10 years?
4. The interest rate is 2% per month and the time period is 1 year.
(a) What is the nominal interest rate?
(b) What is the discrete effective interest rate?
(c) What is the continuous effective interest rate?
5. A person invests $50,000 at 9% simple interest for 4 years. At the end of the 4 years, the
entire amount (principal and interest) is invested at 11% discrete compounded annually
for 10 years. What is the amount of the investment after 14 years?
6. The inflation rate is 4% and the inflation free interest rate is 8%, what is the market interest
rate?
7. The market interest rate is 20% and the inflation rate is 5%, what is the inflation free
interest rate?
8. If continuous compound interest is used and the nominal rate is 15%, what is the equivalent
discrete annual effective interest rate?
9. If the discrete annual effective interest rate is 20%, what is the equivalent nominal rate for
the continuous compound interest.
10. If the currency in the country where the project is being constructed (Country C) has an inflation rate of 10% and the country where the project is being managed and funded (Country F) has an inflation rate of 15%, in which currency should the funds of the project be kept and why?

11. The $10,000,000 project in Country B is being funded by Country A. The exchange rate of Country A is 2 currency units per Country B currency unit. It is projected that the exchange rate for the last half of the project ($5,000,000) will drop and the exchange rate of Country A will drop to 1.5 currency units per Country B currency unit. In which country's currency should the project funds be kept and why?
12. Canada had an exchange rate of 0.95 Canadian dollars per U.S. dollar in 2015 and a U.S.
investor spent $10,000 U.S. dollars to purchase Canadian dollars. How many Canadian
dollars did he receive? In 2020 he exchanged those Canadian Dollars for U.S. dollars at
the exchange rate of 0.70 Canadian dollars per U.S. dollar. How many U.S. dollars did he
receive in 2020?
C H A P T E R 7
Basic Mathematical
Relationships for Economic
Calculations
7.1 INTRODUCTION
The following chapters will derive and illustrate the fundamental relationships used for economic evaluations. The derivation of the basic economic formulas is based upon a few algebraic relationships, and applying these relationships will make the understanding of the economic formulas easier. The mathematics involved in the derivations of the formulas in economics is not complex, and some basic references are listed [1, 2]. The following sections will focus on the expressions for the sums of numbers, the arithmetical progression, the geometric progression, and the infinite limit expression.
7.2 SUMS OF NUMBERS
In evaluating projects one typically examines all the revenues in the future time periods and
evaluates them at the beginning of the project to determine the expected cost. Thus, if one
wants to determine the total amount or sum S of an amount z each period for n periods, the
total would be:
S(n) = Σ_{i=1}^{n} z_i = nz. (7.1)
Let us suppose that n = 5 and z = 6, then the sum is

S(5) = 6 + 6 + 6 + 6 + 6 = 5 × 6 = 30.
Now let us assume that instead of the period amount remaining constant that it increases
by the same amount each period (as in a uniform gradient); that is for the second period it is
2z, for the third period it is 3z, and for the nth period it is nz. This is the basic sum of numbers
expression. The total amount S for the n periods would be:
S(n) = Σ_{i=1}^{n} iz = zn(n + 1)/2. (7.2)
Let us suppose that n = 5 and z = 6, which implies the amount for the second period is

S(2) = 2 × 6 = 12,

and the total amount S for the 5 periods would be:

S(5) = 6 + 12 + 18 + 24 + 30 = 90 = 6 × 5 × (5 + 1)/2 = 90.
The arithmetical progression is a modification of the basic sum of numbers where the
increment y is different than the base amount z. The increment y starts in the second period.
The total amount S for the n periods would be:
S(n) = Σ_{i=1}^{n} z + Σ_{i=2}^{n} (i − 1)y = nz + n(n − 1)y/2. (7.3)

Let us suppose that n = 5, z = 6, and y = 3; the total amount S for the n periods would be:
S(5) = 6 + 9 + 12 + 15 + 18 = 60 = 6 × 5 + 5 × 4 × 3/2 = 30 + 30 = 60.
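The three summation formulas can be checked by brute force; the illustrative Python sketch below compares each closed form, Equations (7.1)-(7.3), against a direct term-by-term sum for the examples above.

n, z, y = 5, 6, 3

# Equation (7.1): a constant amount z each period.
assert sum(z for _ in range(1, n + 1)) == n * z == 30

# Equation (7.2): amounts z, 2z, ..., nz.
assert sum(i * z for i in range(1, n + 1)) == z * n * (n + 1) // 2 == 90

# Equation (7.3): base amount z each period plus an increment y starting in period 2.
direct = sum(z + (i - 1) * y for i in range(1, n + 1))   # 6 + 9 + 12 + 15 + 18
assert direct == n * z + n * (n - 1) * y // 2 == 60

print("Equations (7.1)-(7.3) verified for n = 5, z = 6, y = 3.")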
7.3 GEOMETRIC PROGRESSION
The expression used in the derivation of discrete interest economic expressions is the geometric
progression, commonly called the geometric series, when dealing with discrete interest relation-
ships. The discrete relationships are those where there is a discrete payment at a discrete interest
rate at a specific time. This discrete interest rate implies that it is compounded at a fixed time
period, whereas a continuous interest implies that it is compounded continuously over time.
The case of discrete payments and discrete interest rates is the most common of the economic
problems.
The basic mathematical relationship for discrete interest problems is the geometric series
and for continuous interest problems the infinite limits expression is used. The geometric series
expression is:
S = a + aR + aR^2 + aR^3 + aR^4 + ··· + aR^(n−1) = a[(R^n − 1)/(R − 1)], (7.4)
where
S = sum of the series of n terms
a = constant which occurs in all terms
R = ratio between terms
n = number of terms in the sum (including the initial term without the ratio).
Consider the following example where one takes the number 4 (a = 4) and doubles it (R = 2) for 3 additional periods (n = 4) and determines the sum. That is:

S(4) = 4 + (4 × 2) + (4 × 2^2) + (4 × 2^3) = 4 + 8 + 16 + 32 = 60.

If one uses the geometric series expression, there are four terms (the initial term and the 3 additional periods), so
n = 4
R = 2
a = 4

and thus

S(4) = 4 × [2^4 − 1]/[2 − 1] = 4 × [16 − 1]/[2 − 1] = 4 × [15]/[1] = 60.
This becomes very useful when n is large. In most interest calculations, the ratio is R = (1 + i), which is the periodic compounding amount for compound discrete interest calculations. This will be used in the derivation of the discrete compounding factors in the various economic expressions.
Let us consider the investment of $2,000 per year at an annual interest rate of 1% for 5 years. Thus,

a = $2,000
R = 1 + i = 1.01
n = 5
S(n) = a[(R^n − 1)/(R − 1)]
S(5) = $2,000 × [(1.01)^5 − 1]/(1.01 − 1) = 2,000[1.05101 − 1]/[1.01 − 1] = 2,000 × 5.101005 = $10,202.01.
Thus, the total interest over the 5-year period is $202.01 and the effect of compounding is small, only $2.01. If the interest rate were 3% (R = 1.03),
S(n) = a[(R^n − 1)/(R − 1)]
S(5) = $2,000 × [(1.03)^5 − 1]/(1.03 − 1) = 2,000 × 5.309136 = $10,618.27.
The total interest over the 5-year period is $618.27 and the effect of compounding is $18.27, which is much larger than the $2.01 at 1% interest, and indeed more than three times the $6.03 that tripling the rate alone would suggest. The higher the interest rate, the greater the effect of compounding, and a longer time period also increases the compounding effect.
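The geometric series sum, Equation (7.4), is equally easy to verify numerically; this illustrative Python sketch recomputes the two deposit examples and checks the closed form against direct addition.

def geometric_sum(a, R, n):
    # Equation (7.4): a + aR + ... + aR^(n-1) = a(R^n - 1)/(R - 1)
    return a * (R ** n - 1) / (R - 1)

for rate in (0.01, 0.03):
    a, R, n = 2000.0, 1 + rate, 5
    closed = geometric_sum(a, R, n)
    direct = sum(a * R ** k for k in range(n))   # term-by-term sum of the series
    print(f"i = {rate:.0%}: ${closed:,.2f} (direct sum ${direct:,.2f})")
# i = 1%: $10,202.01    i = 3%: $10,618.27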
7.4 INFINITE LIMIT
The infinite limit is used for continuous interest problems, that is, when the interest is compounded continuously over the periods rather than at the end of the discrete time periods. The continuous compounding is also used when continuous cash flows are considered instead of discrete cash flows. However, continuous cash flow analysis is rarely used and will not be presented in detail. The continuous interest expression is based on:
lim_{k→∞} [1 + 1/k]^k = e = 2.718281828.
This can be illustrated by taking values of k such as 1, 2, 3, 100, 1,000, and 10,000 and note that the value approaches the limit of e, which is 2.718 to 3 decimals:

Limit = [1 + 1/1]^1, [1 + 1/2]^2, [1 + 1/3]^3, … [1 + 1/100]^100, … [1 + 1/1000]^1000, … [1 + 1/10000]^10000
Limit = 2, 2.25, 2.37, … 2.704, … 2.71692, … 2.71815.
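The same convergence can be printed with a few lines of Python (an illustrative sketch, not part of the original text):

import math

for k in (1, 2, 3, 100, 1_000, 10_000):
    print(f"k = {k:>6}: (1 + 1/k)^k = {(1 + 1 / k) ** k:.6f}")
print(f"e = {math.e:.6f}")   # the limit, approximately 2.718282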
The infinite limit was used in the derivation of continuous compound interest by letting the number of discrete compounding periods go to infinity; that was:

i_eff = lim_{m→∞} [(1 + r/m)^m − 1] = lim_{m→∞} [{1 + 1/(m/r)}^{(m/r)r} − 1] = e^r − 1, (7.5)

where
i_eff = interest rate (effective interest rate on an annual basis)
r = nominal interest rate.
One can also express the relationship to determine the nominal interest rate equivalent to the annual effective interest rate as:

(i_eff + 1) = e^r (7.6)

or

r = ln(i_eff + 1). (7.7)

Equation (7.5) is frequently used to convert discrete compounding factors into continuous compounding factors, and Equations (7.6) and (7.7) to determine the annual nominal interest rate r from the continuously compounded interest rate.
7.5 SUMMARY
The basic mathematical expressions presented were the sums of numbers, including the basic sum of numbers expression and the arithmetic progression, the geometric progression, and the infinite limit. The geometric series expression will be used in the development of the discrete interest compounding expressions, and the infinite limit will be used to convert the discrete interest formulas to continuous interest formulas. These expressions are relatively simple and are the basis of the economic expressions presented in the following chapters.
7.6 REFERENCES
[1] Hodgman, Charles D., Ed., Mathematical Tables from Handbook of Chemistry and Physics,
10th ed., Chemical Rubber Publishing Co., Cleveland, OH, pp. 294–296, 1954. 87
[2] Creese, Robert C. and Adithan, M., Strategic Cost Analysis for Project Managers and Engi-
neers, New Academic Science Limited, Tunbridge Wells, UK, pp. 36–38, 2012. 87
7.7 EVALUATIVE QUESTIONS
1. What is the sum of the numbers from 1 to 10?
2. What is the sum of the numbers from 40 to 50?
3. A gradient of 3 is to be summed over 10 periods; determine the total sum.
4. A cost of $40 per period is for wages. The cost for materials is $5 per period and increases
at the rate of $1 per period. What is the total wage cost for 10 periods? What is the total
material cost for 10 periods?
5. John earns $5 and the amount is doubled each period for the next 6 periods. What is the sum John will have in his account?

6. Mary earns $10 and the amount is tripled each period for the next 4 periods. What is the sum Mary will have at the end of these 5 periods?
7. Francisco saves $100 per month and his interest rate is 1/2% compounded monthly. What
amount will Francisco have at the end of the year?
8. Juanita has invested 10,000 Pesos in a high risk bond which pays 1% compounded monthly
which is reinvested automatically. What is the expected value of the bond and total interest
at the end of the year?
9. An investment of $2,000 is made and the interest rate is 0.5% per month. At the end of a
year, determine the total amount available if:
(a) simple interest is used.
(b) discrete compound interest is used.
(c) continuous compound interest is used.
Also determine
(d) the nominal interest rate.
(e) the effective discrete compound interest rate.
(f ) the effective continuous compound interest rate.
10. An investment of $2,000 is made and the interest rate is 1.5% per month. At the end of a
year, determine the total amount available if:
(a) simple interest is used.
(b) discrete compound interest is used.
(c) continuous compound interest is used.
Also determine
(d) the nominal interest rate.
(e) the effective discrete compound interest rate.
(f ) the effective continuous compound interest rate.
11. An investment of $2,000 is made and the interest rate is 1.5% per quarter. At the end of
a year, determine the total amount available if:
(a) simple interest is used.
(b) discrete compound interest is used.
(c) continuous compound interest is used.
Also determine
(d) the nominal interest rate.
(e) the effective discrete compound interest rate.
(f ) the effective continuous compound interest rate.
12. An investment of $2,000 is made and the interest rate is 1.5% per quarter. At the end of
five years, determine the total amount available if:
(a) simple interest is used.
(b) discrete compound interest is used.
(c) continuous compound interest is used.
(d) the nominal interest rate.
(e) the effective discrete compound interest rate.
(f ) the effective continuous compound interest rate.
C H A P T E R 8
Basic Economic Factors and Equations
8.1 INTRODUCTION
There are several expressions utilized in economics to determine the worth of various payment types over time for the evaluation of projects. These are evaluated as Present Worth values, which represent the value of the payments at the start or time zero, or Future Worth values, which represent the value of the payments at a specific future time, typically at the end of the project life. The payments can be a single payment or a uniform series of payments. The expressions are generally divided into two categories: the basic expressions and the gradient expressions. The basic expressions will be examined in this chapter and the more complex gradient expressions will be examined in the next chapter. The primary reference for this section is from a previous short course and a previous book [1], but the materials can be found in numerous standard engineering economics textbooks [2, 3] giving slightly different approaches and additional problems. Some of the expressions developed in this and the following chapters are available using Excel [4, 5], but one should first use and program the expressions before relying on packaged software expressions.
The two categories of the basic economic expressions are classified by the type of
payments—the single payment expressions and the uniform payment expressions. There are two
single payment expressions and four uniform payment expressions and all six will be presented
in this chapter. The expressions will be first developed for the discrete compound interest rate
and discrete payment case. The developed expressions will then be modified for the continu-
ous compound interest rate with discrete payment case. The highly advanced case of continuous
compound interest rate and continuous payments will not be presented as it is currently rarely
applied.
8.2 SINGLE PAYMENT DISCRETE INTEREST FACTORS
The two single payment cases developed with discrete interest and discrete payments are the
future worth expression and the present worth expression. The future worth expression will be
presented first as it is easier to illustrate.
8.2.1 DISCRETE INTEREST FUTURE WORTH FACTOR (F/P, i, n)
The future worth expression converts a present amount to a future worth for a given interest rate
and given number of compounding periods.
The notation used will be:

F = future worth (total amount after n compounding periods)
P = present worth (total amount at time zero, the beginning of the study period)
i = interest rate per compounding period (discrete compound interest)
I = interest earned in the compounding period
n = number of compounding periods
↑ = point at which a cash flow is occurring.

A graph indicating where the present worth and future worth are with respect to time appears in Figure 8.1.
Figure 8.1: Present and future worth with respect to time.
Table 8.1 illustrates the calculations starting with the investment P (Present Worth) at
the start of the time period and Future Worth F at the end of the time period. This figure is
similar to that in Chapter 6 to obtain Equation (6.3). In most instances the future worth is
greater than the present worth.
Thus, from Table 8.1 it can be seen that the value at the end, the Future Worth Equation, is:

F = P × (1 + i)^n, (8.1)

where
F = future worth (total amount after n compounding periods)
P = present worth (total amount at time zero, the beginning of the study period)
i = interest rate per compounding period (discrete compound interest)
n = number of compounding periods.
The conversion factor to convert the present worth to the future worth is called the single payment future worth discrete compound amount factor, is designated as (F/P, i, n), is stated as F given P, i, n, and is equal to (1 + i)^n. The discrete future worth factor is:

(F/P, i, n) = (1 + i)^n. (8.2)
Table 8.1: Payment illustration for future worth expression derivation

Time (End of Period) | Present Worth (P) at Start of Period | Interest (I) Earned During Period | Future Worth (Principal + Interest) at End of Period
1 | P | + iP | = P(1 + i)
2 | P(1 + i) | + iP(1 + i) | = P(1 + i)^2
3 | P(1 + i)^2 | + iP(1 + i)^2 | = P(1 + i)^3
4 | P(1 + i)^3 | + iP(1 + i)^3 | = P(1 + i)^4
n | P(1 + i)^(n−1) | + iP(1 + i)^(n−1) | = P(1 + i)^n = F
8.2.2 DISCRETE INTEREST FUTURE WORTH EXAMPLE
If a single payment of $10,000 is invested at 15% interest compounded annually, the compound amount at the end of the 4th year would be:

F = $10,000(1 + 0.15)^4 = $10,000 × 1.74901 = $17,490

and

(F/P, i = 15%, n = 4) = (1 + i)^n = (1 + 0.15)^4 = 1.74901,

so

F = P × (F/P, i = 15%, n = 4) = $10,000 × 1.74901 = $17,490.
The (F/P, i, n) factor formula is given in Table 8.2 and the Appendix, and one should use the formulas to calculate the values. Students should learn to calculate and program the formulas rather than use tables, as many interest rates are not available in the tables.
The total interest earned, I (Total), would be the difference between F and P and would
be:
I(Total) = F − P = $17,490 − $10,000 = $7,490.
8.2.3 DISCRETE INTEREST PRESENT WORTH FACTOR (P/F, i, n)
The present worth expression is the inverse of the future worth expression; that is, the expression
is:
P = F(1 + i)^(−n) = F/(1 + i)^n, (8.3)

where
P = present worth (initial amount at time zero)
F = future worth (total amount after n time periods)
i = discrete interest rate per compounding period
n = number of compounding periods.
The conversion factor to convert the future worth to the present worth is called the single payment present worth compound amount factor, is designated as (P/F, i, n), is stated as P given F, i, n, and is equal to (1 + i)^(−n) or 1/(1 + i)^n. That is:

(P/F, i, n) = (1 + i)^(−n). (8.4)
8.2.4 DISCRETE PRESENT WORTH EXAMPLE
If $10,000 is desired at the end of 4 years and the discrete interest rate is 15% compounded
annually, what amount would one need to be deposited initially?
P = F/(1 + i)^n;

therefore

P = $10,000/(1 + 0.15)^4 = $10,000/1.74901 = $5,718

and

(P/F, i, n) = 1/(1 + i)^n = 1/(1 + 0.15)^4 = 0.57175,

so

P = $10,000 × (P/F, i = 15%, n = 4) = $10,000 × 0.57175 = $5,717.50 ≈ $5,718.
The (P/F, i, n) factor is listed in Table 8.2 at the end of this chapter and in the Appendix. The present worth analysis is often referred to as "discounting," that is, bringing future values back to the present value. As the example indicates, a future worth of $10,000 at the end of four years is worth only $5,718 when discounted by 15% per year to the current year.
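The two single payment factors are one-line functions; the illustrative Python sketch below reproduces the $10,000, 15%, 4-year future worth and present worth examples (the function names are assumptions, not from the text).

def fp_factor(i, n):
    # Single payment compound amount factor, Equation (8.2): (F/P, i, n) = (1 + i)^n
    return (1 + i) ** n

def pf_factor(i, n):
    # Single payment present worth factor, Equation (8.4): (P/F, i, n) = (1 + i)^-n
    return (1 + i) ** -n

i, n = 0.15, 4
print(f"(F/P, 15%, 4) = {fp_factor(i, n):.5f}; F = ${10_000 * fp_factor(i, n):,.0f}")
print(f"(P/F, 15%, 4) = {pf_factor(i, n):.5f}; P = ${10_000 * pf_factor(i, n):,.0f}")
# (F/P, 15%, 4) = 1.74901; F = $17,490
# (P/F, 15%, 4) = 0.57175; P = $5,718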
8.3 UNIFORM SERIES PAYMENTS DISCRETE INTEREST
FACTORS
The four uniform payment factors with discrete interest payments to be developed are the uniform series future worth factor (F/A, i, n), the sinking fund factor (A/F, i, n), the uniform series present worth factor (P/A, i, n), and the capital recovery factor (A/P, i, n). All of these factors involve the payment of an amount A at the end of every period. In the development of the various economic expressions, the assumption is made that the discrete payment is at the end of the payment period, not at the beginning of the payment period. A simple modification can be made to convert end of period payments to beginning of the period payments.
8.3.1 UNIFORM SERIES DISCRETE INTEREST FUTURE WORTH
FACTOR (F/A, i, n)
(Also called Discrete Compound Amount Factor.)
Payments of the amount A are made at the end of the period for n periods (but that does not
include time 0) to determine a total amount F at the end of the last period. The discrete interest
rate i will be compounded at the end of each period and the payment sum plus the interest
accumulated will result in a future worth of F at the end of the n periods. This includes a period
payment A in the final period to complete the future worth value of F as the system is based
upon end-of-period payments. The geometric series expression is used to develop the formulas.
A graphical representation of the payments and future worth appears in Figure 8.2.
Figure 8.2: A graphical representation of the payments and future worth, where A = uniform payment each period, F = future worth (amount at time n), i = interest rate per compounding period, and n = number of compounding periods.
The expression can be derived using the geometric series expression by summing the payments starting from the last period back to the first period:

F = A (end of last period) + A(1 + i) + A(1 + i)^2 + ··· + A(1 + i)^(n−1) (end of first period)
  = A[1 + (1 + i) + (1 + i)^2 + ··· + (1 + i)^(n−1)]   (a geometric series).

If one lets r = (1 + i), one has

F = A[(r^n − 1)/(r − 1)] = A[(1 + i)^n − 1]/[(1 + i) − 1] = A[(1 + i)^n − 1]/[i]

and thus

F = A[(1 + i)^n − 1]/[i] = A(F/A, i, n) (8.5)

(F/A, i, n) = [(1 + i)^n − 1]/[i]. (8.6)
8.3.2 UNIFORM SERIES DISCRETE INTEREST FUTURE WORTH
EXAMPLE
What is the future worth of $1,000 deposited at the end of the year each year for 10 years when
the interest is 10%?
F = $1,000 × [(1 + 0.10)^10 − 1]/[0.10] = $1,000 × [1.5937/0.10] = $1,000 × [15.937] = $15,937.

The total amount of interest earned over the 10 years would be:

I(total) = F − nA = 15,937 − 10 × 1,000 = $5,937.
The compound amount factor [F/A, i = 10%, n = 10] = 15.937 may be listed in the tables, but if one had an interest rate of 9.8% it would not be available, and thus one should be able to use the compound amount factor expression:

[F/A, i = 9.8%, n = 10] = [(1 + 0.098)^10 − 1]/[0.098] = 15.785.
Thus,

F = A × (F/A, i, n) = $1,000 × [F/A, i = 9.8%, n = 10] = $1,000 × 15.785 = $15,785.
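A quick illustrative Python check of the compound amount factor for both interest rates used above (the function name is an assumption):

def fa_factor(i, n):
    # Uniform series compound amount factor, Equation (8.6): [(1 + i)^n - 1]/i
    return ((1 + i) ** n - 1) / i

for i in (0.10, 0.098):
    factor = fa_factor(i, 10)
    print(f"(F/A, {i:.1%}, 10) = {factor:.3f}; F = ${1000 * factor:,.0f}")
# (F/A, 10.0%, 10) = 15.937; F = $15,937
# (F/A, 9.8%, 10)  = 15.785; F = $15,785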
8.3.3 SINKING FUND DISCRETE INTEREST FACTOR (A/F, i, n)
The sinking fund determines the amount of A to obtain a desired amount F in the future. The
sinking fund factor involves the same terms as the compound amount factor and the graphical
representation of the payments and future worth is the same and is repeated as in Figure 8.3.
Figure 8.3: A graphical representation of the payments and future worth.
The Sinking Fund Factor is the inverse of the Compound Amount Factor and thus it would be:

(A/F, i, n) = 1/(F/A, i, n).

Therefore, the Sinking Fund Equation is:

A = F × {i/[(1 + i)^n − 1]} (8.7)
and

(A/F, i, n) = 1/{[(1 + i)^n − 1]/[i]} = i/[(1 + i)^n − 1], (8.8)

where
A = uniform payment in each period n
F = future worth (amount at time n)
i = interest rate
n = number of compounding periods.
8.3.4 SINKING FUND DISCRETE INTEREST FACTOR EXAMPLE
Melania wants to have $1,000 at the end of 10 years, so what amount would she have to save
at the end of each year if the interest rate over the 10-year period is 15% to obtain the desired
$1,000?
A = $1,000 × {0.15/[(1 + 0.15)^10 − 1]} = $1,000 × [0.15/(3.046)] = $1,000 × (0.04925) = $49.25.

Thus, (A/F, i = 15%, n = 10) = 0.04925.
(cid:140)0:15=.3:046/(cid:141)
Thus, Melania must deposit $49.25 at the end of each year for 10 years to have $1,000 at
the end of the ten year period. Her final payment at period n is necessary to make the total one
thousand dollars.
8.3.5 UNIFORM SERIES DISCRETE INTEREST PRESENT WORTH
FACTOR (P/A, i, n)
The uniform series present worth factor is an extension of the present worth expression to have
payments in each period and not only in the last period. The uniform series present worth factor
is used to convert a uniform series of n payments of the amount A at an interest rate i to a present
worth amount P. A graphical representation of the payments and the present worth is presented in Figure 8.4.
Figure 8.4: Payments and the present worth.
To determine the present value P, each payment A must be discounted at the interest rate i by the number of periods it occurs in the future. The discount factor is (1 + i)^(−n) for each period, where n is the specific period for that particular payment. The expression can be derived using the geometric series expression by:
P = A/(1 + i) (A discounted 1 period) + A/(1 + i)^2 (A discounted 2 periods) + A/(1 + i)^3 + A/(1 + i)^4 + ··· + A/(1 + i)^n (A discounted n periods)

P = A/(1 + i) × [1 + 1/(1 + i) + 1/(1 + i)^2 + 1/(1 + i)^3 + 1/(1 + i)^4 + ··· + 1/(1 + i)^(n−1)]   [geometric series].
Now using the geometric series equation of Equation (7.4), where a = A/(1 + i), S = P, and R = (1 + i)^(−1) or 1/(1 + i), one obtains:

S = P = a[(R^n − 1)/(R − 1)]
P = [A/(1 + i)] × [{1/(1 + i)^n − 1}/{1/(1 + i) − 1}]
P = [A/(1 + i)] × [{(1 + i)^n − 1}/(1 + i)^n] × [(1 + i)/i]
P = A × [{(1 + i)^n − 1}/(i(1 + i)^n)], (8.9)
where
A = uniform payment each period n
P = present worth (amount at time zero)
i = interest rate
n = number of compounding periods.
The uniform series present worth factor is thus:

(P/A, i, n) = [{(1 + i)^n − 1}/(i(1 + i)^n)]. (8.10)
8.3.6 UNIFORM SERIES DISCRETE INTEREST PRESENT WORTH
EXAMPLE
Barack won a lottery which promised $20,000,000 paid as $1,000,000 at the end of each year for 20 years. If the interest rate is 15%, what is the present worth of Barack's lottery winnings?
P = A × (P/A, i = 15%, n = 20)
P = $1,000,000 × [(1.15)^20 − 1]/[0.15 × (1.15)^20] = $1,000,000 × (6.2593) = $6,259,300.
The total present worth amount of the lottery is less than one-third of the listed
$20,000,000 prize amount when the interest rate is 15%. From the calculations, one notes that:
(P/A, i = 15%, n = 20) = 6.2593.
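The lottery example in a short illustrative Python sketch (the product differs slightly from the rounded $6,259,300 in the text only because the factor is carried to more decimals):

def pa_factor(i, n):
    # Uniform series present worth factor, Equation (8.10):
    # [(1 + i)^n - 1] / [i (1 + i)^n]
    return ((1 + i) ** n - 1) / (i * (1 + i) ** n)

A, i, n = 1_000_000, 0.15, 20
factor = pa_factor(i, n)
print(f"(P/A, 15%, 20) = {factor:.4f}")  # 6.2593
print(f"P = ${A * factor:,.0f}")         # $6,259,331 with the unrounded factor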
8.3.7 CAPITAL RECOVERY DISCRETE INTEREST FACTOR (A/P, i, n)
The Capital Recovery Discrete Interest Factor is the inverse of the Uniform Series Present Worth
Discrete Interest Factor and has the same graphical representation as the Uniform Series Present
Worth Factor. It is the amount that one must have initially in order to receive an amount A at
the end of each year for n years when the interest rate is i during the n year period (Figure 8.5).
Figure 8.5: Uniform series present worth factor.
The solution is to determine the amount of the payment A based upon the values of P; i,
and n. The Capital Recovery Factor is the inverse of the Uniform Series Present Worth Factor
and is:
A = P × [i(1 + i)^n]/[(1 + i)^n − 1], (8.11)
where
A = uniform payment each period n (does not include time zero)
P = present worth (amount at time zero)
i = interest rate
n = number of compounding periods.
The capital recovery factor is thus:

(A/P, i, n) = [i(1 + i)^n]/[(1 + i)^n − 1]. (8.12)
8.3.8 CAPITAL RECOVERY DISCRETE INTEREST FACTOR EXAMPLE
Michelle has purchased a new automobile for $50,000 after a down payment of $5,000, that is,
the total cost was $55,000. Her loan is for $50,000 for a period of 3 years at an interest rate of
15%/year. She wants to pay off the loan at the end of 3 years.
(a) If annual payments are made, what is the annual payment?
A = $50,000 × [0.15(1.15)^3]/[(1.15)^3 − 1] = $50,000 × [0.437980] = $21,899
(b) The total interest paid over the three year period is:
I(total) = nA − P = 3 × $21,899 − $50,000 = $15,697.
(c) If monthly payments are made, what is the monthly payment?
The interest rate would be 1.25% (15%/12) per month and n = 36:

A = $50,000 × [0.0125(1.0125)^36]/[(1.0125)^36 − 1] = $1,733.27.
(d) The total interest paid over the 36 monthly payments would be

I(total) = nA − P = 36 × $1,733.27 − $50,000 = $62,398 − $50,000 = $12,398.
Michelle could save $3,299 in interest by paying monthly instead of yearly.
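The two payment plans can be compared with the capital recovery factor in a few lines; this illustrative Python sketch (function name is an assumption) reproduces the annual and monthly payments and the interest totals.

def ap_factor(i, n):
    # Capital recovery factor, Equation (8.12): [i (1 + i)^n] / [(1 + i)^n - 1]
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

P = 50_000
annual = P * ap_factor(0.15, 3)          # one payment per year for 3 years
monthly = P * ap_factor(0.15 / 12, 36)   # 1.25% per month for 36 months

print(f"Annual payment:  ${annual:,.2f};  total interest ${3 * annual - P:,.2f}")
print(f"Monthly payment: ${monthly:,.2f}; total interest ${36 * monthly - P:,.2f}")
# roughly $21,899/yr with $15,697 interest; roughly $1,733/mo with $12,398 interest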
8.4 SINGLE PAYMENT CONTINUOUS INTEREST FACTORS
As indicated in Chapter 7, the effective interest expressions can be expressed in terms of the
nominal interest r. Let the effective interest be expressed as i, then the expressions become:
i = e^r − 1 (8.13)

and

(1 + i) = e^r (8.14)

and similarly

(1 + i)^n = e^(rn). (8.15)
8.4.1 CONTINUOUS INTEREST FUTURE WORTH SINGLE PAYMENT
FACTOR (F/P, r, n)
The discrete future worth equation and discrete future worth factor from Section 8.2.1 were:

F = P(1 + i)^n (8.1)

and

(F/P, i, n) = (1 + i)^n. (8.2)

The continuous future worth equation and continuous future worth factor, using Equation (8.15) with the discrete future worth equation and factor, would be:

F = P e^(rn) (8.16)
and

(F/P, r, n) = e^(rn), (8.17)

where
F = future worth (total amount after n time periods)
P = present worth (initial amount at time zero)
r = continuous interest rate per compounding period
n = number of compounding periods.
8.4.2 CONTINUOUS INTEREST FUTURE WORTH SINGLE PAYMENT
EXAMPLE
George has purchased a precious painting for $10,000 and expects it to appreciate in value at
a nominal interest rate of 15% per year compounded continuously over the next 4 years. What
would be the value of the painting at the end of 4 years that George would be expecting?
F = P e^(rn)
F = $10,000 × e^(0.15 × 4) = $10,000 × 1.82212 = $18,221

and

(F/P, 15%, 4) = e^(0.15 × 4) = 1.8221.
If George is correct in his purchase assumptions, he would have a gain of $8,221 on his
investment. Note that if discrete compounding was used, his gain would have been $7,490 and
if simple interest was used his gain would have been only $6,000.
8.4.3 CONTINUOUS INTEREST PRESENT WORTH SINGLE PAYMENT
FACTOR (P/F, r, n)
The discrete present worth factor and discrete present worth equations from Section 8.2.3 were:
P = F(1 + i)^(−n)

and

(P/F, i, n) = (1 + i)^(−n).

The continuous present worth factor and continuous present worth equations, using Equation (8.15) and the discrete present worth equation and discrete present worth factor, would be:

P = F e^(−rn) (8.18)

and

(P/F, r, n) = e^(−rn). (8.19)
8.4.4 CONTINUOUS INTEREST PRESENT WORTH SINGLE PAYMENT
EXAMPLE
Laura wants to have $10,000 for the purchase of a new car in 4 years as her current car will be lucky
to last that long. Her Aunt Barbara wants to buy a precious book from her. She wants to sell the
book at a price that would appreciate to $10,000 at the end of 4 years and has an opportunity to
invest in a bank note that would pay 15% interest with continuous compounding over the next
4 years. What is the price she needs to sell her precious book at to have the $10,000 in 4 years?
rn
F e(cid:0)
$10;000e(cid:0)
$10;000e(cid:0)
$10;000
$5;488:1
(cid:2)
P
P
D
D
D
D
D
0:15
4
(cid:2)
0:6
0:54881
and
.P =F; r; n/
rn
e(cid:0)
D
D
e(cid:0)
0:15
4
(cid:2)
D
0:54881:
If Laura can persuade Aunt Barbara to pay $5,488 for the book now and invest in the
bank note for 4 years, she would have her desired $10,000. Note that if discrete interest was
used instead of continuous, she would have needed Aunt Barbara pay $5,718 for the book.
8.5 UNIFORM SERIES PAYMENTS CONTINUOUS INTEREST FACTORS

The factors for uniform series payments with continuous interest are more complex than the single payment continuous interest factors, but they can be easily obtained from the uniform series payments discrete interest factors. They are illustrated by examples in the following sections.
8.5.1 UNIFORM SERIES CONTINUOUS INTEREST FACTORS–FUTURE WORTH, SINKING FUND, PRESENT WORTH, AND CAPITAL RECOVERY

The uniform series discrete future worth factor in Section 8.3.1 was:

(F/A, i, n) = [(1 + i)^n - 1] / [i].

Using Equations (8.13) and (8.15) and the nominal interest rate r replacing the discrete interest rate i, the uniform series continuous future worth factor becomes:

(F/A, r, n) = [e^(rn) - 1] / [e^r - 1].    (8.20)

The uniform series discrete sinking fund factor (the reciprocal of the uniform series future worth factor) in Section 8.3.3 was:

(A/F, i, n) = [i] / [(1 + i)^n - 1].

Using Equations (8.13) and (8.15) and the nominal interest rate r replacing the discrete interest rate i, the uniform series continuous sinking fund factor becomes:

(A/F, r, n) = [e^r - 1] / [e^(rn) - 1].    (8.21)

The uniform series discrete present worth factor in Section 8.3.5 was:

(P/A, i, n) = [(1 + i)^n - 1] / [i(1 + i)^n].

Using Equations (8.13) and (8.15) and the nominal interest rate r replacing the discrete interest rate i, the uniform series continuous present worth factor becomes:

(P/A, r, n) = [e^(rn) - 1] / [(e^r - 1) e^(rn)].    (8.22)

Finally, the uniform series discrete capital recovery factor in Section 8.3.7 was:

(A/P, i, n) = [i(1 + i)^n] / [(1 + i)^n - 1].

Using Equations (8.13) and (8.15) and the nominal interest rate r replacing the discrete interest rate i, the uniform series continuous capital recovery factor becomes:

(A/P, r, n) = [(e^r - 1) e^(rn)] / [e^(rn) - 1].    (8.23)
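For computation, the four continuous-interest uniform series factors of Equations (8.20)-(8.23) translate directly into code. The sketch below is illustrative only; the function names are not from the text:

```python
import math

def FA(r, n):   # uniform series future worth factor, Eq. (8.20)
    return (math.exp(r * n) - 1) / (math.exp(r) - 1)

def AF(r, n):   # sinking fund factor, Eq. (8.21)
    return (math.exp(r) - 1) / (math.exp(r * n) - 1)

def PA(r, n):   # present worth factor, Eq. (8.22)
    return (math.exp(r * n) - 1) / ((math.exp(r) - 1) * math.exp(r * n))

def AP(r, n):   # capital recovery factor, Eq. (8.23)
    return ((math.exp(r) - 1) * math.exp(r * n)) / (math.exp(r * n) - 1)

# The pairs (F/A, A/F) and (P/A, A/P) are reciprocals, which gives a quick sanity check.
assert abs(FA(0.01, 60) * AF(0.01, 60) - 1) < 1e-12
assert abs(PA(0.01, 60) * AP(0.01, 60) - 1) < 1e-12
```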
8.5.2 UNIFORM SERIES CONTINUOUS INTEREST FUTURE WORTH (F/A, r, n) EXAMPLE

Uncle Bill wants to take a trip to Morgantown, WV from Hope, AK to watch a football game. The bus fare is $400 and he plans to put $100 per month into an account for 6 months (that is, 1/2 year). He plans to use the additional monies for his room, meals, and the $85 football ticket. If the account pays continuous annual interest of 6% (which is 1/2% per month), how much money will he have to spend on his room and meals? Using Equation (8.20), the total future amount would be:

(F/A, r, n) = [e^(rn) - 1] / [e^r - 1].    (8.20)

Therefore,

F = A [e^(rn) - 1] / [e^r - 1]    (8.24)
  = 100 [e^(0.005 × 6) - 1] / [e^(0.005) - 1]
  = 100 [e^(0.03) - 1] / [e^(0.005) - 1]
  = $100 [1.030454 - 1] / [1.0050125 - 1]
  = $100 × 6.0756
  = $607.56.
Note that the low interest rate and short period give a small amount of interest, only $7.56. Therefore, the money Uncle Bill would have for spending on his room and meals is:

$607.56 - $400 - $85 = $122.56.

If one had instead used r = 6%/year and n = 1/2 year, then:

F = 100 [e^(0.06 × 1/2) - 1] / [e^(0.06) - 1] = 100 [0.03045 / 0.0618] = 100 × 0.4925 = $49.25,

which is wrong. This example intentionally used a low interest rate and short time period to indicate that the length of the compounding period and the nominal interest rate period must be the same, which here is one month with the corresponding nominal interest of 0.5% (0.005) per month.
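The same point can be made in a couple of lines of Python; this sketch simply evaluates Equation (8.20) with matched and mismatched periods (the function name is illustrative):

```python
import math

def FA(r, n):
    """Uniform series continuous future worth factor, Eq. (8.20)."""
    return (math.exp(r * n) - 1) / (math.exp(r) - 1)

F_correct = 100 * FA(0.005, 6)   # monthly rate with 6 monthly deposits: about $607.56
F_wrong = 100 * FA(0.06, 0.5)    # annual rate with "half a deposit": about $49.25, meaningless here
print(round(F_correct, 2), round(F_wrong, 2))
```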
8.5.3 UNIFORM SERIES CONTINUOUS INTEREST SINKING FUND (A/F, r, n) EXAMPLE

Lady Hillary has an option to buy a business in five years for $10,000,000. Her account pays a nominal interest rate of 12% per year. She plans to deposit money monthly in the account and wants to know what amount she must pay monthly to achieve the $10,000,000 at the end of 5 years. For monthly deposits, the interest must be compounded monthly and r = 1% per month.

(A/F, r, n) = [e^r - 1] / [e^(rn) - 1].    (8.21)

Therefore,

A = F [e^r - 1] / [e^(rn) - 1]    (8.25)
  = $10,000,000 [e^(0.01) - 1] / [e^(0.01 × 60) - 1]
  = $10,000,000 [1.010050 - 1] / [1.8221188 - 1]
  = $10,000,000 [0.0122247]
  = $122,247.

Thus, Lady Hillary must deposit $122,247 at the end of every month to have her $10,000,000. Note that if there were no interest, the monthly payments would be $166,666 instead of the $122,247 per month.

This example intentionally used a high interest rate and long time period, but the length of the compounding period and the nominal interest rate period must still be the same. Thus, the 5 years of monthly payments indicates 60 monthly payments, and the nominal interest rate per month is 1%.
8.5.4 UNIFORM SERIES CONTINUOUS INTEREST PRESENT WORTH (P/A, r, n) EXAMPLE

Ronnie won the $240,000,000 lottery, which will pay $12,000,000 per year for 20 years, or he can take a single payment now, which is discounted at an annual nominal interest rate of 9% per year. What is the amount that he would receive as a single payment? These would be beginning-of-year payments, so he would receive the first payment at time zero and then 19 more end-of-year payments:

(P/A, r, n) = [e^(rn) - 1] / [e^(rn) (e^r - 1)].    (8.22)

Therefore,

P = A [e^(rn) - 1] / [e^(rn) (e^r - 1)].    (8.26)

For this problem a payment must be made at time zero and then the following 19 end-of-period payments (which represent the remaining 19 beginning-of-year payments) result in:

P = A + A {[e^(rn) - 1] / [e^(rn) (e^r - 1)]}
  = $12,000,000 {1 + [e^(0.09 × 19) - 1] / [e^(0.09 × 19) (e^(0.09) - 1)]}
  = $12,000,000 {1 + [5.52896 - 1] / [5.52896 (1.09417 - 1)]}
  = $12,000,000 {1 + 8.75568}
  = $117,068,160.

Note that the present worth is less than half of the lottery amount when the interest rate is 9% compounded continuously.
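A brief Python sketch of the same structure, a time-zero payment plus 19 discounted end-of-year payments, is shown below; because the hand calculation above uses rounded intermediate factors, the computed value may differ slightly from the printed figure:

```python
import math

def PA(r, n):
    """Uniform series continuous present worth factor, Eq. (8.22)."""
    return (math.exp(r * n) - 1) / ((math.exp(r) - 1) * math.exp(r * n))

A, r = 12_000_000, 0.09
P = A * (1 + PA(r, 19))    # payment at time zero plus 19 end-of-year payments
print(round(P))            # on the order of $116-117 million
```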
8.5.5 UNIFORM SERIES CONTINUOUS INTEREST CAPITAL RECOVERY FACTOR (A/P, r, n) EXAMPLE

Queen Nancy has decided to purchase the Island of Happiness in the Ocean of Calm Waters for 2 billion dollars over 20 years at a 6% annual nominal interest rate compounded continuously. What would her annual end-of-year payments be to repay the 2 billion loan?

The factor (A/P, r, n) is:

(A/P, r, n) = [e^(rn) (e^r - 1)] / [e^(rn) - 1].    (8.23)

Therefore,

A = P [e^(rn) (e^r - 1)] / [e^(rn) - 1]    (8.27)
  = $2,000,000,000 {[e^(0.06 × 20) (e^(0.06) - 1)] / [e^(0.06 × 20) - 1]}
  = $2,000,000,000 {[3.320117 (1.0618365 - 1)] / [3.320117 - 1]}
  = $2,000,000,000 × 0.088037472
  = $176,074,944.
The total of the 20 yearly payments of $176 million will be approximately 3.52 billion, which implies that 1.52 billion is paid in interest over the life of the investment, more than three-quarters of the original loan value.
8.6 SUMMARY

This chapter has used the mathematical relationships of Chapter 7 to develop the discrete interest expressions for the single cash flow present worth and future worth and the discrete interest uniform series expressions for future worth, sinking fund, present worth, and capital recovery. Example problems using each of the formulas were presented. The factors developed for the discrete interest expressions were then converted to the continuous interest expressions. Example problems using the nominal interest were presented, and the need to base the nominal interest on the compounding period used was emphasized. A summary of the formulas is presented in Table 8.2 at the end of this chapter.
Table 8.2: Discrete and continuous compounding factors of the basic economic expressions

Notation: P = present worth; F = future worth; A = uniform end-of-period payment; n = number of compounding periods; i = discrete interest rate per compounding period; r = continuous interest rate per compounding period. Note: i = e^r - 1 and (1 + i)^n = e^(rn).

A. Single payment (discrete interest)
  Present worth:    find P given F   (P/F, i, n) = (1 + i)^(-n)
  Future worth:     find F given P   (F/P, i, n) = (1 + i)^n

B. Uniform payment or uniform series (discrete interest)
  Sinking fund:     find A given F   (A/F, i, n) = i / [(1 + i)^n - 1]
  Capital recovery: find A given P   (A/P, i, n) = [i(1 + i)^n] / [(1 + i)^n - 1]
  Future worth:     find F given A   (F/A, i, n) = [(1 + i)^n - 1] / i
  Present worth:    find P given A   (P/A, i, n) = [(1 + i)^n - 1] / [i(1 + i)^n]

C. Single payment (continuous interest)
  Present worth:    find P given F   (P/F, r, n) = e^(-rn)
  Future worth:     find F given P   (F/P, r, n) = e^(rn)

D. Uniform payment or uniform series (continuous interest)
  Sinking fund:     find A given F   (A/F, r, n) = (e^r - 1) / (e^(rn) - 1)
  Capital recovery: find A given P   (A/P, r, n) = [e^(rn)(e^r - 1)] / (e^(rn) - 1)
  Future worth:     find F given A   (F/A, r, n) = (e^(rn) - 1) / (e^r - 1)
  Present worth:    find P given A   (P/A, r, n) = (e^(rn) - 1) / [e^(rn)(e^r - 1)]
8.7 REFERENCES

[1] Creese, Robert C. and Adithan, M., Strategic Cost Analysis for Project Managers and Engineers, New Academic Science Limited, Tunbridge Wells, UK, pp. 39–48, 2012.

[2] Park, Chan S., Contemporary Engineering Economics, 2nd ed., Addison-Wesley, Menlo Park, CA, p. 803, 1997.

[3] Newnan, Donald G., Eschenbach, Ted G., and Lavelle, Jerome P., Engineering Economic Analysis, 11th ed., Oxford University Press, New York, p. 655, 2012.

[4] Mehta, Merwan B., Applied Engineering Economics Using Excel, Industrial Press, Inc., South Norwalk, CT, p. 260, 2016.

[5] Whitman, David L. and Terry, Ronald E., Fundamentals of Engineering Economics and Decision Analysis, Morgan & Claypool Publishers, San Rafael, CA, p. 219, 2012.
8.8 EVALUATIVE QUESTIONS

1. If the time period is 5 years and the interest rate is 10%, calculate the following values:

(a) (P/A, i = 10%, n = 5) =
(b) (F/A, i = 10%, n = 5) =
(c) (A/P, i = 10%, n = 5) =
(d) (A/F, i = 10%, n = 5) =
(e) (P/F, i = 10%, n = 5) =
(f) (F/P, i = 10%, n = 5) =
2. If the time period is 5 years and the interest rate is 7.5%, calculate the following values:

(a) (P/A, i = 7.5%, n = 5) =
(b) (F/A, i = 7.5%, n = 5) =
(c) (A/P, i = 7.5%, n = 5) =
(d) (A/F, i = 7.5%, n = 5) =
(e) (P/F, i = 7.5%, n = 5) =
(f) (F/P, i = 7.5%, n = 5) =
3. Engineer Jimmy wants to retire in 20 years and would like to have 1 million Euros at that
time. If the interest rate is expected to be 5% over the next 20 years, what annual amount
would he need to save?
4. Rosalynn has purchased a house with a loan of $500,000 Kroner. If the loan interest rate
is 5%, what will be her annual payments over the life of the 20-year loan?
5. Gerald has decided to save for a trip in two years. If the monthly interest rate is 1% and
he saves Rs. 2,500/month, how much will he have saved after 2 years?
6. Yi has decided to purchase a new moped and the discrete interest rate is 1/2% per month
and the purchase price is 20,000 yuan. The payments will be at the end of the month; what
is her expected monthly payment over the 3-year period?
7. Vladimir has purchased a new car and the discrete interest rate is 2% per month and the
purchase price is 100,000 rubles.
(a) What is the expected monthly payment over a 4-year period?
(b) What is the total interest paid over the 4-year period?
8. Marlene won the Irish Sweepstakes of 20 million Irish pounds. The prize is actually 2 million pounds per year for 10 years.

(a) If the payments are 10 beginning-of-year payments and the discrete interest rate is 5%, what is the equivalent total amount she would receive if she took a single payment? Note: she receives the first payment at time zero plus 9 end-of-year payments.

(b) If the payments are 10 end-of-year payments and the discrete interest rate is 5%, what is the equivalent amount she would receive each year if she converts the payments to beginning-of-year payments?
9. If the time period is 5 years and the nominal continuous interest rate is 10%, calculate the values of the following factors:

(a) (P/A, r = 10%, n = 5) =
(b) (F/A, r = 10%, n = 5) =
(c) (A/P, r = 10%, n = 5) =
(d) (A/F, r = 10%, n = 5) =
(e) (P/F, r = 10%, n = 5) =
(f) (F/P, r = 10%, n = 5) =

Compare these values with the values in Problem 1.
10. If the time period is 3 months and the nominal continuous annual interest rate is 12%, calculate the values of the following factors (note that 12% per year corresponds to r = 1% per month):

(a) (P/A, r = 1%, n = 3) =
(b) (F/A, r = 1%, n = 3) =
(c) (A/P, r = 1%, n = 3) =
(d) (A/F, r = 1%, n = 3) =
(e) (P/F, r = 1%, n = 3) =
(f) (F/P, r = 1%, n = 3) =
11. Construct a table in a spreadsheet and calculate the expressions for the factors (P/F, i, n), (F/P, i, n), (P/A, i, n), (A/P, i, n), (F/A, i, n), and (A/F, i, n) for i = 10% and n = 1-60 and n = 100. Compare the values calculated with those in the various reference books [1-5].

12. Construct a table in a spreadsheet and calculate the expressions for the factors (P/F, r, n), (F/P, r, n), (P/A, r, n), (A/P, r, n), (F/A, r, n), and (A/F, r, n) for r = 10% and n = 1-60 and n = 100. Compare the values calculated with those you calculated for Problem 9.
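Problems 11 and 12 can also be set up in a few lines of Python rather than a spreadsheet; the sketch below is one possible layout, and the function names are illustrative only:

```python
import math

def discrete_factors(i, n):
    """Discrete compounding factors of Table 8.2 (Problem 11)."""
    c = (1 + i) ** n
    return {"P/F": 1 / c, "F/P": c,
            "A/F": i / (c - 1), "A/P": i * c / (c - 1),
            "F/A": (c - 1) / i, "P/A": (c - 1) / (i * c)}

def continuous_factors(r, n):
    """Continuous compounding factors of Table 8.2 (Problem 12)."""
    c, e = math.exp(r * n), math.exp(r)
    return {"P/F": 1 / c, "F/P": c,
            "A/F": (e - 1) / (c - 1), "A/P": c * (e - 1) / (c - 1),
            "F/A": (c - 1) / (e - 1), "P/A": (c - 1) / (c * (e - 1))}

for n in list(range(1, 61)) + [100]:
    row = discrete_factors(0.10, n)      # swap in continuous_factors for Problem 12
    print(n, " ".join(f"{row[k]:12.4f}" for k in row))
```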
C H A P T E R 9

Gradient Economic Factors and Equations
9.1 INTRODUCTION

Gradient expressions are more complex than the basic expressions in the previous chapter. The two major classifications of the gradient expressions are the uniform gradient and the geometric gradient. The uniform gradient is presented in two versions: the standard uniform gradient, which starts in the second period, and the uniform ramp gradient, which starts in the first period and appears like a ramp or step function. Similarly, the geometric gradient is presented in two versions: the standard geometric gradient, in which the gradient does not start until the second period, and the escalation gradient, in which the gradient starts in the first period. These gradients can be expressed with discrete or continuous interest expressions. Thus, the four gradient expressions will be developed for both the discrete and continuous interest, and they are: the standard uniform gradient, the uniform ramp gradient, the geometric gradient, and the escalation gradient. Each of these will be initially described, the details of their derivation will be presented for the discrete interest case, and then the expressions will be converted to the continuous interest case for the four systems. Some of these materials were presented in a previous work [1], and the next two references [2, 3] have some of the gradient expressions and present many more examples and problems, but some expressions developed are entirely new. Reference [4] is at a graduate level and uses a different approach by using Z-Transforms for some of the expressions developed. The uniform ramp gradient is not presented in most books and the escalation gradient is frequently not considered, but these expressions are quite useful when using end-of-period payments for annual increases starting in the first year.
9.2 STANDARD UNIFORM GRADIENT DISCRETE INTEREST

The standard uniform gradient can be expressed as a fixed amount which increases by the same amount in each of the following periods. The constant increase is called a uniform gradient and appears to look somewhat like a ramp function which is delayed by one period. Most authors indicate that the uniform gradient first occurs at the end of the second period, as indicated in Figure 9.1. Thus, there is one less payment than there are periods, as the first gradient payment does not start until period 2; thus, if there is a gradient over a period of ten years, there will only be nine payments. The future worth of the gradient can be expressed by Equation (9.1), which starts with the last term and proceeds back to the first term. The amount of the gradient is G, the interest rate is i, and the number of periods is n; a sketch is shown in Figure 9.1:
Figure 9.1: Standard uniform gradient for future worth derivation.

F = (n - 1)G    (year n)
  + (n - 2)G(1 + i)    (year n - 1)
  + (n - 3)G(1 + i)^2    (year n - 2)
  + ...
  + G(1 + i)^(n - 2)    (year 2)
  + 0    (year 1)
  + 0    (time 0).    (9.1)
If one multiplies Equation (9.1) by (1 + i) and then subtracts Equation (9.1) from that, one obtains Equation (9.2). Rearranging the terms and using the geometric series, the expression then becomes Equation (9.3):

(1 + i)F = (n - 1)G(1 + i) + (n - 2)G(1 + i)^2 + (n - 3)G(1 + i)^3 + ... + 2G(1 + i)^(n - 2) + G(1 + i)^(n - 1)

F = (n - 1)G + (n - 2)G(1 + i) + (n - 3)G(1 + i)^2 + (n - 4)G(1 + i)^3 + ... + 2G(1 + i)^(n - 3) + G(1 + i)^(n - 2)

iF = -(n - 1)G - G + G(1 + i) + G(1 + i)^2 + G(1 + i)^3 + ... + G(1 + i)^(n - 2) + G(1 + i)^(n - 1)    (9.2)

iF = -nG + G [1 + (1 + i) + (1 + i)^2 + (1 + i)^3 + ... + (1 + i)^(n - 1)]    (this is the geometric series)
   = -nG + G [(1 + i)^n - 1] / [(1 + i) - 1]
   = -nG + G [(1 + i)^n - 1] / [i]
   = G {[(1 + i)^n - 1 - ni] / [i]}.    (9.3)

Solving for F, one obtains the uniform gradient discrete interest expression for the future worth:

F = G {[(1 + i)^n - 1 - ni] / [i^2]} = G (F/G, i, n).    (9.4)
The formula, represented by the factor (F/G, i, n), is used to convert a discrete interest standard uniform gradient of G to a future worth F, and this is what appears in Table 9.1 at the end of this section:

(F/G, i, n) = {[(1 + i)^n - 1 - ni] / [i^2]}.

The present worth of the uniform gradient discrete interest can be obtained by:

P = F / (1 + i)^n = G {[(1 + i)^n - 1 - ni] / [i^2]} / (1 + i)^n    (9.5)

P = G {[(1 + i)^n - 1 - ni] / [i^2 (1 + i)^n]} = G (P/G, i, n).    (9.6)

Thus, the conversion formula to convert a uniform gradient discrete interest to a present worth is:

(P/G, i, n) = {[(1 + i)^n - 1 - ni] / [i^2 (1 + i)^n]}.    (9.7)
The uniform series of the standard uniform gradient discrete interest can be obtained by:

A = F {i / [(1 + i)^n - 1]}
  = G {[(1 + i)^n - 1 - ni] / [i^2]} × (i / [(1 + i)^n - 1])
  = G {[(1 + i)^n - 1 - ni] / [i ((1 + i)^n - 1)]}
  = G (A/G, i, n).    (9.8)

Thus, the conversion formula to convert a standard uniform gradient discrete interest to a uniform series is:

(A/G, i, n) = [(1 + i)^n - 1 - ni] / [i ((1 + i)^n - 1)].    (9.9)
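For readers who prefer to check the factors numerically, the three standard uniform gradient factors translate directly into Python. This is a minimal sketch with illustrative function names, applied to the numbers of the example in Section 9.2.1 below:

```python
def FG(i, n):   # (F/G, i, n), Eq. (9.4)
    return ((1 + i) ** n - 1 - n * i) / i ** 2

def PG(i, n):   # (P/G, i, n), Eq. (9.7)
    return ((1 + i) ** n - 1 - n * i) / (i ** 2 * (1 + i) ** n)

def AG(i, n):   # (A/G, i, n), Eq. (9.9)
    return ((1 + i) ** n - 1 - n * i) / (i * ((1 + i) ** n - 1))

G, i, n = 200, 0.05, 10
print(round(G * FG(i, n)), round(G * PG(i, n)), round(G * AG(i, n), 1))
# roughly 10,312, 6,330, and 819.8
```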
9.2.1 STANDARD UNIFORM GRADIENT DISCRETE INTEREST EXAMPLE

What would be the value of a standard uniform gradient of $200 per year for a period of 10 years? The first payment would be at the end of the second year and the last payment at the end of the 10th year. The interest rate is 5%.

(a) What is the final gradient payment?

Payment at year 10 = (n - 1) × $200 = $1,800.

(b) What is the total payment of the gradient amounts, not including the interest?

Total gradient payments made = [n(n + 1)/2] × $200 = (9 × 10/2) × $200 = $9,000.

(Payments occur only in the last 9 periods, so n = 9.)

(c) What is the total value including the compounding of interest at the end of year 10?

The total value is the future worth, found by using Equation (9.4), which is:

F = G {[(1 + i)^n - 1 - ni] / [i^2]}
  = $200 [(1.05)^10 - 1 - 10 × 0.05] / (0.05)^2
  = $200 [1.62889 - 1 - 0.50] / (0.0025)
  = $200 [0.12889] / 0.0025
  = $10,311.

Note the effect of compounding interest results in a total interest gain of $1,311.
The present worth of the gradient can be found directly by Equation (9.6), which is:

P = G {[(1 + i)^n - 1 - ni] / [i^2 (1 + i)^n]}
  = $200 [(1.05)^10 - 1 - 10 × 0.05] / [(0.05)^2 × (1.05)^10]
  = $200 [0.12889] / [0.0040722]
  = $6,330.
The equivalent annual uniform series payment A can be found by Equation (9.9), which is:

A = G [(1 + i)^n - 1 - ni] / [i ((1 + i)^n - 1)]
  = $200 [(1.05)^10 - 1 - 10 × 0.05] / [0.05 × ((1.05)^10 - 1)]
  = $200 [0.12889] / [0.031444]
  = $819.8.
Thus, it takes an $819.8 uniform series payment to be equivalent to a $200 standard uniform gradient over a 10-year period at 5% interest. The equivalent uniform series payments will vary considerably as the time increment changes.

The payments have been, and usually are, considered end-of-period payments. These payments can be converted to beginning-of-period payments by dividing the annual uniform series payment A by (1 + i). Thus, the beginning-of-period payments are lower, and for the previous end-of-period payment of $819.8 the beginning-of-period payment would be $780.7 as the interest rate is 5%.
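The same example can be verified by brute force, summing the individual gradient payments instead of using the closed-form factor. The sketch below is illustrative only:

```python
def gradient_future_worth(G, i, n):
    """Brute-force future worth of a standard uniform gradient:
    payments of 0, G, 2G, ..., (n-1)G at the ends of years 1..n."""
    return sum((k - 1) * G * (1 + i) ** (n - k) for k in range(1, n + 1))

F = gradient_future_worth(200, 0.05, 10)
print(round(F, 2))                  # about 10,311.57, matching Eq. (9.4)
print(round(F / 1.05 ** 10, 2))     # present worth, about 6,330.4
```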
9.3 UNIFORM RAMP GRADIENT DISCRETE INTEREST

The uniform ramp gradient starts in the first period and has the appearance of a ramp starting at zero. The future worth of the gradient can be expressed by Equation (9.10), which starts with the last term and proceeds back to the first term. The amount of the gradient is G, the interest rate is i, and the number of periods is n. The subscript R is used to distinguish the uniform ramp gradient from the standard uniform gradient, as shown in Figure 9.2:

Figure 9.2: Uniform ramp gradient for future worth derivation.
F_R = nG    (year n)
    + (n - 1)G(1 + i)    (year n - 1)
    + (n - 2)G(1 + i)^2    (year n - 2)
    + ...
    + 2G(1 + i)^(n - 2)    (year 2)
    + G(1 + i)^(n - 1)    (year 1).    (9.10)
If one multiplies Equation (9.10) by (1 + i) and then subtracts Equation (9.10) from that, one obtains Equation (9.11). Rearranging the terms and using the geometric series, the expression then becomes Equation (9.12):

(1 + i)F_R = nG(1 + i) + (n - 1)G(1 + i)^2 + (n - 2)G(1 + i)^3 + ... + 2G(1 + i)^(n - 1) + G(1 + i)^n

F_R = nG + (n - 1)G(1 + i) + (n - 2)G(1 + i)^2 + ... + 2G(1 + i)^(n - 2) + G(1 + i)^(n - 1)    (9.11)

iF_R = -nG + G(1 + i) + G(1 + i)^2 + ... + G(1 + i)^(n - 1) + G(1 + i)^n
     = -nG + (1 + i)G [1 + (1 + i) + (1 + i)^2 + ... + (1 + i)^(n - 1)]    (this part is the geometric series)
     = -nG + (1 + i)G [(1 + i)^n - 1] / [(1 + i) - 1]
     = -nG + (1 + i)G [(1 + i)^n - 1] / [i]
     = G {[(1 + i)^(n + 1) - 1 - i(n + 1)] / [i]}.    (9.12)

Solving for F_R, one obtains the uniform ramp gradient discrete interest expression for the future worth:

F_R = G {[(1 + i)^(n + 1) - 1 - i(1 + n)] / [i^2]}.    (9.13)

Note that the uniform ramp gradient is similar to the standard uniform gradient with n in the standard gradient replaced by (n + 1) in the uniform ramp gradient:

F = G {[(1 + i)^n - 1 - ni] / [i^2]}.    (9.4)

The formula, represented by the factor (F_R/G, i, n), is used to convert a uniform ramp gradient discrete interest of G to a future worth F_R, and this is what appears in Table 9.1:

(F_R/G, i, n) = {[(1 + i)^(n + 1) - 1 - i(1 + n)] / [i^2]}.    (9.14)

The value for the present worth of the uniform ramp gradient discrete interest can be obtained by converting the future worth to the present worth by:

P_R = F_R / (1 + i)^n
    = G {[(1 + i)^(n + 1) - 1 - i(1 + n)] / [i^2]} × (1 / (1 + i)^n)
    = G {[(1 + i)^(n + 1) - 1 - i(1 + n)] / [i^2 (1 + i)^n]}
    = G (P_R/G, i, n).    (9.15)

Thus, the conversion formula to convert a uniform ramp gradient discrete interest to a present worth is:

(P_R/G, i, n) = {[(1 + i)^(n + 1) - 1 - i(1 + n)] / [i^2 (1 + i)^n]}.    (9.16)

The value for the uniform series of the uniform ramp gradient discrete interest can be obtained by:

A_R = F_R {i / [(1 + i)^n - 1]}
    = G {[(1 + i)^(n + 1) - 1 - i(1 + n)] / [i^2]} × (i / [(1 + i)^n - 1])
    = G {[(1 + i)^(n + 1) - 1 - i(1 + n)] / [i ((1 + i)^n - 1)]}
    = G (A_R/G, i, n).    (9.17)

Thus, the conversion formula to convert a uniform ramp gradient discrete interest to a uniform series is:

(A_R/G, i, n) = [(1 + i)^(n + 1) - 1 - i(1 + n)] / [i ((1 + i)^n - 1)].    (9.18)
9.3.1 UNIFORM RAMP GRADIENT DISCRETE INTEREST EXAMPLE

What would be the value of a uniform ramp gradient of $200 per year for a period of 10 years? The first payment would be at the end of the 1st year and the last payment at the end of the 10th year. The interest rate is 5%.

(a) What is the final payment?

Payment at year 10 = n × $200 = $2,000.

(b) What is the total payment of the gradient, not including the interest?

Total payments made = [n(n + 1)/2] × $200 = (10 × 11/2) × $200 = $11,000.

(The payments occur in all 10 periods. The total payments are $2,000 more than the standard uniform gradient, that is: 10 years × $200/year = $2,000.)

(c) What is the total value including the compounding of interest at the end of year 10?

The total value is the future worth, found by using Equation (9.13), which is:

F_R = G {[(1 + i)^(n + 1) - 1 - i(1 + n)] / [i^2]}
    = $200 [(1.05)^11 - 1 - 0.05 × 11] / (0.05)^2
    = $200 [1.71034 - 1 - 0.55] / (0.0025)
    = $200 [0.16034] / 0.0025
    = $12,827.

Note the effect of compounding interest results in a total interest gain of $1,827.
The present worth of the gradient can be found directly by Equation (9.15), which is:

P_R = G {[(1 + i)^(n + 1) - 1 - i(1 + n)] / [i^2 (1 + i)^n]}
    = $200 [(1.05)^11 - 1 - 0.05 × 11] / [(0.05)^2 × (1.05)^10]
    = $200 [0.16034] / [0.0040722]
    = $7,875.
The equivalent annual uniform series payment A_R can be found by Equation (9.17), which is:

A_R = G {[(1 + i)^(n + 1) - 1 - i(1 + n)] / [i ((1 + i)^n - 1)]}
    = $200 [(1.05)^11 - 1 - 0.05 × 11] / [0.05 × ((1.05)^10 - 1)]
    = $200 [0.16034] / [0.031444]
    = $1,019.8.

Thus, it takes a $1,019.8 uniform series payment to be equivalent to a $200 uniform ramp gradient over a 10-year period. The equivalent uniform series payments will vary considerably as the time increment changes. Note that the amount of the uniform series payment equivalent for the uniform ramp gradient was $200 more than that for the standard uniform gradient for this problem. Thus, A_R = A(Gradient) + G, which is how the problem was solved previously: by solving the standard gradient and then adding the additional uniform amount G.
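A short Python check of this relationship between the ramp gradient and the standard gradient is given below; the function names are illustrative:

```python
def AG(i, n):    # standard uniform gradient to uniform series, Eq. (9.9)
    return ((1 + i) ** n - 1 - n * i) / (i * ((1 + i) ** n - 1))

def ARG(i, n):   # uniform ramp gradient to uniform series, Eq. (9.18)
    return ((1 + i) ** (n + 1) - 1 - i * (1 + n)) / (i * ((1 + i) ** n - 1))

G, i, n = 200, 0.05, 10
print(round(G * ARG(i, n), 1))       # about 1019.8
print(round(G * AG(i, n) + G, 1))    # same value: A_R = A(Gradient) + G
```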
9.4 GEOMETRIC GRADIENT DISCRETE INTEREST

The geometric gradient does not start until the second period. The present worth of the geometric gradient can be expressed by Equation (9.19). The initial amount A1 starts in period 1 and has the gradient rate g applied in each of the following periods up to the final period n. The present worth is obtained by discounting the gradient payments back to time zero. Figure 9.3 is a sketch of the payments for the geometric gradient for deriving the present worth factor. Additional expressions must be developed when g = i, and the initial expressions presented are for g ≠ i:

P = A1/(1 + i) + A1 [(1 + g)/(1 + i)^2] + A1 [(1 + g)^2/(1 + i)^3] + A1 [(1 + g)^3/(1 + i)^4] + ... + A1 [(1 + g)^(n - 1)/(1 + i)^n].    (9.19)

Figure 9.3: Standard geometric gradient for present worth derivation.

Rearranging the terms to obtain the value of 1 for the first term of the geometric series and using the geometric series with the ratio (1 + g)/(1 + i), this equation can be reduced to:
P = [A1/(1 + i)] [1 + {(1 + g)/(1 + i)} + {(1 + g)/(1 + i)}^2 + ... + {(1 + g)/(1 + i)}^(n - 1)]    (the bracketed term is the geometric series)
  = [A1/(1 + i)] [{(1 + g)/(1 + i)}^n - 1] / [{(1 + g)/(1 + i)} - 1]
  = [A1/(1 + i)] [{(1 + g)/(1 + i)}^n - 1] / [(g - i)/(1 + i)]
  = A1 [1 - {(1 + g)/(1 + i)}^n] / [i - g].    (9.20)

Thus, the geometric gradient present worth factor when i ≠ g is:

(P/A1, g, i, n) = {1 - [(1 + g)/(1 + i)]^n} / [i - g].    (9.21)

If i = g, the denominator of Equation (9.21) would be zero, which is a problem, so when i = g, Equation (9.19) can be arranged so that the ratio (1 + g)/(1 + i) becomes 1 and thus:

P = A1/(1 + i) + A1 [(1 + g)/(1 + i)^2] + A1 [(1 + g)^2/(1 + i)^3] + ... + A1 [(1 + g)^(n - 1)/(1 + i)^n]    (9.19)

P = [A1/(1 + i)] [1 + {(1 + g)/(1 + i)} + {(1 + g)/(1 + i)}^2 + ... + {(1 + g)/(1 + i)}^(n - 1)]
  = [A1/(1 + i)] [1 + 1 + 1 + ... + 1]
  = nA1/(1 + i).    (9.22)
Thus, one has a rather simple expression when i = g, and the geometric gradient present worth factor expression becomes:

(P/A1, i = g, n) = n/(1 + i).    (9.23)

The value for the future worth of the geometric gradient can be obtained by multiplying the present worth of the geometric gradient by (1 + i)^n to obtain:

F = P(1 + i)^n = A1 {[1 - ((1 + g)/(1 + i))^n] / [i - g]} (1 + i)^n
F = A1 {[(1 + i)^n - (1 + g)^n] / [i - g]}.    (9.24)

Thus, the geometric gradient future worth factor is:

(F/A1, g, i, n) = {[(1 + i)^n - (1 + g)^n] / [i - g]}.    (9.25)

If i = g, then the future worth factor for the geometric gradient is the present worth factor for the geometric gradient multiplied by (1 + i)^n, which is:

F = P(1 + i)^n = [nA1/(1 + i)] × (1 + i)^n
F = nA1 (1 + i)^(n - 1).    (9.26)

Thus, a simple expression occurs when i = g, and the geometric gradient future worth factor expression becomes:

(F/A1, i = g, n) = n(1 + i)^(n - 1).    (9.27)

The value for the uniform series of the geometric gradient can be obtained by:

A = A1 (F/A1, g, i, n) × (A/F, i, n)
A = A1 {[(1 + i)^n - (1 + g)^n] / [i - g]} × {i / [(1 + i)^n - 1]}.    (9.28)

Thus, the geometric gradient uniform series factor is:

(A/A1, g, i, n) = {[(1 + i)^n - (1 + g)^n] / [i - g]} × {i / [(1 + i)^n - 1]}.    (9.29)

If i = g, then the uniform series factor for the geometric gradient is the future worth factor for the geometric gradient multiplied by {i / [(1 + i)^n - 1]}, which is:

A = F {i / [(1 + i)^n - 1]} = [nA1 (1 + i)^(n - 1)] {i / [(1 + i)^n - 1]}
A = [niA1 (1 + i)^(n - 1)] / [(1 + i)^n - 1].    (9.30)

Finally, the geometric gradient uniform series factor expression when i = g becomes:

(A/A1, i = g, n) = [ni(1 + i)^(n - 1)] / [(1 + i)^n - 1].    (9.31)
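The geometric gradient factors, including the special case i = g, can be sketched in Python as follows; the names are illustrative and the i = g branch uses Equations (9.23) and (9.27):

```python
def PA1(g, i, n):
    """Geometric gradient present worth factor, Eqs. (9.21) and (9.23)."""
    if abs(i - g) < 1e-12:                 # special case i = g
        return n / (1 + i)
    return (1 - ((1 + g) / (1 + i)) ** n) / (i - g)

def FA1(g, i, n):
    """Geometric gradient future worth factor, Eqs. (9.25) and (9.27)."""
    return PA1(g, i, n) * (1 + i) ** n     # F = P(1 + i)^n in both cases

A1, g, i, n = 200, 0.10, 0.05, 10
print(round(A1 * PA1(g, i, n)), round(A1 * FA1(g, i, n)))   # about 2,369 and 3,859
```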
9.4.1 GEOMETRIC GRADIENT DISCRETE INTEREST EXAMPLE

What would be the value at the end of 10 years of a geometric gradient of 10% if the initial amount is $200 for a period of 10 years? The first payment would be at the end of the first year and the last payment at the end of the 10th year. The interest rate is 5%.

(a) What is the final gradient payment?

$200 (1 + 0.10)^9 = $200 (2.3574) = $471.5.

(b) What is the total payment (TP) of the gradient amounts, not including the interest?

TP = A1 [1 + (1 + g) + (1 + g)^2 + ... + (1 + g)^(n - 1)]
   = A1 [(1 + g)^n - 1] / [(1 + g) - 1]
   = A1 [(1 + g)^n - 1] / g    (9.32)
   = $200 [1.1^10 - 1] / 0.10
   = $3,187.

(c) What is the future worth including the compounding of interest?

F = A1 {[(1 + i)^n - (1 + g)^n] / [i - g]}    (9.24)
  = $200 {[(1 + 0.05)^10 - (1 + 0.10)^10] / [0.05 - 0.10]}
  = $200 [-0.964848] / [-0.05]
  = $200 (19.2969)
  = $3,859.

(d) Therefore, the total interest earned over the 10 years would be:

F - TP = $3,859 - $3,187 = $672.

(e) What is the present worth of the gradient?

P = A1 {[1 - ((1 + g)/(1 + i))^n] / [i - g]}    (9.20)
  = $200 {[1 - ((1 + 0.10)/(1 + 0.05))^10] / [0.05 - 0.10]}
  = $200 [-0.59233] / [-0.05]
  = $2,369.
(f) What is the equivalent annual uniform series payment A?

A = A1 {[(1 + i)^n - (1 + g)^n] / [i - g]} × {i / [(1 + i)^n - 1]}    (9.28)
  = $200 {[(1.05)^10 - (1.10)^10] / (0.05 - 0.10)} × {0.05 / [(1.05)^10 - 1]}
  = $200 × (19.2970) × (0.079504)
  = $307.

Note that the geometric gradient usually is smaller than the uniform ramp gradient or the standard uniform gradient, as it is a percentage gradient rather than a fixed amount.
9.5 ESCALATION GRADIENT DISCRETE INTEREST

Escalation is used in construction projects, as the costs of materials and labor are expected to increase over time, and this increase is handled with the escalation rate. Since traditional engineering economy expressions use end-of-period payments, the escalation must start in the first period. The project estimates are done long before the project starts, and the inflation effects need to be included starting with the first period. This is the difference between the escalation gradient and the geometric gradient series, and it is similar to the difference between the uniform ramp gradient series and the standard uniform gradient. The present worth of the escalation gradient can be expressed by Equation (9.33). The symbol E will be used to indicate escalation; it is used in the same manner as g except that the gradient is also applied in the first period. The escalation gradient is illustrated in Figure 9.4.

Figure 9.4: Escalation gradient for present worth derivation.
P_E = A1 (1 + E)/(1 + i) + A1 [(1 + E)^2/(1 + i)^2] + A1 [(1 + E)^3/(1 + i)^3] + ... + A1 [(1 + E)^n/(1 + i)^n]
    = [A1 (1 + E)/(1 + i)] [1 + {(1 + E)/(1 + i)} + {(1 + E)/(1 + i)}^2 + ... + {(1 + E)/(1 + i)}^(n - 1)]    (the bracketed term is the geometric series)
    = [A1 (1 + E)/(1 + i)] [{(1 + E)/(1 + i)}^n - 1] / [{(1 + E)/(1 + i)} - 1]
    = A1 [(1 + E)/(E - i)] [{(1 + E)/(1 + i)}^n - 1].    (9.33)

Equation (9.33) is the expression for the escalation gradient, and E is the escalation rate; the symbol E is used instead of g. The escalation gradient present worth factor to obtain the present worth of an escalation gradient is:

(P_E/A1, E, i, n) = [(1 + E)/(E - i)] [{(1 + E)/(1 + i)}^n - 1].    (9.34)

If i = E, Equations (9.33) and (9.34) will have denominators that are zero, so the equations must be derived initially using the case where i = E. This is solved in the same manner as was done for the standard geometric gradient:

P_E = A1 (1 + E)/(1 + i) + A1 [(1 + E)^2/(1 + i)^2] + A1 [(1 + E)^3/(1 + i)^3] + ... + A1 [(1 + E)^n/(1 + i)^n]
    = [A1 (1 + E)/(1 + i)] [1 + {(1 + E)/(1 + i)} + ... + {(1 + E)/(1 + i)}^(n - 1)]
    = [A1 (1 + E)/(1 + i)] [1 + 1 + ... + 1]
    = nA1.    (9.35)

Thus, for this special case:

(P_E/A1, i = E, n) = n.    (9.36)

The equation for the future worth of the escalation gradient can be determined from the present worth equation by:

F_E = A1 (P_E/A1, E, i, n) × (F/P, i, n)    (9.37)
F_E = A1 [(1 + E)/(E - i)] [{(1 + E)/(1 + i)}^n - 1] × (1 + i)^n    (9.38)
F_E = A1 [(1 + E)/(E - i)] [(1 + E)^n - (1 + i)^n].    (9.39)

This also results in the factor being:

(F_E/A1, E, i, n) = [(1 + E)/(E - i)] [(1 + E)^n - (1 + i)^n].    (9.40)

For the special case where i = E, then

F_E = A1 (P_E/A1, i = E, n) × (F/P, i, n) = A1 n(1 + i)^n

and thus

(F_E/A1, i = E, n) = n(1 + i)^n.

The equation for the uniform series of the escalation gradient can be determined from the present worth equation by:

A_E = A1 (P_E/A1, E, i, n) × (A/P, i, n)
A_E = A1 [(1 + E)/(E - i)] [{(1 + E)/(1 + i)}^n - 1] × {i(1 + i)^n / [(1 + i)^n - 1]}
A_E = A1 [i(1 + E)/(E - i)] [(1 + E)^n - (1 + i)^n] / [(1 + i)^n - 1].    (9.41)

This also results in the factor:

(A_E/A1, E, i, n) = [i(1 + E)/(E - i)] [(1 + E)^n - (1 + i)^n] / [(1 + i)^n - 1].    (9.42)

For the special case where i = E, then

A_E = A1 (P_E/A1, i = E, n) × (A/P, i, n) = A1 {ni(1 + i)^n / [(1 + i)^n - 1]}.    (9.43)

And thus,

(A_E/A1, i = E, n) = ni(1 + i)^n / [(1 + i)^n - 1].    (9.44)
9.5.1 ESCALATION GRADIENT DISCRETE INTEREST EXAMPLE

What would be the value at the end of 10 years of an escalation gradient of 10% if the initial amount is $200 for a period of 10 years? The first payment would be at the end of the first year and the last payment at the end of the 10th year. The interest rate is 5%.

(a) What is the final payment?

$200 (1 + 0.10)^10 = $200 (2.5937) = $518.8, approximately $519.

(b) What is the total payment (TP_E) of the gradient amounts, not including the interest?

TP_E = A1 [(1 + E) + (1 + E)^2 + ... + (1 + E)^n]
     = A1 (1 + E) [(1 + E)^n - 1] / [(1 + E) - 1]
     = A1 (1 + E) [(1 + E)^n - 1] / E    (9.45)
     = $200 (1.1) [1.1^10 - 1] / 0.10
     = $3,506.

(c) What is the future worth including the compounding of interest?

F_E = A1 [(1 + E)/(E - i)] [(1 + E)^n - (1 + i)^n]    (9.37)
    = $200 [(1 + 0.10)/(0.10 - 0.05)] [(1 + 0.10)^10 - (1 + 0.05)^10]
    = $200 [22.0] [0.964847]
    = $4,245.

(d) Therefore, the total interest earned over the 10 years would be:

F_E - TP_E = $4,245 - $3,506 = $739.

(e) What is the present worth of the escalation gradient?

P_E = A1 [(1 + E)/(E - i)] [{(1 + E)/(1 + i)}^10 - 1]    (9.33)
    = $200 [(1 + 0.10)/(0.10 - 0.05)] [{(1 + 0.10)/(1 + 0.05)}^10 - 1]
    = $200 × (22.0) × [0.59233]
    = $2,606.

(f) What is the equivalent annual escalation payment A_E?

A_E = A1 [i(1 + E)/(E - i)] [(1 + E)^n - (1 + i)^n] / [(1 + i)^n - 1]    (9.41)
    = $200 [0.05 (1 + 0.10)/(0.10 - 0.05)] [(1.10)^10 - (1.05)^10] / [(1.05)^10 - 1]
    = $200 [1.1 × 0.964847] / [0.62889]
    = $338.

Note that the escalation gradient ($338) is larger than the geometric gradient ($307), as the escalation gradient starts in the first period rather than the second period. This difference of 10% results from the escalation gradient having one more escalation step. A summary of all the discrete formulas is given in Table 9.1. This difference applies not only to A_E but also to P_E and F_E.
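As with the other gradients, the escalation factors used above are easy to evaluate numerically. The sketch below, with illustrative names, also shows the contrast with the geometric gradient present worth for the same rates:

```python
def PE_A1(E, i, n):
    """Escalation gradient present worth factor, Eqs. (9.34) and (9.36)."""
    if abs(i - E) < 1e-12:                 # special case i = E
        return n
    return ((1 + E) / (E - i)) * (((1 + E) / (1 + i)) ** n - 1)

A1, E, i, n = 200, 0.10, 0.05, 10
P_escalation = A1 * PE_A1(E, i, n)                          # about $2,606
P_geometric = A1 * (1 - ((1 + E) / (1 + i)) ** n) / (i - E) # about $2,369
print(round(P_escalation), round(P_geometric))
```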
Table 9.1: Discrete compounding factors of economic expressions - discrete payments and discrete interest

Notation: P = present worth; F = future worth; A = uniform end-of-period payment; n = number of periods; i = effective discrete interest rate per period; G = uniform gradient amount; g = geometric gradient rate; E = escalation gradient rate; A1 = initial geometric gradient amount and initial escalation gradient amount.

A. Single payment
  Present worth:               find P given F   (P/F, i, n) = (1 + i)^(-n)
  Future worth (compound amount): find F given P   (F/P, i, n) = (1 + i)^n

B. Uniform payment (uniform series)
  Sinking fund:                find A given F   (A/F, i, n) = i / [(1 + i)^n - 1]
  Capital recovery:            find A given P   (A/P, i, n) = [i(1 + i)^n] / [(1 + i)^n - 1]
  Compound amount:             find F given A   (F/A, i, n) = [(1 + i)^n - 1] / i
  Present worth:               find P given A   (P/A, i, n) = [(1 + i)^n - 1] / [i(1 + i)^n]

C. Uniform gradient expressions
  Standard uniform gradient
    Present worth:   find P given G   (P/G, i, n) = [(1 + i)^n - 1 - ni] / [i^2 (1 + i)^n]
    Future worth:    find F given G   (F/G, i, n) = [(1 + i)^n - 1 - ni] / i^2
    Uniform series:  find A given G   (A/G, i, n) = [(1 + i)^n - 1 - ni] / [i((1 + i)^n - 1)]
  Uniform ramp gradient
    Present worth:   find P_R given G   (P_R/G, i, n) = [(1 + i)^(n+1) - 1 - i(n + 1)] / [i^2 (1 + i)^n]
    Future worth:    find F_R given G   (F_R/G, i, n) = [(1 + i)^(n+1) - 1 - i(n + 1)] / i^2
    Uniform series:  find A_R given G   (A_R/G, i, n) = [(1 + i)^(n+1) - 1 - i(n + 1)] / [i((1 + i)^n - 1)]

D. Geometric gradient expressions
  Geometric gradient
    Present worth:   find P given A1, g   (P/A1, g, i, n) = [1 - ((1 + g)^n / (1 + i)^n)] / (i - g); if g = i: (P/A1, g = i, n) = n/(1 + i)
    Future worth:    find F given A1, g   (F/A1, g, i, n) = [(1 + i)^n - (1 + g)^n] / (i - g); if g = i: (F/A1, g = i, n) = n(1 + i)^(n-1)
    Uniform series:  find A given A1, g   (A/A1, g, i, n) = [i((1 + i)^n - (1 + g)^n)] / [(i - g)((1 + i)^n - 1)]; if g = i: (A/A1, g = i, n) = [ni(1 + i)^(n-1)] / [(1 + i)^n - 1]
  Escalation gradient
    Present worth:   find P_E given A1, E   (P_E/A1, E, i, n) = [(1 + E)/(E - i)] [((1 + E)/(1 + i))^n - 1]; if E = i: (P_E/A1, E = i, n) = n
    Future worth:    find F_E given A1, E   (F_E/A1, E, i, n) = [(1 + E)/(E - i)] [(1 + E)^n - (1 + i)^n]; if E = i: (F_E/A1, E = i, n) = n(1 + i)^n
    Uniform series:  find A_E given A1, E   (A_E/A1, E, i, n) = [i(1 + E)/(E - i)] [(1 + E)^n - (1 + i)^n] / [(1 + i)^n - 1]; if E = i: (A_E/A1, E = i, n) = ni(1 + i)^n / [(1 + i)^n - 1]
9.6 STANDARD UNIFORM GRADIENT CONTINUOUS INTEREST FORMULAS
The basic relationships between the nominal (r) and market (i) interest will be used to convert the discrete interest formulas to continuous interest formulas, similar to what was done in Chapter 8. Most books have only small discussions of continuous interest, but Park and Sharp-Bette is a great reference [4]. The relationships used are:

i = e^r - 1    (9.46)

and

(1 + i) = e^r    (9.47)

and similarly

(1 + i)^n = e^(rn)    (9.48)

and

r = ln(1 + i).    (9.49)

The factors for the standard uniform gradient discrete interest were:

(F/G, i, n) = {[(1 + i)^n - 1 - ni] / [i^2]}    (9.5)

and

(P/G, i, n) = {[(1 + i)^n - 1 - ni] / [i^2 (1 + i)^n]}    (9.7)

and

(A/G, i, n) = [(1 + i)^n - 1 - ni] / [i((1 + i)^n - 1)].    (9.9)

The conversion of these factors to the standard uniform gradient continuous interest factors results in the following equations:

F = G (F/G, r, n) = G {[e^(rn) - 1 - n(e^r - 1)] / [(e^r - 1)^2]}    (9.50)

P = G (P/G, r, n) = G {[e^(rn) - 1 - n(e^r - 1)] / [(e^r - 1)^2 e^(rn)]}    (9.51)

A = G (A/G, r, n) = G {[e^(rn) - 1 - n(e^r - 1)] / [(e^r - 1)(e^(rn) - 1)]}.    (9.52)
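The continuous-interest gradient factors of Equations (9.50)-(9.52) can be checked with a few lines of Python; the names are illustrative, and the printed values correspond to the example in Section 9.6.1 below:

```python
import math

def FG_cont(r, n):   # (F/G, r, n), Eq. (9.50)
    e = math.exp(r)
    return (math.exp(r * n) - 1 - n * (e - 1)) / (e - 1) ** 2

def PG_cont(r, n):   # (P/G, r, n), Eq. (9.51)
    return FG_cont(r, n) / math.exp(r * n)

def AG_cont(r, n):   # (A/G, r, n), Eq. (9.52)
    e = math.exp(r)
    return (math.exp(r * n) - 1 - n * (e - 1)) / ((e - 1) * (math.exp(r * n) - 1))

G, r, n = 200, 0.05, 10
print(round(G * FG_cont(r, n)), round(G * PG_cont(r, n)), round(G * AG_cont(r, n)))
# roughly 10,348, 6,276, and 818
```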
9.6.1 STANDARD UNIFORM GRADIENT CONTINUOUS INTEREST EXAMPLE

The example will be the same as that in Section 9.2.1 to illustrate the difference between discrete and continuous interest. What would be the value of a standard uniform gradient (G) of $200 per year for a period of 10 years? The first payment would be at the end of the second year and the last payment at the end of the 10th year. The continuous interest rate is 5%.

(a) What is the final payment?

Payment at year 10 = (n - 1) × $200 = $1,800.

(b) What is the total payment of the gradient, not including the interest?

Total payments made = [n(n + 1)/2] × $200 = (9 × 10/2) × $200 = $9,000.

(Payments occur only in 9 periods even though there are 10 periods, so n = 9.)

(c) What is the total value including the continuous compounding of interest at the end of year 10?

The total value is the future worth of the standard uniform gradient continuous interest, found by using Equation (9.50), which is:

F = G (F/G, r = 5%, n = 10)
  = G {[e^(rn) - 1 - n(e^r - 1)] / [(e^r - 1)^2]}
  = $200 {[e^(0.05 × 10) - 1 - 10(e^(0.05) - 1)] / [(e^(0.05) - 1)^2]}
  = $200 [1.64872 - 1 - 10(1.05127 - 1)] / [(1.05127 - 1)^2]
  = $200 [0.13601] / [0.0026287]
  = $200 [51.74]
  = $10,348.

This is slightly greater than the discrete interest value ($10,312), as the equivalent discrete interest would be 5.13% instead of the 5.00% applied in the discrete interest calculations.

Note the effect of continuous interest results in a total interest gain of $1,348, which is slightly greater than the discrete case.
The present worth of the gradient can be found directly by the factor of Equation (9.51), which is:

P = G (P/G, r = 5%, n = 10)
  = G {[e^(rn) - 1 - n(e^r - 1)] / [(e^r - 1)^2 e^(rn)]}
  = $200 {[e^(0.05 × 10) - 1 - 10(e^(0.05) - 1)] / [(e^(0.05) - 1)^2 × e^(0.05 × 10)]}
  = $200 [1.64872 - 1 - 10(1.05127 - 1)] / [(1.05127 - 1)^2 × 1.64872]
  = $200 [0.13601] / [0.004334]
  = $6,276.

The discrete interest value of the present worth was $6,330, and thus the difference is relatively small. The lower present worth is due to the slightly higher discount rate of the continuous compounding vs. discrete compounding.
The equivalent annual uniform series continuous interest payment can be found by Equation (9.52), which is:

A = G (A/G, r = 5%, n = 10)
  = G {[e^(rn) - 1 - n(e^r - 1)] / [(e^r - 1)(e^(rn) - 1)]}
  = $200 {[e^(0.05 × 10) - 1 - 10(e^(0.05) - 1)] / [(e^(0.05) - 1)(e^(0.05 × 10) - 1)]}
  = $200 [1.64872 - 1 - 10(1.05127 - 1)] / [(1.05127 - 1)(1.64872 - 1)]
  = $200 [0.13601] / [0.0332606]
  = $818.

Thus, it takes an $818 uniform series continuous payment to be equivalent to a $200 standard uniform gradient over a 10-year period. The equivalent uniform series payments will vary considerably as the time increment changes. The discrete interest and continuous interest values have only a slight difference, as the interest rate is low and the number of compounding periods is relatively low.
9.7 RAMP UNIFORM GRADIENT CONTINUOUS INTEREST FORMULAS

The discrete interest formulas for the uniform ramp gradient from Section 9.3 are presented and then converted into the continuous formulas for the uniform ramp gradient:

(F_R/G, i, n) = {[(1 + i)^(n + 1) - 1 - i(1 + n)] / [i^2]}    (9.13)

(P_R/G, i, n) = {[(1 + i)^(n + 1) - 1 - i(1 + n)] / [i^2 (1 + i)^n]}    (9.16)

(A_R/G, i, n) = [(1 + i)^(n + 1) - 1 - i(1 + n)] / [i((1 + i)^n - 1)].    (9.18)

The conversion of these factors from discrete to continuous interest results in the factors:

(F_R/G, r, n) = {[e^(r(n + 1)) - 1 - (e^r - 1)(1 + n)] / [(e^r - 1)^2]}    (9.53)

(P_R/G, r, n) = {[e^(r(n + 1)) - 1 - (e^r - 1)(1 + n)] / [(e^r - 1)^2 e^(rn)]}    (9.54)

(A_R/G, r, n) = [e^(r(n + 1)) - 1 - (e^r - 1)(1 + n)] / [(e^r - 1)(e^(rn) - 1)].    (9.55)
9.7.1 UNIFORM RAMP GRADIENT CONTINUOUS INTEREST EXAMPLE

What would be the value of a uniform ramp gradient continuous interest of $200 per year for a 10-year period with a nominal interest rate of 5%? The first payment would be at the end of the first year and the last payment at the end of the 10th year.

(a) What is the final payment?

Payment at year 10 = n × $200 = $2,000.

(b) What is the total payment of the gradient, not including the interest?

Total payments made = [n(n + 1)/2] × $200 = (10 × 11/2) × $200 = $11,000.

(The payments occur in all 10 periods.)

(c) What is the total value including the compounding of the continuous interest at the end of year 10?

The total value is the future worth, found by using Equation (9.53), which is:

(F_R/G, r, n) = {[e^(r(n + 1)) - 1 - (e^r - 1)(1 + n)] / [(e^r - 1)^2]}    (9.53)

F_R = G {[e^(r(n + 1)) - 1 - (e^r - 1)(1 + n)] / [(e^r - 1)^2]}
    = $200 {[e^(0.05 × (10 + 1)) - 1 - (e^(0.05) - 1)(1 + 10)] / [(e^(0.05) - 1)^2]}
    = $200 [1.73325 - 1 - (1.051271 - 1)(11)] / [0.002629]
    = $200 [0.169269] / 0.002629
    = $12,877.

Note the effect of compounding interest results in a total interest gain of $1,877.
The present worth of the gradient can be found directly by using Equation (9.54), which is:

(P_R/G, r, n) = {[e^(r(n + 1)) - 1 - (e^r - 1)(1 + n)] / [(e^r - 1)^2 e^(rn)]}    (9.54)

P_R = G {[e^(r(n + 1)) - 1 - (e^r - 1)(1 + n)] / [(e^r - 1)^2 e^(rn)]}
    = $200 {[e^(0.05 × (10 + 1)) - 1 - (e^(0.05) - 1)(10 + 1)] / [(e^(0.05) - 1)^2 × e^(0.05 × 10)]}
    = $200 [0.16927] / [0.0043340]
    = $7,811.
The equivalent annual uniform series payment A_R can be found by Equation (9.55), which is:

(A_R/G, r, n) = [e^(r(n + 1)) - 1 - (e^r - 1)(1 + n)] / [(e^r - 1)(e^(rn) - 1)]    (9.55)

A_R = G {[e^(r(n + 1)) - 1 - (e^r - 1)(1 + n)] / [(e^r - 1)(e^(rn) - 1)]}
    = $200 {[e^(0.05 × (10 + 1)) - 1 - (e^(0.05) - 1)(1 + 10)] / [(e^(0.05) - 1)(e^(0.05 × 10) - 1)]}
    = $200 [0.16927] / [0.0332607]
    = $1,018.

Thus, it takes a $1,018 uniform series payment to be equivalent to a $200 uniform ramp gradient over a 10-year period. The equivalent uniform series payments will vary considerably as the time increment changes. Note that the amount of the uniform series payment equivalent for the uniform ramp gradient was $200 more than that for the standard uniform gradient for this problem, which was expected.
9.8 GEOMETRIC GRADIENT CONTINUOUS INTEREST
FORMULAS
D
The geometric gradient is often called the exponential gradient, and thus the geometric gradi-
ent continuous interest formulas may also be called the exponential gradient continuous interest
formulas. The formulas from the discrete interest sections will be converted to continuous in-
terest formulas. However, the formulas (9.46)–(9.49) did not include the conversion of g to a
continuous expression. The formulas for b
r will be left for student development.
Similarly, one would need relationships to convert the discrete gradient rate (g) to a
continuous gradient rate, which will use the symbol (b). Therefore:

    g = e^b - 1                                                                        (9.56)
and
    (1 + g) = e^b                                                                      (9.57)
and thus
    (1 + g)^n = e^{bn}                                                                 (9.58)
and
    b = ln(1 + g).                                                                     (9.59)
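A quick numerical check of these conversion relationships (an illustrative sketch, not part of the
text's tables):

    import math

    b = 0.10                   # continuous gradient rate (10%)
    g = math.exp(b) - 1        # Eq. (9.56): discrete rate g ~ 0.10517 (about 10.52%)
    b_back = math.log(1 + g)   # Eq. (9.59) recovers b
    print(round(g, 5), round(b_back, 5))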
The factors for the standard geometric gradient with discrete interest are:

    (P/A1, g, i, n) = {[1 - ((1 + g)/(1 + i))^n] / (i - g)}                            (9.21)
    (F/A1, g, i, n) = {[(1 + i)^n - (1 + g)^n] / (i - g)}                              (9.25)
    (A/A1, g, i, n) = {[(1 + i)^n - (1 + g)^n] / (i - g)} x {i / [(1 + i)^n - 1]}.     (9.29)
The conversion of these factors to geometric gradient continuous interest results in the
following expressions for the cases where r ≠ b:

    (P/A1, b, r, n) = {[1 - (e^{bn}/e^{rn})] / (e^r - e^b)}                            (9.60)
    (F/A1, b, r, n) = {[e^{rn} - e^{bn}] / (e^r - e^b)}                                (9.61)
    (A/A1, b, r, n) = {[e^{rn} - e^{bn}] / (e^r - e^b)} x {(e^r - 1) / (e^{rn} - 1)}.  (9.62)
9.8.1 GEOMETRIC GRADIENT CONTINUOUS INTEREST EXAMPLE
What would be the value at the end of 10 years of a geometric continuous gradient of 10% (b = 10%)
if the initial amount was $200 for a period of 10 years? The first payment would be at the end of
the 1st year and the last payment at the end of the 10th year. The interest rate is 5% (r = 5%).
(Note: if b = 10%, then g = 10.52%.)

(a) What is the final payment?  $200e^{0.10 x 9} = $200(e^{0.9}) = $492.
(b) What is the total payment (TP) of the gradient, not including the interest?

    TP = A1(1 + e^{0.1} + (e^{0.1})^2 + ... + (e^{0.1})^{n-1})
       = A1[(e^{bn} - 1) / (e^b - 1)]
       = $200(e^{1.0} - 1) / (e^{0.1} - 1)
       = $3,268.
(c) What is the future worth including the compounding of interest?

    F = A1(F/A1, b, r, n)
      = A1{[e^{rn} - e^{bn}] / (e^r - e^b)}                                            (9.61)
      = $200(e^{10 x 0.05} - e^{10 x 0.10}) / (e^{0.05} - e^{0.10})
      = $200(-1.069560)/(-0.05390)
      = $3,969.

(d) Therefore, the total interest earned over the 10 years would be:

    F - TP = $3,969 - $3,268 = $701.
(e) What is the present worth?

    P = A1(P/A1, b, r, n)
      = A1{[1 - (e^{bn}/e^{rn})] / (e^r - e^b)}                                        (9.60)
      = $200[1 - (e^{1.0}/e^{0.5})] / (e^{0.05} - e^{0.10})
      = $200[(-0.64872)/(-0.05390)]
      = $2,407.
(f) What is the equivalent annual uniform series payment A?

    A = A1(A/A1, b, r, n)
      = A1{[e^{rn} - e^{bn}] / (e^r - e^b)} x {(e^r - 1) / (e^{rn} - 1)}               (9.62)
      = $200[(-1.06956)/(-0.05390)] x [0.05127/0.64872]
      = $314.
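The factors (9.60)-(9.62) and this example can be verified with a short Python sketch (the function
name is illustrative; it assumes r and b are not equal):

    import math

    def geometric_gradient_factors(b, r, n):
        """Geometric gradient continuous interest factors, r != b (Eqs. 9.60-9.62)."""
        p_a1 = (1 - math.exp(b * n) / math.exp(r * n)) / (math.exp(r) - math.exp(b))
        f_a1 = (math.exp(r * n) - math.exp(b * n)) / (math.exp(r) - math.exp(b))
        a_a1 = f_a1 * (math.exp(r) - 1) / (math.exp(r * n) - 1)
        return p_a1, f_a1, a_a1

    A1 = 200
    p, f, a = geometric_gradient_factors(b=0.10, r=0.05, n=10)
    print(round(A1 * f))   # ~3,969  future worth
    print(round(A1 * p))   # ~2,407  present worth
    print(round(A1 * a))   # ~314    equivalent uniform series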
9.9 ESCALATION GRADIENT CONTINUOUS
COMPOUNDING FORMULAS
The escalation gradient starts with the gradient in the first period, whereas the geometric gradient
does not start the gradient until the second period. The factors for the discrete interest model
will be converted to continuous interest factors. The factors will be developed for the case where
r ≠ E, and it will be left to the students to develop the case where r = E. The escalation gradient
is used primarily in the construction industry and used with constant dollars. The symbol E is
used for the escalation gradient and is similar to that of g in the geometric gradient. The factors
for the discrete case were:
    (PE/A1, E, i, n) = {[(1 + E)/(E - i)] x {[(1 + E)/(1 + i)]^n - 1}}                 (9.34)
    (FE/A1, E, i, n) = {[(1 + E)/(E - i)] x [(1 + E)^n - (1 + i)^n]}                   (9.38)
    (AE/A1, E, i, n) = {[i(1 + E)/(E - i)] x [(1 + E)^n - (1 + i)^n] / [(1 + i)^n - 1]}. (9.42)
Similarly, one would need relationships to convert the discrete escalation rate (E) to a
continuous escalation rate, which will use the symbol (c). Therefore:

    E = e^c - 1                                                                        (9.63)
and
    (1 + E) = e^c                                                                      (9.64)
and thus
    (1 + E)^n = e^{cn}                                                                 (9.65)
and
    c = ln(1 + E).                                                                     (9.66)

The conversion of these factors to continuous compounding of the interest and the gradi-
ent results in:

    (PE/A1, c, r, n) = {[e^c/(e^c - e^r)] x [(e^{cn}/e^{rn}) - 1]}                     (9.67)
    (FE/A1, c, r, n) = {[e^c/(e^c - e^r)] x [e^{cn} - e^{rn}]}                         (9.68)
    (AE/A1, c, r, n) = {[(e^r - 1)e^c/(e^c - e^r)] x [(e^{cn} - e^{rn})/(e^{rn} - 1)]}. (9.69)
9.9.1 ESCALATION GRADIENT CONTINUOUS INTEREST EXAMPLE
What would be the value at the end of 10 years of an escalation gradient of 10% (c = 10%) if
the initial amount was $200 for a period of 10 years? The first payment would be at the end of
the first year and the last payment at the end of the 10th year. The interest rate is 5% (r = 5%).
(Note: if c = 10%, then E = 10.517%.)

(a) What is the final payment?  $200 x e^{0.10 x 10} = $200(2.71828) = $544.
(b) What is the total payment (TP) of the gradient, not including the interest? The expression
is converted from the discrete form to the continuous form.

    PTE = A1(1 + E)[(1 + E)^n - 1]/E                                                   (9.45)
        = A1(e^c)(e^{cn} - 1)/(e^c - 1)
        = $200(1.10517)(2.71828 - 1)/(1.10517 - 1)
        = $3,612.
(c) What is the future worth including the compounding of interest?

    FE = A1(FE/A1, c, r, n)
       = A1{[e^c/(e^c - e^r)] x [e^{cn} - e^{rn}]}                                     (9.68)
       = $200[1.10517/(1.10517 - 1.05127)] x (2.71828 - 1.648720)
       = $200(1.10517/0.0539)(1.06956)
       = $4,386.

(d) Therefore, the total interest earned over the 10 years would be:

    FE - PTE = $4,386 - $3,612 = $774.
(e) Determine the present worth
    PE = A1(PE/A1, c, r, n)
       = A1{[e^c/(e^c - e^r)] x [(e^{cn}/e^{rn}) - 1]}                                 (9.67)
       = $200[1.10517/(1.10517 - 1.05127)] x [0.64872]
       = $200[13.301]
       = $2,660.
(f) What is the equivalent annual escalation payment AE?

    AE = A1(AE/A1, c, r, n)
       = A1{[(e^r - 1)e^c/(e^c - e^r)] x [(e^{cn} - e^{rn})/(e^{rn} - 1)]}             (9.69)
       = $200[(0.05127)(1.10517)/(1.10517 - 1.05127)] x [1.06956/0.64872]
       = $200[1.05124] x [1.64872]
       = $347.

Note that the escalation gradient continuous compounding values are larger than the geometric
gradient continuous compounding values because of the additional gradient period. The continuous
compounding formulas are presented in Table 9.2. When c = b, the escalation values are greater
than the geometric values by the factor e^c, which is one plus the discrete escalation rate; that is,
A for the geometric gradient is $314 and e^{0.1} = 1.10517, so the AE for the escalation gradient
is 1.10517 x 314 = $347.
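Similarly, the escalation factors (9.67)-(9.69) and the example above can be checked with a short
Python sketch (illustrative names; it assumes r and c are not equal):

    import math

    def escalation_gradient_factors(c, r, n):
        """Escalation gradient continuous compounding factors, r != c (Eqs. 9.67-9.69)."""
        lead = math.exp(c) / (math.exp(c) - math.exp(r))
        pe_a1 = lead * (math.exp(c * n) / math.exp(r * n) - 1)
        fe_a1 = lead * (math.exp(c * n) - math.exp(r * n))
        ae_a1 = (math.exp(r) - 1) * lead * (math.exp(c * n) - math.exp(r * n)) / (math.exp(r * n) - 1)
        return pe_a1, fe_a1, ae_a1

    A1 = 200
    pe, fe, ae = escalation_gradient_factors(c=0.10, r=0.05, n=10)
    print(round(A1 * fe))  # ~4,386  future worth
    print(round(A1 * pe))  # ~2,660  present worth
    print(round(A1 * ae))  # ~347    equivalent annual escalation payment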
9.10 SUMMARY OF GRADIENT EXPRESSIONS
The discrete gradient expressions were derived in the first sections using discrete interest com-
pounding and the results are summarized in Table 9.1. An example problem was solved in each
section to illustrate the application of the formulas. The next four sections derived continuous
interest compounding expressions and the results are summarized in Table 9.2. The difference in
the results was relatively small and the continuous interest results tended to be larger for future
worth and annual payments. The differences between the two compounding interest rates will
be greater for a greater number of time periods and higher interest rates. This will happen in
long range projects where life cycle costs are involved over periods such as 50–100 years in the
life of a bridge, highway interchange, or large office building. The continuous interest is also
used in the evaluation of interest rates on certificates of deposit in many institutions to show
higher return rates. The formulas will be utilized in many of the problems in this chapter.

Table 9.2: Continuous compounding factors of economic expressions (discrete payments and continuous interest)

A. Single Payment
   Present Worth      (P/F, r, n) = e^{-rn}
   Future Worth       (F/P, r, n) = e^{rn}

B. Uniform Payment (Uniform Series)
   Sinking Fund       (A/F, r, n) = (e^r - 1)/(e^{rn} - 1)
   Capital Recovery   (A/P, r, n) = e^{rn}(e^r - 1)/(e^{rn} - 1)
   Future Worth       (F/A, r, n) = (e^{rn} - 1)/(e^r - 1)
   Present Worth      (P/A, r, n) = (e^{rn} - 1)/[e^{rn}(e^r - 1)]

C. Uniform Gradient Expressions
   Standard Uniform Gradient
      Present Worth   (P/G, r, n)  = [(e^{rn} - 1) - n(e^r - 1)]/[(e^r - 1)^2 e^{rn}]
      Future Worth    (F/G, r, n)  = [(e^{rn} - 1) - n(e^r - 1)]/(e^r - 1)^2
      Uniform Series  (A/G, r, n)  = [(e^{rn} - 1) - n(e^r - 1)]/[(e^r - 1)(e^{rn} - 1)]
   Uniform Ramp Gradient
      Present Worth   (PR/G, r, n) = [(e^{r(n+1)} - 1) - (n + 1)(e^r - 1)]/[(e^r - 1)^2 e^{rn}]
      Future Worth    (FR/G, r, n) = [(e^{r(n+1)} - 1) - (n + 1)(e^r - 1)]/(e^r - 1)^2
      Uniform Series  (AR/G, r, n) = [(e^{r(n+1)} - 1) - (n + 1)(e^r - 1)]/[(e^r - 1)(e^{rn} - 1)]

D. Geometric Gradient Expressions
   Geometric Gradient
      Present Worth   (P/A1, b, r, n) = [1 - (e^{bn}/e^{rn})]/(e^r - e^b);  if b = r: (P/A1, b = r, n) = n/e^r
      Future Worth    (F/A1, b, r, n) = (e^{rn} - e^{bn})/(e^r - e^b);      if b = r: (F/A1, b = r, n) = n e^{r(n-1)}
      Uniform Series  (A/A1, b, r, n) = [(e^{rn} - e^{bn})/(e^r - e^b)] x [(e^r - 1)/(e^{rn} - 1)];
                      if b = r: (A/A1, b = r, n) = [n e^{rn}/(e^{rn} - 1)] x [(e^r - 1)/e^r]
   Escalation Gradient
      Present Worth   (PE/A1, c, r, n) = [e^c/(e^c - e^r)] x [(e^{cn} - e^{rn})/e^{rn}];  if c = r: (PE/A1, c = r, n) = n
      Future Worth    (FE/A1, c, r, n) = [e^c/(e^c - e^r)] x (e^{cn} - e^{rn});           if c = r: (FE/A1, c = r, n) = n e^{rn}
      Uniform Series  (AE/A1, c, r, n) = [(e^r - 1)e^c/(e^c - e^r)] x [(e^{cn} - e^{rn})/(e^{rn} - 1)];
                      if c = r: (AE/A1, c = r, n) = n(e^r - 1)e^{rn}/(e^{rn} - 1)

Notation: P = Present Worth; i = effective discrete interest rate per period; A = uniform end-of-period
payments; n = number of periods; F = Future Worth; g = Geometric Gradient Rate; G = Uniform Gradient
Amount; E = Escalation Gradient Rate; A1 = Initial Geometric Gradient Amount and Initial Escalation
Gradient Amount.
9.11 REFERENCES
[1] Creese, Robert C. and Adithan, M., Strategic Cost Analysis for Project Managers and En-
gineers, New Academic Science Limited, Tunbridge Wells, UK, pp. 25–58, 2012. 113,
142
[2] Park, Chan S., Contemporary Engineering Economics, 2nd ed., Addison-Wesley, Menlo
Park, CA, p. 803, 1997. 113
[3] Newnan, Donald G., Eschenbach, Ted G., and Lavelle, Jerome P., Engineering Economic
Analysis, 11th ed., Oxford University Press, New York, p. 655, 2012. 113
[4] Park, Chan S. and Sharp-Bette, Gunter P., Advanced Engineering Economics, John Wiley
& Sons, Inc., New York, pp. 38–128, 1990. 113, 129, 142
9.12 EVALUATIVE QUESTIONS
1. If the time period is 5 years and the discrete interest rate is 10%, find the following values
for the uniform gradient:
(a) (P/G, i = 10%, n = 5)
(b) (F/G, i = 10%, n = 5)
(c) (A/G, i = 10%, n = 5)
2. If the time period is 5 years and the discrete interest rate is 10% and the geometric gradient
is 5%, find the following values:
(a) (P/A1, g = 5%, i = 10%, n = 5)
(b) (F/A1, g = 5%, i = 10%, n = 5)
(c) (A/A1, g = 5%, i = 10%, n = 5)
3. If the time period is 5 years and the interest rate is 10% and the escalation rate is 5%, find
the following values:
(a) (PE/A1, E = 5%, i = 10%, n = 5)
(b) (FE/A1, E = 5%, i = 10%, n = 5)
(c) (AE/A1, E = 5%, i = 10%, n = 5)
4. The cost of materials for a project is $1,000,000 per year for the next 5 years. The interest
rate (return rate) is expected to be 15% and the escalation rate is predicted to be 5%. What
is the expected present worth of the materials for the project (a) using discrete interest
calculations with escalation and (b) using continuous interest and escalation calculations?
5. A contractor is building a high rise apartment which will take 3 years to build. The contrac-
tor predicts that the materials will be $3,000,000 per year. Consider these as end-of-year
expenses. He expects the materials to escalate by 8% per year. His expected return rate is
20%.
(a) What is the present worth of this project to the contractor?
(b) What is the Future Worth of the project to the contractor?
(c) What is the annual end-of-year payment?
(d) If the project was considered as a geometric gradient, what is the annual end-of-year
payment?
(e) If the annual payment for the gradient was a beginning-of-period payment, what is the
amount?
6. If the time period is 5 years and the nominal interest rate is 10%, find the following values:
(a) (P/F, r = 10%, n = 5)
(b) (F/P, r = 10%, n = 5)
(c) If F = $300, find the value of P.
7. If the time period is 5 years and the discrete interest rate is 10%, find the following values:
(a) (P/F, i = 10%, n = 5)
(b) (F/P, i = 10%, n = 5)
(c) If P = $300, find the value of F.
8. If the time period is 5 years and the nominal interest rate is 10%, find the following values:
(a) (P/A, r = 10%, n = 5)
(b) (P/F, r = 10%, n = 5)
(c) (A/P, r = 10%, n = 5)
(d) (A/F, r = 10%, n = 5)
(e) (F/P, r = 10%, n = 5)
(f) (F/A, r = 10%, n = 5)
(g) If A = $200, find the values of F and P.
(h) If P = $200, find the values of F and A.
9. If the time period is 5 years and the discrete interest rate is 10%, find the following values:
(a) (P/A, i = 10%, n = 5)
(b) (P/F, i = 10%, n = 5)
(c) (A/P, i = 10%, n = 5)
(d) (A/F, i = 10%, n = 5)
(e) (F/P, i = 10%, n = 5)
(f) (F/A, i = 10%, n = 5)
(g) If A = $200, find the values of F and P.
(h) If P = $200, find the values of F and A.
10. If the time period is 5 years and the nominal interest rate is 10%, find the following uniform
gradient factors:
(a) (P/G, r = 10%, n = 5)
(b) (F/G, r = 10%, n = 5)
(c) (A/G, r = 10%, n = 5)
11. If the time period is 5 years and the nominal interest rate is 10%, find the following uniform
ramp gradient values:
(a) (PR/G, r = 10%, n = 5)
(b) (FR/G, r = 10%, n = 5)
(c) (AR/G, r = 10%, n = 5)
12. If the time period is 5 years and the discrete interest rate is 10% and the geometric gradient
g rate is 6%, find the following values:
(a) (P/A1, g = 6%, i = 10%, n = 5)
(b) (F/A1, g = 6%, i = 10%, n = 5)
(c) (A/A1, g = 6%, i = 10%, n = 5)
(d) If A1 = $200, what are the values for P and F?
13. If the time period is 5 years and the nominal interest rate is 10% and the geometric gradient
b rate is 6%, find the following values:
(a) (P/A1, b = 6%, r = 10%, n = 5)
(b) (F/A1, b = 6%, r = 10%, n = 5)
(c) (A/A1, b = 6%, r = 10%, n = 5)
(d) If A1 = $200, what are the values for P, A, and F?
14. If the time period is 5 years and the nominal interest rate is 10% and the escalation gradient
c rate is 6%, find the following values:
(a) (PE/A1, c = 6%, r = 10%, n = 5)
(b) (FE/A1, c = 6%, r = 10%, n = 5)
(c) (AE/A1, c = 6%, r = 10%, n = 5)
(d) If A1 = $200, what are the values for PE, AE, and FE?
15. If the time period is 5 years and the nominal interest rate is 10% and the geometric gradient
b rate is 10%, find the following values:
(a) (P/A1, b = 10%, r = 10%, n = 5)
(b) (F/A1, b = 10%, r = 10%, n = 5)
(c) If A1 = $200, what are the values for P, A, and F?
16. If the time period is 5 years and the nominal interest rate is 10% and the escalation gradient
c rate is 10%, find the following values:
(a) (PE/A1, c = 10%, r = 10%, n = 5)
(b) (FE/A1, c = 10%, r = 10%, n = 5)
(c) (AE/A1, c = 10%, r = 10%, n = 5)
(d) If A1 = $200, what are the values for PE, AE, and FE?
17. Show the derivation of the equations for the geometric gradient continuous interest for-
mula when the continuous interest rate r and the continuous gradient b are equal.
18. Show the derivation of the equations for the escalation gradient continuous interest for-
mula when the continuous interest rate r and the continuous escalation gradient c are
equal.
19. Construct a table in a spreadsheet and calculate the expressions for the Uniform Gra-
dient and the Uniform Ramp Gradient factors of (P/G, i, n), (F/G, i, n), (A/G, i, n),
(PR/G, i, n), (FR/G, i, n), and (AR/G, i, n) for i = 10%, and n = 1–60 and n = 100.
Compare the values calculated with those in the various reference books [1–4], but it may
be difficult as the only ones usually given are (P/G, i, n) and (A/G, i, n).
20. Construct a table in a spreadsheet and calculate the geometric gradient expressions for the
factors of (P/A1, g, i, n), (F/A1, g, i, n), and (A/A1, g, i, n), for i = 10%, g = 5%, and
n = 1–60 and n = 100. Compare the values calculated with those calculated for Problem 2.
21. Construct a table in a spreadsheet and calculate the escalation gradient expressions for
the factors of (PE/A1, E, i, n), (FE/A1, E, i, n), and (AE/A1, E, i, n), for i = 10%,
E = 5%, and n = 1–60 and n = 100. Compare the values calculated with those calculated
for Problem 3.
22. Construct a table in a spreadsheet and calculate the expressions for the Uniform Gradient
and the Uniform Ramp Gradient factors with continuous compounding of (P/G, r, n),
(F/G, r, n), (A/G, r, n), (PR/G, r, n), (FR/G, r, n), and (AR/G, r, n) for r = 10%, and
n = 1–60 and n = 100. Compare the values calculated with those in Problem 8.
23. Construct a table in a spreadsheet and calculate the geometric gradient expressions for the
factors of (P/A1, b, r, n), (F/A1, b, r, n), and (A/A1, b, r, n), for r = 10%, b = 6%, and
n = 1–60 and n = 100. Compare the values calculated with those calculated for Prob-
lem 13.
24. Construct a table in a spreadsheet and calculate the escalation gradient expressions for
the factors of (PE/A1, c, r, n), (FE/A1, c, r, n), and (AE/A1, c, r, n), for r = 10%,
c = 6%, and n = 1–60 and n = 100. Compare the values calculated with those calculated
for Problem 14.
CHAPTER 10
Depreciation Terms, Methods,
and Systems
10.1 INTRODUCTION
Depreciation is one of the most important deductions that businesses make to recover their
initial investment in facilities and equipment used to produce their products and to lower their
taxes. Every enterprise has some depreciation: facilities such as offices and computers are
depreciable assets, as are items such as race horses, breeding cattle, hogs, goats or sheep,
amusement parks, and manufacturing facilities and equipment; almost anything used to produce
a product or service is depreciable. Depreciation is the item which causes major
corporations to keep two sets of books, one for the government for paying taxes and another for
the stockholders to show their earnings and profits. The primary source of reference used for
depreciation in this work is Publication 946, How to Depreciate Property, for 2016 Returns [1].
Depreciation is extremely complicated, but the basics presented will meet most of the practical
cases that are encountered in preparing estimates for projects and project evaluations.
A brief review of some of the terms from Chapter 1 will be repeated as they are critical in
the determination of depreciation and its impact upon profits and cash flows. The basic terms
are as follows.
1. Revenues = Income or money generated from Sales of Products and Services, Royalties,
Financial Investments, Gambling/Lottery Winnings, etc.

2. Expenses = Income or money consumed in production of Products and Services such as
labor, materials, equipment, computers, etc.

3. Depreciation = An annual income tax deduction that allows for the recovery of the cost
or other basis of certain property over the time you use the property. It is an allowance for
the wear and tear, deterioration or obsolescence of the property. IRS Publication 946 gives
details on the calculations for depreciation [2].

4. Tax Rate = A percentage amount applied to the profits to determine the amount of taxes.
The amounts of taxes and profits are determined from these terms by the relationships
previously presented in Chapter 1:

    Taxes = (Revenues - Expenses - Depreciation) x Tax Rate                            (10.1)
    Profits = Revenues - Expenses - Taxes.                                             (10.2)

10.1.1 CASH FLOWS
Cash Flows Before Taxes (CFBT) considers only net cash flows. This analysis is used when taxes
cannot be considered and one is considering only the expenses. In general, one can consider:
    CFBT (Cash Flows Before Taxes) = Revenues - Expenses.                              (10.3)
Cash Flows After Taxes (CFAT) considers taxes as an additional expense and is a more
realistic measure of the actual cash flows. CFAT is typically used rather than CFBT as taxes can be a large
item in the cash flows. Taxes are generally calculated based on a percentage of taxable income,
and this is often represented as 40% in the U.S., but many of the major companies
in the U.S. pay no income taxes. Some oil-rich countries in the Middle East have zero income
taxes whereas in some of the Northern European countries the total taxes reach 70%, but in
these countries the people generally are satisfied in paying higher taxes as they often provide
government pensions and/or healthcare. In Denmark, the tax on a new automobile was 180%
and thus one paid 2.8 times the car price to purchase a new car.
The income tax rate in the U.S. was as high as 90% in the 1950’s, but was reduced during
the last half of the 20th century to under 40% for individuals and corporate taxes were recently
reduced to 20%. Although new tax laws were passed in 2017 for 2018, the details have not been
available from the Internal Revenue Service. Very high tax rates are often imposed during wars
to fund the war effort. The 40% presented in most problems represents an estimate based upon
the total of the federal, state and local taxes and although federal taxes are decreasing, it may
cause an increase in the state and local taxes. The taxable income is based upon the net cash flows
less the depreciation expenses. Thus, the amount allowed for depreciation must be determined
to calculate the taxable income and taxes paid:
    CFAT (Cash Flows After Taxes) = Revenues - Expenses - Taxes.                       (10.4)
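A minimal Python sketch of Equations (10.1)-(10.4); the numbers used below are illustrative only,
not from the text:

    def cash_flow_summary(revenues, expenses, depreciation, tax_rate):
        """Taxes, profits, and cash flows per Eqs. (10.1)-(10.4)."""
        taxes = (revenues - expenses - depreciation) * tax_rate   # Eq. (10.1)
        profits = revenues - expenses - taxes                     # Eq. (10.2)
        cfbt = revenues - expenses                                 # Eq. (10.3)
        cfat = revenues - expenses - taxes                         # Eq. (10.4)
        return taxes, profits, cfbt, cfat

    # Illustrative values: $500,000 revenues, $300,000 expenses, $50,000 depreciation, 40% tax rate.
    taxes, profits, cfbt, cfat = cash_flow_summary(500_000, 300_000, 50_000, 0.40)
    print(taxes, profits, cfbt, cfat)   # 60,000  140,000  200,000  140,000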
10.2 DEPRECIATION TERMS AND DEFINITIONS
Depreciation is the systematic allocation of the cost of a capital asset over the recovery period
(useful life). IRS Publication 946 [1–11], which can be downloaded from the IRS website at
http://www.irs.gov, is the best publication on depreciation for most individuals and is up-
dated yearly for the new tax laws. Depreciation can be applied to nearly everything, but one
notable exception is land. There are a wide variety of terms applied in depreciation as to de-
preciable property, depreciation life, and depreciation techniques. Many countries have different
methods for depreciation, but the acceptable methods used in the U.S. system of depreciation
will be emphasized.
10.2.1 DEPRECIATION CLASSES OF PROPERTY
The two major classes of depreciable property are tangible property and intangible property.
Some examples of each are as follows.
A. Tangible Property—property you can see and touch [3]
1. Personal Property—assets such as automobiles, houses, buildings, machines, com-
puter equipment, furniture, etc.
2. Real Property—land and buildings erected or agricultural produce growing on the
land. The land is not depreciable, but buildings erected on the land are depreciable.
B. Intangible Property—property that has value, but cannot be seen or touched [3]
1. Intangible Property—it includes items such as goodwill, computer software, copy-
rights, royalty, franchises, patents, trademarks, trade names and permits and licenses.
10.2.2 RECOVERY PERIOD AND DEPRECIATION LIFE
Recovery period is the life used for determining the depreciation life for recovery of the asset.
However, there can be different permissible recovery periods permitted by the tax codes as most
companies want rapid recovery periods, but some want long recovery periods as their income
increases with increasing asset value and depreciation reduces the asset value.
Depreciable items must have a useful life of one year or more, be used in business or used
to produce income, and they lose value via obsolescence, wear and tear, or natural causes, and
are not inventory, stock in trade, or investment property. There are different types of asset life
and the major types of asset life for consideration in determining the recovery period and the
terms often considered are useful life, physical life economic life, and class life. The key term
is the class life and is defined by the Internal Revenue Service (IRS) in the U.S. Class life is
the “number of years that establishes the property class and recovery period for most types of
property under the General Depreciation Schedule (GDS) and the Alternative Depreciation
Schedule (ADS)” [4]. The various assets are assigned a specific asset class and that asset class
will have a recovery period for that class life.
The recovery periods used in the Modified Accelerated Cost Recovery System-General Depreciation
System (MACRS-GDS) for property classes are presented in Table 10.1. The recovery periods
have been set into nine classes to make classification easier. The property class is the recovery
period in years, that is a three-year property class implies that the recovery period is three years.
The MACRS-ADS and MACRS-GDS have specific recovery periods and the recovery
period for the MACRS-GDS is usually less than that of the MACRS-ADS. The GDS class life
values are generally less than the asset class life period and is the accelerated depreciation life.
Table 10.1: MACRS-GDS property classes and types of property [4]
One major exception is the recovery period for automobiles which is five years under the ADS
and the class life recovery period is only three years. This occurred as loans for new automobiles
increased from three years to five years and the government decided the class life should also
increase. The ADS class life values are equal or greater than the asset class life period. The
recovery periods are presented in Table 10.2 for several asset classes.
    3-year property: Trailer units for over-the-road use; special tools for manufacture of motor
        vehicles, fabricated metal products, glass products, and rubber products.
    5-year property: Automobiles, taxis, buses, light trucks, heavy duty trucks; information
        systems (computers and peripherals); construction; manufacture of apparel; cutting of
        timber; manufacture of chemicals and allied products; manufacture of electronic
        components, products, and systems.
    7-year property: Equipment for the manufacture of cement, glass products, primary ferrous
        metals, foundry products, fabricated metal products, electrical and non-electrical
        machinery, motor vehicles, ship and boat building; office furniture and fixtures. (If no
        life is established, a product is classified as 7-year property.)
    10-year property: Petroleum refining equipment; equipment for the manufacture of grain and
        grain mill products; ship and boat building dry docks.
    15-year property: Pipeline transportation, water transportation, telephone distribution
        equipment; electrical utility nuclear power plant; municipal wastewater treatment.
    20-year property: Electric, gas, water and steam utility services; gas utility production
        plants; electric utility hydraulic production plants; gas utility distribution plants.
    25-year property: Water utilities, municipal sewer.
    Residential Rental Property (27.5-year): Residential structures (depreciated over 27.5 years).
    Nonresidential Property (39-year): Buildings (depreciated over 39 years).

Table 10.2: Class life and recovery periods for selected asset classes [5]

    [Table 10.2 lists, for each selected asset class (office furniture; information systems and
    computers; data handling equipment; automobiles and taxis; buses; heavy general purpose
    trucks; breeding or dairy cattle; racehorses; mining; offshore drilling; petroleum refining;
    construction; manufacture of wood, rubber, plastic, glass, metal, machinery, electronic,
    motor vehicle, aerospace, and other products; satellites; utilities; municipal sewers; and
    theme and amusement parks), the class life in years and the corresponding GDS and ADS
    recovery periods in years. GDS = General Depreciation System; ADS = Alternative
    Depreciation System.]
10.2.3 DEPRECIATION CONVENTIONS
The possible depreciation conventions are full-year, mid-year, mid-quarter, and mid-month. The
full-year convention was used previously, but is no longer in use because depreciable purchases
made during the year do not all occur on the first day of the year. Thus, the mid-year convention
is the most used convention as it considers that purchases are made throughout the year. This is
used when all purchases are spread out through the year and only one depreciation rate is used
for the year.
Only mid-year convention will be considered in detail. Mid-year convention increases the
number of years for depreciation calculations. Thus, a 5-year mid-year depreciation will have only
half of the depreciation in the 1st and 6th years as it assumes the purchases are made throughout
the year. If most purchases are made at the end of the year, then the mid-quarter or mid-month
conventions would be required.
10.3 TRADITIONAL METHODS OF DEPRECIATION
There are numerous methods of depreciation and only a few of the more commonly used meth-
ods will be presented in detail. The straight line method is the traditional method which gave
a uniform amount of depreciation over the life of the asset. The declining balance method gave
a uniform percentage of depreciation over the investment life which gave larger amounts ini-
tially, but which has the problem of not going to zero. The MACRS system is a combination
of both systems, giving the higher initial amounts of depreciation via the declining balance
method initially and then switching to the straight line method to fully depreciate the item. The
production-based system is not used in the U.S., but is used in other parts of the world and is
based on the amount of use of the facility. There are special depreciation methods, and in the
U.S., the method “Section 179” permits expensing the entire purchase in the first year, but there
are several restrictions. Each of the methods will be presented in more detail.
The depreciation amounts are used to update the book value of the asset. The book value
is the initial investment minus the sum of all the depreciation taken up to the end of the year
under consideration. It can be expressed in equation form by:

    BV_k = B - Σ D_i   (sum over i = 1 to k),                                          (10.5)

where
    B    = initial investment amount
    D_i  = depreciation in year i
    BV_k = book value at end of year k.
10.3.1 STRAIGHT LINE DEPRECIATION METHOD
The straight line method gives a constant amount of depreciation per year, which is why it was
preferred, as it is the easiest method. The expression for straight line depreciation is:

    Dk = (B - SV)/N,                                                                   (10.6)

where
    Dk = depreciation amount for all years k (k = 1, ..., N)
    B  = investment (purchase cost + installation costs of asset)
    SV = salvage value at end of life of asset at N years (usually taken to be zero)
    N  = depreciable life of asset, years
    k  = year of interest.
The salvage value is taken as zero when using the MACRS straight line schemes, and the
actual net disposal value would be treated as a capital gain (or loss) when the asset is disposed.
Also, the prediction of a salvage value several years in the future is difficult and by taking the
value as zero, the salvage monies received could be taken as a capital gain. In some cases, the
cost of removal would be greater than any salvage value and would be difficult to predict, but
could be considered as a capital loss. One must list not only the purchase price, but also any
installation costs such as connecting utilities, preparing foundations and other necessary items
to make the purchase operable as the total cost is depreciable.
10.3.2 DECLINING BALANCE DEPRECIATION METHOD
The declining balance method is a constant percent of depreciation of the book value. It is a faster
depreciation method in the initial years than the straight line method. However, the depreciation
amounts decrease in the later years, become less than straight line, and the book value
never reaches zero.
The expression for declining balance depreciation, for full year depreciation, is:

    Dk = (B) x R(1 - R)^{k-1},                                                         (10.7)

where
    Dk = depreciation for year k (k = 1, ..., N)
    B  = investment (purchase cost + installation costs of asset)
    R  = depreciation rate = (usually 150% or 200%)/(N x 100)
    N  = depreciable life of asset, years
    k  = year of interest
    SV = salvage value (the salvage value is not included in the calculations).
If the life is 10 years (N) and the depreciation rate of 200% is used, then R = 200/(10 x 100)
= 0.2, where R is a decimal less than 1.0. The rate of 200% implies the value initially
would be twice that of straight line depreciation, and 150% implies that the initial value would
be 1.5 times that of the straight line method. However, the amount for the declining balance
method decreases but never becomes zero. This method is not used directly today as a separate
depreciation method, but it is used in combination with straight line depreciation, as presented
later as part of the MACRS. Using the mid-year convention, the first-year rate is halved
(equivalent to a 100% rate), or R = 0.10 for the first year.
For the mid-year convention, the declining balance is a more difficult expression:

    D1(%) = R/(100 x N x 2)                                                            (10.8)
    Dk(%) = R[1 - Σ Di(%)]   (sum over i = 1 to k - 1).                                (10.9)

The amount of depreciation would be

    Dk($) = B x Dk(%).                                                                 (10.10)
10.3.3 DEPRECIATION EXAMPLE
What are the depreciation amounts, using the mid-year convention, for an investment of $10,000 with
an asset life of 5 years using straight line and double declining balance (200%)? The straight line
depreciation would be 1/5 or 20%, that is $2,000. The 200% declining balance would be 2 x 20
or 40%. With the mid-year convention, the first year values and last year values will be half of
that calculated. Formulas (10.6) and (10.7) were used to calculate the values with:

    B = 10,000
    N = 5 depreciable life of asset, years (however, it will take 6 years to fully depreciate the
        item as only 1/2 of the amount calculated is allocated in the first and last years)

For example, in year 3 of the DDB, the amount of depreciation would be, using Equa-
tions (10.8), (10.9), and (10.10):

    D1(%) = 200/(100 x 5 x 2) = 0.20
    D3(%) = 0.40(1 - 0.20 - 0.32) = (0.192)
    D3($) = $10,000 x 0.192 = $1,920.

Assume the asset is disposed of in the last year. The calculation results are in Table 10.3.
SL (Straight Line) = 1/5 = 20%                DDB (Double Declining Balance) = 2/5 = 40%
Year 1: 1/2(0.20 x 10,000) = $1,000           Year 1: 1/2(0.40 x 10,000) = $2,000   (20.00%)
Year 2: 0.20 x 10,000 = $2,000                Year 2: 0.40 x 8,000 = $3,200         (32.00%)
Year 3: 0.20 x 10,000 = $2,000                Year 3: 0.40 x 4,800 = $1,920         (19.20%)
Year 4: 0.20 x 10,000 = $2,000                Year 4: 0.40 x 2,880 = $1,152         (11.52%)
Year 5: 0.20 x 10,000 = $2,000                Year 5: 0.40 x 1,728 = $691.20        (6.912%)
Year 6: 1/2(0.20 x 10,000) = $1,000           Year 6: 1/2(0.40 x 1,036.80) = $207.36

The double-declining balance method is a percentage of the book value, so after the first
year the amount is 40% of the book value of the previous year. In the first and last years, the
amount is only 1/2 as a result of the mid-year convention. However, in the MACRS systems,
the switchover in the last years is to straight line and the DDB method is not used.
Table 10.3: Straight line and declining balance depreciations and asset book values

    Year   Life Remaining   SL Depreciation ($)   SL Book Value ($)   DB Depreciation ($)   DB Book Value ($)
     0          5                    0                 10,000                0.00               10,000.00
     1          4 1/2            1,000                  9,000            2,000.00                8,000.00
     2          3 1/2            2,000                  7,000            3,200.00                4,800.00
     3          2 1/2            2,000                  5,000            1,920.00                2,880.00
     4          1 1/2            2,000                  3,000            1,152.00                1,728.00
     5            1/2            2,000                  1,000              691.20                1,036.80
     6          0                1,000                      0              207.36                  829.44
    Total                       10,000                                   9,170.56

The initial depreciation is much more rapid with the declining balance in the early years
(years 1 and 2 for this example), but it never reaches zero. The MACRS uses declining balance
until the amount of depreciation is less than or equal to that of the straight line method for the
remaining life. The straight line depreciation is based upon the remaining investment and the
remaining life, and the straight line depreciation for year 4 would be the book value at the end
of year 3 divided by the remaining life at the end of year 3, which is 2.5 years (for years 4 and 5,
and 1/2 year of year 6):

    SL Depreciation (year 4) = $2,880 (book value at the end of year 3) / 2.5 (years remaining after year 3)
                             = $1,152.
This is the same amount ($1,152) as the declining balance amount for year 4, so the switch
to straight line would be made at the end of year 3 on a straight line basis. The original straight
line depreciation of $2,000 is no longer used, but a straight line depreciation over the remaining
2.5 years, which is $1,152. Since the straight line method is used, the depreciation for years 4
and 5 will also be $1,152, and year 6 would be for 1/2 year and thus $576.

The total depreciation would be 2,000 + 3,200 + 1,920 + 1,152 + 1,152 + 576 = 10,000.
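A short Python sketch of this declining-balance-with-switchover procedure (mid-year convention,
switching to straight line once it is at least as large) reproduces the schedule above; the function
is an illustrative reading of the procedure described here, not the IRS's published table routine:

    def macrs_style_schedule(basis, life, db_rate=2.0):
        """Mid-year declining balance with switchover to straight line (a sketch)."""
        rate = db_rate / life                  # e.g., 2.0/5 = 0.40 for 200% declining balance
        book = basis
        remaining = float(life)                # recovery period left, in years
        schedule = []
        for year in range(1, life + 2):        # mid-year convention stretches over life + 1 years
            fraction = 0.5 if year in (1, life + 1) else 1.0
            ddb = book * rate * fraction
            sl = (book / remaining) * fraction if remaining > 0 else 0.0
            dep = min(max(ddb, sl), book)      # use the larger of DDB and SL; never go below zero
            schedule.append(round(dep, 2))
            book -= dep
            remaining -= fraction
        return schedule

    print(macrs_style_schedule(10_000, 5))   # [2000.0, 3200.0, 1920.0, 1152.0, 1152.0, 576.0]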
10.4 THE MACRS DEPRECIATION SYSTEMS
The MACRS are the depreciation systems used for tax depreciation in the U.S. The MACRS systems follow
the declining balance method with a switch-over to the straight line method. A more detailed
explanation of the MACRS systems are in Publication 946. The salvage value in the MACRS
system is taken as zero in the calculation of the depreciation amounts.
The two MACRS systems are:
1. GDS—General Depreciation System
2. ADS—Alternative Depreciation System
10.4.1 MACRS-GDS RECOVERY PERIODS AND PROPERTY CLASSES
The GDS is generally used unless ADS is required by law or if the user wants a slower depreci-
ation. The GDS has the faster depreciation schedule as more depreciation occurs in the earlier
years than in the ADS. The detailed Recovery Period and Property Classes for the GDS are
listed in Table 10.1. The 200% and 150% values refer to the declining balance method percent-
age amount used before the switch to the straight line depreciation method. The nine property
classes and recovery periods most frequently used [6] are:
(1) 3-yr property (200%)
(2) 5-yr property (200%)
(3) 7-yr property (200%—any property that does not have a class life specified
is considered to have a 7-yr class life.)
(4) 10-yr property (200%)
(5) 15-yr property (150%)
(6) 20-yr property (150%)
(7) 25-yr property (150%)
(8) Residential Rental Property (27.5-yr, straight line)
(9) Non-Residential Real Property (39-yr, straight line)
The GDS does permit other systems and they are used in special instances when acceler-
ated depreciation is not preferred. Accelerated depreciation leads to lower taxes paid, but also
leads to lower profits or even losses, which may not be desired. A list of all the possible ADS [6]
systems is as follows.
1. 200% declining balance for 3-, 5-, 7-, and 10-yr property
2. 150% declining balance over a GDS recovery period for all property used in farming busi-
nesses (except real property) and for all other property in the 15-, 20-, and 25-yr property
classes
3. straight line for 3-, 5-, 7-, 10-, 15-, 20-, and 25-yr property as well as the residential rental
property and non-residential real property
4. 150% declining balance over an ADS recovery period for property in the 3-, 5-, 7-, and
10-yr property classes.
10.4.2 MACRS-ADS RECOVERY PERIODS AND PROPERTY CLASSES
The ADS system almost always results in equal or longer recovery periods than the GDS [6].
For example, the personal property without a specified class life is 12 years in the ADS system
compared to the 7 years in the GDS system. Some of the differences are the following.
1. See Table 10.2 Class Life Asset Depreciation Range (ADR) System.
2. Any personal property without a class life specified is 12 years.
3. Any real property without a class life specified is 40 years.
There are certain instances where the ADS must be used instead of the GDS. The ADS
is required for the following.
1. Any tangible property predominantly used outside the U.S.
2. Any tax-exempt use property (churches, non-profit organizations, etc.).
3. Any property predominantly used in farming or agricultural business.
4. Any imported property covered by the executive order of the President of the U.S.
The primary systems of ADS used in practice are as follows.
1. 150% declining balance over the ADS recovery period.
2. Straight Line over the GDS Recovery Period (farming).
3. Straight Line of the ADS recovery Period.
10.4.3 MACRS-GDS MID-YEAR RECOVERY PERIODS
The MACRS is a declining balance method with a switchover to straight line [7]. This is the
most commonly used system. The MACRS assumes a zero salvage value for all cases. Table 10.4
gives the depreciation percentages for the various mid-year recovery periods for the MARCS-
GDS system. Note that there are only certain recovery periods; that is 3, 5, 7, 10, 15, and 20 years
and that the totals of the columns are always 100.00.
Other conventions are the mid-quarter convention and mid-month convention and must
be used for certain investments or other conditions. For example, if 40% or more of the purchases
are in one quarter, the mid-quarter convention must be used. The mid-month is primarily for
non-residential real property (e.g., railroad grading or tunnel bore) and residential rental prop-
erty. The mid-quarter and ADS tables are in the Publication 946. Only the mid-year convention
will be presented in detail.
MACRS-GDS Depreciation Example
Consider the case of a $10,000 investment, zero salvage value, and determine the MACRS-
GDS values for 200% GDS method for 5-yr property.
Using the mid-year convention, it will take 6 years, as the first and last years will receive
only 1/2 of the depreciation value calculated for that year. Also, if an asset is sold in a year, only
1/2 of the depreciation is applied for that year.

    R = 200/(5 x 100) = 0.40, that is, 40% depreciation per year (Double Declining Balance),

and the straight line rate R = 1/5 = 0.20 or 20%. Note that the 200% declining rate is double
that of the straight line method at the start. The straight line value changes according to the
book value and life remaining values of the previous year. The declining balance depreciation is
40% of the previous Book Value and is not used after the switchover point, when it is less
than or equal to the straight line depreciation. The results are in Table 10.5.
If one takes the Amount Used column and divides by 100, note that one obtains the exact
same percentages as in Table 10.4 for the Double Declining (200%) Balance column for a recovery
period of 5 years. Thus, if one takes the values for the five-year recovery in Table 10.4, uses the
percentages as decimals, and multiplies by the investment of $10,000, one obtains the MACRS
depreciation values of Table 10.5, which are presented in Table 10.6.
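As a quick check, a short sketch applying the Table 10.4 five-year percentages to the $10,000 basis:

    # 5-year MACRS-GDS mid-year percentages from Table 10.4
    rates = [20.00, 32.00, 19.20, 11.52, 11.52, 5.76]
    basis = 10_000
    depreciation = [basis * r / 100 for r in rates]
    print(depreciation)        # [2000.0, 3200.0, 1920.0, 1152.0, 1152.0, 576.0]
    print(sum(depreciation))   # 10000.0 -> fully depreciated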
10.4.4 MACRS-ADS MID-YEAR RECOVERY PERIODS
There are several MACRS-ADS recovery period tables and they are much larger as they have
many more periods and the full tables can be seen in Publication 946. A section of two tables
will be presented to illustrate the similar construction of the tables using the same mid-year con-
ventions. One table illustrate the straight line convention and the other will illustrate the 150%
declining balance method. The values presented in these tables then can be used to compare
with MACRS-GDS values in Table 10.4. Table 10.7 lists the percentages for the MACRS-
ADS mid-year convention for straight line depreciation.
Table 10.8 lists the percentages for the MACRS-ADS mid-year convention for the 150%
declining balance depreciation. Tables 10.7 and 10.8 give only a small portion of the values
for these ADS tables, but in comparison with the 3-, 5-, and 7-year values of the GDS in
Table 10.4, they are lower in the initial years. It is also interesting to note that the ADS schemes
have many more recovery periods than the GDS, such as the 2.5, 3.5, 4, 6, and 7.5 in addition
to the 3, 5, and 7 years, and this would result in much more record keeping.

Table 10.4: Depreciation percentages for MACRS-GDS mid-year (half-year) recovery periods [7]

    Recovery period (years):    3       5       7       10      15      20
    (3-, 5-, 7-, and 10-year property use 200% declining balance; 15- and 20-year use 150%)
    Year  1                   33.33   20.00   14.29   10.00    5.00    3.750
    Year  2                   44.45   32.00   24.49   18.00    9.50    7.219
    Year  3                   14.81   19.20   17.49   14.40    8.55    6.677
    Year  4                    7.41   11.52   12.49   11.52    7.70    6.177
    Year  5                           11.52    8.93    9.22    6.93    5.713
    Year  6                            5.76    8.92    7.37    6.23    5.285
    Year  7                                    8.93    6.55    5.90    4.888
    Year  8                                    4.46    6.55    5.90    4.522
    Year  9                                            6.56    5.91    4.462
    Year 10                                            6.55    5.90    4.461
    Year 11                                            3.28    5.91    4.462
    Year 12                                                    5.90    4.461
    Year 13                                                    5.91    4.462
    Year 14                                                    5.90    4.461
    Year 15                                                    5.91    4.462
    Year 16                                                    2.95    4.461
    Year 17                                                            4.462
    Year 18                                                            4.461
    Year 19                                                            4.462
    Year 20                                                            4.461
    Year 21                                                            2.231
    Totals                   100.00  100.00  100.00  100.00  100.00  100.000
Table 10.5: MACRS depreciation calculations from declining balance and straight line

    Year   Life        Declining Balance      Straight Line          MACRS Depreciation   End of Year
           Remaining   Depreciation ($)       Depreciation ($)**     Amount Used ($)      Book Value ($)
                       (200% of SL initial)   (use remaining life)   (larger of the two)
     0       5              --                     --                       0                10,000
     1*      4 1/2       2,000                  1,000                   2,000                 8,000
     2       3 1/2       3,200                  1,778                   3,200                 4,800
     3       2 1/2       1,920***               1,920***                1,920                 2,880
     4       1 1/2       ****                   1,152                   1,152                 1,728
     5         1/2       ****                   1,152                   1,152                   576
     6       0           ****                     576                     576                     0

    * First year depreciation is for only 1/2 year in the mid-year convention.
    ** Straight line is determined by the book value of the previous year divided by the remaining
       life at the end of the previous year.
    *** Switch-over point.
    **** Remaining declining balance depreciation items do not need to be calculated as they will
       be equal to or less than the straight line depreciation.

MACRS-ADS Depreciation Example
Consider the case of a $10,000 investment, zero salvage value, and determine the MACRS-ADS
depreciation values for the straight line and 150% declining balance methods for 5-yr property.
The percent rate is determined by:

    R = 150/(5 x 100) = 0.30 = 30%.

The result is a 30% depreciation rate per year for the 150% Declining Balance MACRS-ADS
compared to the 40% depreciation in the MACRS-GDS. The faster depreciation increases cash
flows, and it is advantageous for most companies to use the GDS depreciation. The Straight Line
depreciation is even slower than the 150% Declining Balance Method, as shown in Table 10.9.
Table 10.6: MACRS-GDS depreciation calculations

    Year   Depreciation (%)       (Column 2/100) x $10,000   MACRS-GDS Depreciation
           [Table 10.4, 5-year]                              Amount in $ [Table 10.5]
     1         20.00                     2,000                      2,000
     2         32.00                     3,200                      3,200
     3         19.20                     1,920                      1,920
     4         11.52                     1,152                      1,152
     5         11.52                     1,152                      1,152
     6          5.76                       576                        576

Table 10.7: Depreciation percentages for MACRS-ADS straight line depreciation [8]

    [Table 10.7 lists the MACRS-ADS mid-year straight line percentages for recovery periods of
    2.5, 3, 3.5, 4, 5, 6, 6.5, and 7 years. For the 5-year recovery period the yearly percentages
    are 10.00, 20.00, 20.00, 20.00, 20.00, and 10.00. Source: IRS Publication 946, p. 75.]

Table 10.8: Depreciation percentages for MACRS-ADS 150% declining balance depreciation [9]

    [Table 10.8 lists the MACRS-ADS mid-year 150% declining balance percentages for recovery
    periods of 2.5, 3, 3.5, 4, 5, 6, 6.5, and 7 years. For the 5-year recovery period the yearly
    percentages are 15.00, 25.50, 17.85, 16.66, 16.66, and 8.33. Source: IRS Publication 946, p. 85.]

Table 10.9: MACRS-ADS 150% declining balance and straight line depreciation calculations with
a 5-yr recovery period

    Year   150% DB Depreciation (%)   150% DB Depreciation   SL Depreciation (%)   SL Depreciation
           [Table 10.8]               Amount ($)             [Table 10.7]          Amount ($)
     1         15.00                    1,500                   10.00                 1,000
     2         25.50                    2,550                   20.00                 2,000
     3         17.85                    1,785                   20.00                 2,000
     4         16.66                    1,666                   20.00                 2,000
     5         16.66                    1,666                   20.00                 2,000
     6          8.33                      833                   10.00                 1,000
10.5 OTHER DEPRECIATION METHODS
There are several other depreciation systems, but two of the most interesting methods are the
Section 179 and the Production-Based methods. The Section 179 method is part of the U.S. code
and is used to expense rather than depreciate costs. The Production-Based method is based upon
the amount of use of the item and is not approved in the U.S. tax code.
10.5.1 SECTION 179 DEPRECIATION
This Special Depreciation Method is unique to the U.S. and is intended to give small and
medium-size companies a method of more rapid depreciation [10]. The limits that can be ex-
pensed change almost yearly with significant increases in the amounts. The maximum amount of
depreciation that could be allowed for 2010 was $250,000 and it increased yearly to $500,000 in
2016. When the total depreciation amount exceeds $2,010,000, the maximum limit is reduced
dollar for dollar. The total amount of depreciation must be less than $2,510,000 during 2016 to
use any of Section 179 Depreciation. There are special limitations on passenger automobiles or
Sport Utility Vehicles (SUV). Enterprise Zone Businesses can have an increase of $35,000 for
the Section 179 limit and have a reduction in the reduced dollar for dollar limit.
The asset must be used 100% exclusively for business use. If less than 100%, only that
percentage used for business can be used and one must use the GDS system to calculate yearly
amounts.
Section 179 Depreciation Examples:
1. Farmer Jimmy bought a tractor for $600,000 during the year and that was the only pur-
chase he has made. He could have a Section 179 Depreciation of $500,000 and the basis
for the remaining depreciation of the tractor would then be $100,000.
2. Farmer Rosalyn bought a small tractor for $120,000 during the year and that was the only
purchase she has made. She could have a Section 179 Depreciation of $120,000 and the
tractor would be fully depreciated.
3. Donnie Great American Farms bought a tractor for $700,000 during the year and had a to-
tal depreciation for all their purchases of $2,410,000 in 2010. Thus, the maximum amount
that could be used for Section 179 would be $500,000 - ($2,410,000 - $2,010,000) or
$100,000. The remaining depreciation for the tractor would be $700,000 - $100,000 or
$600,000.
4. Consultant Betty has purchased a new computer system for $8,000 and that is the sole
depreciable item she purchased. Betty uses the computer for business 80% of the time and
20% for personal use. She can take 80% of the $8,000 or $6,400 as a business depreciation
expense.
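A sketch of the Section 179 limit logic described above, using the 2016 figures; the function and
its handling of partial business use (following Example 4's percentage treatment) are illustrative
simplifications, not tax guidance:

    def section_179_deduction(purchases_total, asset_cost, business_use=1.0,
                              limit=500_000, phaseout_start=2_010_000):
        """Section 179 deduction sketch with dollar-for-dollar phase-out."""
        reduced_limit = max(0, limit - max(0, purchases_total - phaseout_start))
        eligible_cost = asset_cost * business_use   # only the business-use share qualifies
        return min(eligible_cost, reduced_limit)

    print(section_179_deduction(600_000, 600_000))                 # $500,000 (Example 1)
    print(section_179_deduction(120_000, 120_000))                 # $120,000 (Example 2)
    print(section_179_deduction(2_410_000, 700_000))               # $100,000 (Example 3)
    print(section_179_deduction(8_000, 8_000, business_use=0.80))  # $6,400   (Example 4)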
10.5.2 PRODUCTION-BASED DEPRECIATION
The production-based depreciation method is used where the extent of use of equipment depends
on the production quantity or volume of production. The production quantity would be based on
the number of units the equipment could produce before it is worn out. The volume of production
may be the volume of material extracted from a resource before the resource is depleted of the
material. This system is similar to that used in the U.S. for depletion of assets, such as oil,
natural gas, and minerals equipment. However, the production-based depreciation system is
not accepted by the IRS in the U.S. even though an example appears in Publication 946 [11].
Production-Based Depreciation Examples:
1. A truck has a capacity of hauling 500,000 tons during its lifetime and the initial cost of
the truck is $50,000. During the 3rd year, the truck hauls 75,000 tons. The depreciation
for the 3rd year would be:
    Depreciation = (75,000/500,000) x $50,000 = $7,500.
2. A gold mine has an estimated deposit of 500,000 troy ounces of gold and 1,000,000 tons of
rock need to be mined to process the gold. The purchase cost of the mine was $3,000,000.
A mine drilling machine was purchased for $400,000 to do all the mining, as its expected life
is also 1,000,000 tons. What would be the drilling machine depreciation if in the first year
it mined 150,000 tons?
    Depreciation = (150,000/1,000,000) x $400,000 = $60,000.
3. The expected life of a rolling mill roll unit is 600,000 tons of steel. The unit costs $1,400,000
and the amount rolled in year 2 was 80,000 tons. What would be the depreciation for the
second year?
    Depreciation = (80,000/600,000) x $1,400,000 = $186,667.
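The production-based method reduces to a single proportion; a minimal sketch:

    def production_based_depreciation(cost, lifetime_units, units_this_period):
        """Production-based (units-of-production) depreciation as described above."""
        return cost * units_this_period / lifetime_units

    print(production_based_depreciation(50_000, 500_000, 75_000))             # 7,500
    print(production_based_depreciation(400_000, 1_000_000, 150_000))         # 60,000
    print(round(production_based_depreciation(1_400_000, 600_000, 80_000)))   # 186,667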
10.6 SUMMARY
Cash flows and profits are the two major items for the financial success of an enterprise, and de-
preciation expenses have a major impact upon them. Depreciation is the recovery of an expense
made in the past, and cash flows increase as depreciation increases. Thus, accelerated depreciation acceler-
ates cash flows into the enterprise. The IRS Publication 946 is a primary guide for depreciation.
The declining balance and straight line methods have been combined and are the basis of the
MACRS depreciation systems. The Section 179 depreciation system allows small and mid-size
companies to recover much of their equipment investments under $500,000 in the first year.
The MACRS depreciation systems and the Section 179 depreciation system represent the
majority of depreciation systems used in the U.S. The advantage of the Section 179 depreciation
is that you do not need to keep records to determine depreciation over the investment life as you
can often fully depreciate the investment in the first year. This makes it very attractive for small
and mid-size companies.
10.7 REFERENCES
[1] IRS Publication 946—How to Depreciate Property (for 2016 Returns), Department of the Treasury, Internal Revenue Service (download at http://www.irs.gov), p. 114.
[2] IRS Publication 946—How to Depreciate Property (for 2016 Returns), Department of the Treasury, Internal Revenue Service (download at http://www.irs.gov), p. 3.
[3] IRS Publication 946—How to Depreciate Property (for 2016 Returns), Department of the Treasury, Internal Revenue Service (download at http://www.irs.gov), pp. 110–111.
[4] IRS Publication 946—How to Depreciate Property (for 2016 Returns), Department of the Treasury, Internal Revenue Service (download at http://www.irs.gov), pp. 30–31.
[5] IRS Publication 946—How to Depreciate Property (for 2016 Returns), Department of the Treasury, Internal Revenue Service (download at http://www.irs.gov), pp. 100–109.
[6] IRS Publication 946—How to Depreciate Property (for 2016 Returns), Department of the Treasury, Internal Revenue Service (download at http://www.irs.gov), pp. 29–38.
[7] IRS Publication 946—How to Depreciate Property (for 2016 Returns), Department of the Treasury, Internal Revenue Service (download at http://www.irs.gov), p. 71.
[8] IRS Publication 946—How to Depreciate Property (for 2016 Returns), Department of the Treasury, Internal Revenue Service (download at http://www.irs.gov), p. 75.
[9] IRS Publication 946—How to Depreciate Property (for 2016 Returns), Department of the Treasury, Internal Revenue Service (download at http://www.irs.gov), p. 85.
[10] IRS Publication 946—How to Depreciate Property (for 2016 Returns), Department of the Treasury, Internal Revenue Service (download at http://www.irs.gov), pp. 15–23.
[11] IRS Publication 946—How to Depreciate Property (for 2016 Returns), Department of the Treasury, Internal Revenue Service (download at http://www.irs.gov), p. 111.
10.8 EVALUATIVE QUESTIONS
1. An asset was purchased for $80,000 and it took $20,000 to prepare the site and install the
equipment. The asset has a recovery period of 7 years and MACRS-GDS depreciation
was used.
(a) What is the depreciation amount for the 1st year?
(b) What is the book value at the end of the 3rd year?
(c) What is the depreciation amount for the 5th year?
(d) What is the book value after the 6th year?
2. An asset has a value of $100,000 and a recovery period of 3 years. Use MACRS-GDS de-
preciation and determine the depreciation amount and book value over the life of the
investment.
3. Your company has purchased a large new tractor trailer truck (heavy-duty truck). It has a
basic cost of $180,000 with additional options costing $20,000, so the cost basis for
depreciation purposes is $200,000. Its market value at the end of 5 years is estimated as
$30,000, and it will be depreciated under the GDS.
(a) What is the cumulative depreciation through the end of the 3rd year?
(b) What is the MACRS depreciation in the 4th year?
(c) What is the book value at the end of the 2nd year?
(d) What is the book value at the end of the 5th year?
4. Your company has purchased a large new tractor trailer truck (heavy-duty truck). It has
a basic cost of $180,000 with additional options costing $20,000, so the cost basis for
depreciation purposes is $200,000. Its market value at the end of 5 years is estimated as
$30,000 and it will be depreciated under the ADS with straight line depreciation. (A
heavy-duty truck has a life of 6 years under ADS, but assume 5 years for this problem.)
(a) What is the cumulative depreciation through the end of the 3rd year?
(b) What is the MACRS-ADS depreciation in the 4th year?
(c) What is the book value at the end of the 2nd year?
(d) What is the book value at the end of the 5th year?
5. You are a private consultant and purchased a new computer valued at $3,000. You decide to
use the Section 179 method for the computer you purchased as this is the only equipment
purchase for the year.
(a) What is the 1st year depreciation amount?
(b) What is the 2nd year depreciation amount?
6. You have started a consultancy company and purchased a new computer system valued at
$3,000. You decided to use MACRS-GDS depreciation for the computer.
(a) What is the 1st year depreciation amount?
(b) What is the 2nd year depreciation amount?
7. A longwall mining machine was purchased for $500,000 and is expected to mine 5,000,000
tons of coal during its life. The machine mined 400,000 tons the 1st year and 700,000 tons
the 2nd year.
(a) What is the amount of depreciation using the production-based system?
Year 1______
Year 2______
(b) If the MACRS-GDS system is used, what is its recovery period?____ and what
would be the depreciation amounts for the first two years?
Year 1_____
Year 2_____
8. President Trump has decided that he wants to have a 4-year MACRS-GDS depreciation
scheme. You, as his chief tax adviser, are to determine the depreciation rates in 24 hours,
or be fired. What are the rates for each of the 5 years since mid-year convention is used?
9. Since the 25-year class life is new, the rates are not yet in Publication 946 (2016 Version).
Therefore, for the 150% class, calculate what the rates should be for the MACRS-GDS
system for a 25-year life.
10. An asset has a value of $100,000 and a recovery period of 3 years. Use MACRS-ADS (150%)
depreciation and determine the depreciation amount and book value over the life of the
investment. Compare results with question 2.
CHAPTER 11
The Impact of Loans upon
Cash Flows, Taxes, and Profits
11.1 INTRODUCTION
The use of loans is frequently necessary to purchase capital equipment including machinery,
computers, facilities, materials, and other items necessary to produce products or services re-
quired for the enterprise. Loans are important as they impact cash flows, profits, and taxes. The
loan is repaid in payments which contain two major components—the principal portion which
is the portion used to repay the loan balance and the interest which is the fee for the use of the
capital borrowed. The first section analyzes loans to determine the two components of principal and interest and then focuses on how these items impact the cash flows before taxes, taxes, cash flows after taxes, and profits. The loan interest is a tax-deductible expense and the principal is not, but the principal payment reduces the cash flows. Therefore, it is critical to know
both the principal and the interest portions of a loan. Many general references [1–3] exist on
general methods of loans, but only the Principal Present Value Approach will be presented in
this chapter with permission of the American Society for Engineering Education [4].
11.2 THE PRESENT VALUE OF PRINCIPAL APPROACH
FOR DETERMINING THE PRINCIPAL AND
INTEREST COMPONENTS OF A LOAN
A new approach has been developed to determine the amounts of interest and principal of a
loan. In the repayment of a loan, the loan payment is usually fixed for each of the payment periods, but the individual principal and interest components change each period, with the principal payment increasing each period and the interest portion decreasing each period. However, the
present worth of the principal is constant for each period and thus once determined, it permits
relatively simple calculations for the two components of principal and interest for each period.
The concept that the principal present value is constant gives a better understanding of how
loan payments work. This approach is thus called the Present Value of Principal Approach or
the Principal Present Worth Approach. To illustrate this process an example will be presented.
The nomenclature used is expressed in Table 11.1.
Table 11.1: Nomenclature for loan principal and interest components [4]

Symbol | Variable Description | Formula for Variable or Type of Unit Used
LV or P | Loan value or initial principal |
i | Interest rate (%) | Used in decimal form
n | Loan life | Life in years
A | Loan payment | A = LV[i(1+i)^n/{(1+i)^n - 1}] = LV(A/P, i, n)
t | Time period of interest | t = 1, 2, ..., n
PVP | Present value of principal per period | PVP = (A - i*LV)/(1 + i)
P(t) | Principal per period t | P(t) = PVP(1+i)^t = PVP(F/P, i, t)
I(t) | Interest per period | I(t) = A - P(t)
PVI(t) | Present value of interest per period | PVI(t) = I(t)/(1+i)^t = I(t)(P/F, i, t)
U(t) | Unpaid balance of loan after t periods | U(t) = LV - sum of P(t) = LV - (A - i*LV)[((1+i)^t - 1)/i] = LV - (A - i*LV)(F/A, i, t)
IT(t) | Total interest paid to period t | IT(t) = sum of I(t) = t*A - PT(t) = t*A - (A - i*LV)(F/A, i, t)
PT(t) | Total principal paid to period t | PT(t) = sum of P(t) = LV - U(t) = (A - i*LV)(F/A, i, t)
11.3 EXAMPLE LOAN PROBLEM USING THE PRESENT VALUE
OF PRINCIPAL APPROACH
Let us consider determining the values of interest and principal payments on a Loan of $10,000
(LV) with an interest rate (i) of 5% and 10 (n) yearly end-of-year payments. The initial values
are:

LV = P = $10,000;  i = 5% = 0.05;  n = 10.
The initial calculated values are the loan payment (A) and the present value of the principal (PVP):
A = LV × (A/P, i, n) = LV[i(1 + i)^n/{(1 + i)^n - 1}]
  = $10,000[0.05 × (1.05)^10/{(1.05)^10 - 1}] = $1,295.05    (11.1)

PVP = (A - i × LV)/(1 + i) = (1,295.05 - 0.05 × 10,000)/(1.05) = $757.19.    (11.2)
All remaining calculations are based on these two calculated values—A and PVP—and the input
values of LV, i, and n, and the time of interest, t:
P(t) = PVP(1 + i)^t = PVP(F/P, i, t)    (11.3)

I(t) = A - P(t)    (11.4)

PVI(t) = I(t)/(1 + i)^t = I(t)(P/F, i, t)    (11.5)

U(t) = LV(F/P, i, t) - A(F/A, i, t) = LV(1 + i)^t - A[(1 + i)^t - 1]/i    (11.6)

PT(t) = sum of P(t) = (A - i × LV)(F/A, i, t) = (A - i × LV)[((1 + i)^t - 1)/i]    (11.7)

IT(t) = sum of I(t) = t × A - PT(t) = t × A - (A - i × LV)[((1 + i)^t - 1)/i].    (11.8)
Some interesting observations about the totals of the loan components can be made. (See Ta-
ble 11.2.) The most important item is that the present value of the principal is a constant. Ob-
serve that the present value of the interest and the present value of the principal sum to the total
initial value of the loan. The total sum of the principal parts of the loan is the expected total
value of the initial loan value. These items are not as apparent in the traditional approaches to
the evaluations of loans.
Calculate the present worth of the interest payments. Since they are not the same, each needs to be calculated individually. Thus,

PWI(total) = PWI(1) + PWI(2) + ... + PWI(10)
           = I(1)/(1 + 0.05) + I(2)/(1 + 0.05)^2 + ... + I(10)/(1 + 0.05)^10
           = $2,428.
As a check on the PVP value, one can consider it to be an escalation gradient (E) of 5% with an interest rate (i) of zero and A1 = 757.19; thus, from Chapter 9:

F_E = A1{[(1 + E)/(E - i)][(1 + E)^n - (1 + i)^n]}    (9.37)
    = $757.19 × {[(1 + 0.05)/(0.05 - 0)][(1 + 0.05)^10 - (1 + 0)^10]}
    = $757.19 × 13.20679
    = $10,000.
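As a rough illustration, the Python sketch below (function and variable names are illustrative, not from the text) computes A and PVP from Eqs. (11.1) and (11.2) and then builds the period-by-period interest, principal, and unpaid balance of Table 11.2 for the $10,000, 5%, 10-year example.

```python
def loan_components(LV, i, n):
    """Loan payment, PVP, and the year-by-year split of Table 11.2.
    The unpaid balance is kept as a running total, which matches the
    closed form of Eq. (11.6)."""
    A = LV * i * (1 + i) ** n / ((1 + i) ** n - 1)   # Eq. (11.1)
    PVP = (A - i * LV) / (1 + i)                     # Eq. (11.2)
    schedule, unpaid = [], LV
    for t in range(1, n + 1):
        P_t = PVP * (1 + i) ** t                     # Eq. (11.3)
        I_t = A - P_t                                # Eq. (11.4)
        unpaid -= P_t
        schedule.append((t, round(I_t, 2), round(P_t, 2), round(unpaid, 2)))
    return round(A, 2), round(PVP, 2), schedule

A, PVP, schedule = loan_components(10_000, 0.05, 10)
print(A, PVP)        # 1295.05 757.19
print(schedule[0])   # (1, 500.0, 795.05, 9204.95)
print(schedule[-1])  # about (10, 61.67, 1233.38, 0.0)
```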
11.4 LOANS WITH CASH FLOWS, DEPRECIATION,
PROFITS, AND TAXES
The basic relationships between the revenues, expenses, cash flows before taxes, cash flows after taxes, taxes, gross profits, net profits, depreciation, loan interest, and loan principal will be reviewed. The nomenclature for the cash flows is in Table 11.3. The equations for the relationships
will be presented and then an example problem will be used.
The basic expression for Cash Flows Before Taxes is:
CFBT = Revenues + Loan Value - Expenses
CFBT = R + LV - E.    (11.9)

The basic expression for Cash Flows with a Loan is:

CFAL = Cash Flows Before Taxes - Loan Cash Flow
CFAL = CFBT - LCF.    (11.10)

The basic expression for Taxable Income with a Loan is:

TI = Cash Flows Before Taxes - Interest Paid - Depreciation
TI = CFBT - I - D.    (11.11)
Table 11.2: Calculations for example Problem 1 using the Present Value of Principal Approach

t | I(t) | P(t) | PVI(t) | PVP | U(t) | IT(t) | PT(t)
0 | 0.00 | 0.00 | 0.00 | 0.00 | 10,000.00 | 0.00 | 0.00
1 | 500.00 | 795.05 | 476.19 | 757.19 | 9,204.95 | 500.00 | 795.05
2 | 460.25 | 834.80 | 417.46 | 757.19 | 8,370.16 | 960.25 | 1,629.84
3 | 418.51 | 876.54 | 361.52 | 757.19 | 7,493.62 | 1,378.76 | 2,506.38
4 | 374.68 | 920.36 | 308.25 | 757.19 | 6,573.25 | 1,753.44 | 3,426.75
5 | 328.66 | 966.38 | 257.52 | 757.19 | 5,606.87 | 2,082.10 | 4,393.13
6 | 280.34 | 1,014.70 | 209.20 | 757.19 | 4,592.17 | 2,362.44 | 5,407.83
7 | 229.61 | 1,065.44 | 163.18 | 757.19 | 3,526.73 | 2,592.05 | 6,473.27
8 | 176.34 | 1,118.71 | 119.35 | 757.19 | 2,408.02 | 2,768.39 | 7,591.98
9 | 120.40 | 1,174.64 | 77.61 | 757.19 | 1,233.38 | 2,888.79 | 8,766.62
10 | 61.67 | 1,233.38 | 37.86 | 757.19 | 0.00 | 2,950.46 | 10,000.00
Totals | 2,950.46 | 10,000.00 | 2,428.14 | 7,571.86 | | |

(Columns: period t, interest per period I(t), principal per period P(t), present value of interest PVI(t), present value of principal PVP, unpaid balance U(t), total interest paid IT(t), total principal paid PT(t).)
Table 11.3: Nomenclature for cash flow analysis with loans and depreciation [4]

Symbol | Variable Description | Formula for Variable or Type of Unit Used
INV | Investment that is depreciable | An expense, usually at time zero
nD | Project life (years) |
t | Study period year | t = 0, 1, 2, ..., nD
R(t) | Revenue |
E(t) | Expense | Expenses include the initial investment
CFBT(t) | Cash flows before taxes | CFBT(t) = R - E (includes INV)
LV | Loan amount | LV = P (usually at time zero)
A(t) | Loan payment | A = LV(A/P, i, nL)
i | Loan interest rate |
nL | Loan life (years) | tL = 1, 2, ..., nL
LI(t) | Loan interest amount | Loan interest for each period t
PVP | Present value of loan principal | PVP = (A - i*LV)/(1 + i)
LCF(t) | Loan cash flow | LV (t = 0) and A (t = 1, 2, ..., nL) values through the loan life
LP(t) | Loan principal amount | LP(t) = A(t) - LI(t)
CFAL(t) | Cash flows after loan | CFAL(t) = CFBT(t) - LCF(t)
DR(t) | Depreciation rate for year t | Depreciation rates for the investment
D(t) | Depreciation amount for year t | D(t) = DR(t) × INV
TI(t) | Taxable income | TI(t) = CFBT(t) - I(t) - D(t)
TR | Tax rate | Usually specified; use decimal form
TP(t) | Taxes paid | TP(t) = TI(t) × TR
CFAT(t) | Cash flows after taxes | CFAT(t) = CFAL(t) - TP(t)
NP(t) | Net profits | NP(t) = TI(t) - TP(t) = TI(t)(1.0 - TR)
CFAT(t) | Cash flows after taxes (a check) | CFAT(t) = NP(t) + D(t) - P(t)
The Basic Expressions for Taxes Paid are:

TP = Taxable Income × Tax Rate
TP = TI × TR    (11.12)

TP = (Cash Flows Before Taxes - Interest - Depreciation) × Tax Rate
TP = (CFBT - I - D) × TR.    (11.13)
The net profits can be determined by:

NP = Taxable Income - Taxes
NP = TI - TI × TR = TI × (1 - TR)    (11.14)

or by

NP = (Cash Flows Before Taxes - Interest - Depreciation) × (1 - Tax Rate)
NP = (CFBT - I - D) × (1 - TR).    (11.15)

The cash flows after taxes can be determined by:

CFAT = Cash Flows After Loan - Taxes Paid
CFAT = CFAL - TP    (11.16)

or by

CFAT = Net Profits + Depreciation - Loan Principal
CFAT = NP + D - P(t).    (11.17)
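To show how Eqs. (11.9)–(11.17) fit together for a single year, here is a minimal Python sketch. It assumes a year with no new loan proceeds and a 40% tax rate (the rate implied by the taxes shown in Table 11.4); the function name is illustrative.

```python
def one_year(R, E, loan_payment, loan_interest, D, tax_rate):
    """One year of the cash flow/profit relationships, Eqs. (11.9)-(11.17),
    for a year with no new loan proceeds (LV = 0)."""
    CFBT = R - E                          # Eq. (11.9)
    CFAL = CFBT - loan_payment            # Eq. (11.10), LCF = loan payment
    TI = CFBT - loan_interest - D         # Eq. (11.11)
    TP = TI * tax_rate                    # Eqs. (11.12)-(11.13)
    NP = TI - TP                          # Eq. (11.14)
    CFAT = CFAL - TP                      # Eq. (11.16)
    principal = loan_payment - loan_interest
    assert abs(CFAT - (NP + D - principal)) < 1e-6   # Eq. (11.17) check
    return CFBT, CFAL, TI, TP, NP, CFAT

# Year 1 of Example Problem A (matches the first-year row of Table 11.4)
print(one_year(30_000, 14_000, 6_309.42, 2_000, 10_000, 0.40))
```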
11.5 EXAMPLE PROBLEMS OF LOANS WITH CASH
FLOWS, DEPRECIATION, TAXES, AND PROFITS
Example Problem A. China Electronics-USA wants to purchase a new set of tooling with a
total cost including installation of $50,000. This tooling would have a MACRS-GDS recovery
period of 5 years to determine the depreciation percentages and the study period would be for
6 years. A loan for $20,000 would be needed and it is planned to pay the loan off in 4 years with
an interest rate of 10% and the loan payments for each of the 4 years would be $6,309.42. The
project is expected to generate a revenue of $30,000 per year and the expected annual costs are
$14,000. The desired rate of return for the company is 15%. Although the totals for the CFAT
and NP are the same with zero required return considerations, they are quite different as to when
they occur. They are very different when the 15% return is considered. The present worth of the
cash flows is only $3,710 whereas the net profits are $13,242 and the payback period is 4 years.
The calculations are in Table 11.4.

Table 11.4: Example Problem A, loan with cash flows, profits, taxes, and MACRS depreciation

t | R(t) | E(t) | CFBT(t) | LCF(t) | CFAL(t) | LI(t) | DR(t)% | D(t) | TI(t) | TP(t) | CFAT(t) | Cum. CFAT | NP(t)
0 | 0 | 50,000 | -50,000 | 20,000 | -30,000 | 0 | 0 | 0 | 0 | 0 | -30,000 | -30,000 | 0
1 | 30,000 | 14,000 | 16,000 | -6,309 | 9,691 | 2,000 | 20.00 | 10,000 | 4,000 | 1,600 | 8,091 | -21,909 | 2,400
2 | 30,000 | 14,000 | 16,000 | -6,309 | 9,691 | 1,569 | 32.00 | 16,000 | -1,569 | -628 | 10,318 | -11,591 | -941
3 | 30,000 | 14,000 | 16,000 | -6,309 | 9,691 | 1,095 | 19.20 | 9,600 | 5,305 | 2,122 | 7,569 | -4,023 | 3,183
4 | 30,000 | 14,000 | 16,000 | -6,309 | 9,691 | 574 | 11.52 | 5,760 | 9,666 | 3,867 | 5,824 | 1,801 | 5,800
5 | 30,000 | 14,000 | 16,000 | 0 | 16,000 | 0 | 11.52 | 5,760 | 10,240 | 4,096 | 11,904 | 13,705 | 6,144
6 | 30,000 | 14,000 | 16,000 | 0 | 16,000 | 0 | 5.76 | 2,880 | 13,120 | 5,248 | 10,752 | 24,457 | 7,872
Total | 180,000 | 134,000 | 46,000 | -5,238 | 40,762 | 5,238 | 100 | 50,000 | 40,762 | 16,305 | 24,457 | | 24,457

Present Worth (CFAT, 0%) = 24,457;  Present Worth (CFAT, 15%) = 3,710;  Present Worth (NP, 15%) = 13,242
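A minimal sketch of how the CFAT column of Table 11.4 can be regenerated is shown below. It assumes a 40% tax rate (consistent with the taxes-paid column) and recomputes the $6,309.42 loan payment from Eq. (11.1); small differences from the table are rounding.

```python
macrs_gds_5yr = [0.20, 0.32, 0.192, 0.1152, 0.1152, 0.0576]
INV, LOAN, i, n_loan, TR = 50_000, 20_000, 0.10, 4, 0.40
A = LOAN * i * (1 + i) ** n_loan / ((1 + i) ** n_loan - 1)   # about 6,309.42

balance = LOAN
cfat = [-(INV - LOAN)]                      # year 0: -30,000 after the loan
for t in range(1, 7):
    interest = balance * i if t <= n_loan else 0.0
    payment = A if t <= n_loan else 0.0
    balance = balance + interest - payment
    D = macrs_gds_5yr[t - 1] * INV
    TI = 30_000 - 14_000 - interest - D     # revenue - expenses - interest - depreciation
    TP = TI * TR
    cfat.append(30_000 - 14_000 - payment - TP)

print([round(x) for x in cfat])   # [-30000, 8091, 10318, 7569, 5824, 11904, 10752]
```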
Example Problem B. Use the same information in Problem A but use Section 179 De-
preciation instead of the MACRS Depreciation. The difference between the two depreciation
rates for problems A and B is significant when the required rates of return are considered. The
faster depreciation (Section 179 in this problem) gives higher cash flows and much lower prof-
its. Thus, the Section 179 depreciation or other accelerated depreciation would be reported to
the government and straight line depreciation would be reported to the stockholders as slower
depreciation gives higher profits in the early years. In the first year the Section 179 depreciation
gave a loss of $21,600 whereas the MACRS gave a profit of $2,400. The present worth of the cash flows increases to $7,298 whereas the present worth of the profits decreases to $7,860, and the payback period is reduced to 3 years. Straight line depreciation would give more profits and lower cash flows, as its depreciation rates are slower than the MACRS rates. The Section 179 results are in Table 11.5.

Table 11.5: Example Problem B, loan with cash flows, profits, taxes, and Section 179 depreciation

t | R(t) | E(t) | CFBT(t) | LCF(t) | CFAL(t) | LI(t) | DR(t)% | D(t) | TI(t) | TP(t) | CFAT(t) | Cum. CFAT | NP(t)
0 | 0 | 50,000 | -50,000 | 20,000 | -30,000 | 0 | 0 | 0 | 0 | 0 | -30,000 | -30,000 | 0
1 | 30,000 | 14,000 | 16,000 | -6,309 | 9,691 | 2,000 | 100.00 | 50,000 | -36,000 | -14,400 | 24,091 | -5,909 | -21,600
2 | 30,000 | 14,000 | 16,000 | -6,309 | 9,691 | 1,569 | 0 | 0 | 14,431 | 5,772 | 3,918 | -1,991 | 8,659
3 | 30,000 | 14,000 | 16,000 | -6,309 | 9,691 | 1,095 | 0 | 0 | 14,905 | 5,962 | 3,729 | 1,737 | 8,943
4 | 30,000 | 14,000 | 16,000 | -6,309 | 9,691 | 574 | 0 | 0 | 15,426 | 6,171 | 3,520 | 5,257 | 9,256
5 | 30,000 | 14,000 | 16,000 | 0 | 16,000 | 0 | 0 | 0 | 16,000 | 6,400 | 9,600 | 14,857 | 9,600
6 | 30,000 | 14,000 | 16,000 | 0 | 16,000 | 0 | 0 | 0 | 16,000 | 6,400 | 9,600 | 24,457 | 9,600
Totals | 180,000 | 134,000 | 46,000 | -5,238 | 40,762 | 5,238 | 100 | 50,000 | 40,762 | 16,305 | 24,457 | | 24,457

Present Worth (CFAT, 0%) = 24,457
Present Worth (CFAT, 15%) = 7,298 = -30,000 + 24,091/(1.15)^1 + 3,918/(1.15)^2 + 3,729/(1.15)^3 + 3,520/(1.15)^4 + 9,600/(1.15)^5 + 9,600/(1.15)^6
Present Worth (NP, 15%) = 7,860 = 0 - 21,600/(1.15)^1 + 8,659/(1.15)^2 + 8,943/(1.15)^3 + 9,256/(1.15)^4 + 9,600/(1.15)^5 + 9,600/(1.15)^6
11.6 SUMMARY
This new approach to loan evaluations does not contain terms to powers of (t - 1) or (t + 1) that have been required in previously published approaches for loan calculations. This approach better explains how loans work in that the present worth of the principal is constant for each period t, which has not been mentioned or emphasized previously in the literature.
The effects of loan interest and depreciation upon profits and cash flows are large when a required return, or the time value of money, is considered. The effect of different depreciation methods was also illustrated by comparing MACRS vs. Section 179 depreciation. The present worth of the Cash Flows After Taxes and of the Profits will be the same if there is no required return or time value of money consideration, but there are large differences when the required return is considered. The focus is on cash flows, as accelerated depreciation gives higher cash flows and lower taxes, but also lower profits. The profits reported to the stockholders are based on straight line depreciation, as they will be greater, but an accelerated depreciation will be used to report to the government for taxes.
11.7 REFERENCES
[1] Creese, Robert C. and Adithan, M., Strategic Cost Analysis for Project Managers and Engineers, New Academic Science Limited, Tunbridge Wells, UK, pp. 74–82, 2012.
[2] Park, Chan S., Contemporary Engineering Economics, 2nd ed., Addison-Wesley, Menlo Park, CA, p. 803, 1997.
[3] Newnan, Donald G., Eschenbach, Ted G., and Lavelle, Jerome P., Engineering Economic Analysis, 11th ed., Oxford University Press, New York, p. 655, 2012.
[4] Creese, R. C., Present value analysis of traditional loans, paper presented at ASEE Annual Conference and Exposition, Atlanta, Georgia, pp. 23.981.1–23.981.10, June 2013. https://peer.asee.org/22366 (Material published with permission of the American Society for Engineering Education.)
11.8 EVALUATIVE QUESTIONS
1. A loan for $40,000 is made for a period of 10 years with a 4% interest rate. Determine the
loan payments for end-of-period payments, present value of the principal per period, the
principal payments each period, the interest payments per period, the cumulative principal
for each period, the cumulative interest per period, and the unpaid balance at the end of
each period.
2. A loan is taken for a flat in the metropolis of Morgantown. The home is priced at $550,000
and the mortgage is for $400,000 at 6% APR for 30 years and the payments are made
monthly.
(a) What is the mortgage payment?
(b) What is the interest on the 125th payment?
(c) What is the principal on the 125th payment?
(d) What is the total interest paid on the loan during the 30 years?
(e) What is the remaining principal amount after the 125th payment is paid?
(f ) What is the total interest paid after the 125th payment is paid?
3. Company WV Consolidated has purchased a new 3D printing machine for $300,000 with a loan of $200,000 at 6% interest for 5 years. The annual income (savings) from this machine is expected to be $120,000 and the annual expenses are expected to be $30,000. MACRS-GDS depreciation with the 5-year class life is used, and the tax rate is 40%.
(a) What is the amount of depreciation for the 4th year?
(b) What is the book value after the 4th year?
(c) What would be the income taxes due for the 4th year assuming this is the only ma-
chine?
(d) What would be the total cash flows for the 4th year?
(e) What would be the net profits for the 4th year?
(f ) What are the CFAT (15%) at the end of the project?
(g) What is the present worth of the profits at 15% return on the project?
4. Resolve the problem in Section 11.3 and Table 11.2 using an investment of $12,000 instead
of $10,000 and calculate the annual payment, the interest per period, the principal per
period, the PV of principal per period, the unpaid balance each year, and the cumulative
total interest paid and the total principal paid at the end of each period using a 15% interest
rate.
5. Resolve the Example Problem A (Table 11.4) of Section 11.5 using an investment of
$60,000 instead of $50,000 and a loan for $30,000 instead of $20,000 and calculate all
the values in Table 11.4 as well as the Present Worth of the Cash Flows and the Present
Worth of the Profits using a Minimum Acceptable Rate of Return (MARR) of 15%.
6. Resolve problem Example Problem A in Table 11.4 using straight line depreciation with
the mid-year convention and compare the results with those of Problem A and Problem B.
PART III
Methods for Project Evaluation and
Risk Analysis
CHAPTER 12
Basic Project Evaluation
Techniques
12.1 INTRODUCTION
There are several basic methods for evaluating projects, and two projects will be presented for comparison by these traditional techniques. In some cases the results will differ, and the evaluations can be made either on cash flows or on net profits. The techniques that have been commonly used for project evaluation [1–3] are presented in this chapter and in Chapter 13. The commonly used basic techniques for project evaluation presented in this chapter are:
1. Payback Period
2. Discounted Payback Period
3. Present Worth (PW) Analysis
4. Future Worth (FW) Analysis
5. Average Annual Equivalent (Average Annual Cost)
6. Return on Original Investment (ROI)
7. Return on Average Investment (RAI)
These techniques will be illustrated with sample problems utilizing the data in Tables 12.1 and 12.2. The techniques can be used on a cash flow before taxes or a cash flow after taxes basis or on a net profit basis, and the results for the two projects will be compared.
The analysis, however, should be made on an after tax basis (CFAT) whenever possible as these
are the preferred results. Some investors may prefer the profit basis rather than cash flows, but
cash flows are typically utilized. In addition, since companies use straight line depreciation in-
stead of accelerated depreciation for the stockholders, the stockholders may want to examine
the depreciation used for the government taxes for evaluating the actual profits rather than the
reported “fake news” profits.
There are two projects under consideration by the Jen-Nat company—one project is to
invest $50,000 in an additive manufacturing machine to cut tooling costs and lead times for
production. The second project is to invest $50,000 for an improved computer security system
to prevent hacking. Only one of the projects can be approved for implementation. The required
return is 15% and loans are required for both projects.
The additive manufacturing project (referred to as Project A) will allow for the creation
of tooling much faster and can be depreciated at the three-year class level. It will require improved computer skills by the engineering staff to fully benefit from the use of the machine and thus will have higher expenses, but the more rapid tooling will generate more revenue. The life of the project would be 5 years, and a loan for $20,000 at 10% interest for a 3-year term is required. MACRS depreciation would be for a 3-year property. The data for the project are presented in Table 12.1.
The improved security system to prevent cybersecurity attacks (referred to as Project B)
would provide more security as well as improve computer services. The life of the project is for
6 years and MACRS depreciation would be for a 5-year property. A loan for $20,000 at 10%
interest for 4 years can be obtained as the project life is 6 years. The data for the security system project are presented in Table 12.2.
The data in Tables 12.1 and 12.2 includes the loans, the loan interest, the taxable income,
the taxes paid, the total cash flows after taxes, the net profits, the total net profits, the discounted
cash flows after taxes, and the discounted net profits. The MARR used is 15% for discounting
the cash flows and the net profits. The total net profits and the total cash flows after taxes are
equal even though the values in the individual periods are quite different when no discount rate
is used. This is a good check for your calculations. However, the total discounted net profits and
the total discounted cash flows after taxes are not equal. This is because the amounts of cash
flows and profits are frequently different in the same time period. The difference in amounts per period results in different total amounts. Thus, since the discounted profits and discounted cash flows are different, one must make a choice in selecting a criterion for the evaluation of a project. If there is a salvage value, it can be treated as a revenue in the last period when the equipment is
sold. The various techniques for the evaluation of the projects will be examined using the data
of the two example projects.
12.2 PAYBACK PERIOD
The payback period is the year when the cumulative cash flows become positive. This technique is used for small investments at lower levels of management where decisions are made quickly and when the payback period typically is less than 3 years. It may also have a total initial funding limit, such as $10,000 or $100,000, depending upon the size of the company's initial funding limits. This is not used when the funding is in the $1,000,000 range or higher, as the payback period is not the critical issue in large projects.
12.2.1 TRADITIONAL PAYBACK PERIOD
The payback period occurs in the year when the cumulative cash flows after taxes become pos-
itive. The limiting payback period may be as short as 1 year, and more frequently is considered
Table 12.1: Cash flows of additive manufacturing project (Project A)

t | R(t) | E(t) | CFBT(t) | LCF(t) | CFAL(t) | LI(t) | DR(t)% | D(t) | TI(t) | TP(t) | CFAT(t) | Cum. CFAT | NP(t) | Cum. NP | Disc. NP | Disc. CFAT | Cum. Disc. CFAT | Cum. Disc. NP
0 | 0 | 50,000 | -50,000 | 20,000 | -30,000 | 0 | 0 | 0 | 0 | 0 | -30,000 | -30,000 | 0 | 0 | 0 | -30,000 | -30,000 | 0
1 | 50,000 | 30,000 | 20,000 | -8,042 | 11,958 | 2,000 | 33.33 | 16,665 | 1,335 | 534 | 11,424 | -18,576 | 801 | 801 | 697 | 9,934 | -20,066 | 697
2 | 50,000 | 30,000 | 20,000 | -8,042 | 11,958 | 1,396 | 44.45 | 22,225 | -3,621 | -1,448 | 13,406 | -5,170 | -2,172 | -1,371 | -1,643 | 10,137 | -9,929 | -946
3 | 50,000 | 30,000 | 20,000 | -8,042 | 11,958 | 731 | 14.81 | 7,405 | 11,864 | 4,746 | 7,212 | 2,042 | 7,118 | 5,747 | 4,680 | 4,742 | -5,187 | 3,734
4 | 50,000 | 30,000 | 20,000 | 0 | 20,000 | 0 | 7.41 | 3,705 | 16,295 | 6,518 | 13,482 | 15,524 | 9,777 | 15,524 | 5,590 | 7,708 | 2,521 | 9,324
5 | 50,000 | 30,000 | 20,000 | 0 | 20,000 | 0 | 0.00 | 0 | 20,000 | 8,000 | 12,000 | 27,524 | 12,000 | 27,524 | 5,966 | 5,966 | 8,487 | 15,290
Totals | 250,000 | 200,000 | 50,000 | -4,127 | 45,873 | 4,127 | 100 | 50,000 | 45,873 | 18,349 | 27,524 | | 27,524 | | 15,290 | 8,487 | |

Table 12.2: Cash flows of computer security project (Project B)

t | R(t) | E(t) | CFBT(t) | LCF(t) | CFAL(t) | LI(t) | DR(t)% | D(t) | TI(t) | TP(t) | CFAT(t) | Cum. CFAT | NP(t) | Cum. NP | Disc. NP | Disc. CFAT | Cum. Disc. CFAT | Cum. Disc. NP
0 | 0 | 50,000 | -50,000 | 25,000 | -25,000 | 0 | 0 | 0 | 0 | 0 | -25,000 | -25,000 | 0 | 0 | 0 | -25,000 | -25,000 | 0
1 | 30,000 | 12,000 | 18,000 | -7,887 | 10,113 | 2,500 | 20.00 | 10,000 | 5,500 | 2,200 | 7,913 | -17,087 | 3,300 | 3,300 | 2,870 | 6,881 | -18,119 | 2,870
2 | 30,000 | 12,000 | 18,000 | -7,887 | 10,113 | 1,961 | 32.00 | 16,000 | 39 | 15 | 10,098 | -6,989 | 23 | 3,323 | 18 | 7,635 | -10,484 | 2,887
3 | 30,000 | 12,000 | 18,000 | -7,887 | 10,113 | 1,369 | 19.20 | 9,600 | 7,031 | 2,812 | 7,301 | 312 | 4,219 | 7,542 | 2,774 | 4,800 | -5,683 | 5,661
4 | 30,000 | 12,000 | 18,000 | -7,887 | 10,113 | 717 | 11.52 | 5,760 | 11,523 | 4,609 | 5,504 | 5,816 | 6,914 | 14,456 | 3,953 | 3,147 | -2,536 | 9,614
5 | 30,000 | 12,000 | 18,000 | 0 | 18,000 | 0 | 11.52 | 5,760 | 12,240 | 4,896 | 13,104 | 18,920 | 7,344 | 21,800 | 3,651 | 6,515 | 3,979 | 13,265
6 | 30,000 | 12,000 | 18,000 | 0 | 18,000 | 0 | 5.76 | 2,880 | 15,120 | 6,048 | 11,952 | 30,872 | 9,072 | 30,872 | 3,922 | 5,167 | 9,146 | 17,187
Totals | 180,000 | 122,000 | 58,000 | -6,547 | 51,453 | 6,547 | 100.00 | 50,000 | 51,453 | 20,581 | 30,872 | | 30,872 | | 17,187 | 9,146 | |
to be 2 or 3 years. Longer periods are generally not advised as the time value of money is usu-
ally not considered. Comparing the two alternatives from the data in Tables 12.1 and 12.2, the
traditional payback periods are presented in Table 12.3.
Table 12.3: Traditional payback period

Payback Method | Project A (years) | Project B (years)
CFAT | 3 years (2.72 years) | 3 years (2.96 years)
Project Life | 5 years | 6 years
The payback values as integers assume the year in which the first positive value occurs. The periods in parentheses assume that cash flows are continuous throughout the year and, for Project A, would be calculated using the last negative and first positive values as:
Payback Period = year of last negative + (-CFAT(negative))/((-CFAT(negative)) + CFAT(positive))    (12.1)

Project A Payback = 2 + (-(-5,170))/((-(-5,170)) + 2,042)
                  = 2 + 5,170/(5,170 + 2,042)
                  = 2 + 0.72
                  = 2.72 years.
Although the payback is frequently calculated in fractional years, the basis of cash flow analysis is that payments occur at the end of the period and are considered discrete, not continuous. The payback period should therefore be considered as three years in both cases, and another criterion should be used. If only one alternative is being considered, then if the payback period is less than or equal to the specified limit it should be approved, and if it is more than the specified limit it should be rejected.
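A short sketch of both payback calculations, using the cumulative CFAT values of Project A from Table 12.1 (the function assumes the year-0 value is negative; names are illustrative):

```python
def payback(cumulative_cfat):
    """Traditional payback year and the interpolated value of Eq. (12.1).
    Assumes the list starts at year 0 with a negative value."""
    for year, value in enumerate(cumulative_cfat):
        if value >= 0:
            prev = cumulative_cfat[year - 1]
            return year, (year - 1) + (-prev) / (-prev + value)
    return None, None

cum_cfat_a = [-30_000, -18_576, -5_170, 2_042, 15_524, 27_524]  # Project A, Table 12.1
print(payback(cum_cfat_a))    # (3, 2.72): 3 years discrete, 2.72 years interpolated
```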
Some problems with the payback period analysis are as follows.
1. No consideration is given to the benefits after the payback period.
2. When comparing two investments, if the alternatives have different project life periods, the
payback periods would be expected to be different. For this example, the projects would be
repeated—6 times for Project A and 5 times for Project B to have the same life of 30 years.
3. The magnitude of the cumulative cash flows makes no difference, only the payback year—
unless one is considering uniform continuous payments throughout the year.
Since Project A has a shorter payback period when uniform continuous payments throughout the year are assumed, it would be the preferred project only if uniform continuous payment considerations are
accepted. Otherwise, Project A and Project B have equal payback periods, and another project evaluation technique, such as return on investment, should be considered to differentiate between the projects if both cannot be approved, as both meet the three-year limit.
Note that discounted net profits would be a poor indicator of project performance as with
the accelerated depreciation schemes, the 2nd year usually has much lower profits (and can be
negative) than the 1st year which tends to be positive. This is another reason why cash flows are
a better measure for project performance than profits.
12.2.2 DISCOUNTED PAYBACK PERIOD
A more realistic payback period can be determined by discounting the future cash flows and this
is the basis of the Discounted Payback Method. Different paybacks consider the cumulative cash
flows of the project until a positive cash flow results. The paybacks of interest are the cumulative
discounted cash flows before taxes, the cumulative discounted cash flows after loan, and the
cumulative discounted cash flows after taxes. These discounted cash flows are presented in
Table 12.4 using a discount rate of 15% for Project A.
Table 12.4: Project A payback periods by cumulative discounted cash flows for CFBT, CFAL, and CFAT

Year | CFBT | Disc. CFBT | Cum. Disc. CFBT | CFAL | Disc. CFAL | Cum. Disc. CFAL | CFAT | Disc. CFAT | Cum. Disc. CFAT
0 | -50,000 | -50,000 | -50,000 | -30,000 | -30,000 | -30,000 | -30,000 | -30,000 | -30,000
1 | 20,000 | 17,391 | -32,609 | 11,958 | 10,398 | -19,602 | 11,424 | 9,934 | -20,066
2 | 20,000 | 15,123 | -17,486 | 11,958 | 9,042 | -10,560 | 13,406 | 10,137 | -9,929
3 | 20,000 | 13,150 | -4,336 | 11,958 | 7,862 | -2,698 | 7,212 | 4,742 | -5,187
4 | 20,000 | 11,435 | +7,099 | 20,000 | 11,435 | +8,737 | 13,482 | 7,708 | +2,521
5 | 20,000 | 9,943 | +17,042 | 20,000 | 9,944 | +18,681 | 12,000 | 5,966 | +8,487
Total | 50,000 | 17,042 | | 45,973 | 18,681 | | 27,524 | 8,487 |
Note that the discounted payback period of 4 years is the same for all 3 methods as all
of them first become positive during the 4th year. The discounted payback period has the same
problems as the traditional payback period. The discounted payback period typically increases
the payback period when uniform continuous cash flows are assumed. When end-of-period
cash flows are assumed, it may increase the payback period by one year or more over that of
the traditional payback period. This is expected as the future cash flows are mainly positive and
are being reduced by the discounting, so the payback period would tend to increase. The best
evaluation technique for determining the payback period is the cumulative discounted cash flows
after taxes.
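The discounted payback can be computed the same way after discounting each year's CFAT, as in the sketch below (end-of-year flows assumed; 15% discount rate for Project A):

```python
def discounted_payback(cfat_by_year, rate):
    """Year in which the cumulative discounted CFAT first becomes non-negative
    (end-of-year flows; year 0 is not discounted)."""
    cumulative = 0.0
    for year, flow in enumerate(cfat_by_year):
        cumulative += flow / (1 + rate) ** year
        if cumulative >= 0:
            return year
    return None

cfat_a = [-30_000, 11_424, 13_406, 7_212, 13_482, 12_000]   # Project A CFAT, Table 12.1
print(discounted_payback(cfat_a, 0.15))   # 4, matching Table 12.4
```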
12.3 TIME VALUE OF MONEY ANALYSIS FOR PROJECT
PROFIT EVALUATION
The time value of money approaches are probably the most utilized approaches for project analysis. The present worth, future worth, and average annual worth of the profits and cash flows, such as the CFAL, CFBT, and CFAT, are commonly used. The discount factor is (1 + i)^(-n), which is used to discount each of the n periods back to time zero to calculate the present worth, which can then easily be converted to a future worth or average annual worth.
12.3.1 PRESENT WORTH ANALYSIS OF PROFITS
The return on investment methods presented later in this chapter use the profits rather than the cash flows. The present worth method discounts all the payments back to time zero and is the most commonly used method to evaluate the various cash flows or the profits. For profits and cash flows, one generally wants to maximize the value. The discount factor is (1 + i)^(-n), where n is the number of periods the specific amount is to be discounted and i is the discount rate. The present worth of the discounted net profits for Project A can be calculated using the data from Table 12.4, and the results are presented in Table 12.5.
The calculation procedure will be illustrated for calculating the present worth of the profits:
PW(NP - i%) = NP(0) + NP(1)/(1 + i/100)^1 + NP(2)/(1 + i/100)^2 + NP(3)/(1 + i/100)^3 + ... + NP(n)/(1 + i/100)^n    (12.2)

PW(NP - 15%) = 0 + 801/(1.15) - 2,172/(1.15)^2 + 7,118/(1.15)^3 + 9,777/(1.15)^4 + 12,000/(1.15)^5
             = 0 + 697 - 1,643 + 4,680 + 5,590 + 5,966
             = +15,290

(if > 0.0, accept if a single alternative or if it is the greatest of all the alternatives considered).
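Eq. (12.2) is a straightforward summation; a minimal sketch using the Project A net profits from Table 12.1 is:

```python
def present_worth(amounts, rate):
    """Present worth of end-of-year amounts; index 0 is time zero (Eq. 12.2)."""
    return sum(a / (1 + rate) ** t for t, a in enumerate(amounts))

np_a = [0, 801, -2_172, 7_118, 9_777, 12_000]     # Project A net profits
print(round(present_worth(np_a, 0.15)))           # about 15,290
```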
The present worth approach has another major advantage in that it will give identical values when using actual currency or constant currency. One does not need to convert the dollars to constant dollars or use the inflation-free interest rate for the base calculations. However, to obtain future worth or average annual equivalents, the constant dollars and inflation-free interest rates must be used. The present worth analysis is the most commonly used method for project
evaluations. Also note that the discount rate has a major impact upon the profits, as shown by the reduction of the undiscounted profits of $27,524 to the discounted profits of $15,290.

Table 12.5: Project A discounted profits and book value data for determining present worth and ROI

Year | Net Profits NP | Discounted NP (MARR = 15%) | Cumulative Discounted NP | Depreciation of Investment | End-of-Year Book Value | Average Mid-Year Book Value**
0 | 0 | 0 | 0 | 0 | 50,000 | 50,000
1 | 801 | 697 | 697 | 16,665 | 33,335 | 41,667.5
2 | -2,172 | -1,643 | -946 | 22,225 | 11,110 | 22,222.5
3 | 7,118 | 4,680 | 3,734 | 7,405 | 3,705 | 7,407.5
4 | 9,777 | 5,590 | 9,324 | 3,705 | 0 | 1,852.5
5 | 12,000 | 5,966 | 15,290 | 0 | 0 | 0
Totals | 27,524 | 15,290 | | 50,000 | 98,150 | 123,150

Note: The book value totals include the initial investment (50,000), which is used to calculate the end-of-year book values.
** Mid-Year Book Value = (Previous year book value + Current year book value)/2; the initial book value is included.
12.3.2 FUTURE WORTH AND AVERAGE ANNUAL WORTH OF PROFITS
The results of the calculations for determining the present worth of the various cash flow
expressions—CFBT, CFAL, and CFAT—are in Table 12.4. The present worth values can easily
be converted to Future Worth values or Average Annual values, which is important for comparing projects that may have different project evaluation periods, commonly called the project life:
FW = F = P × (F/P, i, n) = P × (1 + i/100)^n
   = 15,290 × (1.15)^5
   = $30,753,    (12.3)

where

FW = F = future worth
PW = P = present worth.
YearNet Profi ts NPDiscounted Net Profi ts (MARR=15%)CumulativeDiscountedNet Profi tsDepreciationof InvestmentBook Value(end-of-year)of InvestmentAverage Mid-Year BookValue of InvestmentDuring the Year**0000050,00050.000180169769716,66533,33541,667.52-2,172-1,643-94622,22511,11022,222.537,1184,6803,7347,4053,7057,407.549,7775,5909,3243,70501,852.5512.0005,96615,290000Totals27,52415,29050,00098,150123,150Include initial investment (50,000) which is used to calculate end-of-year book values** Mid-Year Book Value = (Previous year book Value + Current year book Value)/2 and Initial Book Value188
12. BASIC PROJECT EVALUATION TECHNIQUES
The future worth increases the values and those who are impressed by large numbers like
to utilize this method. It is used to indicate future sums such as expected retirement incomes or
pension values in the future.
The average annual value or average annual equivalent, A, is determined from:
AW = A = P × (A/P, i, n) = P × [(i/100)(1 + i/100)^n]/[(1 + i/100)^n - 1]
   = 15,290 × [(0.15)(1.15)^5]/[(1.15)^5 - 1]
   = $4,561,    (12.4)

where

AW = A = Average Annual Worth = Average Annual Equivalent
PW = P = Present Worth.
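Since the future worth and average annual worth are simple transformations of the present worth, a small sketch of Eqs. (12.3) and (12.4) applied to the $15,290 present worth is:

```python
def future_worth(pw, i, n):
    """Eq. (12.3): FW = PW(F/P, i, n)."""
    return pw * (1 + i) ** n

def annual_worth(pw, i, n):
    """Eq. (12.4): AW = PW(A/P, i, n)."""
    return pw * i * (1 + i) ** n / ((1 + i) ** n - 1)

pw = 15_290                                   # PW of Project A profits at 15%
print(round(future_worth(pw, 0.15, 5)))       # about 30,754 (the text rounds to $30,753)
print(round(annual_worth(pw, 0.15, 5)))       # about 4,561
```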
The average annual equivalent values can be compared when the projects have different
project life values and this is the best method for making decisions under those circumstances.
If one uses the present worth analysis, one must have the same life and that results in multiple
projects. For example, if Project 1 has a life of 4 years and the competing Project 2 has a life of
5 years, then a total life of 20 years would be needed. This results in repeating Project 1 five times
and Project 2 four times to have an equivalent study period of 20 years. One of the evaluative
questions requests the calculation of the average annual equivalent for Project B and compare it
with Project A.
12.4 RETURN ON ORIGINAL INVESTMENT (ROI)
The return on original investment (often called return on investment—ROI) can be considered on an undiscounted (not discounted) or on a discounted basis. The return on investment is the average yearly profit divided by the initial fixed investment. This gives the average of the yearly ROI values. The data being applied are from Table 12.5.
12.4.1 ROI – NOT DISCOUNTED
The basic form does not include discounting the profits. The formula used to obtain the values
on a percentage basis is:
ROI = (Average Yearly Profit/Original Fixed Investment) × 100.    (12.5)

For Project A the ROI would be:

ROI = {[(801 - 2,172 + 7,118 + 9,777 + 12,000)/5]/50,000} × 100
    = {[27,524/5]/50,000} × 100
    = 11.01%.
12.4.2 ROI – DISCOUNTED (ROI-D)
The total discounted profits were calculated previously as $15,290 in Table 12.5 and thus the
discounted ROI with the MARR of 15% would be:
ROI-D = (Average Yearly Discounted Profit/Original Fixed Investment) × 100
      = ((Total Discounted Profits/5)/Original Fixed Investment) × 100    (12.6)

ROI-D = {[15,290/5]/50,000} × 100 = 6.12%.
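A sketch of Eqs. (12.5) and (12.6) using the Project A profits (end-of-year discounting is assumed here, matching Table 12.5; function names are illustrative):

```python
def roi(profits, investment):
    """Eq. (12.5): average yearly profit over the original investment, in percent."""
    return 100 * (sum(profits) / len(profits)) / investment

def roi_discounted(profits, investment, rate):
    """Eq. (12.6): average discounted yearly profit over the original investment."""
    discounted = [p / (1 + rate) ** (t + 1) for t, p in enumerate(profits)]
    return 100 * (sum(discounted) / len(discounted)) / investment

profits_a = [801, -2_172, 7_118, 9_777, 12_000]            # Project A, years 1-5
print(round(roi(profits_a, 50_000), 2))                    # 11.01
print(round(roi_discounted(profits_a, 50_000, 0.15), 2))   # about 6.12
```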
The ROI not-discounted percentage value is almost twice the percentage of the discounted
ROI-D, and thus it is often preferred in the selling of the project. The annual worth of discounted
profits with beginning-of-period payments is consistent in the timing of the payments and the
investment. The discounted return is lower, but it is the return above the MARR value.
12.4.3 ROI ANNUAL WORTH – AW (ROI)
A new measure is the average annual worth of the profits with respect to the original investment, which can be used when the investment lives are different or as another general approach to the ROI method. Since the annual worth is discounted, this represents the average return per year on the original investment.
In this case:
AW (ROI) = AW/I = 4,561/50,000 = 0.0912 = 9.12%.    (12.7)
12.4.4 ROI ANNUAL WORTH (BASE TIME) – AW-B (ROI)
Since the annual worth payments are end-of-period amounts and the investment occurs at time zero, the annual worth payments can be converted to beginning-of-period payments. This puts both the annual worth payments and the original investment at the same time (base time) for evaluating the return. Thus, for evaluating the return at the beginning of each period, the annual worth payment is moved to the beginning of the period:
AW-b (ROI) = [AW/(1 + i)]/I = [4,561/(1.15)]/50,000 = 0.0793 = 7.93%.    (12.8)
This rate [AW-b (ROI)] will be higher than the discounted ROI. All payments in the discounted ROI are taken at time zero, whereas the AW-b (ROI) payments are considered yearly in the future at the beginning of each year.
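The two annual-worth ROI measures are one-line calculations; a brief sketch of Eqs. (12.7) and (12.8):

```python
# Eqs. (12.7) and (12.8): annual-worth returns on the original investment.
aw, i, investment = 4_561, 0.15, 50_000
aw_roi = aw / investment                        # Eq. (12.7): 0.0912 -> 9.12%
aw_b_roi = (aw / (1 + i)) / investment          # Eq. (12.8): 0.0793 -> 7.93%
print(round(100 * aw_roi, 2), round(100 * aw_b_roi, 2))
```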
12.5 RETURN ON AVERAGE INVESTMENT (RAI)
Since the original fixed investment is being depreciated over the investment period, the average investment over the investment life is thought to give a more reasonable representation of the investment amount. Both the actual investments and the discounted investments can be utilized in the calculations, and each will be presented separately using the data in Table 12.5. The average investment is usually based on the end-of-year book value, but it can also be based on the average (mid-year) book value of the investment. Both calculations will be presented, with the end-of-year book value first, followed by the mid-year book value.
12.5.1 RAI – NOT DISCOUNTED
The return on average investment (RAI) is the percentage relationship of the average annual profit to the average outstanding investment. The general formula used is:
RAI = (Average Yearly Profit/Average Outstanding Fixed Investment) × 100,    (12.9)

where

RAI = Return on Average Investment using End-of-Year Book Values
Average Yearly Profit = Total Net Profit/Total Project Life
Average Outstanding Fixed Investment = Total of End-of-Year Book Values/Total Project Life.
Therefore, using the values in Table 12.5:
RAI = (27,524/5)/(98,150/5) × 100 = 28.04%.
This is a rather large return compared to the ROI and is not representative of most opera-
tions. The sum of the end-of-year book values is less than the total investment, so an extremely large value is obtained. To obtain a better representation of the book value during the year, the
mid-year book values are used which can be obtained by:
Mid-Year Book Value = (Previous year book value + Current year book value)/2.    (12.10)
If the average mid-year book value is utilized, then the RAI values are somewhat more
reasonable, but are still high. The revised form of Equation (12.9) is:
RAI-M = (Average Yearly Profit/Average Mid-Year Book Value of Fixed Investment) × 100,    (12.11)

where

RAI-M = Return on Average Investment using Mid-Year Book Values
Average Yearly Profit = Total Net Profits/Total Project Life
Average Mid-Year Book Value of Fixed Investment = Total of Average Mid-Year Book Values of Investment/Total Project Life.
Therefore, using the values of Table 12.5:
RAI-M = (27,524/5)/(123,150/5) × 100 = 22.35%.
The 22.35% is still a high return on investment, but that is expected when the investment
portion of the calculation is reduced. The next consideration will be to consider the reduction of
the profits by discounting.
12.5.2 RAI – DISCOUNTED (RAI-D)
The discounted return on average investment (RAI-Discounted) is the percentage relationship of the average annual discounted profit to the average outstanding investment. The general formula used is:
RAI-D = (Average Yearly Discounted Profit/Average Outstanding Fixed Investment) × 100,    (12.12)

where

RAI-D = Return on Average Investment using Discounted Profits
Average Yearly Discounted Profit = Total Net Discounted Profit/Total Project Life
Average Outstanding Fixed Investment = Total of End-of-Year Book Values/Total Project Life.
Therefore, using the values in Table 12.5:
RAI-D = (15,290/5)/(98,150/5) × 100 = 15.58%.
The discounted profits are the present worth of the profits and the RAI-D is much lower
than the undiscounted RAI.
If the average mid-year book value is utilized, then the RAI-D values are somewhat more reasonable. The revised form of Equation (12.12) is:
RAI-DM = (Average Yearly Discounted Profit/Average Mid-Year Book Value of Fixed Investment) × 100,    (12.13)

where

RAI-DM = Return on Average Investment using Discounted Profits and Mid-Year Book Values
Average Yearly Discounted Profit = Total Net Discounted Profits/Total Project Life
Average Mid-Year Book Value of Fixed Investment = Total of Average Mid-Year Book Values of Investment/Total Project Life.
Therefore, using the values of Table 12.5:
RAI-DM = (15,290/5)/(123,150/5) × 100 = 12.42%.
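The four RAI variants differ only in which profit series and which book-value series are averaged. The sketch below reproduces the values above from the Table 12.5 data; following the text, the averages divide the totals by the 5-year project life and include the year-0 book value.

```python
# RAI, RAI-M, RAI-D, and RAI-DM for Project A from the Table 12.5 data.
life = 5
profits      = [801, -2_172, 7_118, 9_777, 12_000]       # net profits, years 1-5
disc_profits = [697, -1_643, 4_680, 5_590, 5_966]        # discounted at MARR = 15%
eoy_book     = [50_000, 33_335, 11_110, 3_705, 0, 0]     # end-of-year book values, years 0-5
mid_book     = [(a + b) / 2 for a, b in zip(eoy_book, eoy_book[1:])]   # Eq. (12.10)

avg_profit      = sum(profits) / life                     # 27,524/5
avg_disc_profit = sum(disc_profits) / life                # 15,290/5
avg_eoy_book    = sum(eoy_book) / life                    # 98,150/5 (year-0 value included)
avg_mid_book    = (eoy_book[0] + sum(mid_book)) / life    # 123,150/5

print(round(100 * avg_profit / avg_eoy_book, 2))          # RAI    about 28.04
print(round(100 * avg_profit / avg_mid_book, 2))          # RAI-M  about 22.35
print(round(100 * avg_disc_profit / avg_eoy_book, 2))     # RAI-D  about 15.58
print(round(100 * avg_disc_profit / avg_mid_book, 2))     # RAI-DM about 12.42
```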
In the selection of a return on investment method, it is important to select one method and
use it consistently. In selecting a method, it should be one that tends to best match your actual
returns on investment. The study period should be over the economic life of the investment
or when the book value first reaches zero or the expected salvage value. In general, the ROI
values seem more reasonable as the book values decline more rapidly than the actual value of
the equipment. Note that automobiles are fully depreciated in 5 years, but there are millions of
cars on the road that are older than 5 years. However, the operating expenses of the automobile
increase as they age.
12.6 SUMMARY
Several alternatives have been presented for evaluating projects. The best method presented thus
far is probably the present worth method when projects have equivalent project lives, but the
average annual worth is best when projects have different lives or when considering ROI analysis.
The present worth method also has the advantage that it gives the same results whether constant dollars (inflation-free dollars) with inflation-free interest rates are used or actual dollars with market rates including inflation are used.
The payback period method is based on the fact that earlier payback periods are usually
better than later payback periods and would tend to reduce the risk of the project failing. This
will also be shown by the project balance method illustrated in Chapter 13.
The average annual worth method is the best when comparing projects which have dif-
ferent project study periods. The present worth method would be used to calculate the values
which would be converted to average annual worth values for comparison in the selection of the best alternative. The return on original investment is used frequently as it does not depend on the discount rate of the returns or on book values, and reasonable values are obtained.
The rate of return methods provide alternative methods for project evaluation that can
supplement the present worth overall approach. The present worth method has a pre-selected
return for evaluation and does not determine the return automatically and all projects in compa-
nies do not have the same required return level. Projects critical to the survival of the enterprise
may have lower return requirements than non-critical projects.
The ROI is a more widely used performance measure than the RAI methods, and the ROI-D better indicates whether the required return is made on the original investment. The AW (ROI) method and the beginning-of-period method, AW-b (ROI), appear best for evaluating ROI investments when the alternatives have different study periods. The AW-b (ROI) has both the AW payments and the investment evaluated at the same time (beginning-of-period).
12.7 REFERENCES
[1] Creese, Robert C. and Adithan, M., Strategic Cost Analysis for Project Managers and Engineers, New Academic Science Limited, Tunbridge Wells, UK, pp. 83–108, 2012.
[2] Gelhausen, Marvin, Managing Editor, Certification Study Guide, 2nd ed., revised, AACE International, Inc., Morgantown, WV, pp. 17-1-6, 2003.
[3] Heinze, Kurt, Cost Analysis of Capital Projects, Marcel Dekker, Inc., New York, pp. 115–119, 1996.
[4] Humphreys, Kenneth K., Jelen's Cost and Optimization Engineering, McGraw-Hill, Inc., New York, pp. 103–110, 1991.
12.8 EVALUATIVE QUESTIONS
1. Prepare an equivalent Table 12.4 for Project B Payback Periods by Cumulative Discounted
Cash Flows for CFBT, CFAL, and CFAT and determine the Project B Payback Periods
by Cumulative Discounted Cash Flows for the CFBT, CFAL, and CFAT approaches. Are
they different?
2. Determine the future worth value and average annual equivalents for Project B. Compare
the values with Project A to make a selection.
3. Determine the end-of-year book values for Project A and Project B for each year over the
life of the project.
4. Prepare a table similar to that of Table 12.5 for Project B. Then determine the discounted
and non-discounted values of ROI and RAI; that is:
(a) ROI
(b) ROI-D
(c) RAI
(d) RAI-M
(e) RAI-D
(f ) RAI-DM
(g) AW (ROI)
(h) AW-b (ROI)
Include the initial investment (50,000), which is used to calculate the end-of-year book values.
** Mid-Year Book Value = (Previous year book value + Current year book value)/2; the initial book value is included.
5. Solve Project A using the MACRS-ADS straight line depreciation method (same number
of years) and compare the results with the MACRS-GDS solution. Compare the payback periods and the PW values of the profits and cash flows at 0% and 15% return.
6. Solve Project A using Section 179 depreciation method and compare the results with the
MACRS solution. Compare the payback periods and the PW values of the profits and cash flows in the 1st year and for the total project duration. Determine the various discounted
and non-discounted values of ROI and RAI for the two methods; that is:
(a) ROI
(b) ROI-D
(c) RAI
(d) RAI-M
(e) RAI-D
(f ) RAI-DM
(g) AW (ROI)
(h) AW-b (ROI)
7. An investment of $80,000 is made for a 3D-printing machine. It is expected to generate an
annual revenue of $40,000 with annual expenses of $15,000. The project life is 8 years and
the equipment has a class life of 5 years and MACRS depreciation is used. The 3D-printing
machine will have a salvage value estimated at $5,000 when the project is complete at the
end of the 8th year. The income tax rate is 35%, the required return is 10%, and assume the capital gains tax is the same as the income tax. A loan for $20,000 is needed and the interest rate
is 15%.
Answer the following questions using the data from Table 12.6.
(a)
(i) Determine the Payback Period in years.
(ii) Determine the Discounted Payback Period in years.
(b) Determine the Present Worth of the project CFAT.
(c) Determine the Average Annual Worth of the Project CFAT.
(d) Determine the Returns on Investment—ROI and ROI-D, AW (ROI), and AW-b
(ROI).
(e) Determine the Returns on Average Investment—RAI, RAI-D, RAI-M, and RAI-
DM.
Table 12.6: Data for Problem 7

Year | R | INV | E | CFBT | Loan CF | CFAL | Loan Int. | Depr. % | Depr. | TI | TP | NP | CFAT | Cum. CFAT | Disc. CFAT | Cum. Disc. CFAT
0 | 0 | 80,000 | 80,000 | -80,000 | 20,000 | -60,000 | 0 | 0 | 0 | 0 | 0 | 0 | -60,000 | -60,000 | (60,000) | (60,000)
1 | 40,000 | | 15,000 | 25,000 | -8,760 | 16,240 | 3,000 | 20 | 16,000 | 6,000 | 2,100 | 3,900 | 14,140 | -45,860 | 12,855 | (47,145)
2 | 40,000 | | 15,000 | 25,000 | -8,760 | 16,240 | 2,136 | 32 | 25,600 | -2,736 | -958 | -1,778 | 17,198 | -28,661 | 14,213 | (32,932)
3 | 40,000 | | 15,000 | 25,000 | -8,760 | 16,240 | 1,143 | 19.2 | 15,360 | 8,497 | 2,974 | 5,523 | 13,266 | -15,395 | 9,967 | (22,965)
4 | 40,000 | | 15,000 | 25,000 | | 25,000 | | 11.52 | 9,216 | 15,784 | 5,524 | 10,260 | 19,476 | 4,080 | 13,302 | (9,662)
5 | 40,000 | | 15,000 | 25,000 | | 25,000 | | 11.52 | 9,216 | 15,784 | 5,524 | 10,260 | 19,476 | 23,556 | 12,093 | 2,430
6 | 40,000 | | 15,000 | 25,000 | | 25,000 | | 5.76 | 4,608 | 20,392 | 7,137 | 13,255 | 17,863 | 41,419 | 10,083 | 12,513
7 | 40,000 | | 15,000 | 25,000 | | 25,000 | | 0 | 0 | 25,000 | 8,750 | 16,250 | 16,250 | 57,669 | 8,339 | 20,852
8 | 40,000 | | 15,000 | 25,000 | | 25,000 | | 0 | 0 | 25,000 | 8,750 | 16,250 | 16,250 | 73,919 | 7,581 | 28,433
9 | 5,000 | | 0 | 5,000 | | 5,000 | | 0 | 0 | 5,000 | 1,750 | 3,250 | 3,250 | 77,169 | 1,516 | 29,949
Total | 325,000 | | 200,000 | 125,000 | (6,279) | 118,721 | 6,279 | 100 | 80,000 | 118,721 | 41,552 | 77,169 | 77,169 | | 29,949 |

CHAPTER 13
Advanced Project Evaluation
Techniques
13.1 INTRODUCTION
The project evaluation techniques are mainly used to evaluate single projects on an accept-reject basis and are difficult to use for selecting the best of several projects when investment funds are limited. The techniques are the internal rate of return, the modified internal rate of return, bene-
fit/cost ratio analysis, and project balance. These comparisons can be added to your spreadsheets
in evaluating projects. General references [1–3] for this section have guided the arrangement of
these topics.
The Internal Rate of Return (IRR) method determines the actual rate of return of the
project and one can select the project with the highest rate of return. It was difficult to calculate
the IRR before computers as it required several trial calculations, but with computers repeated calculations are performed very rapidly and this technique is now more frequently applied.
The Modified Internal Rate of Return (MIRR) method, also referred to as the External
Rate of Return (ERR) method, was used to closely approximate the IRR and was much easier
to calculate as it could be performed in a single calculation. It utilizes the future worth of the
benefits divided by the present worth of the costs to determine the MIRR, but the ease of now calculating the IRR has reduced the importance of the MIRR.
The Benefit/Cost (B/C) ratio method involves calculations similar to the present worth
method, but often involves other factors such as safety issues, environmental issues, and eco-
nomic development factors which often are not considered in traditional present worth analysis.
It is, however, another view of the present worth method, rearranging its components to
measure the ratio of the benefits to the costs.
The Project Balance (PB) method considers the value of the cash flow by escalating the
initial cash flow (which is usually negative) through each period and adding the cash flow earned
during the period at the end of the period. The final project balance at the end of the study
period should be positive for the project to be acceptable, and the area of the negative balances
compared to the area of the positive balances gives an indication of the risk of the project. Projects
with no positive areas or small positive areas would be considered risky projects. This is a more
conservative approach than the other evaluation methods.
13.2 INTERNAL RATE OF RETURN (IRR)
The IRR is compared to the firm's MARR, and if the IRR is greater than the MARR, the project
is acceptable. One advantage is that the IRR permits comparison of projects that have different
project lives and/or different investment amounts. The IRR method calculates the specific rate of
return that makes the present worth of the project cash flows equal to zero. Typically, the present
worth of the cash flows is positive at a zero rate of return and is reduced as the rate of return
increases; the rate at which it reaches zero is the rate of return the project will earn.
The example Project A from Chapter 12 will be used as an example for calculations, and
the data is repeated in Table 13.1. The additive manufacturing project will allow the creation of
tooling much faster and can be depreciated at the three-year level. The investment is $50,000 and
the required return is 15%. It will require improved computer skills by the engineering staff to fully
benefit from the use of the machine and thus will have higher expenses, but the more rapid tooling
will generate more revenue. The life of the project would be 5 years and a loan for $20,000 at
10% interest is needed to assist in the initial financing.
The present worth of the project cash flows with the 15% required return is $8,487.
The return rate is increased until the present worth of the project cash flows becomes zero,
and the values are listed in Table 13.2. These calculations are now performed easily with spreadsheets;
the IRR is 26.4%, as illustrated in Figure 13.1. This is the anticipated return that the
project will bring. Thus, by comparing the IRRs, the project with the highest IRR would be
preferred, and the effects of project life and investment are considered. Discounting of net profit
cash flows does not work for IRR evaluations as the net profits would remain positive even at 100% IRR. This is
another reason why cash flows are preferred over profits for project evaluation analysis.
The primary disadvantage is that trial and error is typically used, but this is no longer a
serious problem as it can be done automatically by the computer. There may be cases where there
is more than one IRR, but the first one, which would be the lowest, is usually the one desired.
The MIRR has only one value, so it was used as a check and as an estimate of the IRR, but it is now
rarely used as the actual IRR can be calculated directly with little difficulty.
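As a rough illustration of the trial-and-error search, the short Python sketch below (an illustration only, not the authors' spreadsheet; the function names are assumptions) brackets the IRR of the Project A CFAT stream of Table 13.1 by bisection.

```python
# Minimal sketch: find the IRR of Project A's CFAT stream by bisection.
# Cash flows are the year 0-5 CFAT values from Table 13.1.
cfat = [-30_000, 11_424, 13_406, 7_212, 13_482, 12_000]

def present_worth(rate, cash_flows):
    """Present worth of a cash-flow series at the given rate (year 0 undiscounted)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, low=0.0, high=1.0, tol=1e-6):
    """Bisection search for the rate that drives the present worth to zero."""
    while high - low > tol:
        mid = (low + high) / 2
        if present_worth(mid, cash_flows) > 0:
            low = mid          # PW still positive: the IRR is higher
        else:
            high = mid         # PW negative: the IRR is lower
    return (low + high) / 2

print(f"PW at 15%: {present_worth(0.15, cfat):,.0f}")   # about 8,487
print(f"IRR: {irr(cfat):.1%}")                          # about 26.4%
```

Running the sketch reproduces the Table 13.2 behavior: the present worth falls as the rate rises and crosses zero near 26.4%.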
13.3 MODIFIED INTERNAL RATE OF RETURN (MIRR)
The MIRR was used to estimate the IRR and has only one rate of return, without the possibility
of multiple rates of return. However, it is only an approximate value of the IRR, although it will be close,
and if multiple values of the IRR occur, the one nearest to the MIRR would be the correct one
to select.
The MIRR method takes the future worth of the net positive cash flows and the
present worth of the net negative cash flows to determine the MIRR and can be expressed as:

(1 + MIRR)^n = Future Worth of positive cash flows at MARR / Present Worth of negative cash flows at MARR.    (13.1)
Table 13.1: Cash flows of additive manufacturing project (Project A) (end-of-year revenue, expense, and cash flow before taxes; loan cash flow and loan interest; depreciation; taxable income and taxes paid; cash flow after taxes; net profits; and the discounted and cumulative values of each)
Table 13.2: Internal rate of return calculations

Estimated IRR (%)    Present Worth of Cash Flows After Taxes ($)
 0                   27,524
15                    8,487
20                    4,237
25                      866
26                      244
26.4                      1

Figure 13.1: Present worth vs. rate of return to determine IRR.
Using the data of Table 13.3 in Equation (13.1) for the MIRR one obtains:
(1 + MIRR)^5 = [11,424(1.15)^4 + 13,406(1.15)^3 + 7,212(1.15)^2 + 13,482(1.15) + 12,000] / 30,000

(1 + MIRR)^5 = 77,412/30,000 = 2.5804

1 + MIRR = 1.2089,   MIRR = 20.9%.
Note that the MIRR (20.9%) is greater than the MARR (15%), but not near the actual
IRR (26.4%) for this case. This is because the MIRR used the MARR to compound and discount the values;
a closer estimate would require recalculating with the MIRR value to obtain a new MIRR. The MIRR is
now rarely used as it is relatively easy to directly calculate the IRR.
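The single-pass nature of Equation (13.1) is easy to see in code. The following Python sketch is an illustration under the Project A data, not part of the text; the variable names are assumptions.

```python
# Minimal sketch of Equation (13.1): (1 + MIRR)^n equals the future worth of the
# positive cash flows (compounded at the MARR) divided by the present worth of
# the negative cash flows (discounted at the MARR). Values are Project A's CFAT.
cfat = [-30_000, 11_424, 13_406, 7_212, 13_482, 12_000]
marr = 0.15
n = len(cfat) - 1   # 5-year project life

fw_positive = sum(cf * (1 + marr) ** (n - t) for t, cf in enumerate(cfat) if cf > 0)
pw_negative = sum(-cf / (1 + marr) ** t for t, cf in enumerate(cfat) if cf < 0)

mirr = (fw_positive / pw_negative) ** (1 / n) - 1
print(f"FW of positives: {fw_positive:,.0f}")   # about 77,412
print(f"PW of negatives: {pw_negative:,.0f}")   # 30,000
print(f"MIRR: {mirr:.1%}")                      # about 20.9%
```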
13.4 BENEFIT/COST RATIO
B/C analysis utilizes the present worth procedure to determine the benefits and costs. There
are two versions of the analysis: the conventional B/C ratio and the modified benefit/cost
M (B/C) ratio. The conventional ratio utilizes the total benefits and total costs as the ratio
components, whereas the M (B/C) utilizes the net benefits and the total investment costs. The
M (B/C) will give higher values of the ratio, but the decision as to whether the project
is acceptable will be the same.
Table 13.3: Positive future worth and negative present worth for calculating MIRR

Year      Positive Cash Flows    Future Worth of Positive Cash Flows    Negative Cash Flows    Present Worth of Negative Cash Flows
0                                                                       30,000                 30,000
1         11,424                 19,981
2         13,406                 20,389
3          7,212                  9,538
4         13,482                 15,504
5         12,000                 12,000
Totals    57,524                 77,412                                 30,000                 30,000
13.4.1 CONVENTIONAL BENEFIT/COST RATIO
The B/C ratio is an extension of the present worth method as it uses the present worth value
of the benefits compared to the present worth value of the costs. When the present worth of
the benefits exceeds the present worth of the costs, the ratio of benefits to costs exceeds unity
(1.0), and the project is considered acceptable. This is the same as the total present worth being
greater than zero which is the criteria for an acceptable project by present worth analysis. This
approach is frequently used for the evaluation of government projects and other projects where the
benefits and costs include not only the direct benefits and costs but also the potential effects upon
society, safety, and the environment, which are difficult to measure directly. The saving of lives, fewer
accidents, etc. are considered as benefits, and the costs of implementing them are considered
as costs. The analysis also requires a value for saving a life; administrations who prefer to
avoid safety and environmental projects use a low value of a "life," while administrations who want
to implement safety and environmental measures use a higher value of a "life." The ratio is also
dependent upon the rate of return and, in general, the higher the required rate of return, the
less likely the approval of the project, as most benefits occur in the future and higher MARR
values decrease the present worth of the future benefits.
The acceptance of a project by present worth analysis requires that the project value is
positive, that is:

PW (project value) ≥ 0.    (13.2)

For B/C analysis, the project is separated into benefits (positive values) and costs (negative values),
and this results in Equation (13.2) being rewritten as:

PW (benefits) − PW (costs) ≥ 0,   that is,   PW (benefits) ≥ PW (costs).

Now dividing by PW (costs), one obtains the relationship for B/C ratio analysis:

PW (benefits)/PW (costs) ≥ 1.    (13.3)
Equation (13.3) illustrates the standard version of the B/C ratio. As an example of the B/C
ratio, using the CFAT values from Table 13.1 with no rate of return, what is the B/C
ratio for the cash flows after taxes for Project A? (The values are listed in Table 13.3.)
The B/C ratio using a zero discount rate starts with the present worth of the benefits
(positive cash flows) in the cash flow after taxes, which is:

PW (Benefits) = 11,424 + 13,406 + 7,212 + 13,482 + 12,000 = 57,524.

The present worth of the costs (negative cash flows) in the cash flow after taxes is:

PW (Costs) = 30,000.

Benefit-Cost Ratio = PW (Benefits)/PW (Costs) = 57,524/30,000 = 1.92 > 1.0.

Therefore, at a zero rate of return, the benefit/cost ratio exceeds one and the project is
acceptable.
If the MARR is 15%, the B/C ratio can be determined using the data of Table 13.1.

The present worth of the discounted benefits (positive cash flows) in the cash flow after taxes is:

PW (Benefits) = 9,934 + 10,137 + 4,742 + 7,708 + 5,966 = 38,487.

The present worth of the costs (negative cash flows) in the cash flow after taxes is:

PW (Costs) = 30,000.

Benefit-Cost Ratio = PW (Benefits)/PW (Costs) = 38,487/30,000 = 1.28 > 1.0.

Thus, a high MARR will generally greatly reduce the B/C ratio and could drive it below 1.0,
making the project unacceptable.
When comparing alternatives, the study period must be the same when using present
worth values. The alternative with the highest benefit/cost ratio that is greater than 1.0 is the
best acceptable alternative project. If the projects all have B/C ratios below 1.0, the one closest
to 1.0 is the best if a project must be selected, but it does have a negative present worth. If
discounted values are used and the ratio is greater than 1.0, the project will also meet the MARR
requirements. If the projects have unequal lives, then the B/C should be calculated on an annual
worth basis.

In comparing alternatives, projects with large investments may have a higher present worth
value than a lower-investment project, but the B/C ratio may favor the lower present worth
project.
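The split of a CFAT stream into discounted benefits and discounted costs can be automated. The Python sketch below is illustrative only (the function name bc_ratio is an assumption, not part of the text) and reuses the Project A values.

```python
# Minimal sketch: conventional B/C ratio from a CFAT stream, with and without
# discounting, using the Project A values of Table 13.1.
cfat = [-30_000, 11_424, 13_406, 7_212, 13_482, 12_000]

def bc_ratio(cash_flows, rate):
    """PW of positive cash flows divided by PW of negative cash flows."""
    pw_benefits = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows) if cf > 0)
    pw_costs = sum(-cf / (1 + rate) ** t for t, cf in enumerate(cash_flows) if cf < 0)
    return pw_benefits / pw_costs

print(f"B/C at  0%: {bc_ratio(cfat, 0.00):.2f}")   # 57,524/30,000 = 1.92
print(f"B/C at 15%: {bc_ratio(cfat, 0.15):.2f}")   # 38,487/30,000 = 1.28
```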
13.4.2 TRAFFIC INTERSECTION EVALUATION
This is a different version of the problem presented in Strategic Cost Analysis [4]. This problem
considers a traffic light vs. a traffic circle proposed for a dangerous intersection. In the past 3 years
an average of 2 fatalities and 6 major accidents have occurred. The installation of the traffic light
signal will cost $500,000 and the traffic circle will cost $4,000,000. The annual maintenance for
the traffic circle will be $10,000 and the annual maintenance of the signal will be $20,000. The
safety engineers expect the fatalities using the traffic light to be reduced by 20% and the serious
injuries to be reduced by 40%. They also predict that the traffic circle will reduce fatalities by 80%
and serious injuries by 50%. The cost of a fatality is estimated at $5,000,000 and a serious injury
at $200,000. The discount rate for such projects is 5% (funded by bonds) and the project life is
estimated to be 25 years for both alternatives. What is the B/C ratio for each of the alternatives?
Present worth values are:

Traffic Light Study

Costs
Investment = $500,000
PW (Annual Maintenance) = $20,000 (P/A, i = 5%, n = 25) = $20,000 [(1.05)^25 − 1]/[0.05(1.05)^25] = $20,000 (14.094) = $281,880
Total Costs = $781,880

Benefits
PW (Fatality Savings) = (0.20 × 5,000,000) × (P/A, i = 5%, n = 25) = 1,000,000 × 14.094 = $14,094,000
PW (Injury Saving) = (0.40 × 200,000) × (P/A, i = 5%, n = 25) = 80,000 × 14.094 = $1,127,520
Total Benefits = $15,221,520

Benefit/Cost Ratio = 15,221,520/781,880 = 19.47.

Traffic Circle Study

Costs
Investment = $4,000,000
PW (Annual Maintenance) = $10,000 (P/A, i = 5%, n = 25) = $10,000 [(1.05)^25 − 1]/[0.05(1.05)^25] = $10,000 (14.094) = $140,940
Total Costs = $4,140,940

Benefits
PW (Fatality Savings) = (0.80 × 5,000,000) × (P/A, i = 5%, n = 25) = 4,000,000 × 14.094 = $56,376,000
PW (Injury Saving) = (0.50 × 200,000) × (P/A, i = 5%, n = 25) = 100,000 × 14.094 = $1,409,400
Total Benefits = $57,785,400

Benefit/Cost Ratio = 57,785,400/4,140,940 = 13.95.
The B/C ratios of both proposals are much greater than 1.0, but the traffic light is
preferred as its B/C ratio is the larger of the two. However, if one compares the projects on a
present worth basis, the traffic circle would be preferred.
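A short script is a convenient check on these hand calculations. The Python sketch below is illustrative (the helper names are assumptions); it computes the (P/A, i, n) factor and the conventional B/C ratios for the two alternatives.

```python
# Minimal sketch of the traffic intersection comparison: the (P/A, i, n) factor
# converts the uniform annual benefits and maintenance costs to present worth.
def p_over_a(i, n):
    """Uniform-series present worth factor (P/A, i, n)."""
    return ((1 + i) ** n - 1) / (i * (1 + i) ** n)

pa = p_over_a(0.05, 25)                  # about 14.094

def conventional_bc(investment, annual_maint, annual_benefit):
    costs = investment + annual_maint * pa
    benefits = annual_benefit * pa
    return benefits / costs

light_benefit  = 0.20 * 5_000_000 + 0.40 * 200_000    # 1,080,000 per year
circle_benefit = 0.80 * 5_000_000 + 0.50 * 200_000    # 4,100,000 per year

print(f"Traffic light  B/C: {conventional_bc(500_000, 20_000, light_benefit):.2f}")    # about 19.5
print(f"Traffic circle B/C: {conventional_bc(4_000_000, 10_000, circle_benefit):.2f}") # about 13.9
```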
13.5 MODIFIED BENEFIT/COST RATIO
The M B/C considers the net benefits divided by the total investment costs. The net benefits are
the present worth of all the annual benefits minus the present worth of all the annual operating
costs, and the cost term is only the present worth of all the investment costs, which usually occur at
time zero but may also occur in other periods for large construction projects. This method results in
a higher ratio value, but generally does not alter the preference between the alternatives. If the
projects have identical project lives, the present worth approach values can be used, but if the
project lives are different the average annual values should be used. The values will be higher than
the conventional B/C ratio, but whether the ratio is greater or less than one will not change.
It results in a higher B/C number, which makes the project appear better. The investment costs
may occur over more than time zero, in which case the ratio would include the present worth
of all investment costs:
Modified Benefit/Cost = (Benefits − Operating Costs)/Total Investment Costs.    (13.4)
If one takes the data for the Traffic Light vs. Traffic Circle problem, one has the following.
Traffic Light Study

Costs
Investment = $500,000
PW (Annual Maintenance) = $20,000 (P/A, i = 5%, n = 25) = $20,000 (14.094) = $281,880
Total Costs = $781,880

Benefits
PW (Fatality Savings) = (0.20 × 5,000,000) × (P/A, i = 5%, n = 25) = 1,000,000 × 14.094 = $14,094,000
PW (Injury Saving) = (0.40 × 200,000) × (P/A, i = 5%, n = 25) = 80,000 × 14.094 = $1,127,520
Total Benefits = $15,221,520

Modified Benefit/Cost Ratio = (15,221,520 − 281,880)/500,000 = 14,939,640/500,000 = 29.88.

Traffic Circle Study

Costs
Investment = $4,000,000
PW (Annual Maintenance) = $10,000 (P/A, i = 5%, n = 25) = $10,000 (14.094) = $140,940
Total Costs = $4,140,940

Benefits
PW (Fatality Savings) = (0.80 × 5,000,000) × (P/A, i = 5%, n = 25) = 4,000,000 × 14.094 = $56,376,000
PW (Injury Saving) = (0.50 × 200,000) × (P/A, i = 5%, n = 25) = 100,000 × 14.094 = $1,409,400
Total Benefits = $57,785,400

Modified Benefit/Cost Ratio = (57,785,400 − 140,940)/4,000,000 = 57,644,460/4,000,000 = 14.41.
Both ratios increased, but the traffic light ratio increased more and has the higher ratio.
The modified B/C ratio will always give a higher value, but it will not change the alternative
selection. The higher value may cause people to think the project is better, but the changes in the numerator
and denominator are the same, and since the numerator is always larger when the ratio is greater
than one, the ratio will increase under the M B/C ratio.
The calculations could also be performed on an annual worth cost basis by converting the
investment costs to an annual worth (AW) using (A/P, i, n) and using the original annual costs;
this will be illustrated for the traffic light.
Traffic Light Study

Costs
AW (Annual Investment) = $500,000 (A/P, i = 5%, n = 25) = $500,000 [0.05(1.05)^25/((1.05)^25 − 1)] = $500,000 (0.070952) = $35,476
AW (Annual Maintenance) = $20,000
Total Annual Costs = $55,476

Benefits
AW (Fatality Savings) = (0.20) × 5,000,000 = $1,000,000
AW (Injury Saving) = (0.40) × 200,000 = $80,000
Total Annual Benefits = $1,080,000

Modified Benefit/Cost = ($1,080,000 − $20,000)/$35,476 = 29.88 (Annual Costs).

The M B/C is the same whether on a present worth basis or on an annual cost basis. In
many instances there are several annual costs and it is easier to use the annual cost basis. If one
uses annual costs, one must convert the initial investment cost to an annual investment cost.
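The annual-worth form can be sketched the same way. The Python fragment below is an illustration (the helper name a_over_p is an assumption) of the capital recovery factor and the annual-basis modified B/C ratio for the traffic light.

```python
# Minimal sketch of the annual-worth form of the modified B/C ratio for the
# traffic light, using the capital recovery factor (A/P, i, n).
def a_over_p(i, n):
    """Capital recovery factor (A/P, i, n)."""
    return (i * (1 + i) ** n) / ((1 + i) ** n - 1)

aw_investment = 500_000 * a_over_p(0.05, 25)      # about 35,476 per year
aw_maintenance = 20_000
aw_benefits = 0.20 * 5_000_000 + 0.40 * 200_000   # 1,080,000 per year

modified_bc = (aw_benefits - aw_maintenance) / aw_investment
print(f"AW of investment: {aw_investment:,.0f}")
print(f"Modified B/C (annual basis): {modified_bc:.2f}")   # about 29.9
```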
13.6 POSITIVE AND NEGATIVE PROJECT BALANCES
13.6.1 INTRODUCTION
A Project Balance (PB) analysis determines the project balance at the end of each period. It usually
starts with a negative balance, which is a result of the initial investment. An excellent reference
for project balances is that by Park and Sharp-Bette [2]. The balances are summed, and the
positive future balances reduce the initial negative investment. It is a future worth approach
accumulated from the starting period to the end of the project. Each balance lasts through
its period, and the next cash flow is added at the end of the period; the last period has no area.
This procedure permits one to select between projects which may have similar present worth
evaluations. Those projects which have large positive cash flows in the initial periods will have a
higher project balance. This method is to be used for comparing projects which have the same
study period. The ratio of the negative area balance to the positive area balance is an indication
of the risk of the project; the higher the ratio, the higher the risk.
13.6.2 PROJECT A EXAMPLE PROBLEM
If one takes Project A from Table 13.1, the future worth of the project would be:

FW (CFAT − 15%) = −30,000(1.15)^5 + 11,424(1.15)^4 + 13,406(1.15)^3 + 7,212(1.15)^2 + 13,482(1.15) + 12,000
= −60,341 + 19,980 + 20,389 + 9,538 + 15,504 + 12,000 = $17,070

(if > 0.0, accept if a single alternative, or accept the greatest value among alternatives)

and

PW (CFAT − 15%) = 17,070/(1.15)^5 = 8,487.
The project balance approach calculates the project balance at the end of each year, and
time zero is also included. The balance of the previous period is compounded at the MARR, and
the cash flow at the end of the current period is added to give the current period balance. The
area contribution of each period is the current project balance times the period length, which
is one year for all periods. The last value has a zero length, as it occurs at the end of the project,
and does not contribute to the area balance, but it is part of the cumulative balance; the final
cumulative balance is identical to the future worth of the cash flows for the project. The results
for Project A are shown in Table 13.4 and Figure 13.2, and the large negative areas represent an
indication of the risk of the project.
Table 13.4: Project balance Project A calculations for cumulative balances and area balances

Project Balance (PB) for Project A              Cumulative Balance    Negative Area    Positive Area
PB0 = −30,000                                         −30,000            −30,000
PB1 = −30,000(1 + 0.15) + 11,424                      −23,076            −23,076
PB2 = −23,076(1 + 0.15) + 13,406                      −13,131            −13,131
PB3 = −13,131(1 + 0.15) + 7,212                        −7,889             −7,889
PB4 = −7,889(1 + 0.15) + 13,482                        +4,409                              +4,409
PB5 = 4,409(1 + 0.15) + 12,000                        +17,070                                   0
Totals                                                                   −74,096            +4,409
(Note that the final positive project balance is the same as the future worth of the project.)

Figure 13.2: Project balance diagram for Project A example problem.

The total negative area of the project is −$74,096 and the positive area is +$4,409, so the
project is a somewhat "risky" project. The payback year using the Project Balance method would
not occur until the end-of-period 4, and the discounted cash flows also would show payback in the 4th
year, but the undiscounted total cash flow analysis would show payback at the end-of-period 3. In general, the
project balance method is more negative, as the initial cash flow is carried throughout the study period
and increased by the MARR. Projects with longer study periods should have lower ratios
of negative area/positive area than projects with shorter study periods.

The ratio of the Negative Area to the Positive Area is:
Negative Area/Positive Area = 74,096/4,409 = 16.81.
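The year-by-year recursion of Table 13.4 is easy to automate. The Python sketch below is illustrative (variable names are assumptions) and reproduces the cumulative balances and the negative and positive areas for Project A.

```python
# Minimal sketch of the project balance calculation of Table 13.4: each year's
# balance is the prior balance grown at the MARR plus that year's cash flow.
cfat = [-30_000, 11_424, 13_406, 7_212, 13_482, 12_000]   # Project A
marr = 0.15

balance = 0.0
negative_area = positive_area = 0.0
for year, cf in enumerate(cfat):
    balance = balance * (1 + marr) + cf
    # The final balance has zero length and contributes no area.
    if year < len(cfat) - 1:
        if balance < 0:
            negative_area += balance
        else:
            positive_area += balance
    print(f"PB{year}: {balance:>10,.0f}")

print(f"Negative area: {negative_area:,.0f}")               # about -74,096
print(f"Positive area: {positive_area:,.0f}")               # about  +4,409
print(f"Ratio: {abs(negative_area) / positive_area:.2f}")   # about 16.8
```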
13.6.3 PROJECT Z EXAMPLE PROBLEM
Project Z has an initial cost of $30,000 and a set of annual net cash flows after taxes for 5 years
of −$10,000, +$8,000, +$15,000, +$25,000, and +$34,147.

The return rate is 15% and the present worth would be:

PW (CFAT − 15%) = −30,000 − 10,000/(1.15) + 8,000/(1.15)^2 + 15,000/(1.15)^3 + 25,000/(1.15)^4 + 34,147/(1.15)^5
= −30,000 − 8,696 + 6,049 + 9,863 + 14,294 + 16,977 = $8,487.

This present worth is identical to Project A as calculated in Table 13.1. The project balance,
however, is different as shown by Table 13.5 and Figure 13.3. The future worth would also be
the same at $17,070.

Table 13.5: Project balance Project Z calculations for cumulative balances and area balances

Project Balance (PB) for Project Z              Cumulative Balance    Negative Area    Positive Area
PB0 = −30,000                                         −30,000            −30,000
PB1 = −30,000(1 + 0.15) − 10,000                      −44,500            −44,500
PB2 = −44,500(1 + 0.15) + 8,000                       −43,175            −43,175
PB3 = −43,175(1 + 0.15) + 15,000                      −34,651            −34,651
PB4 = −34,651(1 + 0.15) + 25,000                      −14,849            −14,849
PB5 = −14,849(1 + 0.15) + 34,147                      +17,071                                   0
Totals                                                                  −167,175                 0

The negative project balance area for Project Z, at −$167,175, is much more negative than
the project balance area for Project A, at −$74,096, and indicates a higher risk (more than double)
involved with Project Z. Although the present worth values are the same over the same project
lives, the risk of loss is much greater for Project Z than for Project A. There is a positive cu-
mulative balance for Project A at the end-of-period 4, but Project Z becomes positive only
at the end of the project in period 5. Since there is no positive area for Project Z, the ratio of
the negative area to positive area would be infinity, and this indicates a highly risky project, as a
positive cumulative balance first occurs at the end of the last period.
Figure 13.3: Project balance diagram for Project Z example problem.
13.7 SUMMARY
Five additional methods for the evaluation of projects have been presented—the internal rate
of return (IRR), the modified internal rate of return (MIRR), the benefit/cost ratio (B/C), the
modified benefit cost ratio (M B/C), and the project balance approach (PB). The IRR determines
the rate of return at which the present worth becomes zero using a trial and error approach. The
MIRR was an estimate of the IRR and will be greater than the MARR, but less than the IRR
and is being replaced by directly calculating the IRR. The B/C ratio is the ratio of the positive
cash flows to the total of the negative cash flows and the investment. The M B/C ratio considers
the ratio of the net benefits (positive cash flows minus the negative cash flows) to the investment
and will result in a higher ratio value, but generally will not change the selection between projects
with the same study period. The project balance approach is used to compare acceptable projects
with similar present worths, to examine the risk with respect to when the positive cash flows
occur. It is based upon future worth calculations of projects with equivalent future worth values
and equal project lives, and projects with the greater net negative cash flow areas are the more
risky projects. Projects with short study periods, large investments, and high MARR values will
have large negative areas even though large revenues may occur in the latter project periods.
13.8 REFERENCES
[1] Creese, Robert C. and Adithan, M., Strategic Cost Analysis for Project Managers and Engi-
neers, New Academic Science Limited, Tunbridge Wells, UK, pp. 83–108, 2012. 197
[2] Park, Chan S. and Sharp-Bette, Gunter P., Advanced Engineering Economics, John Wiley
& Sons, Inc., New York, pp. 207–209, 231–236, and 246–253, 1990. 207
[3] Newnan, Donald G., Eschenbach, Ted G., and Lavelle, Jerome P., Engineering Economic
Analysis, 11th ed., Oxford University Press, New York, p. 655, 2012. 197
[4] Creese, Robert C. and Adithan, M., Strategic Cost Analysis for Project Managers and Engi-
neers, New Academic Science Limited, Tunbridge Wells, UK, pp. 95–105, 2012. 203
13.9 EVALUATIVE QUESTIONS
1. Two alternative public works projects to prevent flooding by hurricanes are under con-
sideration. If the MARR is 4%, which of the projects should be selected? Use both the
conventional and modified B/C ratios.
                                           Project HC      Project LC
Capital Investment for Dams and Pumps      $7,500,000      $9,000,000
Annual Operations and Maintenance            $250,000        $200,000
Annual Benefit                               $400,000        $300,000
Useful Project Life (years)                        60              45
2. Calculate the B/C ratio and the M B/C ratio for the traffic circle and traffic light using
annual worth values. Also determine the PW values for both the traffic light and traffic
circle.
3. Use a value of $7,000,000 for a saved life and calculate the new B/C ratio for the traffic
circle and traffic light.
4. An investment of $800,000 was made for a new process which is expected to generate an
annual revenue of $350,000 with annual expenses of $100,000 for 7 years before being
replaced. The equipment has a MACRS-GDS of 5 years and the MARR is 15% and the
data is in Table 13.6. You may want to add additional columns to determine the answers.
(a) Determine the Internal Rate of Return (IRR).
(b) Determine the MIRR.
(c) Determine the B/C ratio on a before tax and depreciation basis (only revenues, ex-
penses, and investment).
Table 13.6: Data for Problem 4 (year-by-year net revenues, expenses, cash flow before taxes, MACRS depreciation percent and amount, taxable income, taxes paid, net profits, CFAT, cumulative CFAT, discounted CFAT, and cumulative discounted CFAT)
(d) Determine the M B/C ratio on a before tax basis and depreciation basis.
(e) Calculate the FW of the CFAT.
(f ) Prepare a Project Balance Diagram on the CFAT and determine the total negative
area and the total positive areas.
(g) Determine the book value of the equipment over the project life, starting at year zero.
(h) What is the payback period using CFAT and with using discounted CFAT?
(i) Calculate the ROI and ROI-D, AW (ROI), and AW-b (ROI).
5. Rework Problem 4 using straight line depreciation (MACRS-ADS) for 5 years with mid-
year depreciation.
6. Rework Problem 4 using a MARR of 5% instead of 15%.
7. Make a project balance diagram for Problem 4. Calculate the future worth of the project
and use that as a check for your calculations.
CHAPTER 14

Introduction to Risk Analysis
14.1 INTRODUCTION
The basic methods of risk analysis involve varying selected input parameters of the model to
determine their effect upon the cash flows. The probabilistic methods for estimating risk
are more advanced and are presented in the following chapter. The positive and negative project
balances of the previous chapter can also be used for an approximate, conservative estimate of
project risk; more realistic methods will be presented here.
The projects considered involve data for future events such as revenues, expenses, depreci-
ation rates and methods, taxes, and a desired rate of return. The values used are the best estimates
available when the project study is made, and the future is full of uncertainty. The longer
the project, the higher the degree of uncertainty in the future data. In the previous chapters the data
were point estimates of the variables that were assumed known with certainty, but one should
consider variation to determine which of the variables are most critical and how sensitive the
results are to changes in each parameter. In this chapter, discrete changes will be considered to estimate the
variation and thereby estimate project risk.
14.1.1 RISK VS. UNCERTAINTY
Risk refers to situations which can be described by some outcomes whose probabilities can be es-
timated. These probabilities can be discrete or continuous and the distributions, and parameters
are assumed to be known.
Uncertainty implies the probabilities, distributions, and/or parameters are not known. The
techniques for considering decisions under conditions of uncertainty are beyond the
scope of this book. More detailed discussions on the differences between risk and uncertainty
are presented in the reference [1]. Thus, the focus in this and the following chapter will be risk
analysis [2], but the variability in the risk analysis is often called the uncertainty of the project.
Two basic approaches for considering risk presented in this chapter are:
1. Sensitivity Analysis and
2. Optimistic-Pessimistic Analysis (Scenario Analysis)
These two techniques will be illustrated by several examples to emphasize the methodology
for obtaining results. The results are used to illustrate the effect of variation of the input variables
upon the output.
14.2 SENSITIVITY ANALYSIS
Sensitivity analysis is an approach for examining the impact of change of selected critical param-
eters in the estimate. The present worth method is frequently used to evaluate the percentage
change in one variable while the other variables remain fixed. The variables which will be ex-
amined for change are selling price, capacity utilization, investment life, return rate, and total
cost changes for an example problem concerning a 3D rapid prototyping project for tooling
production.
14.2.1 INNOVATIVE 3D RAPID PROTOTYPING AND TOOLING CENTER
EXAMPLE PROBLEM
A group of investors are planning to start a 3D rapid prototyping and tooling center to provide
tooling and prototypes for the various manufacturing companies in Manufacturing Valley. The
plant initial investment will be $30 million with $10 million for the physical plant construction
and $20 million for installation and equipment. The plant will have 50 engineers and technicians
with an average salary of $60,000/year and a management and sales force of 6 employees with
an average salary of $80,000/year. The process uses a wire feed which is melted by a laser source.
The planned processing capacity is 100 kg of wire per hour with an effective product yield
of 80% and the other 20% represents scrap and test specimens. The maximum capacity would be
125 kg per hour. The life of the facility is estimated to be 10 years for investment recovery. The
operating costs are estimated at $20/hour, the annual equipment investment is $1 million, the
annual depreciation at $2 million, and the total taxes are estimated at 25%. The plant operates
24 hours/day for 340 days per year, with 25 days for shutdowns and holidays, and the annual
utility costs are $200,000. The sales revenue is expected to be $40/kg of product sold and the
wire cost is expected to be $10/kg used. The data for analysis are in Table 14.1.
The present worth of the investment at the 15% MARR, that is PWI.15%/:
PWI(15%) = −$30,000,000 + $8,454,000 (P/A, i = 15%, n = 10)
= −$30,000,000 + $8,454,000 × 5.0188
= $12,428,900.
The calculated IRR of 25.21% is greater than 15% MARR and the project is approved
for further study. The next steps are to investigate the sensitivity of the PWI.15%/ to changes in
selling price, capacity utilization, tax rate, investment life, return rate, and total cost changes.
14.2.2 SELLING PRICE SENSITIVITY
The selling price (SP) will be evaluated at a 20% decrease, 10% decrease, zero change, 10%
increase, and 20% increase which will result in the selling price levels of 32, 36, 40, 44, and
48 $/kg. The expression for revenue becomes:

Revenues = 652,800 × selling price (SP) = 652,800 × SP.    (14.1)

Then the present worth of the investment at a 15% MARR, that is PWI(15%), becomes:

PWI(15%) = −Investment + (Revenues − Expenses) × (1 − Taxrate (decimal)) × (P/A, i = 15%, n = 10)
= −30,000,000 + (652,800 × SP − 14,840,000) × (1 − 0.25) × 5.0188
= 2,457,204 × SP − 85,859,244.    (14.2)
Table 14.1: Innovative 3D rapid prototyping and tooling center

Land and Building Construction = $10,000,000
Equipment and Controls = $20,000,000
Total Investment = $30,000,000
Annual Revenues
  Product Sales: 340 days/yr × 24 hr/day × 100 kg/hr × 0.8 kg product/kg wire used × $40/kg = $26,112,000
Annual Expenses
  Labor
    Engineers and Technicians: 50 employees @ $60,000 = $3,000,000
    Management and Sales: 6 employees @ $80,000 = $480,000
  Operating Costs
    Materials: 340 days/yr × 24 hr/day × 100 kg/hr × 1 × $10/kg = $8,160,000
    Annual Utility Costs = $200,000
    Yearly New Equipment Investment = $1,000,000
    Annual Depreciation Expenses = $2,000,000
  Total Annual Expense = $14,840,000
Gross Profit = $11,272,000
Taxes @ 25% = $2,818,000
Net Profits = $8,454,000
Solving for the various selling prices, the results are in Table 14.2.

Table 14.2: Selling price sensitivity

Sales Price ($/kg)    Sales Price Change (%)    PWI (15%) ($)
32                    -20                        -7,228,700
36                    -10                         2,600,100
40                      0                        12,428,900
44                    +10                        22,257,700
48                    +20                        32,086,500
Thus, one observes that the operation is very sensitive to selling price, as a 10% change
has approximately a $10 million change in the present worth of the investment.
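A sensitivity table such as Table 14.2 can be generated directly from Equation (14.2). The Python sketch below is an illustration under the stated base-case data, not the authors' spreadsheet; variable names are assumptions.

```python
# Minimal sketch of the selling-price sensitivity of Table 14.2, built from the
# relation PWI(15%) = -Investment + (Revenues - Expenses)*(1 - tax)*(P/A, 15%, 10).
def p_over_a(i, n):
    return ((1 + i) ** n - 1) / (i * (1 + i) ** n)

investment = 30_000_000
expenses = 14_840_000
tax_rate = 0.25
pa = p_over_a(0.15, 10)                    # about 5.0188
kg_sold_per_year = 340 * 24 * 100 * 0.8    # 652,800 kg of product per year

for change in (-0.20, -0.10, 0.0, 0.10, 0.20):
    sp = 40 * (1 + change)
    revenues = kg_sold_per_year * sp
    pwi = -investment + (revenues - expenses) * (1 - tax_rate) * pa
    print(f"SP = {sp:>4.0f} $/kg ({change:+.0%}): PWI(15%) = {pwi:>13,.0f}")
```

The same loop can be repeated for any other single parameter by holding the rest of the inputs at their base values.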
14.2.3 PROCESSING CAPACITY SENSITIVITY
The processing capacity (PC) will be evaluated similarly to that of the selling price, but both the
revenues and the expenses have components related to the processing capacity. The relation for
the revenues will be:
Revenues = 340 × 24 × PC × 0.8 × 40 = 261,120 × PC.    (14.3)

The relation for the material expenses will be similar and is:

Material Expenses = 340 × 24 × PC × 1 × 10 = 81,600 × PC.

This will change the total expense expression to:

Expenses = 81,600 × PC + (14,840,000 − 8,160,000) = 81,600 × PC + 6,680,000.    (14.4)

PW(15%) = −Investment + (Revenues − Expenses) × (1 − Taxrate (decimal)) × (P/A, i = 15%, n = 10)
= −30,000,000 + (261,120 × PC − (81,600 × PC + 6,680,000)) × (1 − 0.25) × 5.0188
= −30,000,000 + 675,731 × PC − 25,144,188
= 675,731 × PC − 55,144,188.    (14.5)

Solving for various capacity levels, the results are in Table 14.3.
Table 14.3: Process capacity sensitivity

Processing Capacity (kg/hr)    Processing Capacity Change (%)    PW (15%) ($)
 80                            -20                                -1,005,700
 90                            -10                                 5,671,600
100                              0                                12,428,900
110                            +10                                19,186,200
120                            +20                                25,943,500
The present worth is also very sensitive to the processing capacity (PC) as a 10% change
results in a nearly $7 million change. It is not as sensitive as the selling price, but it would be a
critical parameter to monitor.
14.2.4 TAX RATE SENSITIVITY
The effect of tax rates (TR) receives considerable attention in the political world, and the TR
values being considered range from 20–40%, so that range will be considered. This corresponds to changes
of −20%, base case, +20%, +40%, and +60%. The values for the revenues and expenses before
taxes for the base case would be:

Revenues = $26,112,000
Expenses = $14,840,000

PW(15%) = −30,000,000 + (26,112,000 − 14,840,000) × (1 − TR) × (P/A, i = 0.15, n = 10)
= −30,000,000 + (11,272,000) × (1 − TR) × 5.0188
= −30,000,000 + 56,571,914 × (1 − TR).    (14.6)
Solving for the effects of taxes on the PW(15%), the results are in Table 14.4.
Even though the tax rate has changed considerably both as the amount applied and the
percentage increase, the project still has a positive present worth, PW.15%/, which is greater
than 10% of the initial investment. Since the percentage changes are larger than the other com-
parisons, they cannot be compared directly, but the changes are smaller than one normally would
expect. This indicates that the effects of sales and operations management performance have a
much greater effect upon the project present worth than the tax rate.
Table 14.4: Tax rate sensitivity

Tax Rate Applied (%)    Tax Rate Change (%)    PWI (15%) ($)
20                      -20                    15,257,500
25                        0                    12,428,900
30                      +20                     9,600,300
35                      +40                     6,771,700
40                      +60                     3,943,100
14.2.5 INVESTMENT LIFE SENSITIVITY
The investment life (IL) of the facility will affect the (P/A, i, n) term in the PW(15%) expression.
The various investment lives considered will be 8, 9, 10, 11, and 12 years. The expression would
be:

PW(15%) = −30,000,000 + (26,112,000 − 14,840,000) × (1 − 0.25) × (P/A, i = 0.15, n = 8, 9, 10, 11, 12)
= −30,000,000 + (11,272,000)(0.75) × (P/A, i = 0.15, n = 8, 9, 10, 11, 12)
= −30,000,000 + 8,454,000 × (P/A, i = 0.15, n = 8, 9, 10, 11, 12).    (14.7)
Using this expression and varying the investment life from 8–12 years gives the results in Table 14.5.

Table 14.5: Investment life sensitivity

Investment Life (IL) (years)    Percent Change (%)    (P/A, i = 0.15, n = IL)    PW (15%) ($)
 8                              -20                   4.4972                      8,016,300
 9                              -10                   4.7716                     10,339,100
10                                0                   5.0188                     12,428,900
11                              +10                   5.2337                     14,245,700
12                              +20                   5.4206                     15,825,800

One notices that a 10% change in investment life changes the present worth, PW(15%), by roughly
2 million dollars, compared to the much greater 7 and 10 million dollar changes caused by the
processing capacity and sales price changes.
14.2.6 REQUIRED RATE OF RETURN SENSITIVITY
The MARR of the facility will affect the (P/A, i, n) term in the PW(MARR) expression. The
various required return values considered will be 12, 13.5, 15, 16.5, and 18%, which represent
changes of −20, −10, 0, +10, and +20% of the initial MARR. The expression would be:

PW(MARR) = −30,000,000 + (26,112,000 − 14,840,000) × (1 − 0.25) × (P/A, i = 0.12, 0.135, 0.15, 0.165, 0.18, n = 10)
= −30,000,000 + (11,272,000)(0.75) × (P/A, i = 0.12, 0.135, 0.15, 0.165, 0.18, n = 10)
= −30,000,000 + 8,454,000 × (P/A, i = 0.12, 0.135, 0.15, 0.165, 0.18, n = 10).    (14.8)
Using this expression and varying the MARR from 12–18% gives the results in Table 14.6.

Table 14.6: MARR sensitivity

MARR (i) (%)    Percent Change of MARR    (P/A, i = MARR, n = 10)    PW (MARR(i)) ($)
12              -20                       5.6502                     16,513,900
13.5            -10                       5.3195                     14,971,100
15                0                       5.0188                     12,428,900
16.5            +10                       4.7446                     10,110,800
18              +20                       4.4941                      7,993,100
The magnitude of the PW change due to the MARR is similar to that of the investment life,
but as the MARR requirement increases the PW decreases. The change was less
than 2 million dollars per 10% change, which is similar to the changes for investment life, but in the
opposite direction.
14.2.7 TOTAL COST SENSITIVITY
The total cost (TC) can often change faster than the revenues, so an examination of similar
changes in the total costs as was done for changes in the total revenues will be presented. An
adjustment factor (AF) will be used on the expenses to have the same percentage changes as
occurred in the revenue increases. The cost adjustment factors (AF) will be: 0.80, 0.90, 1.0, 1.1,
and 1.2:
PWI(15%) = −Investment + (Revenues − Expenses × AF) × (1 − Taxrate (decimal)) × (P/A, i = 15%, n = 10)
= −30,000,000 + (26,112,000 − 14,840,000 × AF) × (1 − 0.25) × 5.0188
= −30,000,000 + (26,112,000 − 14,840,000 × AF) × 3.7641.    (14.9)
Using this expression and the AF values of 0.80, 0.90, 1.0, 1.1, and 1.2, the present worth
results are in Table 14.7.
Table 14.7: Total cost sensitivity
The lower the cost adjustment factor, the higher the present worth. The changes as a result
of the changes in total costs are large, but not as large as the sales revenue or the process capacity
changes. But it is an area that management primarily controls at the operations area, rather than
at the marketing and sales areas, and is the one that production management should focus on.
14.3 OPTIMISTIC-PESSIMISTIC ANALYSIS
Optimistic-Pessimistic Analysis is used to evaluate variation in one or more variables with three
different levels of variation for each variable. Three cases are usually considered for each
variable: a "worst case" (Pessimistic or P), a "most-likely case" (Most Likely or ML),
and a "best case" (Optimistic or O). Often more than one variable is analyzed in the study. The
worst case is one in which the results would be lower in less than 5 (or 10)% of the cases, and the
best case would be exceeded in only 5 (or 10)% of the cases. This is a simple and effective method
for analyzing the effect of two variables. More variables can be considered, but the tables would
be considerably more complex.
14.3.1 INNOVATIVE 3D RAPID PROTOTYPING AND TOOLING CENTER
INVESTOR CONCERNS
The investors for the rapid prototyping and tooling center had two major concerns after making
the sensitivity analysis of the variables. The investors were concerned with the revenues and the
investment life as computerized equipment can become obsolete rapidly as noted by the rapid
changes in smartphone capabilities. The following three scenarios were developed for the desired
rate of return of 15% after taxes for the two variables of Net Revenues and Investment Life. The
investment life values were changed to 6 years and 14 years as the depreciation issues under
accelerated depreciation would be over in 6 years and major replacements would definitely be
required after 14 years.
Table 14.8: Innovative rapid prototyping and tooling center analysis

Scenarios                                  Optimistic (O)    Most-Likely (ML)    Pessimistic (P)
Capital Investment (Million $ Units)       30                30                  30
Investment Life (yrs)                      14                10                   6
Net Annual Revenues (Million $ Units)      12.40              8.45                4.54
The present worth values of three of the nine total scenarios were evaluated as:

PW−O(15%) = −30 + 12.40 (P/A, i = 15%, n = 14) = −30 + 12.40 (5.7245) = 41.0

PW−ML(15%) = −30 + 8.45 (P/A, i = 15%, n = 10) = −30 + 8.45 (5.0188) = 12.4

PW−P(15%) = −30 + 4.54 (P/A, i = 15%, n = 6) = −30 + 4.54 (3.7845) = −12.8.
Table 14.9 contains the results of all nine scenarios and the reader should check the other
calculated values in the solution matrix.
These three scenarios give an simple average of $11.0 million and a range of $53.8 million.
The question is which of the two variables is the critical variable, revenue, or investment life. This
analysis indicates that the low revenues are the primary cause and not the short life. Only when
the revenues are low does the project lose money, and it does so at all three lives considered. This
indicates to management that revenues are key and perhaps should be re-examined. The overall
average is $11.0 million on equal weight basis compared to the most likely value of 12.4.
Table 14.9: Innovative rapid prototyping and tooling center analysis solution matrix (PW in million $)

Net Revenues (Million $ Units)        Investment Life (years)
                                       6         10         14
$12.40                                16.9       32.2       41.0
$ 8.45                                 2.0       12.4       18.4
$ 4.54                               -12.8       -7.2       -4.0
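The full solution matrix can be generated from the same present worth relation used above. The Python sketch below is illustrative only; it simply evaluates −Investment + Revenue × (P/A, MARR, life) for each of the nine combinations.

```python
# Minimal sketch of the nine-scenario solution matrix of Table 14.9: present
# worth (million $) for each combination of net annual revenue and project life.
def p_over_a(i, n):
    return ((1 + i) ** n - 1) / (i * (1 + i) ** n)

investment = 30.0                      # million $
revenues = [12.40, 8.45, 4.54]         # optimistic, most-likely, pessimistic
lives = [6, 10, 14]                    # years
marr = 0.15

print("Life (years):", lives)
for rev in revenues:
    row = [round(-investment + rev * p_over_a(marr, n), 1) for n in lives]
    print(f"Net revenue {rev:>5.2f}:", row)
```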
14.4 SUMMARY
Two commonly used methods for estimating variability and risk are sensitivity analysis
and optimistic-pessimistic analysis, which indicate which variables are critical and which are
the non-critical variables. These methods utilize the present worth approach, and probabilities
are not considered. Sensitivity analysis takes only one variable into consideration at a time, but it
is a straightforward process. The optimistic-pessimistic analysis considers two variables at a time
and often indicates which of the two variables is more critical more easily than individual variable
analysis. The consideration of three variables is much more difficult than two variables in the
optimistic-pessimistic analysis.
14.5 REFERENCES
[1] Garvey, Paul R., Probability Methods for Cost Uncertainty Analysis, Marcel Dekker, Inc,
New York, p. 27 and p. 338, 2000. 215
[2] Creese, Robert C. and Adithan, M., Strategic Cost Analysis for Project Managers and Engi-
neers, New Academic Science Limited, Tunbridge Wells, UK, pp. 109–114, 2012. 215
14.6 EVALUATIVE QUESTIONS
1. The salaries were too low in the data of Table 14.1, so the engineers and technicians
were increased to $80,000 and the management and sales employees were increased to $100,000. The
tax rate was decreased to 20%. Since the cash flows were very sensitive to selling price,
calculate the present worth of the cash flows and determine its sensitivity to selling price
similar to that of Table 14.2.
2. Two projects are being considered, one with high risk (risky) with a higher investment but
higher returns and a more conservative project (traditional). Should the risky project be
selected?
Project Challenge           T (Traditional Project)    R (Risky Project)
Net Investment ($)          150,000                    225,000
Total Revenues ($)           80,000                    110,000
Total Costs ($)              20,000                     20,000
Net Revenues ($)             60,000                     90,000
MARR (%)                         15                         15
Project Life (years)              4                          4
(a) Using PW analysis, which is the better project? Note that the net investment is 50%
higher as well as the net revenues. What are the average annual worth of the invest-
ments?
(b) Let the investment life of the traditional project be 5 years instead of 4. How does
that alter the selection? Use average annual worth techniques for consideration.
(c) Use an investment life of 4 years for both projects, but the risky project must earn
a 20% MARR while the traditional project remains at 15%.
(d) What is the B/C ratio for the two projects over 4 years? (Evaluate the risky project
at both MARR values.)
3. Project ABC has high risk, with a high investment but higher returns. Should it be selected
with the new data?
Project ABC                 R (Risky Project)
Net Investment ($)          225,000
Total Revenues ($)           90,000
Total Costs ($)              25,000
Net Revenues ($)             65,000
MARR (%)                         15
Project Life (years)              5
(a) Should it be selected?
(b) Do a sensitivity analysis by varying the project life for 3, 4, 5, 6, and 7 years. What is
the trend?
(c) Do a sensitivity analysis by varying the MARR from 5, 10, 15, 20, and 25%. What
is the trend?
(d) What is the rate of return at which the project has a zero present worth value? What
is this rate of return named?
(e) What is the discounted and the non-discounted B/C ratio for the initial project?
(f) What is the ROI and ROI-D for the project? (Assume total costs include all expenses,
including depreciation and taxes.)
4. Calculate the six scenarios that were not done for the results in Table 14.9. Give
the equations, the values used, and the results for the six scenarios.
5. The investors were concerned about the net revenues and the investment life of a new
process. The following three scenarios were developed for the desired rate of return of 15%
after taxes for the two variables of net revenues and investment life. The investment life values
were set to 3, 5, and 7 years as the project is risky.
Project Challenge Data                       Optimistic (O)    Most-Likely (ML)    Pessimistic (P)
Capital Investment (Million $ Units)         40                40                  40
Investment Life (years)                       7                 5                   3
Net Annual Revenues (Million $ Units)        21                18                  15
(a) Determine the present worth of the nine possibilities and form a solution matrix.
Discuss which of the variables you consider to be most important for this problem.
(b) Calculate the ROI for the three scenarios: optimistic, most-likely, and pessimistic.
(c) Calculate the ROI for the three scenarios using an investment life of 7 years for all three.
CHAPTER 15

Risk Analysis with Probability Considerations
15.1 PROBABILITY METHODS AND TERMINOLOGY
The traditional risk methods of the previous chapter gave point estimate values for the project of
interest, but little indication of the potential range of the results or the probability of a loss on the
project. This added information is helpful in making decisions about the selection of a particular
project. This chapter presents an introduction to probability considerations in the evaluation
of projects; other references are listed for a more detailed coverage of the topic [1, 2].
The key terms are random variables and probability distributions. A random variable can
take on several values, each with a probability that can be determined from the probability
distribution for that variable. The random variable can have either discrete values predicted by
a discrete probability distribution (also called a discrete probability density function) or continu-
ous values predicted by a continuous probability distribution (or continuous probability density
function). In discrete distributions there are a finite number of values and each value has a distinct
probability associated with it. With continuous distributions, the set of values is not count-
able and the probability density function does not produce a probability value at a point as in
the discrete case, but rather a probability for a range between two points, which are designated as
the lower point, L, and the higher or upper point, U.
The key measures of probability distributions are the mean or "expected value" and the
"variance" and "standard deviation," which are used to indicate the center of the distribution
and the possible variation about the mean in the data set. The standard deviation is the square root of the variance and is
more commonly used to describe the variation about the mean. The formula for the expected
value for the discrete probability distribution, which involves a summation, is:
E(x) = μ = Σ (i = 1 to N) p_i x_i   (discrete case),    (15.1)

where
E(x) = expected value of the variable or mean of x
μ = symbol for mean
i = index of the outcomes of variable x in the discrete case
x_i = the value of the ith outcome of variable x
N = total number of outcomes, and N is the last outcome
p_i = probability of the specific ith outcome occurring.
The formula for the variance of the discrete probability distribution is:
Var(X) = σ² = Σ (i = 1 to N) [x_i − μ]² p_i

σ² = Σ (i = 1 to N) [x_i² − 2μx_i + μ²] p_i

σ² = Σ (i = 1 to N) x_i² p_i − 2μ Σ (i = 1 to N) x_i p_i + μ² Σ (i = 1 to N) p_i

σ² = E(x²) − 2μ × μ + μ² × 1

σ² = E(x²) − μ² = E(x²) − [E(x)]²,    (15.2)

where
σ² = symbol for the variance, which is the square of the standard deviation
σ = symbol for the standard deviation
E(x) = expected value of the variable or mean of x, which is μ
μ = symbol for the mean
E(x²) = expected value of the square of the variable x
x_i = an outcome of variable x in the discrete case
N = total number of outcomes, and N is the last outcome
p_i = probability of the specific ith outcome occurring.
For continuous probability density functions (PDFs), the probability of an event x be-
tween the lower limit L and the upper limit U is given by:

P(L < x < U) = ∫_L^U f(x) dx.    (15.3)

When the lower and upper limits span the total range of the distribution, the value of
P(Lowest Value ≤ x ≤ Highest Value) will be 1.0. When the limits cover less than the total range,
the probability will be less than 1.0.

The formula for the expected value of the distribution, that is the mean μ,
for the continuous PDF, which involves an integral, is:

E(x) = μ = ∫_L^U x f(x) dx,    (15.4)

where
E(x) = expected value of the variable or mean of x
μ = symbol for mean
f(x) = continuous probability distribution function of variable x
U = upper limit of the continuous probability distribution of variable x
L = lower limit of the continuous probability distribution of variable x.

The formula for the variance of the continuous probability distribution, which also involves
an integral, is:

Var(X) = σ² = ∫_L^U [x − E(x)]² f(x) dx   (continuous case)

σ² = ∫_L^U x² f(x) dx − [E(X)]².

This follows the same procedure as in the discrete case, except integrals are used instead of sum-
mations, and results in:

σ² = E(X²) − [E(X)]².    (15.5)

Examples will now be presented illustrating the use of discrete probability analysis,
followed by some continuous probability examples.
15.2 DISCRETE PROBABILITY EXAMPLES
15.2.1 DONNIE THE DEALMAKER
Donnie the Dealmaker has invested in a product for which he has arranged three suppliers and
has three major customers. He needs all the suppliers, the customer demand is high, and the
suppliers and customers are completely independent of each other. The information on the proportion of
the total product supplied by each supplier and its cost, as well as the proportion of the total sold to
each customer and its price, is in Table 15.1. This problem follows a procedure illustrated by Garvey [3]. All
of the product supplied by the suppliers will be sold to the customers.

Table 15.2 shows the revenues, revenue probabilities, costs, cost probabilities, profit
amounts, profit probabilities, expected profits, and E(X²) contributions. Donnie the Dealmaker wants to know
what he will make as a profit and the standard deviation.
The total of the expected profit is 11.0, and that is the expected profit with these suppliers
and customers and probabilities, with no inventory or shortage problems. The variance and
standard deviation can be found using the last two columns of Table 15.2 and Equation (15.2):

σ² = E(X²) − [E(X)]² = 206 − [11]² = 206 − 121 = 85.
Table 15.1: Supplier and customer share of product and prices

Supplier    Supplier Share of Product    Supplier Price to Dealmaker
A           0.2                          30
B           0.5                          40
C           0.3                          50

Customer    Customer Share of Product    Customer Price Paid to Dealmaker
X           0.1                          40
Y           0.6                          50
Z           0.3                          60

Table 15.2: Revenues, costs, and profits for Donnie the Dealmaker's product investment

Revenue Amount    Revenue Probability    Cost Amount    Cost Probability    Profit Amount    Profit Probability    Expected Profit    E(X²)
40                0.1                    30             0.2                  10              0.02                   0.2                 2
40                0.1                    40             0.5                   0              0.05                   0.0                 0
40                0.1                    50             0.3                 -10              0.03                  -0.3                 3
50                0.6                    30             0.2                  20              0.12                   2.4                48
50                0.6                    40             0.5                  10              0.30                   3.0                30
50                0.6                    50             0.3                   0              0.18                   0.0                 0
60                0.3                    30             0.2                  30              0.06                   1.8                54
60                0.3                    40             0.5                  20              0.15                   3.0                60
60                0.3                    50             0.3                  10              0.09                   0.9                 9
Totals                                                                                       1.00                  11.0               206
The standard deviation is the square root of the variance:

σ = 85^(1/2) = 9.22.

Dealmaker Donnie will make an average profit of $11 per sale, but the profit on any individual sale can range from -$10 to +$30 in $10 increments. A loss occurs on only one event, which has a 3% probability (0.03 in Table 15.2) of occurring.
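The discrete calculations above are easy to script. The following Python sketch is illustrative only; the profit outcomes and probabilities are taken directly from Table 15.2, and it reproduces the expected profit, variance, and standard deviation.

```python
# Illustrative sketch: discrete expected value, variance, and standard deviation
# using the profit outcomes and probabilities from Table 15.2.
profits = [10, 0, -10, 20, 10, 0, 30, 20, 10]
probs   = [0.02, 0.05, 0.03, 0.12, 0.30, 0.18, 0.06, 0.15, 0.09]

mean = sum(x * p for x, p in zip(profits, probs))        # E(X) = 11.0
e_x2 = sum(x * x * p for x, p in zip(profits, probs))    # E(X^2) = 206
variance = e_x2 - mean ** 2                              # Equation (15.2): 85
std_dev = variance ** 0.5                                # about 9.22

print(mean, e_x2, variance, round(std_dev, 2))
```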
15.2.2 THE INNOVATIVE 3D RAPID PROTOTYPING AND TOOLING
CENTER
The Innovative 3-D Rapid Prototyping and Tooling Center example in Sections 14.2.1
and 14.2.3 had the following capacity values and present worth values as presented in Table 15.3.
The probabilities for each capacity level are as assigned and thus one can determine the expected
value (mean), variance, and standard deviation and then determine the mean value of the present
worth:
Table 15.3: Innovative 3D rapid prototyping and tooling center data

| Processing Capacity x (kg/hr) | Probability of Event p(x) | Present Worth (15%), PW/10³ | x·p(x) | x²·p(x) |
| 80 | 0.20 | -1,006 | 16 | 1,280 |
| 90 | 0.20 | 5,672 | 18 | 1,620 |
| 100 | 0.30 | 12,429 | 30 | 3,000 |
| 110 | 0.20 | 19,186 | 22 | 2,420 |
| 120 | 0.10 | 25,944 | 12 | 1,440 |
| Totals | 1.00 | | 98 | 9,760 |
Mean = E(x) = μ = Σ x × p(x) = 98
E(x²) = Σ x² × p(x) = 9,760
Variance = σ² = E(x²) − (E(x))² = 9,760 − 98² = 9,760 − 9,604 = 156
Standard Deviation = σ = square root of the variance = 156^(1/2) = 12.49
The expected value (mean) of the processing capacity is 98 kg/hr, which is less than the
designed operating capacity of 100, the variance is 156 (kg/hr)2, and the standard deviation
is 12.49 kg/hr. The present worth as a function of the processing capacity from the final version of Equation (14.2) was:

PW(15%) = 675,731 × PC − 55,144,188.     (14.2)

The mean of the present worth would be:

μ(PW(15%)) = $675,731 × 98 − 55,144,188 = $11,077,450.
Since the process capacity mean is lower than the base processing capacity value of 100, the
present worth value is also lower than its original value of $12,428,900. The standard deviation
for the present worth is approximately $8,422,000.
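A small Python sketch, using only the capacity levels and probabilities of Table 15.3 and the linear present worth relationship of Equation (14.2), reproduces these values; it is a check on the hand calculation, not a general tool.

```python
# Illustrative sketch: capacity mean and standard deviation (Table 15.3)
# propagated through the linear PW relationship of Equation (14.2).
capacities = [80, 90, 100, 110, 120]            # kg/hr
probs      = [0.20, 0.20, 0.30, 0.20, 0.10]

mu = sum(x * p for x, p in zip(capacities, probs))                # 98 kg/hr
var = sum(x * x * p for x, p in zip(capacities, probs)) - mu**2   # 156 (kg/hr)^2
sigma = var ** 0.5                                                # about 12.49 kg/hr

pw_mean  = 675_731 * mu - 55_144_188     # about $11,077,450
pw_sigma = 675_731 * sigma               # about $8.4 million
print(mu, var, round(sigma, 2), pw_mean, round(pw_sigma))
```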
15.3 CONTINUOUS PROBABILITY MODELS
As mentioned previously, the probability at a specific value with a continuous distribution is zero, and probabilities are calculated for a range of values between a lower limit, L, and an upper limit, U. The probability is the area between the two limits of the PDF. There are numerous
continuous probability density functions and only two will be considered which are the normal
distribution and the triangular distribution. The normal distribution is the most commonly used
distribution of all the probability density functions, but the triangular distribution is commonly
used in estimating and in determining ranges of cost data.
15.3.1 NORMAL DISTRIBUTION PROPERTIES
The normal distribution is frequently assumed for many problems because the central limit theorem indicates that the means from samples are distributed normally. For example, if the cost of an item is the total of three items that are independent random variables, then the distribution of the total cost will be approximately normal even if the independent random variables are not normally distributed.
Some of the properties about means and variances of distributions are:
E(Σ (i=1 to N) X_i) = E(X_1) + E(X_2) + ⋯ + E(X_N)   for i = 1, 2, …, N     (15.6)

E(aX_1 + bX_2) = aE(X_1) + bE(X_2)     (15.7)

Variance(Σ (i=1 to N) X_i) = Variance(X_1) + Variance(X_2) + ⋯ + Variance(X_N)     (15.8)

σ = standard deviation = square root (variance)

Standard Deviation(aX_1 ± bX_2) = [a² variance(X_1) + b² variance(X_2)]^(1/2)     (15.9)

The cumulative probability for the variable X up to a specific value c is given by

Probability(X < c) = Φ[(c − μ)/σ] = Φ(Z).     (15.10)
Probability values of (cid:136)(Z) are in Table 15.4.
Basic Normal Probability Examples
Use Equation (15.10) and Table 15.4 to answer the following questions and gain familiarity in obtaining probability values. Table 15.4 is useful, but the NORMSDIST function in Excel®, or an equivalent function in another software package, is easier, faster, and more accurate than a look-up table.
A. If the mean of a normal distribution is 20 and the standard deviation is 10, what is the
probability that a random variable selected from that distribution is less than zero?
Prob(X < 0) = Φ[(0 − 20)/10] = Φ[−2.0] = 0.023 = 2.3%
Table 15.4: Probability values Φ(Z) for the standard normal distribution, Z-values in 0.05 increments

| Z | Φ(Z) | Z | Φ(Z) | Z | Φ(Z) | Z | Φ(Z) | Z | Φ(Z) | Z | Φ(Z) |
| -3.00 | 0.001 | -2.00 | 0.023 | -1.00 | 0.159 | 0.00 | 0.500 | 1.00 | 0.841 | 2.00 | 0.978 |
| -2.95 | 0.002 | -1.95 | 0.026 | -0.95 | 0.171 | 0.05 | 0.520 | 1.05 | 0.853 | 2.05 | 0.980 |
| -2.90 | 0.002 | -1.90 | 0.029 | -0.90 | 0.184 | 0.10 | 0.540 | 1.10 | 0.864 | 2.10 | 0.982 |
| -2.85 | 0.002 | -1.85 | 0.032 | -0.85 | 0.198 | 0.15 | 0.560 | 1.15 | 0.875 | 2.15 | 0.984 |
| -2.80 | 0.003 | -1.80 | 0.036 | -0.80 | 0.212 | 0.20 | 0.579 | 1.20 | 0.885 | 2.20 | 0.986 |
| -2.75 | 0.003 | -1.75 | 0.040 | -0.75 | 0.227 | 0.25 | 0.599 | 1.25 | 0.894 | 2.25 | 0.988 |
| -2.70 | 0.004 | -1.70 | 0.045 | -0.70 | 0.242 | 0.30 | 0.618 | 1.30 | 0.903 | 2.30 | 0.990 |
| -2.65 | 0.004 | -1.65 | 0.049 | -0.65 | 0.258 | 0.35 | 0.637 | 1.35 | 0.911 | 2.35 | 0.991 |
| -2.60 | 0.005 | -1.60 | 0.055 | -0.60 | 0.284 | 0.40 | 0.655 | 1.40 | 0.919 | 2.40 | 0.992 |
| -2.55 | 0.005 | -1.55 | 0.061 | -0.55 | 0.291 | 0.45 | 0.674 | 1.45 | 0.926 | 2.45 | 0.993 |
| -2.50 | 0.006 | -1.50 | 0.067 | -0.50 | 0.308 | 0.50 | 0.691 | 1.50 | 0.933 | 2.50 | 0.994 |
| -2.45 | 0.007 | -1.45 | 0.073 | -0.45 | 0.326 | 0.55 | 0.701 | 1.55 | 0.939 | 2.55 | 0.995 |
| -2.40 | 0.008 | -1.40 | 0.081 | -0.40 | 0.345 | 0.60 | 0.726 | 1.60 | 0.945 | 2.60 | 0.995 |
| -2.35 | 0.009 | -1.35 | 0.088 | -0.35 | 0.363 | 0.65 | 0.742 | 1.65 | 0.951 | 2.65 | 0.996 |
| -2.30 | 0.011 | -1.30 | 0.097 | -0.30 | 0.382 | 0.70 | 0.758 | 1.70 | 0.955 | 2.70 | 0.996 |
| -2.25 | 0.012 | -1.25 | 0.106 | -0.25 | 0.401 | 0.75 | 0.773 | 1.75 | 0.960 | 2.75 | 0.997 |
| -2.20 | 0.014 | -1.20 | 0.115 | -0.20 | 0.421 | 0.80 | 0.788 | 1.80 | 0.964 | 2.80 | 0.997 |
| -2.15 | 0.016 | -1.15 | 0.125 | -0.15 | 0.440 | 0.85 | 0.802 | 1.85 | 0.968 | 2.85 | 0.998 |
| -2.10 | 0.018 | -1.10 | 0.136 | -0.10 | 0.460 | 0.90 | 0.816 | 1.90 | 0.971 | 2.90 | 0.998 |
| -2.05 | 0.020 | -1.05 | 0.147 | -0.05 | 0.480 | 0.95 | 0.829 | 1.95 | 0.974 | 2.95 | 0.998 |
| -2.00 | 0.023 | -1.00 | 0.159 | 0.00 | 0.500 | 1.00 | 0.841 | 2.00 | 0.978 | 3.00 | 0.999 |
B. What is the probability that the random variable selected is less than 15?
Prob(X < 15) = Φ[(15 − 20)/10] = Φ[−0.5] = 0.308 = 30.8%
C. What is the probability that the random variable selected is greater than or equal to 30?
Prob(X ≥ 30) = 1 − Prob(X < 30) = 1 − Φ[(30 − 20)/10] = 1 − Φ[+1.0] = 1 − 0.841 = 0.159 or 15.9%
D. What is the probability that the random variable selected is between 15 and 30?
Prob(15 ≤ X ≤ 30) = Prob(X ≤ 30) − Prob(X < 15)
= Φ[(30 − 20)/10] − Φ[(15 − 20)/10]
= Φ(+1.00) − Φ(−0.50)
= 0.841 − 0.308 = 0.533 = 53.3%
E. If one assumes that Donnie the Dealmaker's data are distributed normally, the probability of a loss would be:

Prob(X < 0) = Φ[(0 − 11)/9.22] = Φ[−1.193] ≈ 0.027 or 2.7%.

This is very close to the discrete probability of 3.0%.
The risk of a project is usually considered to be the probability of obtaining a loss, but it can also be specified as a loss at a specific required return.
Cash Flow Normal Distribution Example Problem
The cash flows from a project are presented in Table 15.5. The required rate of return is 15%.
What is the present worth of the cash flows on the project with an investment of $40,000,
expected mean revenues, and expected standard deviation of the cash flows? What is the prob-
ability that the cash flow from the project has a loss?
Table 15.5: Cash flow example data

| Year | Expected Cash Flow μ ($) | Standard Deviation of Cash Flow σ | Variance of Cash Flow σ² |
| 0 | -40,000 | 1,000 | 1,000,000 |
| 1 | 15,000 | 1,500 | 2,250,000 |
| 2 | 20,000 | 2,000 | 4,000,000 |
| 3 | 20,000 | 3,000 | 9,000,000 |
The first steps are to calculate the mean, variance, and standard deviation of the present
worth of the cash flows at the required rate of return. This can be done by:
PW(15%) = -40,000 + 15,000[P/F,15,1] + 20,000[P/F,15,2] + 20,000[P/F,15,3]
= -40,000 + 15,000 × 0.8696 + 20,000 × 0.7561 + 20,000 × 0.6575
μ = PW(15%) = $1,316.
The variance is calculated by:
V(PW) = σ(PW)² = 1,000² × 1² + 1,500² × (0.8696)² + 2,000² × (0.7561)² + 3,000² × (0.6575)²
σ(PW)² = 1 × 10^6 + 1.7015 × 10^6 + 2.2867 × 10^6 + 3.8908 × 10^6 = 8.8790 × 10^6
σ(PW) = 2.980 × 10^3 = 2,980.

P(PW < 0) = Φ[(0 − 1,316)/2,980] = Φ[−0.4416] ≈ Φ[−0.45] ≈ 0.326 ≈ 0.33 = 33%

This indicates the mean value of the present worth is $1,316, the standard deviation is $2,980, and there is a 33% chance that the project will not make the desired return of 15%; that is, it will have a negative present worth value for the desired MARR 33% of the time.
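The same calculation can be scripted. The sketch below uses the cash-flow means and standard deviations of Table 15.5 with 15% single-payment present worth factors; it is a check under those assumptions only.

```python
# Illustrative sketch: mean, standard deviation, and loss probability of the
# present worth of the Table 15.5 cash flows at a 15% required return.
from math import erf, sqrt

def phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

i = 0.15
means  = [-40_000, 15_000, 20_000, 20_000]   # expected cash flows, years 0-3
sigmas = [1_000, 1_500, 2_000, 3_000]        # standard deviations, years 0-3

pf = [(1 + i) ** -n for n in range(4)]                    # (P/F, 15%, n) factors
pw_mean  = sum(m * f for m, f in zip(means, pf))          # about $1,316
pw_var   = sum((s * f) ** 2 for s, f in zip(sigmas, pf))  # about 8.88e6
pw_sigma = pw_var ** 0.5                                  # about $2,980

print(round(pw_mean), round(pw_sigma), round(phi(-pw_mean / pw_sigma), 3))  # ~0.33
```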
15.3.2 TRIANGULAR DISTRIBUTION PROPERTIES
The triangular distribution is described by a lower limit, the most likely value (which is also
called the mode) and an upper limit. Costs of individual items tend to follow the triangular
distribution rather than the normal distribution. The equations for the cumulative PDFs, mean,
and variance are listed in many sources, such as Wikipedia [4] and Garvey [2].
P(X < x) = (x − L)² / [(U − L) × (M − L)]     for L ≤ x ≤ M     (15.11a)
P(X < x) = 1 − (U − x)² / [(U − L) × (U − M)]     for M ≤ x ≤ U     (15.11b)

where

L = lower limit
U = upper limit
M = most likely value, also called the mode

μ = E(x) = (L + M + U)/3     (15.12)
σ² = Var(x) = (1/18) × [(M − L) × (M − U) + (U − L)²]     (15.13)
Basic Triangular Probability Example
A variable follows the triangular distribution with a lower limit of 7, a most likely value of 12, and an upper limit of 20. What are the mean, the standard deviation, the probability that the random variable is less than 10, the probability that it is between 8 and 10, and the probability that it is greater than 18?
μ = E(x) = (L + M + U)/3 = (7 + 12 + 20)/3 = 13
σ² = Var(x) = (1/18) × [(M − L)(M − U) + (U − L)²] = (1/18) × [(12 − 7)(12 − 20) + (20 − 7)²] = 7.17
σ = (7.17)^(1/2) = 2.68

P(x < 10) = (10 − 7)² / [(20 − 7) × (12 − 7)] = 0.138 = 13.8%

P(8 < x < 10) = P(x < 10) − P(x < 8)
P(x < 8) = (8 − 7)² / [(20 − 7) × (12 − 7)] = 0.015 = 1.5%

Therefore,
P(8 < x < 10) = P(x < 10) − P(x < 8) = 13.8% − 1.5% = 12.3%

P(X > 18) = 1 − P(x < 18) = 1 − [1 − (20 − 18)² / ((20 − 7) × (20 − 12))] = 1 − [1 − 0.038] = 0.038 = 3.8%.
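A short Python sketch, using the cumulative probabilities of Equations (15.11a)/(15.11b) and the moment formulas (15.12) and (15.13) with this example's values L = 7, M = 12, U = 20, reproduces the results above.

```python
# Illustrative sketch: triangular distribution mean, variance, and cumulative
# probabilities from Equations (15.11)-(15.13) for L = 7, M = 12, U = 20.
L, M, U = 7.0, 12.0, 20.0

mean  = (L + M + U) / 3.0                               # 13
var   = ((M - L) * (M - U) + (U - L) ** 2) / 18.0       # about 7.17
sigma = var ** 0.5                                      # about 2.68

def tri_cdf(x):
    """P(X < x) for L <= x <= U, Equations (15.11a)/(15.11b)."""
    if x <= M:
        return (x - L) ** 2 / ((U - L) * (M - L))
    return 1.0 - (U - x) ** 2 / ((U - L) * (U - M))

print(mean, round(var, 2), round(sigma, 2))
print(round(tri_cdf(10), 3))                  # about 0.138
print(round(tri_cdf(10) - tri_cdf(8), 3))     # about 0.123
print(round(1 - tri_cdf(18), 3))              # about 0.038
```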
PERT and the Cooper and Davidson Approach
The triangular distribution is often used in cost analysis as individual cost components tend
to follow the triangular distribution rather than the normal distribution, but the sum of cost
components via the central limit theorem will have a normal distribution. The PERT [5] and
Cooper and Davidson [6] approaches use modified triangular versions with more weight on
the mode than on the outside limits. The Program Evaluation and Review Technique (PERT) utilizes the triangular distribution with high (optimistic), low (pessimistic), and most likely values, which is similar to the Optimistic-Pessimistic Analysis of Chapter 14. PERT was first used in the late 1950s in the development of the Polaris nuclear submarine program.
As in the use of the Optimistic-Pessimistic Analysis Technique, it is assumed that the high
and low values are known (or can be estimated) and are the endpoints of the distribution. The
normal distribution with its endpoints of infinity (plus and minus) is not practical in cost analysis
situations as negative costs would not be reasonable for a single component. It is very difficult
to estimate the end points, and Cooper and Davidson (C&D) have modified the parameters so
the end points are 10% values; that is, the low estimate implies there is only a 10% chance of
having a value lower than this estimate and the high estimate implies there is only a 10% chance
of having a value higher than this estimate. Using this definition, their expressions for the mean
and standard deviation are:
μ = [H + 2M + L]/4     (15.14)
σ = [H − L]/2.65,     (15.15)
where
H = high estimate
L = low estimate
M = most likely value (mode)
2.65 = value for an 80% confidence level.
Note that the expression for the mean for the previous problem would be:

μ = [H + 2M + L]/4 = (20 + 2 × 12 + 7)/4 = 12.75.
This is less than the mean value of 13 for the actual triangular distribution, as the mode has a higher weight in the approximation expression. It could be higher or lower, depending on whether 2M is higher or lower than (H + L). Note that the mode has the same weight as the total of the high and low estimates in determining the mean.
The equations used in PERT are very similar to those of C&D, but they assume the
high and low values are the actual high and low values with no probability of being outside the
limits and a confidence limit of approximately 100%. The standard deviation equation represents
the range divided by 6, which is occasionally used to estimate the standard deviation when the
normal distribution is used. Those equations for PERT analysis are:
μ = [H + 4M + L]/6     (15.16)
σ = [H − L]/6,     (15.17)
where
H = high estimate
L = low estimate
M = most likely value (mode)
6 = value for an approximately 100% confidence level.
Note that the expression for the mean for the previous problem would be:

μ = [H + 4M + L]/6 = (20 + 4 × 12 + 7)/6 = 12.50.
This is less than the value of 13 for the actual triangular distribution, as the mode has a higher weight in the approximation expression. It could be higher or lower, depending on whether 4M is higher or lower than 2 × (H + L). In these formulas the most likely value, or mode, is used; if the distribution were normal, the mean and the mode would be equal. Note that the mode has twice the weight of the sum of the high and low estimates in PERT vs. equal weights in the C&D approach.
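The two weighting schemes are easy to compare side by side. The Python sketch below simply evaluates Equations (15.14)-(15.17) for the same H = 20, M = 12, L = 7 example; the variable names are conveniences of the sketch.

```python
# Illustrative sketch: Cooper & Davidson vs. PERT estimates of the mean and
# standard deviation, Equations (15.14)-(15.17), for H = 20, M = 12, L = 7.
H, M, L = 20.0, 12.0, 7.0

cd_mean    = (H + 2 * M + L) / 4      # 12.75  (Equation 15.14)
cd_sigma   = (H - L) / 2.65           # about 4.91 (Equation 15.15)
pert_mean  = (H + 4 * M + L) / 6      # 12.50  (Equation 15.16)
pert_sigma = (H - L) / 6              # about 2.17 (Equation 15.17)

print(cd_mean, round(cd_sigma, 2), pert_mean, round(pert_sigma, 2))
```

With identical H and L values, the PERT standard deviation is the smaller of the two because the range is divided by 6 rather than 2.65, which is the point made at the end of Section 15.3.2.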
Triangular Cash Flow Analysis Using Cooper and Davidson Approach
A present worth cash flow analysis, in million dollar units, was performed using the Cooper and Davidson [6] equations under the assumption that there is only a 10% chance that a value will be higher than the high estimate and only a 10% chance that it will be lower than the low estimate. This represents an 80% confidence range, and the values in Table 15.6 were obtained:
Table 15.6: Cash flow analysis data for C&D approach

| Item | Most Likely | Range (%) | Low Value | High Value | Mean μ | Standard Deviation σ |
| Revenue | 150 | -20/+10 | 120 | 165 | 146.25 | 16.98 |
| Expenses | 60 | -10/+20 | 54 | 72 | 61.5 | 6.79 |
| Investment | 75 | -5/+5 | 71.25 | 78.75 | 75.0 | 2.83 |
| Cash Flow | +15 | | | | μ = +9.75 | σ = 18.51 |
σ² = 16.98² + 6.79² + 2.83² = 342.81
σ = 342.81^(1/2) = 18.51

P[CF < 0] = Φ[(0 − 9.75)/18.51] = Φ[−0.526] ≈ 0.30 = 30%

MinCFAT = 9.75 − 2.65 × 18.51/2 = −14.8
MaxCFAT = 9.75 + 2.65 × 18.51/2 = +34.3

The range would be −14.8 to 34.3, or 49.1 million dollars.
Traditional analysis indicates that the project will make 15 million dollar units using the
most likely values. The C&D risk analysis indicates that, due to the variability in the data, the
expected value of the CFAT is only 9.75 million because of the range of cash flow components.
Also, there is a 10% chance that the project will have a cash flow of a negative 14.8 million or
lower. There is also a 10% chance the project will have a cash flow greater than 34.3 million.
The relatively high probability of a negative cash flow at approximately 30% would tend to cause
rejection of the project.
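The combination of the component estimates in Table 15.6 into the cash flow figures can be checked with a short Python sketch; the component means and standard deviations are taken from the table, and the normal CDF stands in for Table 15.4.

```python
# Illustrative sketch: combine the C&D component estimates of Table 15.6 into
# the cash flow mean, standard deviation, 80% range, and loss probability.
from math import erf, sqrt

def phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rev_mu, rev_sd = 146.25, 16.98    # revenue (million dollar units)
exp_mu, exp_sd = 61.5, 6.79       # expenses
inv_mu, inv_sd = 75.0, 2.83       # investment

cf_mu = rev_mu - exp_mu - inv_mu                     # +9.75
cf_sd = sqrt(rev_sd**2 + exp_sd**2 + inv_sd**2)      # about 18.5

low  = cf_mu - 2.65 * cf_sd / 2                      # about -14.8 (10% chance lower)
high = cf_mu + 2.65 * cf_sd / 2                      # about +34.3 (10% chance higher)

print(round(cf_mu, 2), round(cf_sd, 2), round(low, 1), round(high, 1))
print(round(phi((0 - cf_mu) / cf_sd), 2))            # about 0.30
```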
Triangular Cash Flow Analysis Using PERT Approach
An analysis using PERT would have a larger range for the higher and lower values as that
assumes the lower and upper limits are absolute values; that is there is no chance of higher
or lower values. The following uses estimated values for the higher and lower limits, at the 3 standard deviation level, of 30 and 50% for the revenues and expenses. These are the upper accuracy range limits for feasibility studies (see Table 15.7).

Table 15.7: Cash flow analysis data for PERT approach

| Item | Most Likely | Range (%) | Low Value | High Value | Mean μ | Standard Deviation σ |
| Revenue | 150 | -50/+30 | 75 | 195 | 145 | 20 |
| Expenses | 60 | -30/+50 | 42 | 90 | 62 | 8 |
| Investment | 75 | -10/+10 | 67.5 | 82.5 | 75 | 2.5 |
| Cash Flow | +15 | | | | μ = +8.0 | σ = 21.69 |

σ² = 20² + 8² + 2.5² = 470.25
σ = 470.25^(1/2) = 21.69
P[CF < 0] = Φ[(0 − 8.0)/21.69] = Φ[−0.369] ≈ 0.35 = 35%

MinCFAT = 8.0 − 6 × 21.69/2 = −57.1
MaxCFAT = 8.0 + 6 × 21.69/2 = +73.1

The range would be −57.1 to 73.1, or 130.2 million dollars.
Traditional analysis using the most likely values indicates that the project will make the required return plus an additional 15 million dollar units. This PERT risk analysis indicates that, due to the variability in the data, the expected value of the additional CFAT is only 8 million. The estimate range is from a low of −57.1 million to a positive 73.1 million. The range is larger than in the C&D analysis, as the confidence level would be nearly 100% instead of 80%. The higher probability of a negative cash flow, approximately 35%, would tend to cause rejection of the project. The PERT values result in a wider estimate range and a higher probability of a negative cash flow. The larger range of the PERT analysis, approximately 130 million dollars, vs. that of the C&D, approximately 50 million dollars, is what had led to the development of the C&D approach.
If the exact same ranges were used for the PERT and C&D models, the PERT would
have a smaller range as it divides the range by 6 vs. 2.65 for C&D. The PERT method also puts
more weight on the mode in calculating the distribution mean.
15.4 RISK SUMMARY
The basic risk analysis methods of Sensitivity Analysis and Optimistic-Pessimistic Analy-
sis give a point estimate value, but no indication of the range of possible error. The sensitivity
analysis gives an indication of the effect of critical variables upon the engineering economic
expression by examining specific cases to determine which variables are critical for cost con-
trol of the process. The optimistic-pessimistic analysis is generally restricted to two variables to
determine which variable is more important.
The discrete probability analysis requires that the discrete probability density function for
each of the possible events be known and when the number of events is numerous, it can be
tedious to evaluate. However, the mean can be calculated to determine the expected profit of
the process and the probability of a loss can be determined by summing the probabilities of those
events in which a loss occurs.
The continuous probability analysis uses the data and the probability density function for
that data set to calculate the mean and standard deviation. An individual value has zero probability, so an event represents a continuous range of values. The two primary density
functions for the analysis of cost data are the triangular distribution and the normal distribution.
The normal distribution can be used for analyzing the sum of individual distributions which
may not be normal, but will approach the normal distributions via the central limit theorem.
The triangular distribution is better for estimating individual revenue and cost components as
the most likely value is usually not the mean in cost analysis and the total cost can be estimated
with the normal distribution via the central limit theorem.
The advantage of probability analysis is that it is able not only to determine a range but also to determine the probability of a loss. The PERT method is more widely applied and has a higher confidence level than the C&D approach, but the wider range of PERT tends to lower the estimated mean and results in a higher standard deviation, leading to a higher probability of a loss.
The variability of the estimate and confidence level of the estimate are important in evaluating
the risk of the project.
15.5 REFERENCES
[1] Lindgren, B. W. and McElrath, G. W., Introduction to Probability and Statistics, The McMillan Company, New York, p. 277, 1959.
[2] Garvey, Paul R., Probability Methods for Cost Uncertainty Analysis, Marcel Dekker, Inc., New York, pp. 109–112, 2000.
[3] Garvey, Paul R., Probability Methods for Cost Uncertainty Analysis, Marcel Dekker, Inc., New York, pp. 51–53, 2000.
[4] Wikipedia Web Page, (2-17-2018). https://en.wikipedia.org/wiki/Triangular_distribution
[5] Wikipedia Web Page, (2-17-2018). https://en.wikipedia.org/wiki/Program_evaluation_and_review_technique
[6] Cooper, D. O. and Davidson, L. B., The parameter method for risk analysis, Chemical Engineering Progress, pp. 73–78, November 1976.
15.6 EVALUATIVE QUESTIONS
1. Using the following discrete data matrix, complete the matrix (Table 15.8) and calculate the mean and standard deviation of the Present Worth (15%).
PW(15%) = 2,457,204 × SP($/kg) − 85,859,244
Table 15.8: Selling price data matrix

| Sales Price x ($/kg) | Probability p(x) | Present Worth (15%), PW/10³ | Expected Selling Price x·p(x) | E(x²) for Variance x²·p(x) | Expected Present Worth (15%) | E(x²) for Variance of PW(15%) |
| 30 | 0.10 | | | | | |
| 34 | 0.30 | | | | | |
| 38 | 0.30 | | | | | |
| 42 | 0.30 | | | | | |
| 46 | 0.10 | | | | | |
| Totals | 1.00 | | | | | |

μ (Selling Price) = _______    σ² (Selling Price) = _______
μ (Present Worth (15%)) = _______    σ² (Present Worth (15%)) = _______
2. Donnie the Dealmaker has a revised set of revenue probabilities. Complete Table 15.9 and
answer the following questions.
(a) What is the expected profit?
(b) What is the variance of the profit?
(c) What is the standard deviation of the profit?
(d) What is the range for the possible profit scenarios?
(e) What is the actual probability of a loss?
(f ) What is the probability of a loss using the normal distribution?
3. The mean of a normal distribution is 1,000 and the standard deviation is 100.
(a) What is the probability that a random variable selected from that distribution is less
than 700?
(b) What is the probability that the random variable selected is less than 900?
Table 15.9: Revenues, costs, and profits for Donnie the Dealmaker's revised product investment

| Revenue Amount | Revenue Probability | Cost Amount | Cost Probability | Profit Amount | Profit Probability | Expected Profit | E(X²) |
| 40 | 0.3 | 30 | 0.2 | 10 | 0.06 | 0.6 | 6 |
| 40 | 0.3 | 40 | 0.5 | 0 | 0.15 | 0.0 | 0 |
| 40 | 0.3 | 50 | 0.3 | -10 | 0.09 | -0.9 | 9 |
| 50 | 0.5 | 30 | 0.2 | 20 | | | |
| 50 | 0.5 | 40 | 0.3 | 10 | | | |
| 50 | 0.5 | 50 | 0.5 | 0 | | | |
| 60 | 0.2 | 30 | 0.2 | 30 | | | |
| 60 | 0.2 | 40 | 0.3 | 20 | | | |
| 60 | 0.2 | 50 | 0.5 | 10 | | | |
| Totals | | | | | 1.00 | | |
(c) What is the probability that the random variable selected is greater than or equal to
1,200?
(d) What is the probability that the random variable is between 900 and 1,100?
4. A variable follows the triangular distribution and has the lower limit of 600, an upper limit
of 2,200 and the most likely value of 1,200.
(a) What is the mean?
(b) What is the standard deviation?
(c) What is the probability that the variable is less than 800?
(d) What is the probability that the variable is greater than 1,500?
(e) What is the probability that the variable is between 800 and 1,500?
5. The cash flows for a project with an initial investment of $20,000 are in Table 15.10.
(a) If the MARR is 10%, what is the present worth of the cash flows?
(b) What is the variance of the cash flows?
(c) What is the standard deviation of the cash flows?
(d) What is the probability that the cash flows will be negative?
(e) What is the probability that the cash flow is less than 2000?
Table 15.10: Cash flows for a project with an initial investment of $20,000

| Year | Expected Cash Flow μ ($) | Standard Deviation of Cash Flow σ | Variance of Cash Flow σ² |
| 0 | -20,000 | 100 | 10,000 |
| 1 | 8,000 | 200 | 40,000 |
| 2 | 9,000 | 300 | 90,000 |
| 3 | 10,000 | 400 | 160,000 |
6. The values in Table 15.11 were obtained in response to the bids for a contract. The bids are assumed to follow a normal distribution.
(a) What is the actual range of the bids?
(b) What is the mean value of the bid distribution?
(c) What is the standard deviation for the bid distribution?
(d) What is the probability that a bid would be less than $350,000?
(e) What is the probability that a bid would be greater than $450,000?
Table 15.11: Bids for a contract

| Bid Number | Estimate (1,000 $ Units) |
| 0 | 350 |
| 1 | 420 |
| 2 | 375 |
| 3 | 390 |
| 5 | 370 |
7.
(a) Cost/value engineers were sent back to re-examine a project with the new data provided (see Table 15.12). Using the C&D approach, determine the "risk." Would you recommend the project?
– What is the mean value?
– What is the standard deviation?
– What is the probability the cash flow is negative?
– What is the probability that the cash flow is less than 10?
– What is the probability the cash flow is less than 20?
– Would you recommend the project? Why or why not?
Table 15.12: Cash flow analysis data for C&D approach

| Item | Most Likely | Range (%) | Low Value | High Value | Mean μ | Standard Deviation σ |
| Revenue | 140 | -15/+5 | | | | |
| Expenses | 50 | -10/+15 | | | | |
| Investment | 70 | -5/+5 | | | | |
| Cash Flow | +20 | | | | μ = | σ = |
(b) Using the PERT approach (see Table 15.13), determine the "risk." Would you recommend the project?
– What is the mean value?
– What is the standard deviation?
– What is the probability the cash flow is negative?
– What is the probability that the cash flow is less than 10?
– What is the probability the cash flow is less than 20?
– Would you recommend the project? Why or why not?
Table 15.13: Cash flow analysis data for PERT approach

| Item | Most Likely | Range (%) | Low Value | High Value | Mean μ | Standard Deviation σ |
| Revenue | 140 | -15/+5 | | | | |
| Expenses | 50 | -10/+15 | | | | |
| Investment | 70 | -5/+5 | | | | |
| Cash Flow | +20 | | | | μ = | σ = |
8. The cash flows from a project are presented in Table 15.14. The required rate of return is
10%. What is the present worth of the expected cash flow on the project and what is the
probability that the cash flow from the project has a loss?
(a) What is the present worth of the expected cash flows at a MARR of 10%?
(b) What is the standard deviation of the cash flows for the project?
(c) What is the probability the project will have zero cash flows with the MARR of 10%?
(d) What is the probability the project will have zero cash flows with the MARR at 20%?
Table 15.14: Cash flows from a project

| Year | Expected Cash Flow μ ($) | Standard Deviation of Cash Flow σ | Variance of Cash Flow σ² |
| 0 | -9,000 | 100 | 10,000 |
| 1 | 4,000 | 200 | 40,000 |
| 2 | 4,000 | 400 | 160,000 |
| 3 | 4,000 | 600 | 360,000 |

APPENDIX A

Discrete and Continuous Compounding Factors
Please see the tables on the pages that follow.
Table A.1: Discrete compounding factors of economic expressions—Discrete payments and discrete interest
A. Single Payment
  Present Worth: find P given F, (P/F, i, n) = (1+i)^-n
  Future Worth (Compound Amount): find F given P, (F/P, i, n) = (1+i)^n
B. Uniform Payment (Uniform Series)
  Sinking Fund: find A given F, (A/F, i, n) = i/[(1+i)^n - 1]
  Capital Recovery: find A given P, (A/P, i, n) = [i(1+i)^n]/[(1+i)^n - 1]
  Compound Amount: find F given A, (F/A, i, n) = [(1+i)^n - 1]/i
  Present Worth: find P given A, (P/A, i, n) = [(1+i)^n - 1]/[i(1+i)^n]
C. Uniform Gradient Expressions — Standard Uniform Gradient
  Present Worth: find P given G, (P/G, i, n) = [(1+i)^n - 1 - ni]/[i²(1+i)^n]
  Future Worth: find F given G, (F/G, i, n) = [(1+i)^n - 1 - ni]/i²
  Uniform Series: find A given G, (A/G, i, n) = [(1+i)^n - 1 - ni]/[(1+i)^n - 1]
C. Uniform Gradient Expressions — Uniform Ramp Gradient
  Present Worth: find Pᴿ given G, (Pᴿ/G, i, n) = [(1+i)^(n+1) - 1 - i(n+1)]/[i²(1+i)^n]
  Future Worth: find Fᴿ given G, (Fᴿ/G, i, n) = [(1+i)^(n+1) - 1 - i(n+1)]/i²
  Uniform Series: find Aᴿ given G, (Aᴿ/G, i, n) = [(1+i)^(n+1) - 1 - i(n+1)]/[i(1+i)^n - 1]
D. Geometric Gradient Expressions — Geometric Gradient
  Present Worth: find P given A1, g, (P/A1,g, i, n) = [1 - ((1+g)^n/(1+i)^n)]/(i - g); if g = i: n/(1+i)
  Future Worth: find F given A1, g, (F/A1,g, i, n) = [(1+i)^n - (1+g)^n]/(i - g); if g = i: n(1+i)^(n-1)
  Uniform Series: find A given A1, g, (A/A1,g, i, n) = [i((1+i)^n - (1+g)^n)]/[(i - g)((1+i)^n - 1)]; if g = i: [ni(1+i)^(n-1)]/[(1+i)^n - 1]
D. Geometric Gradient Expressions — Escalation Gradient
  Present Worth: find Pᴇ given A1, ᴇ, (Pᴇ/A1,ᴇ, i, n) = [(1+ᴇ)/(ᴇ - i)][((1+ᴇ)/(1+i))^n - 1]; if ᴇ = i: n
  Future Worth: find Fᴇ given A1, ᴇ, (Fᴇ/A1,ᴇ, i, n) = [(1+ᴇ)/(ᴇ - i)][(1+ᴇ)^n - (1+i)^n]; if ᴇ = i: n(1+i)^n
  Uniform Series: find Aᴇ given A1, ᴇ, (Aᴇ/A1,ᴇ, i, n) = [i(1+ᴇ)/(ᴇ - i)][(1+ᴇ)^n - (1+i)^n]/[(1+i)^n - 1]; if ᴇ = i: ni(1+i)^n/[(1+i)^n - 1]
Notation: P = Present Worth; i = effective discrete interest rate per period; A = uniform end-of-period payments; n = number of periods; F = Future Worth; g = Geometric Gradient Rate; G = Uniform Gradient Amount; ᴇ = Escalation Gradient Rate; A1 = Initial Geometric Gradient Amount and Initial Escalation Gradient Amount.
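For quick use, a few of the discrete factors in Table A.1 can be evaluated directly. The Python sketch below covers only the single-payment and uniform-series factors, coded from the formulas as listed; the function names are conveniences of the sketch.

```python
# Illustrative sketch: selected discrete compounding factors from Table A.1.
def p_given_f(i, n):    # (P/F, i, n), single-payment present worth
    return (1 + i) ** -n

def f_given_p(i, n):    # (F/P, i, n), single-payment future worth
    return (1 + i) ** n

def a_given_p(i, n):    # (A/P, i, n), capital recovery
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def p_given_a(i, n):    # (P/A, i, n), uniform-series present worth
    return ((1 + i) ** n - 1) / (i * (1 + i) ** n)

# Example: the 15% single-payment factors used in the Chapter 15 cash flow example.
print(round(p_given_f(0.15, 1), 4))   # 0.8696
print(round(p_given_f(0.15, 2), 4))   # 0.7561
print(round(p_given_f(0.15, 3), 4))   # 0.6575
```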
Table A.2: Continuous compounding factors of economic expressions—Discrete payments and continuous interest
A. Single Payment
  Present Worth: find P given F, (P/F, r, n) = e^(-rn)
  Future Worth: find F given P, (F/P, r, n) = e^(rn)
B. Uniform Payment (Uniform Series)
  Sinking Fund: find A given F, (A/F, r, n) = (e^r - 1)/(e^(rn) - 1)
  Capital Recovery: find A given P, (A/P, r, n) = e^(rn)(e^r - 1)/(e^(rn) - 1)
  Future Worth: find F given A, (F/A, r, n) = (e^(rn) - 1)/(e^r - 1)
  Present Worth: find P given A, (P/A, r, n) = (e^(rn) - 1)/[e^(rn)(e^r - 1)]
C. Uniform Gradient Expressions — Standard Uniform Gradient
  Present Worth: find P given G, (P/G, r, n) = [(e^(rn) - 1) - n(e^r - 1)]/[(e^r - 1)² e^(rn)]
  Future Worth: find F given G, (F/G, r, n) = [(e^(rn) - 1) - n(e^r - 1)]/(e^r - 1)²
  Uniform Series: find A given G, (A/G, r, n) = [(e^(rn) - 1) - n(e^r - 1)]/[(e^r - 1)(e^(rn) - 1)]
C. Uniform Gradient Expressions — Uniform Ramp Gradient
  Present Worth: find Pᴿ given G, (Pᴿ/G, r, n) = [(e^(r(n+1)) - 1) - (n+1)(e^r - 1)]/[(e^r - 1)² e^(rn)]
  Future Worth: find Fᴿ given G, (Fᴿ/G, r, n) = [(e^(r(n+1)) - 1) - (n+1)(e^r - 1)]/(e^r - 1)²
  Uniform Series: find Aᴿ given G, (Aᴿ/G, r, n) = [(e^(r(n+1)) - 1) - (n+1)(e^r - 1)]/[(e^r - 1)²(e^(rn) - 1)]
D. Geometric Gradient Expressions — Geometric Gradient
  Present Worth: find P given A1, b, (P/A1,b, r, n) = [1 - (e^(bn)/e^(rn))]/(e^r - e^b); if b = r: n/e^r
  Future Worth: find F given A1, b, (F/A1,b, r, n) = (e^(rn) - e^(bn))/(e^r - e^b); if b = r: ne^(r(n-1))
  Uniform Series: find A given A1, b, (A/A1,b, r, n) = [(e^(rn) - e^(bn))/(e^r - e^b)] × [(e^r - 1)/(e^(rn) - 1)]; if b = r: [ne^(rn)/(e^(rn) - 1)] × [(e^r - 1)/e^r]
D. Geometric Gradient Expressions — Escalation Gradient
  Present Worth: find Pᴇ given A1, c, (Pᴇ/A1,c, r, n) = [e^c/(e^c - e^r)] × [(e^(cn) - e^(rn))/e^(rn)]; if c = r: n
  Future Worth: find Fᴇ given A1, c, (Fᴇ/A1,c, r, n) = [e^c/(e^c - e^r)] × (e^(cn) - e^(rn)); if c = r: ne^(rn)
  Uniform Series: find Aᴇ given A1, c, (Aᴇ/A1,c, r, n) = [(e^r - 1)e^c/(e^c - e^r)] × [(e^(cn) - e^(rn))/(e^(rn) - 1)]; if c = r: n(e^r - 1)e^(rn)/(e^(rn) - 1)
Notation: P = Present Worth; r = nominal interest rate per period with continuous compounding; A = uniform end-of-period payments; n = number of periods; F = Future Worth; G = Uniform Gradient Amount; b = continuous Geometric Gradient Rate; c = continuous Escalation Gradient Rate; A1 = Initial Geometric Gradient Amount and Initial Escalation Gradient Amount.

Author's Biography
ROBERT C. CREESE
Dr. Robert C. Creese was Professor of Industrial and Management Systems Engineering at
West Virginia University and taught courses on Engineering Economy, Advanced Engineering
Economics, Cost and Estimating for Manufacturing, Manufacturing Processes, and Advanced
Manufacturing Processes. He has previously taught at The Pennsylvania State University (9
years), Grove City College (4 years), Aalborg University in Denmark (3 sabbaticals), and West
Virginia University for 35 years. He worked at US Steel for two years as an Industrial Engineer
before starting his teaching career.
Dr. Creese is a Fellow of the AACE International, received the Charles V. Keane Service
Award and Brian D. Dunfield Educational Service Award presented by AACE International,
and was treasurer of the Northern West Virginia Section of AACE International for more than
20 years. He is a Life Member of AACE International, ASEE (American Society for Engineer-
ing Education), and ASM (American Society for Materials). He also is a member of ICEAA
(International Cost Estimating & Analysis Association), AIST (Association for Iron & Steel
Technology), AWS (American Welding Society), and AFS (American Foundry Society).
He obtained his B.S. in Industrial Engineering from The Pennsylvania State University,
his M.S. in Industrial Engineering from the University of California at Berkeley, and his Ph.D.
in Metallurgy from The Pennsylvania State University.
Dr. Robert C. Creese has authored the book Introduction to Manufacturing Processes and
Materials (Marcel Dekker-1999) and co-authored two books Estimating and Costing for the
Metal Manufacturing Industries (Marcel Dekker-1992) with Dr. M. Adithan, Professor Emer-
itus of VIT University Vellore, India, and Dr. B.S. Pabla of the Technical Teachers’ Training
Institute, Chandigarh, India, and Strategic Cost Analysis for Project Managers and Engineers (New
Age International Publishers-2010) with Dr. M. Adithan, VIT University, Vellore, India. He
has authored/co-authored more than 100 technical papers.
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=8355744.pdf&bkn=8355743&pdfType=book
|
Series ISSN 1939-5221
Data Mining and Market Intelligence
Implications for Decision Making
Mustapha Akinkunmi, American University of Nigeria
This book is written to address the issues relating to data gathering, data warehousing, and data analysis,
all of which are useful when working with large amounts of data. Using practical examples of market
intelligence, this book is designed to inspire and inform readers on how to conduct market intelligence
by leveraging data and technology, supporting smart decision making. The book explains some suitable
methodologies for data analysis that are based on robust statistical methods. For illustrative purposes,
the author uses real-life data for all the examples in this book. In addition, the book discusses the
concepts, techniques, and applications of digital media and mobile data mining.
Hence, this book is a guide tool for policy makers, academics, and practitioners whose areas of interest
are statistical inference, applied statistics, applied mathematics, business mathematics, quantitative
techniques, and economic and social statistics.
ABOUT SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis Digital
Library of Engineering and Computer Science. Synthesis Lectures provide concise
original presentations of important research and development topics, published
quickly in digital and print formats. For more information, visit our website:
http://store.morganclaypool.com
store.morganclaypool.com
Data Mining and
Market Intelligence
Implications for
Decision Making
Mustapha Akinkunmi
Data Mining
and Market Intelligence
Implications for Decision Making
Synthesis Lectures on
Engineering
Each book in the series is written by a well known expert in the field. Most titles cover subjects
such as professional development, education, and study skills, as well as basic introductory
undergraduate material and other topics appropriate for a broader and less technical audience.
In addition, the series includes several titles written on very specific topics not covered
elsewhere in the Synthesis Digital Library.
Data Mining and Market Intelligence: Implications for Decision Making
Mustapha Akinkunmi
2018
Empowering Professional Teaching in Engineering: Sustaining the Scholarship of
Teaching
John Heywood
2018
The Human Side of Engineering
John Heywood
2017
Geometric Programming for Design Equation Development and Cost/Profit
Optimization, Third Edition
Robert C. Creese
2016
Engineering Principles in Everyday Life for Non-Engineers
Saeed Benjamin Niku
2016
A, B, See... in 3D: A Workbook to Improve 3-D Visualization Skills
Dan G. Dimitriu
2015
The Captains of Energy: Systems Dynamics from an Energy Perspective
Vincent C. Prantil and Timothy Decker
2015
iii
Lying by Approximation: The Truth about Finite Element Analysis
Vincent C. Prantil, Christopher Papadopoulos, and Paul D. Gessler
2013
Simplified Models for Assessing Heat and Mass Transfer in Evaporative Towers
Alessandra De Angelis, Onorio Saro, Giulio Lorenzini, Stefano D’Elia, and Marco Medici
2013
The Engineering Design Challenge: A Creative Process
Charles W. Dolan
2013
The Making of Green Engineers: Sustainable Development and the Hybrid Imagination
Andrew Jamison
2013
Crafting Your Research Future: A Guide to Successful Master’s and Ph.D. Degrees in
Science & Engineering
Charles X. Ling and Qiang Yang
2012
Fundamentals of Engineering Economics and Decision Analysis
David L. Whitman and Ronald E. Terry
2012
A Little Book on Teaching: A Beginner’s Guide for Educators of Engineering and
Applied Science
Steven F. Barrett
2012
Engineering Thermodynamics and 21st Century Energy Problems: A Textbook
Companion for Student Engagement
Donna Riley
2011
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo
2011
Systems Engineering: Building Successful Systems
Howard Eisner
2011
Fin Shape Thermal Optimization Using Bejan’s Constructal Theory
Giulio Lorenzini, Simone Moretti, and Alessandra Conti
2011
iv
Geometric Programming for Design and Cost Optimization (with illustrative case study
problems and solutions), Second Edition
Robert C. Creese
2010
Survive and Thrive: A Guide for Untenured Faculty
Wendy C. Crone
2010
Geometric Programming for Design and Cost Optimization (with Illustrative Case Study
Problems and Solutions)
Robert C. Creese
2009
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey and Jeffrey W. Holmes
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Analog Multimedia
Explorations
Lina J. Karam and Naji Mounsef
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Digital Multimedia and
Robotics Explorations
Lina J. Karam and Naji Mounsef
2008
CAD/CAM of Sculptured Surfaces on Multi-Axis NC Machine: The DG/K-Based
Approach
Stephen P. Radzevich
2008
Tensor Properties of Solids, Part Two: Transport Properties of Solids
Richard F. Tinder
2007
Tensor Properties of Solids, Part One: Equilibrium Tensor Properties of Solids
Richard F. Tinder
2007
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
2007
Project Management for Engineering Design
Charles Lessard and Joseph Lessard
2007
Relativistic Flight Mechanics and Space Travel
Richard F. Tinder
2006
v
Copyright © 2018 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations
in printed reviews, without the prior permission of the publisher.
Data Mining and Market Intelligence: Implications for Decision Making
Mustapha Akinkunmi
www.morganclaypool.com
ISBN: 9781681733203 paperback
ISBN: 9781681733210 ebook
ISBN: 9781681733227 hardcover
DOI 10.2200/S00838ED1V01Y201803ENG030
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING
Series ISSN
Print 1939-5221 Electronic 1939-523X
Data Mining
and Market Intelligence
Implications for Decision Making
Mustapha Akinkunmi
American University of Nigeria
SYNTHESIS LECTURES ON ENGINEERING #30
ABSTRACT
This book is written to address the issues relating to data gathering, data warehousing, and
data analysis, all of which are useful when working with large amounts of data. Using practical
examples of market intelligence, this book is designed to inspire and inform readers on how
to conduct market intelligence by leveraging data and technology, supporting smart decision
making. The book explains some suitable methodologies for data analysis that are based on robust
statistical methods. For illustrative purposes, the author uses real-life data for all the examples in
this book. In addition, the book discusses the concepts, techniques, and applications of digital
media and mobile data mining.
Hence, this book is a guide tool for policy makers, academics, and practitioners whose
areas of interest are statistical inference, applied statistics, applied mathematics, business math-
ematics, quantitative techniques, and economic and social statistics.
KEYWORDS
data mining, decision making, market intelligence, market pooling, survey
To Dr. Sarah Omotunde Alade,
Former Deputy Governor (Economic Policy),
Central Bank of Nigeria
Contents
Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
1 Introduction to Market Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Understanding the Link Between Marketing Insights and Decision Making . . 1
1.2 Transform Data into Insights for Decisions: Segmentation, Positioning, Product Development, etc. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Market Intelligence Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Scientific Method and Technology of Marketing Research . . . . . . . . . . . . . . . . 3
1.5 Innovative Solutions to Real-life Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.6 Designing the Research Methodology, Questionnaire, Sampling Plan, and Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.7 Turning Data into Strategic Insights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2 The Market Research Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1 The Marketing Research Framework and Process . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Research Problems and Correct Design Techniques . . . . . . . . . . . . . . . . . . . . . 10
2.3 Data Collection Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.4 Generating Marketing Insights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3 Qualitative Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.1 Self-administered Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2 Personal Interview or Face-to-Face Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4 Quantitative Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.1 Data Preparation and Descriptive Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.2 Fundamentals of Quantitative Methods and Their Applications . . . . . . . . . . . 19
4.3 Concept of Distribution Pattern, Central Tendency, and Dispersion . . . . . . . 20
4.3.1 Distribution Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.3.2 Measure of Central Tendency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.3.3 Measure of Dispersion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.4 Construction of Confidence Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.4.1 Application of Confidence Intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.5 Other Descriptive Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.5.1 Skewness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.5.2 Kurtosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5 Hypothesis Testing and Regression Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.1 Data Preparation and Evaluation for Quantitative Analysis . . . . . . . . . . . . . . . 39
5.2 Constructing and Testing Data Hypotheses . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.3 Regression Analysis: Concept and Applications (Interpret Data Relationships and Forecasting) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.3.1 Assumptions of Linear Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.3.2 Simple Linear Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.3.3 Multiple Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.3.4 Assumptions of Multiple Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
6 Analyzing Survey Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6.1 Quantitative Technique of Collecting Survey Data: Consumer
Expenditure Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6.2 Types of Measurement Scales and Their Applications . . . . . . . . . . . . . . . . . . . 51
6.3 Survey Research Rigor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
6.4 Testing Data Quality: Survey Error Detection Procedures . . . . . . . . . . . . . . . 55
6.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
7 Index Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
7.1 bra Expectation Index: Principles, Techniques, and Applications . . . . . . . . . . 59
7.1.1 Objectives of bra Expectation Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
7.1.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
7.1.3 Calculation of bra Expectation Index . . . . . . . . . . . . . . . . . . . . . . . . . 60
7.2 bra Consumer Confidence Index: Principles, Techniques, and Applications . . 61
7.2.1 Components of braCCI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
7.2.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
7.2.3 Illustrative Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
7.2.4 Index Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
7.3 braIndex: Principles, Techniques, and Applications . . . . . . . . . . . . . . . . . . . . . 63
7.3.1 Basic Criteria for Selection of Constituent Stocks . . . . . . . . . . . . . . . . . 63
7.3.2 Technical Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
7.3.3 Fundamental Selection Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
7.3.4 Corporate Event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
7.3.5 Stock Splits Adjustment Barometer . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
7.3.6 Free-float . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
7.3.7 Calculation of braIndex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
7.3.8 Illustrative Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
7.3.9 Measure of braIndex Volatility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
7.3.10 Index Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
7.4 bra Producer Price Index: Principles, Techniques, and Applications . . . . . . . . 75
7.4.1 Uses of braPPI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
7.4.2 Components of braPPI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
7.4.3 Scope and Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
7.4.4 Collection of Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
7.4.5 Index Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
7.4.6 Illustrative Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
7.5 bra Bond Index: Principles, Techniques, and Applications . . . . . . . . . . . . . . . 78
7.5.1 Definition of Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
7.5.2 Basic Criteria for Constituent Bonds . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
7.5.3 Index Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
7.5.4 Sub-indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
7.5.5 Illustrative Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
7.6 braInflation Index: Principles, Techniques, and Applications . . . . . . . . . . . . . 85
7.6.1 Uses of bra Inflation Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
7.6.2 Classification of braII Items . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
7.6.3 Period of the Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
7.6.4 Data Collection, Collation, and Processing . . . . . . . . . . . . . . . . . . . . . . 87
7.6.5 Quality Adjustment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
7.6.6 Index Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
7.6.7 bra Inflation Indices Publication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
7.6.8 Expenditure Category Weight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
7.6.9 Illustrative Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
7.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
8 Digital Media Monitoring, Measurement, and Modeling . . . . . . . . . . . . . . . . . . 97
8.1 Understandings of Social Media Monitoring, Measurement, and Modeling . 97
8.2 Strategic Insight of Social Media Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . 98
8.3 Social Media Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
8.4 Social Media Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
8.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
9 Causal Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
9.1 Marketing Mix Modeling: Concept, Principles, Methods, and Applications 109
9.2 Effective Communication of Research, Intelligence, and Analytic Insights . . 112
9.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
10 Mobile Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
10.1 Concept of Mobile Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
10.2 Activities of Mobile Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
10.3 Architecture of Mobile Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
10.4 Algorithms of Mobile Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
10.5 Application of Mobile Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
10.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
A Questionnaires, Items Survey, and Weights of Elementary Items . . . . . . . . . . 121
A.1
Sample of Business Expectation Survey Questionnaire . . . . . . . . . . . . . . . . . 121
A.2 List of Items Survey Monthly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
A.3 Weights of Some Items . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Author’s Biography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Foreword
Data Mining and Market Intelligence: Implications for Decision Making, could not have been writ-
ten at a more appropriate and relevant time. Central banking, especially monetary policy, is often
regarded as boring and uninspiring; this erroneous conclusion is reached due to the visible out-
come of monetary policy committees’ decisions—whether to hold the policy rate steady or raise
or drop the benchmark by a few basis points. Thus, the common belief is that there is limited
scope for complex decision making and data management techniques in a central bank setting.
However, unbeknownst to many, a variety of analytical techniques including data mining
and market intelligence—supported by extensive research and consumer polls—are employed by
the staff of central banks. This is sometimes bolstered by the input from consultants before ar-
riving at various policy options, the consequences of which are subject to extensive deliberations
by policy makers.
Dr. Mustapha Abiodun Akinkunmi is an expert in macroeconomics and has been con-
ducting data mining research through his consulting assignment with the Central Bank of Nige-
ria, spanning almost 10 years from March 2008 to December 2017. He has shared his expertise
in great detail in this book, particularly regarding technical details including research method-
ology, market intelligence, and data collection methods. He exposes readers to qualitative tools
and techniques deployed in processing data including hypothesis testing, regression analysis, and
other methods.
The analysis of survey data using indices, including the author’s customized “bra Bond
index,” was also explained in great detail. He provides a comprehensive overview of digital me-
dia monitoring, measurement, and modeling, and explains the distinction between digital and
social media. Methods of exploring cause and effect relationships—very useful in monitoring
the impact of monetary policy decisions—are also thoroughly explored. Mobile data mining,
featured in Chapter 10, should be of interest to younger experts.
As a member of the Central Bank of Nigeria Monetary Policy Committee for 10 years,
I greatly appreciate the positive impact of Dr. Akinkunmi’s consulting assignments, and how
profoundly his work has assisted us in conducting sound monetary policy in spite of the political
and socio-economic constraints of the time.
This book is strongly recommended for researchers, statisticians, and policy makers, and
can also serve as a textbook to support teaching data mining at universities and colleges.
Tunde Lemo, OFR
Former Deputy Governor
Central Bank of Nigeria
Preface
This book is written to address the issues relating to data gathering, data warehousing and data
analysis. These are useful when working with large amounts of data. Using practical examples of
market intelligence, this book aims to inspire and inform readers on how to conduct market intelli-
gence by leveraging data and technology, supporting smart decision-making. The book explains
some suitable methodologies for data analysis that are based on robust statistical methods. For
illustrative purposes, the author uses real-life data for all the examples in this book. In addition,
the book discusses the concepts, techniques and applications of digital media and mobile data
mining.
Hence, the book is a guide for policy makers, academics, and practitioners whose areas
of interest are statistical inference, applied statistics, applied mathematics, business mathematics,
quantitative techniques and economic and social statistics.
Mustapha Akinkunmi
March 2018
Acknowledgments
This book grew out of the decade-long market intelligence assignment undertaken at the Cen-
tral Bank of Nigeria (CBN). First, I would like to extend my deepest appreciation to Mr. Bola
Onadele (Koko). Without his introduction, I would not have met Dr. Sarah Alade, whom
I convinced of my understanding of the methodology of market intelligence, leading to the
inception of this book. I was fortunate to have had many excellent contributors in the Mon-
etary Policy Department (MPD) of the CBN, especially under the leadership of Mr. Moses
Ajayi, Dr. Okorie Uchendu, Dr. Alvan Ikoku, and other senior staff of MPD CBN such as
Dr. Ngozi Egbuna, Mr. Ademola Bamidele, Mr. J.S. Akuns, Mr. Lawrence Akinboyo, the late
Mr. P.J. Obaseki, and numerous others of MPD CBN, whose questions and comments greatly
contributed to the clarity and exposition of this text.
I owe a great intellectual debt to Dr. Darryl McLeod of Fordham University, my brilliant
former professor, my colleagues at Fordham University—Center of International Policy Studies,
Dr. Emre Ozsoz and Dr. Erick Rengifo, and Mr. Daniel Eduardo Butt of Grenoble Ecole de
Management.
To the founding directors of Brickfield Road Associates Limited (bra), Mr. Wale Edun
and Mr. Tunde Folawiyo: I thank you both for your financial contribution and encouragement.
Additional significant contributions were made by my colleagues at bra, Mr. Saheed Bello and
Mr. Lanre Sanni. I am exceptionally grateful to these gentlemen.
Parts of this book were written while I was teaching at the Economics Department of
Montclair State University and State University of New York—FIT during 2017. I would like
to extend my appreciation to both institutions for their wonderful, supportive working environ-
ments and friendly atmosphere.
Mustapha Akinkunmi
Chair, Accounting and Finance
School of Business and Entrepreneurship
American University of Nigeria
Yola, Nigeria
March 2018
Acronyms
AMA       American Marketing Association
ATM       Automated Teller Machine
braCCI    Brickfield Road Associates Consumer Confidence Index
BES       Business Expectation Index
bra       Brickfield Road Associates
braII     Brickfield Road Associates Inflation Index
braIndex  Brickfield Road Associates Index
braPPI    Brickfield Road Associates Producer Price Index
CBN       Central Bank of Nigeria
CES       Consumer Expenditure Survey
CI        Confidence Interval
CPI       Consumer Price Index
CPU       Central Processing Unit
DNA       Deoxyribonucleic Acid
EI        Expectation Index
ETL       Extract, Transform, and Load
EPS       Earnings Per Share
FDI       Foreign Direct Investment
FGD       Focus Group Discussion
GDP       Gross Domestic Products
GNI       Gross National Income
GRPs      Gross Rating Points
HTTP      Hypertext Transfer Protocol
IIR       Index of Interest Returns
INFL      Inflation
IPR       Index of Price Returns
IQR       Interquartile Range
ITR       Index of Total Returns
LBS       Location-Based Service
MMM       Marketing Mix Model
NBS       National Bureau of Statistics
NSE       Nigerian Stock Exchange
POS       Point of Sale
PSI       Present Situation Index
ROE       Return on Equity
ROI       Return on Investment
SOA       Service-Oriented Architecture
TRPs      Target Rating Points
WDI       World Database Indicator
XML       Extensible Markup Language
CHAPTER 1
Introduction to Market
Intelligence
This introductory chapter will broadly define issues related to market intelligence. In addition,
we identify important market intelligence tools and provide the strategies to effectively utilize
these tools in a market intelligence environment.
1.1 UNDERSTANDING THE LINK BETWEEN
MARKETING INSIGHTS AND DECISION MAKING
It has become easier to access large volumes of data, thanks to the decreasing cost of data stor-
age technologies as well as the wide availability of internet connections. However, the pattern
of these data still exhibits heterogeneity in the characteristics of origin, content, and represen-
tation. It is pertinent to ask whether it is possible to transform raw data into information and
knowledge that can be communicated to and explored by decision makers to support and boost
their operations.
Market intelligence is a set of mathematical models and analytical techniques that utilizes
the existing data to provide valuable insights, which can be critical for supporting the decision-
making process in both simple and complex market environments.
Making decisions is a continuous process in a complex market environment. These deci-
sions may be more or less impactful, have short- or long-term implications, and involve mobi-
lizing employees at lower or higher levels of an organizational hierarchy. One of the fundamental
drivers of the performance and competitive strength of any given organization is the capacity
to make well-informed decisions both at the organizational and individual level, especially as
modern organizational trends move toward empowering employees and devolving the decision-
making processes which may normally be performed by supervisors or management.
Most people reach a decision by exploring easy and intuitive approaches that consider
factors such as experience, knowledge of their domain, and the quality or breadth of information
available to them. This technique results in a constant, reactive decision-making approach that
may not be suitable to navigate unstable conditions, such as those arising from frequent and
rapid fluctuations in the economic environment.
When the possible outcomes of a decision are too difficult to explore using an intuitive
method, the processes of decision making require more rigorous approaches in terms of analytical
techniques and mathematical models. Quite a number of decision makers have broadly accepted
that the competitive advantages of their firm are supported by effective strategic decision making.
1.2 TRANSFORM DATA INTO INSIGHTS FOR
DECISIONS: SEGMENTATION, POSITIONING,
PRODUCT DEVELOPMENT, ETC.
Marketing research is defined as the discovery and analysis of information in order to better un-
derstand the effectiveness of a firm’s marketing efforts and to support business strategy. Accord-
ing to The American Marketing Association (AMA), marketing research connects the supplier,
customer, and public to the marketer through information. This information is utilized to iden-
tify and define marketing opportunities and problems, generate, refine, and evaluate marketing
actions, monitor marketing performance and enhance the understanding of marketing as a pro-
cess. Marketing research provides the information needed to confront these problems, design
the techniques of gathering information, manage and implement the data collection process,
analyze the results, and communicate the findings and their implications.
By definition, marketing research must include customer research since customers are
the end user of a product. Marketing research is mainly applied to the following: market sizing,
market share analysis, product concept testing, pricing strategies, focus groups, brand perception
studies, and customer attitude or perception research.
The mix of metrics, marketing research, and data mining is crucial to the effective execu-
tion of a marketing plan. Therefore, the following steps are employed to integrate these metrics.
1. Identify relevant stakeholders and their interests.
2. Choose appropriate metrics to measure marketing success.
3. Evaluate market opportunities. This step provides an answer to four basic questions: Where
are the market opportunities? What are the market segments? What is the size of each
segment? How fast does each segment grow? Market opportunity information can be ob-
tained using a variety of techniques. One technique is to explore publicly available news
and existing internal firm-level data. Another technique is to use forecasted data on market
opportunities by segment. These forecasts entail both opportunity size and growth infor-
mation, based on underlying assumptions. In the absence of existing market opportunity
information, customized research is needed to obtain information.
4. Conduct competitive analysis.
5. Derive the optimal marketing spending and media mix. Several analytical methods are em-
ployed to model optimal marketing spend. Optimization deals with the maximization or
minimization of a particular metric. The most common aim of optimization of marketing
spending is profit maximization.
6. Leverage data mining for optimization and get early buy-in and feedback from key stakehold-
ers. As marketing research is used as the bedrock of formulating a high-level marketing
strategy, the implementation of this strategy through tactics demands robust analytical
modeling. Here, data mining adds value by providing a path to achieve the opportunities
opened by research.
7. Track and compare metric goals and results. Managers monitor the actual outcome of their
marketing efforts vs. the marketing plan, in order to evaluate the effectiveness of a cam-
paign (or lack thereof ) in meeting the firm’s targets. If their marketing plans appear suc-
cessful, managers can implement these successful tactics and strategies in future market-
ing efforts. However, if monitoring reveals that a marketing strategy is failing to meet its
goals, managers can learn from the inadequacies of past marketing approaches to lessen
the probability of failure in the future. It should also be noted that failure to meet the
targets identified by a marketing plan may not be due to poor execution; it is possible
that the original marketing strategy was flawed, and used metrics which were unrealistic,
methodologically unsound, or not relevant to the firm’s actual desired outcomes.
8. Incorporate learning into the next round of marketing planning. As noted in point seven above,
managers can look at the deviation of actual results from their plan, identify the reasons
for these deviations, and incorporate their experiences to refine future marketing plans.
1.3 MARKET INTELLIGENCE TOOLS
The key aim of the market intelligence system is to supply decision makers with the relevant
tools and techniques that can enhance their ability to make effective and timely decisions. By
applying in-depth analytical approaches, decision makers can make better decisions and strate-
gize actionable plans to attain their objectives in a more effective way. This analytical approach
requires decision makers to clearly define both the steps for assessing alternative choices as well
as the devices needed to regulate the problem under investigation. In addition, full awareness and
understanding of the fundamental logic behind the decision-making process relies on thorough
examination and thought.
The statistical tools, mathematical models, and algorithms made available through a good
market intelligence system support strong logical inference and lead to analytically sound con-
clusions, while exploring a large number of possible outcomes in a minimal period of time.
1.4 SCIENTIFIC METHOD AND TECHNOLOGY OF MARKETING RESEARCH
Quick responses to the actions of competitors and to new market conditions determine the
success or even the survival of a company. The urgency of making speedy and effective decisions
is intensified in markets where competition is ferocious and customer needs constantly evolve.
Broadly, the use of market intelligence systems enhances the scientific and rational tech-
niques used to manage the volatility of complex markets. Mathematical models have been ex-
plored to capture a real system in other scientific disciplines like physics, operation research,
etc. Chapters 4 and 5 of this book will provide the key mathematical models utilized in market
intelligence and decision-making processes, while applications of these models are presented in
Chapters 6 and 7.
1.5 INNOVATIVE SOLUTIONS TO REAL-LIFE ISSUES
The success of any market intelligence project depends on the following factors.
(a) Technologies: The evolution of technology plays a critical role in boosting the develop-
ment of market intelligence systems. The downward trend in the cost of hardware and
the increasing ubiquity of cheap storage and software technologies means that the use of
sophisticated algorithms and statistical techniques is no longer the sole domain of large
firms. In addition, falling technology costs encourage the use of state-of-the-art graphi-
cal visualization approaches, increase data storage capacities, and allow an organization to
store terabytes of data for market intelligence systems. With the aid of network connectiv-
ity, information and knowledge extracted from market intelligence systems are diffused
within organizations. The diffusion of data analysis tools is determined by the integra-
tion pattern of hardware and software bought by different suppliers or built internally by
a company.
(b) Analytics: Mathematical models, as well as analytical methodologies, perform a crucial
responsibility of extracting relevant information and knowledge from the available data
within and outside an organization. Merely providing a visualization of data is not a suf-
ficient platform for facilitating decision making; it is necessary to supplement graphical
representations of data with more advanced models of inductive learning and optimiza-
tion to bolster the decision-making process.
(c) Human resources: The competencies of employees individually and collectively deter-
mine the human assets of an organization. Organizational culture consists of the whole
knowledge acquired and shared by these employees. One of the major assets of any or-
ganization is the ability of its employees to translate acquired information into practical
actions. The value of human assets has become increasingly important in modern times; in
fast-developing sectors (such as software development and technology), as physical assets
required for production become less costly, more accessible, and homogeneous, a firm’s
employees are a source of value that cannot be replicated by competing firms. As a con-
sequence, firms must focus on developing the decision-making skills of its employees,
and hone their ability to perform analysis, interpret findings, work out creative solutions,
and devise effective action plans. An organization tends to form a comparative advantage
by employing people with greater mental quickness and willingness to accept changes in
decision-making styles, under the assumption that their decision making will be supported
by information gleaned from market intelligence systems.
1.6 DESIGNING THE RESEARCH METHODOLOGY,
QUESTIONNAIRE, SAMPLING PLAN, AND DATA
ANALYSIS
The key features of the market-related rational approach are as follows.
• Identify the aims of the analysis as well as the performance measures used to assess alter-
native options.
• Build mathematical models that appropriately reflect the connections among system con-
trol variables, parameters, and evaluation metrics.
• Evaluate the impact of variations in the control variables and changes in parameters on
performance, using what-if analyses.
Other advantages of mathematical models include developing a deeper understanding of
the phenomena under investigation, enabling knowledge transfer to other individuals, preserving
knowledge, and providing flexible approaches to similar problems.
Market Intelligence Designs: The design of a market intelligence system entails three key
components.
(a) Source of data: The first step is to gather and integrate the data stored in the various
primary and secondary sources. These sources of data may not be homogenous in origin
and type. The data sources are mainly from operational systems, complemented by un-
structured documents and data received from external providers. Practitioners will face
challenges to unify and integrate the different data sources.
(b) Data warehouses and data marts: Data from different sources are stored in databases, and
are extracted using transformation tools known as extract, transform, and load (ETL), in
order to improve the analysis of market intelligence. These databases are often called data
warehouses and data marts.
(c) Market intelligence techniques: Involve the use of extracted data to feed mathemati-
cal models and analytical approaches used to aid decision making. The most common
techniques include multidimensional cube analysis, exploratory data analysis, time series
analysis, inductive learning models for data mining, and optimization models.
The building blocks of a market intelligence system are depicted in Figure 1.1.
Figure 1.1: Structure of market intelligence system. (Source: Excerpt from Business Intelligence
System.)
(d) Data exploration: Tools employed at this level are passive, in the sense that decision
makers need to generate hypotheses or define criteria for extracting data, then utilize the
analysis measures to find answers and confirm (or disprove) their original insight. For
instance, if a practitioner observes low market demand on a particular day, he or she might
formulate a hypothesis using extraction and visualization tools, then subject the hypothesis
to a statistical test to verify that the inference is sufficiently supported by the data (a small
illustrative sketch of such a test appears after this list). Chapter 5 will describe statistical
approaches for exploratory data analysis.
(e) Data mining: This is an active segment of market intelligence methodologies, and is aimed
at extracting information and knowledge from raw data. This entails leveraging mathemat-
ical models for pattern recognition, using machine learning and data mining tools. In the
data mining phase, the formulation of a hypothesis is not necessary, as the decision maker’s
primary purpose during this phase is to expand their knowledge of the data and consider
possible paths for exploration.
(f ) Optimization: This phase allows decision makers to determine the best solution out of a
set of alternative actions.
(g) Decisions: These are the top of the market intelligence pyramid, where decision makers
adopt a particular course of action based on the outcomes of steps a–f. Stated differently,
this represents the concluding segment of the decision-making process.
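As an illustration of the kind of statistical test mentioned in the data exploration phase above, the following minimal Python sketch uses a one-sample t-test to check whether one day's demand readings are consistent with a historical mean. The demand figures, the historical mean of 120, and the 0.05 significance level are hypothetical, and the scipy library is assumed to be available.

```python
# Minimal sketch: testing whether observed demand is unusually low
# (hypothetical data; assumes numpy and scipy are installed).
import numpy as np
from scipy import stats

historical_mean = 120.0                     # assumed long-run average daily demand
today = np.array([98, 105, 101, 110, 95])   # hypothetical demand readings for one day

# One-sample t-test: H0 states that today's mean demand equals the historical mean.
t_stat, p_value = stats.ttest_1samp(today, popmean=historical_mean)

if p_value < 0.05:
    print(f"Reject H0 (p = {p_value:.4f}): demand differs from the historical norm.")
else:
    print(f"Fail to reject H0 (p = {p_value:.4f}): no evidence of unusual demand.")
```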
Moving from the bottom to the top of the pyramid requires more sophisticated tools of an
active type. This also alters the roles and competencies needed in each phase. At the lower end of
the pyramid, information system specialists such as database administrators perform marketing
intelligence tasks. Intermediate phases require data analysts and experts in mathematical and
statistical modeling in order to execute tasks. Decision makers and managers are mandated to
execute tasks at the top of the pyramid.
Market intelligence systems troubleshoot the challenges confronting different types of
agents in a market environment.
1.7 TURNING DATA INTO STRATEGIC INSIGHTS
The path of any market intelligence analysis is based on its application domain, characteristics of
the decision makers, as well as the available analytical methodologies. An ideal cyclical market
intelligence path will have four components, as illustrated in Figure 1.2.
Figure 1.2: Market intelligence cyclical path.
(a) Analysis: The first phase requires accurately identifying the problem at hand. Based on
this identification, decision makers have to create a mental picture of the problem being
analyzed through diagnosing the key drivers that are considered as the most relevant. This
phase prompts decision makers to ask several questions and to get quick reactions in an
interactive way.
(b) Insight: This second phase is intended to provide a deeper understanding of the existing
problem. The information gathered from the analysis phase is transformed into knowledge
during the insight phase. The extraction of knowledge happens as a result of the intuition
of the decision makers based on experience and the available unstructured information.
However, inductive learning models could be relevant in this stage if the data are struc-
tured.
(c) Decision: This phase converts the knowledge obtained in the second phase into decisions
and then into actions. The presence of market intelligence approaches can encourage more
rapid execution of the first two phases (analysis and insight) in order to make effective and
timely decisions that are better suited to the strategic priorities of a given market agent.
This contributes to an overall fall in the execution time of the analysis-decision-action-revision
cycle, and hence to a decision-making process of better quality.
(d) Evaluation: This phase deals with monitoring, performance measurement, and evaluation.
In addition, it requires extensive metrics beyond financial metrics in order to examine the
key performance indicators in different market environments. Robust analytical tools for
evaluating performance will be described in subsequent chapters.
1.8 EXERCISES
1.1.
a. What do you understand by market intelligence?
b. Mention and describe the factors that are responsible for a successful market in-
telligence.
c. How would you relate marketing insight and decision making?
1.2.
a. What is marketing research?
b. Explain the mix of marketing research and data mining tools used for the effective
execution of a marketing plan.
c. What are the market intelligence tools?
1.3.
a. Discuss the components of market intelligence designs.
b. With the aid of a diagram, describe the structure of market intelligence systems.
1.4.
a. Enumerate the components of the market intelligence cyclical path.
b. Distinguish between data exploration and data mining.
CHAPTER 2
The Market Research Process
The main objective of this chapter is to assist practitioners in understanding the steps involved
in conducting market research. In this chapter, we will study how information derived from
customer feedback reaches producers. Exploiting this information is useful to identify market
opportunities, define or hone competitive advantages, and navigate market challenges.
2.1 THE MARKETING RESEARCH FRAMEWORK AND
PROCESS
The marketing research process is a procedure that outlines the tasks which must be accom-
plished in order to conduct a successful marketing strategy. There are six processes involved
in conducting marketing research. These steps are: (i) definition of the problem, (ii) develop-
ment of an approach to the problem, (iii) formulation of research design, (iv) collection of data,
(v) preparation of data and data analysis, and (vi) generation of report and presentation.
Definition of the problem: The first step to be taken when commencing marketing research
is to define the problem which entails the purpose of the study, relevant information about the
study and how to use the information gathered in decision-making processes. This step also
involves considering the limitations of decision making at a firm; therefore, there is a need to
discuss potential problems with decision makers and industry experts.
Development of an approach to the problem: After defining the problem, the next step is
to formulate objective analytical models, research questions, hypotheses testing techniques, and
to specify the required information. This process necessitates developing specific strategies to
resolve the components of the research problems defined earlier.
Formulation of research design: Research design is a framework for conducting the market-
ing research survey. At this stage, the methods of obtaining required information, constructing
suitable hypotheses, and designing a questionnaire that will obtain useful data are delineated.
The procedure in formulating a rigorous research design includes obtaining secondary data, de-
signing a questionnaire, defining an appropriate sampling method, conducting qualitative and
quantitative primary research, measuring and scaling variables, and defining which types of anal-
ysis will be commensurate with testing the hypothesis.
Collection of data: This entails making decisions regarding the nature of the data and method
of collecting the data. After collaborating with decision makers and industry experts, the lead re-
searcher will facilitate training for field researchers. This training will familiarize the fieldworkers
with the objective of the study, the challenges in carrying out their duties, and possible solu-
tions to overcome these challenges. In addition, to improve the quality of data gathering, proper
supervision and constant evaluation are both necessary.
Preparation of data and data analysis: Data preparation involves discretization, cleaning, in-
tegration, transformation, and reduction of data for analytical use. Data analysis processes in-
volve the exploration of data through descriptive statistics, graphs, exploring the relationship
between variables, and comparing matched groups. Data analysis incorporates a wide variety
of techniques; data mining concentrates on modeling and knowledge discovery for predictive
rather than descriptive purposes.
Generation of report and presentation: This involves the documentation of the whole project
in a written form. The report documents the procedure in the first five steps described above in a
systematic manner. The results of the project, major findings, conclusion, and recommendations
are part of the report generated. A PowerPoint presentation is encouraged to present salient
points and interpretation of results to the decision makers or professionals who are not part of
the research project.
2.2 RESEARCH PROBLEMS AND CORRECT DESIGN
TECHNIQUES
Building a market intelligence system can be conceptually viewed as a project that exhibits the
following features: an ultimate objective; expected duration and costs; the usage and coordination
of the resources required to carry out planned activities. The project cycle of a market intelligence
system is depicted in Figure 2.1.
(a) Analysis: This is the first phase of the project in which an organization has to carefully
identify its needs. Broadly speaking, this phase is carried out through interactions with
agents engaging in different activities with the market. It is very crucial to clearly define the
major objectives and priorities of the project, as well as to estimate the costs and benefits
attached to the development of the project.
(b) Design: The second phase entails two sub-phases focused on providing a provisional plan
of the whole layout as well as the evolution of the project, either near-term or mid-term
and future-term. The first step during this phase is to assess the existing information in-
frastructure, and to investigate the main decision-making processes in order to sufficiently
ascertain information requirements. Based on the traditional approaches of project man-
agement, the plan will include identification of development phases, priorities, expected
Figure 2.1: Phases in the project cycle of a market intelligence system.
execution times and projected costs, complemented with stating required roles and re-
sources.
(c) Planning: This stage demands establishing the characteristics of a suitable market intel-
ligence systems in greater detail. In addition, it includes the assessment of existing data
and external data in order to design a central data warehouse. At this stage, practitioners
also delineate the boundaries of the available data, specify the mathematical models to be
employed, ensure that the available data are fit for the defined models, and test how ef-
ficient the models are in addressing the objectives of the marketing plan. In conclusion,
it is important to make a system prototype at the lowest cost possible and with limited
capabilities in order to close the gap between actual needs and project specifications.
(d) Implementation and control: This final stage comprises five major sub-stages. These sub-
stages are development of data warehouses; creation of metadata; extraction and transfor-
mation of data into the data warehouses; development of application tools for the analyses;
and a test-run of the system.
2.3 DATA COLLECTION METHODS
This section presents five commonly used methods of collecting data, including personal inter-
views, direct observation, questionnaires, focus group discussions, and documents and records.
Personal interview method: This method encourages interaction between an interviewer and
respondents regarding a topic of interest. It gives respondents the ability to express their
perception of a topic, through the perspective of their personal beliefs, values, knowledge, feel-
ings, and experiences. Usually, before the interview, the interviewer has prepared the questions
he or she intends to ask the respondent. The questions may be easily understood by the respon-
dent, or may be ambiguous and require some clarification on the part of the interviewer. The
three types of interview are structured, semi-structured, and unstructured interviews. The struc-
tured interview is similar to a questionnaire, where the respondents choose from given options
or supply an answer in the case of an open-ended question. Semi-structured interviews allow
the interviewer to be flexible in the manner and sequence of posing questions, in order to adapt
inquiry to better fit the information they seek, but the questions are still hinged upon a num-
ber of central thematic questions. Unstructured interviews are basically open-ended in nature,
allowing an interviewer more latitude to improvise and explore different topics. However, the
more unstructured an interview, the harder it is to standardize or codify responses into data that
is conducive to analysis using quantitative or objective analytical methods.
Direct observation method: This is a method of data collection where the observer passively
participates by recording the behavior of respondents. In most cases, this method is used in be-
havioral science to study interactions among groups of people. Direct observation methods often
require the recording of participants, such as by using a video camera mounted at a strategic lo-
cation to record interactions. The video clip can then be watched and analyzed by the observer
after the scene. However, although the visual or audio data can appear objective, interpreting
the underlying motives of participant behavior is performed through the subjective lens of the
researcher. Therefore, the best practice is to use other methods in conjunction with direct ob-
servation methods in order to achieve greater accuracy, combined with a reflexive approach that
acknowledges how personal biases can arise when interpreting the behavior of participants.
Questionnaire method: A questionnaire is an instrument used in data collection that con-
sists of series of questions and other prompts for the purpose of gathering information from
the respondent. A field worker approaches a respondent to administer the questionnaire. The
respondent will be given time to digest and complete the questionnaire. Many people feel more
comfortable with this method in comparison to participating in an interview. This approach
reduces bias from the researcher, since the same questions are asked of all the respondents.
Focus group discussion: Focus group discussions usually involve about 6–12 people with
similar characteristics or common interests. This involves an in-depth interview accomplished
in a group setting to provide feedback on what people think about products or issues, and en-
gender a deeper understanding of the topics of interest. The interaction within the group serves
as an object of analysis, and participants are often influenced by the responses of their peers
during the course of discussion. There is a need for a moderator to anchor the discussion with
comments and guidelines, and capture the relevant information from the group. This method is
more economical than the individual interview method, and can often unearth insights which
may not otherwise be discovered in an individual interview setting. Power-distance dynamics
between the interviewer and interviewee can be bridged, as group participants have the support
of peers and can feel more at ease in expressing direct (possibly negative) opinions about a firm
or product vs. expressing opinions in an individual interview setting.
Document and records: This entails examining existing data in the form of reports, publi-
cations, newsletters, databases, multimedia, formal records, secondary data, etc. This method
provides foundational information about a phenomenon, is relatively cheap or accessible, and
provides a behind-the-scenes look at a project. Media reports can also help marketing practi-
tioners understand current trends in phenomena which are piquing public interest. However,
secondary data can suffer from shortcomings in that it may not meet the specific informational
needs of an organization, or may be out of date or compiled by an untrustworthy source.
2.4 GENERATING MARKETING INSIGHTS
Market intelligence is defined as insights generated from marketing research or data mining.
It presents the entire picture of market opportunities and challenges in order to provide maxi-
mum value and understanding. To enhance market intelligence, there is a need to have integrated
database systems that connect together data from sales, marketing, customer, research, opera-
tions, and finance. These disaggregated data should ideally be maintained on the same hardware
system. Groth (2000) identified the common challenges confronting the marketers in the as-
pect of data quality. These are redundant data, incorrect or inconsistent data, typos, stale data,
variance in defining terms, and missing data.
2.5 EXERCISES
2.1.
a. Describe marketing research framework and process.
b. Provide a detailed explanation of the project cycle of a market intelligence system.
2.2.
a. Discuss the methods of data collection that you are familiar with.
b. What are the similarities and dissimilarities between the personal interview
method and the questionnaire methods of data collection?
2.3. Describe how data can be collected from a focus group discussion.
CHAPTER 3
Qualitative Techniques
In the previous section, we discussed the methods of data collection. The goal of this chapter
is to elaborate on two qualitative techniques utilized by field researchers to gather data from
respondents—the self-administered method and the personal interview method. The factors
that affect the choice of method include the characteristics of the target population and access
to the sample in terms of location, time, and infrastructural availability.
3.1 SELF-ADMINISTERED METHOD
This method requires respondents to answer questions independently, without any interference
or intervention on the part of the interviewer. The self-administered survey method involves
designing a well-structured questionnaire itemized in a chronological order. The questions are
concise and unambiguous to avoid misleading or guiding respondents into providing unintended
responses. The questionnaire can consist of open-ended or closed-ended responses. An open-
ended question allows respondents to provide an answer based on their knowledge or experi-
ence about a topic, and can result in responses that can range from a full sentence to multiple
paragraphs. Closed-ended questions require respondents to select the most appropriate response
from a set list of options provided by the interviewer. These questions can be structured in a mul-
tiple choice, yes/no, or Likert scale format, but interviewees can only choose from the responses
offered. In order to accommodate the opinion of the respondent, the field “Other/specify” can
be added to the options, allowing those surveyed to provide their own response.
We should bear in mind that in the preparation process we might have defined our target
population, segmented by region and also delineated into non-overlapping enumeration areas. A
sampling frame that contains the list of sampling units is used to know exactly from whom we are
getting the data. In developing our sample frame, a sampling technique might have been adopted
depending on the objective of the research. Depending on the requirements or restrictions of the
research being performed, the researcher can choose simple random sampling, cluster random
sampling, or stratified random sampling approaches.
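To make the sampling-frame discussion concrete, the following minimal Python sketch draws both a simple random sample and a stratified random sample from a hypothetical frame of firms. The frame, the sector labels, and the sample sizes are invented purely for illustration.

```python
# Minimal sketch of drawing samples from a sampling frame
# (hypothetical frame and strata; standard library only).
import random

# Hypothetical sampling frame: (firm id, sector) pairs.
frame = [(i, "manufacturing" if i % 3 == 0 else "services") for i in range(1, 101)]

# Simple random sampling: every unit has an equal chance of selection.
simple_sample = random.sample(frame, k=10)

# Stratified random sampling: draw a fixed number of units from each sector.
strata = {}
for unit in frame:
    strata.setdefault(unit[1], []).append(unit)
stratified_sample = [u for units in strata.values() for u in random.sample(units, k=5)]

print(len(simple_sample), len(stratified_sample))   # 10 10
```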
Responses generated using the self-administered method can be gathered by meeting in-
dividuals or groups in person. Alternatively, the researcher can gather responses via post, inter-
net or email. After choosing the appropriate method of gathering respondents, a copy of the
questionnaire will be distributed to each of the respondents in the sampling frame, and the re-
spondents will be given a reasonable time to answer the questions either by filling or ticking the
appropriate answer(s). However, there is the possibility of recording some non-responses. Some
respondents may (accidentally or deliberately) fail to answer some of the questions in the survey,
or fail to submit a response to a survey in the required time frame. Interviewers or field workers
will need to examine the data and reach a judgment about whether to revisit respondents or
survey more respondents to bridge these gaps in the data.
To illustrate a practical example, Brickfield Road Associates Limited (bra) employs a num-
ber of field workers to carry out a business expectation survey on a monthly basis. The question-
naires are given to a list of industry leaders from the major sectors of the economy, allowing
them to express their sentiments regarding monetary policy indicators and their future expecta-
tions of the Nigerian economy as a whole. Many of the questions contained in the questionnaire
are closed-ended, with the exception of a field in the questionnaire that allows respondents to
elucidate upon the reasons behind their opinions and forecasts. The firm distributes its question-
naire by leveraging its technologies to notify respondents when the questionnaire is available to
complete online. bra has developed a mobile app to collect data; this enables the company to
check the responses submitted, support its field researchers and improve the quality of the data
gathered in future surveys. The mobile app includes GPS functionality to capture the time and
location of the survey. Non-responses are mitigated by sending reminders to the respondents
via text messages, emails, calls and visitation. A sample of the questionnaire used for the bra
business expectation survey can be found in the Appendix A.
3.2 PERSONAL INTERVIEW OR FACE-TO-FACE METHOD
The personal interview method, also known as the face-to-face method, is a method of data
collection where the interviewer goes to the field with prepared questions to ask respondents in
person. In contrast to self-administered surveys, the personal interview method gives interview-
ers the freedom to explore the responses given by survey subjects. The interviewer can do so by
asking further questions of respondents, and observing their behavior while offering responses,
in order to establish why respondents answered in a particular manner.
As noted above, this practice can be performed in person, or via telephony media. The in-
terviewer should have an intimate understanding of the purpose of the survey and its content. In
a situation where a question seems ambiguous to the interviewee, it is the duty of the interviewer
to rephrase the question without losing the contextual meaning of the question. The interviewer
should be swift when asking questions and recording the answers provided by respondents.
The major advantages of the personal interview method are high response rates and the
ability to gather more nuanced, natural responses supplemented by observation of the behavior of
the respondents. The face-to-face method also mitigates the problem of non-responses suffered
by the self-administered method. On the other hand, the disadvantage of using the personal
interview method is that it is time consuming and expensive, especially if interviews are to be
conducted in person with subjects in a far away or remote location.
3.3 EXERCISES
3.1. What do you understand by the concept of qualitative techniques?
3.2.
a. With the aid of examples, describe open-ended questions and closed-ended questions.
b. What is a sampling frame?
3.3. Explain how technology can assist in the aspect of data collection.
3.4. Describe how to conduct a survey using the personal interview method.
3.5. State the merits and demerits of the personal interview method of data collection.
3.6. Design a well-structured questionnaire that captures the processes, opportunities, and
challenges faced by a new business.
CHAPTER 4
Quantitative Techniques
4.1 DATA PREPARATION AND DESCRIPTIVE STATISTICS
In this chapter, we will examine the definition of quantitative techniques, different types of
quantitative techniques, the functions of quantitative techniques, and the application of quan-
titative techniques. We will discuss distribution patterns as well as measures of central tendency
and dispersion for both grouped and ungrouped data using illustrative examples. Also, we will
explain the difference between point and interval estimates, how to construct confidence inter-
vals, and the necessity of performing these tasks. In addition, we will shed light on the other
useful descriptive statistics.
4.2 FUNDAMENTALS OF QUANTITATIVE METHODS AND THEIR APPLICATIONS
Quantitative techniques can be defined as methods or systematic approaches of solving prob-
lems, planning, and making informed decisions using numerical data. With the aid of quanti-
tative techniques, decision makers can make optimal decisions in order to achieve set goals and
objectives. Quantitative techniques can be classified into three areas: (i) mathematical quan-
titative techniques, (ii) statistical quantitative techniques, and (iii) programming quantitative
techniques. We will briefly discuss each classification in this order.
Mathematical quantitative techniques: These techniques apply the principles of mathematics
to quantitative data to support smart decisions. They include the use of calculus, matrix algebra,
set theory, and exponential smoothing to assist efficient and accurate decision making; a small
exponential smoothing sketch follows below.
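As a small illustration of one of the techniques just named, the sketch below applies simple exponential smoothing to a short series. The sales figures and the smoothing constant alpha = 0.3 are hypothetical, chosen only to show the mechanics.

```python
# Minimal sketch of simple exponential smoothing (hypothetical monthly series).
def exponential_smoothing(series, alpha):
    """Return the smoothed series: s_t = alpha * x_t + (1 - alpha) * s_(t-1)."""
    smoothed = [series[0]]                  # initialize with the first observation
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

sales = [120, 132, 128, 140, 151, 149, 160]   # hypothetical monthly sales
print(exponential_smoothing(sales, alpha=0.3))
```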
Statistical quantitative techniques: These techniques involve using statistical methods in
data collection, data analysis, and presentation. Statistical tools include probability theory, re-
gression analysis, discriminant analysis, time series analysis, panel data analysis, experimental
design and statistical quality control, heuristic methods, sequencing and scheduling problems,
and other statistical methods.
Programming techniques: These techniques involve writing code or designing software pack-
ages to build models which are used to aid decision-making processes. This technique is com-
monly used in the area of operations research. Programming methods include uses of linear pro-
gramming, game theory, decision theory, inventory theory, and computer simulation, among
others.
Uses of Quantitative Techniques
Quantitative techniques are mostly useful in the following areas:
(i) optimal allocation of limited resources;
(ii) inventory control;
(iii) queuing theory; and
(iv) timetabling and scheduling.
Limitations of Quantitative Techniques
(i) It relies on numerical assumptions.
(ii) It involves some complex calculations, models, and equations.
(iii) It does not measure intangible factors or variables such as skill or attitude.
(iv) It is one of the tools for making decisions, but is not a decision in itself (i.e., quantitative
techniques can be used in conjunction with other approaches, such as qualitative techniques,
to make a better decision).
4.3 CONCEPT OF DISTRIBUTION PATTERN, CENTRAL
TENDENCY, AND DISPERSION
4.3.1 DISTRIBUTION PATTERN
A distribution pattern summarizes the characteristics exhibited by a set of observations, repre-
sented in graphical terms. For instance, a researcher may have interest in knowing how a given
set of data is clustered or spread apart. A simple way of doing this is to plot the observations on
a set of axes and see how tightly clustered the observations are distributed across these axes. The
relationship among sets of observations can be vividly seen with the aid of a chart. These charts
include scatter diagrams, bar charts, pie charts, histograms, etc. In some cases, a set of observa-
tions may be generated by a probability distribution. The diagrams in Figure 4.1 show features
of a number of datasets generated from probability distributions.

Figure 4.1: Datasets generated from probability distributions: (a) binomial distribution with p = 0.05, n = 100; (b) uniform distribution with a = 0, b = 1; (c) standard normal distribution with mean = 0 and variance = 1.

4.3.2 MEASURE OF CENTRAL TENDENCY
This is the statistical measure that identifies a single value as representative of an entire distribution
by identifying the central position within that set of data. The most common measures of
central tendency are the mean, median, and mode. We will discuss these measures under the two
cases of grouped and ungrouped data. It is important to note that the mean, median, and mode are
equal in a standard normal distribution.
Ungrouped Data
This is the kind of dataset given as individual data points. In some cases it is not accompanied by
a frequency table.
(a) Mean: This is one of the most important measures of location; the mean is often also
referred to as the average. The mean is calculated by adding all the observations and
dividing the result by the total number of observations. The mean is denoted by $\bar{x}$ and is
mathematically represented as
$\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n},$
where $x_i$ is the value of the $i$th observation and $n$ is the number of observations.

(b) Median: The median is the middle value of a given dataset after re-arranging the observations
in either ascending or descending order. If the number of observations is even,
the median is calculated by taking the average of the two central values. Its position is given by
$\frac{(n+1)}{2}$th, where $n$ is the number of observations and $\frac{(n+1)}{2}$th is the position of the median
value in the ordered dataset.

(c) Mode: The mode is defined as the most frequent value(s) in a given dataset. It is possible
to have two or more values that appear equally most frequently in a dataset. A dataset is
mono-modal when a single value appears most often, and bi-modal when two values share
the same highest frequency.
Example 4.1
Table 4.1 shows the distribution of the month-on-month change in inflation (%) in Nigeria in the year 2015.

Table 4.1: Distribution of the month-on-month change in inflation (%) in Nigeria, 2015 (Source: NBS)

Month:         Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
Inflation (%): 0.9  0.7  1.0  0.9  1.1  1.1  0.8  0.6  0.6  0.5  0.8  1.2

Calculate: (i) the mean, (ii) the median, and (iii) the mode of the distribution.
Solution

(i) Mean:
$\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n} = \frac{0.9 + 0.7 + 1.0 + 0.9 + 1.1 + 1.1 + 0.8 + 0.6 + 0.6 + 0.5 + 0.8 + 1.2}{12} = 0.85.$

(ii) Median: after arranging the observations in ascending order, the two central values are 0.8 and 0.9, so
$\text{median} = \frac{0.8 + 0.9}{2} = 0.85.$

(iii) Mode: the modes are 0.6, 0.8, 0.9, and 1.1 (each occurs twice).
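The calculations in Example 4.1 can be checked with a short Python sketch using the standard library's statistics module; the inflation figures are those of Table 4.1, and the multimode function requires Python 3.8 or later.

```python
# Checking Example 4.1 with the standard library (Python 3.8+ for multimode).
from statistics import mean, median, multimode

inflation = [0.9, 0.7, 1.0, 0.9, 1.1, 1.1, 0.8, 0.6, 0.6, 0.5, 0.8, 1.2]

print(round(mean(inflation), 2))     # 0.85
print(round(median(inflation), 2))   # 0.85
print(sorted(multimode(inflation)))  # [0.6, 0.8, 0.9, 1.1]
```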
Grouped Data
Grouped data is a set of data that are given in intervals. It normally comes with a frequency
table, and the interval groupings are equal in most cases.
(a) Mean:
$\bar{x} = \frac{\sum_{i=1}^{n} f_i x_i}{N},$
where $f_i$ is the frequency of each group or class, $x_i$ is the class mid-point, and $N$ is the total
number of observations (the sum of the frequencies).

(b) Median: This can be calculated as follows:
$\text{Median} = L_m + \left(\frac{\frac{N}{2} - CF}{F_m}\right) C,$
where $L_m$ is the lower class boundary of the median class, $CF$ is the cumulative frequency
of the class preceding the median class, $F_m$ is the frequency of the median class, $N$ is the sum of
the frequencies, and $C$ is the class width.

(c) Mode: The mode formula for grouped data is represented mathematically as
$\text{Mode} = L_m + \left(\frac{F_m - F_b}{2F_m - F_b - F_a}\right) C,$
where $L_m$ is the lower limit of the modal class, $F_m$ is the frequency of the modal class, $F_b$ is
the frequency of the class before the modal class, $F_a$ is the frequency of the class after the modal
class, and $C$ is the class width.
Example 4.2
One hundred school students sat a mathematics examination. The distribution of their scores is grouped and presented in Table 4.2.

Table 4.2: Distribution of scores in a mathematics examination

Score interval: 0-9  10-19  20-29  30-39  40-49  50-59  60-69  70-79  80-89  90-99
Frequency:      4    10     7      18     31     5      6      12     5      2

Calculate the mean, median, and mode of the distribution of scores in the mathematics examination.

Solution

(i) Mean: $\bar{x} = \frac{\sum_{i=1}^{n} f_i x_i}{N} = \frac{4500}{100} = 45$ (see Table 4.3).

Table 4.3: Mean, median, and mode of the distribution of scores in the mathematics examination

Score     f    Mid-point  fx      Class       Cumulative
interval       (x)                boundary    frequency
0-9       4    4.5        18.0    -0.5-9.5    4
10-19     10   14.5       145.0   9.5-19.5    14
20-29     7    24.5       171.5   19.5-29.5   21
30-39     18   34.5       621.0   29.5-39.5   39
40-49     31   44.5       1379.5  39.5-49.5   70
50-59     5    54.5       272.5   49.5-59.5   75
60-69     6    64.5       387.0   59.5-69.5   81
70-79     12   74.5       894.0   69.5-79.5   93
80-89     5    84.5       422.5   79.5-89.5   98
90-99     2    94.5       189.0   89.5-99.5   100
Sum       100             4500

(ii) Median: $\text{Median} = L_m + \left(\frac{\frac{N}{2} - CF}{F_m}\right) C$.

To determine the median class, compute $\frac{N}{2}\text{th} = 50\text{th}$; this falls within the (40-49) class interval:
$\text{Median} = 39.5 + \left(\frac{50 - 39}{31}\right) \times 10 = 43.05.$

(iii) Mode: $\text{Mode} = L_m + \left(\frac{F_m - F_b}{2F_m - F_b - F_a}\right) C$.

The modal class is the same (40-49) class interval:
$\text{Mode} = 40 + \left(\frac{31 - 18}{2(31) - 18 - 5}\right) \times 10 = 43.33.$
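The grouped-data formulas of Example 4.2 can be reproduced with the minimal Python sketch below; the frequencies, class width of 10, boundaries, and mid-points follow the worked solution above, and, as in the text, the lower limit (40) is used for the mode while the lower boundary (39.5) is used for the median.

```python
# Grouped-data mean, median, and mode for Example 4.2 (class width C = 10).
freqs = [4, 10, 7, 18, 31, 5, 6, 12, 5, 2]      # frequencies for 0-9, 10-19, ..., 90-99
mids = [4.5 + 10 * i for i in range(10)]        # class mid-points
N, C = sum(freqs), 10

mean = sum(f * x for f, x in zip(freqs, mids)) / N

# Median: locate the class containing the (N/2)th observation.
cum, m = 0, 0
while cum + freqs[m] < N / 2:
    cum += freqs[m]
    m += 1
median = (10 * m - 0.5) + (N / 2 - cum) / freqs[m] * C   # lower class boundary = 10m - 0.5

# Mode: use the modal (highest-frequency) class and its neighbours.
k = freqs.index(max(freqs))
fb = freqs[k - 1] if k > 0 else 0
fa = freqs[k + 1] if k < len(freqs) - 1 else 0
mode = 10 * k + (freqs[k] - fb) / (2 * freqs[k] - fb - fa) * C   # lower class limit = 10k

print(round(mean, 2), round(median, 2), round(mode, 2))   # 45.0 43.05 43.33
```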
4.3.3 MEASURE OF DISPERSION
In a situation where the measures of central tendency are not adequate to describe the data,
measures of dispersion, or spread, can be employed. It is possible for two different datasets to have
the same mean, so we also look at the spread, or variance, within the data. Measures of
dispersion include the range, interquartile range, standard deviation, and variance, among others.
In this chapter, we discuss some important measures of dispersion using grouped
and ungrouped data.
Ungrouped Data

(a) Range: This is the difference between the highest and lowest values in a dataset.

(b) Interquartile range: This is the difference between the first quartile and the third quartile. The
first and third quartiles are equivalent to the 25th and 75th percentiles, respectively,
and are obtained after arranging the dataset in ascending or descending order. The interquartile
range describes the middle 50% of the dataset, lying between the 25th and 75th percentiles.
The first and third quartiles are denoted by $Q_1$ and $Q_3$, respectively:
$Q_1 = \frac{(n + 1)}{4}\text{th position}, \qquad Q_3 = \frac{3(n + 1)}{4}\text{th position}, \qquad IQR = Q_3 - Q_1.$

Standard deviation: This is the most commonly used measure of dispersion. It describes how far
observations lie from the mean. The standard deviation is computed by finding the square root of the
sum of squared deviations from the mean divided by the number of observations. It is important
to note that there is a slight difference in how the population and sample standard deviations
are calculated (the sum of squared deviations is divided by $n - 1$ in the case of the sample standard deviation).

(a) Standard deviation (population): $\sigma = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n}}$

(b) Standard deviation (sample): $s = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n - 1}}$

Variance: This is calculated by squaring the value of the standard deviation.

(a) Variance (population): $\sigma^2 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n}$

(b) Variance (sample): $s^2 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n - 1}$
Example 4.3
An investor wants to know how the annual growth (%) in gross domestic product in Africa is dispersed. He selected a random sample of 13 countries. Table 4.4 presents the data.

Table 4.4: Annual growth (%) in gross domestic product in Africa (Source: WDI, 2015)

Country code: DZA   AGO   CAF   EGY   ETH   GMB   GHA   KEN   LSO   MLI   NGA   SEN   ZAF
GDP (%):      3.76  3.01  4.80  4.20  9.61  4.72  3.92  5.65  1.61  5.96  2.65  6.49  1.26

Country codes: Algeria (DZA), Angola (AGO), Central African Republic (CAF), Egypt (EGY), Ethiopia (ETH), Gambia (GMB), Ghana (GHA), Kenya (KEN), Lesotho (LSO), Mali (MLI), Nigeria (NGA), Senegal (SEN), South Africa (ZAF).

Calculate: (i) the range, (ii) the interquartile range, (iii) the standard deviation, and (iv) the variance of the distribution of the annual growth of GDP.

Solution

(i) Range $= 9.61 - 1.26 = 8.35$.

(ii) Interquartile range: arranging the dataset in ascending order produces Table 4.5.

Table 4.5: Dataset arranged in ascending order

Country code: ZAF   LSO   NGA   AGO   DZA   GHA   EGY   GMB   CAF   KEN   MLI   SEN   ETH
GDP (%):      1.26  1.61  2.65  3.01  3.76  3.92  4.20  4.72  4.80  5.65  5.96  6.49  9.61

$Q_1 = \frac{(13 + 1)}{4}\text{th} = 3.5\text{th} = 2.83, \qquad Q_3 = \frac{3(13 + 1)}{4}\text{th} = 10.5\text{th} = 5.81.$

Therefore, $IQR = Q_3 - Q_1 = 2.98$.

(iii) Standard deviation:
$\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n} = 4.43$
$s = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n - 1}} = \sqrt{\frac{(1.26 - 4.43)^2 + (1.61 - 4.43)^2 + (2.65 - 4.43)^2 + \cdots + (6.49 - 4.43)^2 + (9.61 - 4.43)^2}{13 - 1}} = 2.22.$

(iv) Variance $= 2.22^2 = 4.93$.
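Example 4.3 can be verified with a few lines of Python; the GDP growth figures are those in Table 4.4, and the quartile positions follow the (n + 1)/4 rule used in the text, with linear interpolation between neighbouring observations.

```python
# Checking Example 4.3: range, interquartile range, standard deviation, and variance.
import statistics

gdp = [3.76, 3.01, 4.80, 4.20, 9.61, 4.72, 3.92, 5.65, 1.61, 5.96, 2.65, 6.49, 1.26]
data = sorted(gdp)
n = len(data)

def quartile(pos):
    """Linearly interpolate at a 1-based fractional position (the (n + 1)/4 rule)."""
    i = int(pos) - 1
    frac = pos - int(pos)
    return data[i] + frac * (data[i + 1] - data[i])

data_range = data[-1] - data[0]             # 9.61 - 1.26 = 8.35
q1 = quartile((n + 1) / 4)                  # 2.83
q3 = quartile(3 * (n + 1) / 4)              # approximately 5.81
s = statistics.stdev(gdp)                   # sample standard deviation, about 2.22

print(round(data_range, 2), round(q3 - q1, 2), round(s, 2), round(s ** 2, 2))
```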
Grouped Data
(a) Range: For grouped data, the range is calculated by subtracting the lower-class boundary
of the first class from the upper-class boundary of the last class.
(b) Interquartile range: This is computed by modifying the median formula:
Q1 = L_Q1 + [ (N/4 − CF) / F_Q1 ] × C
Q3 = L_Q3 + [ (3N/4 − CF) / F_Q3 ] × C
IQR = Q3 − Q1,
where L_Qi is the lower class boundary of the quartile class, F_Qi is the frequency of the quartile class, CF is the cumulative frequency of the class preceding the quartile class, N is the sum of the frequencies, and C is the class width.
(c) Standard deviation: The population standard deviation is denoted by σ while the sample standard deviation is denoted by s. It can be computed as follows:

σ = √[ Σf(x − x̄)² / n ]   (population)
s = √[ Σf(x − x̄)² / (n − 1) ]   (sample)
(d) Variance: The population parameter and the sample statistic for variance are denoted by σ² and s², respectively, and are mathematically defined as follows:

σ² = Σf(x − x̄)² / n   (population)
s² = Σf(x − x̄)² / (n − 1)   (sample)
Example 4.4
Table 4.6 shows the distribution of the broad money annual growth (%) in the selected
45 African countries. Calculate the: (i) range, (ii) standard deviation, (iii) variance, and (iv) in-
terquartile range of the distribution.
Table 4.6: Distribution of the broad money annual growth (%) in the selected 45 African countries

Class interval  0–9  10–19  20–29  30–39  40–49  50–59  60–69  70–79  80–89  90–99
Frequency       6    16     13     5      2      2      0      0      0      1
Source: WDI, 2010
Solution
(i) Range = 99.5 − (−0.5) = 100.
(ii) Standard deviation

x̄ = Σfxᵢ / N = 1042.5/45 = 23.17

s = √[ Σf(x − x̄)² / (n − 1) ]
  = √{ [6(4.5 − 23.17)² + 16(14.5 − 23.17)² + … + 1(94.5 − 23.17)²] / (45 − 1) }
  = √270.91 = 16.46
(iii) Variance: s² = 16.46² = 270.91.

(iv) IQR = Q3 − Q1.
(cid:0)
To determine the quartile class, we compute (cid:0) 45
quartile falls in (10–19) class interval (see Table 4.7).
4 (cid:1) th
11:25th, this implies that the first
D
Q1
9:5
D
(cid:18) 11:25
16
(cid:0)
C
6
(cid:19)
10
(cid:2)
D
12:78:
Table 4.7: Class interval and class boundary

Class   F   X     FX      X − mean  (X − mean)²  F(X − mean)²  CF  Class boundary
0–9     6   4.5   27      −18.67    348.44       2090.67        6  −0.5–9.5
10–19   16  14.5  232     −8.67     75.11        1201.78       22  9.5–19.5
20–29   13  24.5  318.5   1.33      1.78         23.11         35  19.5–29.5
30–39   5   34.5  172.5   11.33     128.44       642.22        40  29.5–39.5
40–49   2   44.5  89      21.33     455.11       910.22        42  39.5–49.5
50–59   2   54.5  109     31.33     981.78       1963.56       44  49.5–59.5
60–69   0   64.5  0       41.33     1708.44      0.00          44  59.5–69.5
70–79   0   74.5  0       51.33     2635.11      0.00          44  69.5–79.5
80–89   0   84.5  0       61.33     3761.78      0.00          44  79.5–89.5
90–99   1   94.5  94.5    71.33     5088.44      5088.44       45  89.5–99.5
Sum     45        1042.5                         11920
Also, the position of the third quartile is (3 × 45/4)th = 33.75th, so the third quartile falls within the 20–29 class interval (see Table 4.7):

Q3 = 19.5 + [(33.75 − 22)/13] × 10 = 28.54.
Therefore, IQR = 28.54 − 12.78 = 15.76.
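A short Python sketch (not from the original text) of the grouped-data calculations of Example 4.4, assuming the frequencies of Table 4.6 and the interpolation formulas above; the function grouped_quartile is illustrative only.

```python
# Grouped-data mean, standard deviation, and quartiles (class width C = 10,
# boundaries starting at -0.5; frequencies assumed from Table 4.6).
import math

freq = [6, 16, 13, 5, 2, 2, 0, 0, 0, 1]       # classes 0-9, 10-19, ..., 90-99
mids = [4.5 + 10 * i for i in range(10)]       # class midpoints
n = sum(freq)                                  # 45

mean = sum(f * x for f, x in zip(freq, mids)) / n             # 23.17
ss = sum(f * (x - mean) ** 2 for f, x in zip(freq, mids))     # ~11920
s = math.sqrt(ss / (n - 1))                                    # ~16.46

def grouped_quartile(p):
    """Quartile by linear interpolation within the quartile class."""
    target = p * n
    cf = 0
    for i, f in enumerate(freq):
        if cf + f >= target:
            lower = -0.5 + 10 * i             # lower class boundary
            return lower + (target - cf) / f * 10
        cf += f

q1, q3 = grouped_quartile(0.25), grouped_quartile(0.75)
print(round(mean, 2), round(s, 2), round(s ** 2, 2))   # 23.17 16.46 270.91
print(round(q1, 2), round(q3, 2), round(q3 - q1, 2))   # 12.78 28.54 15.76
```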
4.4 CONSTRUCTION OF CONFIDENCE INTERVALS
In statistics, the primary objective in selecting a random sample from a population is to be able to compute a statistic which closely describes the population parameter. This estimate is referred to as a Point Estimate. It is important to know how accurately the sample statistic represents the population parameter. To assess this, we can calculate a range of values from the sample within which we are confident that the population parameter of interest is contained. In most cases, a statistician computes a 95% confidence interval for the estimate; this is common practice in the sciences, social sciences, and education. We refer to this range of values, inside which the true population value falls, as an Interval Estimate.
The threshold of the confidence interval can be amended based on the field of study. A biostatistician, for example, may prefer a 99% confidence interval rather than a 95% confidence interval. In the health sector, because we deal with human life, estimates are expected to be more precise to reduce loss of life or the release of a drug which could be ineffective or cause adverse side-effects. A 99% confidence level implies that if we took 100 samples from the population, approximately 99 of the resulting intervals would contain the true value of the parameter of interest.
A confidence interval can be constructed generally as: point estimate ± margin of error,

CI = sample statistic ± critical value × standard error of the estimate.

For instance, a confidence interval (C.I) for the population mean can be written as:

C.I = x̄ ± t(n−1, q) × s/√n.
A 100(1 − α)% confidence region for μ contains:

x̄ − t(n−1, q) × s/√n ≤ μ ≤ x̄ + t(n−1, q) × s/√n,

where α is the level of significance, n is the sample size, s is the standard deviation, x̄ is the mean, and t is the critical value from the t-distribution table.
Suppose we want to construct a 99% confidence interval for an unknown population mean,
then a 99% probability that confidence interval will contain the true population mean could be
written as
P[ x̄ − t(n−1, q) × s/√n ≤ μ ≤ x̄ + t(n−1, q) × s/√n ] = 0.99.
Example 4.5
Refer to Example 4.4. Construct a 95% confidence interval on the average broad money
annual growth (%) in selected African countries.
Solution

n = 45, x̄ = 23.17%, s = 16.46, and α = 0.05.

Since the sample size is large (n > 30), we use the normal distribution (z) instead of the Student t-distribution. For a two-tail test the critical value is z(1−α/2), while for a one-tail test it is z(1−α):

C.I = 23.17 − z(0.975) × 16.46/√45 ≤ μ ≤ 23.17 + z(0.975) × 16.46/√45
    = 23.17 − 1.96 × 16.46/√45 ≤ μ ≤ 23.17 + 1.96 × 16.46/√45

C.I = 18.36% ≤ μ ≤ 27.98%.
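A minimal Python sketch (not part of the original text) of the interval computed above, assuming the large-sample normal critical value 1.96 for a 95% two-tail interval.

```python
# 95% confidence interval for the mean broad money annual growth (Example 4.5).
import math

n, mean, s = 45, 23.17, 16.46
z = 1.96                                  # normal critical value z(0.975)
margin = z * s / math.sqrt(n)             # margin of error, ~4.81

lower, upper = mean - margin, mean + margin
print(f"95% CI: {lower:.2f}% <= mu <= {upper:.2f}%")   # 18.36% <= mu <= 27.98%
```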
4.4.1 APPLICATION OF CONFIDENCE INTERVALS
Since we cannot be certain about future events, one needs to provide a reasonable range for the variable of interest. A simple point estimate is often insufficient for practical needs; a range of possible outcomes with a stated probability is more useful. Confidence intervals can be applied in business, production, healthcare, science, engineering, etc. Governments can apply the concept of confidence intervals in price control; businesses apply it in inventory control and tracking sales movement; manufacturing firms use it in quality control; an athlete can use confidence intervals to estimate the range of times in which he or she can complete a 100 m race. Engineers employ it to determine the range of years for the durability of a bridge, or to determine the range of conditions within which a certain structure or part operates safely. In addition, understanding confidence intervals helps to determine the quantity of resources required to carry out specific tasks and aids overall planning.
4.5 OTHER DESCRIPTIVE STATISTICS
Apart from the basic descriptive statistics explained above, there are a few other supplementary statistics that help us understand how data are dispersed. In this section, we discuss skewness and kurtosis.
4.5.1 SKEWNESS
Skewness is defined as a measure of the symmetry, or the departure from symmetry, of a dataset. A dataset or distribution is said to be symmetric if it looks the same to the right and to the left of the central point. In a normal distribution, the skewness is zero. A distribution is positively skewed if the value of skewness is positive; it then has a long tail on the right side of the figure, meaning that most observations are concentrated at the lower values with a few large values stretching the tail. Conversely, when most observations are concentrated at the higher values, the distribution has a long tail on the left side of the figure and is said to be negatively skewed.

In a skewed distribution, the values of the mean, median, and mode do not coincide. The value of skewness can be positive, negative, or zero. The following conditions help determine the nature of the skewness.

(a) When the values of mean, median, and mode are equal, there is no skewness.
(b) When mean > median > mode, skewness will be positive.
(c) When mean < median < mode, skewness will be negative.
The skewness can be diagrammatically represented as in Figure 4.2.

Figure 4.2: The skewness: negative skew (large tail to the left) and positive skew (large tail to the right).
Bulmer (1979) suggested the following rule of thumb.

• If skewness is less than −1 or greater than +1, the distribution is highly skewed.
• If skewness is between −1 and −1/2 or between +1/2 and +1, the distribution is moderately skewed.
• If skewness is between −1/2 and +1/2, the distribution is approximately symmetric.
There are two approaches to calculate the skewness of a dataset, namely:

(i) Bowley's method:
Skewness = (Q3 + Q1 − 2Q2) / (Q3 − Q1),
where Q1, Q2, and Q3 are the lower quartile, median, and upper quartile, respectively.

(ii) Karl-Pearson's method:
Pearson's coefficient of skewness = 3(mean − median) / standard deviation.

For simplicity, we illustrate the examples in this sub-section with ungrouped data.
Example 4.6
Table 4.8 shows the prices of 25 selected food items monitored by the National Bureau of Statistics (NBS) in August 2017.

Table 4.8: Food prices

Item   1       2      3       4       5        6        7       8       9       10
Price  485.19  42.92  370.25  335.71  1131.38  1376.85  311.20  278.70  465.34  834.74
Item   11      12      13      14       15      16      17       18       19      20
Price  946.93  158.85  192.53  1629.25  310.15  345.82  1070.57  2361.70  226.66  349.64
Item   21      22      23      24      25
Price  320.19  343.45  394.35  401.30  307.39
Source: NBS, August 2017
Calculate the skewness using (i) Bowley method, (ii) Karl-Pearson’s coefficient of skew-
ness, and (iii) compare the results in (i) and (ii).
Solution

x̄ = 599.64;  Q1 = ((25 + 1)/4)th value = 308.77;  Q2 = ((n + 1)/2)th value = 349.64;
Q3 = (3(n + 1)/4)th value = 890.83;  and s = 544.65.
(i) Skewness = (Q3 + Q1 − 2Q2) / (Q3 − Q1)
            = (890.83 + 308.77 − 2 × 349.64) / (890.83 − 308.77)
            = 0.86.
(ii) Pearson's coefficient of skewness = 3(mean − median)/standard deviation
                                      = 3(599.64 − 349.64)/544.65
                                      = 1.38.
(iii) The results in (i) and (ii) indicate that the distribution of the prices of the 25 selected foods is positively skewed.
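A Python sketch (illustrative only, not from the text) of the two skewness measures, assuming the Table 4.8 prices and the (n + 1)/4 quartile rule; the helper position_value is hypothetical, and small rounding differences from the hand calculation are possible.

```python
# Bowley and Karl-Pearson skewness for the Example 4.6 food prices.
import statistics

prices = [485.19, 42.92, 370.25, 335.71, 1131.38, 1376.85, 311.20, 278.70,
          465.34, 834.74, 946.93, 158.85, 192.53, 1629.25, 310.15, 345.82,
          1070.57, 2361.70, 226.66, 349.64, 320.19, 343.45, 394.35, 401.30, 307.39]

def position_value(data, pos):
    """Value at a 1-based (possibly fractional) position of the sorted data."""
    i, frac = int(pos) - 1, pos - int(pos)
    return data[i] + frac * (data[i + 1] - data[i]) if frac else data[i]

data = sorted(prices)
n = len(data)
q1 = position_value(data, (n + 1) / 4)       # 308.77
q2 = position_value(data, (n + 1) / 2)       # 349.64 (median)
q3 = position_value(data, 3 * (n + 1) / 4)   # 890.83

bowley = (q3 + q1 - 2 * q2) / (q3 - q1)                   # ~0.86
pearson = 3 * (statistics.mean(prices) - q2) / statistics.stdev(prices)  # text reports 1.38
print(round(bowley, 2), round(pearson, 2))
```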
4.5.2 KURTOSIS
Kurtosis can be defined as a measure of the sharpness or flatness of the peak of a distribution curve. A normal distribution has a kurtosis of 3 and an excess kurtosis of exactly 0. There are three types of kurtosis: mesokurtic, platykurtic, and leptokurtic. A mesokurtic (normal kurtosis) distribution has kurtosis ≈ 3 (i.e., excess kurtosis ≈ 0), a platykurtic (negative kurtosis) distribution has kurtosis < 3 (i.e., excess kurtosis < 0), and a leptokurtic (positive kurtosis) distribution has kurtosis > 3 (implying excess kurtosis > 0). Figure 4.3 shows the shapes of the three types of kurtosis.
Figure 4.3: Shape types of kurtosis (negative kurtosis, normal distribution, positive kurtosis).
The moment coefficient of kurtosis is denoted by K4 and can be defined mathematically as:

Kurtosis (K4) = m4 / m2²,

where

m2 = Σ(x − x̄)²/n  and  m4 = Σ(x − x̄)⁴/n,

and m2 and m4 are the second moment (variance) and fourth moment, respectively.

Excess kurtosis = K4 − 3  (population).

For sample data, the adjusted excess kurtosis is

k4 = [(n − 1) / ((n − 2)(n − 3))] × [(n + 1)(K4 − 3) + 6].
Example 4.7
Refer to data in Example 4.6. Compute the coefficient of kurtosis for the prices of selected
foods.
Solution

x̄ = 599.64
m2 = Σ(x − x̄)²/n = [(485.19 − 599.64)² + (42.92 − 599.64)² + … + (307.39 − 599.64)²] / 25 = 284,780.20

m4 = Σ(x − x̄)⁴/n = [(485.19 − 599.64)⁴ + (42.92 − 599.64)⁴ + … + (307.39 − 599.64)⁴] / 25 = 460,719,706,565.24

Therefore, K4 = m4/m2² = 460,719,706,565.24 / 284,780.20² = 5.68, and the excess kurtosis is (5.68 − 3) = 2.68.
We are using sample data; thus, the adjusted excess kurtosis is

k4 = [(n − 1)/((n − 2)(n − 3))] × [(n + 1)(K4 − 3) + 6]
   = [24/(23 × 22)] × [26 × 2.68 + 6] = 3.59.

The excess kurtosis of 3.59 means the distribution is highly leptokurtic.
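A hedged Python sketch of the moment-based kurtosis calculation, assuming the Table 4.8 prices; the comments reference the values reported in the text, and minor rounding differences are possible.

```python
# Moment-based kurtosis and the sample adjustment used in Example 4.7.
prices = [485.19, 42.92, 370.25, 335.71, 1131.38, 1376.85, 311.20, 278.70,
          465.34, 834.74, 946.93, 158.85, 192.53, 1629.25, 310.15, 345.82,
          1070.57, 2361.70, 226.66, 349.64, 320.19, 343.45, 394.35, 401.30, 307.39]

n = len(prices)
mean = sum(prices) / n
m2 = sum((x - mean) ** 2 for x in prices) / n     # second central moment (text: 284,780.20)
m4 = sum((x - mean) ** 4 for x in prices) / n     # fourth central moment

k4 = m4 / m2 ** 2                                 # moment coefficient of kurtosis (text: 5.68)
excess = k4 - 3                                   # population excess kurtosis (text: 2.68)
sample_excess = (n - 1) / ((n - 2) * (n - 3)) * ((n + 1) * excess + 6)  # text: 3.59
print(round(k4, 2), round(excess, 2), round(sample_excess, 2))
```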
4.6 EXERCISES
4.1.
a. Mention and explain the different types of quantitative techniques.
b. In what areas are quantitative techniques applicable?
c. What are the limitations of using quantitative techniques?
4.2.
a. What do you understand by the term “measure of central tendency?”
b. Find the mean, median, and mode of the following scores of students in a mathematics test:
5, 7, 10, 3, 5, 7, 8, 9, 6, 6, 7, 10, 5, 4, 7, 3, 1, 4, 8, 9, 10, 7, 5, 4, 4, 6, 4, 2, 8, 3, 4,
3, 3, 7, 1, 6, 9, 4, 7.
4.3. The age distribution of students that participated in an A-level examination is as follows:
16, 15, 17, 18, 16, 15, 21, 15, 19, 17, 20, 21, 16, 14, 22, 21, 19, 16, 17, 17, 21, 22, 18,
19, 16, 18, 20, 20, 23, 18, 20, 16, 19, 18, 15, 16, 15, 19, 21, 20, 17, 15, 14, 17, 21, 23,
20, 18, 19, 18, 20, 17, 17, 19, 21, 23, 24, 21, 22, 19.
Calculate: (i) mean, (ii) median, (iii) mode, (iv) range, (v) interquartile range, and (vi)
variance.
Table 4.9: Weekly wages

Wages ($)            1–10  11–20  21–30  31–40  41–50  51–60  61–70  71–80  81–90  91–100
Number of employees  3     5      6      15     18     25     13     9      4      2
4.4. The weekly wages of 100 employees in a firm are given in Table 4.9.
Find:
a. mean, median, and mode of the distribution of the wages; and
b. standard deviation and variance of the wages.
4.5. The annual gross domestic product (GDP) growth (%) of the United States from 1961–2016 is recorded as follows:
2.30, 6.10, 4.40, 5.80, 6.40, 6.50, 2.50, 4.80, 3.10, 3.21, 3.30, 5.26, 5.64, -0.52, -0.20,
5.39, 4.61, 5.56, 3.18, -0.24, 2.59, -1.91, 4.63, 7.26, 4.24, 3.51, 3.46, 4.20, 3.68, 1.92,
-0.07, 3.56, 2.75, 4.04, 2.72, 3.80, 4.49, 4.45, 4.69, 4.09, 0.98, 1.79, 2.81, 3.79, 3.35,
2.67, 1.78, -0.29, -2.78, 2.53, 1.60, 2.22, 1.68, 2.37, 2.86, 1.49.
Calculate:
a. Mean, median, and mode of the annual GDP growth (%).
b. Range, interquartile range of the United States annual GDP growth (%).
c. Standard deviation and variance of the distribution.
4.6.
a. Define skewness and kurtosis.
b. Refer to the data in Exercise 4.5.
c. Calculate the Pearson's coefficient of skewness and the kurtosis of the United States annual GDP growth (%).
d. Construct a 95% confidence interval for the annual GDP growth rate.
4.7. Table 4.10 shows the distribution of yield strength (MPa) of steel produced at a steel
rolling company. Find:
4.6. EXERCISES 37
a. skewness using Bowley’s Method and Pearson’s coefficient of skewness and
b. kurtosis of the yield strength (MPa).
Table 4.10: Distribution of yield strength (MPa) of steel produced at a steel company

Yield strength (MPa)  250–269  270–289  290–309  310–329  330–349  350–369  370–389
Frequency             19       23       28       25       35       30       27
Yield strength (MPa)  390–409  410–429  430–449  450–469  470–489  490–509  510–529
Frequency             21       14       10       9        5        2        2

CHAPTER 5

Hypothesis Testing and Regression Analysis
In this chapter, we look at the different stages of data preparation involved in quantitative analy-
sis. Understanding these processes will help us gather reliable data and reach a valid conclusion.
We will discuss types of hypotheses and how they are stated mathematically. Furthermore, we
shall discuss hypothesis testing with worked examples.
5.1 DATA PREPARATION AND EVALUATION FOR
QUANTITATIVE ANALYSIS
Data preparation and evaluation processes involve data validation, editing, coding, assembling,
and transformation. In this section, we will discuss these processes of preparing and evaluating
data before conducting a rigorous analysis.
Data validation: This is a process of checking data for authenticity. This is a way to determine
if the survey was carried out correctly and appropriately. This process helps researchers ascer-
tain whether interviewers or field workers conducted research according to the research plan
and objectives. Researchers can affirm that the survey was conducted using the correct target
population, in the appropriate location and during an appropriate period. Checking the ques-
tionnaires may require the annulment of unacceptable questionnaires. The need to invalidate a
questionnaire may arise as a result of a considerable number of incomplete questions, missing
pages, or responses gathered from unqualified respondents who were not appropriate to poll for
the purposes of the survey.
Data editing: This is a process of checking for errors and biases in the data. Errors can come
from the interviewer or the respondent. Data editing verifies response consistency and accuracy,
making corrections where necessary. Errors may occur during the completion of a questionnaire;
for example, a respondent who graduated from a higher institution at the age of 21 can mistak-
enly write 12 as their present age. It is the duty of a quality control manager to edit the data,
based on the results gathered from other associated questions. Data editing is necessary before
data analysis to remove problems that can lead to invalid analyses and incorrect conclusions
(possibly resulting in Type I or Type II errors; see Section 5.2 below).
Data coding: The data obtained from the questionnaire are not all numerical in nature. Some-
times, responses must be translated into numerical data to allow quantitative analysis. Binary
variables (e.g., yes or no, male or female) are usually coded as 1 or 0, or 1 or 2. These dummy
variables can be used to capture responses in a numerical fashion. The category variables (nomi-
nal or ordinal) are represented by numbers (e.g., 1, 2, 3 , …, n). It is a good practice to assign the
highest code number to the most positive response or to the most important end of a scale. Some
questionnaires are pre-coded, while some questionnaires are not pre-coded. For a questionnaire
that is not pre-coded, it is essential for researcher to code the responses in the questionnaire
before quantitative analysis.
Data assembling: This is the process of collating all the validated, edited, and coded data
together and entering the corresponding values for each of the variables under investigation into
the relevant software for analysis. The collated data are presented in a data matrix that has rows
and columns. Data assembling can be performed in an Excel spreadsheet and then exported
into a statistical software application (such as Stata or R) for analysis. Some statistical software
allows the researcher to enter the data directly by selecting the appropriate box in the answer
options provided. Some online survey software packages are able to detect invalid responses and
inappropriate answers. For instance, if a field in the questionnaire requires a numeric answer
and the respondent enters an alphanumeric answer, the software will flag the particular question
in the questionnaire for the respondent to amend with an appropriate response, and prevent
submission of the survey unless all the fields are completed with valid responses.
Data transformation: This involves replacing a particular variable with a function of that vari-
able. For instance, we can replace a variable by taking the logarithm of that particular variable
or taking the square or square root of a particular variable. Data transformation helps to observe
otherwise obscure relationships among variables, especially variables that are not linearly related.
It can also assist smoothing or changing the shape of a distribution.
5.2 CONSTRUCTING AND TESTING DATA HYPOTHESES
A hypothesis is a statement made through speculating upon the outcome of a research study or
experiment. It is often a statement about a population. Generally, a researcher takes a sample
from a given population and assesses if the sampled data supports the stated hypothesis about
the population. A typical example of hypothesis is to test if the mean of population A is equal to
the mean of population B, or to test if the mean of population A is significantly different from
zero. In either case, a sample of data will be taken from the populations to evaluate the claim.
The steps for constructing and testing hypotheses are as follows.
State the hypotheses: There are two types of hypothesis—the null hypothesis and the alter-
native hypothesis. The null hypothesis is the hypothesis under investigation and it is denoted
by H0. The alternative hypothesis is the other hypothesis if the null hypothesis is rejected and
5.2. CONSTRUCTING AND TESTING DATA HYPOTHESES 41
it is denoted by H1. The following are possible examples of hypotheses for one-tail and two-tail tests:

H0: μ = 0 vs. H1: μ < 0 (one-tail test)
H0: μ = 0 vs. H1: μ > 0 (one-tail test)
H0: μ ≥ 0 vs. H1: μ < 0 (one-tail test)
H0: μ ≤ 0 vs. H1: μ > 0 (one-tail test)
H0: μ = 0 vs. H1: μ ≠ 0 (two-tail test).
Set the significance level (α): There is a need to set the level of significance before the experiment. A Type I error occurs when the null hypothesis is rejected when it is actually true. The probability of committing a Type I error in a test is denoted by P(reject H0 | H0 true) = α, and the significance level is usually set at 5%. A Type II error is committed when accepting a null hypothesis which is actually false; its probability is denoted by β. Table 5.1 shows the statistical errors.
Table 5.1: Statistical errors

Decision    H0 is actually true   H0 is actually false
Reject H0   Type I error (α)      Correct
Accept H0   Correct               Type II error (β)
Compute a test statistic: A test statistic is used to measure the degree of agreement between the sample data and the null hypothesis. It is a standardized value obtained from the sample data and is used to determine whether to accept or reject the null hypothesis. If the data provide strong evidence for rejecting the null hypothesis, the absolute value of the test statistic (in the case of F or t tests) becomes large and the p-value becomes small, depending on the alternative hypothesis. A computed test statistic that exceeds the critical value will result in a p-value below the selected significance level. The choice of test statistic depends on the probability model assumed under the null hypothesis. The commonly used test statistics include the z-statistic, t-statistic, F-statistic, and chi-square statistic.
Construct acceptance/rejection regions: The acceptance region consists of values that are consistent with the null hypothesis, while the rejection region consists of values that are very unlikely to occur if the null hypothesis is true. The rejection region is also known as the critical region, and the values contained in the critical region are called critical values. The critical value is compared with the test statistic to determine whether to reject the null hypothesis. If the absolute value of the test statistic is greater than the critical value, the null hypothesis is rejected. The acceptance and rejection regions of a two-tail test for the normal distribution are depicted in Figure 5.1.
Figure 5.1: Critical regions for the test of significance (acceptance region between −z(α/2) and +z(α/2); critical regions in the two tails).
Draw a conclusion: The decision rule is to reject the null hypothesis whenever the absolute value of the computed statistic is greater than the critical value (table value). Thus, a valid conclusion can be drawn based on the data.
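As an illustration of the steps above (not taken from the text), the following Python sketch runs a one-sample, two-tail t-test of H0: μ = 0 at the 5% significance level using SciPy; the data vector is made up purely for demonstration.

```python
# One-sample two-tail t-test of H0: mu = 0 at alpha = 0.05 (illustrative data).
from scipy import stats

sample = [2.3, -0.4, 1.8, 0.9, -1.2, 2.7, 1.1, 0.5, 1.9, -0.3]

t_stat, p_value = stats.ttest_1samp(sample, popmean=0)   # test statistic and p-value
critical = stats.t.ppf(0.975, df=len(sample) - 1)        # two-tail critical value

if abs(t_stat) > critical:       # equivalently, p_value < 0.05
    print(f"t = {t_stat:.2f} > {critical:.2f}: reject H0")
else:
    print(f"t = {t_stat:.2f} <= {critical:.2f}: do not reject H0")
```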
5.3 REGRESSION ANALYSIS: CONCEPT AND
APPLICATIONS (INTERPRET DATA RELATIONSHIPS
AND FORECASTING)
Regression analysis is a statistical technique to analyze quantitative data in order to estimate
model parameters and make forecasts. Regression analysis is used to model the relationship
between a response variable and one or more explanatory variables. The response variable or
dependent variable is denoted by Y , while the independent variable or explanatory variable is
denoted by X. A regression model is said to be simple linear regression if it contains one depen-
dent variable and one independent variable, and the two variables are linearly related. Thus, the
variation in the dependent variable (Y ) can be explained by the variation in independent variable
(X). A multiple regression shows the relationship between one dependent variable (Y ) and more
than one independent variable (i.e., X1, X2, …, Xn). In this chapter, we will concentrate on the
simple linear regression and multiple regression models.
A forecast is a prediction about the future values of data. A regression forecast can be
either extrapolation or interpolation. Extrapolation is an estimation of value based on extending
a known sequence of values and interpolation is an estimation of value within the sequence
of two known values. After modeling the relationship between variables of interest to create a
regression model, we can use this model equation to predict the value of the dependent variable
given any value of the independent (explanatory) variable.
5.3.1 ASSUMPTIONS OF LINEAR REGRESSION
(i) The relationship between the response variable (Y ) and explanatory variable (X) must be
linear.
(ii) The explanatory variable (X) is non-stochastic (deterministic).
(iii) The model residuals (errors) are statistically independent.
(iv) The errors are normally distributed with zero mean and a constant standard deviation.
5.3.2 SIMPLE LINEAR REGRESSION
The mathematical model for the simple linear regression is given as:
Y = β0 + β1X + ε,     (5.1)

where ε is the error term, and β0 and β1 are the intercept and the regression coefficient of X, respectively. This equation mathematically represents the relationship between two variables, for example, the relationship between income and expenditure, advertising spend in a region and sales revenue, years of education and salary, and other uses.
This model will give us the best line of fit for the two variables under investigation. Using
the ordinary least squares (OLS) method to estimate the model, the regression coefficients can
be calculated as follows:
β1 = [nΣXY − (ΣX)(ΣY)] / [nΣX² − (ΣX)²]     (5.2)

β0 = Ȳ − β1X̄.     (5.3)

The regression model is fitted as

Ŷ = β̂0 + β̂1X.     (5.4)
Example 5.1
Table 5.2 shows the log of final consumption expenditure (Y ) and log of Gross Domestic
Products (X) in Nigeria between 1981 and 2015. (i) Fit the model by regressing final consump-
tion expenditure (Y ) on Gross Domestic Products (X) and interpret the result. (ii) Extrapolate
the value of log of final consumption expenditure when log of Gross Domestic Products is 7.0.
Table 5.2: Final consumption expenditure and log Gross Domestic Products

Year  Y      X     | Year  Y      X     | Year  Y      X     | Year  Y      X
1981  25.33  4.82  | 1991  25.03  4.86  | 2001  25.56  5.10  | 2011  26.32  5.95
1982  25.28  4.81  | 1992  25.13  4.87  | 2002  25.57  5.13  | 2012  26.31  6.00
1983  25.15  4.76  | 1993  25.11  4.89  | 2003  25.69  5.23  | 2013  26.47  6.05
1984  25.07  4.74  | 1994  25.07  4.90  | 2004  25.92  5.52  | 2014  26.47  6.11
1985  25.15  4.82  | 1995  25.13  4.89  | 2005  26.01  5.56  | 2015  26.47  6.14
1986  24.96  4.73  | 1996  25.30  4.94  | 2006  25.90  5.64  |
1987  24.74  4.61  | 1997  25.27  4.97  | 2007  26.26  5.70  |
1988  24.84  4.69  | 1998  25.28  5.00  | 2008  26.18  5.76  |
1989  24.81  4.75  | 1999  25.22  5.00  | 2009  26.36  5.83  |
1990  25.00  4.87  | 2000  25.24  5.05  | 2010  26.34  5.91  |
Source: WDI
Solution

(i) n = 35, ΣXY = 4673.12, ΣX = 182.60, ΣY = 893.94, ΣX² = 960.90, X̄ = 5.22, and Ȳ = 25.54.

β1 = [35(4673.12) − (182.60 × 893.94)] / [35(960.90) − (182.60)²] = 1.1282.

Substitute the value of β1 into (5.3) to solve for β0:

β0 = 25.54 − (1.1282 × 5.22) = 19.6508.

Therefore, the model is Ŷ = 19.65 + 1.13X.
A one percent change in Gross Domestic Product (X) is associated with a 1.13 percent change in final consumption expenditure. This indicates a consumption elasticity with respect to GDP of 1.13.
(ii) When X = 7.0, then Ŷ = 19.65 + 1.13(7) = 27.56.
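A minimal Python sketch (not from the text) of equations (5.2)–(5.4); the toy x and y vectors below are hypothetical, and substituting the Table 5.2 columns should reproduce the Example 5.1 coefficients up to rounding.

```python
# Simple linear regression by the OLS formulas (5.2)-(5.4).
def simple_ols(x, y):
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(a * b for a, b in zip(x, y))
    sum_x2 = sum(a * a for a in x)
    b1 = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)   # slope, eq. (5.2)
    b0 = sum_y / n - b1 * sum_x / n                                  # intercept, eq. (5.3)
    return b0, b1

# hypothetical toy data; replace with the Table 5.2 columns to reproduce Example 5.1
x = [4.8, 5.0, 5.5, 6.0, 6.1]
y = [25.0, 25.2, 25.9, 26.3, 26.5]
b0, b1 = simple_ols(x, y)
print(round(b0, 2), round(b1, 2), round(b0 + b1 * 7.0, 2))   # fitted prediction at X = 7.0
```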
5.3.3 MULTIPLE REGRESSION
This is an extension of simple linear regression. In the real world, what determines the value of
a particular variable, such as salary, is dictated by more than one explanatory variable, such as
years of education, years of experience, geographical location, and even gender or ethnicity. The
model that shows the relationship between the response variable (Y ) and a set of explanatory
variables (Xi ) is called Multiple Regression.
The multiple regression model is of the form:
Y = β0 + β1X1 + β2X2 + … + βkXk + ε,     (5.5)

where Y is the dependent variable, the Xi are the explanatory variables, and the βi are the regression coefficients (the change in Y per unit change in Xi).
5.3.4 ASSUMPTIONS OF MULTIPLE REGRESSION
(i) The relationship between the dependent variable and the independent variables is linear.
(ii) The residuals are normally distributed.
(iii) The residuals are homoscedastic, i.e., they have constant variance.
(iv) No multicollinearity, i.e., independent variables are not correlated.
For the sake of simplicity, we assumed that we are dealing with one dependent variable
and two independent variables. The model can be stated as:
Y = β0 + β1X1 + β2X2 + ε.     (5.6)

The regression coefficients can be defined as:

β1 = [ (Σx2²)(Σx1y) − (Σx1x2)(Σx2y) ] / [ (Σx1²)(Σx2²) − (Σx1x2)² ]     (5.7)

β2 = [ (Σx1²)(Σx2y) − (Σx1x2)(Σx1y) ] / [ (Σx1²)(Σx2²) − (Σx1x2)² ]     (5.8)

β0 = Ȳ − β1X̄1 − β2X̄2,     (5.9)
where

Σx1y = ΣX1Y − (ΣX1)(ΣY)/N
Σx2y = ΣX2Y − (ΣX2)(ΣY)/N
Σx1x2 = ΣX1X2 − (ΣX1)(ΣX2)/N
Σx1² = ΣX1X1 − (ΣX1)(ΣX1)/N
Σx2² = ΣX2X2 − (ΣX2)(ΣX2)/N.
Example 5.2
Table 5.3 shows the data for Nigeria's net foreign direct investment inflow as a percentage of GDP (FDI), real GDP growth rate (GDP), and inflation rate (INFL) recorded in 1986–2015. (i) Regress FDI on INFL and GDP. (ii) What is the value of FDI when INFL = 10 and GDP = 2.72?

Table 5.3: Data for Nigeria's net foreign direct investment inflow

Year  FDI    INFL   GDP     | Year  FDI   INFL   GDP
1986  0.93   5.72   -8.75   | 2001  2.70  18.87  4.41
1987  2.53   11.29  -10.75  | 2002  3.17  12.88  3.78
1988  1.63   54.51  7.54    | 2003  2.96  14.03  10.35
1989  7.78   50.47  6.47    | 2004  2.13  15.00  33.74
1990  1.91   7.36   12.77   | 2005  4.44  17.86  3.44
1991  2.60   13.01  -0.62   | 2006  3.34  8.24   8.21
1992  3.06   44.59  0.43    | 2007  3.63  5.38   6.83
1993  8.52   57.17  2.09    | 2008  3.94  11.58  6.27
1994  10.83  57.03  0.91    | 2009  5.05  11.54  6.93
1995  3.78   72.84  -0.31   | 2010  1.63  13.72  7.84
1996  4.55   29.27  4.99    | 2011  2.15  10.84  4.89
1997  4.30   8.53   2.80    | 2012  1.53  12.22  4.28
1998  3.28   10.00  2.72    | 2013  1.08  8.48   5.39
1999  2.80   6.62   0.47    | 2014  0.82  8.06   6.31
2000  2.46   6.93   5.32    | 2015  0.64  9.02   2.65
Source: WDI
Solution
(i) To reduce computational effort, the relationship between Nigeria's net foreign direct investment inflow as a percentage of GDP (FDI), real GDP growth rate (GDP), and inflation rate (INFL) is estimated in E-Views (see Table 5.4). From the output, β0 = 1.93, β1 = −0.01, and β2 = 0.07. Thus, the model is estimated as:

FDI = 1.93 − 0.01 × GDP + 0.07 × INFL.

Note that only the constant term and the inflation rate are statistically significant, while GDP is not significant in the model. Let us relax the significance of the variables at this moment and assume that all variables are significant. The model can then be interpreted as follows: for a unit change in the real GDP growth rate, net foreign direct investment inflow as a percentage of GDP (FDI) decreases by 0.01 percentage points; for a unit change in the inflation rate, FDI increases by 0.07 percentage points; and the constant term is 1.93 (i.e., the value of FDI when GDP and INFL are zero).

(ii) FDI = 1.93 − 0.01(2.72) + 0.07(10) = 2.60.

Remember that the difference between the actual value and the estimated value is the error; thus the error is 3.28 − 2.60 = 0.68, using the actual 1998 value of FDI (see Table 5.4).
Table 5.4: E-Views output

Dependent Variable: FDI
Method: Least Squares
Date: 11/23/17  Time: 15:38
Sample: 1986 2015   Included Observations: 30

Variable  Coefficient  Standard Error  t-Statistic  Probability
C         1.932733     0.578959        3.338289     0.0025
GDP       -0.007828    0.048853        -0.160237    0.8739
INFL      0.070621     0.018576        3.801795     0.0007

R-squared 0.353176            Mean dependent var 3.339000
Adjusted R-squared 0.305263   S.D. dependent var 2.284458
S.E. of regression 1.904116   Akaike info criterion 4.220552
Sum squared resid 97.89273    Schwarz criterion 4.360672
Log likelihood -60.30828      Hannan-Quinn criterion 4.265377
F-statistic 7.371209          Durbin-Watson statistic 2.043500
Prob(F-statistic) 0.002790
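A hedged Python sketch of the two-regressor estimation using numpy's least-squares solver instead of E-Views. Only the first few rows of Table 5.3 are included here as placeholders, so the printed coefficients will match Table 5.4 only when the full 1986–2015 series is supplied.

```python
# Multiple regression of FDI on a constant, GDP, and INFL via least squares.
import numpy as np

# hypothetical short series for illustration; substitute the full Table 5.3 columns
fdi  = np.array([0.93, 2.53, 1.63, 7.78, 1.91, 2.60])
infl = np.array([5.72, 11.29, 54.51, 50.47, 7.36, 13.01])
gdp  = np.array([-8.75, -10.75, 7.54, 6.47, 12.77, -0.62])

X = np.column_stack([np.ones_like(fdi), gdp, infl])   # design matrix [constant, GDP, INFL]
beta, *_ = np.linalg.lstsq(X, fdi, rcond=None)         # OLS estimates
b0, b1, b2 = beta
print(f"FDI_hat = {b0:.2f} + {b1:.2f}*GDP + {b2:.2f}*INFL")
print("prediction at GDP=2.72, INFL=10:", round(b0 + b1 * 2.72 + b2 * 10, 2))
```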
5.4 EXERCISES
5.1. Discuss the data preparation and evaluation processes.
5.2. What is a hypothesis? Give an example of a phenomenon which could be explored using
a hypothesis.
5.3.
a. What is the difference between a null and alternative hypothesis?
b. State the steps in constructing and testing a hypothesis.
5.4.
a. What do you understand by regression analysis?
b. State the assumptions of the linear regression model.
c. Table 5.5 shows the GDP growth rate (Y ) and inflation rate (X) of the U.S. from
1961–2016. Fit the regression model; what is the value of GDP growth rate (%)
when inflation rate is 3.0%?
5.5. The following data in Table 5.6 shows the monthly revenues (in billions naira) generated
from the federal government of Nigeria from Oil (OILR) and non-oil (NONR) sectors
between June 2015 and May 2016.
Table 5.5: GDP growth rate (Y) and inflation rate (X)

Year  Y      X      | Year  Y      X      | Year  Y      X     | Year  Y      X
1961  2.30   1.08   | 1975  -0.20  9.13   | 1989  3.68   4.83  | 2003  2.81   2.27
1962  6.10   1.12   | 1976  5.39   5.74   | 1990  1.92   5.40  | 2004  3.79   2.68
1963  4.40   1.21   | 1977  4.61   6.49   | 1991  -0.07  4.23  | 2005  3.35   3.39
1964  5.80   1.31   | 1978  5.56   7.65   | 1992  3.56   3.03  | 2006  2.67   3.23
1965  6.40   1.67   | 1979  3.18   11.27  | 1993  2.75   2.95  | 2007  1.78   2.85
1966  6.50   2.99   | 1980  -0.24  13.51  | 1994  4.04   2.61  | 2008  -0.29  3.84
1967  2.50   2.78   | 1981  2.59   10.32  | 1995  2.72   2.81  | 2009  -2.78  -0.36
1968  4.80   4.22   | 1982  -1.91  6.16   | 1996  3.80   2.93  | 2010  2.53   1.64
1969  3.10   5.41   | 1983  4.63   3.21   | 1997  4.49   2.34  | 2011  1.60   3.16
1970  3.21   5.90   | 1984  7.26   4.32   | 1998  4.45   1.55  | 2012  2.22   2.07
1971  3.30   4.26   | 1985  4.24   3.56   | 1999  4.69   2.19  | 2013  1.68   1.46
1972  5.26   3.31   | 1986  3.51   1.86   | 2000  4.09   3.38  | 2014  2.37   1.62
1973  5.64   6.22   | 1987  3.46   3.74   | 2001  0.98   2.83  | 2015  2.86   0.12
1974  -0.52  11.04  | 1988  4.20   4.01   | 2002  1.79   1.59  | 2016  1.49   1.26
Source: WDI
Table 5.6: Monthly revenues

Month  Jun-15  Jul-15  Aug-15  Sep-15  Oct-15  Nov-15
OILR   270.16  224.06  218.22  217.70  112.13  202.97
NONR   262.76  281.41  215.69  165.91  276.96  160.59
Month  Dec-15  Jan-16  Feb-16  Mar-16  Apr-16  May-16
OILR   218.40  256.36  225.92  183.81  183.31  161.36
NONR   162.75  187.64  159.23  143.74  157.10  173.15
Source: CBN Statistical Bulletin
a. Plot a scatter diagram of the data.
b. Calculate the coefficients of the regression model.
c. If the Nigeria oil sector revenue in June 2016 is NGN170.5 billion, what would be
the predicted revenue for the non-oil sector?
CHAPTER 6

Analyzing Survey Data
Analyzing survey data entails processing data, presenting data and drawing valid conclusions
from the analysis. In the following example, we are going to use consumer expenditure surveys
to illustrate how survey data are collected. We will consider types of measurement and their
real-world applications, explore survey research puzzles, obtain strategic insights from survey
research, and discuss how to test the quality of data.
6.1 QUANTITATIVE TECHNIQUE OF COLLECTING
SURVEY DATA: CONSUMER EXPENDITURE SURVEY
Consumer expenditure surveys are conducted to obtain data regarding household income, ex-
penditure and the demographic characteristics of consumers. The method of data collection is
similar to the personal interview and questionnaire methods described in Section 2.3. In the
survey, we are able to discover which are the most frequently purchased items in the market
with respect to income group, geographic location, size of the family, and other demographic
information. This survey shows the pattern of consumer spending at a particular period of time
and provides a continuous trend of information regarding purchasing habits. Consumer expen-
diture surveys are also used to determine the relative importance of goods and services in the
“basket” of products used to determine the consumer price index (CPI). The bra Inflation Index
is an independent consumer price index developed by bra to measure the price level movement in
Nigerian markets, built upon consumer expenditure surveys. The relative importance of products
as part of an entire basket of purchases is used to provide the weightings necessary to calculate
the CPI.
6.2 TYPES OF MEASUREMENT SCALES AND THEIR
APPLICATIONS
The four types of measurement scales include nominal, ordinal, interval, and ratio.
Nominal scale: Nominal is a Latin word that means name. Nominal scales are used to classify
variables into discrete groups; these groups do not have any inherent numerical association.
The variables on a nominal scale are also known as categorical variables. Nominal variables are
categorized without any order or sequence. Example of categorical variables are sex, race, eye
color, genotype, religion, etc. Nominal variables can be represented in a bar chart, histogram,
or pie chart. Consider Table 6.1, which contains information obtained from a survey of the types of houses in a community; the frequencies are also presented as a bar chart in Figure 6.1.

Table 6.1: Types of houses in a community

House type  Bungalow  Duplex  Cottage  Detached  Flat  Semi-detached  Terrace
Frequency   12        16      2        7         5     6              3
Figure 6.1: Frequency of types of houses in a community.
Ordinal scale: Ordinal variables are similar to nominal or categorical variables, with the dif-
ference that each variable is assigned a numerical value based on a desired ordering. For instance,
education level can be categorized into five categories; primary school leaver, secondary school
leaver, college of education graduate, polytechnic graduate, and university graduate. We may
want to assign the number 1 to indicate primary school education, 2 to secondary school, 3 to
college of education, 4 to polytechnic, and 5 to university. In this case, we can delineate the
difference between one educational level and another based on the order applied; for example,
the gap between primary and secondary school leavers is wider than the gap between polytechnic and university graduates. However, treating the ordinal codes as numeric values implies that each category is equally spaced, i.e., that the gap between primary and secondary education is the same as the gap between polytechnic and university education. This assumption may not hold true, thus care must be taken to justify the ordering of an ordinal scale. Table 6.2 shows the new country classification by income level. The World Bank classifies countries into low-income, lower-middle income, upper-middle income, and high-income thresholds with unequal GNI/capita ranges. We can assign values to these thresholds as follows: low-income = 1; lower-middle income = 2; upper-middle income = 3; and high-income = 4.
Table 6.2: Country classification by income level

Threshold            GNI/Capita (current US$)
Low-income           < 1,005
Lower-middle income  1,006 – 3,955
Upper-middle income  3,956 – 12,235
High-income          > 12,235
Interval scale: An interval variable is similar to an ordinal variable, the main difference being that an interval variable contains a range of values that are equally spaced (i.e., the interval variable has values of equal range). Assume that a teacher gave a test marked out of 10 to 30 pupils in a mathematics class and none of the pupils scored zero. We know that the minimum and maximum scores are one and ten marks, respectively. The teacher can classify the test scores into five distinct groups as follows: 1–2, 3–4, 5–6, 7–8, 9–10. See Table 6.3.

Table 6.3: Distribution of test scores

Test score  1–2  3–4  5–6  7–8  9–10
Frequency   2    1    10   15   2
Ratio scale: A ratio variable has the qualities of nominal, ordinal, and interval variables, and also has a meaningful absolute zero. This implies that a meaningful ratio can be constructed with a ratio variable. A ratio scale provides order, interval values, and the ability to calculate ratios, since a "true zero" can be defined. For example, when we say that a service time is greater than zero, we have conceptualized a zero point in time. A ratio scale allows comparisons to be made easily; for instance, it is possible to say that "the doctor has seen twice as many patients this month as in the previous month." In research, it is a good idea to use a higher level of measurement, such as ratio or interval, rather than a lower one, such as nominal or ordinal, where possible. Other examples of ratio variables are height, weight, length, gross sales, expenditure, income, etc. A ratio variable can be used as the dependent variable for most parametric statistical tests such as t-tests, F-tests, correlation, and regression analysis.
6.3
SURVEY RESEARCH RIGOR
In order to formulate a survey that achieves results reflective of a population while minimizing
errors, the researcher must follow rigorous standards of quality, engagement and execution. These
three factors are discussed below.
(a) Quality of questionnaire: The quality of the questionnaire determines the quality of the
data obtained from the survey. It is the aim of the researcher to understand a phenomenon
by exploring the underlying reasons for variance in the responses obtained from a survey.
If survey questions are poorly worded or ambiguous, this results in respondent errors and
adds variance to the results which are not reflective of the phenomenon being studied, but
are simply a result of poor survey design. Before the questionnaire is set to be administered,
a researcher must perform the following tasks on the questionnaire.
(i) Check the correctness of the questions by putting himself and others in the position of the respondents to provide answers to the questions.
(ii) Ensure that questions are simple, avoiding double-negatives (do you disagree that housing is too expensive?) or double-barreled questions (do you think housing is too expensive and cars are affordable?).
(iii) Avoid leading questions (e.g., "Do you think politicians deserve their pay rises despite the poor performance of the economy and their personal lack of ethics?").
(iv) Make the questions as concise as possible.
(v) Constantly edit the questionnaire by removing irrelevant questions and checking for ambiguous meanings.
(vi) Accommodate all possible answers, for instance by offering "none of the above" or "other, specify" type responses.
(b) Survey engagement: When conducting a survey, it is necessary to ask respondents to sacrifice their time, thus researchers should be conscious of the duration of the survey. To achieve greater accuracy in our survey, we should be mindful of our respondents' time and set a time limit on our survey, particularly when using online surveys. This will encourage more respondents to participate in our survey. The greater the number of people that participate in the survey, the more robust the analysis should be (assuming that the correct demographic is being surveyed). It is essential to add "save and continue" buttons to allow a respondent to finish a survey later; this is especially useful for longer surveys. For online surveys, it is good practice to show a progress bar to indicate the percentage of questions that have been completed by the respondent. More importantly, offering respondents incentives enhances the quality of data. If incentives are given to respondents, they will be more willing to supply correct data and express their feelings toward a question, and offering incentives can often lead to a larger number of respondents willing to complete the survey. However, the ability to offer incentives is often dependent on research funding, and may not be cost-effective on a large scale.
(c) Effective survey execution: Aside from administering face-to-face surveys or interviews,
we can also use other methods to administer our questionnaires. The methods include
email, social media, mobile apps, and offline surveys. These methods are effective and
efficient for conducting a large sample survey in a limited time interval. In addition, we
should test the functionality and responsiveness of the online questionnaire before it is
finally deployed. Communication between the researcher and respondents is crucial before,
during and after administering questionnaires.
6.4 TESTING DATA QUALITY: SURVEY ERROR
DETECTION PROCEDURES
Good quality data should be free of errors. After collecting data from the field, it is necessary
to perform a routine data check to confirm the authenticity of the data and to check for other
errors that may have occurred during the data gathering process.
The Consumer Expenditure Survey and Commodity Price Expenditure Survey are con-
ducted by bra to track consumer spending patterns. As with any survey, the accuracy of consumer
expenditure and commodity price estimates depends on the accuracy of the collected data. The
surveys have several procedures already in place to ensure the accuracy of the published results.
The following steps are used to cross-check data generated from the field.
(a) Re-interview respondents.
(b) Use computerized checks to verify the logical consistency of responses given by respon-
dents.
(c) Perform an outlier review of individual survey responses.
(d) Perform an outlier review of the summarized expenditure estimates before they are pub-
lished.
(e) Use Benford’s Law to determine the distribution of the leading digit of all numbers re-
ported on a survey form.
Here, we are going to discuss the characteristics, technique, and application of Benford's law. The kinds of data to which Benford's law can be applied are as follows.
(a) Data that has a wide variety in the number of figures (i.e., data with a large range).
(b) The data is right skewed, i.e., the mean is greater than the median, and the distribution
has a long right-tail rather than being symmetric.
(c) Data with values that are formed by a mathematical combination of numbers from other
distributions.
(d) Data that has no predefined maximum or minimum value (except a minimum of zero).
Benford’s Law involves examining the distribution of the leading (or left-most) digits of
all the numbers reported on a survey form. These leading digits have been observed to follow a
certain distribution regardless of the nature of the survey. This law is used to detect sources of
unusual data, especially in the case that the field worker is under suspicion of manipulating data.
Benford's Law states that the proportion of "real world" numbers whose leading digit is d (d = 1, 2, 3, …, 9) is approximately log10((d + 1)/d), where d is the leading digit of the randomly selected number. For instance, 125,000, 28,013.20, and 72,467,456 have the leading digits 1, 2, and 7, respectively.

Benford's law prediction for the leading digits (1–9) is tabulated in Table 6.4.
Table 6.4: Benford's Law

1st significant digit   1      2      3      4     5     6     7     8     9
Benford prediction (%)  30.10  17.61  12.49  9.69  7.92  6.69  5.80  5.12  4.58
Comparing the distribution of the observed leading digits against Benford's predicted distribution, we can identify large variances between the two. Wide margins between the observed and predicted distributions may indicate data tampering or erroneous data-gathering processes, which could warrant further investigation.

Let us consider a hypothetical example, assuming that there were 2,000 respondents in the Consumer Expenditure Survey and the leading digits of their expenditures are classified as in Table 6.5.
From Table 6.5, we can observe that the percentages of reported expenditures with leading digits 3 and 8 vary significantly from the predicted distribution, with absolute differences of 3.50 and 3.68, respectively. Thus, the expenditure values with leading digits of 3 and 8 should be scrutinized because they are suspicious.
Table 6.5: Benford's prediction

Leading digit (d)  Reported expenditure (Number)  Reported (%)  Benford's prediction log10((d+1)/d) (%)  Difference
1                  608                            30.40         30.10                                    0.30
2                  350                            17.50         17.61                                    -0.11
3                  180                            8.99          12.49                                    -3.50
4                  191                            9.54          9.69                                     -0.15
5                  156                            7.78          7.92                                     -0.14
6                  132                            6.62          6.69                                     -0.07
7                  116                            5.81          5.80                                     0.01
8                  176                            8.80          5.12                                     3.68
9                  91                             4.56          4.58                                     -0.02
Total              2,000                          100           100
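A small Python sketch, assuming the counts in Table 6.5, that compares observed leading-digit percentages with the Benford prediction log10(1 + 1/d) and flags digits whose absolute difference exceeds 3 percentage points (this threshold is illustrative, not prescribed by the text).

```python
# Compare observed leading-digit percentages with Benford's prediction.
import math

observed = {1: 608, 2: 350, 3: 180, 4: 191, 5: 156,
            6: 132, 7: 116, 8: 176, 9: 91}
total = sum(observed.values())   # 2,000 respondents

for d in range(1, 10):
    benford = 100 * math.log10(1 + 1 / d)        # predicted percentage for digit d
    reported = 100 * observed[d] / total         # observed percentage
    flag = "  <-- check" if abs(reported - benford) > 3 else ""
    print(f"digit {d}: observed {reported:5.2f}%  Benford {benford:5.2f}%{flag}")
```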
6.5 EXERCISES
6.1. Discuss the purpose(s) of a consumer expenditure survey.
6.2. Describe the four types of measurement scales.
6.3. What are some of the features of a good questionnaire?
6.4. State the procedures used to detect errors in a survey.
6.5. What is Benford’s law, and what are its applications?
6.6. Given the following expenditure data ($) obtained from 30 households:
1,500, 1,725, 2,458, 1,620, 2,550, 1,740, 1,420, 2,375, 1,810, 1,450, 1,430, 980, 910,
1,001, 2,000, 1,720, 899, 769, 580, 4,000, 1,665, 3,110, 755, 680, 1,200, 2,500, 590,
880, 1,008, 675.
Calculate Benford’s prediction for the data and interpret your results.
6.7. The statistics department of the Central Bank of Nigeria conducted an expenditure survey to explore the spending distribution pattern of people living in the capital city, Abuja. A random sample of 5,000 households was selected and the results of the survey are summarized in Table 6.6.
a. Compute Benford’s prediction for the distribution of expenditure.
b. Do any suspicious values arise from the previous computation?
Table 6.6: Results of the survey of 5,000 randomly selected households

Leading digit  1    2    3    4    5    6    7    8    9
Frequency      689  496  126  784  329  545  636  952  965

CHAPTER 7

Index Methodology
bra develops and maintains a number of indices that measure the performance of the Nigerian
economy. In this chapter, we will elucidate on the principles and techniques used to formulate
indices by providing the reader with practical examples. These indices include bra Expectation
Index, bra Consumer Confidence Index, braIndex, bra Producer Price Index, bra Bond Index,
and bra Inflation Index. We will discuss each of the indices extensively.
7.1
bra EXPECTATION INDEX: PRINCIPLES,
TECHNIQUES, AND APPLICATIONS
bra conducts a monthly business survey which polls respondents across a number of sectors
about their expectations for the Nigerian economy during the next one-, three-, and six-month
time frames. Our expectation surveys gather the opinions of business leaders and entrepreneurs
regarding the performance of key areas such as: employment, prices, bank rates, economic con-
ditions, business conditions, cost of production, volume of products, market share, stock level
and production, access to credit facilities, financial condition, and the success of social or fis-
cal policies, among other factors. The survey obtains a cross-section of sentiments from small,
medium, and large firms in the following sectors of the economy: the financial sector, manufac-
turing sector, hotel sector, petroleum marketing, telecommunication sector, commercial sector,
agricultural sector, and construction sector.
7.1.1 OBJECTIVES OF bra EXPECTATION INDEX
(i) To survey short run business conditions.
(ii) To assist optimal decision making and better planning for economic improvement.
(iii) To serve as a leading business cycle indicator.
7.1.2 METHODOLOGY
There are two approaches to compute bra Expectation Index, namely—(i) diffusion index and
(ii) net balance.
(a) Diffusion index: This is calculated by assigning weights to the percentage of respondents
under each response category: positive, negative, and neutral. The index takes the values
of 0–100. An index value of 100 indicates that the respondents are unanimously positive
regarding the economic variable under consideration, or unanimously expect this variable
to improve. The index value of zero indicates a unanimously pessimistic response about
the key variable, or a unanimous expectation that this variable will deteriorate. For the al-
location of weightings to the response options, we will look at the cases when there are three response options and when there are five response options.

Case I: Three response options. Survey questions in this case provide three different options to the respondents—Positive/Increase, Negative/Decrease, and Neutral/Remain Unchanged. We assign a weight of 1.0 to the percentage reporting an increase, a weight of 0.5 to the percentage reporting no change, and a weight of zero to the percentage reporting a decrease.

Case II: Five response options. For questions where five options are provided—substantially increase, increase, same, decrease, and substantially decrease—the diffusion index weighs the responses 1.0, 0.75, 0.5, 0.25, and 0.0, respectively.
(b) Net balance: This is calculated as the percentage of respondents expecting an improvement/increase in an economic indicator less the percentage expecting a deterioration/decrease in the same economic variable.
7.1.3 CALCULATION OF bra EXPECTATION INDEX
bra adopts the formula below to calculate the diffusion index for the business expectation survey:
Index = ½ [(% reporting increase − % reporting decrease) + 100].     (7.1)
The index is read thus.
1. If the index is above 50, it shows that respondents expect conditions to improve.
2. If the index is at 50, it shows that respondents expect conditions to remain unchanged.
3. If the index is below 50, it shows that respondents expect conditions to deteriorate.
It is important to keep in mind the current state of what is being measured. To illustrate,
if an economy is in a deep recession, current economic conditions will be very unfavorable. If we
poll respondents about economic expectations and determine a result above 50, this indicates
that our respondents believe there will be some improvement in economic conditions. However,
this does not necessarily mean that the economy is expected to achieve positive GDP growth,
only that conditions are expected to improve from a deeply negative position to possibly slightly
negative (perhaps respondents believe that the economy will contract at a slower pace). See the
illustrative example in Table 7.1.
Table 7.1: Illustrative example

Period    Responses in percentages            Diffusion index
          Increase   Decrease   Unchanged
1-month   75         15         10            80
3-month   50         0          50            75
6-month   0          0          100           50
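A minimal Python sketch (not from the text) of formula (7.1), reproducing the diffusion-index values shown in Table 7.1.

```python
# Diffusion index per equation (7.1).
def diffusion_index(increase, decrease):
    """increase/decrease are percentages of respondents; returns a value in 0-100."""
    return 0.5 * ((increase - decrease) + 100)

for period, inc, dec in [("1-month", 75, 15), ("3-month", 50, 0), ("6-month", 0, 0)]:
    print(period, diffusion_index(inc, dec))   # 80.0, 75.0, 50.0
```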
7.2
bra CONSUMER CONFIDENCE INDEX: PRINCIPLES,
TECHNIQUES, AND APPLICATIONS
The bra Consumer Confidence Index (braCCI) is based on a monthly survey that shows the direction of economic performance over a short period; it serves as a leading indicator that gives specific insight into what consumers expect of the economy within the specified period. The index is generated on a monthly basis from the monthly business expectation survey (BES) conducted by the company's field researchers. The questions selected from the observations made by the respondents to compute the index will remain constant throughout the history of the series.
7.2.1 COMPONENTS OF braCCI
The components of braCCI are the Present Situation Index (PSI) and the Expectation Index. The survey questions have three response options—Positive, Negative, and Neutral—and cover the same sectors as mentioned above.

The Present Situation Index is composed of current business conditions, current employment conditions, and general sales conditions. On the other hand, the constituents of the expectation index are expected business conditions, employment conditions, general sales conditions, and firm income expectations over the next six-month horizon.
7.2.2 METHODOLOGY
bra adopts the average relative approach in the computation of the consumer confidence index;
this can be written mathematically as:
Relative value (Xi) = (Positive responses)/(Total responses).
The relative value is derived for each of the questions; the average relative for the calendar month
is then estimated in (7.2) as follows:
Average Relative (CCI) = (Σᵢ₌₁⁷ Xi) / N,     (7.2)
where N represents the total number of variables constituting the component of the questions in
the document, while i represents the items in the components. The consumer confidence index
is then split into two parts, namely the current month and six-month outlook:
Present Situation Index (PSI) = (Σ_{j=1}^{3} X_j) / n_j,   (7.3)
where j represents the items of current month responses in the components and nj is the number
of current month variables in the component section of the document.
Expectation Index (EI) = (Σ_{k=1}^{4} X_k) / n_k,   (7.4)
where k represents the items of six month responses in the components and nk is the number of
six-month variables in the component section of the document. The benchmark for the index
as a reference of economic direction is 0.5 (50 index points).
7.2.3
ILLUSTRATIVE EXAMPLE
Let us consider that Table 7.2 is the summary of the business expectation survey for 100 respon-
dents.
Table 7.2: Business expectation survey

Response                                         Positive  Negative  Neutral  Relative
Business conditions (current month)              70        20        10       0.7
Business conditions (six month)                  60        20        20       0.6
Employment status (current month)                50        30        20       0.5
Employment status (six month)                    60        10        30       0.6
Sales status (current month)                     70        20        10       0.7
Sales status (six month)                         60        20        20       0.6
Firm income realized in the next six (6) months  80        10        10       0.8
General Outlook
Average Relative Value = (0.70 + 0.60 + 0.50 + 0.60 + 0.70 + 0.60)/6 = 0.617
Consumer Confidence Index = 0.617 × 100 = 61.7

Current Month
Average Relative Value = (0.70 + 0.50 + 0.70)/3 = 0.633
Present Situation Index = 0.633 × 100 = 63.3

Six-Month Outlook
Average Relative Value = (0.60 + 0.60 + 0.60 + 0.80)/4 = 0.65
Expectation Index = 0.65 × 100 = 65.0
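The worked example can be reproduced with a short Python sketch. The relative values are taken from Table 7.2; the variable names and the grouping into current-month and six-month items are our own illustration of Eqs. (7.2)-(7.4).

# Relative values from Table 7.2 (positive responses / total responses).
current_month = {"business": 0.70, "employment": 0.50, "sales": 0.70}
six_month = {"business": 0.60, "employment": 0.60, "sales": 0.60, "firm_income": 0.80}

def average_relative(values):
    # Eqs. (7.2)-(7.4): sum of the relative values divided by the number of items.
    values = list(values)
    return sum(values) / len(values)

psi = average_relative(current_month.values()) * 100   # Present Situation Index
ei = average_relative(six_month.values()) * 100        # Expectation Index
# The general-outlook CCI in the worked example averages the six
# business/employment/sales relatives (firm income is excluded).
cci = average_relative(list(current_month.values()) + list(six_month.values())[:3]) * 100

print(round(psi, 1), round(ei, 1), round(cci, 1))      # 63.3 65.0 61.7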
7.2.4
INDEX MAINTENANCE
We keep track of the components of the index and make proper adjustments (expanding the
number of components) when appropriate. Constant monitoring and management are in place
for effective maintenance.
7.3
braINDEX: PRINCIPLES, TECHNIQUES, AND
APPLICATIONS
braIndex is a pure market capitalization weighted index. This index is also known as bra50 Index.
This equity index is estimated daily, using the total market capitalization of the constituent stocks
without adjustment for free floats. Consistency in adjustment for corporate actions and events
as well as stock rotation is a quality control component of the braIndex’s formulation.
The braIndex consists of the 50 top-performing stocks on the Nigerian Stock Exchange
(NSE). The component stocks are fundamentally sound and technically liquid enough to justify
selection. Selected stocks must have proven records of earnings and growing shareholder value
in addition to being listed on the exchange for at least five years. Stocks are placed under rigorous
selection screening before they are qualified to be among the constituent stocks. These selection
criteria are: basic criteria, technical criteria, and fundamental criteria.
7.3.1 BASIC CRITERIA FOR SELECTION OF CONSTITUENT STOCKS
1. Any companies selected must be quoted companies on the Nigerian Stock Exchange
(NSE).
2. Stocks delisted from the NSE are NOT qualified.
3. Stocks must be listed on the NSE for a minimum of five years.
4. Companies that have not held an AGM (Annual General Meeting) in the last five years,
or for two consecutive years, are not considered.
5. Stocks must have a minimum market capitalization that is equal to or higher than the
average of its sector.
6. Any company that has neither paid dividends nor declared a rights issue or bonus issue
for a minimum of three years is not considered.
7. If any financial institutions fall under the marginal banks list of the Central Bank of Nigeria
(CBN), such a financial institution is disqualified.
8. Any companies that have not published up-to-date unqualified financial statements as of
the time of selection or review are not considered.
9. Any company that satisfies the above basic criteria will be considered in the calculation
of the bra equity index.
7.3.2 TECHNICAL CRITERIA
1. Stocks that have not traded actively and persistently on the NSE for a minimum of three
months, without a basic reason, are not considered. Component stocks that violate this
criterion are removed.
2. Stocks must be traded at an average volume of at least 5,000 shares a day for the last
12-month period preceding the determination date.
3. Stocks experiencing persistent price crashes for three months are not considered.
4. Stocks with a negative 52-week beta relative to the market beta are not considered.
5. Stocks with a negative 52-week beta relative to the industry beta are not considered.
6. P/E growth must also be positive for at least the last two consecutive years.
7.3.3 FUNDAMENTAL SELECTION CRITERIA
1. Earnings Per Share (EPS) must be positive (more than zero) for at least the last two years.
2. Dividends, rights issues, and bonus issues must be positive for at least the last two consec-
utive years.
3. Selected companies must have positive revenue reserve/retained earnings for at least the
last two consecutive years (out of five years).
4. Market Capitalization Threshold: Stocks must have a market capitalization that is more
than or equal to the industry/sectorial average capitalization.
5. The company must have a positive Return on Equity (ROE) for at least the last two con-
secutive years (out of five years).
6. Price-Earnings Ratio must be positive for at least the last two consecutive years.
7. Growth rate of EPS must be positive for at least the last two consecutive years (out of four
years).
8. Sustainable growth rate must be positive for at least the last two years.
7.3.4 CORPORATE EVENT
The following are corporate actions that require divisor adjustment.
Stock Split
A stock split simply involves altering the number of shares outstanding of a listed company and
proportionally adjusting the share price to compensate. This exercise in no way affects the intrin-
sic value or performance of the stock; the stock split affects the market price of each individual
share of stock and not the company’s value. For example, a company may have 1,000 outstanding
shares priced at $1,000 each; in a 2-for-1 split, the company now has 2,000 outstanding shares
priced at $500 each. Both before and after, the company’s market capitalization is $1,000,000.
Some firms may perform a stock split to make their shares more affordable for smaller investors,
as Apple did in 2014 by performing a 7-for-1 split once its shares hit $645. Theory suggests
that in some cases this may slightly increase market capitalization after a split, as demand for the
stock rises when a large number of small investors buy it. The most common splits
are 3-for-2, 2-for-1, 5-for-4, 3-for-1, and sometimes 1-for-10.
Dividend
If the price of a stock is adjusted for a dividend, this will affect the index divisor as well as the
index on the day of the adjustment only. The effect is that the index divisor will be lower than
50 for the price average, and immediately after such an adjustment, the divisor will go back to
50, provided there are no other corporate actions.
Bonus Issues
If the price of a stock is adjusted for a bonus, it will also affect the index divisor as well as the
index on the day of adjustment only. The effect on the divisor is the same as that of the dividend
adjustment described above.
Rights Issues
If a company declares a rights issue offer, its stock will be subject to a technical suspension during
the period the offer is available for existing shareholders to claim. The offered price will therefore
be lower than the market-determined price. However, a rights issue increases the company's
value, unlike dividend and bonus actions. The effect differs from the above in that the index
divisor will be lower than 50 (for the price average, for instance) only for the periods during
which the rights issue is offered; immediately after the period, the divisor will go back to 50,
provided no other companies are taking corporate actions. Note that the adjustment to the
divisor is only done once and is sustained for the entire period of suspension.
7.3.5 STOCK SPLITS ADJUSTMENT BAROMETER
A_i × (i + j) / i,   (7.5)
where Ai is the volume of issued shares, i is the proportion of share units, and j is the proportion
of shares added.
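A one-line Python sketch of Eq. (7.5), checked against the 1-for-4 bonus of stock C in Table 7.5; the function name is ours.

def adjusted_shares(issued, i, j):
    # Eq. (7.5): share count after a "j for i" bonus or split, A_i * (i + j) / i.
    return issued * (i + j) / i

# Stock C in Table 7.5 trades ex-bonus at "1 for 4": 14,000 shares become 17,500.
print(adjusted_shares(14_000, i=4, j=1))   # 17500.0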
7.3.6 FREE-FLOAT
Free-float of a company is the proportion of shares held by investors who are likely to be willing
to trade. It thus excludes shares held by strategic shareholders.
Free-float Adjustment
The following shareholdings are viewed as strategic in nature and are excluded from index cal-
culations.
1. Strategic Holdings: Shares held by strategic shareholding(s) that individually or collec-
tively control more than 30% of the shareholdings.
2. Directors’ Holdings: Shares held by director(s) who individually control more than 5%
of the shareholdings.
3. Cross-Holdings: Shares held by a Nigerian-listed company which controls more than
5% of the shareholdings as investments.
4. Lock-Up Shares: Shares held by shareholder(s) who individually or collectively represent
more than 5% of the shareholdings in the company with a publicly disclosed lock-up
arrangement.
5. The free float adjustment is built on the proportion (two-thirds) of the existing constituent.
Free-float Adjustment Formula
FAF1 = 100% − (2/3) × (100% − FAF2),   (7.6)
where FAF1 is the free-float of the new constituent stock and FAF2 is the free-float of the existing
constituent stock.
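A minimal Python sketch of Eq. (7.6); the 70% free float used in the example is hypothetical, not taken from the text.

def new_constituent_faf(faf_existing):
    # Eq. (7.6): free-float adjustment factor of a new constituent, in percent.
    return 100.0 - (2.0 / 3.0) * (100.0 - faf_existing)

print(new_constituent_faf(70.0))   # 80.0 for a hypothetical 70% existing free float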
7.3.7 CALCULATION OF braINDEX
The braIndex is calculated on a daily basis with mathematical expression:
Current Index = [Σ (P_t × IS × FAF)] / [Σ (P_{t−1} × IS × FAF)] × Yesterday's Closing Index,   (7.7)
where P_t is the current price at day t, P_{t−1} is the closing price at day (t − 1), IS is the issued
shares, and FAF is the free-float adjustment factor; its value falls between 0 and 1 and is adjusted
every six months.
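The sketch below applies Eq. (7.7) to the base-day and Day 1 figures of Tables 7.3 and 7.4. Because the illustrative example works with full market capitalization, FAF is set to 1.0 for every stock; the data layout is our own.

# (issued shares, base-day closing price, Day 1 closing price, FAF) per stock.
stocks = [
    (42_000, 12.20, 12.25, 1.0),   # A
    (27_500, 18.50, 19.15, 1.0),   # B
    (14_000, 9.30, 9.50, 1.0),     # C
    (18_000, 11.40, 11.80, 1.0),   # D
    (10_000, 8.80, 9.10, 1.0),     # E
    (6_000, 10.25, 10.25, 1.0),    # F
    (13_500, 9.75, 9.72, 1.0),     # G
]

def current_index(stocks, yesterdays_index):
    # Eq. (7.7): ratio of today's to yesterday's free-float-adjusted market
    # capitalization, scaled by yesterday's closing index.
    cap_today = sum(shares * price_t * faf for shares, _, price_t, faf in stocks)
    cap_yesterday = sum(shares * price_prev * faf for shares, price_prev, _, faf in stocks)
    return cap_today / cap_yesterday * yesterdays_index

print(round(current_index(stocks, 100.0), 2))   # 101.99, the Day 1 index in Table 7.4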
7.3.8
ILLUSTRATIVE EXAMPLE
Base Day
Consider seven constituent stocks with the following number of issued shares and closing prices.
These stocks are grouped in two sectors. Table 7.3 shows the information about the stocks.
Table 7.3: Information about the stocks
Sector  Stock  Issued Shares  Closing Price  Market Cap (NGN)  Aggregate Market Cap (NGN)
I       A      42,000         12.20          512,400
I       B      27,500         18.50          508,750
I       C      14,000         9.30           130,200
I       D      18,000         11.40          205,200           1,356,550
II      E      10,000         8.80           88,000
II      F      6,000          10.25          61,500
II      G      13,500         9.75           131,625           281,125
Total                                        1,637,675
Table 7.4: Day 1
Day 1: Only the prices of shares changed

Sector  Stock  Issued Shares  Closing Price  Market Cap (NGN)  Aggregate Market Cap (NGN)
I       A      42,000         12.25          514,500
I       B      27,500         19.15          526,625
I       C      14,000         9.50           133,000
I       D      18,000         11.80          212,400           1,386,525
II      E      10,000         9.10           91,000
II      F      6,000          10.25          61,500
II      G      13,500         9.72           131,220           283,720
Total                                        1,670,245

Index computation
Sector  Market Cap Day 1 (NGN)  Market Cap Base Day (NGN)  Base Day Index  Day 1 Index
I       1,386,525               1,356,550                  100             102.21
II      283,720                 281,125                    100             100.92
Index   1,670,245               1,637,675                  100             101.99
Table 7.5: Day 2
Day 2: Stock C traded ex-bonus at the ratio of "1 for 4"

Sector  Stock  Issued Shares  Closing Price  Market Cap (NGN)  Aggregate Market Cap (NGN)
I       A      42,000         12.25          514,500
I       B      27,500         19.15          526,625
I       C*     17,500*        8.00*          140,000*
I       D      18,000         11.80          212,400           1,393,525
II      E      10,000         9.10           91,000
II      F      6,000          10.30          61,800
II      G      13,500         9.90           133,650           286,450
Total                                        1,679,975

Index computation
Sector  Market Cap Day 2 (NGN)  Market Cap Day 1 (NGN)  Day 1 Index  Day 2 Index
I       1,393,525               1,386,525               102.21       102.73
II      286,450                 283,720                 100.92       101.89
Index   1,679,975               1,670,245               101.99       102.58
Table 7.6: Day 3
Day 3: Stock E traded ex-rights issue at the ratio of "1 for 2" at NGN 5 per share

Sector  Stock  Issued Shares  Closing Price  Market Cap (NGN)  Aggregate Market Cap (NGN)
I       A      42,000         12.25          514,500
I       B      27,500         19.20          528,000
I       C      17,500         8.30           145,250
I       D      18,000         11.60          208,800           1,396,550
II      E*     15,000*        8.50*          127,500*
II      F      6,000          10.35          62,100
II      G      13,500         9.92           133,920           323,520
Total                                        1,720,070

Index computation
Sector  Market Cap Day 3 (NGN)  Market Cap Day 2 (NGN)  Day 2 Index  Day 3 Index
I       1,396,550               1,393,525               102.73       102.95
II      323,520                 311,450**               101.89       105.84
Index   1,720,070               1,704,975               102.58       103.49
Table 7.7: Day 4
Day 4: Replacement of stock F by stock H, supposing stock H is selling at NGN 10.5 with 7,500 issued shares

Sector  Stock  Issued Shares  Closing Price  Market Cap (NGN)  Aggregate Market Cap (NGN)
I       A      42,000         12.20          512,400
I       B      27,500         19.00          522,500
I       C      17,500         8.50           148,750
I       D      18,000         11.65          209,700           1,393,350
II      E      15,000         8.10           121,500
II      H      7,500          10.50          78,750
II      G      13,500         9.92           133,920           334,170
Total                                        1,727,520

Index computation
Sector  Market Cap Day 4 (NGN)  Market Cap Day 3 (NGN)  Day 3 Index  Day 4 Index
I       1,393,350               1,396,550               102.95       102.71
II      334,170                 340,170***              105.84       103.98
Index   1,727,520               1,720,070               103.49       103.94
Table 7.8: Day 5
Day 5: Suspension of stock B requires stock B to trade at the last traded price

Sector  Stock  Issued Shares  Closing Price  Market Cap (NGN)  Aggregate Market Cap (NGN)
I       A      42,000         12.35          518,700
I       B      27,500         19.00          522,500
I       C      17,500         8.55           149,625
I       D      18,000         11.50          207,000           1,397,825
II      E      15,000         8.15           122,250
II      H      7,500          10.53          78,975
II      G      13,500         9.88           133,380           334,605
Total                                        1,732,430

Index computation
Sector  Market Cap Day 5 (NGN)  Market Cap Day 4 (NGN)  Day 4 Index  Day 5 Index
I       1,397,825               1,393,350               102.71       103.04
II      334,605                 334,170                 103.98       104.11
Index   1,732,430               1,727,520               103.94       104.23
Table 7.9: Day 6
Day 6: Resumption of stock B

Sector  Stock  Issued Shares  Closing Price  Market Cap (NGN)  Aggregate Market Cap (NGN)
I       A      42,000         12.30          516,600
I       B      27,500         18.55          510,125
I       C      17,500         8.70           152,250
I       D      18,000         11.50          207,000           1,385,975
II      E      15,000         8.19           122,850
II      H      7,500          10.65          79,875
II      G      13,500         9.90           133,650           336,375
Total                                        1,722,350

Index computation
Sector  Market Cap Day 6 (NGN)  Market Cap Day 5 (NGN)  Day 5 Index  Day 6 Index
I       1,385,975               1,397,825               103.04       102.17
II      336,375                 334,605                 104.11       104.66
Index   1,722,350               1,732,430               104.23       103.63
Table 7.10: Day 7
Day 7: Dividend on stock A

Sector  Stock  Issued Shares  Closing Price  Market Cap (NGN)  Aggregate Market Cap (NGN)
I       A      42,000         11.90          499,800
I       B      27,500         18.55          510,125
I       C      17,500         8.70           152,250
I       D      18,000         11.50          207,000           1,369,175
II      E      15,000         8.19           122,850
II      H      7,500          10.65          79,875
II      G      13,500         9.90           133,650           336,375
Total                                        1,705,550

Index computation
Sector  Market Cap Day 7 (NGN)  Market Cap Day 6 (NGN)  Day 6 Index  Day 7 Index
I       1,369,175               1,385,975               102.17       100.93
II      336,375                 336,375                 104.66       104.66
Index   1,705,550               1,722,350               103.63       102.62
7.3.9 MEASURE OF braINDEX VOLATILITY
The bra50Index volatility is calculated on a 30, 60, 90, 180, and 365 days basis with the following
mathematical expression:
Index Volatility = √( Σ (x − μ)² × P(x) ),   (7.8)
where μ = Σ x × P(x), P(x) = x_i / Σ x_j, and x = daily index value.
7.3.10 INDEX MAINTENANCE
Periodic Rotation: This is required to remove firms with declining fundamental and technical
profiles, and replace their shares with those of companies with a rising profile. The periodic
review/rotation shall be done every six months to effect change in any of our constituent stocks.
Situations that could result in such rotation include a consistent fall in share prices of a stock,
delisting of a stock, firm bankruptcy, and suspension of a stock on the exchange, among others.
The automatic detection and rotation of such stock requires daily monitoring of component
stocks for these effects. In other words, the database and front-end is designed to capture these
effects. In conclusion, the bra team continues to work consistently at evaluating the various actions
that might affect the trend of the indices. We are positioned to ensure they are captured in order to
maintain the integrity of the index at all times.
7.4
bra PRODUCER PRICE INDEX: PRINCIPLES,
TECHNIQUES, AND APPLICATIONS
bra Producer Price Index (braPPI) is one of the core short-term business indicators used to
measure the economic situation of the country. It is the instrument used to measure the average
change in prices of industrial products, which are produced and sold by Nigerian enterprises.
This is performed by continuously sampling the prices of groups of items produced and sold on
the domestic market. This sample is simply the representation of total industrial production.
The braPPI measures price changes from the producer’s perspective, while the braCPI
(bra Inflation Index) measures price changes from the consumer’s perspective.
7.4.1 USES OF braPPI
The main uses of the braPPI are:
1. to serve as a leading indicator of inflationary trends in the Nigerian business environment;
2. to serve as deflator of national accounting at constant prices;
3. to serve as “escalators” to adjust prices of inputs in long term sales contracts; and
4. as an analytical tool for business owners and researchers.
7.4.2 COMPONENTS OF braPPI
The indices are calculated for each of these groups:
1. All commodities
2. Fuel
3. Stages of processing
4. Durable and non-durable categories
In order to achieve transparency in the report, the durable and non-durable category is
sub-divided into 15 sub-groups (aggregates):
1. Farm Products
2. Processed Foods and Feeds
3. Textile Products and Apparel
4. Hides, Skin, Leather, and related Products
5. Fuels, related Products, and Power
6. Chemicals and Allied Products
7. Rubber and Plastic Products
8. Lumber and Wood products
9. Pulp, Paper, and Allied Products
10. Metal and Metal Products
11. Machinery and Equipment
12. Furniture and Household durables
13. Non-metallic Mineral Products
14. Transportation Equipment
15. Miscellaneous Products
7.4.3 SCOPE AND COVERAGE
braPPI survey covers many industrial sectors such as manufacturing, mining and quarrying, oil
and gas extraction, and gas and steam supply.
7.4.4 COLLECTION OF DATA
Data are collected monthly from the companies in each sector using a probabilistic sampling
technique called stratified sampling. The questionnaires are designed to capture product character-
istics such as: the name of product(s), brand names, product specifications, units of measure,
production cost per product, and producer’s price of selected product(s). There are many methods
used when collecting data, varying from one establishment to another. These include: personal
visits to the outlet, telephoning, e-mailing, and questionnaires, among others.
7.4.5
INDEX CALCULATION
braPPI indices are base weighted (Laspeyres) according to the sales in the base year. After col-
lecting price data from the enterprises, elementary indices (price relatives) are calculated for each
specification (the price relative is the specification's price in the current month divided by its
base-year price), and these relatives are weighted with the sales structure in the base year. The
weight reference period is updated periodically (every two years) for adjustment.
At the first stage of elementary aggregation, individual prices are combined and each price
is weighted by the value of production which it represents. In a case where one price from
each enterprise is combined to give an elementary aggregate for product P, then the weight
would correspond to the share of enterprise production of a particular product P in the entire
economy. However, where more than one price for a product is collected from a single enterprise
then the price would be weighted using relative production values for the different transaction
specifications. Industry indices are obtained by weighting together the product indices relevant
for each industry using the values of output of the different products for that industry, and not
enterprises in the sample alone. As mentioned earlier, the index is calculated according to the
Laspeyres’ formula, which is the weighted average of prices, as follows:
I_t = Σ_{i=1}^{n} W_{i0} (P_{it} / P_{i0}) × 100,   (7.9)
where I_t is the price index in the current period, P_{it} is the current price of product i, P_{i0} is
the price of product i in the reference period, and W_{i0} is the weight associated with product i.
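A minimal Python sketch of Eq. (7.9), checked against the worked example of Tables 7.11 and 7.12; the variable names are ours.

# Data from Table 7.11: weights, 2016 base prices, and 2017 current prices.
weights = [0.350, 0.285, 0.100, 0.250, 0.015]
base_prices = [2_500, 7_620, 5_180, 4_500, 3_000]
current_prices = [2_500, 7_800, 5_100, 5_000, 3_350]

def laspeyres_index(weights, base_prices, current_prices):
    # Eq. (7.9): weighted sum of price relatives, scaled to 100.
    return 100 * sum(w * (pt / p0)
                     for w, p0, pt in zip(weights, base_prices, current_prices))

print(round(laspeyres_index(weights, base_prices, current_prices), 2))   # 103.47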
7.4.6
ILLUSTRATIVE EXAMPLE
Table 7.11 shows the producer’s prices for five commodities with their associated weights.
Q: What is the producer price index for the commodities in 2017 using 2016 as the base year?
Solution
See Table 7.12.
I_t = Σ_{i=1}^{n} W_{i0} (P_{it} / P_{i0}) × 100 = 103.47.
Table 7.11: Producer's prices for five commodities

Commodity    W_0    2016 (P_0)  2017 (P_t)
Commodity A  0.350  2,500       2,500
Commodity B  0.285  7,620       7,800
Commodity C  0.100  5,180       5,100
Commodity D  0.250  4,500       5,000
Commodity E  0.015  3,000       3,350
Total        1.000  22,800      23,750
Table 7.12: Price index for 2017 commodities using 2016 as the base year

Commodity    W_0    2016 (P_0)  2017 (P_t)  W_i0 × P_it/P_i0
Commodity A  0.350  2,500       2,500       0.3500
Commodity B  0.285  7,620       7,800       0.2917
Commodity C  0.100  5,180       5,100       0.0985
Commodity D  0.250  4,500       5,000       0.2778
Commodity E  0.015  3,000       3,350       0.0168
Total        1.000  22,800      23,750      1.0347
7.5
bra BOND INDEX: PRINCIPLES, TECHNIQUES, AND
APPLICATIONS
The FGN Bond Index is a market value-weighted index designed to measure the performance of
the Nigerian Investment-grade fixed income market. It is a powerful tool used by investors and
financial managers to describe the market, and to compare the return on investments. The index
is divided into sub-indices based on a variety of maturity classifications. The index, sub-index
returns, and other statistics are calculated at the end of the business day. The constituent bonds
undergo a review and rebalancing on a monthly basis.
7.5.1 DEFINITION OF TERMS
Announcement date: The date on which changes to the index are published.
Blended price: The price calculated from the individual bid prices that bra Limited receives
from price providers for Index Bonds as of the close of each business day. For Index Bonds
where three or fewer price providers have submitted pricing information, the blended price is
the arithmetic average of the individual bid prices after removing outliers.
Business day: Any day that Nigerian bonds are traded, as determined by the Central Bank of
Nigeria.
Close: The end of a calendar or Business Day for the purpose of calculating Index values and
other statistics, currently 04:00PM.
Eligible bond: A bond that meets all of the eligibility criteria, based on publicly available
information as of the close of the Business Day preceding the Announcement Date, but is not
currently an Index Bond.
Index bond: A bond that is included in the Index.
Par amount: The total par or “face value” amount outstanding of an Index Bond or an Eli-
gible Bond as determined by the Index Committee, net of strips, reconstitutions, re-openings,
and sinking fund payments. Holdings of the Central Bank of Nigeria are included in the Par
Amount.
7.5.2 BASIC CRITERIA FOR CONSTITUENT BONDS
A constituent bond must satisfy criteria in the areas of issuance, bond type, coupon frequency,
par amount, and minimum term.
(a) Issuance
– The constituent bond must be delivered in the domestic market on or before the next
rebalancing date.
– It must be denominated in local currency (Nigerian-Naira).
– It must be intended to be traded by institutional investors.
– It must be issued by the Federal Government of Nigeria.
(b) Type of bond: The bond must be one of the following types: bullet bond, callable bond,
asset-backed security, capital bond, sinking fund bond, extendible bond, fixed-floater
bond, or retractable bond.
(c) Coupon frequency: The frequency of coupon payments must be semi-annual.
(d) Par amount: The bond must have a minimum par of N 100 Million for inclusion in the
constituent bond and a minimum of N 50 Million as of the rebalancing date.
(e) Minimum term: The bond must have at least 18 months term-to-maturity as of the next
rebalancing date.
7.5.3
INDEX CALCULATION
A market value is calculated for each Index Bond as of the close on each business day. The market
value of an Index Bond on day t is calculated as follows:
V_t = Par_t × (P_t + A_t) / 100,
where Vt is the market value at day t, Pt is the average price of a bond, At is the accrued interest
of index bond at day t, Part is the face value as of the last monthly rebalancing, adjusted for
principal repayments and mandatory sinking fund payments up to and including day t.
The relative weight of a bond:
weight_k = V_k / Σ_k V_k.
The total return of a bond:
TR_t = (V_t + I_t + Pr_t − V_{t−1}) / V_{t−1},
where TR_t is the total return, I_t is the interest payments on day t, Pr_t is the principal repayments
on day t, V_{t−1} is the market value on day t − 1, and V_t is the market value on day t.
In addition, the total return is the sum of the interest return and the price return. The interest
return can be represented mathematically as:
IR_t = (Par_t × A_t / 100 − Par_t × A_{t−1} / 100 + I_t) / V_{t−1}.   (7.10)
However, the price return is calculated as:
PR_t = [Par_t × (P_t − P_{t−1}) / 100 + Pr_t × (100 − P_{t−1}) / 100] / V_{t−1},   (7.11)
where IRt is the interest return on day t, Part is the par amount of bond, At is the accrued
interest to the index bond at day t, It is the interest payment on day t, and Part is the face value
as of the last monthly rebalancing, adjusted for principal repayments and mandatory sinking
fund payments up to and including day t.
The unrealized capital gain or loss due to any change in the price:
[Par_t × (P_t − P_{t−1}) / 100] / V_{t−1}.   (7.12)
The realized capital gain or loss due to receiving a principal repayment at par rather than
at the current price is computed as:
[Pr_t × (100 − P_{t−1}) / 100] / V_{t−1}.   (7.13)
[Par_t × (P_t − P_{t−1}) / 100] / V_{t−1}.   (7.14)
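A minimal Python sketch of Eqs. (7.10) and (7.11) for a single bond, using Bond A's figures from Tables 7.13 and 7.14 and assuming no coupon payment and no principal repayment on the day (I_t = Pr_t = 0); under that assumption the price return matches the 0.000812 reported in Table 7.15.

# Bond A on Day 2 (Tables 7.13 and 7.14); amounts in NGN.
par = 35_000_000_000
accrued_prev, accrued_now = 10.10, 10.13592     # accrued interest per 100 par
price_prev, price_now = 100.80, 100.89          # bid prices
value_prev = 38_814_236_187.85                  # market value on day t-1
interest_payment = 0.0                          # I_t: no coupon paid on the day (assumption)
principal_repayment = 0.0                       # Pr_t: no repayment on the day (assumption)

# Eq. (7.10): change in accrued interest plus any coupon, relative to V_{t-1}.
interest_return = (par * accrued_now / 100 - par * accrued_prev / 100
                   + interest_payment) / value_prev
# Eq. (7.11): price change plus any repayment effect, relative to V_{t-1}.
price_return = (par * (price_now - price_prev) / 100
                + principal_repayment * (100 - price_prev) / 100) / value_prev
total_return = interest_return + price_return

print(round(price_return, 6), round(total_return, 6))   # price return ≈ 0.000812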
7.5.4 SUB-INDICES
The three sub-indices are index of the total returns (ITR), index of the interest returns (IIR), and
the index of the price returns (IPR). These indices can be computed as follows:
ITR = [Σ_i V^i_{t−1} × TR^i_t] / [Σ_i V^i_{t−1}],   (7.15)
IIR = [Σ_i V^i_{t−1} × IR^i_t] / [Σ_i V^i_{t−1}],   (7.16)
IPR = [Σ_i V^i_{t−1} × PR^i_t] / [Σ_i V^i_{t−1}],   (7.17)
where TR^i_t is the total return, IR^i_t is the interest return on day t, PR^i_t is the price return on
day t, and V^i_{t−1} is the market value of bond i on day t − 1.
Thus, the daily index values can be computed as follows:
ITR_t = ITR_{t−1} (1 + TR_t),   (7.18)
IIR_t = IIR_{t−1} (1 + IR_t),   (7.19)
IPR_t = IPR_{t−1} (1 + PR_t).   (7.20)
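A minimal Python sketch of the value-weighted sub-indices of Eqs. (7.15)-(7.20), using the Day 2 per-bond returns of Table 7.15 and a base level of 1,000 points; the variable names are ours.

# Day 2 per-bond figures from Tables 7.13 and 7.15.
values_prev = [38_814_236_187.85, 40_562_105_482.56, 16_707_819_289.32,
               26_322_486_413.04, 20_618_786_885.25]          # V_{t-1} for Bonds A-E
interest_returns = [0.000344, 0.000192, 0.000264, 0.000316, 0.000398]
price_returns = [0.000812, 0.001953, 0.00790, 0.003324, 0.004268]

def weighted_return(values_prev, returns):
    # Eqs. (7.15)-(7.17): returns weighted by the previous day's market values.
    total = sum(values_prev)
    return sum(v * r for v, r in zip(values_prev, returns)) / total

iir = weighted_return(values_prev, interest_returns)
ipr = weighted_return(values_prev, price_returns)
itr = iir + ipr   # total return as the sum of interest and price returns

base = 1_000      # base level of the index
# Eqs. (7.18)-(7.20): chain each sub-index forward by one day.
print(round(base * (1 + iir), 2),
      round(base * (1 + ipr), 2),
      round(base * (1 + itr), 2))   # ≈ 1000.29 1002.92 1003.22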
7.5.5
ILLUSTRATIVE EXAMPLE
Consider five bonds issued by the Nigerian Federal Government. Tables 7.13–7.15 show the
transactions made on the two consecutive trading days. From the daily quoted prices, the yields,
accrued interests, and market values were calculated.
Table 7.13: Day 1: Base Day (02-Jan-2018)

Issue Date  Bond    Tenor (yr)  Coupon (%)  Maturity  Par Issue (N'bn)  Bid Price  Yield (%)  Accrued Interest  TTM (yr)  Market Value
4/12/17     Bond A  3           13.79       4/12/20   35                100.80     13.35      10.10             2.28      38,814,236,187.85
7/27/16     Bond B  5           7.95        7/27/21   36                101.28     7.53       11.39             3.57      40,562,105,482.56
3/30/17     Bond C  7           10.75       3/30/24   15                103.22     10.04      8.17              6.24      16,707,819,289.32
5/25/17     Bond D  10          12.25       5/25/27   25                97.90      12.63      7.39              9.40      26,322,486,413.04
6/29/17     Bond E  20          15.00       6/29/37   20                95.43      15.76      7.66              19.50     20,618,786,885.25
Total                                                                                                                     143,025,434,258.01
Table 7.14: Day 2 (03-Jan-2018)

Issue Date  Bond    Tenor (yr)  Coupon (%)  Maturity  Par Issue (N'bn)  Bid Price  Yield (%)  Accrued Interest  TTM (yr)  (2) Market Value
4/12/17     Bond A  3           13.79       4/12/20   35                100.89     13.30      10.13592          2.27      38,859,072,928.18
7/27/16     Bond B  5           7.95        7/27/21   36                101.50     7.46       11.41412          3.56      40,649,082,656.47
3/30/17     Bond C  7           10.75       3/30/24   15                104.10     9.85       8.194834          6.24      16,844,225,027.02
5/25/17     Bond D  10          12.25       5/25/27   25                98.25      12.57      7.423234          9.39      26,418,308,423.91
6/29/17     Bond E  20          15.00       6/29/37   20                95.87      15.68      7.704918          19.50     20,714,983,606.56
Total                                                                                                                     143,485,672,642.14
Table 7.15: Day 2: Returns

Class              Bond    (3) Interest Return  (4) Price Return  (5) Total Return  (1)×(3)     (1)×(4)      (1)×(5)
Between 1 and 3    Bond A  0.000344             0.000812          0.001155          13,336,740  31,500,000   44,836,740.46
Between 3 and 5    Bond B  0.000192             0.001953          0.002144          7,777,174   79,200,000   86,977,173.99
5 years and above  Bond C  0.000264             0.00790           0.008164          4,405,738   132,000,000  136,405,737.80
                   Bond D  0.000316             0.003324          0.00364           8,322,011   87,500,000   95,822,011.00
                   Bond E  0.000398             0.004268          0.004665          8,196,721   88,000,000   96,196,721.47
All constituent bonds                                                               42,038,385  418,200,000  460,238,384.70
For term-to-maturity between 1 and 3 (Bond A), the interest return is calculated as:
IR_t = (35,000,000,000 × 10.13592/100 − 35,000,000,000 × 10.10/100 + 13.30%) / 38,814,236,187.85 = 0.000344.
We assumed that there is no principal repayment on Bond A, thus the formula for price
return reduced to:
PR_t = [35,000,000,000 × (100.89 − 100.80)/100] / 38,814,236,187.85 = 0.000812.
TR_t = IR_t + PR_t = 0.001155.
Furthermore, we can compute the sub-indices as follows:
IIR = 42,038,385 / 143,025,434,258.01 = 0.000294
IPR = 418,200,000 / 143,025,434,258.01 = 0.002924
ITR = 460,238,384.70 / 143,025,434,258.01 = 0.003218.
Thus, the daily index values are computed as follows:
IIR_t = 1000 × (1 + 0.000294) = 1000.29
IPR_t = 1000 × (1 + 0.002924) = 1002.92
ITR_t = 1000 × (1 + 0.003218) = 1003.22.
7.6
braINFLATION INDEX: PRINCIPLES, TECHNIQUES,
AND APPLICATIONS
The bra Inflation Index (braII) is an independent index that measures the change in the price
level of a market basket of consumer goods and services purchased by households in Nigeria.
Researchers visit designated markets to collect the prices of goods and services, after which
the indices are computed monthly. These indices include the overall index and sub-indices. The
percentage change in the consumer price index is used to measure inflation.
7.6.1 USES OF bra INFLATION INDEX
The major uses of the braII are:
(i) to measure changes in the purchasing power of money;
(ii) to measure price inflation witnessed by Nigerians;
(iii) to measure changes in living standards; and
(iv) to serve as one of the major macro-economic indicators.
7.6.2 CLASSIFICATION OF braII ITEMS
The braII is classified into three constituent items: category items, aggregate items, and ele-
mentary items. The following are the 11 category items in braII: (i) alcoholic beverages and
tobacco, (ii) clothing and footwear, (iii) communication, (iv) education, (v) energy, (vi) food
and non-alcoholic beverages, (vii) housing and household goods and services, (viii) medical and
household chemicals products services, (ix) recreation, (x) transportation, and (xi) utilities, other
goods, and services.
The aggregate items are aggregations of the elementary items. The table showing the category
items, aggregate items, and elementary items is found in Appendix A.
The survey identifies all the local governments in each senatorial district, with special
attention paid to land mass and population size. Local Governments with the lowest population
density were considered as rural and the most densely populated areas were considered urban,
while those in between were classified as rural-urban.
More explicitly, based on population density, three local government areas are enumerated
from the three senatorial districts considered per state, such that the three local government
areas with the highest, medium, and lowest population density are considered. Hence, nine
local government areas are considered in total per state, with the exception of Abuja, where all
six local government areas in the senatorial district are considered.
Thus, in each senatorial district, the survey considers three segregation classes, categorized
in order of increasing population density:
• Rural
• Rural-urban
• Urban
As detailed above, the braCPI surveys markets from each local government area with
emphasis on one rural and one urban market. Urban markets are markets where transactions
take place every day with no specific market day, while Rural markets are markets that have
specific market days.
Based on this, nine local government areas are enumerated in each of our representative
states (Adamawa, Anambra, Kano, Oyo, Plateau, Rivers, and Lagos) besides Abuja (FCT),
where all the six local government areas in the territory are covered. Notably, all the local gov-
ernment areas surveyed in each state are spread across the senatorial districts in the state, with
two markets sampled in each of the local government areas. The market survey considers 138
markets in total from the above listed states (FCT inclusive). Eighteen markets are considered
from each state, with the exception of Abuja, where 12 markets are considered from 6 local
governments.
7.6.3 PERIOD OF THE SURVEY
The market surveys are conducted between the 2nd and 3rd week of every month. 23 of our staff
(CPI field enumerators) visit designated markets in the targeted states to obtain information on
the recent prices of the items used to track and measure price changes. The field enumerators
record the prices of over 500 items each month, representing a scientifically selected sample of
the prices paid by consumers for goods. Detailed descriptions of the items surveyed are provided
in Appendix A.2.
7.6.4 DATA COLLECTION, COLLATION, AND PROCESSING
Our data collection units across the nation are staffed with three enumerators and one manager
per state (one field staff person in each senatorial district with an overall manager for the entire
state), except for Abuja, which has two enumerators and a manager. During each visit, the field
enumerator collects price data on a specific good or service that was precisely defined in our price
data sheet. We have employed a somewhat unorthodox methodology in obtaining our prices,
such as purchasing phone credit, maintaining a periodic presence, presenting our branded shirts,
etc.; this has allowed us to build a relationship with people in each market. The recorded
information is sent to the bra server in Lagos through an online data entry and analytics engine
(Opera) built for the purpose of collection and analysis of data from the enumerated states.
Surveyed prices are controlled by regional managers, who verify the accuracy of the survey
prices compared to the last month’s prices and state whether or not the specific variance selected
by the field survey enumerator corresponds with the representative characteristics. In case of
doubt, the field enumerators are contacted to verify prices. In addition, commodity specialists
check the data for accuracy and consistency at the bra limited Lagos office and make any nec-
essary corrections or adjustments, which can range from adjustment for a change in the size or
quantity of a packaged item to more complex adjustments based upon statistical analysis of the
value of an item’s features or quality. Thus, commodity specialists strive to prevent changes in
the quality of items from affecting the CPI’s measurement of price change.
Processing collected commodity prices involves the following.
(a) Calculation of the average prices of commodities from different locations. More explicitly,
the average price of the representative commodity is calculated as a weighted arithmetic
mean of prices for locations calculated by simple arithmetic mean.
(b) Using the average price to get the index baseline for each commodity.
(c) Comparing the current year price of each commodity with a base year price to obtain a
relative price.
(d) Generating a constant weight from the consumer expenditure survey in the base year.
(e) Using Laspeyres formula to calculate an aggregated index for each category.
In compliance with the National Bureau of Statistics methodology, the Laspeyres’ formula
of constant weights is used for the calculation of the bra inflation index, using the prices from
November 2009 as the base year prices.
7.6.5 QUALITY ADJUSTMENT
The quality adjustment of the price index is performed when a product which is part of the
price survey is replaced by another product. The change is due to the absence of the original
product in the market or decreased demand for the product. When this occurs, the product
representative in the consumer basket is replaced by a similar product or a product in great
demand. More frequently, product substitution takes place when the product surveyed ceases to
be sold in the market, most likely due to the producer facing liquidation. After field staff advise
that a product’s disappearance from a market is permanent, the bra office selects another similar
product, complying with the description or survey prices in the market. Steps to perform the
adjustment may include the following.
(a) Direct adjustment
This is often used when dealing with two comparable products. The new item is considered
directly comparable, if the following is true.
1. It is produced by the same manufacturer.
2. It is produced from the same materials and has the same or similar technical param-
eters which are important for customers (utility value).
3. It has the same measurement unit.
4. It has the same type of packaging.
5. The difference between the original and the new items are not significant and may re-
fer to customer taste and personal preferences, such as color, design, shape, decorative
elements, etc.
(b) Overlap imputation method
When dealing with two incomparable products or samples, the overlap imputation method
is used where the price of both the original (old) and new products are surveyed. In the
overlap month, the prices of both products are surveyed. Price development in the first
month is measured based on the original product, while next month’s price is performed
with the new product using its price surveyed in the previous overlap month. The price
difference of the original and new products will not feed into the price index.
7.6.6
INDEX CALCULATION
The bra Inflation Index is built in two stages. In the first stage, prices for over 200 specific items
are averaged to yield a price for the elementary items in the basket. The index for each aggregate
item is calculated based on these average prices. This stage is often referred to as “lower-level
aggregation” as it involves averaging the most fundamental component of the index—observed
price change for specifically defined consumer goods, services, and products. For example, the
prices of approximately five different brands of milk at designated markets in enumerated states
are observed every month. Milk is one of the 217 elementary items in the current bra Inflation
Index market basket structure and is categorized as a canned/packaged foods and other groceries
aggregate item (full description of groceries and other groceries). The canned/packaged foods
and other groceries index is one of the 10 basic indices of the food and beverages category.
Three versions of the bra Inflation Index formula are used for the lower-level aggregation
in the food basket. Each formula has both strong and weak points, and computations are per-
formed with caution. The fixed basket Laspeyres’ method is adopted, and the other two formulae
are stated as follows:
(i) Laspeyres Index:
(ii) Paasche Index:
(iii) Fisher Ideal Index:
L
D
n
X
1
i
D
Wi 0
(cid:21)
(cid:20) Pit
P0t
(cid:2)
100
P
D
n
X
1
i
D
Wit
(cid:21)
(cid:20) Pit
Pio
(cid:2)
100
F
D
pL
P ;
(cid:2)
(7.21)
(7.22)
(7.23)
n
where Wi0 is the weight of items at reference period .P
i
price of items at current and previous periods.
D
1 Wi0
D
1/ , and Pit and P0t are the
7.6.7
bra INFLATION INDICES PUBLICATION
bra limited publishes different Consumer Price Inflation (CPI) series on a monthly basis. The
published series include the following.
Composite Inflation (Headline Inflation)
The composite rate of inflation consists of changes in the consumer price index including the
influence of changes in the price of food and energy. The mathematical representation of Com-
posite Inflation is given as:
H = (CPI_t − CPI_{t−1}) / CPI_{t−1} × 100,   (7.24)
where CPI_t is the value of the CPI for the current period and CPI_{t−1} is the value of the CPI
for the previous period.
Core Inflation
The core rate of inflation consists of changes in the consumer price index without the influence
of changes in the prices of food and energy. The core inflation index is made up of the following
categories: alcoholic beverages and tobacco, clothing and footwear, health, recreation and cul-
ture, education, communication, furnishing household equipment and maintenance, restaurant
and hotel, and miscellaneous goods and services:
C = (cCPI_t − cCPI_{t−1}) / cCPI_{t−1} × 100,   (7.25)
where cCPI_t is the value of the core CPI for the current period and cCPI_{t−1} is the value of the
core CPI for the previous period.
Non-core Inflation
The non-core inflation rate consists of components such as changes in the prices of food and
energy. The non-core inflation is made up of the following categories: food and non-alcoholic
beverages, housing water, electricity, gas and other fuels, and transportation. It is mathematically
expressed as:
N = (nCPI_t − nCPI_{t−1}) / nCPI_{t−1} × 100,   (7.26)
where nCPI_t is the value of the non-core CPI for the current period and nCPI_{t−1} is the value
of the non-core CPI for the previous period.
Food Inflation
Food inflation explains the changes in the consumer price index considering only food items,
excluding the influence of changes in the price of other items.
F = (fCPI_t − fCPI_{t−1}) / fCPI_{t−1} × 100,   (7.27)
where fCPI_t is the value of the food CPI for the current period and fCPI_{t−1} is the value of the
food CPI for the previous period.
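Each published rate in Eqs. (7.24)-(7.27) is the month-on-month percentage change of the corresponding CPI series, as the short sketch below illustrates with hypothetical CPI values.

def inflation_rate(cpi_current, cpi_previous):
    # Eqs. (7.24)-(7.27): month-on-month percentage change of a CPI series.
    return (cpi_current - cpi_previous) / cpi_previous * 100

# Hypothetical CPI levels for the current and previous month.
headline = inflation_rate(212.4, 209.8)   # composite (all items)
core = inflation_rate(198.6, 196.9)       # excluding food and energy
food = inflation_rate(230.1, 226.0)       # food items only

print(round(headline, 2), round(core, 2), round(food, 2))   # 1.24 0.86 1.81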
The construction of the bra Inflation Index begins by selecting a group of goods and ser-
vices that are usually bought by the reference population in the index. This collection of goods
and services is known as the market basket. The bra inflation market basket is developed from
detailed expenditure information provided by the households who participate in the Consumer
Expenditure Survey (CES). Altogether, 2,280 households provide expenditure information for
use in determining the importance or weight of each item in the index structure. This data is
also used to select the categories of items from which specific unique commodity and service
items are selected to be priced for the bra Inflation Index.
The Consumer Expenditure Survey (CES) provides essential inputs required to compile
the CPI. The survey has been conducted on a monthly basis since the inception of the contract
in order to obtain comprehensive (up-to-date) information about the expenditure patterns of
households, and for updating the expenditure weights used in compiling the CPI.
Two kinds of information are required for compiling the CPI. First, a “basket” of con-
sumer goods and services commonly purchased by households and a weighting system reflecting
the relative importance of individual items in the basket, in terms of their shares in the overall
household expenditure, has to be established. Since consumers spend more on some items than
on others, similar price movements in different items may have different impacts on the overall
price change.
Second, data on the price movements of various items of goods and services in the basket
have to be collected continuously so that movements of market prices can be fully reflected in
the price indices. However, as advised by the Central Bank, our expenditure pattern used to
calculate the bra Inflation Index has been back-dated to November 2009, to follow the rebasing
of the National Bureau of Statistics (NBS) to 2009.
7.6.8 EXPENDITURE CATEGORY WEIGHT
Expenditure weights are used to give proportional emphasis for the price change of one item (or
component) in relation to other items (or components) in the bra Inflation Index. These weights
are derived from the CES. Such expenditure weights allow the bra Inflation Index to distinguish
between items that are important to consumers and to provide the appropriate weighting for
changes in that product’s price, based on its importance. Altogether, 660 households provide
expenditure information for use in determining the importance, or weight, of each item in the
index structure.
Table 7.16 gives the category weights of the bra Inflation Index. The weights of the elementary
items are shown in Appendix A.3.
Table 7.16: Category weight of bra Inflation Index

Category Items                                    Weight
Alcoholic beverages and tobacco                   10.83
Clothing and footwear                             53.39
Communication                                     45.56
Education                                         36.36
Energy                                            48.04
Food and non-alcoholic beverages                  259.01
Housing and household goods and services          256.06
Medical and household chemical products services  16.47
Recreation                                        8.90
Transportation                                    260.92
Utilities, other goods, and services              4.46
Total                                             1,000
7.6.9
ILLUSTRATIVE EXAMPLE
Table 7.17 shows the prices of five commodities and the quantities demanded in 2016 and
2017. Calculate: (i) the Laspeyres index, (ii) the Paasche index, and (iii) Fisher's ideal index.
Table 7.17: Prices of five commodities

Commodity  2016 P_0  2016 Q_0  2017 P_t  2017 Q_t
Rice       250       7         255       6
Beans      180       4         195       3
Yams       550       3         580       3
Wheat      200       2         180       2
Plantains  150       5         120       6
Solution
See the solution in Table 7.18.

Table 7.18: Indexes

Items      P_0    Q_0  P_t    Q_t  P_0Q_0  P_tQ_t  W_0    W_i0 × P_it/P_i0  W_t    W_it × P_it/P_i0
Rice       250    7    255    6    1,750   1,530   0.332  0.3387            0.310  0.3162
Beans      180    4    195    3    720     585     0.137  0.1480            0.119  0.1284
Yams       550    3    580    3    1,650   1,740   0.313  0.3302            0.353  0.3718
Wheat      200    2    180    2    400     360     0.076  0.0683            0.073  0.0657
Plantains  150    5    120    6    750     720     0.142  0.1233            0.146  0.1167
Total      1,330  21   1,330  20   5,270   4,935   1.000  1.0085            1.000  0.9988

Let us generate the weights, since they are not given in the question:
(i) Laspeyres Index:
W_{i0} = P_0 Q_0 / Σ_{i=1}^{5} P_0 Q_0
L = Σ_{i=1}^{n} W_{i0} [P_{it} / P_{i0}] × 100 = 100.85
(ii) Paasche Index:
W_{it} = P_t Q_t / Σ_{i=1}^{5} P_t Q_t
P = Σ_{i=1}^{n} W_{it} [P_{it} / P_{i0}] × 100 = 99.88
(iii) Fisher Ideal Index:
F = √(L × P) = √(100.85 × 99.88) = 100.36.
7.7 EXERCISES
7.1.
a. What is an expectation index?
b. What are the main functions of the expectation index?
c. Describe two approaches to computing the business expectation index.
7.2.
a. What is a consumer confidence index?
Items20162017PoQoPtQtWoWi0 Pit P0tWtWit Pit P0tPoQoPtQtRice250725561,7501,5300.3320.33870.3100.3162Beans180419537205850.1370.14800.1190.1284Yams550358031,6501,7400.3130.33020.3530.3718Wheat200218024003600.0760.06830.0730.0657Plantains150513067507200.1420.12330.1460.1167Total1,330211.340205,2705,2651.0001.00851.0000.998894
7. INDEX METHODOLOGY
b. What are the components of the bra consumer confidence index?
c. Table 7.19 shows the outcome of a business expectation survey polling 1,000 re-
spondents. Calculate: (i) consumer confidence index, (ii) present situation index,
and (iii) business expectation index.
Table 7.19: Outcome of a business expectation survey

Indicators                                   Positive  Negative  Neutral
Business conditions (current month)          770       230       -
Business conditions (six months)             650       250       10
Employment status (current month)            510       135       355
Employment status (six months)               915       10        75
Sales status (current month)                 854       110       36
Sales status (six months)                    760       220       20
Firm income realized in the next six months  800       115       85
7.3.
a. What is the bra Inflation Index and what are its uses?
b. What are the category items of the bra Inflation Index?
c. Table 7.20 shows the prices and quantities demanded for six commodities and
quantities between 2016 and 2017. Calculate: (i) Lasperyre index, (ii) Paasche in-
dex, and (iii) Fisher’s ideal index.
Table 7.20: Prices and quantities demanded for six commodities

Commodity  2016 Price  2016 Quantity  2017 Price  2017 Quantity
A          1,150       12             1,230       14
B          755         20             815         15
C          480         15             475         18
D          600         8              620         8
E          850         10             830         12
F          330         5              320         8
7.4.
a. State the basic, fundamental, and technical criteria for selection of constituent
stocks.
b. Using Table 7.21, find the market capitalization for the Agriculture and Banking
sectors.
c. Compute the stock index, assuming the agriculture and banking indices were
102.23 and 105.50, respectively, at the close of the previous day.
Table 7.21: Market capitalization

Sector       Stock Name  Number of Shares  Close Price
Agriculture  I           15,000            8.20
Agriculture  II          11,300            5.50
Agriculture  III         8,750             5.00
Banking      IV          22,000            7.25
Banking      V           20,220            8.15
Banking      VI          19,500            7.38
Banking      VII         10,600            7.30
7.5.
a. What is a producer price index?
b. What are the uses of the bra producer price index?
c. List the components of bra producer price index.
7.6. Refer to Table 7.22 and calculate the Laspeyres producer price index for the commodi-
ties in 2017 using 2016 as the base year.
Table 7.22: Laspeyres producer price index (assume all commodities have equal weights)

Commodity  2016 (P_0)  2017 (P_t)
A          7,500       7,575
B          5,000       5,000
C          3,750       4,015
D          1,800       1,900
E          2,640       2,600
F          3,000       3,050
7.7.
a. State the basic criteria for constituent bonds.
b. Enumerate the different types of bonds.
c. Define the following terms: coupon, face value, and term-to-maturity.
CHAPTER 8

Digital Media Monitoring, Measurement, and Modeling
This chapter introduces us to the concept of digital media and the distinction between digital
media and social media. The chapter concentrates on how to monitor, measure, and model social
media. It also covers how social media monitoring reports can be interpreted.
The concepts of digital media and social media are often used interchangeably, but the two terms
differ in nature. Social media is a subset of digital media; thus all social media strategies
are digital, but not all digital media can be categorized as social media. Digital media includes
the following channels of communications: mobile messaging, email, website, apps, and social
media, among others. However, social media are online communications channels and platforms
for community-based input, interaction, content sharing and collaboration. Some notable social
media platforms include Facebook, Twitter, Instagram, and LinkedIn.
8.1 UNDERSTANDINGS OF SOCIAL MEDIA
MONITORING, MEASUREMENT, AND MODELING
Social media monitoring is a process of using tools or software to track, gather, and analyze data
from the online activities of social media users to assess consumer perceptions of companies’
brands. Social media monitoring tools include Online Analytics, Buzz Analytics, and Social
Media Analytics, among others. To gain a deeper understanding of how consumers feel about
a particular product, it can be worth monitoring what users have said about the product on
different social platforms, including via blogs, micro-blogs, forums, social networking services
(LinkedIn, Facebook, etc.), video sharing sites (YouTube), and message/complaint sections of
corporate social media pages or corporate websites.
To minimize costs, employees can utilize free social media search engines to explore and
monitor the feedback of consumers regarding particular products and services. However, free
social media search engines have limited functionality and are not comprehensive enough to
perform some specific tasks (e.g., automatic digital storage of clips). Companies with a serious
financial and reputational interest in monitoring the sentiments expressed by consumers through
social media often subscribe to the apps specifically designed for social media monitoring with
comprehensive coverage, features, and functions; some apps designed for this purpose include
TweetDeck, CoTweet, and Twitscoop. These tools collate searches from numerous websites and
present the information gathered from searches through a user interface. This enables firms to
learn the keywords that are trending online and understand consumer sentiments regarding their
products, enabling the firm to take proactive action in identifying developing customer needs.
8.2
STRATEGIC INSIGHT OF SOCIAL MEDIA
MONITORING
The essence of monitoring social media is to keep firms apprised of online discussions concerning
the products and services the firm offers. The information gathered from online monitoring tools
can aid managers in supporting corporate strategy and achieving the firm’s marketing goals and
objectives. The data and information gathered from online activities can serve as a point of
strategic action in the area of marketing, sales, publicity and customer relations. Social media
monitoring helps to boost the company’s image and provides an avenue to sensitize potential
customers toward other brands in the firm’s product mix.
In marketing, social media monitoring is particularly helpful as it tells a company what is
being said about its products. It can also reveal the general geographical location of the company’s
primary consumers, allowing better demographic targeting so that the company can best position
its products in the market, supported with the appropriate message to affect the purchasing
decisions of potential consumers. Also, marketing and sales departments can use the information
obtained from the online conversations to correct wrong impressions about the company and its
brands and to promote the company’s products. The company can establish its presence online
by having a representative to engage in sales activities in online conversations, for example, by
maintaining an official Facebook or Twitter page through which the representative responds to
the queries of social media users. It is particularly important that the company leverage these
channels to provide customers with solutions when they encounter a problem with the company’s
product or services. The support team monitors every comment about the company’s products
to understand the major problems associated with the company’s brands. A company that fails
to adequately manage dissatisfaction may be faced with negative social media campaigns, which
could become viral and spread beyond the control of the company’s public relations team.
8.3
SOCIAL MEDIA MEASUREMENT
While there are many approaches to measuring activity on social media, we will discuss five
important measurements in this section. These measurements include: exposure, engagement,
preference, impact, and advocacy.
(a) Exposure
This metric comprises the following.
(i) Gross Rating Points (GRPs)—this is a measure of the impressions an advertisement
promotion can achieve. It is calculated as the product of the percentage of the audience
reached and the number of advertisement impressions; mathematically represented as:
GRPs = Reach × Frequency.
(ii) Target Rating Points (TRPs)—this is similar to GRPs, but instead of calculating a
rating based on the population at large, the focus is narrowed to a target audience.
(iii) Number of mentions or posts.
(iv) Percentage increase in number of likes or follows.
(v) Percentage increase in opportunity to view.
(vi) Percentage increase in desirable items.
(vii) Percentage decrease in undesirable items.
(viii) Percentage decrease in cost per thousand impressions.
(b) Engagement
This involves tracking the number of users that interacted with an advert or item of content.
Engagement creates a higher possibility of viewing, liking, sharing, and commenting upon
content. Involvement in these activities is crucial in the social process that creates more
awareness and popularity, leading to an increase in sales. Social media engagement metrics
are:
(i) Interaction rate.
(ii) Percentage page per visit.
(iii) Percentage increase in requests for information.
(iv) Percentage increase in return visits and time spent on pages.
(v) The time period spent viewing a webpage, posting, or video download.
(c) Preference
This measures the general inclination a customer has toward a product after visiting social
media; this may eventually lead to purchase of the product. The following metrics relate
to preference:
(i) Number of purchase intents.
(ii) Number of preferences.
(iii) Percentage increase in willingness to consider.
(iv) Percentage increase in likelihood to recommend.
(v) Preference for specific product.
(d) Impact
This measures how successful a company’s social media/online campaigns have been in
reaching an audience, and how efficient the firm has been in converting interest into sales.
Impact may eventually lead to increases in a firm’s revenue over time as the number of
conversions increases. Impact can be measured via the following statistics:
(i) Percentage of new visits to the contents.
(ii) Number of new subscribers.
(iii) Number of referral traffic to the content.
(iv) Percentage increase in sales.
(v) Percentage of coupon downloads.
(vi) Percentage increase in coupon redeemed.
(vii) Percentage decrease in cost of communication.
(viii) Percentage change in issue sentiment.
(e) Advocacy
This is a measure of support or promotion. Advocacy can be used to measure how satisfied
consumers were after the purchase of a product or service. When a company builds a strong
online following through a successful product offering, it can greatly benefit from word-
of-mouth marketing as users recommend products to people in their own social networks.
The following metrics are categorized as advocacy measurement in social media:
(i) Percentage increase in recommendations.
(ii) Percentage increase in satisfactory ratings.
(iii) Percentage increase in good reviews.
(iv) Percentage increase in fans/ambassadors of the brand.
8.4
SOCIAL MEDIA MODELING
In this section, we will discuss social media modeling based on centrality analysis, community
detection, influence modeling, and sentiment analysis.
A. CENTRALITY ANALYSIS
This measures how significant a node is within a network. Commonly used criteria include
degree centrality, eigenvector centrality, closeness centrality, and betweenness centrality.
(a) Degree centrality
This classifies the importance of a node based on the number of links held by each node
to other nodes; the higher the number of links, the more important the particular node is
likely to be. It is useful to determine the number of inbound links and outbound links.
The degree centrality for a node in an undirected graph is displayed in Figure 8.1 and is calculated as:
$$c(n_i) = d_i,$$
where $c$ is the degree centrality and $d_i$ is the degree of node $n_i$.
Figure 8.1: Degree centrality.
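As a minimal sketch of this idea, the following Python snippet counts node degrees from an undirected edge list; the graph used is hypothetical and is not the network shown in Figure 8.1.

```python
from collections import defaultdict

def degree_centrality(edges):
    """Degree centrality c(n_i) = d_i for an undirected edge list."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return dict(degree)

# Hypothetical friendship network.
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("d", "e")]
print(degree_centrality(edges))  # {'a': 2, 'b': 2, 'c': 2, 'd': 3, 'e': 1}
```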
(b) Closeness centrality
The closeness centrality measures how close each node is to the other nodes. A central node should have short geodesic paths to the remaining nodes. The node with the higher closeness centrality value is more important. Closeness centrality is computed as the inverse average geodesic distance of one node to the others. It is mathematically represented as:
$$C_{close}(u_i) = \left[\frac{1}{n-1}\sum_{j \neq i}^{n} d\left(u_i, u_j\right)\right]^{-1}, \qquad (8.1)$$
where $n$ denotes the number of nodes and $d(u_i, u_j)$ denotes the geodesic distance between nodes $u_i$ and $u_j$.
(c) Normalizing degree centrality
This involves normalizing the degree by the maximum possible degree. The simple normalization is calculated as:
$$C_{norm}(n_i) = \frac{d_i}{n-1}, \qquad (8.2)$$
where $n$ is the total number of nodes. Normalization by the maximum degree is computed as:
$$C_{max}(n_i) = \frac{d_i}{\max_k d_k}. \qquad (8.3)$$
(d) Eigenvector centrality
Having a large number of friends does not guarantee the importance of a node; having a
number of important and influential neighbors makes a node more important in a network.
Therefore, eigenvector centrality integrates the importance of neighbors. Eigenvector cen-
trality is defined as:
$$C_{eig}(n_i) = \frac{1}{\lambda}\sum_{k=1}^{n} A_{k,i}\, C_{eig}(n_k), \qquad (8.4)$$
where $C_{eig} = \left(C_{eig}(n_1), C_{eig}(n_2), \ldots, C_{eig}(n_n)\right)^{T}$ and $\lambda$ are the centrality vector for the nodes and a fixed constant, respectively. Equation (8.4) can be written as:
$$\lambda C_{eig} = A^{T} C_{eig}, \qquad (8.5)$$
where $C_{eig}$ stands for an eigenvector of the adjacency matrix $A^{T}$ and $\lambda$ represents the corresponding eigenvalue.
(e) Betweenness centrality
Betweenness centrality measures the number of times a node serves as a bridge between other nodes in a network. It is useful for identifying individuals who influence others in decision making. However, this measurement should be used carefully in the analysis of communication dynamics. A high betweenness value shows that an individual has influence or control over other members of the community.
Illustrative Examples
1. Table 8.1 shows the geodesic distance between nodes. What node has the highest closeness
centrality?
Table 8.1: Pairwise geodesic distance

Node  1  2  3  4  5  6  7
1     0  1  1  1  2  2  3
2     1  0  1  2  2  3  3
3     1  1  0  1  1  1  2
4     1  2  1  0  1  2  3
5     2  2  1  1  0  1  4
6     2  3  1  2  1  0  2
7     3  3  2  3  4  2  0
Solution
Using Eq. (8.1), $C_{close}(i) = (n-1)\big/\sum_{j \neq i} d(i, j)$ with $n = 7$:

$C_{close}(1) = 6/(1+1+1+2+2+3) = 0.60$
$C_{close}(2) = 6/(1+1+2+2+3+3) = 0.50$
$C_{close}(3) = 6/(1+1+1+1+1+2) = 0.86$
$C_{close}(4) = 6/(1+2+1+1+2+3) = 0.60$
$C_{close}(5) = 6/(2+2+1+1+1+4) = 0.55$
$C_{close}(6) = 6/(2+3+1+2+1+2) = 0.55$
$C_{close}(7) = 6/(3+3+2+3+4+2) = 0.35$

Therefore, node 3 has the highest closeness centrality among the seven nodes.
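The same computation can be scripted; the sketch below encodes the geodesic distances from Table 8.1 and reproduces the closeness values above (Python is used here purely for illustration).

```python
# Geodesic distance matrix from Table 8.1 (nodes 1-7).
D = [
    [0, 1, 1, 1, 2, 2, 3],
    [1, 0, 1, 2, 2, 3, 3],
    [1, 1, 0, 1, 1, 1, 2],
    [1, 2, 1, 0, 1, 2, 3],
    [2, 2, 1, 1, 0, 1, 4],
    [2, 3, 1, 2, 1, 0, 2],
    [3, 3, 2, 3, 4, 2, 0],
]

n = len(D)
for i, row in enumerate(D, start=1):
    # Eq. (8.1): inverse of the average geodesic distance to the other nodes.
    closeness = (n - 1) / sum(row)
    print(f"C_close({i}) = {closeness:.2f}")
# Node 3 scores highest (0.86), matching the worked solution.
```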
2. Consider Figure 8.2. Determine the most centralized node from the figure.
Figure 8.2: Centralized node.
Solution
With the nodes taken in the order a, b, c, d, e, f, the adjacency matrix is
$$A = \begin{bmatrix} 0 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix}.$$
Then $\lambda C_{eig} = A^{T} C_{eig}$ implies $(A - \lambda I) C_{eig} = 0$. The eigenvalues are 2.94, $-2.24$, 0.67, 0, $-1.37$, and 0. The largest eigenvalue is 2.94, with the corresponding eigenvector
$$C_{eig} = \begin{bmatrix} 0.30 \\ 0.44 \\ 0.44 \\ 0.48 \\ 0.52 \\ 0.18 \end{bmatrix}.$$
Therefore, node (e) is the most central.
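For readers who prefer to verify this numerically, the short NumPy sketch below computes the eigen-decomposition of the adjacency matrix in this example and picks out the principal eigenvector; it assumes NumPy is available but otherwise uses only the data given above.

```python
import numpy as np

# Adjacency matrix for nodes a, b, c, d, e, f (symmetric, so A == A.T).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 1],
    [0, 0, 0, 0, 1, 0],
], dtype=float)

eigenvalues, eigenvectors = np.linalg.eig(A.T)
k = np.argmax(eigenvalues.real)           # index of the largest eigenvalue
c_eig = np.abs(eigenvectors[:, k].real)   # principal eigenvector (sign-normalized)

print(round(float(eigenvalues.real[k]), 2))   # approximately 2.94
print(np.round(c_eig, 2))                     # approximately [0.30 0.44 0.44 0.48 0.52 0.18]
# The largest component is the fifth entry, so node (e) is the most central.
```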
B. COMMUNITY DETECTION
In the context of community detection, a community refers to a group or cluster. These are identifiable groups of nodes that interact with one another more frequently than with nodes outside the group. Members of a community share common interests on social media, and most of their interactions center on problem-solving and the promotion of content. In other
words, a community consists of nodes with common boundaries and attributes. For instance,
communities can be regarded as groups of friends attending the same university or friends in
the same geographical vicinity.
C. INFLUENCE MODELING
This is used in information diffusion and spreading new ideas. In this model, there is an active
node that can easily influence other nodes in making a decision. The two common influence
models are the Linear Threshold Model (LTM) and Independent Cascade Model (ICM). Tang
and Liu [2010] cited Granovetter (1978) in stating that an actor would take an action if the
number of his friends who have already taken the action exceeds a certain threshold. LTM
assumes that node thresholds are generated randomly from a uniform distribution U[0, 1] before the commencement of the diffusion process. On the other hand, ICM emphasizes the sender’s view instead of the receiver’s view. Consider a node (say v) activated at step s; this node has a single chance to activate each of its inactive neighbors (say u) with probability p of a successful activation. If the activation succeeds, node u becomes active at step s + 1. See Figures 8.3 and 8.4.
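As a rough sketch of how an independent cascade might be simulated, the Python snippet below propagates activations step by step over a hypothetical directed edge list with a uniform activation probability p; the graph, seed set, and parameter are illustrative assumptions, not taken from Figures 8.3 and 8.4.

```python
import random

def independent_cascade(edges, seeds, p=0.3, seed=42):
    """Simulate ICM: each newly active node gets one chance to activate each neighbor."""
    rng = random.Random(seed)
    neighbors = {}
    for u, v in edges:
        neighbors.setdefault(u, []).append(v)
    active = set(seeds)
    frontier = list(seeds)                 # nodes activated at the current step
    while frontier:
        newly_active = []
        for u in frontier:
            for v in neighbors.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)          # v becomes active at step s + 1
                    newly_active.append(v)
        frontier = newly_active
    return active

# Hypothetical network and seed set.
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")]
print(independent_cascade(edges, seeds={"A"}))
```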
D. SENTIMENT ANALYSIS
Sentiment analysis is also referred to as opinion mining. It is useful for summarizing public opinion
regarding certain products, services, or issues on social media. Sentiment analysis helps compa-
nies to monitor the popularity of their products, gauge the perception of new products, measure
the firm’s reputation, and support market analysis. Sentiment analysis can also be used to re-
view customer feedback on products or services to understand the developing perception of a
particular product or service.
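A very simple lexicon-based scorer illustrates the basic idea; the word lists and example comments below are made up for illustration, and real systems typically rely on much richer lexicons or machine-learned classifiers.

```python
POSITIVE = {"good", "great", "love", "excellent", "fast"}
NEGATIVE = {"bad", "poor", "hate", "terrible", "slow"}

def sentiment_score(text):
    """Return a crude polarity score: positive minus negative word counts."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

comments = [
    "I love this phone and the camera is excellent",
    "terrible battery and slow delivery",
]
for c in comments:
    print(sentiment_score(c), c)   # 2 for the first comment, -2 for the second
```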
E. TOPIC AND RELATIONSHIP RECOGNITION
Topic recognition is the process of analyzing written information (such as comments, forum
discussions, etc.) to know whether the analyzed content can be categorized as a new or existing
topic. This is done by mapping a given piece of text to one or more labels. Relationship recog-
nition of topics is important when the texts to be analyzed are part of a current discussion, or
serve as references between posts or discussions.
Figure 8.3: Linear threshold model diffusion process (green nodes are active, orange nodes are
newly activated, and dark nodes are inactive).
(Figure 8.3 panels: Initial Stage, Stage 1, Stage 2, Final Stage.)
Figure 8.4: Independent cascade model diffusion process (green nodes are active, orange nodes are newly activated, dark nodes are inactive, and red nodes are unsuccessful activations).
(Figure 8.4 panels: Initial Stage, Stage 1, Stage 2, Stage 3.)
8.5 EXERCISES
8.1.
a. What is digital media?
b. What is social media monitoring, measurement, and modeling?
8.2. Using Figure 8.5, what is the most central node?
Figure 8.5: Determine the central node.
8.3.
a. What is community detection?
b. Explain sentiment analysis.
c. What is topic and relationship recognition?
8.4. The geodesic distance of five nodes is shown in Table 8.2. Find the highest closeness
centrality.
Table 8.2: Geodesic distance
Node  1  2  3  4  5
1     0  1  1  1  2
2     1  0  1  2  2
3     1  1  0  1  1
4     1  2  1  0  1
5     2  2  1  1  0

CHAPTER 9
Causal Methods
Causal methods are techniques used to identify the extent and nature of cause-and-effect rela-
tionships among variables of interest. Causal methods are used to analyze problems by showing
the existing patterns of relationships among variables. The confirmation of cause-and-effect rela-
tionships is based on the existence of causal evidence. The major components of causal evidence
are briefly explained below.
Temporal sequence: In this case, the effect follows the cause. To illustrate, it would be erroneous to credit an advertising campaign for an increase in sales revenues if the rise in sales had occurred before the advertisement was published. Logically, there should be sales growth after an advertising campaign commences, as a result of heightened awareness of a firm’s product offerings.
Concomitant variation: This is logical or systematic variation between two variables of interest. A practical example: if a company does not conduct a survey on its products, the level of product acceptability will not be known.
Non-spurious association: Establishing a true cause-and-effect relationship between two variables assumes that no interference arising from other variables exists. For example, a surge in consump-
tion of a firm’s products may be due to macroeconomic or sector-wide effects (variables) rather
than the specific advertising and marketing efforts of the company.
9.1 MARKETING MIX MODELING: CONCEPT,
PRINCIPLES, METHODS, AND APPLICATIONS
The Marketing Mix Model (MMM) measures the impact of marketing activities on sales. An
MMM can help a firm maximize its future spend and its return on investment (ROI). The
MMM measures all possible marketing inputs and detects marketing investments that would
yield long-term revenue growth. These are the variables that marketing managers can control
to influence the company’s sales. The term “mix” in the MMM refers to the combination of the
classical “4Ps” of marketing—Product, Price, Place, and Promotion (see Figure 9.1).
Product stands for the firm’s products and how they are differentiated from competing
product offerings (high quality, visual appearance, maintenance, and repair). Promotion stands
for how the firm publicizes its products through advertisements, offering coupons, privilege
cards, sales displays, trade fairs, and other promotional efforts. Price entails the pricing strategies relating to a company’s product mix, depending on the life cycle of the product offering, product features, perceived utility, and competing products. Place stands for the product delivery area, quantified by variables such as distribution, availability, and convenience.

Figure 9.1: Marketing Mix—the 4Ps (Product: what products to make and sell; Price: how much your product is going to cost; Place: where to sell your product; Promotion: advertising, personal selling, sales promotion, and publicity; all centered on the target market).
The evolution of the MMM came as a result of the attempt to find answers for the many
questions relating to the optimal allocation of the marketing budget. These questions are: At
what level or combination of these marketing mix variables does the firm maximize one of the
output variables (company’s sales, market share or profit)? How did sales or market share respond
to previous levels of or expenditures on these variables? Consequently, researchers provided an-
swers to these persistent questions by developing econometric models of market responses to
the marketing mix, bearing in mind how to manage available resources (the marketing mix vari-
ables) to maximize output variables. The MMM makes use of past data or existing information
to make valid conclusions and to develop a better marketing plan or strategy for the future
(see Tellis [2011]).
In recent times, establishing the most efficient marketing mix has become more complex;
in the presence of new marketing strategies and the complexity of computerized data, MMM
often uses multiple regression models to forecast the optimal mix of marketing variables. The
application of a regression model will help to understand how the independent variables (in-
put variables such as advertisement, promotion) can explain the variation in dependent variable
(sales). Having developed a regression model to show the relationship that exists among the
variables of interest, this model can be used to predict future values, say the company’s sales over a horizon. The model can also identify which of the independent variables has the strongest effect on consumer intentions to buy, so that the firm can best focus its efforts when tailoring its product mix. The various data that can be used to build the MMM are economic data (interest
rate, inflation rate, etc.), industry data (pricing, competitive, services), market data (sales, rev-
enues, profits, ROI, etc.), and target audience data. Naturally, the robustness of the model relies
on the accuracy of the data. The four main steps to take in order to create a robust marketing
mix model are shown in Figure 9.2.
Figure 9.2: Procedure for creating an MMM (data collection, model building, forecast optimization, and recommendation).
A typical simple regression model can be constructed in which the response (dependent)
variable is a product’s sales and the independent variable is advertising. The model can be rep-
resented mathematically as:
$$Y_t = \beta_0 + \beta_1 X_t + \varepsilon_t, \qquad (9.1)$$
where $Y_t$ denotes the product’s sales at period $t$, $X_t$ denotes advertising at period $t$, $\beta_0$ and $\beta_1$ are the regression coefficients, and $\varepsilon_t$ denotes the error term (assuming $\varepsilon_t \sim N(0, 1)$).
In this section, we will discuss the major types of regression model that can be used in modeling the marketing mix.
1. Multiple Linear Regression
This shows a linear relationship between two or more independent variables (price, dig-
ital spends, newspaper and magazine spends, TV spends, etc.) and a response vari-
able (sales/market share/profit). A multiple linear regression model with k predictors
$(X_1, X_2, \ldots, X_k)$ and a response can be written in the form:
$$Y_t = \beta_0 + \beta_1 X_{1t} + \beta_2 X_{2t} + \cdots + \beta_k X_{kt} + \varepsilon_t, \qquad (9.2)$$
where $Y_t$ denotes the product’s sales at period $t$, the $X_{it}$ denote the marketing inputs at period $t$, the $\beta_i$ are the regression coefficients, and $\varepsilon_t$ denotes the error term (assuming $\varepsilon_t \sim N(0, 1)$). The $\beta_i$ help to measure the effect or contribution of each of the explanatory variables on the response: a one-unit increase in an explanatory variable changes the response by $\beta_i$ units, with all other explanatory variables held constant.
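To make the estimation step concrete, the sketch below fits Equation (9.2) by ordinary least squares using NumPy; the sales and spend figures are fabricated for illustration only.

```python
import numpy as np

# Hypothetical monthly data: TV spend, digital spend, and sales (arbitrary units).
tv      = np.array([10.0, 12.0, 15.0, 9.0, 14.0, 16.0, 11.0, 13.0])
digital = np.array([ 3.0,  4.0,  6.0, 2.0,  5.0,  7.0,  3.0,  4.0])
sales   = np.array([52.0, 58.0, 70.0, 48.0, 66.0, 75.0, 55.0, 62.0])

# Design matrix with an intercept column: Y = b0 + b1*TV + b2*Digital + error.
X = np.column_stack([np.ones_like(tv), tv, digital])
beta, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(np.round(beta, 2))   # estimated [b0, b1, b2]
```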
2. Nonlinear Regression
In some cases, the relationship between the independent variable(s) and the dependent variable is not linear. For instance, some variables may show a linear relationship with sales: as the independent variable increases, sales increase proportionally. TV GRPs, however, do not show a linear relationship with sales; increasing TV GRPs will increase sales only up to a limit. Beyond a certain point, each additional unit of TV GRP has less influence on sales (diminishing marginal returns). In addition, an advertisement creates awareness among potential customers and boosts sales only to a certain extent; the percentage change in sales shrinks once the brand is known to all. As a result, the data are transformed so that a nonlinear relationship can be fitted with the linear model. A typical example of a nonlinear regression model is represented below:
$$Y_t = \beta_0 X_{1t}^{\beta_1} X_{2t}^{\beta_2}, \qquad (9.3)$$
where $Y_t$ represents the dependent variable at period $t$, the $X_{it}$ represent explanatory variables at period $t$, and the $\beta_i$ are the regression coefficients.
Specifically, the diminishing effect of past advertisements on present sales shows a nonlinear relationship. A small component referred to as lambda is multiplied with the previous period’s GRP value. This can be written mathematically as:
$$\text{Sales} = (\text{GRP}_t)^n + \lambda\, \text{GRP}_{t-1}. \qquad (9.4)$$
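A common way to handle both effects in practice is to transform the inputs before fitting a linear model, for example by taking logarithms of Equation (9.3) and applying a simple one-period carry-over (lambda) to GRPs. The snippet below is a hedged illustration with made-up numbers and an assumed lambda, not the book’s estimation procedure.

```python
import numpy as np

grp   = np.array([100., 120., 90., 150., 130., 110.])
price = np.array([ 10.,  10., 11.,   9.,  10.,  11.])
sales = np.array([ 55.,  63., 48.,  78.,  70.,  58.])

lam = 0.5                                   # assumed carry-over weight for last period's GRPs
adstock = grp.copy()
adstock[1:] += lam * grp[:-1]               # GRP_t + lambda * GRP_(t-1)

# Log-log form of Eq. (9.3): log Y = log b0 + b1*log X1 + b2*log X2.
X = np.column_stack([np.ones_like(sales), np.log(adstock), np.log(price)])
coef, *_ = np.linalg.lstsq(X, np.log(sales), rcond=None)
print(np.round(coef, 3))                    # [log b0, b1 (adstock elasticity), b2 (price elasticity)]
```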
9.2 EFFECTIVE COMMUNICATION OF RESEARCH,
INTELLIGENCE, AND ANALYTIC INSIGHTS
Research communication can be defined as the process of interpreting or translating complex
research findings into a language, format, and context that non-experts can easily comprehend.
Figure 9.3: Linear and nonlinear response to advertisement.
One of the purposes of conducting market research is to measure customer satisfaction and how
to maintain a competitive edge over other brands, while finding new markets for products or
services. Research can provide firms with the knowledge of whether the resources and financial
expenditures dedicated to developing a new product or service will yield profit in the future.
It is a necessity for researchers to keep a project’s stakeholders informed by communicating
its findings and recommendations to both the decision makers and other stakeholders. Based on
these research findings and recommendations, the decision makers will have to deliberate inten-
sively on how to incorporate recommendations into the organizational system for growth. For
some research projects, participants and beneficiaries can be researchers, policy makers, donors,
government agencies, academics, and others.
There are many dissemination tools available to researchers, each of which has strengths
and weaknesses in reaching its potential audience. These tools can be used to complement one
another to produce an effective dissemination package. Types of dissemination tools include
research reports, peer-reviewed papers, policy briefs, and press releases. The work contained in
one dissemination tool can be modified or replicated in the development of another tool. Other
dissemination channels/tools are interpersonal communication, email messages, websites, mass
media, etc.
As a practical illustration, after the completion of the monthly market intelligence survey
exercise, bra Limited generates a monthly report and presents detailed findings and recom-
mendations to the staff of Central Bank of Nigeria, Abuja. The presentation team consists of
representatives from different departments that form the organization. In the course of the pre-
sentation, questions emerge as a result of the outcomes of the survey, and bra Limited provides
answers based on the market information and experiences. In a case where the answers provided
are not satisfactory, questions are thrown open to the floor for further deliberation. The high number of employees in attendance from different departments, with a wealth of experience in their areas of specialization, has helped to generate research that is used to provide lasting solutions to the challenges facing the Nigerian economy at large.
9.3 EXERCISES
9.1. What do you understand by causal methods?
9.2. Define the following:
a. Temporal sequence.
b. Concomitant variation.
c. Non-spurious association.
9.3.
a. What is the marketing mix model?
b. Mention the 4Ps of marketing mix.
c. Explain the procedure of creating a marketing mix model.
9.4. Differentiate between multiple linear regression and nonlinear regression.
9.5.
a. What is research communication?
b. Describe how we can generate insights from the effective communication of research.
CHAPTER 10
Mobile Data Mining
Modern communication technology has become ubiquitous in everyday life, and as such has
generated a huge amount of research interest. Modern smartphones integrate numerous functions and allow the convergence of numerous forms of media; these features include internet browsing, multimedia, gaming, and many others. Nowadays, researchers can make use of mobile phones to collect a wealth of data and useful information. The data can be in the form of call logs, message logs, location, application usage, web usage, and sensor data, among other characteristics (see Figure 10.1). Behavioral analyses such as sentiment analysis and segmentation analysis can be performed on these activities.
Figure 10.1: Different kinds of data in a mobile phone.
10.1 CONCEPT OF MOBILE DATA MINING
Data mining is an analytic process designed to explore patterns or systematic relationships be-
tween variables. Data mining provides the link between stored transactions and analytical sys-
tems. The essence of mining data is to provide a prediction for future occurrences of the data.
Mobile data mining can play significant roles of data producer, data analyzer, and client of
remote data miners. The first task of smartphone data mining is to use the smartphones to cap-
ture data from various sources, such as call logs, message logs, locations, etc. Second, the data
collected undergoes ETL. This process involves extracting data from different sources and con-
verting them into a unique format, transforming it into clean data that is suitable for analysis by
removing errors, and then loading and storing the clean data into data warehouses for mining.
Third, after the data has been warehoused, mining activities (behavior analysis, location-based
services) can be conducted. Figure 10.2 demonstrates the process of mobile data mining.
Figure 10.2: Mobile data mining process.
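A toy version of the extract-transform-load step might look like the following; the field names, cleaning rule, and storage target are assumptions made for illustration rather than details from the text.

```python
import csv
import sqlite3

def etl_call_logs(csv_path, db_path="warehouse.db"):
    """Extract call-log rows from a CSV file, clean them, and load them into SQLite."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS call_logs (caller TEXT, callee TEXT, seconds INTEGER)")
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):                      # extract
            try:
                seconds = int(row["duration"])             # transform: enforce a numeric duration
            except (KeyError, ValueError):
                continue                                    # drop malformed records
            if seconds <= 0:
                continue
            con.execute("INSERT INTO call_logs VALUES (?, ?, ?)",
                        (row.get("caller"), row.get("callee"), seconds))  # load
    con.commit()
    con.close()
```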
Behavioral Analysis reveals the social impact of the various activities performed by a mo-
bile phone user. This analysis may be based on the social interactions of a mobile phone user in a community or in an organization, or it could reveal information about a user’s personality or behavior. The interaction of a user with the mobile phone gives a holistic overview of their habits. For instance, a user may prefer to listen to music or watch videos whenever he or she is on a bus in
traffic. These activities can be recorded over time and can be used to predict the user’s behavior
whenever he or she experiences similar conditions.
Location-Based Services (LBS) allow an analyst to predict where the mobile user will visit
in the future, considering the past movement history of the mobile phone user. There are three
steps for forecasting the location of a mobile phone. Systematic gathering of the movement
history of the user, recognition of movement patterns, and prediction of the next location based
on pattern observed.
In recent technological developments, some application models have been deployed into
mobile phones for mining data collected via smart phones. bra develops a mobile application that
is used to collate data on business expectation survey and market survey. The mobile application
uses the embedded GPS to log the time and location of the survey. This function helps to monitor
the activities of the fieldworkers during the period of survey, and also aids in confirming the
authenticity of the data collected. The collection of data is done through the mobile phone but
the data resides on a database server. The outcome of the survey can be displayed on the phone
by invoking the necessary functions integrated in the app.
10.2 ACTIVITIES OF MOBILE DATA MINING
The models in the mobile phone application can run in three different schemes. These schemes are mobile interface, on-board CPU, and client-server. In the mobile interface scheme, mobile
applications provide a user interface, but data mining activities are performed through back-
end computational infrastructures. For the on-board CPU scheme, mobile data mining jobs are
performed locally using the computational power and structure of the mobile device. Through
the client-server arrangement, data mining activities are carried out on both the mobile devices
and back-end servers.
Communication overheads in mobile interface schemes are high when compared to on-
board CPU schemes. The on-board CPU data mining scheme has good data visualization and
low latency, while the mobile interface scheme does not have such benefits. However, the
issues encountered in using on-board CPU center on limitations concerning battery power,
memory, processing and storage. These problems are largely ameliorated in a mobile interface
scheme.
bra uses technologies based on Java 1.7, Java Enterprise Edition, MS SQL, JSF, and Struts in the development of the mobile data mining platform. The diagram in Figure 10.3 shows the flow of data within the platform to produce various reports. The following are the steps taken to produce reports.
Figure 10.3: End-to-end flow.
1. Data Entry Clerks collate data of specific formats based on the various indexes supported
by the platform and input the “raw data” into the system via various supported channels,
web, or mobile.
2. The data are validated by an internal logic built for each data format. The validation ensures
the final data stored are valid entries matching various stipulated algorithms to ensure the
reports that will be generated eventually are of very high integrity.
3. After all data are captured, the “Index Engine System” is executed based on the scheduled
period or manually as required. It executes the various processes on the data captured to
produce the report.
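The validation step described in item 2 could be sketched as a set of per-format rules; the field names and thresholds below are hypothetical and only illustrate the idea of rejecting records before they reach the report stage.

```python
def validate_record(record, data_format):
    """Return True if a raw record satisfies the rules for its data format."""
    rules = {
        # Hypothetical rules per supported data format.
        "business_expectation": lambda r: r.get("response")
            in {"Increased", "Decreased", "Remain Unchanged", "Unknown"},
        "price_survey": lambda r: isinstance(r.get("price"), (int, float)) and r["price"] > 0,
    }
    check = rules.get(data_format)
    return bool(check and check(record))

print(validate_record({"response": "Increased"}, "business_expectation"))  # True
print(validate_record({"price": -5}, "price_survey"))                      # False
```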
10.3 ARCHITECTURE OF MOBILE DATA MINING
In recent times, the Service Oriented Architecture (SOA) model is mainly used to implement
distributed systems in which applications and components interact with each other independently of platform and language [Kumar and Ramadevi, 2016]. Kumar and Ramadevi state that the most important implementation of the SOA model is web services, because of internet technologies (XML and HTTP) which are globally accepted. Web services encompass the integration of distributed applications, processes, and data, optimizing the deployment of systems and improving their efficiency. The wide use of web services in mobile environments allows mobile devices to be integrated with server applications running on different platforms, and allows us to access distributed services from mobile devices.
bra develops mobile data mining architecture based on five components, namely: data
capture, middle ware, data storage, data analysis, and data reporting. The architecture diagram
shows the design of the systems/components that form the entire platform. It shows the vari-
ous connections between the components that allow the data to be captured, validated, stored,
processed, reported, and finally consumed by the Reporting Analyst. Figure 10.4 shows the ar-
chitectural design of mobile data mining.
10.4 ALGORITHMS OF MOBILE DATA MINING
Data mining algorithms are heuristic procedures that are used to analyze data by observing the
historical patterns or trends in a dataset. They use the outcome of the analysis to define the optimal parameters for creating models. Data mining algorithms incorporate statistical and learning algorithms that help to advance the collection and management of data. Data mining can be applied in areas such as education, bioinformatics, genetics, banking, and electrical power. Mobile data mining uses data mining algorithms in a mobile computing environment.
Figure 10.5 shows the algorithms in the data mining process.
10.5 APPLICATION OF MOBILE DATA MINING
Recently, it has become possible for firms to purchase mobile apps with the capacity of per-
forming data mining tasks. This concept of mobile data mining is used across a variety of fields,
for example, to develop body-health monitoring smartphones, vehicle control systems, fraud detection software, and wireless security systems. More research should be done to improve the battery life of mobile phones, since data mining through mobile phones requires a lot of processing power that can quickly drain the life of a handset.

Figure 10.4: Mobile data mining architecture (data capture via mobile or web, middleware with load balancing across multiple middleware instances, data storage with a database load balancer and multiple database instances, data analysis via the index algorithm engine, and data reporting).

Figure 10.5: Data mining algorithms (online analytical processing, query tools, prediction, classification, visualization, description, clustering, association, and sequential analysis, supported by decision trees, neural networks, regression, SQL, and discovery-driven methods).
Innovative applications of data mining have materialized in the following sectors.
(a) Healthcare: There is tremendous growth in biomedical research, including in areas such
as pharmaceuticals, cancer therapies, and human genome research. Lately, research on
DNA has revealed genetic sources for diseases and led to the development of preventive
medicines and treatments to manage these diseases.
(b) Finance: Financial institutions collect a lot of high-quality and reliable data from cus-
tomers using banking services, investment services, and insurance services. This data can
be used for a wide variety of purposes, for example, financial engineering (i.e., the cre-
ation of new asset classes, especially in the field of derivatives), credit checking, and fraud
detection.
(c) Telecommunication: A vast amount of data has been collected from the telecommuni-
cation sector, ranging from personal data to the biometry data of individual subscribers.
This data can be used to track fraudulent activities (including other criminal activities).
(d) Retail industry: Data on sales, customers’ shopping patterns, and services collected through e-commerce site purchases, ATM withdrawals, and POS transactions can
yield useful information about consumption behavior. For example, we can identify cus-
tomers’ shopping patterns and trends over a period of time. With the aid of data mining,
we can have the knowledge of when, where, and how a customer spends their disposable
income. This data can be leveraged to improve the quality of customer service to enhance
consumer retention and satisfaction, and reduce business costs.
10.6 EXERCISES
10.1. What is mobile data mining?
10.2. With the aid of a diagram, describe the mobile data mining process.
10.3. Discuss the end-to-end flow in producing bra reports.
10.4. With the aid of a diagram, describe mobile data mining architecture.
10.5.
a. Explain the function of data mining algorithms.
b. Mention areas where mobile data mining may be applicable.
APPENDIX A
Questionnaires, Items Survey, and Weights of Elementary Items
A.1 SAMPLE OF BUSINESS EXPECTATION SURVEY
QUESTIONNAIRE
Business Expectation SurveySector: Manufacturing (Finance)Company InformationPlease provide the following information in Block Letters for verifi cation purposes.Company Name:Principal Activity/Product Line:Division/Department:Company Address & Local Government:Name of Respondent:Position:Phone Number:Email:*All information gathered herein shall be treated strictly confi dential and shall be used for statistical purposes only.Note: Please indicate “Increased” if an increase of at least 5% is expected or has been experienced. Please indicate “Decreased” if a decrease of more than 5% is expected or has been experienced. Otherwise, please indicate “Remain Unchanged”.Kindly indicate by a check mark your answer for each item in this questionnaire unless otherwise specifi ed. If the item is not applicable, please write N.A on its left side. Th ank you!122 A. QUESTIONNAIRES, ITEMS SURVEY, AND WEIGHTS OF ELEMENTARY ITEMS
1. Employment Expectation:What is the Institution’s Employment Expectation for: ___?PeriodDecreasedRemain UnchangedIncreasedUnknownCurrent MonthOne MonthTh ree MonthsSix MonthsComment2. Prices ExpectationsWhat is the Infl ation Rate Expectation?PeriodDecreasedRemain UnchangedIncreasedUnknownCurrent MonthOne MonthTh ree MonthsSix MonthsCommentWhat is the Company Product Prices Expectation?PeriodDecreasedRemain UnchangedIncreasedUnknownCurrent MonthOne MonthTh ree MonthsSix MonthsCommentWhat is the Exchange Rate (NGN/USD) Expectation?PeriodAppreciateDepreciateUnknownRemainUnchangedCurrent MonthOne MonthTh ree MonthsSix MonthsCommentA.1. SAMPLE OF BUSINESS EXPECTATION SURVEY QUESTIONNAIRE 123
3. Bank RatesWhat is the general Interest Rate Expectation?PeriodDecreasedRemain UnchangedIncreasedUnknownCurrent MonthOne MonthTh ree MonthsSix MonthsCommentWhat is the Deposit Rate Expectation?PeriodDecreasedRemain UnchangedIncreasedUnknownCurrent MonthOne MonthTh ree MonthsSix MonthsComment124 A. QUESTIONNAIRES, ITEMS SURVEY, AND WEIGHTS OF ELEMENTARY ITEMS
4. Access to Credit Facilities and Financial ConditionWhat is your company’s Financial Condition?PeriodEasyNormalTightUnknownCurrent MonthOne MonthTh ree MonthsSix MonthsCommentWhat is your company’s Load Credit Expectation?PeriodDecreasedRemain UnchangedIncreasedUnknownCurrent MonthOne MonthTh ree MonthsSix MonthsCommentWhat is your company’s Access to Credit Expectation?PeriodDecreasedRemain UnchangedIncreasedUnknownCurrent MonthOne MonthTh ree MonthsSix MonthsCommentIf opportune, would you like to access Loan Credit?YesNoIf yes, for what purpose?PeriodExpansionStock Raw Material InputAcquisition of AssetsInvestmentCurrent MonthOne MonthTh ree MonthsSix MonthsCommentA.1. SAMPLE OF BUSINESS EXPECTATION SURVEY QUESTIONNAIRE 125
5. Economic and Business ConditionsWhat do you think of the country’s Economic Condition in the next ___?PeriodImproveRemain UnchangedDeteriorateUnknownCurrent MonthOne MonthTh ree MonthsSix MonthsCommentHow will you rate and predict Business Conditions in the next ___?PeriodImproveRemain UnchangedDeteriorateUnknownCurrent MonthOne MonthTh ree MonthsSix MonthsComment126 A. QUESTIONNAIRES, ITEMS SURVEY, AND WEIGHTS OF ELEMENTARY ITEMS
6. Production Cost ExpectationsWhat is the general Cost of Production Expectation?PeriodDecreasedRemain UnchangedIncreasedUnknownCurrent MonthOne MonthTh ree MonthsSix MonthsComment7. Market Share ExpectationsWhat is the general Sales Expectation?PeriodDecreasedRemain UnchangedIncreasedUnknownCurrent MonthOne MonthTh ree MonthsSix MonthsComment8. Stock Level ExpectationsHow would you rate and predict your company’s Stock (Inventory of Raw Materials)?PeriodDecreasedRemain UnchangedIncreasedUnknownCurrent MonthOne MonthTh ree MonthsSix MonthsCommentA.1. SAMPLE OF BUSINESS EXPECTATION SURVEY QUESTIONNAIRE 127
9. Capacity Utilization ExpectationsWhat is the Current Capacity Utilization of the production of your fi rm?ResponsesMark (X)At full capacity (100%)Slightly below capacity (80%)Moderately below capacity (60%)Signifi cantly below capacity (40%)At close to zero capacity utilizationCommentWhat is your forecast for the trend of capacity utilization of the production processes at your fi rm?PeriodDecreasedRemain UnchangedIncreasedUnknownCurrent MonthOne MonthTh ree MonthsSix MonthsComment10. Social and Economic Policy Shocks (Factors Limiting Business Operations)What factors are currently or might likely limit your ability to increase business activity? Please number the factors ranked according to its signifi cance in your production/business activity, “1” being the most signifi cant.NoneAccess to creditForeign competitionDomestic competitionHigh operating costLack of demandShortage of laborHigh staff turnoverDiffi cult to collect debtHigh interest rateUnclear economic lawLack of equipmentOthers (please spedify)Please provide details on the top three factors limiting your business activity.Factor 1Factor 2Factor 3128 A. QUESTIONNAIRES, ITEMS SURVEY, AND WEIGHTS OF ELEMENTARY ITEMS
What are the business expectations on the current economic, social, and fi scal issues?PeriodDecreasedRemain UnchangedIncreasedUnknownCurrent MonthOne MonthTh ree MonthsSix MonthsCommentSuggestions, if any, for policy implications:What is the value of the following lending rates:TypeValueInterest RateManagement FeeProcessing FeeOther Fees (if applicable)CommentA.1. SAMPLE OF BUSINESS EXPECTATION SURVEY QUESTIONNAIRE 129
Purchasing Manager’s IndexHow did the quantity of new orders in the current month compare to the quantity of orders in the previous month?% Decreased% Remain Unchanged% Increased% UnknownJuly 2017 IndexAugust Index% ChangeHow did supplier delivery times from the current month compare to the previous month?% Improve% Remain Unchanged%Deteriorate% UnknownJuly 2017 IndexAugust Index% ChangeHow did the volume of production in the current month compare to production level in the previous month?% Decreased% Remain Unchanged% Increased% UnknownJuly 2017 IndexAugust Index% ChangeHow did the quantity of raw materials/WIP inventory from the current month to the quantity of inventory in the previous month?% Decreased% Remain Unchanged% Increased% UnknownJuly 2017 IndexAugust Index% ChangeHow did your fi rm’s employment levels this month compare to the previous month?% Improve% Remain Unchanged%Deteriorate% UnknownJuly 2017 IndexAugust Index% Change130 A. QUESTIONNAIRES, ITEMS SURVEY, AND WEIGHTS OF ELEMENTARY ITEMS
A.2 LIST OF ITEMS SURVEY MONTHLY
List of Items Survey MonthlyS/NCategory ItemElementary ItemMeasurement1Food and Non-Alcoholic BeveragesBread SlicedGramsBread UnslicedGramsCake (Ordinary)GramsGramsGramsRound BreadWheat BreadChewing Gum PiecePeppermint1 1 1 1 1 PieceVicks Lemon Plus1 Piece3 Crowns Evaporated Milk 170 g1 TinBlue Band 450 g TinCow and Gate Baby Milk 900 g1 TinSt. Louis Cubed Sugar 500 g1 PackExeter Corned Beef 340 g1 TinGeisha 155 g TinGlucose D 400 g TinGranulated SugarGramsKetchup 450 ml450 mlLocal Coasta BiscuitGramsLocal Cracker BiscuitGramsMacaroni 500 g 1 PacketNan Baby Milk 450 g450 gramsLocal Coasta BiscuitGramsLocal Cracker Biscuit GramsMacaroni 500 g 1 PacketNatural Honey:Local Production1 Beer BottleIndomie Noodles 70 g70 GramsPackaged Wheat FlourKilogramsPeak Evaporated Milk 170 ml1 TinPlantain Chips (Indicate Weight)GramsA.2. LIST OF ITEMS SURVEY MONTHLY 131
List of Items Survey Monthly (continued)S/NCategory ItemElementary ItemMeasurement1Food and Non-Alcoholic Beverages (continued)Poundo Yam (Indicate Weight)GramsPowdered 3 Crowns MilkGramsPowdered Peak Milk 400 g1 TinQuaker OatsGramsSalt (Indicate Weight)GramsTitus Sardine 125 g 1 TinGolden Penny Semovita 2 kgKilogramsSma Baby Milk 450 g1 TinGolden Penny Spaghetti500 g/0.5 kg500 g/0.5 kgStar Kist Tuna1 TinStrawberry Jam (Indicate Weight)GramsTomato PureeWafersGramsGramsBag Tea-1 Packet of Lipton1 PackBournvita 450 g1 TinNasco Corn Flakes 350 g1 PackCustard 300 g1 TinHimalt1 BottleMaltina1 BottleNescafe Coff ee 50 g1 TinOgi/Akamu/KokoOvaltine 450 g1 TinYogurt (Indicate Size)BananaKilogramsKilogramsKilogramsImported AppleKilogramsOrangePawpawKilogramsKilogramsPineappleKilogramsKilogramsWater LemonBrown BeansKilograms132 A. QUESTIONNAIRES, ITEMS SURVEY, AND WEIGHTS OF ELEMENTARY ITEMS
List of Items Survey Monthly (continued)S/NCategory ItemElementary ItemMeasurement1Food and Non-Alcoholic Beverages (continued)Corn FlourKilogramsEluboKilogramsFlourKilogramsFufuKilogramsGroundnutKilogramsGuinea CornKilogramsGuinea Corn FlourKilogramsImported RiceKilogramsLocal RiceKilogramsLocust BeansKilogramsMilletKilogramsOfada RiceKilogramsPigeon BeansKilogramsPlantain FlourKilogramsWheatKilogramsWhite BeansKilogramsWhite GarriKilogramsKilogramsKilogramsKilogramsWhite MaizeYellow GarriYellow MaizeBeefKilogramsEggKilogramsFrozen TurkeyKilogramsGoat MeatKilogramsLive Agric ChickenKilogramsLive Guinea FowlKilogramsLive Local ChickenKilogramsLive Medium TurkeyKilogramsPorkKilogramsTurkeyKilogramsWhole Frozen ChickenKilogramsA.2. LIST OF ITEMS SURVEY MONTHLY 133
List of Items Survey Monthly (continued)S/NCategory ItemElementary ItemMeasurement1Food and Non-Alcoholic Beverages (continued)AkaraGramsEko (Agidi/Kafa)GramsMoin-MoinGramsMilling Charges1 VisitCatfi shKilogramsCrabKilogramsCroakerKilogramsFresh ShrimpsKilogramsMackerelKilogramsRed-Dried ShrimpsKilogramsStock FishKilogramsTilapiaKilogramsTitusKilogramsSwan Bottled Water (Indicate Size)ClEva Bottled Water (Indicate Size)ClRagolis Bottled Water (Indicate Size)ClNestle Bottled Water (Indicate Size)ClAquadana Bottled Water (Indicate Size)ClFive Alive (Indicate Size)ClFunman Juice (Indicate Size)ClCaprisonne (Indicate Size)ClChivita (Indicate Size)ClHappy Hour (Indicate Size)ClLucozade Boast (Indicate Size)ClSachet Water (Indicate Size)1 SachetCoca-Cola 35 cl1 BottleFanta 35 cl1 BottleSprite 35 cl1 Bottle7-Up 1 Bottle134 A. QUESTIONNAIRES, ITEMS SURVEY, AND WEIGHTS OF ELEMENTARY ITEMS
List of Items Survey Monthly (continued)S/NCategory ItemElementary ItemMeasurement1Food and Non-Alcoholic Beverages (continued)Pepsi1 BottleMirinda1 BottleCassavaKilogramsCocoyamKilogramsIrish PotatoesKilogramsPotatoKilogramsSweet PotatoesKilogramsKilogramsYamAgbono/AkponBitter ColaGramsGramsBitter LeafGramsCabbageCarrotGramsCashew NutCucumberGramsCurry PowderGramsEweduGramsGramsGramsGarlicGramsGingerGramsGroundnut Oil1 Beer BottleKnorr CubesGramsLettuceGramsMelon Seed ShelledGramsMelon Seed UnshelledGramsOnionsGramsGramsPalm NutPalm Oil1 Beer BottleFresh Pepper Sugar CaneGramsGramsTh ymeGramsFresh TomatoGramsA.2. LIST OF ITEMS SURVEY MONTHLY 135
List of Items Survey Monthly (continued)1Food and Non-Alcoholic Beverages (continued)UgwuGramsWater LeafGramsS/NCategory ItemElementary ItemMeasurement2Clothing and Foot WearsAdire YardsAgbada Buba & Sokoto (Men)1 PieceGuinea Brocade (Mallam Style)1 PieceAnkara High Quality Prices6 6 YardsAnkara Low Quality Prices6 YardsBabies Dresses (for female), 2 yrs old1 PieceBrocade (made in Nigeria) 10 YardsChildren WearsOneFemale GownOneOneOneFemale Head TieFemale ShirtFemale Trousers1 PairKampala6 YardsLace (Made in Nigeria) 10 YardsLinen Male Trousers1 PairMen Belt1 PairMen Long Sleeve Shirt1 PieceMen Short Sleeve T-Shirt1 PieceMen Short (Boxers)1 PairNative Cap1 PieceNative Cloth (Aso-Oke)10 lagsNative Dress (Kaftan) Ready Made1Printed Fabric1 YardSinglet (Boys)Skirt (Girls)11Synthetic Materials for Sewing1 YardTies1136 A. QUESTIONNAIRES, ITEMS SURVEY, AND WEIGHTS OF ELEMENTARY ITEMS
List of Items Survey Monthly (continued)S/NCategory ItemElementary ItemMeasurement2Clothing and Foot Wears (cont.)Two-Piece Suit (Coat and Trouser) Terylene1 PairVelvet (Skirt Material)1 YardWomen’s Brief 100% Poly Amide Double Seat, Elastic Waist Band; Medium Size1PieceChildren Shoes1 PairFemale Sandals1 PairFemale Shoes1 PairMale Sandals1 PairMale Shoes1 PairSlippers1 PairLaundry & Dry Cleaning Services for Two Pieces Suit (Coat and Trouser)1 PieceRepair of Shoes Excluding Cost of Materials1 PairRepair of Cloth (Mending)1 PairTailoring Charges for School Uni-form (for Girls)1 PieceTailoring Charges for School Uni-form (for Boys Shirts and Knick-ers)1 PieceTailoring Charges for Women's Buba & Iro1 PieceTailoring Charges Men's Buba & Sokoto1 PieceTailoring Charges Men's Suit (Coat and Trouser)1 Piece3HealthAmpicillin Capsule of 250 mg Sachet of 10 Capsule1 PacketAspirin Complete Sachet 1 PacketA.2. LIST OF ITEMS SURVEY MONTHLY 137
List of Items Survey Monthly (continued)S/NCategory ItemElementary ItemMeasurement3Health (cont.Chlroquine, Pack of 10 Tablet1 PacketEpsom Salts, 1 Sachet1 PacketFansidar, Sachet of 3 Tablets1 PacketMultivitamins Complete Sachet1 PacketNivaquine, Pack of 10 Tablet1 PacketNivaquine Syrup 60 ml1 BottleNovalgin, Pack of 20 Tablets1 PacketPanadol, 1 Sachet/ Complete of 10 Tablet1 SachetParacetamol1 SachetTerramycin Capsule: A 250 mg Sachet of 4 Capsules1 PacketBenylin and Codein: 100 ml Bottle1 BottleMultivite Syrup: Small Bottle of 100ml1 BottleMulyivite: Glaxo, 1 Bottle of 60 Tablets1 BottleOreptal: 1 Medium Bottle 300 ml1 BottleInsecticide(Spray)Baton 400 ml1 TinDustin Powder Standard Size 250 gm1 TinMethylated Spirit (Indicate Size)1Shaving Powder1Blood Test in Private Hospital (Malaria/ Typhoid Parasite)1 TestConsultation Fee Chemists 1 VisitConsultation Fee Dental Service 1 VisitConsultation Fee Government Hospitals1 VisitConsultation Fee Private Hospitals 1 VisitConsultation Fee Unorthodox Clinics1 VisitCorrective Eye Glasses1 138 A. QUESTIONNAIRES, ITEMS SURVEY, AND WEIGHTS OF ELEMENTARY ITEMS
List of Items Survey Monthly (continued)S/NCategory ItemElementary ItemMeasurement3Health (cont.)Hospital Accommodation in Government Hospital, Fee per Day on Admission Excluding Food1 DayHospital Accommodation in Private Hospital, Fee per Day on Admission Excluding Food1 DayLaboratory Service: Urine Test for Presence of Albumen and Glucose in Private Hospital1 TestMidwives1 ServiceNurse1 ServicePhysiotherapist1 ServiceX-Ray Photography, 1 Chest X-Ray in Private Hospital1 TestAndrew Liver Salt1 Sachet of 5 mg1 SachetCombatrin: 1 Pack of 6 Tablets1 PacketsDiabenese: 250 mg, 1 Box of 20 Tablets1 PacketsEye Drop: Visine 15 ml1 BottleRobb in Small Tin of 4 g1 TinTooth Paste: Close Up Small Size 37 g1 TubeCotton Wool 100 g PackCrepe Bandage 2"1 RollDusting Powder Standard Size 250 g1 1 TinGrip Water 100 ml1 BottleIodine: 1 Bottle 15 ml1 BottleMedicated Plaster1 SachetMilton Sterilizing Fluid 500 ml1 BottleOlive Oil Goya 88.7 ml1 BottleOintments Penicillin 20 g1 TubeA.2. LIST OF ITEMS SURVEY MONTHLY 139
List of Items Survey Monthly (continued)S/NCategory ItemElementary ItemMeasurement3Health (cont.)Potash (Potassium Permanganate Crystal) 100 g1 BottleSyringe 5 ml14Furnishing Household Equip-ment and MaintenanceBath: Standard Size, Cast Iron (Coated With Ceramic) Ariston1Crittal Window Frame: 4' × 4' Without Inbuilt Burglary-Proof1Flush Door: Made of Plywood Size 2.5' × 8' 1 Galvanized Iron1Glass Sheet for Crittal Window Plain Std Size 12" × 24"1Glass Sheet for Crittal Window Shadedstd Size 12" × 24"1Louvre Frame with 8 Blades1Louvre Glass, Plain 61 cm1Louvre Glass, Shaded 61 cm1Shower Fittings Iron Type1Wash Hand Basin 40 cm × 55 cm1Water Closet: Complete Set (Local)1Electric Bulb 60 Watts Plain1Electric Iron Philips Dry Iron1Electric Kettle (4 l) Specify Make1Electrical Wire (1 yd 1.5 mm)1Extension Box 6 mmFlorescent Tube (White) 4' 40 Watts1Lamp Holder (Angle)1Socket (Wall Suface) 4 mm1140 A. QUESTIONNAIRES, ITEMS SURVEY, AND WEIGHTS OF ELEMENTARY ITEMS
4Furnishing Household Equip-ment and Maintenance (cont.)Audio Set Cooker1DVD Player1FanFreezerFridgeLaundry IronTV Set1Vhs Player1Bed Frame: Well-Polished Plywood Dimension 4' × 6' 6"11111Bed Linen1Cushion Chair: Wooden Frame With Spring With Arms, Well Polished Single Seater1Furniture (Wardrobe)1Kitchen Cupboard: Ordinary Unpolished with Three Shelves1Mat: Made of Natural Fiber Specify Size1Ordinary Chair: Table Chair with Some Foam Attachment without Arms1Ordinary Chair All Wooden, Standard Size without Arms1Pillow: Foam Filled1Wall Hanger1Wood Bed: Frames and Bed Stead Made of Wood Well Polished 4' × 6' Wood1Writing Table: Well-Polished, 3 Drawers on One Side: Top in Formica11CupboardA.2. LIST OF ITEMS SURVEY MONTHLY 141
4Furnishing Household Equip-ment and Maintenance (cont.)Air Conditional ( Panasonic) 2 horsepower1Batteries Small (For Small Radio) Size AA 1Bed Sheet: Ready Made Printed Fabric Polyester (Up To 25% Cotton) Size 4' × 6' Bed1Blanket Made in Nigeria1Cooking Pot Aluminum Type : 2 Handle with Lid Medium Tower Brand1Dinner Plate, Unbreakable1Disinfectant: Dettol in Plastic Bottle 250 ml1Domestic Employee Steward: On Monthly SalaryOne MonthDomestic Employee: All Duties (Full Time) with Feeding and Accommodation Monthly Per EmployeeOne MonthElectric Cooker: 2 Rings, 1 Flask Th ermocool, Medium Size1Fork Stainless Steel1Gas Cooker: 2 Burners (without Oven) Specify Type1Gas Cooker: 4-Burner Eazee Cooker with Oven1Glasses—Ordinary Drinking Glass Pack of 6Grinder Phillips1Household Utensils Sauce Pan1Insecticide (Spray) Bagon 400 ml1 Tin142 A. QUESTIONNAIRES, ITEMS SURVEY, AND WEIGHTS OF ELEMENTARY ITEMS
4Furnishing Household Equip-ment and Maintenance (cont.)Kitchen Knife with Wooden/Plastic Handle1Lamp Th reads (Wick)1Linoleum (Carpet) Plastic Type Multicolour, Price/Yard1yardPans (Frying Pans ) Medium1Refrigerator 250/300 l Th ermocool Haier1Rug: Single Color, Simplest Type Price Per Sq. MeterSquare meterScoring Powder (Vim) Price of One 500 g1Serving Dishes Medium Size1Standing Fan (Specify Product)1Standing Fan (Specify Product)1Table Knife Stainless Steel1Tablespoon Stainless Steel1Toaster Machine (Phillips)1Tumbler 25 cl Plain With Handle1Washing Machine (Ignis)1Cooker11Microwave Plates1Bowl1Cooler1Foodfl ask1Metal Bucket Big Size 341Plastic Basin: 60 cm Diameter1Plastic Bucket With Metal Handle, Big Size 341Plastic Cup about 25 cl1Plastic Plate16" Construction Block1A.2. LIST OF ITEMS SURVEY MONTHLY 143
4Furnishing Household Equip-ment and Maintenance (cont.)9" Construction Block1Asbestos Roofi ng Sheet, 1.2 m × 1.2 m1Asbestos Roofi ng Sheet, 1.2 m × 3 m1Ceiling Slate1Cement: Benue (50 kg Bag)1Cement: Burham (50 kg Bag)1Cement: Dangote (50 kg Bag)1Cement: Elephant (50 kg Bag)1Cement: Larfarge (50 kg Bag)1Corrugated Iron Sheet Comb Brand (20 Sheets)1Corrugated Iron Sheet Hand Brand (20 Sheets)1Emulsion Paint: Nigerlux 4 l1Floor Tiles: Plastic Plain Nigeriate Tiles 300 mm × 300 mm1Gloss Paint: Dulux 4 lGravels Washed 5 cu Meters1Iron Nails 10 cm (4")1Iron Rod 1.3 cm × 9 cm1Iron Rod 2 cm × 9 cm1Labor Rate1Padlock: Yale Medium1Plumbing Pipe 1" (Plastic)1Plumbing Pipe 1/2" (Plastic)1Plumbing Pipe 2" (Plastic)1Plumbing Pipe 4" (Plastic)1Roofi ng Sheet11Sand: ErosionSand: Washed1Stones: Granite1144 A. QUESTIONNAIRES, ITEMS SURVEY, AND WEIGHTS OF ELEMENTARY ITEMS
4Furnishing Household Equip-ment and Maintenance (cont.)Union Key; 2 Leavers Made in England1Wall Tiles:Ceramic Plain 15 cm × 15 cm i.e.. 6" × 6"1Water Paint: Dulux 4 Litres1Wood And Construction Materials For Kitchen Basin1Wood Plank Iroko 2" × 3"1Wood Plank Iroko 3" × 4"1Wood Plank Mahogany 2" × 3"1Wood Plank Mahogany 3" × 4"1Electric Charge (Non-Private Users)1 kwhElectric Charge (Private Users)1 kwhGas in Medium Cylinder (12.5 kg Refi lling Specify)1 CylinderGas in Small Cylinder, i.e., 5 kg Refi lling1 CylinderKerosene: Price of 1 Beer Bottle (60 cl)Beer BottleKerosene: Price of One Gallon1 GallonLiter of Diesel 1 LiterLiter of Kerosene 1 LiterLiter of Petrol 1 LiterBed Room Flat: Rent1 MonthLarge Shop1 MonthMedium Shop1 MonthSmall Shop1 Month2 Bed Room Flat: Rent1 Month3 Bed Room Flat: Rent1 MonthCandle Price of a Packet of 8 Candles, White1 PacketCandle Price of One 1 Duplex1 MonthA.2. LIST OF ITEMS SURVEY MONTHLY 145
4Furnishing Household Equip-ment and Maintenance (cont.)Half Duplex1 MonthMatches Price of One Box1 BoxModern Detached Bungalow 4–5 Rooms Including Sitting Room: Rent1 MonthMortgage Charges 1 ServiceOther Housing Charges (Agreement)1 ServiceRefuse DisposalMonthlyRoom & Parlor, Wall Made of Cement Blocks, in a Bungalow House, Standard Size: Rent1 MonthSelf-Contained with Water Closet: Rent1 MonthSelf-Contained without Water Closet: Rent1 MonthSingle Room, Wall Made of Cement Blocks, in a Bungalow House, Standard Size: Rent1 MonthWater RateMonthly5Miscellaneous Goods and ServicesCar Wash1 VisitCarpenter: Fixing of Door Key Excluding Cost of Key Cost of LaborChildren's Hair Cut By (Established Barber)1 PersonFan Repair: Rewinding of Coil (Cost of Labor Only)Cost of LaborFridge Repair: Replacement of Compressor (Cost of Labor Only)Cost of LaborGrinding of Food Condiments, Soup Ingredients Mixed (e.g., Pepper, Tomatoes, Onions )1 ServiceJewelry (Bangles)1146 A. QUESTIONNAIRES, ITEMS SURVEY, AND WEIGHTS OF ELEMENTARY ITEMS
5Misc. Goods and Services (cont.)Laborer: Cost of Labor Per DayCost of LaborManicure and Pedicure1 ServiceMen's Hair Cut: Adult By (Established Barber)1 PersonNight Watchman: Hired Privately, Duties From 8pm–8am Everyday Charge/Month1 Monthly SalaryPhotocopy 1 Page Document Foolscap Size1 ServicePressing Iron Repair: Replacement Of Element (Cost of Labor)Cost of LaborRadio Repair: Replacement of Transformer Excluding Cost of Material (Labor Only)Cost of LaborShoe Shinning, a Pair of Shoes, (Cleaning, Polishing, and Shining)1 ServiceTelevision Repair: Replacement of Transformer Excluding Cost of Material (Labor Only)1 ServiceWasher Man: Cost of Washing Coat and Trouser1 ServiceWatch Repair General Overhauling1 ServiceWater Carrier1 TinWedding Rings112" × 18" Mirror 5 mm1African Comb; Wooden About 10 teeth1Air Freshener 300 ml1Broom: Ordinary Floor Broom1 BroomCar Brush11Chewing StickA.2. LIST OF ITEMS SURVEY MONTHLY 147
5 Misc. Goods and Services (cont.): Cutlass ("Machete"), Wooden Handle, Large; Eye Glass (Sun Shade); Hair Drier, Super Top, Standard Type; Jet Cream, 125 g; Lady Citizen Watch; Lipstick; Ludo, Standard Size, Glazed, Complete Set; Magic Shaving Powder; Men's Electronic Citizen Wrist Watch, Simple Type; Mosquito Coil (1 coil); Needle, Ordinary; Office Pin (packet); Petroleum Jelly, Vaseline, Small, 50 g; Plastic Comb with Handle, Shaped Like African Comb; Sewing Thread; Shoe Polish, KIWI, Black, Round Tin of 50 ml (1 tin); Sponge (Local or Woven); Studs; Suitcase, Medium; Tie Clips; Tiger Razor Blade, Packet of 10; Tony Montana, 100 g; Tooth Brush, Jordan; Travelling Bag (Medium); Walking Stick, Wooden, Polished, Uncovered; Wall Clock (Battery Type); Women's Hair Brush; Women's Hand Bag, Medium Size; Women's Hand Bag, Small Size.

6 Communication: Browsing Fee (1 hour); Desktop Computer (1 unit); Internet Subscription (month); Fax Service (1 service); Money Order; Stamps; Nokia, Simple (unit); Telephone at Public Call, Lasting 3 Minutes, Local within City (3 minutes); Telephone Call, Intra-State, GSM at Business Center (1 minute); Land-Line Subscription and SIM; Mobile Subscription and SIM.

7 Education: Advanced Learner's Dictionary; Arts Textbook; Ball Pen (Bic), Price of One; Cello Tape; Daily Newspaper, Daily Times, Price of One Copy; Envelope 4" × 9", Packet of 25; Eraser; Exercise Book 2A, 20 Leaves; Great Wall Sharpener, Made in China; Mathematics Set, Oxford; New General Mathematics for JSS 1-3 by M. F. Macrae & Co.; New Practical English JSS 1-3 by P. A. Ogundipe and P. S. Tregido; Nigerian Integrated Science Project JSS 1-3 by Science Teachers Association; Pencil with Eraser, HB, Price of One; Quick Ink, Small Pot; Ruler (Wooden), Price of One; Textbook, Mathematics for Primary Four, Macmillan; Weekly Magazine, Newswatch; Boarding Fees for Secondary School (Public), Charge per Term (per term); Nursery School Fees, without Meal (per term); Primary School Fees, Private, without Meal (per term); Private Lesson for Primary Four Pupil, Charge per Month (per term); Private Vocational School Fees, Secretarial Studies (per term); Quranic School Fees, without Meal (per term); University Education Fee, Course of Study Economics, One Session, without Meal and Accommodation (per session); Writing & Drawing (Children's Book).

8 Recreation and Culture: Audio Cassettes; Cable Subscription; DVD; DVD Player; Video Cassettes; Video CDs; Audio CDs; CD Player (Sony); Children's Ball, Synthetic Rubber, Diameter 22 cm; Color Film, Kodak, 36 Exposures, Size 135; Compact Disc (CD) of Popular International Artist; Compact Disc (CD), Original, Popular Indigenous Music Artist; Guitar (Ordinary); Pocket Camera, Canon; Portable Radio, 3-Band, Panasonic 3750; Television, Color, 20", LG; Unrecorded Compact Disc; Video Recorder, Sony; Cinema Show in Town Center, Popular Indigenous Film, Gate Fee (1 person); Football Match, Division 1 Match, Uncovered Seat, Gate Fee (1 person); Photographic Development and Printing, 36 Exposures, Color, Postcard Size, Charge for Development plus Printing of 36 Copies (1 visit); Recorded CD of Popular Indigenous Artist.

9 Restaurant and Hotel: Meal at a Local Restaurant (1 person each): a Plate of Eba with About Two Pieces of Meat (total cost of a normal feeding); a Plate of Amala and Soup with Two Pieces of Meat; a Plate of Cooked Fufu and Soup with Two Pieces of Meat; a Plate of Cooked Rice and Stew with Two Pieces of Meat; a Plate of Fried Rice and Chicken; a Plate of Porridge and Chicken; a Plate of Pounded Yam and Soup with Two Pieces of Meat; a Plate of Tuwo and Soup with Two Pieces of Meat. Meal at a Standard Hotel, Charge of Full Lunch (1 person); Room Charge in a Standard Hotel, Single Bed (1 night).

10 Transportation: Brake & Clutch Fluid, Heavy Duty, Polygard Super AL Heavy Duty, 485 ml (1 can); Engine Oil, 1 l; Tire Gauge (service); Tire Repair (cost of labor); Bicycle Repairs (cost of labor); Brake Pad; Car Insurance, Third Party, Premium plus 5 Years Claim; Diesel Oil (liter); Driving License Renewal, Type B Private (renewal); Driving License Renewal, Type E Private (renewal); Fuel (Petrol) (1 liter); Learner's Permit; Motorcycle Insurance, Third Party; Motorcycle License; Motorcycle Repair, General Service; Replacement of One Front Shock Absorber of a Car, Cost of Labor Only; Vehicle License, Annual Renewal; Bicycle Tube, Diamond, for Men; Bicycle Tire, Diamond (China), for Men; Car Battery, Varta, 12 Volts, 45 AH, with Acid; Car Battery, Varta, 12 Volts, 60 AH, with Acid; Car Tire, Dunlop Elite 175 SR 14; Exide Battery, 12 Volts, 45 AH; Fan Belts, Automaga; Motorcycle Tube, Dunlop (1 tube); Motorcycle Tire (Indicate Brand, e.g., Dunlop); Sparking Plug, "Champion" N9Y, Price of 4 Plugs; Air Fare for Specified Route, Single Journey Only, Locally, State Capital to Abuja (1 person); Bus Journey within City, per Drop, Charge per Person, Constant Route (1 person); Bus Journey Inter-City, State Routes, per Drop, Charge per Person, Constant Route (1 person); Journey by Motorcycle (Okada/Achaba), per Drop, Constant Route (1 person); Rail Transport, Economy Charge for About 100 km (1 person); Taxi Journey, per Drop (About 3 km), per Person, Urban (1 person); Water Transport, Waterway Passenger Transportation, Constant Route (1 person).

11 Alcoholic Beverages, Tobacco, and Kola: Star Lager Beer, 600 ml (1 bottle); Harp Lager Beer, Indicate Size (1 bottle); 33 Export Lager Beer, Indicate Size (1 bottle); Small Guinness Stout, 33 cl (1 bottle); Palm Wine (beer bottle); Burukutu (beer bottle); Chelsea London Dry Gin, Indicate Size (1 bottle); Seaman Aromatic Schnapps, Indicate Size (1 bottle); Local Gin (Ogogoro) (beer bottle); Benson & Hedges (stick); Rothmans (stick); London Menthol (stick); Consulate (stick); St. Moritz (stick); Kola Nuts (grams).
A.3 WEIGHTS OF SOME ITEMS
Weights of Some Items (item: weight)

Asbestos: 0.01; Gas: 0.36; Sand: 0.01; Adire: 0.31; Gin: 0.19; Sanitary towels: 0.21; Advanced textbooks: 0.35; Ginger: 0.07; Sardine: 0.19; Ankara: 0.50; Government hospitals: 0.17; Sausages: 0.05; Antibiotics: 0.05; Grease: 0.01; Seasoning: 0.29; Aspirin: 0.02; Groundnut: 0.14; Secondary schools: 1.12; Audio cassettes: 0.03; Groundnut oil: 0.95; Shaving powder: 0.03; Audio CDs: 0.09; Guinea corn: 0.11; Shrimps: 0.20; Audio set: 0.84; Half duplex: 0.01; Slippers: 0.25; Bag tea: 0.06; Imported apple: 0.11; Soaps: 1.18; Banana: 0.33; Insect killer/disinfectant: 0.17; Socket: 0.04; Beans: 0.75; Inter-city: 0.01; Soft drinks: 0.44; Bed: 0.90; Intermediate textbooks: 0.11; Spaghetti: 0.18; Beer: 0.94; International flight: 1.49; Spoon and fork: 0.08; Bicycle: 0.08; Internet subscription: 0.01; Stamps: 0.01; Biscuits: 0.16; Intra-city: 0.01; Stapler: 0.01; Block: 0.14; Iodine: 0.02; Stencils: 0.01; Bottled water: 0.39; Kampala: 0.06; Sugar: 0.38; Bowl: 0.01; Kerosene: 0.73; Table: 0.15; Bread: 1.49; Kindergarten: 0.01; Taxi: 1.51; Brake oil: 0.01; Kitchen basin: 0.01; Three-bedroom flat: 0.01; Brocade: 0.62; Knives: 0.02; Tissue paper: 0.22; Browsing fee: 0.04; Kola: 0.11; Tomato: 1.17.

Bucket: 0.21; Labor rate: 0.01; Towel rack: 0.07; Bulb: 0.05; Lace: 0.86; Turkey: 0.14; Bungalow: 0.01; Lamp holder: 0.03; Tuwo: 0.52; Buns: 0.02; Land-line subscription and SIM: 0.17; TV set: 2.80; Bus: 0.11; Laundry iron: 0.26; Two-bedroom flat: 2.61; Cable subscription: 0.88; Local herbs: 0.02; Typing sheets: 0.04; Cake: 0.04; Luxury bus: 0.19; Tire: 0.01; Canoe: 0.01; Maize: 0.52; Tire gauge: 0.01; Car: 3.67; Malaria drugs: 0.22; Tire repair: 0.02; Cassava: 0.06; Male sandals: 0.19; Universities: 0.60; Ceiling slate: 0.09; Male shirt: 0.60; Unorthodox clinics: 0.01; Cement: 0.14; Male shoes: 0.87; Vegetable: 0.97; Chairs: 0.29; Male trousers: 0.41; VHS player: 0.01; Chemists: 0.13; Malt drink: 0.24; Video cassettes: 0.05; Chicken: 1.27; Mass transit bus: 1.70; Video CDs: 0.06; Children shoes: 0.09; Meat: 4.75; Wall hanger: 0.11; Children wears: 0.68; Meat/fish pies: 0.11; Wardrobe: 1.03; Cigarettes: 0.11; Medicated powder: 0.01; Water closet: 0.01; Cocoa drink: 0.47; Methylated spirit: 0.01; Water Corporation charge: 0.03; Cocoyam: 0.13; Microwave: 0.18; Watermelon: 0.07; Coffee: 0.07; Milk: 1.14; Wheat: 0.17; Colleges of Education: 0.74; Mini bus: 1.65; Wine: 0.13; Computer schools: 0.51; Mini van: 0.49; Wood nail: 0.03; Cooker: 0.28; Miscellaneous: 0.40; Yam: 1.35; Cooler: 0.25; Mobile handset: 1.43; Yogurt: 0.09; Corn flakes: 0.12; Mobile subscription and SIM: 0.70; Cosmetics: 0.34; Money order: 0.01; Crab: 0.03; Motor bike: 0.85; Cupboard: 0.14; Motor bike: 1.42.

Custard: 0.07; Multivitamins: 0.03; Desktop computer: 0.62; Newspaper/magazine: 0.34; Detergent: 0.77; Noodles: 0.30; Diesel: 0.16; Notebooks: 0.18; Domestic flight: 0.01; Nursing schools: 0.19; Doughnuts: 0.01; Ogi: 0.12; Duplex: 0.01; One-bedroom flat: 3.29; DVD: 0.01; Onions: 0.51; DVD player: 0.51; Orange: 0.48; Egg: 0.76; Others: 0.01; Electric power: 0.90; Packaged juice: 0.24; Electrical wire: 0.06; Palm oil: 0.72; Elementary textbooks: 0.07; Pawpaw: 0.05; Elubo: 0.33; Pencils: 0.01; Engine oil: 0.25; Pens: 0.02; Envelopes: 0.01; Pepper: 0.85; Extension box: 0.10; Petrol: 3.97; Face-to-face: 0.76; Phone box: 0.01; Fan: 0.95; Pineapple: 0.12; Fax service: 0.01; Pipes: 0.01; Female sandals: 0.13; Plank: 0.06; Female shirt: 0.34; Plates: 0.08; Female shoes: 0.42; Plumbing pipe: 0.05; Female trousers/skirt: 0.40; Polytechnics: 0.74; Ferry: 0.01; Pork: 0.48; Fish: 3.20; Potato: 0.38; Flour: 0.01; Primary schools: 0.94; Fluorescent tube: 0.01; Private hospitals: 0.38; Food flask: 0.04; Recharge cards: 1.22; Freezer: 0.01; Rice: 2.23; Fridge: 0.84; Rod: 0.03; Fufu: 0.10; Roofing sheet: 1.06; Gallon: 0.08; Sachet water: 0.41; Garri: 0.43; Salt: 0.15.
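Weights such as these express the relative importance given to each elementary item when item-level price changes are combined into a group or all-items index. As a rough illustration only, and not an example taken from the questionnaire or the surrounding text, the short Python sketch below shows one common way such weights enter a weighted (Laspeyres-type) aggregate; the four weights are copied from the table above, while the price relatives are hypothetical.

# Minimal sketch (illustrative only): aggregating item price relatives with
# elementary-item weights. Weights are taken from the table above; the price
# relatives (current price as a percentage of base-period price) are invented.
weights = {"Rice": 2.23, "Bread": 1.49, "Yam": 1.35, "Kerosene": 0.73}
relatives = {"Rice": 112.0, "Bread": 105.5, "Yam": 120.3, "Kerosene": 98.7}

total_weight = sum(weights.values())
# Weighted average of the price relatives, using the item weights as shares.
index = sum(weights[item] * relatives[item] for item in weights) / total_weight
print(round(index, 1))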
Author’s Biography
MUSTAPHA AKINKUNMI
Dr. Mustapha Akinkunmi is a Financial Economist and
Technology Strategist. He has over 25 years of experi-
ence in estimation, planning, and forecasting using sta-
tistical and econometric methods, with particular expertise
in risk, expected utility, discounting, binomial-tree valua-
tion methods, financial econometrics models, Monte Carlo
simulations, macroeconomics, and exchange rate modeling.
Dr. Akinkunmi has performed extensive software develop-
ment for quantitative analysis of capital markets, revenue and
payment gateway, predictive analytics, data science, and credit
risk management.
He has a record of success in identifying and implement-
ing change management programs and institutional develop-
ment initiatives in both public and private sector organizations. He has been in high profile po-
sitions as a Consultant, Financial Advisor, Project Manager, and Business Strategist to AT&T,
Salomon Brothers, Goldman Sachs, Phibro Energy, First Boston (Credit Suisse First Boston),
World Bank, and Central Bank of Nigeria. He is an internationally recognized co-author (In-
troduction to Strategic Financial Management, May 2013) and leader in demand analysis, special-
izing in working with very large databases. Furthermore, he has conducted teaching and applied
research in areas that include analyses of expenditure patterns, inflation and exchange rate mod-
eling for Manhattan College, Riverdale, NY, Fordham University, New York, NY, University
of Lagos, Lagos, Nigeria, State University of New York-FIT, New York, NY, Montclair State
University, Montclair, NJ, and American University, Yola, Nigeria.
In 1990, he founded Technology Solutions Incorporated (TSI) in New York, which fo-
cused on data science and software application development for clients including major fi-
nancial services institutions. After ten years of successful operations and rapid growth under
Dr. Akinkunmi’s leadership, TSI was acquired by a publicly traded technology company based
in the U.S. in a value-creating transaction. Dr. Akinkunmi was the former Honorable Commis-
sioner for Finance, Lagos State, Nigeria. He is now an associate professor of finance and chair
of the accounting and finance department at the American University of Nigeria, Yola, Nigeria.
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=6813428.pdf&bkn=6813427&pdfType=book
SYNTHESIS LECTURES ON ENGINEERING
Series ISSN: 1939-5221
SERIES EDITOR: Steven F. Barrett, University of Wyoming
LYING BY APPROXIMATION
The Truth about Finite Element Analysis
Vincent C. Prantil, Milwaukee School of Engineering
Christopher Papadopoulos, University of Puerto Rico, Mayagüez
Paul D. Gessler, Graduate Student, Marquette University
In teaching an introduction to the finite element method at the
undergraduate level, a prudent mix of theory and applications
is often sought. In many cases, analysts use the finite element method to perform parametric
studies on potential designs to size parts, weed out less desirable design scenarios, and predict
system behavior under load. In this book, we discuss common pitfalls encountered by many finite
element analysts, in particular, students encountering the method for the first time. We present a
variety of simple problems in axial, bending, torsion, and shear loading that combine the students'
knowledge of theoretical mechanics, numerical methods, and approximations particular to the
finite element method itself. We also present case studies in which analyses are coupled with
experiments to emphasize validation, illustrate where interpretations of numerical results can
be misleading, and what can be done to allay such tendencies. Challenges in presenting the
necessary mix of theory and applications in a typical undergraduate course are discussed. We also
discuss a list of tips and rules of thumb for applying the method in practice.
ABOUT SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis Digital
Library of Engineering and Computer Science. Synthesis Lectures provide
concise, original presentations of important research and development topics,
published quickly, in digital and print formats. For more information visit
www.morganclaypool.com
ISBN: 978-1-62705-235-1
LYING BY APPROXIMATION
The Truth about Finite Element Analysis
Vincent C. Prantil
Christopher Papadopoulos
Paul D. Gessler
SYNTHESIS LECTURES ON ENGINEERING
Steven F. Barrett, SERIES EDITOR
Lying by Approximation
The Truth about Finite Element Analysis
Synthesis Lectures on
Engineering
Each book in the series is written by a well known expert in the field. Most titles cover subjects
such as professional development, education, and study skills, as well as basic introductory
undergraduate material and other topics appropriate for a broader and less technical audience.
In addition, the series includes several titles written on very specific topics not covered elsewhere
in the Synthesis Digital Library.
Lying by Approximation: The Truth about Finite Element Analysis
Vincent C. Prantil, Christopher Papadopoulos, and Paul D. Gessler
2013
The Engineering Design Challenge: A Creative Process
Charles W. Dolan
2013
The Making of Green Engineers: Sustainable Development and the Hybrid Imagination
Andrew Jamison
2013
Crafting Your Research Future: A Guide to Successful Master’s and Ph.D. Degrees in
Science & Engineering
Charles X. Ling and Qiang Yang
2012
Fundamentals of Engineering Economics and Decision Analysis
David L. Whitman and Ronald E. Terry
2012
A Little Book on Teaching: A Beginner’s Guide for Educators of Engineering and Applied
Science
Steven F. Barrett
2012
Engineering Thermodynamics and 21st Century Energy Problems: A Textbook
Companion for Student Engagement
Donna Riley
2011
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo
2011
Systems Engineering: Building Successful Systems
Howard Eisner
2011
Fin Shape Thermal Optimization Using Bejan's Constructal Theory
Giulio Lorenzini, Simone Moretti, and Alessandra Conti
2011
Geometric Programming for Design and Cost Optimization (with illustrative case study
problems and solutions), Second Edition
Robert C. Creese
2010
Survive and Thrive: A Guide for Untenured Faculty
Wendy C. Crone
2010
Geometric Programming for Design and Cost Optimization (with Illustrative Case Study
Problems and Solutions)
Robert C. Creese
2009
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey and Jeffrey W. Holmes
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Analog Multimedia
Explorations
Lina J. Karam and Naji Mounsef
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Digital Multimedia and
Robotics Explorations
Lina J. Karam and Naji Mounsef
2008
CAD/CAM of Sculptured Surfaces on Multi-Axis NC Machine: The DG/K-Based
Approach
Stephen P. Radzevich
2008
Tensor Properties of Solids, Part Two: Transport Properties of Solids
Richard F. Tinder
2007
Tensor Properties of Solids, Part One: Equilibrium Tensor Properties of Solids
Richard F. Tinder
2007
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
2007
Project Management for Engineering Design
Charles Lessard and Joseph Lessard
2007
Relativistic Flight Mechanics and Space Travel
Richard F. Tinder
2006
Copyright © 2013 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations
in printed reviews, without the prior permission of the publisher.
Lying by Approximation: The Truth about Finite Element Analysis
Vincent C. Prantil, Christopher Papadopoulos, and Paul D. Gessler
www.morganclaypool.com
ISBN: 9781627052351 paperback
ISBN: 9781627052368 ebook
DOI 10.2200/S00503ED1V01Y201305ENG023
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING
Lecture #23
Series ISSN
Synthesis Lectures on Engineering
Print 1939-5221 Electronic 1939-523X
ANSYS, Inc. has granted permission for use of the screenshots of ANSYS software results used in this book.
ANSYS, ANSYS Mechanical, ANSYS Multiphysics, Workbench, and any and all ANSYS, Inc. product and
service names are registered trademarks or trademarks of ANSYS, Inc. or its subsidiaries located in the United
States or other countries. All other trademarks or registered trademarks are the property of their respective owners.
Lying by Approximation
The Truth about Finite Element Analysis
Vincent C. Prantil
Milwaukee School of Engineering
Christopher Papadopoulos
University of Puerto Rico Mayagüez
Paul D. Gessler
Graduate Student, Marquette University
SYNTHESIS LECTURES ON ENGINEERING #23
ABSTRACT
In teaching an introduction to the finite element method at the undergraduate level, a prudent
mix of theory and applications is often sought. In many cases, analysts use the finite element
method to perform parametric studies on potential designs to size parts, weed out less desirable
design scenarios, and predict system behavior under load. In this book, we discuss common pit-
falls encountered by many finite element analysts, in particular, students encountering the method
for the first time. We present a variety of simple problems in axial, bending, torsion, and shear
loading that combine the students’ knowledge of theoretical mechanics, numerical methods, and
approximations particular to the finite element method itself. We also present case studies in
which analyses are coupled with experiments to emphasize validation, illustrate where interpre-
tations of numerical results can be misleading, and what can be done to allay such tendencies.
Challenges in presenting the necessary mix of theory and applications in a typical undergraduate
course are discussed. We also discuss a list of tips and rules of thumb for applying the method in
practice.
KEYWORDS
finite element method, finite element analysis, numerical methods, computational
analysis, engineering mechanics, mathematical modeling, modeling approximation
Contents

Preface . . . . . . . . . . xi
  What This Book Is Intended to Be . . . . . . . . . . xi
  Pedagogical Approach . . . . . . . . . . xii
  What This Book Is Not Intended to Be . . . . . . . . . . xiv
  Outline of Book . . . . . . . . . . xv
Acknowledgments . . . . . . . . . . xvii
1 Guilty Until Proven Innocent . . . . . . . . . . 1
  1.1 Guilty Until Proven Innocent . . . . . . . . . . 1
  1.2 What a Minimal Requisite Skill Set Looks Like . . . . . . . . . . 2
  1.3 The Ten Most Common Mistakes . . . . . . . . . . 3
  1.4 Man vs. Machine . . . . . . . . . . 5
  1.5 Putting it Together: Toward a New FEA Pedagogy . . . . . . . . . . 7
2 Let's Get Started . . . . . . . . . . 11
  2.1 Qualitative Concepts of Mechanics of Materials . . . . . . . . . . 12
  2.2 The Stress Tensor . . . . . . . . . . 13
  2.3 Idealized Structural Responses . . . . . . . . . . 14
    2.3.1 Axial Response . . . . . . . . . . 15
    2.3.2 Lateral Shear Response . . . . . . . . . . 15
    2.3.3 Bending Response . . . . . . . . . . 16
    2.3.4 Torsional Response . . . . . . . . . . 17
  2.4 What Dimension Are You In? . . . . . . . . . . 22
    2.4.1 The Limit of the Thin (Plane Stress and Pressure Vessels) . . . . . . . . . . 24
    2.4.2 The Limit of the Thick (Plane Strain) . . . . . . . . . . 24
    2.4.3 Analogy of Plane Stress and Plane Strain . . . . . . . . . . 25
    2.4.4 The Limit of the Round (Axisymmetry) . . . . . . . . . . 26
  2.5 St. Venant's Principle . . . . . . . . . . 27
  2.6 Combined Loading . . . . . . . . . . 28
  2.7 A Closing Remark and Look Ahead . . . . . . . . . . 30
3 Where We Begin to Go Wrong . . . . . . . . . . 31
  3.1 Exceptions to the Rule . . . . . . . . . . 32
  3.2 The Lines in the Sand . . . . . . . . . . 33
    3.2.1 A Stepped Axial Rod . . . . . . . . . . 34
    3.2.2 A Short, Stubby Beam . . . . . . . . . . 37
    3.2.3 A Thick-Walled Pressure Vessel . . . . . . . . . . 41
  3.3 Utility of the Finite Element Method . . . . . . . . . . 45
4 It's Only a Model . . . . . . . . . . 47
  4.1 The Expectation Failure . . . . . . . . . . 48
  4.2 Philosophy of Mathematical Modeling . . . . . . . . . . 49
  4.3 The Art of Approximation . . . . . . . . . . 52
  4.4 What Are We Approximating? . . . . . . . . . . 53
  4.5 Lessons Learned . . . . . . . . . . 67
5 Wisdom Is Doing It . . . . . . . . . . 69
  5.1 Preliminary Analysis . . . . . . . . . . 71
  5.2 Pre-processing . . . . . . . . . . 71
    5.2.1 The Cast of Element Characters . . . . . . . . . . 74
    5.2.2 Good and Bad Elements . . . . . . . . . . 76
    5.2.3 Applying Boundary Constraints . . . . . . . . . . 77
    5.2.4 Applying External Loads . . . . . . . . . . 77
  5.3 Post-processing . . . . . . . . . . 78
  5.4 Further Rules to Live By in Practice . . . . . . . . . . 79
  5.5 Solution Validation . . . . . . . . . . 80
  5.6 Verification . . . . . . . . . . 81
Summary . . . . . . . . . . 83
Afterword . . . . . . . . . . 85
Bibliography . . . . . . . . . . 87
Authors' Biographies . . . . . . . . . . 91
Preface
WHAT THIS BOOK IS INTENDED TO BE
In undergraduate engineering curricula, a first course in finite element analysis (FEA) is routinely
required, but is often not taken until after the second year of study. Such a class typically includes:
1. an overview of the procedural aspects of the method;
2. a derivation of the mathematical theory for a variety of relatively simple one- and two-
dimensional element formulations;
3. practicing the finite element procedure by hand on select simple problems; and
4. employing the finite element method using some commercial software package as practiced
by engineers in industry.
Students are increasingly expected to apply this knowledge in other settings, particularly in
the context of their senior capstone design projects. However, students routinely commit a variety
of errors in applying FEA. In particular, they lack the maturity to make appropriate modeling
decisions and interpretations of their results. This, in turn, inhibits them from using FEA to
make sound judgements in their projects. Indeed, the twin abilities to conduct accurate analyses
and to make informed judgements lie at the heart of what it means to be a competent professional
engineer.
Many instructors are aware of this circumstance and recognize the need to coach their
students to perform FEA with greater maturity, but they are often mired in teaching strictly
according to the treatment of standard textbooks which emphasize underlying derivation and
theory. Indeed, there is a need for such deep, rigorous, and detailed study, but not at the expense
of learning mature habits. Many professors therefore develop means to teach around the text by
providing additional explanations, insights, approaches, and probing questions.
Our intent here is to provide just such an alternative resource for professors and instructors
of undergraduates who are looking for a fresh and novel approach to teaching FEA that prioritizes
the development of practical skills and good habits. Using material compiled from existing course
notes and exercises already in use by the authors and their colleagues, we lay a path through the
forest of details that an undergraduate or other novice can follow to discover the habits and secrets
of a seasoned user. We surmise that in this book already lie many ideas that match what many
instructors already intuitively understand and convey as they, on their own, teach around the text.
In laying this path, we deliberately employ an approach to emphasize and exploit the natural
ties between classical Mechanics of Materials (MoM) and FEA, and which is motivated, in part,
by the philosophy articulated in Papadopoulos et al. [2011]. Of course, equally deep ties exist
between elasticity theory and FEA, but as our focus is developing expertise of undergraduates,
we appeal primarily to the ties between FEA and MoM.
In this approach, we provide examples in which FEA can be used to confirm results of
hand calculation, closed form solutions, or standard tables—and vice versa—helping students to
build confidence in all. The book then explores more advanced user habits such as formulating
expectations, making estimates, and performing benchmark calculations. Broadly speaking, this
book responds to the growing call to include simulation as a basic engineering competency, and
will help to promote the development of a culture of using simulation in the undergraduate en-
gineering curriculum.
As such, we envision this book being used as a companion to a traditional textbook in an
upper-level undergraduate FEA course and also as an instructional guide for practice in other
courses in which FEA is applied, including courses as early as freshman design and introductory
mechanics. Even at these early stages, instructors can judiciously draw from the book to plant
the seeds of good habits in their students. This book is written in language that is immediately
transparent to instructors and accessible to students who have completed a basic course in MoM.
Terminologies that might be advanced to the novice user are italicized and explained in the context
of their use.
PEDAGOGICAL APPROACH
The pedagogical strategy of this book is based in the educational theory of constructivism and
related research in misconceptions. The essence of constructivist philosophy to which we appeal
here is rooted in the work of cognitive psychologist Jerome Bruner, and is succinctly described
by Montfort et al. [2009]: “learning [is] a complex process in which learners are constantly read-
justing their existing knowledge and, more importantly, the relationships between the things that
they know.” Further, this readjustment process requires that the learner not just passively receive
information, but actively enter into the “discovery of regularities of previously unrecognized re-
lations and similarities between ideas, with a resulting sense of self-confidence in one’s abilities”
[Bruner, 1960].
One way to involve students in the processes of readjusting and discovering knowledge
is by anticipating their misconceptions and providing exercises and activities that force them to
reevaluate their original assumptions and conceptions. For at least three decades, science and
engineering educators have realized the importance of identifying and addressing misconceptions,
suggesting that educators should directly address misconceptions by some combination of early
intervention and an infusion of activities that force students to face the misconceptions head-
on [Hake, 1998, McDermott, 1984, Montfort et al., 2009, Papadopoulos, 2008, Streveler et al.,
2008]. Broadly speaking, “active learning,” “problem based learning,” “inquiry based learning,”
and “student centered learning” approaches aim to accomplish this. Ken Bain, in his book, What
the Best College Teachers Do, champions this view:
Some of the best teachers want to create an expectation failure, a situation in which
existing mental models lead to faulty expectations. They attempt to place students in
situations where their mental models will not work. They listen to student concep-
tions before challenging them. They introduced problems, often case studies of what
could go wrong, and engaged the students in grappling with the issues those examples
raised [Bain, 2004].
Physics educator Lillian McDermott further adds that “students need to participate in the process
of constructing qualitative models and applying these models to predict and explain real-world
phenomena” [McDermott, 2001].
It is important to observe that this type of instruction requires a high degree of interaction
and feedback on the part of the teacher and a correspondingly high degree of self-inquiry on the
part of the learner. In this environment, teachers need to allow students to test ideas, and lend
support in tweaking those ideas into a more correct model of how things happen, and students
must eagerly participate in this process of discovery.
In the spirit of those instructors who have successfully accomplished this, we seek to provide
students with the support they need to cognitively rewire. Indeed, many of the examples and
exercises are deliberately designed to confront readers with expectation failures and to provide
them ample opportunity to develop models that appropriately match reality, but which also require
instructors to intervene as supportive mentors. With this approach, novices and students will
develop the good habits required of experienced users.
In the particular case of FEA, many of the common pitfalls repeatedly encountered by an-
alysts are rooted in a mixture of inadequacies in their understanding of MoM theory, modeling,
and the useful approximations particular to FEA, as well as their inability to integrate these areas
of knowledge. To address these matters, we aim to strike a prudent balance between theory and
practical application. We suggest that this is best accomplished by prescribing a minimal requisite
skill set, rooted in mastery of MoM, upon which the modeling decisions required in the finite el-
ement method are based. This mastery of the most rudimentary underlying theory helps students
make fewer of the errors in judgement when validating their numerical simulations.
Ultimately, our emphasis is to provide an instructional approach that is amenable to a prac-
ticing engineer rather than a mathematician. We attempt to cultivate the habit of care that is
necessary to perform good quality engineering analysis. When answering the question “What is
a university for?,” New York Times columnist David Brooks wrote:
[to obtain] technical knowledge and practical knowledge. Technical knowledge is for-
mulas…that can be captured in lectures. Practical knowledge is not about what you
do, but how you do it. It can not be taught or memorized, only imparted and absorbed.
It is not reducible to rules; it only exists in practice [Brooks, 2013].
In view of this attitude toward practice, we provide guidance for using pre-programmed
software. Guidance is offered for both commercial software and academically developed finite el-
ement codes via the online video tutorials found at the wiki site SimCafe (https://confluence.
cornell.edu/display/SIMULATION/Home) [Bhaskaran, 2012]. The NSF-sponsored project
team at Cornell University [Bhaskaran and Dimiduk, 2010] has graciously supplied ANSYS tu-
torials for the collection of illustrative case studies presented here. All tutorials for this book can
be found at https://confluence.cornell.edu/display/SIMULATION/Prantil+et+al.
In summary, we write this book for student and faculty colleagues who are willing to un-
dertake
1. iterative learning in a supportive environment in which students are unafraid to make er-
rors, confront misconceptions, and revisit problems, and in which instructors are present to
provide support “when things go wrong;”
2. a strong navigational approach that is orderly and progressive, but not necessarily “top
down;” and
3. an approach in which MoM theory and FEA are intimately entwined.
WHAT THIS BOOK IS NOT INTENDED TO BE
Most current textbook treatments of the mathematical theory of finite elements draw on vari-
ational calculus and linear algebra. As suggested previously, we intend this book to serve as a
supplement for more advanced undergraduates and as a resource to inform teaching of earlier
stage students. Our focus is not on treatment of the mathematical rigor and underpinnings of the
finite element method, but rather a guide to good practice. Therefore, this book is not intended to
be a reference or text on the formulation, theory, or mathematical underpinnings of the finite ele-
ment method. There are many excellent treatments outlining the method [Cook et al., 2002, Kim
and Sankar, 2009, Logan, 2001, Thompson, 2004, Zienkiewicz and Taylor, 2005, Zienkiewicz
et al., 2005]. Any one of these would be sufficient for an introductory course in an undergraduate
mechanical engineering curriculum.
This book is also not intended to be a tutorial guide for applying the method or a step-by-
step user’s guide to a particular commercial software package, e. g., Kurowski [2013], Lawrence
[2012], Lee [2012]. We assume that the instructor using this book is already providing such
tutorial instruction or that the reader already has a working knowledge of such. We emphasize,
however, that we do provide online video tutorials at the SimCafe wiki site, which include further
user guidance and suggested follow-up exercises. We encourage the student or novice reader to open
a tutorial or start an FEA session from scratch and directly attempt the exercises and examples that are
provided in both the tutorials and the book.
OUTLINE OF BOOK
Chapter 1 addresses why humans tend to have an optimism bias in which they think they are
correct in more situations than they really are [Conly, 2013]. Digital technology has most likely
added to this bias. We review a published list of ten common mistakes made in FEA practice,
and we argue that avoidance of these errors begins with the user adopting an attitude of skepti-
cism of numerical results until they have been validated. Most analysts agree this is best done by
comparison with relevant theory and experimental data. To apply theory, one must be fluent in
the very basic mechanics relationships.
In Chapter 2 we summarize essential topics from Mechanics of Materials and provide
corresponding examples that can be solved using simple, well-known relationships based on one-
dimensional modeling assumptions. While these problems do not require use of FEA, they are
excellent for offering a first exposure to FEA in which the user can quickly build confidence in
the method. Moreover, the theory underlying these examples forms the basis for the “minimal
requisite skill set” mentioned previously. With this in hand, the user can begin the crucial task of
understanding how to interpret FEA results by comparison with a trusted theory. This small set
of topics is remarkably useful due to the great number of situations in which they serve as good
models for practical situations.
However, as problems become more detailed and complex, the applicability of these ele-
mentary relations diminishes. Here, a more complex multi-dimensional theory of elasticity may
be required but FEA can still be used to obtain reasonable approximate solutions, and basic prin-
ciples from Mechanics of Materials can still be applied to interpret results, albeit with caution.
Therefore, in Chapter 3, we illustrate several examples of problems whose analytical solutions
(where tractable) are more involved, and where FEA is eminently useful, although still relatively
straightforward.
Chapter 4 gets at the core of the list of the common mistakes made when pre-processing
the finite element model. Mistakes that plague many finite element analysts involve relatively
simple errors in input that seem intuitively correct, but which have strong adverse consequences
for numerical predictions of displacements and stresses.
Finally, in Chapter 5, we present a list of prudent practices as well as pitfalls to avoid in
order to achieve meaningful results and to make validation of one’s results a less onerous task.
This chapter can serve as an excellent reference as the reader begins to venture into his or her own
practice.
Vincent C. Prantil, Christopher Papadopoulos, and Paul D. Gessler
August 2013
Acknowledgments
We gratefully acknowledge Rajesh Bhaskaran, Director of the Swanson Engineering Simu-
lation Program at Cornell University, and Robert McBride for providing the ANSYS finite
element tutorials at the wiki site SimCafe (https://confluence.cornell.edu/display/
SIMULATION/Home). We also gratefully acknowledge funding provided to Cornell University un-
der Award 0942706 from the National Science Foundation (NSF) for implementation of the
SimCafe wiki interface.
We greatly appreciate the contributions of Jim Papadopoulos of Northeastern University,
who has long been a passionate advocate for introducing FEA practice throughout the under-
graduate engineering curriculum. His vision, keen insights, and collaboration on a previous ar-
ticle inspired ideas used in this book. We are grateful to him and also to Habib Tabatabai from
the University of Wisconsin–Milwaukee for providing a review of an earlier draft of this work
which helped us to polish and refine many details. We also acknowledge William E. Howard of
East Carolina University. His collaboration on several previous articles linking simulations and
experiments formed the basis of several examples in this book.
We also express our sincere gratitude to our colleagues Genock Portela Gauthier and Aidsa
Santiago Roman of the University of Puerto Rico, Mayagüez. They are collaborators on a related
project sponsored by the National Science Foundation under Award 1044886 that is developing
new simulation tools for mechanics courses. Some of the modules developed in this NSF project
and related understandings of engineering pedagogy appear in this book.
We further acknowledge personal communications and discussion that took place at the
9th U.S. National Congress on Computational Mechanics held in San Francisco in July 2007.
Professors Jat du Toit, Mike Gosz, and Göran Sandberg organized the first mini-symposium on
the teaching of finite element methods to undergraduates at which some of the first ideas for
this supplementary text were discussed and took preliminary form. We are grateful to the mini-
symposium for both fostering an international debate and proffering fruitful discussions leading
to this work.
Finally, we acknowledge chapter heading character designs illustrated by Tim Decker, Se-
nior Lecturer at the Peck School of the Arts at the University of Wisconsin–Milwaukee and
Milwaukee Area Technical College. Prior to teaching animation in Milwaukee, Tim was the lay-
out artist and animator for the award-winning television series "The Simpsons," and animation
supervisor for Disney Interactive. He has also appeared as a guest artist in animation and car-
tooning for PBS. We are grateful for Tim’s imaginative characterization of numerical analysts
practicing the fine art of approximation.
From Vincent C. Prantil: I wish to dedicate this book to my wife, Laurna, and my children,
Carmen and Lorin. Their patience, support, laughter, and love carry me through my journey. They
have also unselfishly encouraged and supported the many adventures in my calling as a teacher.
I would also like to dedicate this book to my parents, Dolores and Joseph Prantil. They let me
find my own way and gave me the wings to follow my dreams. I wish to thank my mentors,
Paul Dawson and Anthony Ingraffea at Cornell University. Their expertise and dedication to
teaching computational methods led me to pursue its pedagogy with enthusiasm. I also thank
James T. Jenkins of Cornell University for his unyielding pursuit of excellent theoretical modeling
and its use in validating all things numerical. I thank them for, in their own collective words,
reminding me to “have fun, keep learning, and to never forget how I thought, how I learned, and
how I felt …when I was a student.” I dedicate this book to my students who doubt, prod, question,
and keep me young. We travel through the forest together. Finally, I am forever grateful to my
Creator who blesses me every day with a mysterious mix of skepticism, faith, failure, humility,
humor, energy, and imagination. Ego adhuc cognita.
From Christopher Papadopoulos: I dedicate this book to my family, particularly my parents
Kimon and Mary Lou. They provided me with every opportunity to become educated, and all that
they have done for me has been motivated by love. I dedicate this book to my sister, Emily, who
has inspired me to high academic achievement through her own success, and to Clare, who has
been a vital part of my life journey for which I am greatly blessed. I also dedicate this book to
my cousin Jim Papadopoulos, who is the lead author of a reference that is frequently cited in
this book. I have always admired Jim’s keen mind for mechanics, his dedication to teaching, and
his persistence on convincing me of his point of view regarding the need to incorporate FEA
practice in my teaching. I thank all of the people who have mentored me in various capacities,
particularly my thesis advisor Timothy Healey, undergraduate mentors Sunil Saigal and Omar
Ghattas, teachers Dolores Stieper, Susan Spaker, and Richard Piccirilli, and collegial mentors
Habib Tabatabai, Yiorgos Papaioannou, and Indira Nair. In their own way, each of them has
challenged me, has argued with me, has had patience, and ultimately has supported me in ways
that have led to my intellectual and professional growth. To my first friends in Puerto Rico,
Marcelo Suárez, Jaquelina Alvarez, Basir Shafiq, Walter Silva, Ramón Vásquez, and Robert Acar,
and many other colleagues, including Marcel Castro, Bill Frey, Héctor Huyke, Sara Gavrell, Luis
Jiménez, Aidsa Santiago, and Genock Portela, gracias por darme la bienvenida y que continuemos
a trabajar juntos. Finally, I dedicate this book to my many students, from Cornell, Milwaukee,
and Mayagüez, who bring me great joy and pride. You are the ones for whom I ultimately write
this book.
From Paul D. Gessler: I dedicate this book to my grandfather, Donald A. Gessler (1932–
2013). He not only taught me how stuff works, which ignited my interest in engineering, but also
never stopped teaching me about how to live life and help others in need, in other words, how the
really important stuff works. I would like to thank my parents, Timothy and Shelley, my fiancée
Elise, the rest of my family, especially brothers Phillip, Peter, and John, and friends and colleagues,
especially Marshall Schaeffer and Alex Zelhofer. None of my work would be as it is without
their influence, support, and distractions (no matter how unwelcome these distractions sometimes
seemed at the time). I would also like to thank my advisor Professor Margaret M. Mathison and
the rest of the Marquette University faculty for allowing me to take on this project in addition to
my research and graduate coursework. Finally, as always, soli Deo gloria.
Vincent C. Prantil, Christopher Papadopoulos, and Paul D. Gessler
August 2013
CHAPTER 1
Guilty Until Proven Innocent
I repeatedly tell students that it is risky to accept
computer calculations without having done some
parallel closed-form modeling to benchmark the
computer results. Without such benchmarking and
validation, how do we know that the computer isn’t
talking nonsense?
Clive Dym
Principles of Mathematical Modeling
If you only make one predictive simulation, it will likely
be wrong.
Loren Lorig
CEO, Itasca International
1.1 GUILTY UNTIL PROVEN INNOCENT
One of the many advantages of the finite element method (FEM) is that it is relatively easy to
create a model and use the method to run an analysis. Often, for better or worse, the method has
become commonplace enough to be seen as a sophisticated calculator. In addition to enhanced
computational speed, this is due to the development and preponderance of graphical user inter-
faces (GUI) used as pre- and post-processors to nearly all commercial finite element software.
Yet a great hazard of FEM is also that, with the aid of commercial software, it can be
too easy to create a model and run an analysis. The ease of operation can foster "computational
complacency” [Paulino, 2000] in validating numerical results. It often appears that the myth that
“the computer must be right” is alive and well. While, indeed, algorithms in commercial codes are
well debugged and are unlikely to contain programming errors, the user is ultimately responsible
for making appropriate modeling assumptions and interpretations of the output.
Hand in hand with complacency is the “optimism bias,” in which people tend to believe
that they are correct in more situations than they really are [Conly, 2013]. In the context of
FEA, even honest users who intend to validate their work might mislead themselves, thinking
that results are correct because they appear to correspond to a simple theory that they might be
applying inappropriately (for example, out of its bounds of accuracy), or they might be missing
a key theoretical idea altogether. Like a cancer, computational complacency and the optimism
bias can spread. They can develop into bad habits that thwart the user's comprehension of some
minimal requisite skill set on which use of the numerical method depends.
However, before exploring this minimum requisite skill set in detail, the user must first
realize that he or she should be skeptical toward all results of a numerical simulation until demon-
strating a sound reason to accept them. In short, we often tell our students—beginning with the
first lesson—that, like it or not, algorithmic simulation results are guilty until proven innocent.
1.2 WHAT A MINIMAL REQUISITE SKILL SET LOOKS LIKE
Once the analyst understands the need for providing proper input and validating the interpre-
tation of output, he or she is ready to learn the fundamental skills that will enable him or her
to perform responsible numerical simulations. To motivate this, we first provide an analogy with
driving an automobile.
We can all agree that while a driver need not understand scientifically the vehicle dynamics
or the thermodynamics of the combustion engine, any driver must have a basic sense of how
the vehicle and engine operate. For example, braking on ice is less effective than braking on
pavement; or maple syrup should not be placed in the fuel tank. Of course it cannot hurt to have
some theoretical knowledge, such as to understand that braking distance increases roughly as the
square of velocity, or in qualitative terms, "disproportionately." That is why driving instructors
concentrate on teaching elements of automobile acceleration, cornering, smooth braking, and
field of vision rather than the theory of internal combustion engines. Moreover, the instructor
should be seasoned to anticipate and correct errors made by the learner. In the end, the student
develops some innate feel for what constitutes “good driving,” and learns to distinguish between
“good” and “bad” maneuvers based on experience.
Likewise, in the realm of FEA practice, we believe what is called for is the development of a
“gut feel” for what constitutes correct behavior and good modeling practice. We contend that the
minimal requisite skill set on which good FEA practice is based has two principal components:
1. the ability to apply basic theory of Mechanics of Materials; and
2. the ability to make good modeling decisions, including choice of dimension, element type,
mesh discretization, and boundary conditions, based on one’s knowledge of MoM and pre-
vious experience.
These requirements are based on the intimate relationship between FEA and the theory of elas-
ticity, of which a minimal understanding is constituted by classical Mechanics of Materials. They
also appeal to pedagogical theory that states that confronting misconceptions—particularly when
they are deeply held—is an effective means to eventually enable the learner to overcome them and
replace them with appropriate conceptions. This anticipates our further remarks in the next sec-
tion regarding how to help students confront their misconceptions directly.
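To make the first of the two components concrete, a Mechanics-of-Materials hand check usually amounts to a few lines of arithmetic. The sketch below is our own minimal illustration, not an example taken from this book: it uses elementary beam theory for an end-loaded cantilever, with tip deflection PL^3/(3EI) and maximum bending stress Mc/I, to benchmark a finite element prediction. The dimensions, the load, and the quoted FEA deflection are all hypothetical values.

# Minimal sketch (illustrative only): Mechanics-of-Materials hand check used
# to benchmark an FEA result for an end-loaded cantilever beam.
P = 500.0          # end load, N (hypothetical)
L = 0.5            # beam length, m
E = 200e9          # Young's modulus, Pa (typical steel)
b, h = 0.02, 0.04  # rectangular cross-section width and depth, m

I = b * h**3 / 12.0                    # second moment of area, m^4
delta = P * L**3 / (3.0 * E * I)       # tip deflection from beam theory, m
sigma_max = (P * L) * (h / 2.0) / I    # max bending stress at the support, Pa

fea_tip_deflection = 1.01e-3           # hypothetical value read from an FEA run, m
rel_diff = abs(fea_tip_deflection - delta) / delta
print(f"theory: {delta:.3e} m, FEA: {fea_tip_deflection:.3e} m, diff: {rel_diff:.1%}")
print(f"max bending stress (theory): {sigma_max/1e6:.1f} MPa")

A discrepancy of a few percent would ordinarily build confidence in the model; a large one should prompt the user to revisit the mesh, the boundary conditions, or the applicability of the simple theory itself.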
We note that the minimal requisite skill set does not contain an in-depth, rigorous, math-
ematical treatment of the theory underlying FEM. Such rigor, while necessary to program al-
gorithms or as a prerequisite for graduate studies, is not essential to operate and perform finite
element simulations and correctly interpret their results. For practical applications of FEA, what
is imperative is the ability to distinguish between good and bad methods for interfacing with the
tool.
Note To The Instructor
A treatment of the background necessary to use the finite element method effectively is given by
Papadopoulos et al. [2011]. Here we argue that a top-down, theory-first emphasis employed in many cur-
ricula may not be as necessary as has been thought. We believe that teaching the underlying mechanics can
be enhanced by introducing the finite element method as early as an Introduction to Engineering course in
the freshman year. We also feel that hand calculations in Statics and Mechanics of Materials can be reinter-
preted and made more appealing by emphasizing them as steps used to validate and benchmark numerical
simulations. Finally, in an upper division course in finite element theory, one may undertake a deeper learn-
ing of how to perform an informed computational analysis under the tutelage, guidance, and support of a
seasoned, experienced practitioner.
1.3 THE TEN MOST COMMON MISTAKES
Computational models are easily
misused…unintentionally or intentionally.
Boris Jeremić
University of California Davis
In accordance with our proposed minimal requisite skill set, we now present a useful list of com-
monly committed errors in FEA practice. While the advanced user will likely recognize many of
these errors (hopefully through direct experience!), the novice who has little or no FEA experi-
ence might not fully appreciate their meaning at this point. Nevertheless, they serve as a good
preview of issues that will arise, and as a reference to which the novice may return as he or she
gains more experience.
Recently, Chalice Engineering, LLC [2009] compiled an assessment of mistakes most
commonly made in performing finite element analysis in industrial practice. After 10 years of col-
lecting anecdotal evidence in both teaching undergraduates and advising capstone design projects,
we found this list to be nearly inclusive of the most common errors encountered by undergrad-
uate students in their introductory finite element method course. The list published by Chalice
Engineering is reproduced here verbatim.
1. Doing analysis for the sake of it: Not being aware of the end requirements of a finite ele-
ment analysis—not all benefits of analysis are quantifiable but an analysis specification
is important and all practitioners should be aware of it.
2. Lack of verification: Not having adequate verification information to bridge the gap be-
tween benchmarking and one’s own finite element analysis strategy. Test data some-
times exists but has been forgotten. Consider the cost of tests to verify what the analysis
team produces, compared with the potential cost of believing the results when they are
wrong.
3. Wrong elements: Using an inefficient finite element type or model, e. g., a 3D model
when a 2D model would do, or unreliable linear triangular or tetrahedra elements.
4. Bad post-processing: Not post-processing results correctly (especially stress) or consis-
tently. Not checking unaveraged stresses.
5. Assuming conservatism: Because one particular finite element analysis is known to be
conservative, a different analysis of a similar structure under different conditions may
not be so.
6. Attempting to predict contact stresses without modeling contact: This might give
sensible-looking results, but is seldom meaningful.
7. Not standardising finite element analysis procedures: This has been a frequent cause of
repeated or lost work. Any finite element analysis team should have a documented stan-
dard modeling procedure for typical analyses encountered within the organisation, and
analysts should follow it wherever possible. Non-standard analyses should be derived
from the standard procedures where possible.
8. Inadequate archiving: Another frequent cause of lost work. Teams should have a master
model store and documented instructions about what and how to archive. Again, this
is a quality related issue. For any kind of analysis data, normal backup procedures are
not sufficient—attention needs to be paid to what information and file types are to be
archived in order to allow projects to be retraced, but without using excessive disk space.
9. Ignoring geometry or boundary condition approximations: Try to understand how in-
appropriate restraint conditions in static or dynamic analyses can affect results.
10. Ignoring errors associated with the mesh: Sometimes these can cancel out errors asso-
ciated with mistake 9, which can confuse the user into thinking that the model is more
accurate than it is. A convergence test will help.
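As a small illustration of the convergence test mentioned in mistake 10, the Python sketch below (ours, not Chalice Engineering's) tabulates how a quantity of interest changes as the mesh is refined. The element sizes and peak stresses are hypothetical values one might record from successive runs of the same model; only the bookkeeping is shown.

# Minimal sketch (illustrative only): checking mesh convergence of a result.
# Each entry is (characteristic element size in mm, peak von Mises stress in MPa)
# recorded from successive FEA runs of the same model; the numbers are invented.
runs = [
    (10.0, 151.2),
    (5.0, 163.8),
    (2.5, 168.9),
    (1.25, 170.1),
]
for (h_coarse, s_coarse), (h_fine, s_fine) in zip(runs, runs[1:]):
    change = abs(s_fine - s_coarse) / abs(s_fine)
    print(f"h: {h_coarse:>5.2f} -> {h_fine:>5.2f} mm, change in peak stress: {change:.1%}")
# A result is usually treated as mesh-converged only once further refinement
# changes the quantity of interest by no more than a few percent.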
While it may come as no surprise, novice users commit many, if not all, of these errors.
But these errors continue to be committed routinely even by advanced users and engineers in
industrial practice. As suggested earlier, we attribute this to a lack of a minimal requisite skill
set (or an inability to apply it fluently). This lack of understanding is due, at least in part, to
computational complacency [Paulino, 2000] and the optimism bias [Conly, 2013] cited earlier.
Avoiding such errors is not simply a matter of telling and re-telling the student “how to do
it.” Most students learn by repeated attempts in the face of incorrect reasoning and results. It is
through repeated corrections in the face of practice that we learn, not simply by being presented
with how things ought to work. Therefore, before a sense of good modeling practice can truly be
learned and internalized, the student must come to appreciate the value of being skeptical about
initial numerical simulations, i. e., that they are guilty until proven innocent. Students must realize
and care that their intuition might be incorrect. Then they must actively work to deconstruct their
previously incorrect model and replace it with one built on deeper understanding. Likewise,
the good instructor must provide a supportive environment in which students are encouraged to
explore problems in which they are likely to make errors, and then coach them to be self-critical,
to realize and understand the errors that they have made.
Indeed, as suggested by the attention on student misconceptions in the literature on peda-
gogy [Hake, 1998, McDermott, 1984, Montfort et al., 2009, Papadopoulos, 2008, Streveler et al.,
2008], when students are forced to work out a problem with judicious questioning and investiga-
tion where their initial reasoning was incorrect—again, in Ken Bain’s words, an expectation failure
[Bain, 2004]—their learning retention is greater, and their recall and critical thinking skills are
enhanced. We take up this point further in the last section of this chapter when we recommend
our pedagogical strategy for FEA.
1.4 MAN VS. MACHINE
It’s foolish to swap the amazing machine in your skull
for the crude machine on your desk. Sometimes, man
beats the machine.
David Brooks
The New York Times
It is noteworthy that many introductory texts for the study of finite element analysis make use of
some form or other of the necessary procedural steps in applying the method in practice. Then
students are provided exercises in applying these procedural steps by means of hand calculations.
The procedural steps that a typical finite element analysis should include are as follows:
Ask what the solution should look like: An analyst must have some idea of what to expect in
the solution, e. g., a stress concentration, and other characteristics of the solution, such as
symmetry.
Choose an appropriate element formulation: One needs to understand, from knowledge of the
expected solution, what elemental degrees of freedom and polynomial order of approxima-
tion are necessary to accurately model the problem.
Mesh the global domain: With knowledge of the expected solution and the chosen order of in-
terpolation—the estimation of the solution at a general location based on the computed
solution at the grid points of the mesh (the order of which could be linear, quadratic, etc.),
one can wisely select a number and arrangement of elements necessary to adequately capture
the response.
Define the strain-displacement and stress-strain relations: It is important to know what for-
mulation your commercial software code has programmed into the analysis module. Clas-
sical small strain relations are appropriate for linear, static stress analysis. The user must
provide a constitutive law relating stress and strain.
Compile the load-displacement relations: The element matrix equations are either derived in
closed form a priori or computed via numerical integration within the analysis code.
Assemble the element equations into a global matrix equation: This step is performed algo-
rithmically with knowledge of the element degrees of freedom and nodal connectivity. This
global equation relates externally applied conjugate forces and associated nodal point de-
grees of freedom. It represents a generalized form of nodal point equilibrium.
Apply loads and boundary conditions: Because there are multiple prescriptions of statically
equivalent loads and boundary constraints, their precise prescription must be justified based
on problem symmetry and proximity to internal regions where accurate stress results are
most desired.
Solve for the primary nodal degrees of freedom: Solve the appropriately reduced global matrix
equation.
Solve for the derivatives of primary degrees of freedom: This involves calculating generalized
reaction forces at nodes and strains and stresses within elements.
Interpret, verify, and validate the results: Based on comparisons with initial expectations, ex-
perimental data, analytical benchmark results, or other reputable numerical solutions, have
the calculated results converged and are they reasonable?
Again, the novice might not completely understand or appreciate the meaning of each
step at this time. However, he or she can still gain some sense and insight into the procedure.
In particular, it is very telling that the steps break down succinctly into those performed by the
analyst and those performed by the computer. Even the novice will appreciate the complementary
roles of the human and the machine from the very outset.
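To make this division of labor concrete, here is a minimal sketch (in Python with NumPy) of the machine-side steps for a chain of three axial bar elements: element stiffness matrices are assembled into a global matrix equation by nodal connectivity, and the reduced system is solved for the free degrees of freedom. The member sizes, material values, and load are illustrative assumptions, not data from any example in this book.

```python
import numpy as np

# Minimal sketch of the machine-side steps: assemble 1D axial bar elements into
# a global stiffness matrix and solve for the free nodal displacements.
# All numerical values here are illustrative assumptions.
E, A = 29.0e6, 1.0                                # psi, in^2
nodes = np.array([0.0, 10.0, 20.0, 30.0])         # nodal x-coordinates, in
elements = [(0, 1), (1, 2), (2, 3)]               # element connectivity
ndof = len(nodes)

K = np.zeros((ndof, ndof))
for i, j in elements:
    L = nodes[j] - nodes[i]
    ke = (E * A / L) * np.array([[1.0, -1.0],
                                 [-1.0, 1.0]])    # closed-form element stiffness
    for a, p in enumerate((i, j)):                # assembly via nodal connectivity
        for b, q in enumerate((i, j)):
            K[p, q] += ke[a, b]

F = np.zeros(ndof)
F[-1] = 5000.0                                    # lbf axial load at the free end
free = [1, 2, 3]                                  # node 0 fixed: boundary condition

u = np.zeros(ndof)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])   # reduced global system
print(u)    # each 10 in segment stretches by PL/(AE) = 1.724e-3 in
```

Everything else (the choice of element, mesh, loads, restraints, and the interpretation of u) remains with the analyst, which is precisely the point of the sections that follow.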
With the advent of high speed computers, it is clear that the machine wins in the battle
of raw speed and avoidance of computational error. However, while speed and computational
accuracy are necessary, they are not sufficient—and not even most important—for producing good
FEA results. The machine cannot provide the intellect, strategy, and judgement of the human
mind, all of which are crucial to perform good analysis.
The myth that the “computer is always right” comes, in part, from the truth that yes, most
commercial finite element software has been sufficiently debugged, removing most or all internal
programming errors. Studies by Jeremić [2009] show that programming errors in commercial
codes persist only in a very small percentage of cases. In short, the computer, while working fast,
also works nearly flawlessly. It can therefore do the “heavy lifting” required to analyze complex
problems whose solutions involve thousands and even millions of degrees of freedom.
But most errors encountered in finite element analysis are either due to incorrect user input,
i. e., garbage in—garbage out, or due to lack of prudent judgement regarding dimensional approx-
imations, active degrees of freedom, loading strategy, sensitivity to boundary conditions, or the
nature of the correct theoretical solution. That is, they can often be traced to one of two causes:
incorrect understanding of finite element modeling, or poor application of strength of materials,
and often both to varying degrees.
In most cases, therefore, operator error is to blame for all of the top ten mistakes [Chalice
Engineering, LLC, 2009]. To correct these mistakes, the analyst must look for cause and effect.
And as remarked, most often, the code is not the cause, although sometimes the user should inves-
tigate if the model programmed in the algorithm is, in fact, the correct model for the application
at hand.
Thus, when the task at hand can be described in an efficient and robust algorithmic form,
the task should be owned by the machine. In those instances where the task requires judgement
and/or compromise, the mind trumps the processor. And this is where the practice of numerical
analysis most often goes awry. It perhaps comes as no surprise that the ten most commonly made
mistakes are found only in the procedural steps performed by the analyst and none involve the steps
performed algorithmically by the computer. This glaring reality is the driving force behind our novel
approach to learning the finite element method wherein we focus on user behaviors rather than
on derivations of algorithms.
1.5 PUTTING IT TOGETHER: TOWARD A NEW FEA PEDAGOGY
We have reviewed common errors and standard procedure, in which we emphasize the need for
the analyst to be skeptical and to take responsibility for making good judgements. Recalling our
overall pedagogical philosophy based on constructivism and encounter of misconceptions, we now
outline our vision of a new FEA pedagogy that prioritizes user behaviors. We draw from our own
notes and examples to provide a set of exercises and case studies in which students can encounter
common errors and expectation failures in a safe environment, and in which they can iteratively
address and correct their misconceptions. We promote three effective approaches to ferreting out
these misconceptions:
1. utilizing case studies that present commonly encountered expectation failures in students’
understanding of mechanics;
2. identifying specific user input, reasoning, or post-processing decisions that result in the
specific misunderstanding of the problem at hand; and
3. validating results, such as by performing repeated convergence studies to verify numer-
ical simulations, comparison with benchmark solutions, or comparison with experimental
results.
We strongly believe that for the novice user, it is prudent to focus on the procedural steps
that require interaction, judgement, and interpretation, particularly through repeated experience
confronting errors and making corrections. This is in contrast to traditional approaches in which a
significant amount of classroom time is spent teaching the underlying mathematical formulation
of routines that are ultimately performed without error by the computer, such as rote calculation
of element stiffness matrices, assembly of global stiffness matrices, and solution of the principal
degrees of freedom.
While we think it is important for students to know that such internal computations are
made, deriving such procedures should not be done at the expense of providing repeated expe-
riences in which students encounter and correct the errors and misconceptions that we already
know they will make. Rather, we believe this time would be better spent on discussions of, say,
how stresses vary within and between neighboring elements, and if the modeler’s decision cap-
tured this behavior correctly. As misconceptions are overcome, and good procedural habits and
intuitions are formed, then the student is all the more pre-disposed to learning and appreciating
important aspects of the underlying theory at later stages in their education.
In summary, we boil everything down to four concurrent practices.
1. Introduce students to the finite element method much earlier in their curriculum [Pa-
padopoulos et al., 2011], e. g., in elementary Mechanics of Materials.
2. Focus on applications that illustrate and highlight common pitfalls and ways to circumvent
them, e. g., choosing proper element formulations, correctly prescribing boundary condi-
tions, and validating solution results.
3. Keep mathematical derivations to a minimum and focus these primarily in areas directly
related to mechanics principles, e. g., equilibrium and approximation by interpolation.
4. Highlight a succinct list of commonly accepted good and bad practices in applications of
finite element analysis.
We note in closing that there is a growing body of work on what modelers feel is appropriate
skepticism with which preliminary simulation results should be judged in both academic and
industrial environments. There are a variety of research findings on the teaching of finite element
analysis to undergraduates [du Toit et al., 2007], computational complacency [Paulino, 2000],
and the reliability of simulation results [Hatton, 1999] which the reader may wish to further
explore.
CHAPTER 2
Let's Get Started
Seek the model which is simple, but not too simple.
Albert Einstein
Essentially, all models are wrong, but some models are
useful.
George E.P. Box
Professor Emeritus, University of Wisconsin
Note To The Instructor
Here we detail the kind of knowledge, rooted in Mechanics of Materials, that is important for using FEA
effectively. While some finite element theory is important, it should not be considered to be a barrier to the
early incorporation of FEA in the curriculum; rather, the requisite knowledge is meant to be built throughout
the curriculum as the undergraduate student advances. Mechanics educators and practitioners have absorbed
some concepts so well that it is easy to forget that these concepts are relatively new to students. Many
technical areas must be learned in order to interpret FEA results, catch modeling errors, and guide design.
One essential kind of knowledge comprises concepts, simplifying physical assumptions, and critical
thinking that takes place throughout the undergraduate engineering curriculum. We do not advocate that
students learn less mechanics theory. With the advent of powerful analysis tools, we specifically advocate that
students should learn as much if not more—a holistic approach that promotes a qualitative understanding
of “what affects something else,” an expanded grasp of definitions and core concepts.
In the Preface and Chapter 1, we proposed that the kind of knowledge that is important for using
FEA effectively falls into two categories:
1. the ability to apply basic theory of Mechanics of Materials to formulate initial expectations
of results and related estimates, and to interpret or benchmark results and
2. the ability to make good modeling decisions, including choice of dimension, element type,
mesh, and boundary conditions, based on knowledge of MoM and previous experience.
In this chapter we explore the first of these categories, namely the synergy between Me-
chanics of Materials and Finite Element Analysis. We begin this chapter with a bird’s eye view of
some qualitative aspects of MoM that the reader should begin to appreciate, followed by a review
of what we regard as the minimum essential elements of MoM theory required to undertake
study of FEA. We close with two examples that can be solved by hand calculation as a means to
illustrate the finite element method.
Some colleagues are concerned that use of FEA in early courses might supplant a strong
understanding of Mechanics of Materials principles because the effort normally done by hand
can now be done by “pressing a few buttons.” We insist that this is neither our point of view nor a
circumstance that is likely to occur under a pedagogy that is committed to ensuring that students
form good habits of understanding modeling assumptions and validation procedures. Indeed, we maintain
that use of FEA requires even more theoretical understanding so that it can be applied with skill.
The usual adjuration to “calculate problems first by hand” can then be re-interpreted as “take steps
to validate and benchmark your FEA solution.”
2.1 QUALITATIVE CONCEPTS OF MECHANICS OF
MATERIALS
Here we present a list of qualitative concepts that can be read at once by novices and experts,
motivated by ideas presented in [Papadopoulos et al., 2011]. While the expert will recognize
many of these ideas from experience, the novice can begin to appreciate the qualitative concepts
and ideas that a more seasoned practitioner uses with confidence and fluency. We recommend
that students periodically return to this list after doing some of the example problems so that they
can develop a better feel for how these ideas appear in practice. The presence of this list at the
beginning of the chapter should not be interpreted to mean that the student must master this list
all at once on first reading. Rather, practice itself is what will help the student to internalize these
ideas and develop the fluency of an expert. This list of qualitative concepts is as follows.
• All structures, no matter how strong, are deformable at least to a small degree. This means
that when loads are applied, the material points in the structure move or displace. Many
structural elements can be modeled as simple springs as a means to understand the relation-
ship of force to displacement in the structure.
• Studying the exact geometry of a structure and its actual displacements under loading can
be very complicated with many resulting equations being nonlinear. In many structures of
practical interest, however, the displacements will remain small compared to the overall size
of the structure, and simple small displacement approximations can be made that lead to
simpler, linear relations. Such linearity affords the ability to superpose basic solutions, or
to scale any solution in load or overall size.
• One has the ability to interpret a result in terms of basic ideas or elementary asymptotic
solutions. For example, the bending moment transmitted by a cross section; the force and
moment equilibrium of loads plus reactions; the maximum bending or twisting strain at an
outer fiber; and rigid body degrees of freedom of a body or system.
• Stress is a tensor, a directional specification of tractions across arbitrarily oriented surfaces.
Principal directions exist on surfaces where the shear stress vanishes. There are no tractions
on a free surface, so principal directions are parallel to the surface, and sometimes predictable
from symmetry.
• For isotropic material failure, we can ignore stress orientation and use a scalar invariant as
a failure metric.
• The source of stress concentrations lies in specific geometric features, such as re-entrant
corners or cavities.
• Structures with a single load path are determinate, and the resultants are known from the
load. Structures with multiple load paths are indeterminate, e. g., springs in parallel share
the load. Adding material generally increases the load carried by a support, and perhaps
even its peak stress.
• Indeterminate structures are often called redundant. They obey the laws of static equilib-
rium, but these equations alone are insufficient to determine the force distribution in the
system. Additional equations enforcing compatibility are necessary. These describe how the
displacements of material points in the structure must behave in order for the structure to
remain intact.
• An idealized pinned support neglects modest moments that exist in the actual physical
structure. Similar idealizations hold when modeling other classical localized boundary con-
ditions such as built-in or compliant constraints.
• Analysts must be aware that the world is not rigid, and particularly that prevention of lateral
strain is not always realistic.
• When calculating stress, users should exploit St. Venant’s principle, i. e., it may be possible
to ignore the actual compliance of an end support sufficiently far away from the point of
load application.
2.2 THE STRESS TENSOR
In Mechanics of Materials, one is introduced to the basics of stress and strain and their relation
in Hooke’s Law. Recall that external loads on a structure produce internal forces and moments
that result in internal stresses. The concept of stress describes how reactions of the structure to
external loads are distributed across arbitrarily-oriented planes in the structure. Recall that there
are fundamentally two basic types of stress: (i) normal stress, σ, and (ii) shear stress, τ, as illustrated
in Fig. 2.1.
Although it is common to refer to “bending stress,” “torsional stress,” “bearing stress,” “sin-
gle shear,” “double shear,” “punching shear,” etc., we emphasize that these names do not represent
Figure 2.1: The concept of normal and shear stress and strain components is illustrated on infinitesi-
mal volumes.
other basic kinds of stress; rather, they are names assigned to internal stresses specific to commonly
studied load cases. All types of stress ultimately can be classified as either normal or shear. Normal
stresses result from:
1. axial loads and deformation of prismatic rods or bars,
2. transverse loads, moments, and the associated curvature in prismatic beams, and
3. approximations of bearing stress.
Shear stresses result from:
1. transverse shear forces and the associated lateral deformations in prismatic beams,
2. torsional loading and rotational deformation in prismatic shafts, and
3. approximations in single shear, double shear, and punching shear.
2.3 IDEALIZED STRUCTURAL RESPONSES
Theories are like maps; the test of a map lies not in
arbitrarily checking random points, but in whether
people find it useful to (use it to) get somewhere.
Ronald Giere
Perhaps you have noticed that many of the problems studied in an elementary Mechanics of
Materials course consist of highly regular structural forms: rods or bars with uniform cross section;
circular shafts and pipes; and beams with uniform and prismatic cross sections. Perhaps you never
thought much about just how simple these forms are, but they possess twin properties almost akin
to a lucky accident of nature:
1. they possess simple closed form stress-strain and load-displacement relationships that are
amenable to hand calculations and
2. they are widely useful and applicable in countless examples of engineering design and con-
struction.
Indeed, the determination of internal stresses in these basic elements follows from very simple anal-
ysis that is highly accurate. Whether it is obvious or amazing that these common forms should
succumb to such simple analysis can be debated by the philosophically inclined. Regardless, this
wonderful situation enables engineers to prescribe the use of these objects widely with a high
degree of confidence in understanding their behavior. We now review these basic forms in detail.
2.3.1 AXIAL RESPONSE
A long slender bar, subjected only to axial end forces, and whose weight is neglected, is a ‘two-force
member’ whose internal forces are parallel to the bar itself. Bars are further assumed to undergo
small displacements and exhibit negligible out-of-plane effects, i. e., we assume no change in the
cross-sectional dimensions as the material element deforms under normal stress. The internal
normal stress can be produced by tensile and compressive axial forces, P, that act purely normal
to the cross section as shown in Fig. 2.2. The value of this stress, denoted by σ_axial, is a normal
stress given by the well-known relationship

    σ_axial = P / A.

In addition, the axial displacement of a long bar of length L under uniform load P is given by

    δ_axial = PL / (AE).
2.3.2 LATERAL SHEAR RESPONSE
One way that a shear stress can be produced is by distributing a lateral (or transverse) force, V ,
in the plane of a cross section, as shown in Fig. 2.3. This stress will not, in general, be uniform
over the cross section. However, for certain regular shapes its intensity can be estimated using the
well-known formula
    τ_lateral = V Q / (I t),
where V is the resultant of the lateral force vectors, I is the second area moment of the cross
section, and Q and t are, respectively, the first area moment of the cross section and thickness
Figure 2.2: Average normal stress distributions in a bar due to axial load on faces perpendicular to
the load.
(or width) of the cross section at the location where the stress is being evaluated. Because the
calculation of Q is sometimes involved, an approximation for the maximum shear stress in the
section due to this type of loading can be easily obtained by knowing the shape of the cross section,
where, for instance
    τ_lateral,max = 4V / (3A)   for circular cross sections,
    τ_lateral,max = 3V / (2A)   for rectangular cross sections.
Figure 2.3: Shear traction is distributed perpendicular to the normal of the cross section.
2.3.3 BENDING RESPONSE
Both tensile and compressive normal stresses can also be caused by bending moments, as shown
in Fig. 2.4. If the beam has a prismatic section and is symmetric about the transverse plane, the
pure bending assumption that ‘plane sections remain plane and normal to the neutral axis’ can be
applied to yield the well-known formula to predict the bending stress at a given distance from
the neutral axis:
    σ_bending = M y / I,
where M refers to the resultant moment, I represents the cross-sectional property known as the
area moment of inertia about an axis passing through the centroid of the cross section, and y
represents the distance from the neutral axis toward the outer edge of the cross section where the
stress is being evaluated. The displacement of a beam due to a transverse loading can be determined
Figure 2.4: Stress distribution due to bending loads varies linearly through the cross section.
by integrating the fourth-order differential equation
    d⁴v / dx⁴ = −w / (EI),
where E is the modulus of elasticity and w is the load per unit length applied transversely to the
beam. This basic theory of beam bending is often referred to as Euler-Bernoulli beam theory.
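As a small illustration of how this relation is used, the sketch below (Python with NumPy; the span, stiffness, and load intensity are assumed values) integrates the curvature M/(EI) of a uniformly loaded cantilever twice, using the statically determinate bending moment rather than integrating the fourth-order form four times, and compares the resulting tip deflection with the closed-form value wL⁴/(8EI).

```python
import numpy as np

# Sketch: integrate the curvature M/(E*I) of a uniformly loaded cantilever twice
# and compare the tip deflection with the closed-form value w*L**4/(8*E*I).
# All numerical values below are illustrative assumptions.
E, I = 29.0e6, 100.0            # psi, in^4
L, w = 120.0, 50.0              # span (in) and downward load intensity (lbf/in)

x = np.linspace(0.0, L, 2001)
M = -w * (L - x) ** 2 / 2.0     # bending moment; the fixed end is at x = 0

def cumtrapz0(y, x):
    """Cumulative trapezoidal integral that starts from zero at x[0]."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

slope = cumtrapz0(M / (E * I), x)           # v'(0) = 0 at the fixed support
v = cumtrapz0(slope, x)                     # v(0) = 0 at the fixed support
print(v[-1], -w * L**4 / (8.0 * E * I))     # numerical vs. closed-form tip value
```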
2.3.4 TORSIONAL RESPONSE
Shear stresses can also develop when a torque is applied to a shaft. If the shaft is circular or
annular in cross section, it can be assumed that cross sections remain parallel and circular. From
this assumption, the shear stress due to torsion can be predicted at a point at a given radial distance,
ρ, away from the center by the well-known formula
    τ_torsion = T ρ / J,
where T is the total torque carried by the section and J is the polar moment of inertia of the cross
section. These stress components are illustrated in Fig. 2.5. Under these conditions, the axial twist
(sometimes referred to as angular displacement) along such a shaft of length L can be calculated
from the formula

    θ = T L / (G J),

where G is the modulus of rigidity.
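A quick numerical check of these torsion relations for a solid circular shaft is sketched below; the torque, length, diameter, and shear modulus are assumed values, and J = πd⁴/32 for a solid circular section.

```python
import math

T, L, d, G = 20000.0, 36.0, 2.0, 11.2e6    # lbf-in, in, in, psi (assumed values)
J = math.pi * d**4 / 32.0                  # polar moment of inertia, solid circle
tau_max = T * (d / 2.0) / J                # shear stress at the outer fiber, psi
theta = T * L / (G * J)                    # angle of twist over length L, radians
print(tau_max, theta)                      # ~12732 psi, ~0.041 rad
```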
Figure 2.5: Internal stresses due to torsion loads are distributed as shear tractions.
Example 2.1: Simple Truss Analysis
A weight is suspended by three bars as shown in Fig. 2.6. All three bars are made of
steel, a = 16 in, b = 12 in, c = 12 in, the diameter of each bar is 0.5 in, and W = 5000 lbf.
Determine the force carried by each bar.
Figure 2.6: A three-bar structure supporting a weight forms an indeterminate truss.
A Free Body Diagram (FBD) of point P reveals that there are three unknown forces,
as shown in Fig. 2.7.
Figure 2.7: Free body diagram of point P with bar angle conventions.
However, there are only two equations of static equilibrium:

    ΣFx :  F_PQ cos θ_PQ + F_PR cos θ_PR + F_PS cos θ_PS = 0,
    ΣFy :  F_PQ sin θ_PQ + F_PR sin θ_PR + F_PS sin θ_PS = W,
where the angle for each bar is measured in the counterclockwise direction from the pos-
itive x-axis. Such a system is called statically indeterminate because the equations of static
equilibrium are insufficient to determine the forces in the structural elements. Analysis of a
statically indeterminate system requires additional equations that account for the structural
deformation, i. e., how the bars deform under their applied load.
Inverting the force-displacement relation from Section 2.3.1, F = (EA/L)δ, where
E is the modulus of elasticity, A is the cross-sectional area of the bar, and L is the (initial)
length of the bar, allows us to interpret each bar as a spring with equivalent stiffness

    k = EA / L.
Denoting the stiffness of each bar by k_PQ, k_PR, and k_PS, and the deformation of each
bar by δ_PQ, δ_PR, and δ_PS, we can rewrite the equilibrium equations as follows:

    ΣFx :  k_PQ δ_PQ cos θ_PQ + k_PR δ_PR cos θ_PR + k_PS δ_PS cos θ_PS = 0,
    ΣFy :  k_PQ δ_PQ sin θ_PQ + k_PR δ_PR sin θ_PR + k_PS δ_PS sin θ_PS = W.
After the load is applied, the point P, which is initially located at (0, 0), will move to a new
location P′. We use u and v to denote, respectively, the horizontal and vertical components of
the displacement from point P to point P′, as illustrated in Fig. 2.8. Note that by convention,
we have illustrated the case such that u > 0 and v > 0, but in general, one or both of these
values could be negative.
Figure 2.8: The structure deforms and point P displaces as the load is applied.
As suggested by Fig. 2.8, both the length and direction of each bar change after the load
is applied. However, under many common circumstances, the displacements are small enough
such that the change in direction is negligible. Therefore we will assume, as an approximation,
that each deformed bar is parallel to its original position. This is illustrated in Fig. 2.9, which
shows initial and deformed positions of the bar PS near point P, and how the deformation
δ_PS is geometrically related to the displacements u and v.
Figure 2.9: The displacement δ_PS is comprised of components along the x and y directions.
Using basic trigonometry, the deformation of the bar δ_PS is related to the displacement of
P′, (u, v), by the equation

    −δ_PS = u cos θ_PS + v sin θ_PS.

Note that the negative sign in front of δ_PS accounts for the convention that positive δ corresponds
to the bar getting longer, but in Fig. 2.8, the bar is contracted.
Because the kinematic description of each bar is standardized (Fig. 2.7), the equations
for the other two bars are similar without requiring separate derivations:
    −δ_PR = u cos θ_PR + v sin θ_PR,
    −δ_PQ = u cos θ_PQ + v sin θ_PQ.
These equations are called compatibility equations because the deformations must be compatible
so that all bars remain connected at point P′. In summary, we now have five equations
for the five unknown variables δ_PQ, δ_PR, δ_PS, u, and v. Notice also that these equations
are linear in these variables. This is a consequence of our use of the approximation that the
direction of each bar remains unchanged. For this example, we have in mind that the reader
will solve the five equations using a numerical solver such as MATLAB or Excel, and then
develop a model of this problem using a commercial FE solver. We recommend assembling
the structure using beam or bar elements. Depending on the reader’s experience with FEA,
it may or may not be clear that both equilibrium and compatibility conditions are simulta-
neously enforced as part of a displacement-based finite element analysis. In our model, using
one-dimensional bar (or truss) elements in ANSYS, the finite element method obtains the
theoretical solution exactly (up to machine precision): the bar forces are 1287.5 lb in bar PQ,
3197.5 lb in bar PR, and 1456.6 lb in bar PS; the displacements of the loaded point P′
are u = 6.00 × 10⁻⁴ in and v = −6.74 × 10⁻³ in. The deformed shape can be illustrated by
post-processing the finite element results, as shown in Fig. 2.10.
Figure 2.10: The structure deforms and point P displaces as the load is applied. The finite ele-
ment result matches the exact result for nodal displacements and bar forces.
This example is adapted from Papadopoulos et al. [2013] with permission.
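For readers who want to try the numerical-solver route before building the FE model, the sketch below sets up the five equations with NumPy. The support coordinates (Q, R, and S at (−16, 12), (0, 12), and (12, 12) in, with P at the origin) and the steel modulus E = 29 × 10⁶ psi are our reading of Fig. 2.6 and an assumed material value; with these assumptions the script reproduces the bar forces and displacements quoted above.

```python
import numpy as np

E = 29.0e6                                   # psi, assumed modulus for steel
d, W = 0.5, 5000.0                           # bar diameter (in) and weight (lbf)
A = np.pi * d**2 / 4.0

# Support coordinates relative to P at the origin (our reading of Fig. 2.6).
xy = {"PQ": (-16.0, 12.0), "PR": (0.0, 12.0), "PS": (12.0, 12.0)}
bars = ["PQ", "PR", "PS"]
L = {b: np.hypot(*xy[b]) for b in bars}                    # bar lengths
th = {b: np.arctan2(xy[b][1], xy[b][0]) for b in bars}     # bar angles from +x axis
k = {b: E * A / L[b] for b in bars}                        # spring stiffness EA/L

# Unknowns ordered [delta_PQ, delta_PR, delta_PS, u, v]:
# two equilibrium equations followed by three compatibility equations.
M = np.zeros((5, 5))
rhs = np.array([0.0, W, 0.0, 0.0, 0.0])
for j, b in enumerate(bars):
    M[0, j] = k[b] * np.cos(th[b])           # sum Fx = 0
    M[1, j] = k[b] * np.sin(th[b])           # sum Fy = W
    M[2 + j, j] = 1.0                        # delta_b = -(u cos th + v sin th)
    M[2 + j, 3] = np.cos(th[b])
    M[2 + j, 4] = np.sin(th[b])
sol = np.linalg.solve(M, rhs)

for j, b in enumerate(bars):
    print(f"F_{b} = {k[b] * sol[j]:7.1f} lbf")       # ~1287.5, 3197.5, 1456.6
print(f"u = {sol[3]:.2e} in, v = {sol[4]:.2e} in")   # ~6.0e-4, ~-6.7e-3
```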
2.4 WHAT DIMENSION ARE YOU IN?
The distribution of stress, strain, and displacement in an elastic body subject to prescribed forces
requires consideration of a number of fundamental conditions relating material constitutive laws,
material properties, geometry, and surface forces.
1. The equations of equilibrium must be satisfied throughout the body.
2. A constitutive law relating stress and strain must apply to the material, e. g., linear elastic
Hooke’s law.
3. Compatibility must hold, i. e., the components of strain must be compatible with one an-
other or the strain must be consistent with the preservation of body continuity. This is a
critical matter for FEA that is not always discussed in mechanics of materials.
4. The stress, strain, and deformation must be such as to conform to the conditions of loading
imposed at the boundaries.
Realistically, all problems are three-dimensional, but satisfying all the conditions outlined
above can quickly become intractable. Indeed, closed form solutions to three-dimensional bound-
ary value problems in linear elasticity can be very involved or even impossible. When possible,
it is wise to take advantage of simplifications in which the displacement, stress, or strain fields
take on a one- or two-dimensional nature. These opportunities arise when a lower
dimensional model captures enough of the essential behavior.
For instance, in Example 2.1, we tacitly recommended that the 3-bar structure be modeled
with beam or bar elements. This was natural enough, but to elaborate, we assumed that behaviors
such as lateral contraction of the bars via the Poisson effect, bending, or other stresses not directed
along the axes of the bars were negligible. Thus, a model that resembles a simple
axial bar, with its correspondingly simple behavior as described in Section 2.3.1, is sufficient. It is
unnecessary to develop a ‘true’ three-dimensional model that is more complicated.
In general, when modeling, the metaphor to not ‘throw the baby out with the bathwater’
is apt. The ‘baby’ is that which is essential, i. e., the dominant mechanics that we choose to keep
in the model, such as the dominant axial behavior of the bars in Example 2.1. The ‘bathwater’
is all of the other mechanics that we choose to neglect, such as the lateral effects in the bars of
Example 2.1.
There are several other important situations in which it is appropriate to simplify the dimen-
sionality of a problem. This is evidenced when we realize that simple beam deflection solutions
resolve only the deformed shape of the neutral axis of the beam cross section. Indeed, in the
simplest beam bending theory that was reviewed in Section 2.3.3, referred to as Euler-Bernoulli
theory, the formulae for axial bending stress and maximum deflection are sufficient in the limit as
the beam length dominates over the remaining two cross-sectional dimensions. In other words,
Euler-Bernoulli beam theory holds only in the limit as the beam becomes “long and slender.” The
simplest bending relations become progressively more insufficient as the cross-sectional dimen-
sions grow and are no longer small compared with the beam’s length. In this limit, one can argue
that the beam becomes ‘hopelessly three-dimensional.’
Other opportunities arise when the two in-plane dimensions are either both large or both
small compared with the out-of-plane dimension. In this limit, we have been taught
two-dimensional planar solutions for plane stress, plane strain, and axisymmetric conditions. We
explore these situations in the following sections.
2.4.1 THE LIMIT OF THE THIN (PLANE STRESS AND PRESSURE
VESSELS)
There are many problems of practical importance in which the stress conditions are ones of plane
stress. This occurs often in thin members, as shown in Fig. 2.11. In this limit:
1. The stress components σx, σy, and σz do not vary through the thickness, i. e., they are
functions of x and y only.
2. Externally applied forces are functions of x and y only.
3. The out-of-plane stress components are identically zero, i. e.,

    σz = 0,   τzx = τxz = 0,   τzy = τyz = 0.
For such cases in FEA, a two-dimensional solid or continuum plane stress element is used.
Figure 2.11: A state of plane stress will often result in thin sections with loads applied in the plane.
2.4.2 THE LIMIT OF THE THICK (PLANE STRAIN)
There are many problems of practical importance in which the strain conditions are ones of plane
strain. For long, prismatic members subject to lateral loading in the x-y plane, as shown in
Fig. 2.12, a state of plane strain will result. In this limit:
1. The strain components do not vary through the thickness, i. e., they are functions of x and
y only.
2. Externally applied forces are functions of x and y only.
3. The out-of-plane strain components are identically zero, i. e.,

    εz = 0,   γzx = γxz = 0,   γzy = γyz = 0.
For such cases in FEA, a two-dimensional solid or continuum plane strain element is used.
Figure 2.12: A state of plane strain will often result in thick sections with loads applied in the plane.
2.4.3 ANALOGY OF PLANE STRESS AND PLANE STRAIN
For similar cross sections, a solution derived for plane stress is strictly analogous to that for plane
strain when using the conversions listed in Table 2.1.
Table 2.1: Conversions of two-dimensional assumptions

    Solution        To convert to    E is replaced by         ν is replaced by
    Plane stress    Plane strain     E / (1 − ν²)             ν / (1 − ν)
    Plane strain    Plane stress     E(1 + 2ν) / (1 + ν)²     ν / (1 + ν)
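The conversions in Table 2.1 are easy to script; the short sketch below (Python) applies them and confirms that converting to plane strain and back recovers the original constants.

```python
def to_plane_strain(E, nu):
    """Convert plane-stress elastic constants to their plane-strain equivalents (Table 2.1)."""
    return E / (1.0 - nu**2), nu / (1.0 - nu)

def to_plane_stress(E, nu):
    """Convert plane-strain elastic constants back to plane-stress form (Table 2.1)."""
    return E * (1.0 + 2.0 * nu) / (1.0 + nu) ** 2, nu / (1.0 + nu)

# Example: steel with E = 29e6 psi and nu = 0.3 (assumed values).
E_ps, nu_ps = to_plane_strain(29.0e6, 0.3)
print(E_ps, nu_ps)                 # ~31.87e6 psi, ~0.4286
print(to_plane_stress(E_ps, nu_ps))   # round trip recovers 29e6 psi and 0.3
```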
2.4.4 THE LIMIT OF THE ROUND (AXISYMMETRY)
Finally, many practical problems exhibit azimuthal symmetry about an axis. When there is no
dependence of the deformation on the angle, θ, in Fig. 2.13, the state of stress will not vary
in this direction and the stress and deformation fields reduce to functions of (r, z) only. Such
conditions arise whenever:
1. all cross sections in the r-z-plane experience identical deformations;
2. externally applied forces are functions of r and z only; and
3. there is no θ-variation of the deformation in the body, i. e., points in the transverse (r, z)
plane always remain in their respective transverse planes following application of the loads.
Figure 2.13: An axisymmetric geometry results when there is no variation in the azimuthal (θ) direc-
tion.
For such cases in FEA, the body is meshed in the r-z plane and an axisymmetric, two-dimensional
continuum element is chosen for the analysis.
2.5 ST. VENANT'S PRINCIPLE
St. Venant’s principle, attributed to Barré de St. Venant, is a statement about the change in stress
distribution with respect to distance from a prescribed load or boundary condition. St. Venant’s
principle has significant implications for finite element analysis. It may be stated in a number of
equivalent ways.
1. The difference in stresses produced by two sets of statically equivalent forces acting on a sur-
face, A, diminishes with distance from A and becomes negligible at distances large relative
to the linear dimensions of A.
2. The detailed distribution of applied forces and moments on a boundary affects the internal
stress distribution in the vicinity of those applied forces and moments, but at several charac-
teristic dimensions away from the reactions, the internal stresses are essentially dependent
only on the applied external forces and moments, and not on how these forces and moments
are applied. A characteristic dimension is not an absolute dimension, e. g., “2 in,” but rather,
is a dimension that is meaningful in proportion to the given system, e. g., “1/3 the width of
the bar.” This is illustrated in Fig. 2.14.
3. Only stresses in the vicinity of loads are sensitive to the details of how those loads are applied.
4. If self-equilibrating forces act on a surface area, A, the internal stresses diminish with dis-
tance from A. The rate at which the stresses attenuate with distance may be influenced by
the shape of the body and must be estimated independently in each case.
5. Statically equivalent systems of forces and moments produce the same stresses and strains
within a body except in the immediate region where the loads are applied.
6. The localized effects caused by any load acting on the body tend to disappear in regions that
are sufficiently far away from the application of the load.
Many of the mathematical representations of the simplest loading conditions are them-
selves simple. But the concepts behind these relatively simple formulae are too
often lost on students exposed to them for the first time. A powerful teaching tool is the use of
quality graphical representations and illustrative examples, both of which appear in Steif [2012]
and Philpot [2010]. The reader may also find interesting two handbooks whose focus is a collec-
tion of formulae. These references are useful for benchmarking solutions and providing bounding
cases used in preliminary analysis [Allain, 2011, Pope, 1997].
Figure 2.14: Statically-equivalent sets of applied loads distributed differently over a boundary or part
thereof do not alter the internal stresses and their distribution several characteristic dimensions (here,
measured in terms of the width, w) away from the applied loads. Here, a compression specimen is
subjected to equivalent loads (P) over different portions of its ends: (a) full end, (b) half end, and (c)
point load. Approximately one specimen width into the bar, the state of stress is a uniform constant
stress corresponding to P/A.
2.6 COMBINED LOADING
Note To The Instructor
While students may recognize these idealized loading cases and their respective simple formulae,
we often observe that knowing how to linearly superpose these stress components, even under simple
combined loading, still eludes students even after exposure to the finite element method. Here we consider a
simple illustration for which finite element analysis is both straightforward and useful in framing students’
hand calculations as benchmarks for simulation results.
SimCafe Tutorial 1: Combined Loading in an Idealized Signpost
The purpose of this case study is to illustrate how combined loading is handled in a
straightforward manner using the finite element method. It presents a case study wherein
students can perform parametric studies varying the degrees to which the combined load-
ings are dominated by either axial, bending, torsional, or transverse shear response. It also
showcases how internal stresses from combined loads are superposed in a linear analysis.
Follow the directions at https://confluence.cornell.edu/display/
SIMULATION/Signpost to complete the tutorial.
Example 2.2: Combined Loading in an Idealized Signpost
The cantilevered signpost shown in Fig. 2.15 has dimensions x1 = 6 ft, z1 = 13 ft,
h1 = 28 ft, h2 = 4 ft, and b2 = 8 ft. The system is subjected to the external loads
wz = 700 lbf/ft, Fy = 8000 lbf, and Fz = 900 lbf/ft; w0 = net weight of the signpost.
The signpost is made of steel, and it is assumed that the signpost will remain in its
elastic range. This means that when the external load is removed, the material will
return to its original shape without suffering permanent deformation.
Figure 2.15: Geometrical description of the signpost illustrating dimensions and loads.
The post diameters do and di must be designed so that the total combined normal stresses and
combined shear stresses do not exceed allowable values. Assume allowable stresses of 25 ksi
and 16 ksi for normal and shear stress, respectively, which already account for an appropriate
factor of safety. This example is adapted from Papadopoulos et al. [2013] with permission
and with credit due to Genock Portela.
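The signpost sizing itself is left to the tutorial, but the superposition idea fits in a few lines. The sketch below uses a solid circular section and placeholder internal resultants at the fixed support (these are illustrative assumptions, not the signpost loads) to combine the axial and bending contributions to normal stress and the torsional and transverse-shear contributions to shear stress.

```python
import math

# Placeholder section and internal resultants at the support (illustrative only).
d = 8.0                                       # in, solid circular post diameter
P, M, T, V = 12000.0, 3.0e5, 1.5e5, 9000.0    # lbf, lbf-in, lbf-in, lbf

A = math.pi * d**2 / 4.0
I = math.pi * d**4 / 64.0
J = 2.0 * I
c = d / 2.0

sigma = P / A + M * c / I                 # axial + bending normal stress
tau = T * c / J + 4.0 * V / (3.0 * A)     # torsion + maximum transverse shear
# Note: conservative, since the two shear maxima occur at different points.
print(f"combined normal stress = {sigma:8.0f} psi")
print(f"combined shear stress  = {tau:8.0f} psi")
```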
2.7 A CLOSING REMARK AND LOOK AHEAD
In this chapter we reviewed common structural elements and their usual analyses from Mechanics
of Materials. We then used these forms to illustrate broader qualitative concepts and to introduce
the finite element method. So far, no major surprises have surfaced, and all of the results are as
expected.
As we look ahead to the next chapter, we are now ready to examine problems that have
greater geometrical complexity and irregularities. While some aspects of the FEA procedure will
be the same as those introduced here, they must now be used with more caution, skepticism, and
refinement. Moreover, the user will need to learn some new techniques to completely capture
essential details in these new situations.
CHAPTER 3
Where We Begin to Go Wrong
When all you have is a hammer, everything looks like a
nail.
Anonymous
Note To The Instructor
We have often told our students that one of the advantages of finite element analysis is that it is relatively
easy to perform. We also add that one of the disadvantages of finite element analysis is that it can be too easy
to perform. As ease of use becomes more prevalent, it can belie the complexity of the actual solution to one’s
problem. Clearly, a distinct advantage has always been to give drudgery and repetitive tasks to the machine
to free up time for the analyst to spend critically thinking. So the computer is a fast, but not necessarily
intelligent, aid in obtaining sufficiently accurate solutions. The requisite intelligence lives primarily in two
places:
1. commercial software’s pre-programmed algorithms that approximately model theories with which
students may or may not be familiar and
2. an analyst’s pre-processing of a model formulation and interfacing this model with the commercial
software.
Where students go wrong can often be traced to one of these two lapses in intelligence. The first
appears when students attempt problems whose solutions they do not know a priori. In such cases, the
theory they know may or may not be relevant or sufficient to model the problem. Students often view this as
carte blanche for initiating a finite element analysis. One common pitfall is that it is more difficult to validate
a solution you do not know or understand a priori. In such cases, new learners often turn to the theory
they know when attempting to validate simulation results. Comparing the results of correct finite element
analysis with expectations using inadequate theory is a common mistake made by students in introductory
courses. This is particularly true in courses where commercial software is used as part of the student laboratory
experience. We illustrate this first “way to go wrong” with three examples.
3.1 EXCEPTIONS TO THE RULE
If you’re running a fever, you will remain home and nurse it…to a point. If your fever reaches
104 °F, however, you may consider visiting your doctor or local emergency room. Analogously,
the simple formulae discussed in Chapter 2 can suffer a similar fate in their predictability as one
deviates further from the simplifying physical assumptions on which they are based. Take the
model for normal bending stress developed in slender beams:
    σ_bending = M y / I.
This formula is sufficiently accurate when beams are “long and slender;” that is, they are beams in
which the length along the neutral axis is large compared with the dimensions of the beam cross
section. The transverse deflections under load must also, typically, be orders of magnitude lower
than the beam span. As with all good theory, Euler-Bernoulli beam theory is considered valid in
a field of geometric dimensions and deformation scales that are bounded by dimensionless ratios.
For example, simple beam theory is considered to be applicable when
    v / L ≪ 1;   D / L ≪ 1,
where D is a characteristic linear dimension of the cross section. We may even estimate a range of
validity by specifying “lines in the sand” beyond which we can apply the results of the simplified
theory:
    L / v ≥ 1000;   L / D ≥ 20.
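Such limits are simple enough to encode as a guard in a pre-analysis checklist; the thresholds below are exactly the admittedly arbitrary ones quoted above.

```python
def within_simple_beam_limits(L, D, v_max):
    """Return True when a beam model sits inside the 'lines in the sand'
    quoted above: L/D >= 20 (long and slender) and L/v >= 1000 (small deflection)."""
    return (L / D >= 20.0) and (L / v_max >= 1000.0)

print(within_simple_beam_limits(L=120.0, D=4.0, v_max=0.05))   # True: slender beam
print(within_simple_beam_limits(L=24.0, D=8.0, v_max=0.05))    # False: stubby beam
```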
What is important is that these dimensionless “limits of applicability” are somewhat arbi-
trary. They serve only as user-defined risk limits in applying simplifying assumptions. They serve
as warning posts beyond which we may wish to consider whether the true internal bending stresses
are sufficiently modeled by such simple formulae. Of course, the deviations from the simple limit
occur gradually as one passes through their range of applicability. Much like the metaphor of a
fever, the severity of the dysfunction grows degree by degree. Only finally at “some limit” (that
generally varies from person to person) do we decide the formula is too sick to be used any fur-
ther. Like climbers on Mount Everest, if one ignores too many small increments in impending
bad weather, one could get caught on the mountain in conditions where equipment suitable for
milder weather is no longer appropriate to the task. As the applicability of our simplifications
falters, i. e., for sufficiently short beams, the predictions of models based on these simplifications
will agree less and less with results observed in practice and in the laboratory.
The moral of the story is simple. The formulae examined in Chapter 2 do not suddenly go
bad, no more than a fever jumps from mild to extreme. One tends to step out of the range of
applicability of these simple formulae slowly, one degree at a time, until we finally judge predic-
tions based on them to be “sufficiently wrong.” Practitioners of FEA must know the applicability
of the theories that, in addition to comparison with experimental data, are used to validate any
numerical approximation of mechanical behavior.
While the applicability of any simple formula is limited, it is still useful because the range
of applicability can generally be large. However, no matter how large the region in which these
formulae hold, we must be aware of the fences that bound them lest we utilize poor validation
tools to benchmark our numerical simulations.
The following are some of the specific places where mechanics idealizations may either
break down or become sufficiently flawed to warrant treading with caution [Papadopoulos, 2008,
Papadopoulos et al., 2011]. This list is not all-inclusive, but we point out several instances where their
bearing on validation of FEA is paramount.
• While linearity is applicable for small displacements, it is a poor approximation when dis-
placements grow “sufficiently large.” Studying the exact geometry of a structure and its
actual displacements under loading can be very complicated with many resulting equations
being nonlinear. When this is the case, the advantages accompanying linearity are lost, e. g.,
the guarantee of unique solutions, ability to superpose basic solutions, and ability to scale
any solution in load or overall size.
• Stress concentrations based on geometry such as re-entrant corners or cavities are, in gen-
eral, not captured by formulae that describe homogeneous states of stress. Stress concen-
trations are rooted in the interplay of stresses in orthogonal directions and not describable
by one-dimensional simplifications.
• For loading that results in fully three-dimensional, inhomogeneous stress states, any and all
formulae that rely on lower-dimensional idealizations are often no longer valid.
• When three-dimensional variation occurs, neglect of warpage and lateral strain may not be
realistic.
• For loading and geometry that are fully three-dimensional, boundary conditions that are
idealized in lower dimensions can no longer be specified in unique terms. There are a variety
of approximations to classic boundary conditions such as a clamped support.
3.2 THE LINES IN THE SAND
We do not intend to outline all the boundaries of the simplest theories. This has been undertaken
in sufficient detail in many good mechanics of materials texts such as Philpot [2010], Steif [2012],
and Riley et al. [2007]. We wish here to illustrate a few salient examples. These will serve to
highlight what happens in distinct crossings of “lines in the sand,” such as:
1. when stress concentrations defy one-dimensional idealization,
2. when previously-insignificant deformation modes become non-negligible, and
3. when geometric dimensions dictate three-dimensional stress states.
3.2.1 A STEPPED AXIAL ROD
SimCafe Tutorial 2: Stress Concentration in a Stepped Axial Shaft
When geometries exhibit discontinuities along a loading path, stress concentrations
generally arise. Stress flow is analogous to fluid flow: the steep gradients that result from navi-
gating sharp discontinuities produce enhanced stress intensity. What may not be evident is
that a discontinuity in geometry requires modeling the geometry in multiple dimensions in
order to capture how the stress flows through the domain. Thus, one-dimensional simplifi-
cations are not capable of capturing these important effects.
The purpose of this tutorial is to showcase perhaps the simplest stress concentration
and to point out that it can be resolved in two or three dimensions. Simple one-dimensional el-
ements (i. e., simple axial bar elements) that capture constant stress within an element are in-
sufficient to capture stress concentrations, even when many elements are used. In other words,
the requisite theory is absent, so mesh refinement is of no utility in converging on the solu-
tion. When the element formulation does not contain the necessary physics, h-convergence,
or using more elements, captures no more of the solution than does a coarser discretization.
This tutorial is meant to highlight where it is relatively straightforward to apply FEA and
resolve a solution correctly that belies analytical treatment with uniaxial formulae (such as
σ_axial = P/A). Follow the directions at https://confluence.cornell.edu/display/
SIMULATION/Stepped+Shaft to complete the tutorial.
Example 3.1: A Stepped Axial Rod
Consider a stepped shaft under uniform axial load, P , as shown in Fig. 3.1.
Figure 3.1: Geometrical description of a shaft with a discontinuous step.
Stress concentrations arise due to coupling of the stress response in multiple direc-
tions. In the axisymmetric geometry pictured in Fig. 3.1, simplified two-dimensional theory
of elasticity can be employed to derive approximate theoretical expressions for the observed
stress risers, by fitting such models to experimental data [Solverson, 1953]. Many stress con-
centration factors fit in this manner are collected in Young and Budynas [2002]. For a stepped
shaft with circular fillets:
h
r
D
h=r
2h=D
3 in
1 in
8 in
3
3=4
D
D
D
D
D
0:75;
D
a simple fit formula for the axial stress concentration is accurate to within 5% and given by:
(cid:18) 2h
D
C3
2h
D C
0:831ph=r
(cid:0)
0:318ph=r
(cid:0)
0:5220ph=r
0:009ph=r
C2
D
C1
C
1:225
C
1:831
D
D (cid:0)
D
D (cid:0)
D
2:236
0:63
1:377;
(cid:0)
C
2
(cid:19)
3
(cid:19)
(cid:18) 2h
D
C4
C
0:010.h=r/
2:634
D
0:049.h=r/
0:176.h=r/
0:117.h=r/
2:529
1:8599
0:9654
D (cid:0)
D
D (cid:0)
(cid:0)
C
(cid:0)
K
C1
C2
C3
C4
K
)
and
(cid:27)max D
K(cid:27)nom D
K
P
Amin D
K
4P
(cid:25) .D
(cid:0)
2h/2 D
1376 psi:
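The fit is easy to evaluate programmatically; the sketch below reproduces K ≈ 1.377 for the dimensions given. The applied load P is not stated in this excerpt, so the value used here is an assumption chosen to give a nominal stress near 1000 psi.

```python
import math

def stepped_shaft_K(h, r, D):
    """Fit for the axial stress-concentration factor of a stepped circular
    shaft with a circular fillet, using the constants reconstructed above."""
    s = math.sqrt(h / r)
    C1 = 1.225 + 0.831 * s - 0.010 * (h / r)
    C2 = -1.831 - 0.318 * s - 0.049 * (h / r)
    C3 = 2.236 - 0.5220 * s + 0.176 * (h / r)
    C4 = -0.630 + 0.009 * s - 0.117 * (h / r)
    x = 2.0 * h / D
    return C1 + C2 * x + C3 * x**2 + C4 * x**3

h, r, D = 3.0, 1.0, 8.0
K = stepped_shaft_K(h, r, D)                   # ~1.377
A_min = math.pi * (D - 2.0 * h) ** 2 / 4.0     # area of the smaller section
P = 1000.0 * math.pi                           # lbf, assumed load (not given here)
print(K, K * P / A_min)                        # ~1.377, ~1377 psi
```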
The response of a circular stepped shaft in tension is axisymmetric. An axisymmet-
ric analysis undertaken in ANSYS predicts the stress concentration to within the order of
accuracy of the simple formula fit, as shown in Figs. 3.2 and 3.3.
Figure 3.2: The finite element method predicts the axial stress concentration in a stepped shaft.
Figure 3.3: The local axial stress concentration is shown in the vicinity of the step fillet.
Because these effects arise from coupling of stress in different directions, one-dimensional theories
are incapable of modeling stress concentrations in the vicinity of geometric discontinuities such as
re-entrant corners or fillets. Users must be careful to remember that in such cases two- or three-
dimensional simulations are required. Because multi-dimensional analysis is required to capture
stress concentrations, it is also required in numerical design considerations of how to alleviate
such stress risers. For instance, in the case of the stepped shaft, one might ask the question “Is
there any way to alleviate the stress concentration at the fillet without changing the diameter on
either side or increasing the radius of the fillet?” A three-dimensional analysis reveals that this
is actually possible by undercutting the larger diameter portion of the shaft in the vicinity of the
original step, as shown in Fig. 3.4.
Figure 3.4: It is possible to alleviate a stress riser without changing either diameter of a stepped shaft.
A multi-dimensional finite element analysis is required to capture these phenomena. This solution
is reproduced from [Papadopoulos et al., 2011] with permission, with particular credit due to Jim
Papadopoulos.
3.2.2 A SHORT, STUBBY BEAM
Euler-Bernoulli beam theory, as introduced in strength of materials courses, accounts for trans-
verse deflection due to bending only. Bending deflections can be said to dominate the deforma-
tion response when the span-to-depth ratio of the beam exceeds, say, 15. For progressively shorter
beams, the assumption that shear deformation can be neglected when compared with the bend-
ing deformation is no longer warranted. In these limits, the shear deformation should be taken
into account. Timoshenko beam theory accounts for explicit contributions of deformation due to
shear. This is the theory one should apply when the span-to-depth ratio of the beam falls below
some prescribed limit.
SimCafe Tutorial 3: Stress and Deflection in a Timoshenko Beam
The purpose of this tutorial is to showcase where simple beam theory begins to break
down. In some commercial codes, simple one-dimensional cubic beam elements that capture
bending deflection do not capture shear deflection. Alternatively, Timoshenko beam theory
may be used by default in the element formulation (as with the BEAM188 element in ANSYS
v14). When shear deflection is accounted for in the one-dimensional element formulation,
results for the beam’s tip deflection will not agree with tip deflections predicted by simple
Euler-Bernoulli beam theory when the beam is relatively short. Again, attempts to capture
this effect with h-convergence will ultimately fail when the necessary physics is not contained
in the element formulation. When it is and the results are compared to simpler theory, the
disagreement may be substantial. Once again, h-convergence captures no more of the solu-
tion than does a coarser discretization. This tutorial is meant to highlight when it is relatively
straightforward to apply three-dimensional FEA and resolve a solution correctly that belies
analytical treatment with simple formulae (such as the bending tip deflection v = PL^3/3EI).
Follow the directions at https://confluence.cornell.edu/display/
SIMULATION/Stubby+Beam to complete the tutorial.
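As a rough preliminary estimate of when shear deflection matters, one can compare the Euler-Bernoulli tip deflection with a Timoshenko-type estimate that adds a shear term. The sketch below uses an assumed rectangular section and steel-like properties rather than the tutorial's I-beam, so the numbers are only indicative of the trend.

```python
# Minimal sketch: bending vs. shear contributions to the tip deflection of a tip-loaded
# cantilever for several span-to-depth ratios. Section, material, and shear correction
# factor are illustrative assumptions, not the tutorial's I-beam.
E, nu = 30.0e6, 0.3                    # psi (assumed steel-like values)
G = E / (2.0 * (1.0 + nu))
b, d = 1.0, 2.0                        # rectangular section width and depth, in (assumed)
A, I = b * d, b * d**3 / 12.0
kappa = 5.0 / 6.0                      # shear correction factor for a rectangle
P = 1000.0                             # tip load, lbf (assumed)

for L_over_d in (2, 3, 5, 10, 15):
    L = L_over_d * d
    v_bend = P * L**3 / (3.0 * E * I)  # Euler-Bernoulli bending deflection
    v_shear = P * L / (kappa * G * A)  # additional Timoshenko shear deflection
    print(f"L/d = {L_over_d:4.1f}: shear adds {100.0 * v_shear / v_bend:5.1f}% "
          f"to the bending tip deflection")
```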
Example 3.2: Large Depth-to-Span Ratio Beams
Consider a relatively short tip-loaded cantilevered I-beam, as shown in Fig. 3.5.
Figure 3.5: A simple cantilever beam is loaded under transverse point tip load P .
The behavior of relatively short beams can be numerically approximated by either one-
dimensional beam elements that account for shear deflection or a fully three-dimensional
analysis. One should note, however, that while one-dimensional Timoshenko beam ele-
ments have interpolation functions for shear deformation, they do not capture the complete
three-dimensional state of stress within the beam. For instance, in short cantilever beams the
normal stress component at the clamped edge can no longer be predicted with the simple
bending formula in Chapter 2.
Figure 3.6: Cross section of a short I-beam and a corresponding three-dimensional solid model
that can be imported into many commercial finite element software packages.
The solid model is meshed for an I-beam whose span is 24 in. With a span-to-depth
ratio of only 3, the actual deformation and stress response will not be modeled well by Euler-
Bernoulli beam theory. Three-dimensional finite element simulations indicate that the shear
deflections are on the order of those from simple bending theory and the wall normal stresses
deviate substantially from those predicted by simple bending theory. Typical contours of
displacement and stress for the three-dimensional model are shown in Figs. 3.7 and 3.8 for
a tip load of 1000 lb.
Figure 3.7: Both shear and bending contribute to the total transverse deformation of short
beams.
Figure 3.8: The axial stress at the fixed wall deviates substantially from that predicted by one-
dimensional beam theory.
Additional three-dimensional models can be run to examine the effects of the length of the
beam. Such analyses verify the dependence of both the tip deflection and normal wall stress on
the beam’s span-to-depth ratio, as evidenced by results in Figs. 3.9 and 3.10.
Figure 3.9: The tip deflections predicted by Euler-Bernoulli beam theory become progressively
inaccurate for relatively short beams.
Figure 3.10: The normal stress at the fixed end predicted using Euler-Bernoulli beam theory in short
cantilever beams can underestimate the actual normal stress substantially.
3.2.3 A THICK-WALLED PRESSURE VESSEL
The simple formulae outlined in Chapter 2 represent nearly all states of uniform or linearly vary-
ing stress. Radial and hoop stresses in pressure vessels become uniform through the thickness
as the radius-to-thickness ratio becomes large. Because these formulae are simple and because
the variation of both radial and hoop stress becomes nonlinear for thick vessels, analysts may be
tempted to push the limits of the simple formulae. Here we point out that, as with the other
simple formulae, the deviation from the uniform stress state occurs gradually. When the radius-
to-thickness ratio falls below 10, errors arising from predicting stresses with thin-walled formulae
become appreciable and thick-walled formulae become increasingly necessary.
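The trend is easy to quantify with the Lamé thick-wall solution. The sketch below compares a thin-wall estimate with the Lamé hoop stress at the inner wall over a range of radius-to-thickness ratios; note that the computed error depends on which radius is inserted into the thin-wall formula (the inner radius is used here), so the percentages are illustrative rather than a reproduction of Fig. 3.15.

```python
# Minimal sketch: thin-wall vs. Lame (thick-wall) hoop stress at the inner wall of an
# internally pressurized cylinder, as the radius-to-thickness ratio is varied.
p, t = 1000.0, 1.0                      # internal pressure (psi) and wall thickness (in), assumed

for r_over_t in (40.0, 20.0, 10.0, 5.0, 3.5, 2.0):
    r_mean = r_over_t * t
    a, b = r_mean - t / 2.0, r_mean + t / 2.0       # inner and outer radii
    sigma_thin = p * a / t                          # thin-wall estimate (inner radius convention)
    sigma_lame = p * (a**2 + b**2) / (b**2 - a**2)  # Lame hoop stress at r = a
    err = 100.0 * (sigma_thin - sigma_lame) / sigma_lame
    print(f"r/t = {r_over_t:5.1f}: thin-wall {sigma_thin:8.1f} psi, "
          f"Lame {sigma_lame:8.1f} psi, error {err:6.1f}%")
```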
SimCafe Tutorial 4: Hoop Stress in a Thick-Walled Pressure Vessel
The purpose of this tutorial is to illustrate how thin-wall pressure vessel theory grad-
ually loses applicability as the radius-to-thickness ratio decreases. As before, this happens
gradually as the vessel walls become thicker. This tutorial is meant to highlight where it is
relatively straightforward to apply three-dimensional or axisymmetric FEA and resolve a
solution correctly for thick-walled vessels.
Follow the directions at https://confluence.cornell.edu/display/
SIMULATION/Pressure+Vessel to complete the tutorial.
Example 3.3: A Hydraulic Test Stand
Consider a hydraulic pressure vessel used to apply loads to experimental fixtures in an
undergraduate statics and strength of materials laboratory, as shown in Fig. 3.11.
Figure 3.11: Hydraulic test stands are typically moderately thick-walled pressure vessels.
Consider that the pressure vessel is verging on the limits of the thin-wall theory. The
outer diameter is 4 in with an inner diameter of 3 in and a 0.5 in wall thickness, giving an
average radius-to-thickness ratio of 3.5. Exploiting symmetry, an axisymmetric analysis of
half the vessel is created. The vessel is internally loaded with a constant pressure of 1000 psi.
The axisymmetric deformed mesh and internal stresses indicate a stress riser in the bottom
of the tank where membrane and bending stresses coincide, as shown in Fig. 3.12.
Figure 3.12: Pressure vessel hoop stress maximum occurs in the bottom of a thick-walled vessel.
Far from the discontinuity of the vessel corner, the hoop and radial stress variations in the
axial direction in the cylinder wall vanish, as shown in Fig. 3.13.
Figure 3.13: Pressure vessel hoop stresses are no longer uniform through the wall of a thick-
walled vessel.
Paths through the domain may be defined in many commercial finite element software
packages. Here, the variation of hoop stress through the wall thickness is not negligible. The
results in Fig. 3.14 show that the maximum value occurs on the inner diameter and is predicted
correctly by thick-wall theory.
Figure 3.14: Radial variation of hoop stress in the uniform section of the cylinder wall shows a
peak value at the inner wall that is underestimated by thin-wall theory.
When we vary the vessel thickness, the gradual degradation of the predictions using thin-
walled formulae becomes evident, as shown in Fig. 3.15.
Figure 3.15: Hoop stresses in thick-walled pressure vessels are underestimated by relations based on
thin-walled pressure vessel theory.
3.3 UTILITY OF THE FINITE ELEMENT METHOD
The deviations from the simplest stress states that occur in the examples of Sections 3.1 and 3.2
are easily handled by the finite element method. Deviations such as stress concentrations may
be predicted using approximate formulae, but these are almost always dependent on details of
the specimen geometry; FEA, in contrast, is simple enough to apply in all of these cases and
does a good job predicting the correct behavior for elastic deformation and stress. As the finite
element method becomes more pervasively used in industry, we feel there is utility in introducing
the method earlier in engineering curricula [Papadopoulos et al., 2011]. Distinct advantages to
introducing the method throughout one’s undergraduate studies include reducing the drudgery
and potential errors of computation and focusing on the theory of mechanics, while enabling students
to approach more complicated problems that escape the realm of closed-form solutions.
Now recall our earlier point that when using pre-programmed software, the majority of the
errors and their severity are attributable to the user. These include faulty input, poor modeling,
poor pre-processing, and ignorance of the software protocol. Analogous errors of using a wrong
formula or remaining ignorant of a key formula can occur when using hand calculations [Jeremić,
2009, Papadopoulos et al., 2011, Prantil and Howard, 2007, 2008]. The potential for such error
in problems like the ones in this chapter is high because the theoretical solutions are likely beyond
what most undergraduate mechanical engineering students have learned.
Here FEA can be very beneficial to allow students to explore behavior beyond their basic
theoretical knowledge, and it can serve as a bridge for them to discover more advanced theoretical
treatments such as those collected in Gieck and Gieck [2006] and Young and Budynas [2002]. Such
books are good references for finite element analysts to have at hand for validating numerical
solutions for problems whose analytical or empirical solutions have been determined. Using these
solutions as benchmarks for FEA analyses helps reinforce the practice of finding published and
verified solutions for comparison with numerical simulations. This further underscores our earlier
point that we advocate early introduction of FEA in the curriculum, even when it appears to
precede the students’ current level of engineering knowledge [Papadopoulos et al., 2011].
While applying the finite element method in these cases is relatively straightforward, for
more complex geometries and boundary conditions, the prescription of model details leads to sit-
uations in which it can become progressively easier for analysts to go wrong applying the method.
We discuss illustrative case studies for two such boundary value problems in Chapter 4.
C H A P T E R 4
It’s Only a Model
A model is a lie devised to help explain the truth.
Anonymous
The truth is always too complex.
Bruce Irons and Nigel Shrive
The Finite Element Primer
Note To The Instructor
The second lapse in intelligence in applying the finite element method occurs when users understand the
problem they want to solve, and understand the theory that they believe holds for the problem at hand. The
issue is whether the analyst properly poses the finite element formulation of the problem. These types of
errors can occur when analysts pre-process a model and
1. apply loads or boundary conditions incorrectly,
2. use an inadequate element formulation for the solution desired, or
3. analyze the problem in an inappropriate dimension, i. e., pose the problem as two-dimensional when
three-dimensional analysis is required.
In this scenario, the user falls prey to an old adage wherein the computer is doing what they tell it to do
rather than what they want it to do. Here we pose two deceptively simple problems that often cause new
learners to make these common mistakes in problem formulation.
4.1 THE EXPECTATION FAILURE
We expect regularities everywhere and attempt to find
them even where there are none. Events which do not
yield to these attempts we are inclined to treat as
“background noise,” and we stick to our expectations
even when they are inadequate.
Karl Popper
Conjectures and Refutations: The Growth of Scientific
Knowledge
As we mentioned in the Preface and elsewhere, we strongly believe in using expectation failures as
part of our teaching strategy. Because they are crucial to the examples in this chapter, we repeat
the words of Ken Bain to remind the reader of their meaning and importance:
Some of the best teachers want to create an expectation failure, a situation in which
existing mental models lead to faulty expectations. They attempt to place students in
situations where their mental models will not work. They listen to student concep-
tions before challenging them. They introduced problems, often case studies of what
could go wrong, and engaged the students in grappling with the issues those examples
raised [Bain, 2004].
Among the list of common errors made in FEA practice, in this chapter we address misconcep-
tions regarding either
1. the real physics governing the problem or
2. the construction of the finite element model approximating these physical mechanisms.
So an analyst harbors some misconception regarding underlying physical phenomena or
details of an appropriate numerical approximation. But, and this is critical, they begin to assure
themselves that they do understand. Perhaps they do not remember that “the truth is always too
complex” and either our broad simplifications of reality (the simple formulae) or the finite ele-
ment model approximations (say, lower-order interpolation finite elements) are insufficient for
the problem at hand. As we discussed in Chapter 3, analysts will proceed as if these simplifications
adequately represent the real behavior. In these cases, analysts may trust the incorrect numerical
analysis. Even when presented with experimental evidence that does not validate the computational
results, analysts can still “cling with fervor” to these incorrect results. Such computational compla-
cency may be born of rationalizing that because the software has more theory programmed into it
than the user has learned, the computer is more likely right.
In finite element analysis, expectation failures can arise in the following ways.
1. One prescribes boundary conditions that either over- or under-constrain the boundaries by
(a) not removing all rigid body translation and rotation or
(b) overly constraining degrees of freedom along a particular direction that preclude de-
formation and Poisson effects in orthogonal directions.
2. One chooses inappropriate finite element formulations, such as
(a) planar or one-dimensional elements that are not appropriate for the observed behavior,
(b) finite elements with inadequate degrees of freedom, or
(c) finite elements with inadequate order of interpolation.
3. Lower-order interpolations appear to predict behavior more accurately than higher-order
interpolations.
4. Meshes with fewer active degrees of freedom appear to predict more accurately than meshes
with more active degrees of freedom.
We wish to illustrate these points with two examples where finite element modeling can go wrong.
Remember, whether or not simplified theory is appropriate, incorrect finite element results are
typically cases of analyst error.
4.2 PHILOSOPHY OF MATHEMATICAL MODELING
The great masters do not take any model quite so
seriously as the rest of us. They know that it is, after all,
only a model, possibly replaceable.
C.S. Lewis
The game I play is imagination in a tight straightjacket.
That straightjacket is called the laws of physics.
Richard Feynman
S.I. Hayakawa is noted for pointing out that “the symbol is not the thing symbolized; the word is
not the thing; the map is not the territory it stands for” [Dym, 2004], echoing Richard Feynman
who recalled that his father “knew the difference between knowing the name of something and
knowing something” [Public Broadcasting System–NOVA, 1993]. When engineers attempt to
formulate models for systems and processes, it is incumbent upon us to remember that the process,
the system is “the thing,” “the territory.” e model is a symbol, word, or map that in some way
names the thing. ey are not the same. To model some process well requires recasting its real
nature into a simplified shell that allows its basic nature to be captured in mathematical form, a
set of equations whose solutions tell us something about how the model system behaves under a
given set of controlled conditions. An abstraction of the process is shown in Fig. 4.1.
Figure 4.1: A mathematical model is devised by sufficiently simplifying a problem statement such
that its formulation can be cast in equation form.
Similar conceptualizations have been illustrated elsewhere and these overviews of model-
ing are well worth reading: Carson and Cobelli [2000], Dym [2004], Greenbaum and Chartier
[2012]. In order to numerically model a system, we must observe the system in nature. We must:
1. collect all information relevant to how the system behaves,
2. detail what we need to find out or predict,
3. specify how well we need to know or predict this behavior, and
4. seriously ask a singularly important question: “What do we expect to happen?”
(Figure 4.1 depicts the modeling loop: Real World → Simplifying Physical Assumptions → Mathematical Model → Discretized Model → Numerical Solution → Interpretation of Results → Revisit Simplifying Assumptions.)
We should have a knowledgeable, informed expectation of how the system will respond to dis-
turbances, excitation, or loading based on practical experience, prudent observation, and one’s
understanding of the relevant physics.
In any given process, only a few physical mechanisms tend to dominate the behavior. Mak-
ing physically simplifying assumptions means deciding what physical mechanisms to retain (recall
the baby) and which to neglect (recall the bathwater). The modeler needs to retain the dominant
physics and neglect all higher-order effects, making the model as simple as possible, but no sim-
pler. Making appropriate simplifying assumptions is an art whose mastery comes only gradually
with continued experience.
After appropriate simplifying assumptions are made, application of a conservation or bal-
ance principle results in a differential equation for the boundary value problem. Finite element
methods provide a piecewise approximation to the solution of this differential equation. In con-
structing finite element models, the major inputs from the user are
1. the choice of finite element, which dictates the incremental solution interpolation between
nodes and
2. the specific prescription of boundary conditions for the global domain.
We’ve already learned that beam behavior can be approximated using one-dimensional and three-
dimensional models. Here we will use both and compare the results to experimentally measured
values.
Recall that boundary value problems are described fully by a governing differential equa-
tion coupled with an admissible set of appropriate boundary conditions. For static analyses, the
boundary conditions must remove all rigid body translations and rotations.
Upon applying admissible boundary conditions, we solve for displacements throughout
the global region. Most commercial finite element software then post-processes the displacement
solution to compute
1. reaction forces corresponding to applied displacement constraints and
2. internal stresses which may be displayed or contoured.
One goal in model development is to start with the simplest approximation that captures
the physics and provides perhaps crude, but reliable qualitative predictions of system behavior.
We will seek to iterate on the model to provide more quantitative results, and then to validate the
numerical predictions with experimental observations and test results. All models are approxima-
tions whose errors most commonly arise from
1. expectation failures,
2. faulty simplifying assumptions,
3. poor discretization of the domain,
4. poor choice of element interpolation function,
5. incorrect post-processing, or
6. misinterpretation of results.
To validate a numerical solution, it is prudent to perform initial benchmark solutions on repre-
sentative problems with simplified geometries and boundary conditions. Preferably, these prob-
lems are ones whose solutions are known either in closed form or bounded by analytical solutions
from above and below. Beyond this, all system modeling employing numerical simulation requires
model iterations. Based on previous results, subsequent analyses must be entertained that:
1. relax simplifying assumptions,
2. refine the discretization, or
3. employ higher-order interpolation between solution grid points.
Such model iterations must be performed until the solution converges and independent validation
is achieved.
Finite element analysis is a numerical approximation in which the global solution to a large-
scale boundary value problem is approximated by a series of finite range functions that are them-
selves lower-order Taylor series approximations that approximate the local behavior of the solu-
tion with sufficient accuracy. These local representations of the solution are based on the Lagrange
polynomial interpolation functions that characterize each finite element.
4.3 THE ART OF APPROXIMATION
Modeling and the approximations made therein are an art. When devising numerical approxi-
mations on top of the requisite simplifying assumptions, any model is never, strictly speaking,
correct, but (hopefully) correct enough. Nearly all numerical approximations in finite element
modeling are approximations to theoretical solutions characterized by high levels of continuity
and differentiability. But these approximations consist of piecewise, lower-order Lagrange poly-
nomial fits between grid points at which nodal equilibrium is explicitly satisfied. The levels of
continuity in displacement sacrificed in the weighted residual are the inherent penalty for the
approximation that allows average solutions to continuous differential equations to be obtained
from simpler algebraic matrix equations. In some crude sense, numerical analysis is the fine art of
lying by approximation.
The concept of piecewise polynomial interpolation of a solution over a finite domain is
rooted in appropriately truncated Taylor series expansions. In some defined neighborhood of the
nodes, a continuous function has an infinite number of truncated Taylor series approximations.
The applicable neighborhood over which each series is considered valid then depends on the order
of the truncation.
Figure 4.2: A generic function is shown over some global domain.
Consider the function f(x) = e^(−x) sin(3x), plotted in Fig. 4.2. In the vicinity of the point
x = 2, the second-order Taylor series approximation represents the function with some level of
accuracy in some prescribed neighborhood of x = 2, as shown in Fig. 4.3. The linear, first-order
Taylor series approximation is a reasonable representation over yet a smaller window. The zeroth-
order Taylor series allows for no interpolation. Then it follows that the neighborhood over which
an element’s interpolation function approximates a known solution with acceptable accuracy will
determine the appropriate element size you want in your discretized domain. Therefore, it follows
that you cannot know how to best discretize your domain without knowing what element inter-
polation, i. e., element type, you have chosen. As we will see, how well higher-order derivatives of
these interpolating functions represent the derivatives of the actual solution must be considered
in order to determine the accuracy of the stresses predicted by the numerical model.
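The shrinking neighborhood of validity is easy to see numerically. A minimal sketch for the function above, expanded about x = 2 (the derivatives are worked out by hand here, so treat them as part of the illustration):

```python
import math

x0 = 2.0

def f(x):
    return math.exp(-x) * math.sin(3.0 * x)

def f1(x):   # first derivative of f
    return math.exp(-x) * (3.0 * math.cos(3.0 * x) - math.sin(3.0 * x))

def f2(x):   # second derivative of f
    return math.exp(-x) * (-8.0 * math.sin(3.0 * x) - 6.0 * math.cos(3.0 * x))

def taylor(x, order):
    """Truncated Taylor series of f about x0, up to the requested order (0, 1, or 2)."""
    dx = x - x0
    terms = [f(x0), f1(x0) * dx, 0.5 * f2(x0) * dx**2]
    return sum(terms[: order + 1])

for dx in (0.05, 0.2, 0.5):
    exact = f(x0 + dx)
    errs = [abs(taylor(x0 + dx, n) - exact) for n in (0, 1, 2)]
    print(f"dx = {dx:4.2f}: |error| zeroth = {errs[0]:.4f}, "
          f"first = {errs[1]:.4f}, second = {errs[2]:.4f}")
```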
4.4 WHAT ARE WE APPROXIMATING?
The primary solution variables in FEA are displacements at discrete grid points we call nodes. A
discrete solution using the finite element method always delivers an approximate overall solution
in the entire domain characterized by
1. maintenance of force equilibrium at all nodes and
2. sacrifice of inter-element force equilibrium in neighboring finite elements that share par-
ticular nodal points.
Figure 4.3: Progressively higher-order truncated Taylor series approximations to an arbitrary function
model the function’s behavior well over progressively larger local neighborhoods.
The displaced configuration of an elastic body is precisely the set of nodal point displace-
ments superposed on the original undeformed configuration. The deformed body acts as an elab-
orate three-dimensional spring that, upon unloading, would return instantaneously to its original
size and shape. The set of nodal point displacements comprises a set of coefficients that each mul-
tiply basis functions whose collected weighted sum represents an approximation of the continu-
ous displacement field in three dimensions. Finite element analysis is, in one sense, a piecewise
Lagrange polynomial interpolation of this continuous field into many lower-order polynomials
whose continuity requirements at nodal points are dictated by the order of truncation of the local
Taylor series. It is, therefore, the order of the interpolation or shape function that dictates the
variation of displacement along the interior of the finite element.
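A minimal sketch of this idea in one dimension: nodal displacements multiply Lagrange shape functions, and their weighted sum is the displacement field inside the element. The nodal values below are illustrative, not taken from any example in this book.

```python
def linear_shapes(xi):        # 2-node element, natural coordinate xi in [-1, 1]
    return [0.5 * (1.0 - xi), 0.5 * (1.0 + xi)]

def quadratic_shapes(xi):     # 3-node element: end nodes plus a midside node
    return [0.5 * xi * (xi - 1.0), (1.0 - xi) * (1.0 + xi), 0.5 * xi * (xi + 1.0)]

def interpolate(shapes, nodal_u, xi):
    """Displacement inside the element as the shape-function-weighted sum of nodal values."""
    return sum(N * u for N, u in zip(shapes(xi), nodal_u))

u_linear = [0.0, 1.0]          # assumed nodal displacements, 2-node element
u_quadratic = [0.0, 0.7, 1.0]  # assumed nodal displacements, 3-node element

for xi in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(f"xi = {xi:5.2f}: linear u = {interpolate(linear_shapes, u_linear, xi):.3f}, "
          f"quadratic u = {interpolate(quadratic_shapes, u_quadratic, xi):.3f}")
```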
Now let’s consider an idealized finite element analysis as an example of:
1. developing and solving a mathematical model,
2. showcasing where particular errors made in finite element practice might occur, and
3. illustrating where theory embedded in finite element formulations is no guarantee that using
finite element analysis will result in an accurate simulation.
SimCafe Tutorial 5: Four-Point Bend Test on a T-Beam
The purpose of this case study is to showcase how the manner in which boundary
conditions are applied can change with the number of dimensions in the analysis. A single,
unique “appropriate” prescription of the boundary conditions may no longer exist in a three-
dimensional model vs. its one-dimensional analog. In the case study described here, multiple
prescriptions of a “simple support” lead to significantly different predicted bending stresses
even in the fairly benign circumstances encountered in a four-point bend test.
Follow the directions at https://confluence.cornell.edu/display/
SIMULATION/T-Beam to complete the tutorial.
Example 4.1: Four-Point Bend Test on a T-Beam
Consider that we are examining a long, slender T-beam loaded at two symmetric loca-
tions on its top surface while being simply supported at its ends along triangular knife-edge
supports as shown in Figs. 4.4 and 4.5. The load was applied with a hydraulic cylinder ap-
paratus. Strain gages mounted at several locations between the loading points (where the
moment was constant and the transverse shear force was zero) were monitored during the
test. We know that the beam is made of isotropic steel with a span of 30 in and constant
cross-sectional properties. We wish to accurately predict its peak bending stress.
Figure 4.4: A T-section beam cross section is pictured, along with a schematic of the loads ap-
plied in a four-point bend test.
Figure 4.5: The T-section beam is simply supported along triangular knife edges at each end.
We assume the load is quasi-static, the material remains in the elastic range, and the beam
is long and slender enough for Euler-Bernoulli beam theory to be a sufficient representation
of the deformation and internal stress response. We neglect contributions to the deformation
from shear deflection. We assume the vertical transverse loads from the hydraulic press can
be modeled as pressures over small contact patches. We also assume the simple supports at
the ends of the beam constrain the transverse displacements at the beam’s bottom flange in
contact with the knife-edge support.
Having chosen a one-dimensional beam element, we are assuming a cubic interpola-
tion of transverse deflection between node points to represent a global solution that is cubic.
One would then expect to generate exact results [Irons and Shrive, 1983] as there are no trun-
cation errors in the approximation. A linear distribution of normal, bending stress through
the depth of the section would then be the expected result. The simplest discretization is
shown in Fig. 4.6.
Figure 4.6: A one-dimensional finite element mesh using beam elements is loaded with idealized
point loads.
Comparing the normal bending stress results of the one-dimensional analyses
with those determined from strain gage test data from the lab allowed for some interesting
observations, as shown in Table 4.1. Here we report the stresses in dimensionless form where
the actual stress is normalized with respect to the characteristic bending stress
σ̂ = PLh/(2I).
This illustrates a further point about the finite element method. It is entirely devoid
of any reference to the chosen system of units. These are entirely at the discretion of the
user. One need only prescribe a consistent set of units in order to interpret results meaning-
fully. Because the units are discretionary, results from linear static analyses scale linearly with
load and dimensionless results are rendered independent of the actual specific load, section
properties, or material constants chosen.
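This scaling is a direct consequence of the linearity of the assembled equations K u = f. The toy sketch below (a two-spring assembly, not the T-beam model) solves the same stiffness matrix for two very different load magnitudes; the displacements scale linearly, so any dimensionless ratio of results is unchanged.

```python
import numpy as np

k1, k2 = 2000.0, 1000.0               # spring stiffnesses in any consistent units (assumed)
# Node 0 is fixed; the free DOFs are nodes 1 and 2 of a two-spring series chain.
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])

for P in (1.0, 1000.0):               # two very different load levels
    f = np.array([0.0, P])            # load applied at the end node
    u = np.linalg.solve(K, f)
    print(f"P = {P:7.1f}: u = {u}, dimensionless ratio u1/u2 = {u[0] / u[1]:.4f}")
```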
Table 4.1: Results of one-dimensional beam analyses
                                 σ_bottom/σ̂     σ_top/σ̂
Experiment                          0.1108       -0.2464
Euler-Bernoulli beam theory         0.1134       -0.2724
FEA: beam elements                  0.1134       -0.2724
Based on these results, in which physical experiment, simple beam theory, and finite
element simulation are in good agreement, we could conclude that we have obtained an
accurate answer for the peak bending stresses in the beam, and in particular, that the use of
beam elements is an appropriate choice for the finite element model. We further comment
that the alert reader should surmise that the peak stresses occur at the midpoint (x = 15 in).
Recall that models are approximations of reality, and it is quite possible that more than
one model is capable of producing an accurate result. It is well worth asking if a fully three-
dimensional analysis would also verify these results. This may, in fact, be what one expects at
first glance.
To investigate this question, a three-dimensional model is created in which the hy-
draulic loads are approximated as pressure loads over the small contact areas. We also assume
that the knife-edge supports at the left and right ends can be modeled by constraining the
transverse (z-direction) and out-of-plane (y-direction) displacements at all points along left
and right edges of the beam’s bottom flange (that is, along the edge lines parallel to the
y-direction), as shown in Fig. 4.7.
Figure 4.7: Simple support boundary conditions used in the finite element model are applied
throughout the cross section.
Preliminary results indicate that the stress variation is fairly linear through the cross
section, as depicted in Fig. 4.8.
Figure 4.8: Axial stress distribution in the three-dimensional beam model varies linearly through
the section.
But the normal bending stress at the extreme fibers predicted by the three-dimensional
finite element model does not agree with experiment as outlined in Table 4.2. The stresses
are under-predicted on top by 11% and on bottom by 52%.
Table 4.2: Results of one- and three-dimensional beam analyses
                                 σ_bottom/σ̂     σ_top/σ̂
Experiment                          0.1108       -0.2464
Euler-Bernoulli beam theory         0.1134       -0.2724
FEA: beam elements                  0.1134       -0.2724
FEA: solid elements                 0.0536       -0.2184
At this point we have arrived at an expectation failure, for the results from our “obvi-
ously correct” three-dimensional model do not match the accepted values from the previous
analysis. In the spirit of our pedagogical approach, it is now incumbent upon the student to
speculate as to why this has occurred, and likewise, it is imperative that instructors support-
ively coach their students toward a more correct understanding of the solution. For instance,
some possible reasons for our discrepancy include the following.
• The simple beam calculations were made with a cross section that neglected the fillets
between the web and flange. The solid-element model includes the fillets, resulting in
a stiffer structure. The error introduced by neglecting the fillets is less than 0.5% for
this geometry.
• Experimental errors, including reading of the applied pressure, locations of the sup-
ports and load application points, inaccurate modulus of elasticity, and strain gage er-
rors, caused the measured strains to be inaccurate. If only the three-dimensional model
were being compared to the experimental results, this might have been a reasonable
conclusion. However, the agreement of the simple beam calculations and beam finite
element model results with the experimental results may cast doubt on the accuracy of
some particular aspect of the solid-element model.
• There are not enough elements through the thickness in the solid-element model to al-
low for the bending stresses to be accurately calculated. While this is a possibility, closer
examination of the maximum and minimum stresses predicted by the solid-element
model shows that the neutral axis location (assuming a linear distribution of stress) is
more than 0.5 in away from the centroid of the cross section. This result suggests that
some other type of loading is being introduced into the beam.
Having considered these possible explanations and eliminated them as likely causes, we are led to suspect
the boundary conditions. The three-dimensional model made use of boundary conditions
whose equivalent effect is to allow only rotation about the y-axis (along the knife-edge sup-
port). Indeed, the boundary condition illustrated in Fig. 4.7 seems to be a good representation
of the physical constraint, as the real beam rests on a support that extends across the entire
flange, and one is predisposed to visualizing this simple rotation condition. Note that the
portion of the beam that extends beyond the support is not included in the finite element
model.
However, the boundary conditions restrict displacements that are possible with the
three-dimensional model and which exceed the conditions imposed by the actual knife-edge
constraint. In particular, the flange of the beam does not remain perfectly flat. Rather, it
rests freely on the support and is free to deform in the direction transverse to the beam’s
neutral axis and throughout the entire depth of the beam. Since the axial strain varies with
distance away from the neutral axis, the transverse strain due to Poisson’s ratio also varies.
This variation of transverse strain, not accounted for in one-dimensional analyses, results in
curvature of the flange. You can easily visualize this effect by bending a rubber eraser between
thumb and forefinger and noticing the curvature transverse to the applied bending.
It nevertheless still seems reasonable that some three-dimensional model should work.
We can modify the boundary conditions to allow the model to curve in the transverse direc-
tion. These alternative boundary conditions are relaxed to apply to the two corner nodes on
each end of the beam only. The deflected shape of a slice of the beam section with these new
boundary conditions applied is illustrated in Fig. 4.9. Although the deflections are greatly
exaggerated, the tendency of the beam flange to curve rather than sit flat on the support is
clearly evident. This relaxation of the constraint on the flange appears to have rather strong
effects on the predicted bending stresses.
Figure 4.9: Modified boundary conditions applied to the finite element model result in an altered
deformed shape of the beam at these supports.
As reported in Table 4.3, the new results for peak bending stresses using the relaxed
constraints are much closer to the experimental results than those using the stricter con-
straints.
Table 4.3: Results of alternate beam analyses
                                              σ_bottom/σ̂     σ_top/σ̂
Experiment                                       0.1108       -0.2464
Euler-Bernoulli beam theory                      0.1134       -0.2724
FEA: beam elements                               0.1134       -0.2724
FEA: solid elements, loosely-pinned supports     0.0946       -0.2230
FEA: solid elements, fully-pinned supports       0.0536       -0.2184
There are several important lessons to take away from this exercise.
1. With the loosely pinned supports, the error in maximum bending stress on the bottom
of the beam is reduced from 52% to 16%, while the maximum bending stress on the
top of the web is now even more accurate than the one-dimensional results.
2. For beams whose depth-to-span ratio is not small, Poisson effects on stresses may be
significant. Furthermore, these effects are accentuated because the end constraints are
placed along the beam flange surface which is not on the neutral axis. Beam theory
inherently assumes that all constraints are placed at the neutral axis.
3. The one-dimensional results may agree well with experiment because of the proximity
of the flange, where actual boundary conditions are placed in the experiment, to the
actual neutral axis.
4. While a three-dimensional model can account for out-of-plane effects, the precise form
of the boundary conditions can have strong effects on stresses.
5. Solid elements are not always the best choice for an analysis when this choice is made
irrespective of the boundary conditions. Often, realistic deformations result that may be
outside of the realm of one’s limited experience. With easy access to a part or assembly
modeled with a solid modeling program, it may seem logical to import and analyze the
structure with three-dimensional elements for no more important reason than ease for
the analyst.
6. Very often the part or assembly modeled with a solid modeling program has been cre-
ated without previous knowledge of where and how loads and boundary conditions will
need to be applied in a subsequent finite element analysis. Often, analysts will struggle
with wanting to import these solid model part or assembly files nonetheless. This may
lead or even force them to place less than optimal loadings and boundary conditions
where they otherwise might not.
7. Constraints that produce only negligibly small differences in strains can result in sig-
nificant differences in internal stresses.
8. In this example problem, an analysis with over 14,000 three-dimensional solid elements
produced inferior results compared to an analysis with four simple one-dimensional
beam elements.
Use of three-dimensional analysis does not guarantee more accurate results. Because
there are still discrepancies between the three-dimensional stress predictions and experimen-
tal results, we suggest that, as an exercise, students should further relax the boundary con-
straints along the flange to allow displacement along the beam’s longitudinal axis. Since the
flange is below the neutral axis, and there is bending, a compressive force will develop along
the bottom of the flange if both ends of the beam are fixed in the axial direction. In this way,
the extent to which this additional axial force does or does not affect the maximum bending
stress can be explicitly determined.
SimCafe Tutorial 6: Large Depth-to-Span Ratio Beams
The purpose of this case study is to illustrate how assumptions of planar behavior affect
numerical simulation of simple beam bending. The plane stress and plane strain assumptions
lead to bounds on the actual three-dimensional behavior. While this analysis is simple to
perform, the results are not so readily validated by those who are not ready to question when
the planar approximations are reasonable to apply. Such reasoning can lead analysts to con-
clude that numerical results have “converged” on a result which is inaccurate by over 100%.
The point of this exercise is to have analysts convince themselves that simplified theories are
often bounds, and that geometries that do not cleanly and unambiguously lend themselves
to either (thin or thick) limit may still be ones for which one limit is reasonable
and applicable. Also, an intuitive feel for making and applying these simplifications often still
eludes users of the finite element method. This case study is an exercise in boundary condi-
tion prescription, choosing appropriate finite element formulations, simulation convergence,
and applying caution in interpreting one’s results.
Follow the directions at https://confluence.cornell.edu/display/
SIMULATION/2D+Beam to complete the tutorial.
Example 4.2: Large Depth-to-Span Ratio Beams
A simply supported beam of rectangular cross section is point loaded at some arbitrary
point along its length as shown in Fig. 4.10. Consider a beam where L = 100 in, a = 25 in,
h = 8 in, b = 3 in, and load P = 1000 lbf.
Figure 4.10: A beam with rectangular cross section is simply-supported while an off-center point
load is applied.
While, in general, a finite element analysis will more accurately predict deflections
than, say, internal stresses (we will discuss this in more detail in Chapter 5), this example
illustrates a case in which even the deflections can be poorly modeled. We wish to examine
the implications of analyzing the problem with one-, two-, and three-dimensional element
formulations. Analysts must choose and defend their method of analysis including all im-
plications that dimensional space imposes on the results. Examples of the simplest meshes
for either plane stress or plane strain analyses using continuum elements are illustrated in
Fig. 4.11.
Analysts may suppose that Euler-Bernoulli beam theory applies for these long, slender
beams, presumably because it is the theory with which they are most familiar, and/or because
it seems to work in other apparently similar examples, such as our previous example with the
T-beam. However, while this assumption is intuitively appealing, we will see that it leads to
a variety of pitfalls. Perhaps other expectation failures are in the offing.
Let us proceed with a narrative of this example assuming that the Euler-Bernoulli
theory holds, although this has not yet been verified. Under this assumption, the curious
analyst, in light of the previous example, might simulate the beam with several models. Due
to the rectangular section of the beam, one-, two-, and three-dimensional models might be
appropriate.
Figure 4.11: Typical mesh discretizations using (a) linear 3-node triangular elements and (b)
bi-linear 4-node quadrilateral elements.
The analyst proceeds to simulate the beam using a variety of elements: one-dimensional
beam elements, plane strain triangles, plane strain quadrilaterals, plane stress triangles, plane
stress quadrilaterals, and three-dimensional brick elements (using what the analyst believes
to be sufficiently relaxed end constraints, as per the previous example). e results for max-
imum deflection are reported in Fig. 4.12. All results are reported in dimensionless form,
normalized by the characteristic deflection
PL3
EI
:
v
O
D
According to these results, and still believing that Euler-Bernoulli beam theory is cor-
rect, the analyst would see that the maximum converged transverse deflection predicted by
plane stress conditions underestimates the deflection predicted by Euler-Bernoulli beam the-
ory by nearly 50%; by comparison, the maximum converged transverse deflection predicted by
plane strain conditions overestimates the prediction of Euler-Bernoulli theory by 40%. The
analyst also realizes that the converged results from the three-dimensional brick elements
appear to be in agreement with the converged plane stress results, but that a coarse mesh in-
stance of the plane strain model seems to agree well with the expected Euler-Bernoulli beam
theory. How does the analyst sort out these mixed messages?
Figure 4.12: Maximum deflection predicted by finite element models assuming two-dimensional
plane strain, two-dimensional plane stress, three-dimensional, and idealized one-dimensional
behavior are compared with predictions from Euler-Bernoulli beam theory.
There is now a subtle point to make about this problem in comparison to the previous
problem with the T-beam. Whereas in the previous problem the neutral axis was near the
bottom of the beam, in this case it is not. Rather, it lies at mid-depth, and is hence far away
from the location of the support pins that are at the bottom of the beam. This gives our first
clue that regular Euler-Bernoulli beam theory is not applicable here.
Secondly, given that a fully three-dimensional analysis using solid elements can prop-
erly be specified to match the given boundary conditions, and given further that the three-
dimensional formulation can account for the Poisson effects and out-of-plane curvature
(which turn out to be significant), the three-dimensional analysis appears to give an accurate
result. Because the converged plane stress solution agrees with the three-dimensional solution,
and because the plane stress elements do not preclude out-of-plane Poisson effects, we have
even further indication that the three-dimensional (and hence two-dimensional plane stress)
solutions are valid.
We can use this example to raise the general point that simple beam theory is not suf-
ficiently accurate for beams with high depth-to-span ratios where the pinned boundaries are
placed at the bottom surface of the beam cross section. Nevertheless, too often even experi-
enced analysts take for granted that it applies universally as an accepted solution
for slender beam problems.
We point out that this type of qualitative reasoning is not trivial. The analyst’s ability
to undertake this reasoning correctly depends on two key issues that have been articulated
repeatedly throughout this text:
• the analyst is willing to anticipate, confront, and let go of misconceptions, even when
they appear to be intuitive and based on prior understandings; and
• the analyst has sufficient understanding of Mechanics of Materials and understands
how to think through the differences in the models considered.
The consequences of getting this analysis wrong, in this case, can be far reaching. The
analyst who insists on sticking with the Euler-Bernoulli beam theory not only will persist
with that error, but as a consequence might make other poor judgements, such as believing,
as is apparent in this case, that a relatively coarse mesh under plane strain conditions is also
generally correct! This could, in turn, lead the analyst to not perform sufficient mesh
refinement studies in other problems, and to accept other erroneous plane strain solutions.
In closing this example, we note that elementary beam theory would, in fact, be rea-
sonable if one were to take into account the nature of the support boundary conditions. Some
users will notice that the plane stress solutions converge to a maximum deflection nearly half
that obtained by simple beam theory. In this case, they may investigate the possibility of pin-
ning the end supports of the two-dimensional mesh at the mid-plane location of the neutral
axis. This, of course, lowers the area moment of inertia by close to a factor of two, bringing
the theory and two-dimensional analysis into very good agreement. Alternatively, they can
apply beam theory employing offset neutral axes, i. e., one-dimensional beam element line
models with a moment of inertia about some point well below the neutral axis as will be
the case for pin supports on the bottom edge of the beam. This exercise illustrates the rather
strong dependence of the solution of the boundary value problem on the precise prescrip-
tion of the support boundary conditions, as well as the bounding nature of two-dimensional
continuum approximations for truly three-dimensional problems.
When one accepts that the three-dimensional analysis is accurate, users can become under-
standably frustrated that a two-dimensional analysis is always an approximation whose accuracy
they have to be prepared to verify. While a three-dimensional analysis may be accurate for this
particular problem, it requires substantially more computational effort and cost than the corre-
sponding two-dimensional plane stress approximation.
4.5 LESSONS LEARNED
These two case studies point out several realities in application of the finite element method.
1. When one proceeds to higher dimensions, while Poisson effects, i. e., lateral dimensional
changes and out-of-plane warping, are captured, the precise manner in which classical
boundary conditions such as simple supports or clamped supports are applied can have
significant influence on the numerical results.
2. Improper boundary conditions can lead one to purposefully choose poorer element formu-
lations and coarser meshes in attempts to validate a solution.
3. While one- and two-dimensional idealizations help reduce computational effort, they must
be understood and substantiated.
These lessons illustrate several of the common errors encountered in using the finite element
method [Chalice Engineering, LLC, 2009]. These include:
1. using wrong elements for an analysis,
2. incorrectly prescribing boundary conditions,
3. incorrectly applying theory for solution validation,
4. assuming finite element analysis is conservative, and
5. using finite element analysis for the sake of it.
There are arguably only two types of errors made in numerical simulation: either in faulty
assumptions regarding the relevant physics governing the engineering system or discretization
error in the numerical solution algorithm employed. Good analysts must understand and take
responsibility for both. Modeling is, therefore, necessarily an iterative enterprise involving re-
assessing the validity of one’s physical assumptions as one homes in on an acceptable solution.
Because our numerical simulations are only approximations, this book has emphasized that users
should be skeptical of their solutions prior to validating them. Further interesting reading re-
garding modeling approximation and anomalies can be found in Deaton [2010, 2013], Dvorak
[2003], Fleenor [2009], Grieve [2006], and Kurowski [2001, 2002a,b,c].
C H A P T E R 5
Wisdom Is Doing It
In theory, theory and practice are the same. In practice,
they are not.
Albert Einstein
Do you know the difference between knowledge and
wisdom? Wisdom is doing it!
Dan Millman
A Peaceful Warrior
Sometimes it is said that the application of science or a theory is “as much an art as a science.”
The practice of the finite element method fits the bill. Several authors have collected their own
practical tips for application of the method. But, in general, books primarily about finite element
theory do not present details regarding use of the method in practice. Books that attempt to
offer practical advice about applying the method [Budynas, 2011, Kim and Sankar,
2009] almost always address issues that can be traced to the original list of ten most common
mistakes presented in Chapter 1. Consider that the method comprises the following.
1. Preliminary analysis, which may entail:
(a) simplifying the problem to obtain an analytical solution or estimation based on theory,
(b) obtaining theoretical solutions representing upper or lower bounds for the solution, or
(c) calculating the order of expected values for deflections and stresses and locations for
their respective maxima/minima.
2. Pre-processing, which usually includes:
(a) choosing an appropriate finite element formulation,
(b) discretizing the domain globally,
(c) refining it locally in areas of interest, and
(d) applying loads and displacement constraints.
3. Solving the equations.
4. Post-processing the solution variables to compute
(a) reaction forces and
(b) internal stresses.
5. Interpreting and validating numerical solution results.
Referring to the list of most commonly made mistakes reported in Chapter 1, we attempt
to correlate this list with the steps performed in the finite element method in Table 5.1. Five of
the ten common errors might be avoided by paying particular attention to a well-performed pre-
liminary analysis. Errors in pre-processing result in four of the typical errors. There is substantial
overlap as preliminary analysis directly affects the most substantial step in pre-processing, which
is discretizing the domain. Finally, three commonly made mistakes can be avoided with prudent
post-processing. The solution of the equations for nodal point equilibrium usually results in no
errors.
Note To The Instructor
While it is always important for students to know what a piece of computational software is doing on
their behalf, the steps of computing element equations, assembling
them into a global matrix equation, reducing its rank once the boundary conditions have been decided, and
solving the reduced set of equations will all be done for them in practice by commercial software. Because
of the relative importance of the other mistakes they will likely make, we question the utility of assigning
students problems requiring this mathematics. Many times, these are precisely the types of assignments that
are given in an introductory course in the finite element method. It may behoove all of us who teach the
method to realize that if we only have a single chance to speak to students on behalf of the method, we
should at least discuss the list of places they will likely make mistakes. We also might be of better service to
their education by assigning open-ended problems that require them to focus more on the steps where they
are most likely to err while we are available to intervene and correct any ongoing misconceptions and poor
practices before they become matters of routine.
Table 5.1: Mistakes listed by Chalice Engineering, LLC [2009] fall solely within portions of the
analysis process performed by the analyst
Analysis Step             Mistakes Made
Preliminary Analysis      1, 2, 3, 7, 9
Pre-processing            3, 6, 9, 10
Solution                  0
Post-processing           2, 4, 5
5.1 PRELIMINARY ANALYSIS
Often, engineers go wrong early by ignoring what may arguably be the most important step. This
is the preliminary analysis. This preliminary analysis takes place before one ever turns on the com-
puter. Preliminary analysis consists of asking the question “What does one expect to happen?” To
answer this question, one must apply mechanics theory. While most practical problems preclude
analytical solutions, one can often simplify the problem to the extent where the order of deflec-
tion and stresses can be estimated and one can identify where their respective maxima are likely to
occur. Sometimes simplifications of the real problem will lead to simpler solutions that may repre-
sent upper and lower bounds on deflections and stresses. Example 4.2 is a case in point. The finite
element model is then created and analyzed to obtain a more precise, albeit approximate, solution
whose quantitative results can be used for design purposes. When engineers neglect this step,
they place themselves at a distinct disadvantage when attempting to later validate their numerical
solution. It also places an analyst at a distinct disadvantage for pre-processing intelligently. It is
often claimed by students that if they could compute the analytical solution, they wouldn’t need
the finite element method. But, in the end, this is a convenient rationalization to avoid the work
involved in preliminary analysis. It is a crucial step if for no other reason than that it feeds so
heavily into the most important decision made in pre-processing: discretizing the domain.
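To make this concrete, the short sketch below (our own Python illustration, not code from the text) carries out the kind of back-of-the-envelope estimate meant here, using the elementary formulas for a simply supported, uniformly loaded beam; all numerical values are hypothetical placeholders.

    # Preliminary analysis sketch: order-of-magnitude estimates from elementary
    # beam theory for a simply supported beam under uniform load.
    # All numerical inputs are hypothetical placeholders.
    L = 2.0                   # span, m
    w = 5.0e3                 # uniform load, N/m
    E = 200.0e9               # Young's modulus, Pa
    b, h = 0.05, 0.10         # rectangular cross section, m
    I = b * h**3 / 12.0       # second moment of area, m^4
    c = h / 2.0               # distance to the outer fiber, m

    # Classical results: maximum deflection and maximum moment occur at midspan.
    delta_max = 5.0 * w * L**4 / (384.0 * E * I)    # m
    M_max = w * L**2 / 8.0                          # N*m
    sigma_max = M_max * c / I                       # Pa

    print(f"expect max deflection ~ {delta_max * 1e3:.2f} mm at midspan")
    print(f"expect max bending stress ~ {sigma_max / 1e6:.1f} MPa at midspan")

Even two significant figures from an estimate like this are enough to judge later whether a finite element result is in the right ballpark and whether its maxima appear where theory says they should.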
5.2 PRE-PROCESSING
Apart from preliminary analysis, the most common errors are made in pre-processing or estab-
lishing the numerical model of a real physical process. But preliminary analysis plays a critical
role in reasons analysts go wrong in creating their models. Recall our discussion in Chapter 4
regarding what we are approximating, specifically the discussion centered on Figs. 4.2 and 4.3 for
piecewise interpolation. Because there is no single, unique way to discretize the domain, creating
a good quality mesh is a skill often best acquired through experience. Creating a good domain
discretization requires first knowing something about the solution you are trying to approximate
over that domain. This is because the finite element method approximates this solution with piece-
wise lower-order polynomial interpolations (the finite elements themselves). For instance, if one
is trying to approximate a periodic solution using elements with linear interpolation, one should
be asking the question “How many linear segments are required to sufficiently model a sinusoidal
function over the domain prescribed?” So the decision of the element type, i. e., the solution in-
terpolation polynomial order, and the decision on mesh density are intimately tied together given
one knows something about the expected solution.
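That question can be answered numerically before any finite element model is built. The sketch below (our own Python illustration, not code from the text) measures the worst-case error of a piecewise-linear interpolation of a sinusoid as the number of segments grows.

    import numpy as np

    # How many linear segments does it take to represent a sinusoid acceptably?
    # Interpolate sin(2*pi*x) on [0, 1] with n equal segments and report the
    # worst-case error of the piecewise-linear interpolant.
    def max_interp_error(n, npts=2001):
        x_nodes = np.linspace(0.0, 1.0, n + 1)
        y_nodes = np.sin(2.0 * np.pi * x_nodes)
        x = np.linspace(0.0, 1.0, npts)
        return np.max(np.abs(np.sin(2.0 * np.pi * x) - np.interp(x, x_nodes, y_nodes)))

    for n in (2, 4, 8, 16, 32):
        print(f"{n:3d} segments: max error = {max_interp_error(n):.4f}")

The same trade-off between interpolation order and mesh density governs every finite element discretization: fewer, higher-order elements or more, lower-order elements can reach comparable accuracy.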
An equally important consideration is that the internal stresses in a deformed model are
related to strains, which are higher-order derivatives of the displacement field. This has conse-
quences that may best be illustrated by example. Consider a long, slender beam that is simply
supported at both ends and loaded uniformly along its length as shown in Fig. 5.1. Over the
entire domain, the exact bending moment varies quadratically and the exact shear force varies
linearly.
Figure 5.1: A uniformly loaded, simply supported, long, slender beam exhibits transverse deflection
that varies as a fourth-order polynomial along its span.
The exact solutions are

    v_exact = (w L^4 / (24 E I)) (s - 2s^3 + s^4),
    M_exact = (w L^2 / 2) (s - s^2),
    V_exact = (w L / 2) (1 - 2s),

where s = x/L. If we decide to model this beam with a mesh containing three cubic, one-
dimensional beam elements, we will effectively be choosing to model the quartic function with
three piecewise cubic functions. Consider a beam with length L = 1 ft uniformly loaded with
w = 24 lbf/ft, and EI = 1 lbf·ft^2. The normalized deflections predicted by this finite element
model are shown in Fig. 5.2, which clearly have excellent agreement with the analytical solution.
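These closed-form expressions are easy to evaluate directly. The brief sketch below (our own Python illustration, not part of the original example) tabulates the exact deflection, bending moment, and shear force for the parameters just given.

    import numpy as np

    # Exact solutions for the uniformly loaded, simply supported beam with
    # L = 1 ft, w = 24 lbf/ft, and EI = 1 lbf*ft^2 (the values in this example).
    L, w, EI = 1.0, 24.0, 1.0

    def v_exact(s):   # transverse deflection, s = x/L
        return w * L**4 / (24.0 * EI) * (s - 2.0 * s**3 + s**4)

    def M_exact(s):   # bending moment
        return w * L**2 / 2.0 * (s - s**2)

    def V_exact(s):   # shear force
        return w * L / 2.0 * (1.0 - 2.0 * s)

    for s in np.linspace(0.0, 1.0, 5):
        print(f"s = {s:.2f}: v = {v_exact(s):7.4f}, M = {M_exact(s):6.3f}, V = {V_exact(s):7.3f}")

At midspan the deflection magnitude is 0.3125, consistent with the familiar result 5wL^4/(384EI) for these parameters.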
Because bending moment varies linearly in a cubic beam element, the finite element pre-
diction for the bending moment (and therefore the bending stress) over the beam then models
a quadratic function with three linear segments. Finally, the linear variation of shear force is ap-
proximated by three piecewise constant segments. These FEA solutions are shown in Figs. 5.3
and 5.4, respectively. It is important to realize that while deflections are approximated well in this
case, bending stress and shear force will only be captured reasonably well by successively localized
refinements in mesh discretization.
Considering the correlation of the FEA prediction with the corresponding exact solution,
the higher the order of the derivative of the displacement one wishes to approximate, the poorer
the method does with any given mesh. The implication is that one generally needs a finer dis-
cretization to capture stresses accurately than to capture the deformed shape. In other words:
Figure 5.2: Displacement predicted by three piecewise continuous cubic interpolations very closely
approximates the single quartic analytical variation in deflection.
Figure 5.3: Bending moment predicted by the three element model is piecewise linear. This prediction
captures the quadratic moment variation less closely than the cubic displacement interpolation captures
the deformed shape.
Figure 5.4: Transverse shear force predicted by the three element model is piecewise constant. This
prediction captures the linear shear force variation less closely than the linear moment interpolation
captures the quadratic bending moment.
1. the global displacement solution is generally more accurate than the global stress solution;
2. the discretization necessary to capture stresses accurately is finer than that needed to capture
deformations accurately; and
3. what constitutes an acceptable mesh will be determined by whether one wishes a more
accurate answer for deformation or stress.
It is, therefore, absolutely essential to know what element type one is using to properly mesh the
problem domain and interpret one’s results.
5.2.1 THE CAST OF ELEMENT CHARACTERS
An excellent presentation and discussion of practical element formulations is given in Budynas
[2011]. Basically, one can place the majority of finite element formulations in one of five cate-
gories.
1. One-dimensional formulations for purely axial response (bar elements). Most elements are
two-noded and utilize linear interpolation functions.
2. One-dimensional formulations that account for axial and out-of-plane bending response
(beam elements). Most elements are two-noded and utilize cubic polynomial interpolation
functions for transverse deflections.
3. Two-dimensional solid elements that account for two-dimensional in-plane stress states
(plane stress/plane strain/axisymmetric solid or continuum elements). These elements may
be triangular or quadrilateral. Typically, linear and parabolic interpolation functions are
available. These effectively behave like two-dimensional analogs to one-dimensional bar
elements.
4. Two-dimensional elements that respond to out-of-plane loads and moments (plate or
shell elements). Plate elements effectively behave like two-dimensional analogs to one-
dimensional beam elements.
5. Full three-dimensional solid elements. Typical elements are tetrahedral and hexahedral
(brick) elements. Both linear and parabolic interpolation functions are offered in most ele-
ment libraries.
These element formulations are illustrated in Table 5.2.
With regard to specific element formulations:
1. One-dimensional element formulations cannot capture stress concentrations and should be
avoided where such stress risers are expected.
2. Two-dimensional element formulations can reduce the computational mesh size by orders
of magnitude when conditions of plane stress, plane strain, or axisymmetry apply. Two-
dimensional analysis should be considered in these limits.
3. For such two-dimensional element formulations, generally quadrilateral elements (of the
same order interpolation) outperform triangular elements.
4. Two-dimensional element formulations used to capture in-plane bending should contain
a minimum of three to five elements across the cross section perpendicular to the bending
axis. Generally, where in-plane bending occurs in two-dimensional analysis, one should
consider use of a higher-order, usually parabolic element interpolation.
5. One should avoid using three-noded triangular plate elements for out-of-plane bending as
they are particularly stiff. In such analyses, the number of degrees of freedom necessary for
a convergent solution will often dictate use of a higher-order element interpolation.
6. For three-dimensional solid analysis, hexahedral (brick) elements generally outperform
tetrahedral elements, but tetrahedral elements will often be used by automated mesh gen-
erators because they can most easily fill generally complex three-dimensional regions.
7. When using tetrahedral element formulations in three-dimensional analysis, it is preferred
to use a higher order, i. e., parabolic interpolation of displacements.
8. Three-dimensional solid elements often do not include rotational nodal degrees of freedom.
Therefore, modeling global rotation at a boundary becomes yet a further approximation.
Table 5.2: Basic finite element types
Element            Schematic
1D Linear          (see schematic in the original table)
2D Triangular      (see schematic in the original table)
2D Rectangular     (see schematic in the original table)
3D Tetrahedral     (see schematic in the original table)
3D Hexahedral      (see schematic in the original table)
5.2.2 GOOD AND BAD ELEMENTS
Good quality meshes typically employ:
• aspect ratios as close to unity as is feasible, i. e., equal side lengths in any single element;
• element shapes that avoid irregularities such as excessively small or large corner (skew) an-
gles, keeping angles close to the ideal 90° in quadrilaterals and 60° in triangles (a simple
element quality check is sketched after this list);
• gradual transition in element size. Rapid transitions in element size should be avoided
whenever possible; and
• mesh refinement where the stress gradients are large.
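As a simple illustration of screening elements against the first two criteria above, the sketch below (our own Python illustration, with made-up node coordinates) computes the side-length aspect ratio and corner angles of a single quadrilateral.

    import numpy as np

    # Quality screen for one quadrilateral element: side-length aspect ratio and
    # corner (skew) angles. Node coordinates are hypothetical, listed counter-clockwise.
    nodes = np.array([[0.0, 0.0], [2.0, 0.1], [2.1, 1.0], [0.0, 1.0]])

    sides = [np.linalg.norm(nodes[(i + 1) % 4] - nodes[i]) for i in range(4)]
    aspect_ratio = max(sides) / min(sides)

    angles = []
    for i in range(4):
        a = nodes[i - 1] - nodes[i]          # edge toward the previous node
        b = nodes[(i + 1) % 4] - nodes[i]    # edge toward the next node
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

    print(f"aspect ratio = {aspect_ratio:.2f} (ideally close to 1)")
    print("corner angles (deg):", [round(a, 1) for a in angles])

Commercial pre-processors perform checks of this kind automatically; the point is simply that both measures are cheap to compute and worth inspecting before trusting a mesh.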
Poor quality elements will inevitably appear in complex geometries, particularly when an analyst
employs automatic mesh generators. Typically, commercial software will flag such elements with
warnings and alert the user. An analyst is then responsible for adjusting the mesh locally, perhaps
manually if necessary, to assure good quality results.
In general, it is difficult to avoid having an arbitrarily oriented element in a region of con-
stant stress in a mesh. For this reason, element formulations are tested to ensure they can predict
reasonably constant stress values in such cases. This is called the patch test. All good elements
should be able to pass the patch test [Irons and Shrive, 1983].
A nice discussion of good and bad element behaviors is presented in Irons and Shrive [1983]
and Kim and Sankar [2009]. Many good meshing strategies are outlined by Budynas [2011].
5.2.3 APPLYING BOUNDARY CONSTRAINTS
Several rules of thumb should be considered when applying boundary constraints.
1. Be sure the boundary conditions applied to the model always remove all rigid body trans-
lation and rotation, i. e., “always tie down the horse.” Some commercial software packages
will attempt to solve such ill-posed problems and deliver no results. (One way to detect
leftover rigid-body modes is sketched after this list.)
2. Errors in boundary conditions can be subtle and hard to recognize. For example, consider
the two-dimensional constraints applied for the simple supports in Example 4.2. Because
they are not applied along the neutral axis of the beam, the apparent flexural stiffness of the
beam is nearly twice that one would calculate using elementary beam theory.
3. Applying idealized boundary conditions becomes more difficult in higher dimensions. For
instance, applying a simple support is straightforward in one-dimensional elements, but in
two and three dimensions, there are multiple ways to apply the constraint at the domain
edges. is same quandary occurs when applying any idealized boundary constraint such
as a clamped edge in two or three dimensions where rotational degrees of freedom are not
available and constraints on the local slope of the deformed structure cannot be explicitly
constrained.
4. Local results, particularly maximum deflections or stresses, can be very sensitive to small
variations in the application of boundary condition constraints.
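One way to detect a model that is still free to move as a rigid body is to count near-zero eigenvalues of the assembled global stiffness matrix. The sketch below (our own Python illustration using a trivial two-element bar assembly with an arbitrary stiffness) shows the idea.

    import numpy as np

    # Counting near-zero eigenvalues of the global stiffness matrix reveals
    # unconstrained rigid-body modes. Two axial bar elements in series, three
    # nodes, one axial degree of freedom per node; k is an arbitrary stiffness.
    k = 1.0e6
    ke = k * np.array([[1.0, -1.0], [-1.0, 1.0]])   # two-node bar element stiffness

    K = np.zeros((3, 3))
    for i, j in [(0, 1), (1, 2)]:                   # assemble both elements
        K[np.ix_([i, j], [i, j])] += ke

    def rigid_body_modes(K, tol=1.0e-8):
        eig = np.linalg.eigvalsh(K)
        return int(np.sum(np.abs(eig) < tol * np.max(np.abs(eig))))

    print("unconstrained model:", rigid_body_modes(K), "rigid-body mode(s)")

    K_fixed = K[1:, 1:]   # fix node 0 by deleting its row and column
    print("after fixing node 0:", rigid_body_modes(K_fixed), "rigid-body mode(s)")

In two or three dimensions the same count should drop to zero once all translations and rotations have been tied down.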
5.2.4 APPLYING EXTERNAL LOADS
Several rules of thumb should be considered when applying loads to the structure.
1. Point loads are idealized load applications and will generally result in unreasonably large
internal stresses in the vicinity of the application point. One should consider applying lo-
calized pressures when possible.
2. Usually, when concentrated loads are applied, the stresses resulting from statically equivalent
loads will be independent of the method of application a distance away from the load that
is of the order of the transverse dimensions of the structure locally. This is the principle of
St. Venant. It should be employed liberally in application of the finite element method and
in interpreting its results.
3. In an analogous manner as with prescribing boundary constraints, when one models do-
mains in two or three dimensions, element formulations may not have rotational degrees
of freedom. For such cases, application of a concentrated moment or couple is no longer
unique and not as straightforward as it is when using one-dimensional elements. In such
cases, one should consider experimenting with different possible prescriptions of the cou-
ple using local point loads and compare stresses a St. Venant’s decay distance away from the
concentrated moment.
4. Generally, the order of complexity of the solution to boundary value problems will increase
with the order of the loading. Given a specific finite element formulation, the more complex
the loading, the more approximate the solution. This was illustrated in the beam example
of Fig. 5.1. Such one-dimensional beam elements capture bending stress and shear forces
exactly when only point loads and couples are applied. These same bending stresses and
shear forces are only approximately predicted when distributed or more complex loading is
applied.
5.3 POST-PROCESSING
Computer graphics has achieved such a level of polish
and versatility as to inspire great trust in the underlying
analysis, a trust that may be unwarranted. (One can now
make mistakes with more confidence than ever before.)
R.D. Cook, D.S. Malkus, and M.E. Plesha
Concepts and Applications of Finite Element Analysis,
3rd Edition
There are several rules of thumb to consider when post-processing results.
1. Plotting deformed shapes of structures is a good way to spot particular errors in application
of boundary constraints.
2. Element stresses are most accurate at internal integration points where they are calculated.
These stresses are averaged at nodes shared by elements. The nodal-averaged stresses are
interpolated between nodes, contoured, and then, generally, artificially smoothed to create
contoured results. (A brief sketch of this nodal averaging step follows this list.)
3. When displaying stress contours, it is often good practice to contour element values directly
as well as the nodally averaged values. is is a good practice because:
(a) If the element stresses are observably discontinuous to the eye, then the stress gradients
are larger than the mesh is capable of predicting and one should refine the mesh.
(b) If the element stresses are not overly discontinuous, then the smoothed contours are
sufficient to represent the overall character of the solution.
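The averaging step mentioned in item 2 is straightforward; the sketch below (our own Python illustration with hypothetical stress values on a one-dimensional mesh) shows it, along with a crude check of how discontinuous the underlying element values are.

    import numpy as np

    # Nodal averaging of element stresses on a 1D mesh of four elements. Each
    # element reports one constant stress; nodes shared by two elements receive
    # the average of their neighbors. Stress values are hypothetical.
    element_stress = np.array([100.0, 140.0, 190.0, 250.0])

    n_nodes = element_stress.size + 1
    nodal_sum = np.zeros(n_nodes)
    nodal_count = np.zeros(n_nodes)
    for e, sigma in enumerate(element_stress):
        for node in (e, e + 1):          # the two nodes of element e
            nodal_sum[node] += sigma
            nodal_count[node] += 1
    nodal_avg = nodal_sum / nodal_count

    print("element stresses:", element_stress)
    print("nodal averages  :", nodal_avg)

    # Large jumps between adjacent element values, relative to their magnitude,
    # suggest the mesh cannot resolve the local stress gradient.
    jumps = np.abs(np.diff(element_stress)) / np.abs(element_stress[:-1])
    print("relative inter-element jumps:", np.round(jumps, 2))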
5.4 FURTHER RULES TO LIVE BY IN PRACTICE
One can establish a set of ground rules that can serve as a starting point for good practical finite
element analysis. Again, this list, while not exhaustive, attempts to address several of the most
common errors made in applying the finite element method.
1. Use the finite element method only when it is necessary, i. e., when the simplest formulae
outlined in Chapter 2 or other analytical methods are not generally applicable.
2. There are no units involved in formulation of the finite element method. An analyst must
always use dimensionally consistent units and interpret results accordingly.
3. The finite element discretization results in a model that is too stiff, implying:
(a) models upon which only displacement boundary conditions are applied will, in general,
result in stresses that are higher than the actual stresses;
(b) models upon which only force boundary conditions are applied will, in general, result
in displacements that are smaller than the actual displacements; and
(c) no general conclusions can be made once the boundary constraints are mixed, which
is most often the case.
4. One should not generally assume that finite element analysis is conservative.
5. It is not necessarily true that three-dimensional analysis outperforms two-dimensional anal-
ysis or that two-dimensional analysis outperforms one-dimensional analysis.
6. One should consider mesh refinements in regions where there are large gradients in material
stiffness such as dissimilar material interfaces or large discontinuities in load-bearing areas.
7. Consider applying the principle of St. Venant in order to avoid modeling geometric fea-
tures wherein the stress results are not of primary importance, e. g., details at or near load
application points.
8. Exploit global symmetry wherever and as much as possible.
9. When importing geometries from solid modeling software, it is important, when possible,
to create the solid model with design intent. By this, we mean that solid geometry entities
such as grid points and surfaces should be strategically created such that boundary condi-
tions can be placed on nodes and element edges that lie, respectively, on these solid entities.
This practice usually allows one to perform mesh refinements and iterations without the
inconvenience of re-applying the boundary conditions.
5.5 SOLUTION VALIDATION
Believe nothing, no matter where you read it or who
said it, unless it agrees with your own reason and your
own common sense.
Buddha
Nobody believes a model except the one who devised it;
everyone believes an experiment except the one who
performed it.
Albert Einstein
Perhaps not all experimentalists are so cautious nor all modelers as careless, but, as evidenced by
the common errors made by analysts, it can seem as if those who computationally model systems
can be led to a false sense of security in their numerical solutions. We like to recommend that
all numerical model results must, initially at least, be viewed through skeptical spectacles. If one
treats at least one’s initial findings as guilty until proven innocent, one will be less likely to accept
results that are incorrect.
In general, an engineering analysis can be accomplished either
1. theoretically, from first principles,
2. approximately, using numerical analysis, or
3. empirically, using discrete experiments.
Having all three, one might consider the mother lode. But, in any analysis, we should shoot for
results of one approach to be benchmarked or validated by one or both of the others. Here we
define validation as the process of determining the degree to which a model is an accurate rep-
resentation of the real world from the perspective of the intended uses of the model. In essence,
validation provides evidence that the correct model is solved to a given level of accuracy.
As we are attempting to prove the results of our numerical analyses innocent, we should
validate all results with either theoretical results or experimental data. While theoretical results
are often precluded in real applications, they may still have limited applicability when they represent
1. upper and lower bounds of the real solution or
2. the correct solution in only part of the global domain.
When using experimental results for validation, one should consider the following.
1. They are often considered the harbinger of truth.
2. Boundary constraints more easily realized in the laboratory can sometimes be difficult to
realize in a computational model, for example, machine compliance for a tensile test speci-
men.
3. Boundary constraints more easily realized in discrete analysis can sometimes be more diffi-
cult to achieve in the laboratory.
4. Experiments can be costly and time-consuming.
Numerical analyses should not be trusted without either theoretically or experimentally validat-
ing the solution. Neither should the results of numerical analyses be accepted without proper
examination of insensitivity to the mesh discretization. As in Example 4.2, a proper convergence
study should always be attempted. Correct results can only be obtained in the limit as the results
are no longer sensitive to the use of any finer discretization of the global domain. We term such
convergence mesh insensitivity. When the results fall within a specified insensitivity to the mesh
or element size, one can conclude the numerical analysis has converged. It is important to note
that this is a necessary but not sufficient condition for the computational results to be acceptable
or a correct solution to the problem posed. Recall that the solutions in Example 4.2 eventually
converged, but those assuming plane strain conditions were incorrect, i. e., they solved the wrong
problem.
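A convergence study of the kind described here can be reduced to a simple loop. The skeleton below is our own Python illustration; the function solve(n_elements) is a hypothetical placeholder standing in for a full FEA run that returns the quantity of interest, such as a peak stress.

    # Skeleton of a mesh-insensitivity (convergence) study.
    def solve(n_elements):
        # Placeholder: a contrived result that converges as the mesh is refined.
        return 100.0 * (1.0 - 1.0 / n_elements)

    def convergence_study(tol=0.01, n_start=4, max_doublings=8):
        n = n_start
        previous = solve(n)
        for _ in range(max_doublings):
            n *= 2
            current = solve(n)
            change = abs(current - previous) / abs(current)
            print(f"{n:5d} elements: result = {current:8.3f}, change = {change:.4f}")
            if change < tol:
                return current            # mesh-insensitive to within tol
            previous = current
        raise RuntimeError("did not reach mesh insensitivity; refine further")

    converged_result = convergence_study()

As the text cautions, passing such a study is necessary but not sufficient: a converged answer to the wrong problem, such as the plane strain assumption in Example 4.2, is still wrong.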
5.6 VERIFICATION
Extensive tests showed that many software codes widely
used in science and engineering are not as accurate as
we would like to think.
Les Hatton
Oakwood Computing
By verification, we refer to the process of determining that a model implementation accurately
represents the developer’s conceptual description. Verification provides evidence that the numeri-
cal model is solved correctly. It is tacitly assumed that commercial software is completely debugged
before a version is released. Les Hatton at Oakwood Computing has presented interesting find-
ings that indicate errors in software and programming, while small in number, do occur. This
sometimes happens in commercial FEA software. Most instances of which we are aware have
been in the post-processing software. While the primary variable solution is most often entirely
correct, sometimes listings and contour plot variables are not stored correctly and are subsequently
improperly displayed. Luckily, these instances are rare and not a primary cause of errors on the
part of the analyst. In any case, they can be caught by prudent use of preliminary analysis.
We believe that the majority of textbooks addressing introductory finite elements primarily
and predominantly emphasize the mathematical foundation and procedural application of the
method. We have emphasized, rather, a practical approach based on recognition that most errors
made in application of the method are in pre- and post-processing and are made mostly in model
development. Further interesting reading regarding issues of practical application of the finite
element method can be found in Dunder and Ridlon [1978], Dvorak [2003], Gokhale et al.
[2008], Morris [2008], and Sastry [2010].
Summary
The most common mistakes made by novice users of the finite element method involve
procedural steps performed explicitly by the user. Exercises in many textbooks emphasize mathe-
matical elements of the procedure performed strictly by the computer. We have introduced an al-
ternative examination of the method used in practice that focuses on a published list of commonly
made mistakes. Examination of the root causes of such mistakes reveals that they are intimately
tied more to a user’s command of underlying theory of strength of materials and less to a user’s
ability to reproduce mathematical computations undertaken by the processor.
We outlined a basic requisite skill set necessary to undertake use of the finite element
method. Then we explored excursions where first the underlying theory no longer holds, and
then ultimately where users are most likely to interface with the software in a faulty manner. Fi-
nally, we posited a short listing of rules for applying the finite element method in practice. While
this list is generally acknowledged by many practitioners, we find that it is typically relegated to
more of an aside and less of a central theme. We provided relatively simple examples to showcase
where mistakes are made when one does not follow practical rules of thumb from the start.
If the method is taught with more of this emphasis on expectation failures of newly learned
mechanics of materials, and more prudent attention to questioning computational complacency, it
is our hope that the occurrence of these common mistakes may be reduced. Also, an earlier intro-
duction to the method as a practical tool may prove to be a useful precursor to better and deeper
learning of the mathematics underlying finite element interpolation. We argue that, instead of
emphasizing steps performed well by the computer, becoming competent in finite element anal-
ysis should focus on the steps of the process where the analyst’s choices have the greatest impact on
the results.
Afterword
This book was written to supplement texts on FEA theory with prudent rules for practice
by focusing specifically on errors commonly made in industry. Based on our experience teaching
the method to undergraduates, we included examples where students have faltered in the past and
couched these in terms of expectation failures. After reading this book, if you have comments on
the presentation of the exercises or wish to suggest additional examples that emphasize expecta-
tion failures, feel free to contact the authors at [email protected]. Thank you, in advance,
for any input you have.
Bibliography
Allain, R. (2011). Just Enough Physics. Amazon Digital Services. 27
Bain, K. (2004). What the Best College Teachers Do. Harvard University Press. xiii, 5, 48
Bhaskaran, R. (2012).
SimCafe Wiki-Based Online Learning System.
In https://
confluence.cornell.edu/display/SIMULATION/Home. xiv
Bhaskaran, R. and Dimiduk, K. (2010). Integrating advanced simulations into engineering cur-
ricula: Helping students to approach simulation like experts. In NSF Award 0942706. xiv
Brooks, D. (2013). The Practical University. The New York Times, April 5:A23. xiii
Bruner, J. S. (1960). The Process of Education. Harvard University Press. xii
Budynas, R. G. (2011). Advanced Strength and Applied Stress Analysis. McGraw Hill. 69, 74, 77
Carson, E. and Cobelli, C. (2000). Modelling methodology for physiology and medicine. Academic
Press. 50
Chalice Engineering, LLC (2009). Ten common mistakes made in finite element analy-
sis. In http://www.chalice-engineering.com/analysis_basics/Top_Ten_mistakes.
html. 3, 7, 67, 70
Conly, S. (2013). Three Cheers for the Nanny State. New York Times, March 24:A26. xv, 1, 5
Cook, R. D., Malkus, D. S., Plesha, M. E., and Witt, R. J. (2002). Concepts and Applications in
Finite Element Analysis, 4th Edition. McGraw Hill. xiv
Deaton, B. (2010). Believing experiments vs. simulations. In http://onlyamodel.com/2010/
quote-experiments-vs-simulations/. 67
Deaton, B. (2013). Responding to skepticism toward your model. In http://onlyamodel.
com/2013/responding-to-skepticism-toward-your-model/. 67
du Toit, J., Gosz, M., and Sandberg, G. (2007). Minisymposium on the Teaching of Finite Ele-
ments at the Undergraduate Level. In 9th U.S. National Congress on Computational Mechanics.
9
Dunder, V. F. and Ridlon, S. A. (1978). Practical applications of the finite element method.
Journal of the Structural Division ASCE, 104:9–21. 82
Dvorak, P. (2003). A Few Best Practices for FEA Users. In http://machinedesign.com/
article/a-few-best-practices-for-fea-users-0904. 67, 82
Dym, C. (2004). Principles of Mathematical Modeling. Academic Press, 2 edition. 49, 50
Fleenor, M. (2009). Modeling Stupidity. The Teaching Professor, 23:2–4. 67
Gieck, K. and Gieck, R. (2006). Engineering Formulas. Gieck Verlag Publishing. 45
Gokhale, N., Deshpande, S., Bedekar, S., and Thite, A. (2008). Practical Finite Element Analysis.
Finite to Infinite Publishers. 82
Greenbaum, A. and Chartier, T. P. (2012). Numerical Methods: Design, Analysis, and Computer
Implementation Algorithms. Princeton University Press, Princeton, New Jersey. 50
Grieve, D. J. (2006). Errors Arising in FEA.
In http://www.tech.plym.ac.uk/sme/
mech335/feaerrors.htm. 67
Hake, R. R. (1998). Interactive Engagement vs. Traditional Methods: A Six Thousand Student
Survey of Mechanics Test Data for Introductory Physics Courses. American Journal of Physics,
66:64–74. DOI: 10.1119/1.18809. xii, 5
Hatton, L. (1999). Programming Technology, Reliability, Safety, and Measurement. In www.
leshatton.org/wp-content/uploads/2012/01/PTRel_IER298.pdf. 9
Irons, B. and Shrive, N. (1983). Finite Element Primer. Ellis Horwood Publishers. 56, 77
Jeremić, B. (2009). Verification and Validation in Geomechanics. In A Multidisciplinary Workshop
on Deformation and Failure of Geomaterials, Brindisi, Italy. 7, 45
Kim, N.-H. and Sankar, B. V. (2009). Introduction to Finite Element Analysis and Design. J Wiley
and Sons. xiv, 69, 77
Kurowski, P. (2001). Easily Made Errors Mar FEA Results. In http://machinedesign.com/
article/easily-made-errors-mar-fea-results-0913. 67
Kurowski, P. (2002a). How to Find Errors in Finite Element Models.
In http:
//machinedesign.com/article/how-to-find-errors-in-finite-element-models-
1115. 67
Kurowski, P. (2002b). More Errors that Mar FEA Results. In http://machinedesign.com/
article/more-errors-that-mar-fea-results-0321. 67
Kurowski, P. (2002c). When Good Engineers Deliver Bad FEA. In http://machinedesign.
com/article/when-good-engineers-deliver-bad-fea-1115. 67
Kurowski, P. (2013). Engineering Analysis with SolidWorks Simulation 2013. SDC Publications.
xiv
Lawrence, K. L. (2012). ANSYS Workbench Tutorial Release 14. SDC Publications. xiv
Lee, H.-H. (2012). Finite Element Simulations with ANSYS Workbench 14. SDC Publications.
xiv
Logan, D. L. (2001). Applications in the Finite Element Method. Brooks Cole Publishing Com-
pany. xiv
McDermott, L. C. (1984). Research on Conceptual Understanding in Mechanics. Physics Today,
37:24–34. DOI: 10.1063/1.2916318. xii, 5
McDermott, L. C. (2001). Oersted Medal Lecture 2001: Physics Education Research: The Key
to Student Learning. American Journal of Physics, 69:1127–1137. DOI: 10.1119/1.1389280.
xiii
Montfort, D., Brown, S., and Pollack, D. (2009). An Investigation of Students’ Conceptual Un-
derstanding in Related Sophomore to Graduate-Level Engineering and Mechanics Courses.
Journal of Engineering Education, 98:111–129. DOI: 10.1002/j.2168-9830.2009.tb01011.x.
xii, 5
Morris, A. (2008). A Practical Guide to Reliable Finite Element Modeling. J. Wiley and Sons.
DOI: 10.1002/9780470512111. 82
Papadopoulos, C. (2008). Assessing Cognitive Reasoning and Learning in Mechanics. In Amer-
ican Society for Engineering Education Annual Conference and Exposition. xii, 5, 33
Papadopoulos, C., Roman, A. S., Gauthier, G. P., and Ponce, A. (2013). Leveraging Simulation
Tools to Deliver Ill-Structured Problems in Statics and Mechanics of Materials: Initial Results.
In American Society for Engineering Education Annual Conference and Exposition. 22, 29
Papadopoulos, J., Papadopoulos, C., and Prantil, V. C. (2011). A Philosophy of Integrating
FEA Practice Throughout the Undergraduate CE/ME Curriculum. In American Society for
Engineering Education Annual Conference and Exposition. xii, 3, 8, 12, 33, 37, 45
Paulino, G. (2000). Warning: The Computed Answer May Be Wrong. In http://paulino.
cee.illinois.edu/courses/cee361/handouts/wrcabm.htm. 1, 5, 9
Philpot, T. A. (2010). Mechanics of Materials: An Integrated Learning System. J. Wiley and Sons.
27, 33
Pope, J. E., editor (1997). Rules of Thumb for Mechanical Engineers: A Manual for Quick, Accurate
Solutions to Everyday Mechanical Engineering Problems. ROTpub. 27
Prantil, V. C. and Howard, W. E. (2007). Teaching Finite Element Simulation in Conjunction
with Experiment and Theory in an Integrated Systems Design. In 9th U.S. National Congress
on Computational Mechanics. 45
Prantil, V. C. and Howard, W. E. (2008). Incorporating Expectation Failures in an Undergrad-
uate Finite Element Course. In American Society for Engineering Education Annual Conference
and Exposition, volume 1. ASEE, Curran Associates, Inc. 45
Public Broadcasting System–NOVA (1993). The Best Mind Since Einstein - Richard Feynman
Biography. Television Production. 49
Riley, W. F., Sturges, L. D., and Morris, D. H. (2007). Mechanics of Materials, 6th Edition. J
Wiley and Sons. 33
Sastry, S. S. (2010). Accepted Practices in Practical Finite Element Analysis of Struc-
tures. In http://www.nafems.org/downloads/india/webinar/mar_10/accepted_fe_
practices_nafems_india.pdf. 82
Solverson, R. (1953). Stress Concentrations in Fillets. Master’s thesis, California Institute of
Technology. 35
Steif, P. S. (2012). Mechanics of Materials: An Integrated Learning System. J Wiley and Sons. 27,
33
Streveler, R., Litzinger, T., Miller, R., and Steif, P. (2008). Learning Conceptual Knowledge in
the Engineering Sciences: Overview and Future Research Directions. Journal of Engineering
Education, 97:279–294. DOI: 10.1002/j.2168-9830.2008.tb00979.x. xii, 5
Thompson, E. G. (2004). Introduction to the Finite Element Method: Theory, Programming and
Applications. John Wiley and Sons. xiv
Young, W. C. and Budynas, R. G. (2002). Roark’s Formulas for Stress and Strain, 7th Edition.
McGraw Hill. 35, 45
Zienkiewicz, O. and Taylor, R. (2005). The Finite Element Method for Solid & Structural Mechan-
ics, 6th Edition. Elsevier, Butterworth-Heinemann Publishing. xiv
Zienkiewicz, O., Taylor, R., and Zhu, J. (2005). The Finite Element Method: Its Basis & Funda-
mentals, 6th Edition. Elsevier, Butterworth-Heinemann Publishing. xiv
Authors’ Biographies
VINCENT C. PRANTIL
Vincent C. Prantil earned his B.S., M.S., and Ph.D. in Mechanical Engineering from Cornell
University where he was awarded The Sibley Prize in Mechanical Engineering and held an An-
drew Dickson White Presidential Fellowship. He was a Senior Member of Technical Staff at
Sandia National Laboratories California in the Applied Mechanics and Materials Modeling Di-
rectorates for 11 years. His research interests lie in microstructural material modeling, dry gran-
ular materials, metals plasticity, finite element, and numerical analysis. He was jointly awarded
an R&D100 award for co-developing the Sandia Microstructure-Property Model Software in
2000 and held the Otto Maha Research Fellowship in Fluid Power at the Milwaukee School of
Engineering (MSOE) from 2006–2008. He joined the faculty in the Department of Mechanical
Engineering at MSOE in September 2000 where he presently specializes in finite element model
development, numerical methods, and dynamic systems modeling.
CHRISTOPHER PAPADOPOULOS
Christopher Papadopoulos earned B.S. degrees in Civil Engineering and Mathematics in 1993
at Carnegie Mellon University, and his Ph.D. in Theoretical and Applied Mechanics in 1999 at
Cornell University, where he was a National Science Foundation Graduate Research Fellow. He
is currently a member of the faculty of the Department of Engineering Science and Materials
at the University of Puerto Rico, Mayagüez (UPRM), where he has worked since 2009. He was
previously a member of the faculty in the Department of Civil Engineering and Mechanics at the
University of Wisconsin–Milwaukee from 2001–2008. Chris is currently the principal investiga-
tor of two NSF projects, one in appropriate technology and engineering ethics, and the other in
mechanics education. He has additional research interests in nonlinear structural mechanics and
biomechanics. Chris currently serves as Secretary and Executive Board Member of the ASEE
Mechanics Division and he is the chair of the Mechanics Committee in his department. He is
also a member of a campus committee that arranged for an art exhibit honoring the life of Roberto
Clemente to be donated to the UPRM campus from the Smithsonian Museum. Chris is a pas-
sionate educator and advocate for humanitarian uses of technology. In his free time he enjoys
swimming, cycling, running, cooking, and learning the languages of the Caribbean.
PAUL D. GESSLER
Paul D. Gessler is currently a graduate student pursuing his M.S. in the Mechanical Engineering
Department at Marquette University in Milwaukee, Wisconsin. He earned his B.S. in Mechan-
ical Engineering from the Milwaukee School of Engineering in 2012. Paul’s main interests are
using modeling and simulation at an appropriate abstraction level to improve the product design
and systems engineering process. He has experience with a wide variety of commercial FEA/CFD
codes and has written several bespoke codes for fluid, structural, and thermal system analysis. Paul
hopes to be a proponent of model-based design practices in industry throughout his career.
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=9110817.pdf&bkn=9110816&pdfType=book
|
Introduction to Engineering Research
Wendy C. Crone, University of Wisconsin-Madison
Undergraduate and first-year graduate students engaging in engineering research need more
than technical skills and tools to be successful. From finding a research position and funding, to
getting the mentoring needed to be successful while conducting research responsibly, to learning
how to do the other aspects of research associated with project management and communication,
this book provides novice researchers with the guidance they need to begin developing mastery.
Awareness and deeper understanding of the broader context of research reduces barriers to
success, increases capacity to contribute to a research team, and enhances ability to work both
independently and collaboratively. Being prepared for what’s to come and knowing the questions
to ask along the way allows those entering research to become more comfortable engaging
with not only the research itself but also their colleagues and mentors.
ABOUT SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis Digital
Library of Engineering and Computer Science. Synthesis Lectures provide concise
original presentations of important research and development topics, published
quickly in digital and print formats. For more information, visit our website:
http://store.morganclaypool.com
store.morganclaypool.com
Introduction to
Engineering Research
Synthesis Lectures on
Engineering, Science, and
Technology
Each book in the series is written by a well known expert in the field. Most titles cover subjects
such as professional development, education, and study skills, as well as basic introductory
undergraduate material and other topics appropriate for a broader and less technical audience.
In addition, the series includes several titles written on very specific topics not covered
elsewhere in the Synthesis Digital Library.
Introduction to Engineering Research
Wendy C. Crone
2020
Theory of Electromagnetic Beams
John Lekner
2020
The Search for the Absolute: How Magic Became Science
Jeffrey H. Williams
2020
The Big Picture: The Universe in Five S.T.E.P.S.
John Beaver
2020
Relativistic Classical Mechanics and Electrodynamics
Martin Land and Lawrence P. Horwitz
2019
Generating Functions in Engineering and the Applied Sciences
Rajan Chattamvelli and Ramalingam Shanmugam
2019
iv
Transformative Teaching: A Collection of Stories of Engineering Faculty’s Pedagogical
Journeys
Nadia Kellam, Brooke Coley, and Audrey Boklage
2019
Ancient Hindu Science: Its Transmission and Impact on World Cultures
Alok Kumar
2019
Value Rational Engineering
Shuichi Fukuda
2018
Strategic Cost Fundamentals: for Designers, Engineers, Technologists, Estimators,
Project Managers, and Financial Analysts
Robert C. Creese
2018
Concise Introduction to Cement Chemistry and Manufacturing
Tadele Assefa Aragaw
2018
Data Mining and Market Intelligence: Implications for Decision Making
Mustapha Akinkunmi
2018
Empowering Professional Teaching in Engineering: Sustaining the Scholarship of
Teaching
John Heywood
2018
The Human Side of Engineering
John Heywood
2017
Geometric Programming for Design Equation Development and Cost/Profit
Optimization (with illustrative case study problems and solutions), Third Edition
Robert C. Creese
2016
Engineering Principles in Everyday Life for Non-Engineers
Saeed Benjamin Niku
2016
A, B, See... in 3D: A Workbook to Improve 3-D Visualization Skills
Dan G. Dimitriu
2015
v
The Captains of Energy: Systems Dynamics from an Energy Perspective
Vincent C. Prantil and Timothy Decker
2015
Lying by Approximation: The Truth about Finite Element Analysis
Vincent C. Prantil, Christopher Papadopoulos, and Paul D. Gessler
2013
Simplified Models for Assessing Heat and Mass Transfer in Evaporative Towers
Alessandra De Angelis, Onorio Saro, Giulio Lorenzini, Stefano D’Elia, and Marco Medici
2013
The Engineering Design Challenge: A Creative Process
Charles W. Dolan
2013
The Making of Green Engineers: Sustainable Development and the Hybrid Imagination
Andrew Jamison
2013
Crafting Your Research Future: A Guide to Successful Master’s and Ph.D. Degrees in
Science & Engineering
Charles X. Ling and Qiang Yang
2012
Fundamentals of Engineering Economics and Decision Analysis
David L. Whitman and Ronald E. Terry
2012
A Little Book on Teaching: A Beginner’s Guide for Educators of Engineering and
Applied Science
Steven F. Barrett
2012
Engineering Thermodynamics and 21st Century Energy Problems: A Textbook
Companion for Student Engagement
Donna Riley
2011
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo
2011
Systems Engineering: Building Successful Systems
Howard Eisner
2011
vi
Fin Shape Thermal Optimization Using Bejan’s Constructal Theory
Giulio Lorenzini, Simone Moretti, and Alessandra Conti
2011
Geometric Programming for Design and Cost Optimization (with illustrative case study
problems and solutions), Second Edition
Robert C. Creese
2010
Survive and Thrive: A Guide for Untenured Faculty
Wendy C. Crone
2010
Geometric Programming for Design and Cost Optimization (with Illustrative Case Study
Problems and Solutions)
Robert C. Creese
2009
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey and Jeffrey W. Holmes
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Analog Multimedia
Explorations
Lina J. Karam and Naji Mounsef
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Digital Multimedia and
Robotics Explorations
Lina J. Karam and Naji Mounsef
2008
CAD/CAM of Sculptured Surfaces on Multi-Axis NC Machine: The DG/K-Based
Approach
Stephen P. Radzevich
2008
Tensor Properties of Solids, Part Two: Transport Properties of Solids
Richard F. Tinder
2007
Tensor Properties of Solids, Part One: Equilibrium Tensor Properties of Solids
Richard F. Tinder
2007
vii
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
2007
Project Management for Engineering Design
Charles Lessard and Joseph Lessard
2007
Relativistic Flight Mechanics and Space Travel
Richard F. Tinder
2006
Copyright © 2020 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations
in printed reviews, without the prior permission of the publisher.
Introduction to Engineering Research
Wendy C. Crone
www.morganclaypool.com
ISBN: 9781681737997
ISBN: 9781681738000
ISBN: 9781681738017
paperback
ebook
hardcover
DOI 10.2200/S00995ED1V01Y202002EST006
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING, SCIENCE, AND TECHNOLOGY
Lecture #38
Series ISSN
Print 2690-0300 Electronic 2690-0327
Introduction to
Engineering Research
Wendy C. Crone
University of Wisconsin–Madison
SYNTHESIS LECTURES ON ENGINEERING, SCIENCE, AND
TECHNOLOGY #38
ABSTRACT
Undergraduate and first-year graduate students engaging in engineering research need more
than technical skills and tools to be successful. From finding a research position and funding, to
getting the mentoring needed to be successful while conducting research responsibly, to learning
how to do the other aspects of research associated with project management and communica-
tion, this book provides novice researchers with the guidance they need to begin developing
mastery. Awareness and deeper understanding of the broader context of research reduces barri-
ers to success, increases capacity to contribute to a research team, and enhances ability to work
both independently and collaboratively. Being prepared for what’s to come and knowing the
questions to ask along the way allows those entering research to become more comfortable
engaging with not only the research itself but also their colleagues and mentors.
KEYWORDS
engineering research, technical communications, research ethics, project manage-
ment, mentoring
To my family.
Contents
xiii
1
2
Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Credits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Introduction to Engineering Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Who is This Book for? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 How Research is Different . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2.1 Engineering Research Defined . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3
Engineering Research Careers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Finding the Right Research Position for You . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1
2.2
Societal Implications of Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Identifying a Research Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Undergraduate Research Experiences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4 The Graduate School Application Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.1
Is Graduate School Right for You? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.2 The Graduate School Application Packet . . . . . . . . . . . . . . . . . . . . . . 20
2.4.3 The Application Timeline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4.4 Visiting a Graduate Program You Would Like to Attend . . . . . . . . . . 24
2.4.5 Getting Accepted into a Graduate Program . . . . . . . . . . . . . . . . . . . . 27
2.5
Funding of Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.5.1 U.S. Model of Research Universities . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Funding Your Graduate Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.5.2
Fellowship Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.5.3
2.6 Understanding the Organization of Your Research Group . . . . . . . . . . . . . . . . 32
xiv
3
4
5
Becoming a Researcher . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.1 Developing a Relationship with Your Research Mentor . . . . . . . . . . . . . . . . . . 35
3.2
Aligning Expectations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.3 Developing Expertise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.4 Developing Your Own Identity as a Researcher . . . . . . . . . . . . . . . . . . . . . . . . 43
Tracking Your Development as a Researcher . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.5
3.6
Being an Effective Team Member and Collaborator . . . . . . . . . . . . . . . . . . . . 48
3.7 Working with a Diverse Research Team . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.8 Developing Global Competency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.8.1 Other Resources on Global Competency . . . . . . . . . . . . . . . . . . . . . . . 64
3.9 Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Building on the Research of Others . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.1 The Literature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Valuing What Came Before You . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.2
Reading Journal Articles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.3
Reading Critically . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.4
Literature Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.5
Proper Citation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.6
4.7 Citation Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.8
Preparing a Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
4.9 Crediting the Work of Others . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Conducting Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.1
Scientific Habits of the Mind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.1.1 Other Resources on Scientific Method . . . . . . . . . . . . . . . . . . . . . . . 106
5.2 Developing a Research Proposal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
5.3 Getting Started and Staying Motivated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Project Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5.4
5.4.1
Project Management Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
5.5
Scheduling Committee Meetings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
5.6 Navigating Roadblocks and Obstacles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Research Ethics (Error, Negligence, Misconduct) . . . . . . . . . . . . . . . . . . . . . 126
5.7
5.7.1 Misconduct Case Studies and the D.I.S.O.R.D.E.R. Framework . . 129
5.7.2 Other Resources on Research Ethics . . . . . . . . . . . . . . . . . . . . . . . . . 132
Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
5.8
6
7
8
9
xv
Documenting Your Research Findings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
6.1
Keeping a Research Notebook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
6.1.1 Documenting Your Research in a Paper Laboratory Notebook . . . . 137
6.1.2 Documenting Your Research in an Electronic Research Notebook . 139
6.1.3 Regular Evaluation of Your Research Notebook . . . . . . . . . . . . . . . . 139
6.2 Data Storage and Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Avoiding Data Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
6.3
Sharing Your Research via Oral Communication . . . . . . . . . . . . . . . . . . . . . . . 147
Informal Conversations with Other Researchers . . . . . . . . . . . . . . . . . . . . . . 147
7.1
Informal Conversations with Nonspecialist Audiences . . . . . . . . . . . . . . . . . . 148
7.2
Engineering Outreach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
7.3
7.4
Poster Presentations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
7.5 The Research Talk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Resources on Oral Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
7.6
Sharing your Research via Written Communication . . . . . . . . . . . . . . . . . . . . 165
8.1
8.2
8.3
8.4
8.5
8.6
8.7
Translating Technical Topics in Written Formats . . . . . . . . . . . . . . . . . . . . . . 165
Basic Principles of Technical Writing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
8.2.1 Dealing with Writer’s Block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Standard Formats in Technical Writing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
8.3.1 Abstracts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
8.3.2 Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
8.3.3 Technical Writing for a Proposal, Thesis, or Journal Article . . . . . . . 172
Refining your Writing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
8.4.1 Writing Workshops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Issues Surrounding Authorship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Publishing your Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Resources on Written Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Safeguarding Your Personal Health and Happiness . . . . . . . . . . . . . . . . . . . . . 185
9.1 The Challenges You May Face in Graduate School
. . . . . . . . . . . . . . . . . . . . 185
9.1.1 Graduate Student Mental Health . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
9.2
Steps You Can Take to Be Healthier and Happier . . . . . . . . . . . . . . . . . . . . . 187
9.3 Getting Sleep . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
9.4 Getting Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
xvi
9.5
Eating Healthy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
9.6 Creative Outlets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Employing Mindfulness Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
9.7
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
9.8 Making Time for it All
Afterword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Author’s Biography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
Foreword
You may be dipping your toe into engineering research as an undergraduate or you may have
decided that a graduate degree in engineering is the right path to pursue. In either case, there are
a number of things that you can learn up front that will make your research experience a positive
one and will give you more time and capacity to be the most creative and innovative person that
you can be. Engineering research is a very different endeavor than the traditional coursework that
you have taken up to this point in your academic career. Students often learn about the broader
context of engineering research and the ancillary skills needed to be a successful researcher as they
stumble across the need for them. However, that unstructured process is wasteful and takes away
from opportunities for discovery and innovation. This book provides guidance and resources
on topics ranging from reading journal articles and responsible conduct of research to project
management and technical communication. It will serve as a supplement to your interactions
with research mentors, advisors, and peers as you engage in engineering research.
Student Perspective
Students who have recently begun engaging in research have fresh and
insightful viewpoints on both the context and process of research that is best
expressed through their own voices. Throughout this book you will find per-
spectives from students who are reflecting on their experiences conducting
research projects. These insights and comments are intended to give you a
review on research from a different lens.
Preface
Both research as an undergraduate and the transition into research as a first-year graduate stu-
dent are unlike most of the coursework and school experiences that one has had prior to entering
into such an undertaking. Although we carry our technical expertise with us, there are often gaps
in knowledge. Additionally, the research enterprise itself is foreign. Without the proper guid-
ance and support, many students flounder and struggle to set themselves on a successful course.
This seems wasteful of people’s time, disheartening to the individuals involved, and ultimately
adds to the attrition seen in graduate programs.
Several years ago, I co-authored an article on topics important to the broader context of
engineering research based on an undergraduate course in engineering research developed at the
University of Wisconsin–Madison.1 Additionally, summer undergraduate research experiences at campuses and national laboratories have developed accompanying workshops,2,3,4 courses,5,6 and even “boot-camp” experiences7 that help students find and understand the scientific literature and learn about the societal impact of engineering research, responsible conduct of research, communicating research findings, research careers, and the graduate school application process.
This broader training outside of the specific research experience has long been advocated
by the Council on Undergraduate Research as critical to “socializ[ing] students in the research
laboratory culture.8”
The semester-long Introduction to Engineering Research course developed for the Engineering Physics undergraduate degree program and taught at the University of Wisconsin–Madison
1Cadwell, K., Crone, W., 2008. Training undergraduates in the broader context of the research enterprise, ASEE Annual
Conference and Exposition, Conference Proceedings, 1364, 1–9.
2The Undergraduate Research Center for Sciences, Engineering and Mathematics and the Center for Academic and
Research Excellence, University of California at Los Angeles, http://college.ucla.edu/urc-care/. Accessed January
2008.
3Wilson, R., Cramer, A., and Smith, J. L., 2004. Research is another word for education, from Reinvigorating the Un-
dergraduate Experience: Successful Models Supported by NSF’s AIRE/RAIRE Program, L. R. Kauffman and J. E. Stocks, Eds.,
Council on Undergraduate Research, Washington, DC.
4The University of Washington Undergraduate Research Program. http://www.washington.edu/research/urp/,
accessed January 2008.
5The University of Virginia Department of Science, Technology, and Society Undergraduate Thesis Project, http://
www.sts.virginia.edu/stshome/tiki-index.php?page=Undergraduate+ Thesis accessed January 2008.
6Katkin, W., 2004. The integration of research and education: A case study of reinventing undergraduate education at
a research university, from Reinvigorating the Undergraduate Experience: Successful Models Supported by NSF’s AIRE/RAIRE
Program, L. R. Kauffman and J. E. Stocks, Eds., Council on Undergraduate Research, Washington, DC: 2004.
7Bahr, D. F. and Findley, K. O., 2007. An intensive ‘camp’ format to provide undergraduate research experiences to
first year students. Materials Research Society 2007 Fall Meeting: Session W4: Implementing New Course Materials and Strategies,
November 28.
8Merkel, C. A. and Baker, S. M., How to Mentor Undergraduate Researchers, Council on Undergraduate Research, Wash-
ington, DC, 2002.
addressed the topics above as well as the importance of diversity in research, research collaboration,
safety, and intellectual property. This course was later adapted and implemented at Washington
State University and University of Central Florida in a National Science Foundation funded ef-
fort. The evaluation of the implementations on their campuses showed that “there was a measur-
able increase in the understanding of undergraduate research in the students at all institutions.9”
The subsequent work performed showed that the mode of delivery did not influence the student
outcomes. “Similar gains in conceptual awareness between each course format and at each insti-
tution” were shown with a one-week faculty-led boot camp, a three-day peer mentor-led course,
and a semester-long faculty-led course.10 Thus, I believe that the usage of the content provided
in this book can be successfully adapted to a number of different delivery modes.
I wholeheartedly agree with the assessment of Schneider et al. that “By introducing stu-
dents to the nuances of the research environment, we believe that preresearch courses reduce bar-
riers to involvement and provide confidence and knowledge for all students who participate.11”
In our evaluations of the Engineering Physics degree program at the UW–Madison, upon which
this book is based, the students who completed the program rated their research confidence and
skill levels highly. The majority of students felt that they were able to make contributions to a
research team, explain their research topic to other engineers as well as non-engineers, docu-
ment their research, provide their peers with constructive feedback on their research projects,
and identify research misconduct issues. They also reported that they gained skills in conducting
a literature search, understanding journal papers, conducting a research project, working both
independently and collaboratively, utilizing scientific method, dealing with setbacks, giving and
receiving feedback, presenting information, and articulating questions.
These topics are also highly relevant to the first-year graduate student. Even if a student has
had a prior undergraduate research experience, revisiting topics can lead to deeper understanding
and further skill development. My goal is that students using this book, either independently
or while engaged in a research professional development program/course, will be able to gain
the skills they need to be successful and achieve a high level of confidence in their research
capabilities.
Wendy C. Crone
February 2020
9Burkett, S. L., Lusth, J. C., Bahr, D., Pressley, S., and Schneider, K., 2013. Three training programs for preparing
undergraduates to conduct research. Proc. American Society for Engineering Education Annual Conference, Atlanta, GA.
10Schneider, K. R., Bahr, D., Burkett, S., Lusth, J. C., Pressley, S., and VanBennekom, N., 2016. Jump starting research:
Preresearch STEM programs. Journal of College Science Teaching, 45(5), p. 13.
11Schneider, K. R., Bahr, D., Burkett, S., Lusth, J. C., Pressley, S., and VanBennekom, N., 2016. Jump starting research:
Preresearch STEM programs. Journal of College Science Teaching, 45(5), p. 13.
Acknowledgments
This book is based on my experiences as a research mentor, graduate advisor, instructor in
the College of Engineering, and an administrator in the Graduate School of the University
of Wisconsin–Madison. I am grateful to all of the undergraduate and graduate research assis-
tants who worked with me over the years, not only for their research contributions, but also
for how they helped me to develop and learn as a mentor. Although I have taught the course
“Introduction to Engineering Research” for more semesters than I can count, it would not have
been as successful without the help of a number of key individuals over the years. I would like
to thank Professors Greg Moses, Jake Blanchard, and Carl Sovinec as well as other colleagues
at the University of Wisconsin–Madison for their collaboration and shared vision in developing
the Engineering Physics degree program and the research sequence upon which this book is
based.
I also appreciate the opportunities I had to interact with students in the Engineering
Physics undergraduate program and especially for their phenomenal engagement, performance,
and feedback. I am especially grateful to former undergraduate and graduate students whose
perspectives, insights, and comments are included in the Student Perspectives. These are in-
cluded in the book with permission from Grant Bodner, Christopher Coaty, Aidan Combs,
Brian Cornille, David Czajkowski, Chelsea D’Angelo, Tom Dobbins, Chris Everson, Thomas
E. Gage, Brad Gundlach, Cale Kasten, Matt Klebenow, Brian Kupczyk, Geoff McConohy,
Hugh Ni, Blair Seidlitz, Dan Segal, and Vladimir Zhdankin. I would also like to thank my
father, Richard Crone, and husband, Alan Carroll, for proofreading drafts, and my editor, Paul
Petralia, for both his patience and nudging to help me get this book completed.
Dr. Katie Cadwell, who was a postdoctoral research associate with the University of
Wisconsin–Madison Materials Research Science and Engineering Center (MRSEC) and is
now a Professor at Syracuse University, helped to collect valuable learning resources in an earlier
expansion of the course. She also helped to make aspects of it accessible to students outside Uni-
versity of Wisconsin–Madison, and worked with Prof. Naomi Chesler and me on a related
project connected to the undergraduate engineering design experience. I appreciate the funding
support received from the National Science Foundation through the MRSEC (#DMR-0079983
and #DMR-0520527) and the University of Wisconsin–Madison College of Engineering 2010
grant for Transforming Undergraduate Education in the College of Engineering. Any opinions,
findings, and conclusions or recommendations expressed in this material are those of the author
and do not necessarily reflect the views of the National Science Foundation or the University
of Wisconsin–Madison.
I had the pleasure of serving in several different administrative roles in the Graduate
School at the University of Wisconsin–Madison for five years. These roles included Associate
Dean for Graduate Education and Interim Dean, where I provided leadership for all aspects
of the graduate student experience, including admissions, academic services, academic analysis,
funding, professional development, and diversity. At the time, the University of Wisconsin–Madison Graduate School had a diverse graduate student cohort of 9,000 in over 140 Master’s and 100 doctoral fields across the University. I learned an immense amount from my colleagues
in the Graduate School and my faculty and staff colleagues across the University who devote
time and energy to graduate education. These experiences and interactions also allowed me to
see graduate education from a broader perspective beyond that of the graduate programs in the
College of Engineering where I have served as a graduate advisor and research mentor for over
20 years. This book draws from this range of experiences to provide the best guidance and advice
I can give to those entering engineering research at the undergraduate or graduate level.
Wendy C. Crone
February 2020
Credits
Table 3.1: Adapted with permission from C. Eugene Allen, Emeritus Dean and Distinguished Teaching Professor, and Former Associate Vice President for International Programs, Vice President and Provost, University of Minnesota, Minneapolis, MN.

Sec. 3.7: Strategies for recognizing and overcoming bias adapted with permission from Molly Carnes, Eve Fine, Manuela Romero, and Jennifer Sheridan. “Breaking the Bias Habit.” Women in Science and Engineering Leadership Institute (WISELI), University of Wisconsin–Madison, https://wiseli.wisc.edu.

Figures 4.1–4.4: Reproduced from Gall, K., Dunn, M. L., Liu, Y., Labossiere, P., Sehitoglu, H., and Chumlyakov, Y. I. (2002). Micro and macro deformation of single crystal NiTi. Journal of Engineering Materials and Technology, 124(2):238–245, with the permission of ASME.

Questions on page 85: Reproduced from Maboudian, R. and Howe, R. T. (1997). Critical review: Adhesion in surface micromechanical structures. Journal of Vacuum Science and Technology B: Microelectronics and Nanometer Structures Processing, Measurement, and Phenomena, 15(1):1–20, with the permission of the American Vacuum Society.

Page 97: From The Thinker’s Guide to Engineering Reasoning: Based on Critical Thinking Concepts and Tools, 2nd ed., (“the work”) Richard Paul © 2013. Used by permission of Rowman & Littlefield Publishing Group. All rights reserved.

Courtesy of Springer Nature.

Sec. 5.7.1: D.I.S.O.R.D.E.R. Framework used with permission of Lisa Newton, Professor Emerita of Philosophy, Fairfield University.

Page 129: Reprinted by Permission of the National Society of Professional Engineers (NSPE). www.nspe.org
Page 153: Tips for interacting with the public from Bringing Nano to the Public: A Collaboration Opportunity for Researchers and Museums by Wendy C. Crone, 2006. Reprinted with permission of the Nanoscale Informal Science Education Network, Science Museum of Minnesota, St. Paul, MN.
Figure 7.1: From Escape from the Ivory Tower by Nancy Baron. © 2010, by the author. Reproduced by permission of Island Press, Washington, DC.

Assignment 8-1: Laboratory-to-Popular assignment adapted with permission from Caitilyn Allen, Professor, Department of Plant Pathology, University of Wisconsin–Madison.

Sec. 8.4.1: Writing Workshop and “Some Suggestions for Responding to a Colleague’s Draft” developed in collaboration with Bradley Hughes, Director of the University of Wisconsin–Madison Writing Center.

Contribution list in Sec. 8.5: From Responsible Conduct of Research by A. E. Shamoo and D. B. Resnik. © 2009 Oxford University Press. Used by permission.

Page 203: Photo by Edna M. Kunkel.
CHAPTER 1
Introduction to Engineering Research
1.1 WHO IS THIS BOOK FOR?
The information provided within these chapters is designed for both first-year graduate stu-
dents and undergraduate students engaging in on-campus or summer research opportunities.
For those already in a graduate program, some portions of Chapter 2 will not be relevant. For
those just beginning to consider graduate study as a future path, the later chapters will provide
you with important information for the undergraduate research you are currently undertaking as well as
some insights on what is ahead of you as you transition into graduate school.
Rather than being an exhaustive resource, this book is meant to supplement your interac-
tions with research mentors, advisors, and peers. There are also numerous other references cited
and bibliographies provided that will help you to delve into more detail on particular subjects.
You should strive to seek out multiple perspectives on critical topics of importance to you as you
move through your engineering research experience.
1.2 HOW RESEARCH IS DIFFERENT
Engineering research is a very different endeavor than the traditional coursework that you have
taken up to this point in your academic career. Research is a process of discovery, which means
that it has a very open-ended quality as a result. This open-endedness may not be something you
are as initially comfortable with depending on your background, but the prior knowledge and
the skills that you have developed thus far are still valuable and will help you make a contribution
with your research.
Discovery is not done in a vacuum. There is nearly always some prior work in an area
or related field that can help us build a foundation from which we can launch our work. The
research of today builds upon the findings of yesterday. You may find that you are building on
work ranging from 5 months to 50 years ago, so understanding what has come before is an
essential part of the process. If your purpose is discovery, then there is no point in rediscovering
something that is already known and published. Sometimes, however, as part of the process,
you may want or need to replicate the work of others, either as a way to learn a technique or to
confirm those results.
Research should also be a mentored experience. You will have many people—your peers,
those a bit ahead of you in their studies, staff, and faculty—who you will interact with and rely
on for direction, advice, and support. In contrast to the image that many have of research, it
is not a solitary activity. In fact, much of the engineering research that is done today occurs
in a team environment. These teams are frequently interdisciplinary and may include people
from a range of engineering and non-engineering disciplines. Working with people from other
disciplines helps us to tackle challenges and open research questions that we might not otherwise
be able to make progress on alone. The research group that you work within may be a handful of
people or an international collaboration that numbers in the hundreds. Either way, cultivating
the relationships within this group and connecting with people related to your research, both on
and off campus, will be a critical factor in your success.
The undertaking of research is also something we do with our colleagues’ and society’s trust
that we will behave ethically. As individuals within a broader community of researchers, we have
the obligation to be responsible and honest. This is required in all aspects of the work, from the
design of an experiment to the publication of the results. Our analysis must be conducted with
an impartial eye; the results must be presented without manipulation; and, discussion of our
research with the broader community of scholars and the public must be done with integrity.
With these principles in mind, you will have the best opportunity to create new knowledge,
advance understanding in your field, and become a respected member of your discipline.
Ultimately, your goal will be to make what is often referred to as a “unique contribution”
to your field. This may seem a daunting task as you enter into research, but as you gain more
knowledge about your research area you will soon find that there are a number of things that are
not known. You, with the help of your research mentor, will be able to identify an area where
you can pursue the creation of new knowledge. It will likely leverage the work of those who
have come before you, both in the research group you have joined and in the field as a whole,
but you will find a way to make a contribution that is your own. Eventually, you will find that
you begin to surpass your research mentor in specific knowledge areas and can begin to think
independently about new research endeavors to undertake.
1.2.1 ENGINEERING RESEARCH DEFINED
When we hear the word research we often think of it as being synonymous with acquiring new
knowledge or even developing some “objective truth.” Engineering conjures up images in our
mind of applications ranging from computers to bridges. For many, engineering implies im-
proving our way of life or driving technological advancement. When the term “engineering
research” comes up, it may be hard to reconcile for some. Is it the creation of new knowledge
exclusively? Is it the application of new science to existing applications? Is it the development
of new applications? The answer is all of the above and more.
The basic commonality we find in all engineering research is that people are trying to
answer questions that have not been asked or answered before, to solve problems that humanity
will find useful in some way. We do this through a process of inquiry that relies on careful
exploration using scientific method. The answers we find may be immediately applicable or they
may add to a base of knowledge that will only see application at a much later date.
There is a spectrum of research from basic to applied. In many cases the same type of basic
research might be found in both science and engineering departments and collaborations across
these disciplines are common in such circumstances. In a report from the National Academy of
Engineering, “Basic research in engineering is by definition concerned with the discovery and
systematic conceptual structuring of knowledge.1” In contrast to basic research, applied research
is much more closely tied to an immediate need and may even be conducted jointly or under a
research contract with a company. Across this broad spectrum, an engineering research project
might be motivated by some esoteric curiosity tied to the long-term needs of humanity or by an
immediate need in a particular community. Regardless of the origins of the research question,
the tools we use to answer it, and the time frame in which the results will be applied, these
are all a part of the spectrum of engineering research that you will find happening on a day to
day basis in universities, national laboratories, and industry.
ASSIGNMENT 1-1:
INDIVIDUAL ASSIGNMENT – ENGINEERING RESEARCH DEFINED
Talk to at least three individuals spanning the spectrum of experience with research related to
your general field of interest (e.g., undergraduate student researcher, graduate student researcher,
postdoctoral researcher, academic staff researcher/scientist, faculty member). Ask these individ-
uals to discuss the topic of “engineering research” with you. What makes a good research ques-
tion? How do they approach conducting their research? What do they find interesting/exciting
about research? Write a 500-word summary of what you have heard that includes both the sim-
ilarities and differences between the answers obtained from your discussions.
1.3 ENGINEERING RESEARCH CAREERS
There are a wide range of research careers available to people with an advanced degree in en-
gineering. These careers occur most prevalently in industry, government, and academic sectors.
Even within one of these sectors the types of jobs that involve research can vary dramatically.
One way to explore the range of options for engineering research careers is to take a look
at current job postings in your area of study. Looking at the position descriptions can give you an
idea of the work activities and job responsibilities. It will also give you an idea of qualifications
and prior experience expected. Some positions may require a minimum education level of a
1National Academy of Engineering. Committee on Forces Shaping the U.S. Academic Engineering Research Enterprise,
1995. Forces Shaping the U.S. Academic Engineering Research Enterprise. National Academy Press.
Master’s degree (M.S.), a doctor of philosophy (Ph.D.), and/or a number of years of professional
experience. Finding the kinds of positions that might interest you in the future will provide you
with the components of a roadmap for the preparation you will want to pursue.
Although you will likely be most familiar with a traditional faculty position in academia
from your experience as a student, the range of research-related careers within the confines of
academia is quite broad. Within the faculty ranks alone, the emphasis on research varies be-
tween positions depending on the type of institution. A four-year college, for instance, might
stress engagement with undergraduate research but have lower levels of expectation on research
productivity and a larger amount of time committed to teaching. The research and teaching
expectations at research-intensive institutions will vary, but they usually stress research with
graduate students, and have a higher level of expectation for obtaining grant funding and pro-
ducing publications. At larger research-intensive academic institutions there are also a number
of non-tenured research positions to be aware of. These often carry titles like instrumentation
specialist, scientist, and research professor.
Graduate education both at the M.S. and Ph.D. levels is valuable for people interested
in a variety of career paths. Ph.D. recipients don’t just end up in academia, but are also sought
after by industry and government for their expertise and ability to be innovators and thought
leaders.2 Research laboratories span a range of institutions from government laboratories, some
with defense-related missions (e.g., Sandia National Laboratories, National Renewable Energy Laboratory, Argonne National Laboratory, and U.S. Naval Research Laboratory), to other non-
governmental research labs, some of them with connections to or histories with universities (e.g.,
Southwest Research Institute, Draper Laboratory, MIT Lincoln Laboratory). Many medium
to large companies also have a research (or research and development) department, unit, or seg-
ment of the organization—a few of these being quite large and well-known research enterprises
(e.g., IBM Research, GE Global Research, ExxonMobil’s Research and Engineering Technol-
ogy Center, DuPont Experimental Station). The types of expertise needed and range of jobs
available are quite broad as you may imagine.
In some engineering disciplines, there is a growing expectation that a person complete a
postdoctoral experience after completing their Ph.D. and before obtaining that first “permanent”
position. Postdoctoral research positions are most prevalent in academic settings, particularly large research universities, although they are also available in some industry and govern-
ment sectors. These positions are usually full-time paid jobs. Some fellowship opportunities are
also available for postdoctoral research positions, both in academia and national laboratories.
2Council of Graduate Schools, 2013, Open Doors with a Doctorate.
ASSIGNMENT 1-2:
INDIVIDUAL ASSIGNMENT – INVESTIGATING ENGINEERING
RESEARCH CAREERS
Identify an engineering research job sector that you would like to learn more about: industry,
government, or academia. Find three different job advertisements in that job sector using on-
line resources such as monster.com, usa.gov, and chronicle.com. Ideally, these job postings
should advertise a research position related to your area of study. Compare and contrast the
positions. Consider things such as the education and prior experience required, duties and re-
sponsibilities that the position would entail, and location of the job. Choose the position that
you find most interesting and identify the kinds of things you would need to do in the next 5–10
years to make yourself an ideal candidate for this position.
CHAPTER 2
Finding the Right Research Position for You
2.1 SOCIETAL IMPLICATIONS OF TECHNOLOGY
Engineers help to shape the world and our personal experiences in it. Engineering design and
research impacts nearly every aspect of our lives: the indoor plumbing and sanitary systems we
take for granted, the transportation vehicles and networks we utilize to move about our commu-
nities and the world, the structures we live and work in, our communication and entertainment
systems, the power production and distribution networks we rely on, and the medications and
devices that keep us healthy, just to name a few. Engineers also have the ability to address the
grand challenges that face society and improve the human condition by doing so. These chal-
lenges exist throughout the array of human experience, from ensuring a stable food supply and
clean drinking water for the world’s population, to the further development of artificial intelli-
gence and treatment of neurological disease by reverse engineering the brain.
It is an exciting and important moment in human history for engineers. The world depends
on us, both to maintain our current standard of living and to innovate in new and unprecedented
ways to bring us into a better future. We have the capability and the responsibility.
ASSIGNMENT 2-1:
INDIVIDUAL ASSIGNMENT – YOUR ENGINEERING GRAND
CHALLENGE
Read the National Academy of Engineering’s “Grand Challenges for Engineering” list1 and
identify the challenge most closely related to your research interests. Summarize the challenge
and describe the ways in which research in your field of study has already impacted this topic
and how you imagine future research can make an impact on this challenge topic.
1National Academy of Engineering, “NAE Grand Challenges for Engineering,”
http://www.engineeringchallenges.org.
2.2 IDENTIFYING A RESEARCH PROJECT
Sometimes students think that in order to engage in research you have to come up with the idea
yourself at the very start. This is quite a challenge if you are new to a field and have little prior
experience with research. Identifying a research project that you can undertake usually involves a
very different process. Experienced researchers are often looking for students to help them with
new and ongoing research projects. So, what you are actually seeking is a match between your
interests and existing research projects that are available.
Student Perspective
“[R]esearch never really ‘ends.’ What I mean by this is that even when
a group gets a paper published on an experiment it doesn’t end there. Fre-
quently, the group continues to do research on the same topic using the ideas
and results from their last paper. I guess this does make sense to me, but again
it was something I never really thought about. In some way, I suppose I as-
sumed that after one project was finished, they would look for something new
and exciting. But, once an experiment is completed, there is almost always
further research to be done to learn even more about the topic.”
Whether you are an undergraduate student or a graduate student, you should enter into a
research project that meshes well with your interests. Don’t just take on a project for the money
or because it is the first one offered to you. Cast your net wide and look for a variety of projects
that might fit your interests as well as a research mentor who would be a good match for your
personality and needs. After you find the right research project to pursue, your intrinsic interest
will motivate you through the difficult parts and ultimately help you to be more successful.
In order to identify research projects and mentors that are a good fit, first identify the
areas of engineering that interest you. Explore your options by reading about current research in
those areas and talking to people who have experience with ongoing research. Utilize a variety
of sources including websites and recently published journal papers. As you begin to identify
individual faculty members you might be able to work with, try to engage in face-to-face or
email conversations with these potential research mentors. It is easy to be energized by someone’s
enthusiasm for their work, but don’t fixate on the first thing you learn about. Look broadly and
determine what options might be available to you. Even if you are entering a summer research
opportunity, rather than a new degree program, often there are choices of projects available to
you and faculty mentors within the program that you can identify as your top choices.
Some people stumble across the perfect research position immediately, but often students
need to make some effort to both identify potential research mentors and find ones who are
willing to add them to their research group. The availability of research funding can often be a barrier.
If you are an undergraduate student looking for research experience, you might choose to do
this work for credit rather than pay. That option may open additional opportunities that would
otherwise not be possible. Graduate students frequently have the challenge of finding a good
match between their interests and the funding available for a research assistantship. If you have
obtained a fellowship, this becomes less of an issue, but most students will need to find support
either as a research assistant or a teaching assistant.
Consider these strategies if you are having difficulty obtaining a research position.
• Cast a wide net so that you don’t limit your options too severely up front.
• Be as flexible with your research interests as is reasonable.
• Consult with faculty you have taken classes from; ask about openings they may know
of or colleagues they would recommend.
• Seek out new faculty (e.g., assistant professors) who may be looking to grow their
research group.
• Identify research centers or facilities that may have positions available.
After you have explored what is available to you, some introspection will be called for.
If you find that you have developed a keen interest that is not represented at your institution,
you may have to consider making a change. As an undergraduate, you can consider looking for
summer research opportunities elsewhere, transferring to another institution, and/or pursuing
your later graduate studies at an institution with a better fit with your interests. As a graduate
student, hopefully you will have taken on this exploration while looking for the right graduate
school for you, but, if you find yourself at an institution where your interests are not represented,
you have to make some decisions. Stay or go elsewhere? Some programs allow for a “coursework
only” Master’s degree that you can finish up more quickly so that you can move on to another
institution sooner. If you can find a research project peripherally related to your interests, you
might want to consider pursuing this for your Master’s degree research and then make a change
when you begin your Ph.D. or first industry position. This is not as unusual as you might think. I
have known many students who have made a significant change after their Bachelor’s or Master’s
degree. Their prior experience is not a waste; they will be able to carry their skills and knowledge
forward and may be able to use them in unanticipated ways.
Some students find themselves paralyzed at having to choose which research project they
will take on. If you find research areas at your institution that excite you—which is often the
case—you may find that you have more options than you expected. The important thing to re-
member is that it does not have to be a decision you are married to forever. Although it is likely
that your research career will be related to the general area of study you are currently pursuing,
it is also likely that your research career will be long and varied. The research I did as an un-
dergraduate was in the same basic field as my graduate work, but not thoroughly connected to
it. Also, the specific research I did for my master’s degree was different from my Ph.D. (and
different from what I do now as a faculty member). You can choose to stay in the same area or
you can use the skills you have learned in related areas. You will find that much of what you gain
in both your coursework and research experience is transferable and can be used in other areas
of engineering application.
There are often opportunities to move around and try new things as you progress in your
studies and career. Technology also moves quickly, so even if you begin your career in a particular
specialty area, it is likely that you will have to learn and expand your expertise over time. Outside
of academia, change is even more common—switching between companies or organizations,
working in different positions—and often requires different competencies and your own personal
career management.2 Most researchers, even faculty researchers, change their research focus over
the course of their careers even if they stay at the same institution.
Who your research mentor will be is as important as the topic of your research project.
Research mentor fit is often overlooked, but as Megan Poorman, GradHacker blogger, points
out: “Choose your mentor wisely: this is the biggest factor in your job satisfaction and degree
progress. Your advisor sets the tone for the lab and can either help or hinder your professional
development and your research progress. Find someone with whom you can communicate and
who will be on your side, looking out for your best interests. I would choose the mentor over the
research project. Obviously, you should be excited about the research, but projects change and
morph over time, your mentor likely will not. Choose wisely.3”
A Research Mentor Who Wants You to Succeed
Some of the proudest moments in my professional life have been be-
cause of the success of my students, either current or former. When they
give a fantastic research presentation, earn a prestigious award, win a fellow-
ship, get their dream job, or achieve the promotion that they were seeking, I
feel great pride. I hope that in some way I have helped them to make these
successes for themselves. Although I have been described variously as sympa-
thetic, supportive, and demanding as a research mentor, these are consistent
descriptions, given that my goal is to figure out the needs of each of my stu-
dents and help them to be their best and achieve their goals. But when it
comes right down to it, each individual is their own person figuring out who
they are and who they want to be. You need to find the right research mentor
for you who will help you be your best and work towards your goals.
Consider some of the following questions when you are interacting with potential research
mentors.
2Seibert, S. E., Kraimer, M. L., and Crant, J. M., 2001. What do proactive people do? A longitudinal model linking
proactive personality and career success. Personnel Psychology, 54(4), 845–874.
3Poorman, M., 2019. GradHacker, “Hacking Grad School,” Inside Higher Ed.
https://www.insidehighered.com/blogs/gradhacker/hacking-grad-school.
• How much time and attention do you need and does it match with the potential re-
search mentor’s availability?
• Does this individual provide the following to their research students:
– constructive feedback?
– assistance in setting realistic goals?
– feedback about expectations?
– information about funding opportunities?
– professional development opportunities and connections?
– aid with the job search?
• Do you need someone who will be encouraging and nurturing or are you more com-
fortable with a higher level of autonomy and independence?
• Do more experienced students and the graduates from the research mentor’s group
develop professional independence and transition to the status of junior colleague?
• If you are interested in a particular career outcome after your degree, will this mentor
be able to support this interest and help to launch you on this trajectory?
Even when you find the “perfect fit,” it is important to realize that you will need to develop
other mentors beyond your primary research mentor throughout your research experience. The
pool of possible mentors is large and includes other faculty members, research staff members,
postdoctoral researchers, as well as other students at both the graduate and undergraduate levels.
Student Perspective
“Other goals that might help me to become an independent researcher
include making sure to seek the advice of the experienced researchers and
research mentors I may work with in the future, and staying honest with
myself about who I am and what I want. Taking guidance from mentors and
forming close relationships with them seems to me to be one of the main
ways people find their place in the research world. Mentors know how the
research world works and can give good advice to young researchers on what
steps to take to get where they ultimately want to be. This is where the second
goal is important. I want to remain conscientious about where my path leads
me and to make sure at all times that I am not being funneled into an area
or profession that will be unfulfilling. I don’t want to look back in my middle
ages and wonder what happened.”
ASSIGNMENT 2-2:
GROUP ACTIVITY – RESEARCH INTERVIEWS WITH OTHER STUDENTS
Overview: This activity will give you the opportunity to find out about the research that others are
interested in and express your own interests about research. The objectives for this activity depend on
your prior experience.
1. For students with research experience: you will have the opportunity to practice your com-
munication skills in the context of the research you are conducting and reflect on the
progress you have made as a researcher.
2. For students inexperienced with research: the interviews will give you the opportunity to
learn more about the kinds of research being undertaken on your campus. The in-class
interview activity should also help to increase your comfort level when talking to potential
faculty research mentors outside of class.
Preparation: be sure to take time to think about the following in preparation for the interviews.
For students inexperienced with research:
• Brainstorm questions that you might ask. (Note: you will be doing more than one
interview and you will be conducting interviews with students having different levels
of research experience.)
• Some suggested starter questions might include the following:
– What is your area of research?
– How did you get involved with the research are you are currently working in?
– Has your research experience been what you expected?
– Have you run into any stumbling blocks in your research? How did you overcome
them?
– What approaches would you suggest for finding a research project?
• You will need to listen actively and do your best to ask probing follow-up questions
based upon hearing the initial response.
For students experienced with research:
• Consider the starter questions posed above and what you feel would be the most valu-
able information to discuss with a student inexperienced with research.
• Use the strategies discussed in Chapter 7 to organize your thoughts.
ASSIGNMENT 2-3:
INDIVIDUAL ASSIGNMENT – PROFESSIONALISM IN EMAIL
INTERACTIONS
There will be many occasions throughout your research career where you will need to initiate
contact with someone via email. This is an important opportunity for you to make a good first
impression by displaying professionalism in your email communications. It can be a big mistake
to approach this initial interaction casually or sloppily.
Write precisely and clearly so that your meaning is understood. It is not appropriate to
include emoticons or emoji, but you want to be sure that your message comes across with the
right tone. Don’t use humor or sarcasm. Check your spelling and grammar. Err on the side of
formality. The person reading the email will make a lot of assumptions about you based on the
limited information that the email contains. You want to ensure these assumptions are as positive
as possible.
The message should begin with a salutation. “Dear Prof. Smith” is appropriate, “Hey”
is not. State your request up front. Tell the person who you are and why you are making this
request. Indicate how you would like to follow up (e.g., “I would appreciate it if we could set up
a meeting. I am available….”, or “Thank you in advance for your reply.”) Your message should
include enough information to be clear, but not be so long that it will not be read.
At the bottom of your message you should have a signature block with your contact in-
formation. Something simple like the following will suffice:
Ima N. Gineer
Undergraduate Student, Engineering Mechanics Program
University of Wisconsin-Madison
Cell: 999-999-9999
Your assignment is to compose an email to a faculty member using the above guidance. This
message should do one of the following.
• Request an opportunity to meet and discuss the research being undertaken in their
research group.
• Identify a research question related to a published journal article that you have read
and request guidance on what additional follow up reading might help you to answer
your question.
• Inquire about the availability of a research position.
• Pose a question about an area of content of a course you are currently taking and request
their guidance.
• Inquire about an interesting course that you anticipate they will teach in the future.
ASSIGNMENT 2-4:
INDIVIDUAL ASSIGNMENT – CONVERSATIONS WITH POTENTIAL
FACULTY RESEARCH MENTORS
STEP 1: Identify five faculty members you will contact about research projects. In addition to
identifying their name and research interests, find their contact information, including email
address, phone number, and office hours.
STEP 2: Summarize the area of research that each faculty member specializes in. Look for a
recent news article, webpage summary, or journal publication to give yourself a bit more back-
ground about their work. Note that faculty research interests often change over time and web pages may not be revised frequently, but this information will at least provide you with
some relevant background about their research interests.
STEP 3: Draft an email of introduction. Use professional language, including the appropri-
ate salutation (e.g., Dear Prof. Smith). Consider attaching a resume showing your prior work
experience—even if your work experience is not research-related, it shows that you can hold a
job and perform it reliably. Indicate in your message how you will follow up with contacting
them (e.g., I plan to visit your office hours next week so that I can learn more about your current
research interests). After using spell check, send your emails.
STEP 4: Follow through with your follow up! Ideally you should talk to the faculty members
you have contacted either in person or by phone. Come to the conversation prepared to do the
following:
• Describe what you find interesting about the research they have done.
• Discuss your experience and interests.
• Ask about their current research and future research interests.
• Specify what you are hoping for as a result of the conversation.
2.3 UNDERGRADUATE RESEARCH EXPERIENCES
If you want to learn about research, it is a great idea to start early while you are an undergraduate
student. There are many advantages to doing research as an undergraduate—you can learn about
the process of research to determine if this is something you are interested in doing more of, you
can try out a particular research area to see if it is something you would like to pursue further,
and you can gain some basic research experiences that will be to your advantage when you apply
to graduate programs.
Student Perspective
“This experience [as an undergraduate researcher] was a valuable one.
It taught me a lot about myself and what I really wanted to do and was in-
terested in. It also gave me a great look at one style of lab organization in
terms of people and project roles within the group. I was able to work on and
realize the importance of networking and general looking out for myself in
research.”
The types of research positions available for undergraduates on university campuses vary.
They range from “bottle washer” positions to those that involve doing an independent research
project. Often it is the case that a research position is a combination of different tasks at a variety
of levels, from glamorous to tedious. (Someone must wash the glassware, right?) Undergraduates
are often hired into research labs to help out with some of the work that might be a little bit more
routine, but these are still great research opportunities because they allow you to learn about the work taking place in that research group and give you the potential to work your way up, and take on more responsibility, as you prove yourself to be capable and dependable. Additionally, undergraduate research is usually a bit lower stress and more forgiving of failure.
Student Perspective
“I think one of the main expectations that the group has for me is that
I’m not afraid of failure. By this, I mean that the project I’m working on
has never been done the way they are asking me to do it. Because I am not a
Ph.D. or Master’s student, I am the perfect person to conduct the experiment
because I don’t have any pressure to produce publishable results and I’ll be
able to focus more on the research at hand. Although they do have high hopes
for the project I’m working on, I won’t have the pressure that the typical
Ph.D. or Master’s student would have. So, I guess my biggest goal for my
project is to produce results that the group can do something with. But, also
to be optimistic if they don’t always turn out as I had hoped.”
Many undergraduates find meaningful research experiences on their home campus. There
are a variety of different ways to connect with research, and a variety of ways that you can go
about getting compensated beyond the experience you will gain. You can look for jobs that are
paid positions. These range from entry-level positions that pay minimum wage to higher-paying positions that use your technical skills. This may begin as a part-time job, where you
are assisting with day-to-day needs in a laboratory and grow into a research experience as you
develop your skills and show initiative. Or, you may have the opportunity to conduct research for credit, for instance as an independent study project under the supervision of a faculty member.
Some campuses also offer scholarship or fellowship opportunities connected to research. Often
these kinds of opportunities will allow you to propose a specific research project with a research
mentor and apply for some funding to complete that research activity. If you have a research
area(s) in mind that you would like to get experience with, you might be able to find a research
group working in that area that would be willing to let you attend group meetings and/or spend
time shadowing a graduate student. Your academic advisor will be able to give you information
about the options available to you on your campus and how to go about pursuing them.
In all of these cases, you need to be able to devote enough time to do the research to make
it worthwhile for both you and your research mentor. I suggest that you need to devote at least 10
hours per week so that you can spend enough time to become competent and productive. That
also means putting in time every week in order to make research progress. In a paid position you
will be paid by the hour. If you are getting course credit the expectation is usually a minimum
of 45 hours per semester credit. If you assume a standard length semester and three credits of
research, this would be roughly equivalent to 10 hours per week. At the end of the semester you
will likely need to produce some kind of document, like a report or poster, which summarizes
your research project and the progress that you have made.
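As a rough check of that arithmetic (the 15-week semester used here is only an illustrative assumption; your campus may define a credit differently): 3 credits × 45 hours/credit = 135 hours, and 135 hours ÷ 15 weeks = 9 hours per week, which is in line with the 10 hours per week suggested above.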
Now the question is how to find a research position. The first thing you want to do, before
you start sending emails and knocking on doors, is to figure out what kind of research is of most
interest to you. Take a look at what kind of research is being conducted on your campus. The
websites of faculty members, research groups, and research centers can provide useful informa-
tion, but keep in mind that the research projects that are actively being conducted may not be
represented on the website yet. Although the projects being discussed on the website may no longer be ongoing, the site should still give you a flavor for the type of research being done in that research group.
The next task is to prepare yourself: put together a professional-looking resume. If you
don’t know where to start, the career services office will likely have helpful information, and
possibly even workshops to assist you in creating a resume to highlight your experiences and
skills. You also need to be prepared to talk to a faculty member about your research interests,
as well as your background and capabilities. You should not just show up and say “Hi, I want
a job.” You need to be able to articulate your interest in the research area that faculty member
is engaged in, and talk about the qualities and capabilities that you could bring to the research
group. You may not think you have much to talk about without prior research experience, but
you may have qualities like dependability, skills that you have developed through hobbies, and
background that you have obtained in courses, that you can speak about.
It is likely, however, that having access to these kinds of opportunities will require some
persistence on your part. Research positions for undergraduates on most campuses are relatively
rare. If you throw up your hands and give up at the first obstacle, you will be unlikely to find the
sort of experience that you are interested in. This is also good preparation for doing research,
because doing research will require persistence and the ability to work your way around the
roadblocks that appear.
There are a variety of strategies that you could employ and you should begin with the one
you feel most comfortable with.
• One way to initiate contact is to send an email of introduction with your resume as an
attachment. Even if you don’t hear back from the faculty member right away, you can
then follow up during the faculty member’s office hours.
• You can talk to your academic advisor to find out if they have a suggestion for who may
have research openings that you can apply for.
• You may have friends and classmates who are already involved in research. You can talk
to them about whether there are openings in their research group, and if they would
make an introduction to their research mentor on your behalf.
• You can talk to professors you have taken classes from and have done well in. They
might have a research opening, or they might be able to suggest who would.
• Use your network. Talk to people about your interests and what you would like to do.
You never know: the person you see on the bus every day or the person you know from your soccer club might be just the contact you were looking for!
Expect that it will take more than one contact attempt with a prospective research mentor,
as well as more than one potential research mentor on your contact list.
Student Perspective
“I tried once again to reach this professor through email, but realized
that I’d have to try for an in-person meeting if I was going to get anywhere.
I quickly learned that this professor was extremely busy. I spent a lot of time
that semester waiting in the hallway to meet with him. After more than a
month of rescheduled or missed meetings, I got an interview and soon began
work ... I think the clearest lesson I learned during the process of getting [a
research position] was that sometimes you have to be a little impertinent to
get noticed.”
An alternative or additional way to gain research experience as an undergraduate is to
apply for a summer undergraduate research program. These are often called research experiences for
undergraduates, but they have various titles and are offered by a number of different organizations.
For instance, the National Science Foundation (NSF) sponsors numerous research experiences
for undergraduates (REU) programs around the country, mainly based at university campuses.
Several national laboratories offer summer research opportunities, such as the Sandia National
Lab Summer Internships and the NIST Summer Undergraduate Research Fellowship (SURF).
There are also a few international opportunities to do research such as the DAAD Research
Internships in Science and Engineering (RISE), which sponsors U.S. students to go to Germany
for research opportunities. In your conversations with your academic advisor and professors, you
can ask about summer opportunities that they might know of at other campuses and institutions.
These summer programs are almost always paid opportunities and there is usually some
coverage for living expenses. The quality of the research experience can vary, so you will want to
be sure that the ones you are applying to provide authentic experiences in research. Look into
the range of different things that might be available to you. If you are persistent about seeking
them out, you are likely to find a really great research position.
Regardless of the specifics of the position, and how it is compensated, you should approach
it in a professional manner. Once you obtain a research position you need to be responsible with
how you conduct yourself and how you take on the work. Ideally, you will also show initiative by
thinking creatively and innovatively about the details of the project. As you exhibit these traits
within a research setting, you will be given more responsibility as time goes on. If your contributions
are not noticed, then you need to point them out and ask for more responsibility so that you can
show what you are capable of contributing. Research shows that being proactive is directly
linked to career success and satisfaction.4
Sometimes You Don’t Have to Make a Choice
One of my undergraduate advisees, who is already engaged in an ex-
tensive on-campus research experience, is now thinking about the tradeoffs
between gaining more research experience in a different area through a sum-
mer research program vs. studying abroad. It’s a tough choice, but my main
piece of advice is that she may not need to choose. It may be possible for all
of it to happen, just over a larger time span than she originally imagined.
It is easier, and more common, to study abroad as an undergraduate
than as a graduate student. However, it is possible to put off study abroad
without giving up the opportunity all together. I spent a semester in Australia
as a graduate student, but I had to independently organize it rather than
join an orchestrated program with multiple students. There are pluses and
minuses to the differences in those experiences, but both will give you an
opportunity to immerse yourself in another culture.
Summer research experiences can be a great way to get experience with
another area of research and another institution. If there is a specific area of
4Seibert, S. E., Kraimer, M. L., and Crant, J. M., 2001. What do proactive people do? A longitudinal model linking
proactive personality and career success. Personnel Psychology, 54(4), 845–874.
research you are interested in exploring a bit more prior to graduate school,
you can look for a program at a different institution that would provide you
with an experience related to your interests.
The other consideration may be money. Summer research programs
usually pay several thousand dollars and sometimes provide you with a place
to live. Study abroad programs are generally something you must fund your-
self as an undergraduate student. (There are some exceptions, with a few
scholarships that are available, and funding opportunities for graduate stu-
dents to do research abroad.)
2.4 THE GRADUATE SCHOOL APPLICATION PROCESS
2.4.1 IS GRADUATE SCHOOL RIGHT FOR YOU?
Graduate school is an excellent way to continue your education, deepen your engineering skills,
and open yourself to other career opportunities. However, graduate school should not be viewed
as simply an extension of your undergraduate studies. In most cases, earning a Master’s degree
or Ph.D. will take more than a few extra classes. Particularly for the Ph.D., it takes an interest in
and serious commitment to research. When considering applying to graduate school, examine
your motivations. Being unsure what to do next, not wanting to venture into the “real
world” yet, or thinking the job market is tough are NOT good reasons to go to graduate school. In
fact, these unsuitable motivations will likely show in your graduate school application materials
and make it very difficult for you to get accepted.
That being said, I encourage all of my advisees with good GPAs to seriously consider
graduate studies. Maybe it is not something they are interested in embarking on right away, but
it should be kept in mind in the coming years. Of engineers holding a B.S. degree, 40% go on
to get a Master’s degree and 9% go on for a Ph.D.5 Many companies will consider whether or
not someone has an advanced degree at hiring and/or promotion. Some companies will even
provide funding for courses and/or a graduate degree.
Once you have decided to consider graduate studies, then you need to decide if you want
to apply to get a Master’s degree (often called terminal Master’s) or to a Ph.D. program where
you will likely complete a master’s degree on your way to your Ph.D. It is alright to not be 100%
certain of your goals at the point of application, but you should represent yourself honestly and
indicate how strong your desire is to continue on for a Ph.D. In many programs this can be a
deciding factor for entry and for funding, so you should try to choose programs to apply to that
will be a good fit for your goals.
5National Academies Press, Understanding the Educational and Career Pathways of Engineers, 2018. https://www.
nap.edu/catalog/25284/understanding-the-educational-and-career-pathways-of-engineers.
Going Corporate
After I completed my Master’s degree, I began working in industry
(which I really enjoyed). As I learned more about the company and the engi-
neering positions available, I realized that I was keenly interested in research
and development. My main motivation for returning to graduate school was
that the jobs that I was most interested in obtaining in the future required a
Ph.D. This was a strong motivation to go back to graduate school. It turned
out later on that I found teaching and research to be my passions, so I never
did go back for that industry dream job that I had my eye on. Sometimes my
path has not been a linear one, but all the experiences I gained along the way
have been valuable.
ASSIGNMENT 2-5:
GROUP ACTIVITY – GRADUATE SCHOOL FIT
In small groups, brainstorm about what qualities are important for being successful at graduate-
level research. Share these with the class.
In small groups, discuss why you might pursue graduate studies immediately after com-
pleting the B.S. degree or wait 2–5 years; advantages and disadvantages of both. Share these
with the class.
ASSIGNMENT 2-6:
INDIVIDUAL ASSIGNMENT – GRADUATE SCHOOL APPLICATION
EXPERIENCE
Identify a current graduate student in the field of study you are interested in pursuing. Talk to
that person about their experience in applying to graduate schools.
2.4.2 THE GRADUATE SCHOOL APPLICATION PACKET
Understanding the main components of the graduate school application packet, well in advance
of when you plan to apply, will help you to build the strongest application possible. The main
pieces of most application packets will be information from your undergraduate institution, such
as your grade point average (GPA) and transcript, your Graduate Record Examination (GRE)
scores (if required), and letters of recommendation. You will also need to write one or two essays
for the application where you will commonly be asked to describe experiences that make you
well suited for this graduate program and your long-term goals related to the pursuit of this
advanced degree.
Nearly all graduate school applications will require letters of recommendation (usually
three, sometimes four). These are important because they are often the best predictor of whether
or not an applicant will be successful in a particular graduate program. Some of these letters will
be written by faculty members—ideally ones who have interacted with you on a research project,
a student organization/team, or as an instructor (ideally in more than one class). It is also rel-
evant to ask a supervisor or manager from a current or prior work experience even if it is not
specifically engineering related (they can speak to issues such as reliability and initiative). You
may also have some more extensive involvement in a volunteer activity. A letter from someone
in authority in that organization might also prove useful.
Help your recommenders write you the best letter possible. Give them plenty of advance
notice and a reminder when the deadline is a few weeks away. Provide them with materials to
refer to such as your resume and/or your application essay(s). Remind them explicitly how you
have interacted previously (e.g., “As you may recall, I took Advanced Mechanics of Materials
from you last Fall semester and my team completed a design project on…”). Provide them with
a list of items you would like them to address in your letter (e.g., “I am hoping you can speak to
the work I did in the lab over the last several years, especially the project where I refurbished the
testing equipment and developed new protocols for operation. In addition to working in the lab
15 hours a week, I was also a member of the Marching Band and maintained a 3.5 GPA.”).
ASSIGNMENT 2-7:
INDIVIDUAL ASSIGNMENT – GRADUATE SCHOOL APPLICATION
REQUIREMENTS
Identify a graduate program that you would like to apply to and determine the deadline for
application and the application materials you will be required to submit.
Look for requirements such as minimum GPA and GRE test scores (usually just general,
but some programs require a specialty exam). Determine what documents (e.g., transcripts),
essay(s), and letters of recommendation will be needed. Read the instructions to determine if
there are any specific expectations for what should be addressed in the essay(s), and if your
resume should also be included in your application materials.
2.4.3 THE APPLICATION TIMELINE
A common timeline for the graduate school application process is outlined below.
Summer/Early Fall
• Identify some graduate programs that you are interested in applying to and identify the
application requirements and deadlines. Determine whether or not the GRE General
Test and GRE Subject Test are required (although the General Test is usually expected,
the Subject Test is not common for most engineering graduate programs).
• In preparing for the GRE General Test, I do not generally recommend spending
money on a preparation course. Your score will be close to its maximum if you take
a few practice exams to familiarize yourself with the test format and the way in
which questions are posed. Educational Testing Service (ETS) offers free practice
tests and software which you can use to emulate the actual test environment (see
http://www.ets.org/gre).
• By the end of October you should have taken the GRE (although it is available year-
round).
Mid Fall
• Identify and contact people who will provide you with letters of recommendation (see
above for ideas about who you should consider asking for a letter).
• Finalize the list of programs you will apply to. Identify faculty members in each of these
programs whose research you find interesting and initiate contact with them by email
or phone.
• Begin preparing your applications. Look for graduate school application workshops
and/or a faculty member who will read over your application materials for you and
provide you with feedback.
Late Fall/Early Winter
• Complete your applications and submit them BEFORE the published deadline. Often,
the review of applications begins prior to the cutoff deadline and you would like your
application to receive the fullest consideration.
• Thank your recommenders for taking the time to write letters of recommendation
for your applications. Send a brief note of appreciation—ideally in an “old fashioned”
thank-you card, or at the very least via email.
Winter
• Follow up with faculty members in the programs that you have applied to. Contact only
those who you are keenly interested in working with, but be persistent in attempting
to get through to them. If your email message does not get a response, then make a
phone call. Also, consider asking your letter writers if they know any of the individuals
you have identified and ask if they would be willing to write an email of introduction
for you.
• Many departments offer a visit weekend for prospective graduate students. Ask the
department student services coordinator or faculty you have been in contact with if
there would be an opportunity for you to visit the campus and meet with faculty and
students. Often, some or all of the travel costs are paid for, but, even if they are not,
you should make your best effort to attend.
Late Winter
• Attend prospective graduate student visit weekends that you have been invited to. Meet
with faculty and graduate students and gather as much information as possible. It is a
two-way interview: you are trying to present yourself in the best possible light and you
are trying to determine if this graduate program is a good fit for you. See the list of
“Questions to Ask Yourself and Others” below.
Spring
• Consider the offers that you have received. Note that some programs make separate
offers for admission and funding, so be certain that you understand the implications of
each offer.
• YOU CAN ONLY SAY YES to one. Nearly all universities in the U.S. are members
of the Council of Graduate Schools and honor the April 15th resolution.6 This means
that students should not be obligated to respond to an offer prior to April 15th. This
gives each student an opportunity to see all offers available to them prior to making a
commitment. Additionally, this means that you can only accept one offer. A student
who accepts an offer has made a commitment and should not accept any other offer
without getting a written release.
• Inform your advisor and recommenders of your decision so that they know where you
are going next. Provide them with an email address contact that will be yours for the
long term if your current student account will close after your graduation. Keep in
touch periodically over the coming years—ideally more frequently than when you need
another letter of recommendation for a fellowship or job application.
6Council of Graduate Schools, “April 15 Resolution: Resolution Regarding Graduate Scholars, Fellows, Trainees, and
Assistants,” http://cgsnet.org/april-15-resolution.
Many programs also accept students mid-year. Look at the deadlines and talk to faculty
in those programs to determine when you should have your application submitted. From there
you can adjust the timing discussed above.
In parallel to the graduate school application process you should also consider applying
for graduate school fellowships. Also, unless you are independently wealthy or have a particular
aversion to teaching, you should check all of the above if the application asks if you are inter-
ested in being considered for a teaching assistantship (TA), research assistantship (RA), and
fellowship.
2.4.4 VISITING A GRADUATE PROGRAM YOU WOULD LIKE TO ATTEND
To make a well-informed decision, you should ideally visit the university and interact with the
faculty and graduate students there. Many graduate programs organize visit weekends in the
late winter/early spring. These are a great opportunity that you should try to take advantage of,
if at all possible. You will have access to faculty and students on the visit and you will be able to
see the facilities, campus, and community. Some programs invite only students that they have
accepted into the program. Others will invite admissible students they would like to consider
for funding offers.
If the programs you are interested in do not plan a visit weekend, you can arrange to
visit on your own. The best point of initial contact would be the staff member in charge of the
graduate program (e.g., program coordinator) or the faculty director of graduate studies (e.g.,
chair of the graduate studies committee). If you can’t visit then you should make arrangements
to set up virtual or phone conversations with the director of graduate studies and other faculty
members you may be interested in working with.
You should think of a visit weekend like an interview. You are being interviewed, but you
are also interviewing them. Everyone involved should be trying to determine if there is a good fit.
Although you would not be expected to wear a suit, do present yourself professionally (business
casual attire is usually appropriate). Be ready to present your experience and background clearly
and succinctly. If you have engaged in undergraduate research, you may want to print out a few
slides or have a copy of a research paper you wrote in order to share your prior experience more
effectively.
Do your homework before you go on the visit. Learn as much about the university and
faculty in the program as you can. If you are interested in working with a particular research
mentor, become familiar with their recent research publications. Prepare questions that will help
you determine if this is the right fit for you (see the list below).
“Questions to Ask Yourself and Others While Considering a Graduate Program”
This is a broad list of questions. Some of these questions are intended for you to answer yourself. Others
you can find the answer to by exploring the university website. Some are questions you should ask of the
faculty you speak with. Others you should ask of graduate students who are already in the program.
Overarching Questions to Ask Yourself
Am I most interested in experimental, computational, or theoretical research?
Would I rather be in an established group or do research with a more junior faculty
member?
How much time and attention do I expect to get from my thesis advisor/research men-
tor?
Am I interested in interdisciplinary research and does this position fit with those in-
terests?
Are the other students in the research group people that I can get along with?
School/City/Lifestyle
Is the campus a safe place? What safety programs are available (e.g., emergency phones,
campus escorts)?
Is housing easy/difficult to find?
What are living expenses like?
Is there a reliable mass transit system?
Are there bike paths for commuting to campus?
What kinds of entertainment are available?
Will I be able to pursue the recreational activities I am interested in?
Do I feel comfortable in this community/area of the country?
Can I see myself living here for the next 5 years?
Program/General Atmosphere
What is the reputation of the program?
How is the quality of the teaching?
Are the required and elective courses ones that I am interested in taking? How fre-
quently are they offered?
Are graduate students happy here?
How is the rapport among students, staff, and faculty?
How is the atmosphere for women and underrepresented minority students?
What is the office space policy for new graduate students?
Are the labs and facilities broadly accessible? How do I get trained to use these facilities?
Do faculty members collaborate on research or work separately? Is collaborative re-
search encouraged and supported?
Funding/Financial Aid
Do I need to find/choose a thesis advisor before accepting an offer to join the program
or do I have the opportunity to spend a semester or two on campus before I decide?
How do I apply for a teaching and/or research assistantship?
What fellowship opportunities are available from the program/university? Am I auto-
matically considered for these opportunities or do I need to apply?
Is a tuition waiver included in my funding offer?
Is health insurance included in my funding offer?
What are the vacation/sick leave/family leave policies?
What is the stipend level? Do students live easily on this amount?
Does the funding continue through the summer months?
If I am offered an assistantship appointment, what are the work expectations?
What are the responsibilities associated with a teaching assistantship (TA)?
Is there training for new TAs?
How is my performance evaluated?
Who is my supervisor?
Who do I talk to if I need help with a problem in the classroom?
Research Mentor/Thesis Advisor
How stable is his/her research funding?
Does the advisor have tenure? If not, what is the tenure rate at this institution?
What is the advisor’s reputation in the department?
How do the advisor’s current students feel about working with this person?
Does the advisor treat students respectfully?
Does the advisor stand up for his/her students when a political situation arises?
Does the advisor give a lot of supervision or are students expected to work more inde-
pendently?
How is one’s thesis topic determined?
How is authorship handled on journal publications?
Will the research require traveling or working remotely?
How long does it usually take for the advisor’s students to graduate?
Are there opportunities available to attend a conference or two each year?
Where have previous students gotten jobs?
2.4.5 GETTING ACCEPTED INTO A GRADUATE PROGRAM
Different programs will handle graduate applications differently. However, there is likely a com-
mittee that determines an applicant’s overall fit for the program and selects the best applicants
for broader circulation among the faculty members in the graduate program. For large programs
and Master’s programs that do not have funding associated with them, it is more likely to be
a decision made at the committee level. For a Ph.D. program there is more match-making re-
quired because you will need to have an interest in the research taking place in a faculty member’s
lab and they will need to have funding to support you with a research assistantship.
In many graduate programs there needs to be at least one faculty member who is interested
in taking you on as an advisee in order for your application to progress. There are always excep-
tions though. Some programs have fellowship and teaching assistantship support that allows
them to bring in more students without the promise of a research assistantship. And, students
who have received a large external fellowship have more flexibility because they can often work
with the faculty member of their choice without as much concern over the availability of funding
for the research. I’ll note, however, that the fellowships do not generally cover research expenses,
so even a fully supported fellow is not “free” for the faculty research mentor. They will need to
have the necessary funds to cover the expenses of the research and the time to provide research
mentoring.
Getting Paid to Learn
Unlike some other disciplines, engineers are frequently given the op-
portunity to earn a stipend while doing research that will directly benefit their
own degree progress. When I occasionally hear a student take it for granted that their
education should be completely paid for, I shake my head in wonder at that student’s
entitled attitude. Having served in graduate
school administration, I am able to state definitively that in many other fields
of study graduate students must fund their education by working positions
that have no bearing on their research progress and their work may not even
be connected to their disciplinary expertise. In fact, having to pay for one’s
education in such a way usually extends time to graduation dramatically.
As noted above, at some institutions your acceptance into the graduate program may be
separate from an offer of assistantship funding. Be certain to understand the details of your
particular situation before accepting an offer.
2.5 FUNDING OF RESEARCH
2.5.1 U.S. MODEL OF RESEARCH UNIVERSITIES
Students often come with misconceptions about where and how research funding is obtained.
What students rarely appreciate is that research funding is very difficult to obtain. In most
cases the funding for research (including a research assistantship) was obtained through a hard-
fought and competitive proposal process. It is likely that their research mentor has spent an
enormous amount of time and intellectual energy writing multiple proposals, of which only a
subset is actually funded. The vast majority of research proposals that are written and submitted
for consideration are rejected without being funded. Therefore, being supported on a research
assistantship funded by a research grant is a privilege not an entitlement.
Student Perspective
“The thing I found most surprising about how research is conducted
is the method by which most funding is procured and the overall attitude
of researchers toward that source. When I first started learning about aca-
demic research, I expected budgets from research institutions to pay a large
percentage of research costs. I believed that these budgets were heavily sub-
sidized by student tuition and the earnings from previous research achieve-
ments at those institutions. This is not typically the case. Grants from the
federal government are the single largest source of funding for the majority
of universities and fields. Whether the funding is from a government agency
such as NASA or the DOE, or from the Department of Defense, the money
still comes from the American taxpayer.”
Grant funding may come from a federal source (such as the National Science Foundation
or National Institutes of Health) or a private foundation (such as the American Heart Associ-
ation or the Petroleum Research Fund). Research contracts are another common funding source,
and they often come from federal sources (such as the Air Force Office of Scientific
Research) or a private company (both small and large). Depending both on the source of the
funding and the specific type of funding there may be very well-defined timelines and deliv-
erables associated with the research. Some funding may require monthly, quarterly, annual,
and/or final reporting associated with the project progress and outcomes. In other words, re-
search funding comes with strings attached.
Given the overall framework of funding, I suggest to graduate students that they should
treat their assistantship as professional employment. If you have an assistantship, you are being
paid for your engineering skills through both the stipend (i.e., paycheck) and tuition (i.e., waiver
of tuition). If you were working in industry, you would be expected to treat the job professionally,
put in your best effort, and achieve regular progress. The same is expected in your graduate
research.
2.5.2
FUNDING YOUR GRADUATE STUDIES
For graduate students in engineering, and particularly students pursuing a Ph.D. program, grad-
uate school is usually paid for by a fellowship, a research assistantship, or a teaching assistantship.
Student Perspective
“I believed that you still had to spend lots of money to attend grad
school. I am extremely pleased to know that through applying for fellowships
and with how most engineering departments work, pretty much everything
from living expenses to tuition and lab funding is potentially covered.”
Fellowships come in many shapes and sizes. Some universities have fellowships to provide
and others are available through external programs. A fellowship may provide a “full ride” that
pays for all of your tuition and stipend expenses (for one or more years), or it may simply be a
supplement to other types of assistantship funding. A full fellowship gives you a huge advantage
because a potential research mentor does not need to find as much funding to support you. No
graduate student is truly “free” because the research mentor must have the time to interact with
you and be able to support other research expenses especially for experimental work, but it is
much easier for a research mentor to take on a fellowship student than to find funding for an
assistantship.
Fellowships provided by a university are usually ones that you are automatically considered
for when you apply to the graduate program. The best way to ensure that you have a good chance
at being considered for one of these is to have the best graduate school application possible and to
submit it early. Do not wait until the deadline!! Many fellowship and assistantship opportunities
will already be gone if your application is in the last batch of applicants to be considered.
There are also a variety of fellowships that you can apply for yourself as a senior under-
graduate or a first-year graduate student. Your academic advisor or research mentor will be able
to point you toward ones that might be a good fit for you, but you should consider looking into
some of the following:
Department of Energy Computational Science Graduate Fellowship
Hertz Foundation’s Graduate Fellowship
National Science Foundation Graduate Research Fellowship (NSF GRF)
National Defense Science and Engineering Graduate Fellowship Program (NDSEG)
National Defense Science, Mathematics and Research for Transformation (SMART)
Scholarship
NIH Kirschstein-NRSA Individual Predoctoral Fellowship
Tau Beta Pi Association Graduate Fellowship in Engineering
As you progress through your graduate studies there are also additional fellowships avail-
able at later stages, particularly dissertation fellowships that are designed to help students finish
up their Ph.D. program.
There are two basic types of support provided through universities that will fund your
graduate studies. There are some variations on the specific titles depending on the institution,
but many institutions use the names research assistantship (RA) and teaching assistantship (TA).
These assistantships usually provide for both tuition and a stipend for your living expenses. In
return, you will be working on a research project or teaching undergraduate students.
In many cases, research assistantships have a great deal of overlap with the research you
will ultimately use for your thesis or dissertation. So, you are getting paid to do the research you
would have needed to do anyway. Although the RA position may have a percentage appointment
or certain number of hours associated with it, you will likely need to spend more time than what
you are paid for in order to complete your degree in a timely manner. A good way to think about
it is that you need to do a certain amount of research in order to earn your degree, and you are
lucky enough to get paid for a portion of it!
As discussed above, there is more match-making needed in this case because you will
need to be highly qualified, find a good fit between your research interests and a faculty mem-
ber’s research program, and have this match up with available funding support. Once you have
identified schools that you are interested in attending, you also need to look at the research in-
terests of the faculty members and contact them about the availability of funding. If they have
an RA position available and you are a good match, then they may make you an offer!
In some cases graduate students may be brought into a degree program and initially funded
by a teaching assistantship. In other cases, the TA opportunities may come later in the graduate
experience and be something that you do after you have progressed in your degree program. The
type of work that a TA would do depends on the specific position and may include grading,
holding office hours to answer student questions, running a discussion section, or teaching the
lecture component of a course. Regardless of the position, there will be an instructor or faculty
member in charge of the course, and you may also be working with other TAs on the same
course.
Teaching assistantships, although excellent skill building opportunities, will not be as di-
rectly related to your degree progress. If you are interested in an academic career path, the op-
portunity to be a TA can help you gain invaluable experience. Even if you are not interested in
being a faculty member some day, teaching a subject provides an opportunity for you to deepen
your own understanding of it. If you are in front of a classroom for a portion of your TA work
you will also be able to hone your presentation and explanation skills. Employers of every type
appreciate these skills.
For students planning to pursue a Master’s degree only, the funding opportunities are
fewer. Sometimes RA and TA positions are available, but if you do not intend to continue on for
a Ph.D. it is more likely that you will be paying tuition for the degree. Regardless, the investment
in a Master’s degree should pay off. On average, your salary will be higher,7 your lifetime earnings
with an M.S. vs. a B.S. are higher, and the unemployment rate is lower.8 Employers are also
increasingly requiring a Master’s degree.9
Finally, there are student loans. Generally speaking, if you have student loans coming into
graduate school, you will be able to defer your payment of them while you continue your studies.
It’s also often possible to get student loans for graduate studies to support the cost or supplement
funding you have from the university.10
7Doubleday, J., 2013. Earnings Gap Narrows, but College Education Still Pays, Report Says, Chronicle of Higher Educa-
tion, October 7.
8Council of Graduate Schools, 2013. “Open Doors with a Doctorate.”
9Council of Graduate Schools, 2013. “Why Should I Get a Master’s Degree.”
10Council of Graduate Schools, 2013. “Financing Graduate Education.”
2.5.3 FELLOWSHIP APPLICATIONS
As mentioned in the previous section, fellowships come in all sorts of shapes and sizes, from a
“full ride” to a small supplement. However, there are a number of commonalities in the appli-
cation process for those you would need to apply to yourself. This happens independent of the
university you are applying to or attending, so you will need to manage those deadlines in ad-
dition to graduate school application deadlines. Look into these opportunities early. Although
the deadlines vary quite a bit, many of them are due BEFORE the standard graduate school
application deadlines.
As you look into each fellowship opportunity, carefully read the eligibility criteria. You will
not want to waste time on an application where you do not meet the basic criteria or where you
are not a good fit. Keep in mind that in some cases you will be applying as a senior undergraduate
and in others as first year graduate student. Some fellowship competitions allow you to apply in
more than one year as well.
Don’t try to do this all on your own, without feedback. You will have a much higher
likelihood of being successful if you plan ahead and seek out guidance. Determine if there is
help available on your campus that will guide you in the fellowship application process. If there
are workshops offered, seek these out and attend. There may be one-on-one help available if
your campus has a writing center. You may also be able to seek feedback on portions of your
application from an academic advisor or faculty member willing to read the essay portions. You
should also use your network to find out if you know someone who has been successful in getting
one of these fellowships. Being able to look at a successful fellowship packet will give you a
model to emulate.
In addition to the fellowships available for your studies, there are also often small pockets
of money that can help to defray other costs. Keep an eye out for other opportunities along the
way, such as travel grants and other supplemental funding. Then later in your graduate studies,
when you become a dissertator, look at fellowship opportunities again. Although there are not
as many options as there are at the beginning of your graduate studies, in many fields there are
dissertator fellowships that you can apply to which will help speed up your degree completion.
2.6 UNDERSTANDING THE ORGANIZATION OF YOUR RESEARCH GROUP
After you have joined a research group (or even while you are in the process of determining if a
research group is a good fit for you), it is important to understand how the group is organized.
There will be research projects underway—some ramping up, some ongoing, and others
winding down. You will be involved in at least one in detail, but you should also understand the
basic themes of the other topics that your colleagues within the research group are engaging with.
Having the basic framework of the research topics will allow you to sort and process additional
information that you pick up in research group meetings, conversations with other research
group members, and interactions with your research mentor.
Student Perspective
“My research group … usually meets on a weekly basis to give updates
on progress and get advice on how to proceed if we have a problem. I find
this to be very beneficial because it helps me get a feel for what everyone else
in my group is working on. Although it is hard to follow a lot of the time,
it’s good to learn what their projects are…”
Initially these interactions, particularly in research group meetings, may seem like a waste
of time because nearly everything that is discussed is going over your head. But it is important
to persist and try to follow as much of the information being exchanged as possible. You can
also connect with one of the other more experienced students afterward to ask them to help you
fill in some of the gaps. With time, you will be able not only to understand more of what is
being discussed, but also to provide useful feedback and ideas to the group yourself. Just keep
in mind that it takes time to come up to speed, but you will make progress if you set goals for
yourself that sometimes feel like a stretch.
Student Perspective
“Where I used to attend group meetings with glazed over eyes, I am
now able [to] see what the other people are actually doing. However, I am
usually not able to contribute too much because I still lack a significant
amount of knowledge. Therefore, my main goal in the coming year is to be
able to talk more in group meetings and provide the other group members
with some helpful comments.”
Student Perspective
“I think the most crucial element in my development during these
meetings was that with every passing week, I felt more and more comfort-
able with the research, eventually to the point where I could try to suggest
explanations and various solutions to problems in conjunction with the same
inputs from the other members of the meeting. Having my ideas consid-
ered in a setting with three other people with considerably more experience
in the field was very rewarding. The collaborative effort of people from dif-
ferent backgrounds to develop solutions to a problem or explanations for a
phenomenon has become one of my favorite elements of research.”
You may be paid to do research (for instance as a research assistant or as hourly pay) or
you may be doing research for credit. Either way, it is likely that there is some type of funding
supporting your salary and/or the purchases of resources that you need to conduct the research.
You should understand what the funding source is for the research you are pursuing. It may be
a federal grant, an industry contract, institutional funds that your research mentor has at their
disposal, or some other mechanism. There may be multiple funding mechanisms supporting the
various projects and people involved in the research group.
As a member of a research group, you also need to get to know the others engaged in the
research group aside from your research mentor. Research groups come in many different sizes,
from the small tight-knit groups to large international collaborations. There may be undergrad-
uate researchers, graduate students, postdoctoral researchers, scientists, and faculty members.
Your research group may also be collaborating with other research groups. These people may be
working directly with you, using similar or complementary techniques, sharing research space
with you, or they may be working at a different location or on a project that does not overlap
with yours. Regardless, it is important to know who the research group members are and how
they are connected to the work you are undertaking.
ASSIGNMENT 2-8:
INDIVIDUAL ASSIGNMENT – MAP THE ORGANIZATION OF YOUR
RESEARCH GROUP
Create a visual depiction, or map, of the research group you are currently working in (or planning to
join). Talk with your research mentor and other lab members to understand what projects are
underway, who are the people involved, and how the research is funded. You might depict one
or more of the following.
• A diagram of the funded projects showing how they are interrelated, who is working
on each, and what funding supports each person/project.
• For a highly collaborative group: this would include how the group collaborates
with other individual researchers, research groups, and institutions across the ongo-
ing projects.
• For an experimental group: the layout of physical lab space, how the experiments are
organized, who utilizes each piece of equipment, and how the projects/people are
funded.
• For a computational group: the research projects that the group has going on and con-
nections between the projects, people, and software being used/developed.
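One possible way to produce such a map is with a short script. The sketch below is a minimal, hypothetical example: the people, projects, and funding sources shown are placeholders that you would replace with the details of your own group, and it assumes you have the open-source Graphviz software and its Python interface (the graphviz package) installed.

from graphviz import Digraph  # assumes: pip install graphviz, plus the Graphviz binaries

# Build a simple left-to-right diagram: funding sources -> projects <- people
g = Digraph("research_group_map", comment="Hypothetical research group map")
g.attr(rankdir="LR")

# Funding sources (placeholders)
g.node("grant", "Federal grant")
g.node("contract", "Industry contract")

# Projects (placeholders)
g.node("projA", "Project A")
g.node("projB", "Project B")

# People (placeholders)
g.node("pi", "Research mentor (PI)")
g.node("grad1", "Graduate student")
g.node("ugrad1", "Undergraduate researcher")

# Connect funding to the projects it supports, and people to the projects they work on
g.edge("grant", "projA")
g.edge("contract", "projB")
g.edge("pi", "projA")
g.edge("pi", "projB")
g.edge("grad1", "projA")
g.edge("ugrad1", "projB")

# Writes research_group_map.pdf to the current directory
g.render("research_group_map", format="pdf", cleanup=True)

Whether you draw the map by hand or generate it with a script such as this, the value of the exercise lies in the conversations you have with your research mentor and lab members to gather the information.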
CHAPTER 3
Becoming a Researcher
3.1 DEVELOPING A RELATIONSHIP WITH YOUR RESEARCH MENTOR
Research groups can be set up in a variety of different ways and range in size from 1 to 100.
You may be working one-on-one with your research mentor or you may be working in more
of a group setting where you meet with your research mentor along with others working on
the same or related projects. In larger research groups you may find that there are researchers
at a variety of different levels. This might include undergraduate students, graduate students,
postdoctoral researchers, engineers, scientists, and faculty members. In some cases, your most
immediate research mentor may be someone at a level just above your own. For instance, you
may be an undergraduate researcher working most directly with a graduate student mentor.
Ultimately, the responsibility for the research group, its direction, and the projects being
pursued rests with the lead faculty member or lead scientist/engineer, sometimes called
the principal investigator or PI. This individual is also your research mentor (maybe you will
think of this person as your Mentor with a capital M), but your interactions with this individual
may be less frequent and may be in a group setting rather than one-on-one. You should not
discount the others in the research group as they may provide you with invaluable information,
advice, and mentoring that could prove to be important to your success.
Student Perspective
“I had some previous research experience at [a] National Lab. … I had a
mentor and a co-mentor that were constantly guiding me. I would meet with
them several times per week to discuss how progress was going and ask [any]
questions that I had. [My] two mentors also had offices right down the hall
from mine and had an open-door policy so I could stop in and ask anything
if I got stuck. This was so helpful to the ease and speed of my workflow. I
could work on my project and when I ran into a problem, I would try to solve
it on my own first, but if I couldn’t figure it out, I could easily consult one of
my mentors for help. Sometimes if they couldn’t figure out the problem, they
would point me in the direction of other researchers around the lab. This was a
neat experience to draw on the expertise of researchers from different groups.
I got to meet new people and learn about what they were working on while
also getting a new perspective on the problem I was originally trying to solve.
Prior to coming to grad school, I had guessed that my advisor would play
a similar role as my mentors at [the National Lab]. This semester has taught
me otherwise. I didn’t take into consideration the seemingly countless other
obligations that grad school advisors have such as teaching, doing their own
research, being active members of academic organizations which causes their
time to be limited. Therefore, I do not have the same two-to-one relationship
as I had at [the National Lab] which makes my work more independent. I
think this is a good, and necessary step for me to take in my research career.
This has made my problem solving skills much better and also has made me
get to know the areas of expertise of the other students and staff members
in my group. I’m learning who can possibly help me depending on the issue
that I have run into.”
The Guides at Your Side
I would be hard pressed to count the number of mentoring relation-
ships I have had over my career. Certainly somewhere in the multiples of
hundreds, if I consider both those where I have been the mentor and those
where I have been the mentee. These relationships have ranged from a few
weeks to decades and have varying levels of involvement, but the common
theme is a goal to help the other learn, evolve and be successful at what they
are trying to accomplish. The more everyone understands the goals and mo-
tivations at the heart of a mentoring relationship, the more successful the
results will be. This relies on communication and working to develop a rap-
port that will ultimately lead to a productive outcome.
Regardless of the size of the group and who specifically is your research mentor(s), you will
need to take an active role in getting the mentoring you need to be successful with your research.
Initially, you will be learning the basics of the project and the techniques you will be using, but
even at this early stage you need to take ownership of your progress. Let your mentor know what
you do know, and what you need help in learning, so that s/he can help you identify the resources
that can assist you. As you gain more experience you are likely to be given more independence,
both in terms of working more independently on specific tasks but also in carrying forward with
the next steps before your next check-in with your mentor.
In the business world this is called “managing up”—making the management of you
as an employee easy for your boss—you can use these same ideas in a mentoring relationship
by “mentoring up.” In an article titled “Making the most of mentors: A guide for mentees,”
the authors1 suggest that you take responsibility for the mentoring relationship by “guiding and
facilitating the mentor’s efforts.” When working with a mentor, you have to figure out what you
need from that person in terms of time, energy, and influence, and help that person to help you.
Your goal is to ask for the help you need in a way that is easy for that person to give it to you. You
may need other things from them—like letters of recommendation for a scholarship/fellowship
for instance—and you need to make them aware of these needs as well as make it easy for them
to meet your needs. Tell them about your goals, and where you want to go with you career. Tell
them what would help you if you know and, if you don’t, ask them what might help you to
achieve your goals.
With your research mentor, determine how regularly you will meet—this may be more
frequent at first and at critical points in the research or your degree process, so you may need to
revisit and renegotiate the frequency of your interactions. If your mentor does not have regular
meetings with you, take responsibility for requesting and scheduling these. Go beyond simply
following through with the tasks that have been assigned to you and think ahead to what should
come next, set goals that you can discuss, generate ideas for overcoming the research obstacles
you have run into, and be responsive to the feedback you receive from your mentor. Most impor-
tantly, when you have an opportunity to interact with your research mentor, you should strive
to be prepared.
• Have a clear plan, at least for the next step of your research.
• Be prepared to discuss what you have accomplished recently and what you plan to do
next.
• Have questions to ask based on your research progress and/or your reading of the lit-
erature related to your project.
• Listen to your research mentor’s responses, and write them down (either immediately
or just after the interaction).
• Act on your plan and the suggestions made to you by your research mentor between
now and your next interaction.
Student Perspective
“It was good to realize that the student is in some way expected and
encouraged to dictate the schedule and flow of meetings. This made me more
confident to meet with my professor and decide what an appropriate pace for
my research is.”
1Zerzan, J. T., Hess, R., Schur, E., Phillips, R. S., and Rigotti, N., 2009. Making the most of mentors: a guide for
mentees. Academic Medicine, 84(1):140–144.
In my experience, the most effective and successful research students come to each meeting
(whether it be in an individual or group format) with results in hand (either on a piece of paper or
in a set of slides on their computer). They have thought about the results and what they mean, are
ready to discuss them or ask questions about them, and have prepared a list of next steps that they
will take. They take notes on what we discuss and what we decide to do (either in a lab notebook
or a computer file). They identify resources they need, or questions that they have, so that I
can help them move the research forward by pointing them in the right direction, connecting
them to a person with the expertise they need, or purchasing something that is required for the
research. These successful research students are also constantly keeping up with the literature,
identifying recent publications that are relevant to their project. They bring those papers to my
attention and they share relevant papers with other members of the research group. They also
keep track of their own degree progress—deadlines for examinations, course requirements for
the degree program, etc. In addition to sharing information with me, they share information
with their peers, and mentor those who come in after them, either formally or informally. The
reason these individuals are so successful is that they have taken ownership of their progress and
help me to help them advance and succeed.
Student Perspective
“The change has been very gradual, but I’m starting to feel confident in
my ability to understand the day-to-day research goals of the research group,
and maybe more importantly, to know what questions to ask and when to
ask them. This is a transition that I think many new researchers go through.
How it often worked for me at first was that, when an unfamiliar topic came
up, I doubted that I even had the technical background to have the means to
learn about it. Not wanting to waste the time of the people who seemed to
be familiar with the subject, I generally kept my questions to myself.”
You can’t expect to know everything when you start a new research project, but
your goals should be to come up to speed quickly and ask relevant questions that will help you
to obtain the background information you need. The worst thing to do is pretend you know
something when you don’t. Your research mentor, and the colleagues you work with, can’t help
you get to where you need to be if they don’t know that you are lost. Phil Dee, who wrote the
book Building a Successful Career in Scientific Research, highlights this as a foundational element
providing “the ground rule” for your relationship with your research mentor: “communicate with
your boss.2”
If you step back and think about it, you will see that academic research is a symbiotic
relationship between the research mentor and the student. You and your research mentor must
2Dee, P., 2006. Building a Successful Career in Scientific Research: A Guide for PhD Students and Postdocs. Cambridge Uni-
versity Press.
depend on each other. In other words, it is mutually beneficial for you to be successful. My
colleague, Prof. Irving Herman at Columbia University, wrote a somewhat tongue-in-cheek
guide for graduate students in which he espoused “The Laws of Herman.”3 Several of the
laws are about this symbiotic relationship between you and your research mentor, the
last two being “Whatever is best for you is best for your advisor” and “Whatever is best for your
advisor is best for you.” That is, your success is to everyone’s advantage.
Both Dee and Herman also bring up the topic of writing, which is a critical skill for every
researcher at every level. If it is not something that you feel you are good at yet, don’t worry,
you will have many opportunities to practice and you will become better the more you write!
If you take my advice above about preparing for meetings with your research mentor, you will
automatically be writing something about your research. It may be in a bullet-point list initially,
but if you save these regular meeting notes you will find that later on, when you are at the stage
of writing about your research, you can go back to these notes for reference and turn portions of
your notes into sentences and even paragraphs. The other advantage of this chronological archive
of information you have created along the way is that it can help you to refresh your memory
about what you did to get to where you are, and the questions you were posing and answering.
Although a thesis or journal article that you will write is not a historical recounting of every step
and misstep that you took, a review of this information can help you to see the larger picture of
your work.
ASSIGNMENT 3-1:
INDIVIDUAL ASSIGNMENT – REFLECTIVE WRITING ON GIVING AND
RECEIVING FEEDBACK
Write a one-page reflection on giving and receiving feedback. First, describe a time that you
received feedback that you (ultimately) found valuable. Discuss how you reacted to it at the
time and how you looked back on this feedback as time passed. Then, describe a time that you
provided feedback to someone else. Discuss the reaction/response you observed in the other
person at the time and as time passed. Also discuss how you would have reacted if someone had
provided you that same feedback in the same way.
3Herman, I. P., 2007. Following the Law. Nature, 445, 11.
ASSIGNMENT 3-2:
INDIVIDUAL ASSIGNMENT – REFLECTIVE WRITING ON DEE’S RULES
Consider the following “rules” from Phil Dee’s chapter on “Choosing and Handling your Ph.D.
Adviser”4:
Rule 1: The ground rule: communicate with your boss.
Rule 2: Keep your boss informed.
Rule 3: Discover what makes your boss tick.
Rule 4: Earn your boss’s respect.
Rule 5: Assert yourself.
Rule 6: The golden rule: write for your boss.
Note that Phil Dee uses the term boss, where others use advisor, and this book most
commonly uses research mentor. Although these terms can have different connotations, we are
each talking about the same person (boss = advisor = research mentor).
Choose one of the rules above and discuss how you have seen this apply to your own
research experience (or how you would expect it to emerge in a future research experience).
3.2 ALIGNING EXPECTATIONS
Some research opportunities and relationships come with clearly outlined expectations, but this
is not always the case. When it is not discussed up front, it is up to you to seek clarification.
Formalizing the relationships and the expectations can often be helpful. Many faculty are begin-
ning to use written agreements with their students called variously a mentoring contract, mentor
agreement, research agreement, or advising statement.5;6 These can cover a wide range of topics,
but the basic intent is for both the mentor and the mentee to understand the expectations of the
relationship for the duration of the degree program.
4Dee, P., 2006. Building a Successful Career in Scientific Research: A Guide for Phd Students and Postdocs, Cambridge Uni-
versity Press.
5Branchaw, J., Pfund, C., and Rediske, R., 2010. Entering Research: A Facilitator’s Manual: Workshops for Students Beginning
Research in Science. WH Freeman.
6Masters, K. S. and Kreeger, P. K., 2017. Ten simple rules for developing a mentor—mentee expectations document.
PLoS Computational Biology, 13(9). e1005709. https://doi.org/10.1371/journal.pcbi.1005709 and https://doi.
org/10.1371/journal.pcbi.1005709.s001.
These expectations usually revolve around the topics of:
• shared goals including your career goals and what will be needed for you to achieve them;
• research skills you will need to develop to complete your project;
• work hours (number, time of day, days of week), work/life balance, and vacation time;
• graduate assistantship stipends, type of funding over time (e.g., RA vs. TA), and summer support;
• degree progress milestones and deadlines/goals for when they will be achieved;
• fellowship applications and grant writing assistance;
• expectations for documentation of research, publication, and authorship; and
• conflict resolution.
Traditionally, the alignment of expectations has either been done informally or not at all.
When it does occur informally, it likely happens over the course of time. Regardless of
whether it is informal or formal, if your mentor does not embark on a conversation about these
topics with you, it is something you will need to bring up. It can be anxiety provoking to be in
the dark about what is expected of you. Understanding your mentor’s expectations will
help you to meet them, but equally important, having your mentor understand your goals will
help you to achieve them.
Student Perspective
“Over the meetings I’ve had with my research mentors, I’ve learned
that the expectations they have for me and skills they suggest I work on de-
veloping seem to be centered around the idea of taking the time necessary to
carry out my research carefully.”
3.3 DEVELOPING EXPERTISE
In your pursuit of a research career, you will be transitioning from a novice learner to an expert
in your chosen area of focus. But we should consider what is meant by the term expert. An
expert is not someone who knows all the answers. An expert has significant knowledge on a
topic, appreciates which knowledge is applicable in the given situation, and can solve problems
with seemingly little effort. An adaptive expert is someone who approaches a new situation
flexibly, applies their existing knowledge and skills, but is always seeking to learn more. It is
important to recognize that experts must be lifelong learners to maintain and strengthen their
expertise.
The U.S. National Research Council undertook an effort to link the research and practice
on the topic of learning, which culminated in a seminal book titled How People Learn7 (cited well
over 22,000 times in the literature). Key principles they summarize on the topic of how experts
differ from novices include the following.
• “Experts notice features and meaningful patterns of information that are not noticed by novices.”
• “Experts have acquired a great deal of content knowledge that is organized in ways that reflect a deep understanding of their subject matter.”
• “Experts’ knowledge cannot be reduced to sets of isolated facts or propositions but, instead, reflects contexts of applicability: that is, the knowledge is “conditionalized” on a set of circumstances.”
• “Experts are able to flexibly retrieve important aspects of their knowledge with little attentional effort.”
Everyone is a Novice
We are all novices and experts; it just depends on the topic area. I am
still in the novice learner stage when it comes to baseball. I know the basics
of the game, but when a fast and complex play occurs, I don’t always follow
exactly what happened and can’t begin to predict the outcome. Happily, when
I’m at one of my son’s games, other parents watching the game with me are
more expert. They are happy to explain what happened so that I can further
develop my knowledge of the game. I won’t ever have the expertise of a long-
time player, but I’m working on becoming a more expert fan!
For instance, expert mathematicians notice patterns and identify classes of problems in
order to develop an approach to a solution. They have not only solved many problems before,
but they have also stepped back to consider the underlying principles of each problem and how
solutions can be classified. This is the opposite of what I often see in novice engineers—they
are often too quick to throw down an equation and immediately start plugging in numbers. As
a student begins to refine and improve their approach, they find that they are most successful
when they first look at a problem and think about what category of approach might work best,
then work with the appropriate equations and manipulate them, and finally, at the end, plug
in values and find a numerical solution. When using this more advanced approach, students
7National Research Council, 2000. How People Learn: Brain, Mind, Experience, and School–Expanded Edition, National
Academies Press.
find that the parallels which can be drawn between problems become more obvious because the
patterns become more recognizable.
I advocate that students start developing their intuition about problem solving in a new
area of learning by developing an initial “guess” associated with specific problems—Do you ex-
pect it to be positive or negative? What magnitude would it be? What units? Then at the end
of the problem, you check your solution back against the guess. If they agree, and the answer is
confirmed, then you can build confidence in your intuition. If they don’t agree, then either your
guess was off or you made an error in your solution. If your guess was wrong, the solution process
may shed light on where your intuition was off. If you feel confident in your guess, then it may
help you to identify where you made an error in your solution. The process of thinking about the
problem up front, and the retrospective analysis of the solution, will help you to advance toward
more expert thinking. This does not just apply to coursework, it applies to research as well. You
should have a hypothesis (a guess) before you begin, and you should design your research to
explore that hypothesis to prove or disprove it.
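As a hypothetical illustration of this guess-then-check habit (the numbers here are invented for the example and are not drawn from any particular course or project): suppose you need the time for an object to fall h = 20 m from rest. A quick estimate with g ≈ 10 m/s² gives t ≈ √(2h/g) = √(2 × 20/10) = 2 s, so you expect a positive answer of roughly two seconds, expressed in seconds. Computing with g = 9.81 m/s² gives t = √(2 × 20/9.81) ≈ 2.0 s, which agrees with the guess in sign, magnitude, and units; a large mismatch would instead send you back to re-examine either your intuition or your calculation.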
Whether it is coursework or research, it takes an investment of time to develop your skills
and begin to work toward expertise. Time on task has a big impact. But you must seek more
than superficial knowledge. You need to develop expert knowledge that is both conditionalized
on the context and centered around big ideas.8 As you build expertise in an area you will notice
that your performance will become automatic and fluid.
ASSIGNMENT 3-3:
INDIVIDUAL ASSIGNMENT – REFLECTIVE WRITING ON EXPERTISE
Consider areas of expertise that you have observed in others (such as your research mentor, others
in your research group, classmates, etc.). Reflect on areas of expertise that you are developing
with respect to your research. What can you do to either deepen your expertise in one of these
areas, or develop expertise in an area you recognize as important to your research?
3.4 DEVELOPING YOUR OWN IDENTITY AS A RESEARCHER
As you engage in your academic career and through your experience as a researcher, one of your
goals should be to become an independent, critical thinker. In your academic pursuits (and in
life) you are on a continuous journey of learning. This journey is facilitated by self-awareness,
reflection, and authentic experiences that will prepare you for where you will go next.
8National Research Council, 2000. How People Learn: Brain, Mind, Experience, and School–Expanded Edition, National
Academies Press.
Self-awareness means knowing your own strengths and weaknesses, knowing what ex-
cites you and makes you curious, and knowing how you handle yourself when faced
with both success and failure.
Reflection involves you taking the time to think about things that happen to you in
life. How did you handle a challenge? How did you react to praise or criticism when
it was given? How might you do things differently the next time? Reflection can be
done all inside your own head, by journaling your experiences, or by talking to others
in thoughtful conversation.
Authentic experiences are real-world activities that give you an actual taste of what it
is like to do something. Sometimes this can be accomplished through a class assign-
ment, but most often this means getting out into the professional world and trying
your hand at something. In addition to research experiences, other valuable authentic
experiences include internships at a company or national laboratory, and volunteer
opportunities like Engineers Without Borders. What you will gain from these ex-
periences will be not only technical experience but also knowledge about yourself:
who you are and who you want to be.
Becoming who you want to be can be thought of as self authorship. Marcia Baxter Magolda
defines the term self authorship, or internal identity, as “simultaneously a cognitive (how one
makes meaning of knowledge), interpersonal (how one views oneself in relationship to oth-
ers), and intrapersonal (how one perceives one’s sense of identity) matter.9” As you develop as
a person and as a researcher, you will rely more on your own interpretation of data rather than
on the interpretations of others; you will also begin to interact with your peers and research
mentor(s) as a junior colleague rather than as a student, and you will begin to develop your own
identity as a researcher. Inherent in becoming a self-authored individual, and critical to your success
as a researcher, will be your ability to realize that “the complexity of the world simultaneously
requires systematic thinking, the ability to judge knowledge claims offered by authorities, con-
structing convictions, and openness to new possibilities.10” All of this may seem a tall order at
the moment, but moving from dependence on authority, whether that authority is professors
or parents, to self authorship is important in both your professional and private lives.
Student Perspective
“I think this is what’s most important for engineers in their capacity
for self authorship. Through their education, they are able to form their own
opinions, think critically, and problem solve. However, this means nothing
9Baxter Magolda, M., 1999. Creating Contexts for Learning and Self-Authorship: Constructive–Developmental Pedagogy,
Vanderbilt University Press, p. 10.
10Baxter Magolda, M. and King, P. M., 2004. Learning Partnerships: Theory and Models of Practice to Educate for Self Au-
thorship, Stylus, Sterling, VA, p. 3.
if they are unable to share these opinions, listen to others, or form lasting
personal and professional relationships.”
The ultimate goal of higher education is to produce learners that have the following ca-
pacities.11
• “Cognitive maturity, characterized by intellectual power, reflective judgment, mature decision making, and problem solving in the context of multiplicity.
• An integrated identity, characterized by understanding one’s own particular history, confidence, and capacity for autonomy and connection, and integrity.
• Mature relationships, characterized by respect for both one’s own and others’ particular identities and cultures and by productive collaboration to integrate multiple perspectives.”
ASSIGNMENT 3-4:
INDIVIDUAL ASSIGNMENT – REFLECTIVE WRITING ON YOUR
DEVELOPMENT AS A RESEARCHER
An important part of becoming a mature, independent researcher is discovering yourself. What
interests and excites you? What motivates you?
Write a two-page self-evaluation on your development as a researcher. Reflect on where
you have been, where you are now, and what you will work on next in your development as a
researcher.
Revisit this assignment periodically. First complete the writing assignment described
above, and then look back on previous assignments to remind yourself of where you have been
and how you are developing.
3.5 TRACKING YOUR DEVELOPMENT AS A RESEARCHER
Initially, it may not seem relevant to track your progress in research, but in the long run it
can prove exceptionally helpful. In particular, if you have some long term goals in mind (like
submitting a research paper for publication, earning your Ph.D., or getting a faculty position)
you can break down the larger goals into smaller steps that you will need to take, and track your
progress along the way.
11Baxter Magolda, M. and King, P. M., 2004. Learning Partnerships: Theory and Models of Practice to Educate for Self Au-
thorship, Stylus, Sterling, VA, p. 6.
Student Perspective
“With this being my first semester in a lab, it has been a large learn-
ing curve and looking at it now, it really puts into relief all the skills I need
to further develop. I started out having to learn the safety procedures, loca-
tion of everything in the lab, and other basics. Whether it was following the
pipette rules and maintaining a clean working environment, it was all part of
the learning curve. Evaluating now, it is evident the skills I need to develop.
My basic laboratory skills are quite sufficient, but there is a large amount of
equipment I will need to know how to use.”
As a researcher, there are a variety of things that you will need to focus on, and master, in
the years to come:
• a knowledge of the discipline in general and your specific subdiscipline specialty;
• a basic understanding of, and experience in, the steps and techniques of engineering research;
• ability to employ the scientific habits of mind that engineering research requires;
• awareness of ethical, social, political, and economic influences on, and impacts of, engineering research;
• skills in written and oral technical communication; and
• skills in collaboration and teamwork.
An Individual Development Plan (IDP) can help you to make progress on several fronts.
For example, the American Association for the Advancement of Science (AAAS) has developed
an online tool called My IDP available at http://myidp.sciencecareers.org/. For early-
stage researchers the tool is helpful for identifying your skills, interests, and values and providing
you with career paths that may be a good fit for you. Additionally, this tool helps researchers
at various career stages in goal setting in areas like skill development, project completion, and
career advancement. The SMART goal strategy emphasizes creating goals that are “specific,
measurable, action-oriented, realistic, and time-bound,” hence the acronym.
Using the “SMART” Principle12
S—Specific—Is it focused and unambiguous?
M—Measurable—Could someone determine whether or not you achieved it?
12Fuhrmann, C. N., Hobin, J. A., Clifford, P. S., and Lindstaedt, B., 2013. Goal-setting strategies for scientific and career
success. Science, AAAS, Dec. 3. http://www.sciencemag.org/careerscareersresearch/2013/12/goal-setting-
strategies-scientific-and-career-success. Accessed January 2018.
A—Action-oriented—Did you specify the action you will take?
R—Realistic—Considering difficulty and timeframe, is it attainable?
T—Time-bound—Did you specify a deadline?
Achieving your goals will take an investment of time, but you will eventually be able to see
gains. You will begin to understand more of the seminar talks you attend and the journal articles
you read. You will gain the ability to operate independently and more efficiently. You will begin
to contribute new ideas to the research conversations you engage in. You may end up develop-
ing specialty expertise that others in your research group rely on. As you invest more time and
intellectual energy in your research, you will begin to see payoffs in terms of progress and
recognition for your research accomplishments.
ASSIGNMENT 3-5:
INDIVIDUAL ASSIGNMENT – QUALITIES OF A SUCCESSFUL
RESEARCHER
List ten qualities that you will need to be a successful researcher. How far along in your devel-
opment are you in achieving these qualities? How can you go about developing these qualities
further?
ASSIGNMENT 3-6:
INDIVIDUAL ASSIGNMENT – REFLECTIVE WRITING ON MENTOR
FEEDBACK
Individuals grow accustomed to receiving and accepting feedback in different ways and may react
differently to the feedback depending on who it comes from, the context in which it is provided,
and their mood at the time of receiving it. Some of us are more effective at either or both giving
and receiving advice than others, but we can all become better. Before asking for feedback on
your work or your development as a researcher, prepare yourself to receive it openly. Receiving
feedback can be easier to handle when you ask for specific feedback on an area you already know
you want to improve on and put the feedback to good use right away.
If you have a research mentor, request a meeting to receive feedback regarding the self-
assessment above. Choose at least one area where you believe you have demonstrated strength,
and at least one area where you believe you need additional development that you are ready to
undertake.
Prior to the meeting, consider how you can be open to receiving the feedback you are
going to receive. After the meeting, write a one-page reflection about what you heard and your
reaction to it.
Be sure to express gratitude for the time your mentor takes with you to discuss this topic.
ASSIGNMENT 3-7:
INDIVIDUAL ASSIGNMENT – SELF ASSESSMENT
Use the “Evaluation of Research Progress and Researcher Development” rubric in Table 3.1 to
conduct a self-assessment. It covers a range of skills, from Research Documentation to Stress
Management. Your specific research project will also require specific skills, so space is provided
at the end of the self-assessment for you to define these and track your progress on their mastery.
Note that the skill levels are cumulative (from Beginning, to Developing, and finally to
Mastery). If you have Mastery in an area, you will have demonstrated the items listed under
Beginning and Developing, as well as Mastery.
As you enter into research, it is likely that you are at the Beginning stage in most, if not all,
areas. Take a look ahead to the next level and see what items you should be working on in your
development. If you don’t know how to make progress on this next level, it is likely that your
research mentor will be able to give you some strategies for making progress.
Periodically assess yourself for your research progress and development as a researcher
(Table 3.1). Consider sharing the assessment with your research mentor to prompt discussion
about where you are in your development as a researcher and how you can make progress in areas
you would like to improve in. Although you may be able to achieve mastery in some areas during
your degree progress, other topics may be something you work on throughout your career.
3.6 BEING AN EFFECTIVE TEAM MEMBER AND COLLABORATOR
Being an effective team member requires a wide range of social and organizational skills. You
may already come equipped with many of these skills given your prior experience, but there are
likely areas in which you may need to gain experience or improve on your current capabilities.
One of the earliest aspects of being a good team member that you may encounter is the
etiquette and expectations of participating in a team meeting. In the context of research, this
comes up in research group meetings or lab meetings. There are several strategies that you can
employ to determine the appropriate type and level of engagement expected of you. One
basic strategy is simply to ask the question of those who are already in the know. This can be
posed to your research mentor, and you can also ask other research group members to give
their impressions of the expectations.
Table 3.1: “Evaluation of Research Progress and Researcher Development” rubric (Continues.)
Evaluation of Research Progress and Researcher DevelopmentMilestones and TimelineTh e ability to set realistic goals and use time and resources eff ectively; to obtain the maximum benefi t from a minimum investment of time and resources.r BeginningDemonstrated by:• focusing on tasks at hand without dwelling on past mistakes• completing assignments on time• making use of reference books and literature • coordinating and working with others on group project assignments• preparing for scheduled appointment times • using unscheduled time effi cientlyr DevelopingDemonstrated by:• planning ahead• setting up an eff ective schedule• coordinating schedule with others• demonstrating fl exibility • moving forward when mistakes are made• accepting responsibility in group activities• identifying alternative resources• using library and internet resources eff ectively• updating solutions based on review of available literaturer MasteryDemonstrated by:• setting priorities and reorganizing as necessary• performing multiple tasks simultaneously• delegating when appropriate• following up on projects in a timely manner• managing meeting time eff ectively• considering professional goals in the context of project• demonstrating the ability to say “no” if requests made in confl ict with set goals• actively seeking resources to solve problems or answer questions• using limited resources creativelyResponsibilityTh e ability to fulfi ll commitments and to be accountable for actions and outcomes.r BeginningDemonstrated by:• being punctual• completing tasks on time • following through on commitments• accepting responsibility for own actions and outcomes• recognizing own limitsr DevelopingDemonstrated by:• providing constructive feedback to the appropriate person(s)• offering and accepting help• completing projects without prompting• contributing to the provision of a safe and secure environmentr MasteryDemonstrated by:• promoting education• accepting leadership roles• delegating as necessary50
3. BECOMING A RESEARCHER
Professionalism: The ability to exhibit appropriate professional conduct and to represent the profession effectively.
☐ Beginning. Demonstrated by:
• following University, Department, and research group policies
• demonstrating honesty, integrity, and respect to others
• seeking opportunities for leadership
• demonstrating an awareness of the professional role of the engineer in society
☐ Developing. Demonstrated by:
• participating in professional activities/organizations
• identifying positive professional role models
• discussing societal expectations of the engineering profession
• awareness of the impact of ethical issues and legal issues on the engineering profession
• acting on moral commitment
☐ Mastery. Demonstrated by:
• acting in a leadership role
• actively participating in professional organizations
• actively promoting the engineering profession
• advancing the engineering profession outside of the academic program

Commitment to Learning: The ability to self-assess, self-correct, and self-direct; to identify needs and sources of learning; and to continually seek new knowledge and understanding.
☐ Beginning. Demonstrated by:
• identifying problems
• identifying needs for further information
• formulating appropriate questions
• identifying and locating appropriate resources
• attending class consistently
• showing evidence of preparation prior to class
• showing attentiveness
• demonstrating a positive attitude toward learning
• participating in small groups
• offering own thoughts and ideas
☐ Developing. Demonstrated by:
• identifying own learning needs based on previous experiences
• setting personal and professional goals
• seeking new learning opportunities
• seeking out professional literature
• prioritizing information needs
• reconciling differences in opinions or information
• analyzing and subdividing large questions into components
• demonstrating confidence in presenting material
☐ Mastery. Demonstrated by:
• researching and studying areas where knowledge base is lacking
• reading articles critically and understanding limitations
• accepting that there may be more than one answer to a problem
• recognizing the need to verify and then verifying solutions to problems
• formulating and re-evaluating position based on available evidence
• demonstrating confidence in sharing new knowledge
Communication Skills: The ability to communicate effectively (i.e., speaking, body language, reading, writing, listening) for varied audiences and purposes.
☐ Beginning. Demonstrated by:
• understanding and applying English (verbal, written, grammar, spelling, expression)
• communicating appropriately per situation
• providing appropriate feedback to team members and faculty
• recognizing differences in communication styles
• recognizing impact of non-verbal communication: maintaining eye contact, listening actively
☐ Developing. Demonstrated by:
• modifying communication when necessary
• reflecting, clarifying, and restating messages
• utilizing non-verbal communication to augment verbal messages
• exhibiting appropriate communication per situation
• maintaining quality in written work
• maintaining quality in oral work
• utilizing technology
☐ Mastery. Demonstrated by:
• modifying written and verbal communication to meet needs of various audiences
• presenting verbal or written messages with logical organization and sequencing
• maintaining open and constructive communication
• communicating professional needs and concerns
• utilizing communication technology effectively

Interpersonal Skills: The ability to interact effectively with faculty research mentor, scientific staff, graduate students, team members, and other department personnel, and to deal effectively with cultural and ethnic diversity issues.
☐ Beginning. Demonstrated by:
• maintaining attentive behavior
• demonstrating acceptance of limited knowledge and experience
• communicating with others in a respectful, confident manner
• appropriate behavior in discussion
• maintaining professional demeanor in interactions
• respecting differences in others
• recognizing impact of non-verbal communication
☐ Developing. Demonstrated by:
• seeking to gain knowledge and input from others
• assuming responsibility for own actions
• establishing trust and motivating others
• recognizing impact of non-verbal communication and modifying accordingly
• discussing problems with the appropriate person(s)
☐ Mastery. Demonstrated by:
• approaching others to discuss differences in opinions
• talking about difficult issues with sensitivity and objectivity
• responding effectively to unexpected situations
• delegating to others as necessary
Use of Constructive Feedback: The ability to identify sources of feedback, to seek out feedback, and to effectively use and provide feedback for improving personal interaction.
☐ Beginning. Demonstrated by:
• using active listening skills
• showing a positive attitude
• critiquing own performance
• maintaining two-way communication
• actively seeking constructive feedback and assistance
☐ Developing. Demonstrated by:
• assessing own performance accurately
• seeking, accepting, and integrating feedback from others
• developing a plan of action in response to feedback
☐ Mastery. Demonstrated by:
• considering multiple approaches when responding to feedback
• modifying feedback given to others according to their learning styles
• engaging in non-judgmental, constructive, problem-solving discussions
• reconciling differences with sensitivity

Critical Thinking: The ability to question logically; to identify, generate, and evaluate elements of logical argument; to recognize and differentiate facts, illusions, assumptions, and hidden assumptions; and to distinguish the relevant from the irrelevant.
☐ Beginning. Demonstrated by:
• considering all available information
• recognizing gaps in knowledge base
• articulating ideas/problems
• raising relevant questions
☐ Developing. Demonstrated by:
• understanding scientific method
• critiquing hypotheses and ideas
• formulating alternative hypotheses and ideas
• examining new ideas
• being able to distinguish relevant from irrelevant information
• recognizing fact vs. opinion
☐ Mastery. Demonstrated by:
• exhibiting an openness to contradictory ideas
• assessing issues raised by contradictory ideas
• justifying selected solutions
• determining effectiveness of applied solutions
• identifying complex patterns of associations
• demonstrating intuitive thinking
• distinguishing when to think intuitively vs. analytically
• recognizing own biases and suspending judgmental thinking
• challenging others to think critically
Scientific Literacy: The ability to use processes and skills of science to conduct investigations; to recognize and define problems, analyze data, develop and implement solutions, and evaluate outcomes.
☐ Beginning. Demonstrated by:
• recognizing problems
• identifying questions
• knowing the basic steps of the problem-solving process (stating the problem, describing known solutions, identifying resources needed to develop solutions, beginning to examine multiple solutions to the problem)
• seeking to fill gaps in knowledge
• understanding differences between primary, secondary, and other sources
☐ Developing. Demonstrated by:
• distinguishing between fact and hypotheses
• applying the problem-solving process
• prioritizing problems
• consulting with others to clarify the problem
• identifying contributors to the problem
• accepting responsibility for implementing solutions
• considering consequences of possible solutions
• generating alternative plans when difficulties or obstacles present themselves
☐ Mastery. Demonstrated by:
• forming possible solutions
• designing a data collection scheme and collecting data
• drawing conclusions about the validity of the possible solution
• seeking alternative hypotheses and contradictory ideas
• evaluating outcomes
• reassessing solutions

Research Documentation: The ability to effectively document research approach, progress, hypotheses, and outcomes.
☐ Beginning. Demonstrated by:
• recording research findings
• identifying methods used
☐ Developing. Demonstrated by:
• keeping record of research progress
• writing out steps to possible solution
• providing documentation that others can follow
☐ Mastery. Demonstrated by:
• describing thought processes, hypotheses, and outcomes
• supporting methods chosen with literature references
• using project management tools to stay on task
Stress Management: The ability to identify sources of stress and to develop effective coping behaviors.
☐ Beginning. Demonstrated by:
• recognizing own stressors or problems
• recognizing stress or problems in others
• seeking assistance as necessary
• demonstrating appropriate responses
• maintaining professional demeanor
☐ Developing. Demonstrated by:
• accepting constructive criticism appropriately
• handling unexpected changes appropriately
• maintaining balance between professional and personal life
• establishing outlets to cope with stressors
☐ Mastery. Demonstrated by:
• recognizing when problems are unsolvable
• demonstrating a preventive approach to stress management
• offering solutions for stress reduction
• assisting others with stress
• establishing a support network
• prioritizing multiple commitments
• tolerating inconsistencies
• responding calmly to urgent situations

Project-Specific Research Skill 1
☐ Beginning. Demonstrated by:
☐ Developing. Demonstrated by:
☐ Mastery. Demonstrated by:
Project-Specific Research Skill 2
☐ Beginning. Demonstrated by:
☐ Developing. Demonstrated by:
☐ Mastery. Demonstrated by:

Project-Specific Research Skill 3
☐ Beginning. Demonstrated by:
☐ Developing. Demonstrated by:
☐ Mastery. Demonstrated by:
The expectations may include coming prepared with certain materials in advance, listening attentively and asking for clarification at key points, asking critical questions
about the topic at hand, or contributing to the discussion by giving thoughts and ideas. It is
better to ask in advance of the meeting what the expectations are so that you are not unprepared;
however, you can also simply attend a meeting or two and use your observation skills to
take careful note of how the interactions work, who is expected to speak, and about what
topics. It is highly likely that the expectations for your participation initially will be much lower
and will increase with your experience and time within the group. As you get more comfortable
in the group you may find that you are talking more. However, always be certain that your
contributions are succinct, meaningful, and on topic so that the meeting maintains forward
momentum and you do not waste other people’s time. Although you may feel “outranked”
because of your limited experience, don’t discount the insights that can come from a
new person’s prior experience or simply a fresh set of eyes on the topic.
In general, you want to be respectful of your research mentor’s and other team members’
time. This means that you should arrive on time when a meeting or event has been scheduled,
and you should come prepared. In the case of a one-on-one meeting with your research mentor,
this means having thought through what outcomes you would like to get from the meeting,
as well as what your research mentor will expect to learn from you about your progress. If you
will be presenting a literature review or some results from your own research, you will want to
have this material organized and ready to discuss. If you need to use audio-visual
equipment, you will want to arrive at the room in advance and have everything set up and ready
to go so that meeting time is not wasted while you’re trying to get the equipment to work.
Outside of meetings you are likely to have a variety of different kinds of interactions with
other team members. If you need help from someone, it is perfectly reasonable to ask for it,
but you should strive to make it as convenient as possible for them to provide it to you. If you
will be trained on a piece of equipment or a technique, ask if there is information that would
be helpful for you to read in advance so that you can come better prepared. When you are
being trained, give it your undivided attention and take notes. Ask clarifying questions and for
repetition if necessary, so that you can minimize the likelihood that you will need to return
for repeated training.
Student Perspective
“There was a very frustrating period of time where I wasted a lot of time
trying to make the experiment work with my limited knowledge … when all
I had to do was ask a grad student for advice. Back then, I often forgot that
scientific research is a collaborative effort and that asking others for help can
often save you a lot of time.”
Often collaborative research requires interdependencies between your research outputs
and the input needs of others in your research group. It can be complicated to move the re-
search forward in an efficient manner. A high level of communication is required so that each
researcher understands exactly what is needed and what is being promised. It is also important to
have a clear understanding of the time frames in which activities will be taking place. What you
might consider to be a small delay could negatively impact someone else’s later progress more
significantly. A shared calendar, project management timeline, and/or scheduling software can
assist in making certain that everybody is aware of the timing of particular project milestones
and deliverables so that the research can be kept on track and moving forward. Although your
research mentor may provide this for you, if it is not already available you should consider con-
sulting with your research mentor, and other group members, about putting something in place
that would be helpful to everyone.
As your engagement in research progresses, you may need to lead a meeting. That may be
an informal meeting between members of your own group to resolve an issue, or a meeting with
members of another research group to coordinate efforts in a joint project. Whether formal or
informal, you should come to the meeting with clear goals in mind and the order in which you
would like to address topics. A written agenda is often helpful. Taking time to prepare an agenda
in advance can make the meeting run smoothly and ensure that you will accomplish your goals.
It also allows everyone to see the plans for the meeting and make any needed adjustments to the
order and topics up front.
Facilitating a productive and respectful discussion can sometimes be challenging. Al-
though it may seem too formal to start the meeting by agreeing to a set of ground rules, you
can insert them into the meeting when needed. For instance, if the conversation is going too
far off topic, you can bring people back by saying something like “that’s interesting, but we have
limited time today so we’ll need to stick with our agenda to get done.” When you are leading
a meeting one of your responsibilities is to ensure that everyone has an opportunity to provide
input and express their opinions. If one of the group members is getting talked over, or ignored,
you can say “we need to be sure we hear from everybody on this topic, let’s go around the room
and get each person’s input.” There are numerous ways to be effective, one strategy is to watch
others who are effective where you want to build skill and emulate them.
Being an effective team member also means getting to know the others on the team.
A basic understanding of the other individuals you interact with can help to reduce friction
and avoid conflict. For instance, something simple like asking about people’s music preferences
before playing your favorites at high volume in the lab can help to avoid irritation of other group
members. Or if you are always requesting to meet in the early evening when a team member
needs to pick up a child at day care, you could come across as insensitive even though you had no
intention of being so.
Even if you do take preventive measures, conflict can still come up. Rather than avoiding or
ignoring the situation, you can often achieve a better result by addressing the issue sooner rather
than later. Approach the individual or group with openness and seek to understand the issue. It
is likely that your effort will be appreciated, and you can work together to find an appropriate
resolution.
3.7 WORKING WITH A DIVERSE RESEARCH TEAM
Depending on your prior experience and how well aligned it is with the atmosphere of the
research group that you are joining, you may find that you have some adjustments to make as
you begin to engage in a new research project. You may come to research with the idea that you
will be the “lone genius” who operates entirely independently. This is exceptionally rare, and
not particularly realistic when one is just beginning to engage in research. Most engineering
research is conducted in a team environment. It may be a team of two—you and your research
mentor—but more often it is a team of several or many. The team usually includes a faculty
member (or members) and graduate students. Many also incorporate undergraduate researchers,
postdoctoral researchers, and scientists. These people may all be in the same building at the
same institution, they may be spread across a campus, or they may be distributed at different
institutions across the country or even throughout the world. There are good reasons for this.
Teams of people are able to tackle more complex and broader reaching research problems.
Because of how research is organized, you will inevitably need to work
with others effectively. Not everyone in your research team will come to the group with the
same background and experiences. If you think about even the most apparently homogeneous
group of people you have interacted with, you can identify ways in which the group is diverse—
for example the people in the group may look like each other but they may practice different
religions, identify with different political groups, or have spent their childhoods being raised in dif-
ferent environments. Each one of these differences gives the group broader experiences to draw
from, and if it is a group of engineers this diversity may influence the way in which design de-
cisions are made or research problems are posed. Ideally, we would strengthen the diversity of
our engineering work groups to include people from a wide range of different backgrounds, and
have diversity along many other spectrums, such as gender, race, etc. Companies recognize this and
hire with diversity in mind because research has shown that diverse groups are more produc-
tive, creative, and innovative.13 This is true for engineering research environments as well as
engineering design. We all benefit from the higher-quality ideas—in terms of feasibility and
effectiveness—that are produced by diverse groups and the critical analysis of alternatives when
a wider variety of viewpoints is discussed.
13Women in Science and Engineering Leadership Institute, “Benefits and Challenges of Diversity,” University of
Wisconsin–Madison, http://wiseli.engr.wisc.edu/docs/Benefits_Challenges.pdf.
Student Perspective
“Education has also taught me a great deal about relationships with
other people. Specifically how to work with others that may not share the
same viewpoint as your own. Particularly in the field of research, tolerance of
everyone’s ideas is critical for success.”
In order to build and maintain an effective diverse team we need to recognize some things
about human nature. Whether we like it or not, we all carry unintentional biases (also called
implicit biases) that are “habits of mind” and are influenced by where we have grown up and
spent our lives. Harvard University psychology researcher Prof. Mahzarin Banaji was quoted as
saying “Implicit biases come from the culture. I think of them as the thumbprint of the culture
on our minds.”14
As an example, if someone is asked to list the stereotypical characteristics of a man, they’ll
come up with many of the following: tall, physically strong, respected, intelligent, has high
status, leaders, sexist, like sports, providers, aggressive.15 However, even though we can list these
stereotypes (women and men carry the same stereotypes in their mind about women and men)
it does not mean we believe all men have these characteristics. We know that any individual man
does not embody all, or even most, of these and I am certain that we could find some men who
don’t display any of the characteristics on the list. Similarly, the stereotypical characteristics of
women can be listed: emotional, caring, soft, care about appearance, talkative, small built/petite,
submissive, dependent, motherly, feminine.16 But again, we don’t expect that every woman we
meet will conform to these characteristics. And it is not just gender at play. We hold numerous
biases about all sorts of things like race, ethnicity, age, country of origin, etc.
The problem comes when we make quick decisions or have limited information. When
we do this we fall back on our stereotypes. Say there is an election for county sheriff and all you
know about the slate of candidates is that one has a male name and the other has a female name.
The responsible thing to do would be to not vote without knowing more information, but many
people will vote anyway, and with such limited information the stereotypes may have influence: we tend
to think of police officers as needing to be physically strong, and in the role of sheriff they would
have to serve as a leader. These are two characteristics we more readily associate with men than
with women. These associations could push the voter toward the male candidate, even though
we know nothing about the actual qualifications of the two individuals running in the election.
14Hill, C., Corbett, C., and St. Rose, A., 2010. Why so few? Women in science, technology, engineering, and mathematics,
American Association of University Women. Washington, DC.
15Ghavami, N. and Peplau, L. A., 2013. An intersectional analysis of gender and ethnic stereotypes: Testing three hy-
potheses. Psychology of Women Quarterly, 37.1, 113–127.
16Ghavami, N. and Peplau, L. A., 2013. An intersectional analysis of gender and ethnic stereotypes: Testing three hy-
potheses. Psychology of Women Quarterly, 37.1, 113–127.
Unfortunately, these issues of unconscious bias play out in subtle ways that can have big
impacts: who gets hired for a job,17 who gets the award,18 who gets the grant funding.19 Because
most of us would want the most qualified person to get the job, the student with the most potential
to get the fellowship, and the best idea to get the grant funding, we need to be aware of our
biases and work against applying them unintentionally.
Student Perspective
“[Thinking] about our inner biases and how they influence our lives
and decisions was awakening. How easily we can form biases based on mis-
information and then base judgments on those facts and then follow it by the
act of actually defending our biases was a good realization.”
The first thing to recognize is that you are not a bad person because you have biases.
Everyone has them. What we all need to do is to recognize our own biases and work to overcome
them. Some useful strategies are as follows.20
Recognize and Replace: Become more aware of the biases that you carry and work to
replace them by thinking of counter examples. The research shows that it is fruitless
in the long run to simply try to repress stereotypes—this backfires.21 Challenge your
automatic thoughts with concrete examples. Visualize an engineer. Now visualize
someone you know, who is an excellent engineer and also belongs to a group that is
underrepresented in engineering.
Intergroup Contact: Much of our work as engineers is done collaboratively and in
teams. Get to know the other research group members as individuals.22 Challenge
your assumptions of who they might be given the stereotypical information available
on the surface. Pay attention and don’t dismiss information that does not fit with the
17For example, see Segrest Purkiss, S. L., Perrewe, P. L., Gillespie, T. L., Mayes, B. T., and Ferris, G. R., 2006. Implicit
sources of bias in employment interview judgments and decisions. Organizational Behavior and Human Decision Processes,
101.2, 152–167.
18For example, see Lincoln, A. E., Pincus, S., and Schick, V., 2009. Evaluating science or evaluating gender. American
Physical Society News, 18.8.
19For example, see Ley, T. J. and Hamilton, B. H., 2008. The gender gap in NIH grant applications. Science, 322.5907,
1472–1474.
20Adapted from Carnes, M., Fine, E., Romero, M., and Sheridan, J. Breaking the bias habit, Women in Science and
Engineering Leadership Institute (WISELI), University of Wisconsin–Madison, https://wiseli.wiscweb.wisc.edu/
workshops/bbh-inclusive-campus/; see also Carnes, M., Devine, P. G., Manwell, L. B., Byars-Winston, A., Fine, E.,
Ford, C. E., Forscher, P., Isaac, C., Kaatz, A., Magua, W., Palta, M., and Sheridan, J., 2015. The effect of an intervention to
break the gender bias habit for faculty at one institution: A cluster randomized, controlled trial. Academic Medicine, 90(2):221–
30.
21Macrae, C. N., Bodenhausen, G. V., Milne, A. B., and Jetten, J., 1994. Out of mind but back in sight: Stereotypes on
the rebound. Journal of Personality and Social Psychology, 67(5), 808.
22Pettigrew, T. F. and Tropp, L. R., 2006. A meta-analytic test of intergroup contact theory. Journal of Personality and
Social Psychology, 90(5):751–83. And Lemmer, G. and Wagner, U., 2015. Can we really reduce ethnic prejudice outside the
lab? A meta-analysis of direct and indirect contact interventions. European Journal of Social Psychology, 45(2):152–68.
stereotype. Appreciate the strengths that they bring to the shared goals your research
group is working toward.
Model Inclusion: Use inclusive language. When a joke is inappropriate, don’t laugh.
Approach students who may be different from you and get to know them. Don’t
always interact with the same people; mix with others and get to know them better.
Perspective Taking: Develop your ability to take someone else’s perspective and see
the world through their eyes.23 Use your empathy skills to see their perspective.
ASSIGNMENT 3-8:
INDIVIDUAL ASSIGNMENT – REFLECTIVE WRITING ON
PERSPECTIVE-TAKING
Practice perspective-taking. Think about your early experiences with engineering research:
How did you feel walking into your first research group meeting or research seminar?
What were your first impressions about the people? What do you think people assumed about
you?
Now consider someone who would be in this same situation who is from a background or
group other than your own (e.g., different gender, race, ethnicity). How do you think it would
it feel for that person to walk into their first research group meeting or research seminar? Write
a 300–500 word reflection on this topic.
ASSIGNMENT 3-9:
INDIVIDUAL ASSIGNMENT – CASE STUDY
Instructions:
Read the brief case description provided. Reread while noting the important information
and questions that are raised in your mind about the information provided, the individuals in-
volved, and their situation. Determine both the basic issues and any deeper underlying issues at
play. Consider the questions posed at the end of the case and how you would respond to these
questions, as well as other questions that could be asked of this case. Write a one-page response
that includes a brief summary of the case and its issues, your answer to the questions posed, and
recommendations based on your understanding of the situation posed in the case.
23Todd, A. R., Bodenhausen, G. V., Richeson, J. A., and Galinsky, A. D., 2011. Perspective taking combats automatic
expressions of racial bias. Journal of Personality and Social Psychology, 100(6), 1027.
Case description:
Jeff is excited to begin his research career and join Prof. Jones’ research group. Now that
he has moved to town and gotten settled, he’s eager to start getting acquainted with the other
research group members. He has just been assigned a desk and has begun to meet the other
students in the group with the help of Sam, whom Prof. Jones has asked to show him around.
While he and Sam are talking with other students in the group, Jeff notices another student
come into the room and go to a desk in the corner without greeting anyone. Later as Sam is
showing him around the building Jeff realizes that he did not get introduced to the student at
the corner desk and asks Sam who she is. Sam replies “Oh, that’s Ellen,” and continues talking
about other points of interest in the building.
After working in the group for a couple of weeks and seeing Ellen come in every day and
go to her desk in the corner without any interaction among the group, Jeff decides that there
must be some issue with Ellen and proceeds to ignore her like the rest of the group members.
Questions to consider:
Should Jeff introduce himself to Ellen even though neither Sam nor Ellen has initiated
an introduction?
Is it appropriate for Jeff to inquire with other research group members about Ellen to find
out more about the situation?
How can Jeff go about engaging with Ellen given that they are both members of the same
research group?
Would your answers above change if the person’s name was Shaheed instead of Ellen?
How do stereotypes play into your answers for Ellen, and for Shaheed?
3.8 DEVELOPING GLOBAL COMPETENCY
One important way to broaden your horizons and become a more culturally competent engineer
is to spend time in another country. Ideally, you will do this in an immersive way and with
a structured program. Study abroad is an option, both as an undergraduate and as a graduate
student. There are also opportunities to work abroad and conduct research abroad. And of course,
you can attend conferences in other countries. The more time you are able to spend, and the
deeper your immersion in the culture of that country, the greater the impact of the
experience.
Student Perspective
“There is a fairly evident way in which my interpersonal foundation is
lacking as a result of my education and more specifically its location. I grew up
in a community that was predominantly white with some amount of people of
Asian descent. This has to some extent limited the variety of cultures to which
I have been exposed. This means that there are many cultures in which I still
have something of a gap to cross to develop mature relationships. The global
nature of research will allow me to have contact with many more cultures and
start to understand them so that I can continue to work on my interpersonal
foundation of self-authorship.”
Increasing your cultural competence will benefit you in the long run regardless of your
career goals. The world is more interconnected than it has ever been and the field of engineering
is inherently a global one. Research teams are becoming more global and international collabo-
rations commonplace. As a result, employers are interested in hiring individuals with the skills
to operate in a range of settings and with people from a variety of backgrounds.
What does an immersive experience give you—both positive and negative? There are chal-
lenges in navigating a new culture and place. You will have experiences that stretch you a bit
and force you to be more flexible and adaptable. You will also need to be self-reliant, and it will
enhance your independence. You will learn about yourself through your experiences with the
culture you are immersed in. Marcia Baxter Magolda connects it to the self authorship (identity
development) ideas discussed previously: “Intercultural maturity includes the ability to use mul-
tiple cultural frames to construct knowledge, engaging in meaningful relationships with diverse
others that are grounded in appreciation of difference, and the capacity to openly engage chal-
lenges to one’s beliefs.24” Particularly as you work with people from other cultures, it is critical
that you are able to see ideas and events from more than just your own perspective. “Mature
relationships are characterized by respect for both one’s own and others’ particular identities
and cultures as well as by productive collaboration to negotiate and integrate multiple personal
needs.25”
Student Perspective
“Interactions and team-work with classmates and teachers coming
from various linguistic, cultural and religious backgrounds has made me un-
derstand the intricacies of society, a way to relate to people and build rela-
tionships both personal and professional.”
Some humorous illustrations of how cultural differences can impact both understanding
and ability to work together came out as a series of commercials (adverts) from HSBC, a large
international banking and finance organization. They illustrate a few cultural differences around
the world with a bit of humor thrown in. In one of their ads they show a British gentleman in a
restaurant in China with business colleagues who are hosting him (search www.youtube.com
24Baxter Magolda, M. and King, P. M., 2004. Learning Partnerships: Theory and Models of Practice to Educate for Self Au-
thorship, Stylus, Sterling, VA, p. 5.
25Baxter Magolda, M. and King, P. M., 2004. Learning Partnerships: Theory and Models of Practice to Educate for Self Au-
thorship, Stylus, Sterling, VA, p. 9–10.
for “HSBC ‘Eels’ Ad”). The British gentleman finishes his main course of eel and the Chinese
colleagues become agitated and order him another bigger eel. The narrator explains “The English
believe it is a slur on your host’s food if you don’t clear your plate. Whereas the Chinese feel that
you are questioning their generosity if you do.” After clearing his plate a second time even though
he has obviously had too much to eat given his peaked appearance, the host orders more again
and we see a gigantic eel being wrestled in the kitchen, presumably for yet another massive main
course.
ASSIGNMENT 3-10:
INDIVIDUAL ASSIGNMENT – INTERNATIONAL EXPERIENCE
INDICATORS SELF-EVALUATION
Complete the International Experience Indicators self-evaluation tool in Table 3.2 and
see how you score. Use the second and third columns to reassess yourself in a year or two to
judge whether you are making progress.
Numbers in brackets are the points to be assigned for any experience. A range in points is
indicated to differentiate the extent and/or quality of the experience with regard to how much
you believe your international perspectives and understanding were enhanced.
ASSIGNMENT 3-11:
INDIVIDUAL ASSIGNMENT – REFLECTIVE WRITING ON
INTERNATIONAL EXPERIENCE
Choose two strategies for increasing your score on the International Experience Indicators. Ex-
plore how you could implement these strategies and discuss what you would need to do to carry
them out.
For example, simpler strategies include reading an international newspaper weekly, learn-
ing about the home country of the people in your research group, or inviting a new international
student in your program to your home for dinner. High-investment strategies include enrolling
in a study abroad program, volunteering for Engineers Without Borders, seeking an interna-
tional work placement, or taking a language or culture course (Table 3.2).
3.8.1 OTHER RESOURCES ON GLOBAL COMPETENCY
References courtesy of Dr. Laurie Cox, Assistant Dean and Director International Student Ser-
vices, University of Wisconsin–Madison:
Table 3.2: “International Experience Indicator” rubric
Columns: International Experience Indicators; Self-Assessment Points (you make the judgment); Today’s Date.
• Have a passport and have traveled outside the U.S. to:
  • English-speaking country (1-2 pts/time with maximum of 6 pts)
  • Non-English-speaking country (1-5 pts/time with maximum of 10 pts)
• Have lived in another country continuously for more than two months:
  • In a city/town of a non-English-speaking country (5-12 pts)
  • In a village or rural area of a non-English-speaking country (7-15 pts)
  • English-speaking city/town or rural village (3-7 pts)
• Expansion of understanding of other cultures or global issues from living with or being married to someone from another country (1-10 pts)
• Work with international colleagues on a regular basis in my work, program, major, or department through my university or outside organizations (1-10 pts)
• Interact with and share perspectives about global issues on a regular basis with friends or international colleagues outside the U.S. (1-10 pts)
• Currently live or have lived and been active in a neighborhood/community with multi-cultural diversity due to the presence of recent immigrants or international people (1-10 pts)
• Hosted international student(s), faculty, scholars, or colleagues in my home:
  • For a week or less (2 pts)
  • For 1-10 weeks (3-6 pts)
  • For more than 10 weeks (7-10 pts)
• Had to successfully address a personally embarrassing situation in another culture because of my own cultural ignorance (1-10 pts)
• Regularly exposed to international perspectives through reading an international newspaper, news publication, or non-disciplinary journal published outside the U.S., and/or regularly listen to international radio or TV broadcasts of news and issues (1-10 pts)
• Interact regularly with international people in a club/organization (1-3 pts)
• Gave a presentation(s) or lecture in a language that is not my native language (3-7 pts)
• Knowledge of language(s) other than your native language (score Reading, Spoken, and Written separately):
  • Language A (1-10 pts)
  • Language B (1-10 pts)
  • Language C (1-10 pts)
  • Language D (1-10 pts)
• Participated in an education, research, work, or volunteer program abroad (first range: non-English-speaking country; second range: English-speaking country):
  • Four weeks or less in duration (4-7 pts) (3-5 pts)
  • Four to eight weeks in duration (7-10 pts) (5-8 pts)
  • Semester program (10-15 pts) (8-11 pts)
  • Two or more semesters (15-20 pts) (11-15 pts)
• Completed course(s) that greatly expanded my international competence:
  • Course A (1-4 pts), Course B (1-4 pts), Course C (1-4 pts)
• Completed course(s) that somewhat advanced my international competence:
  • Course A (1-2 pts), Course B (1-2 pts), Course C (1-2 pts)
• Wrote a research paper on a topic that greatly expanded my international competence:
  • Paper A (1-4 pts), Paper B (1-4 pts)
• Self-evaluation of your openness and understanding of different cultures and your ability to interact with people from different countries (1-10 pts)
• Total:
Althen, G., American Ways: A Guide for Foreigners in the United States
Althen, G., Learning Across Cultures
Axtell, R., Gestures: The Do’s and Taboos of Body Language
Axtell, R., Do’s and Taboos around the World
Chai, M-L. and Chai W., China A to Z: Everything You Need to Know to Understand
Chinese Customs and Culture
Morrison, T., Kiss, Bow or Shake Hands
Nahm, A., An Introduction to Korean Culture
3.9 NETWORKING
Your professional network may have more influence on your success than you might imagine.
The people in your network can provide valuable feedback on your ideas, technical expertise in
areas you are less experienced in, opportunities for collaborations that allow you to approach
new research questions, contact with others in your discipline who you would like to work with
in the future, and much more.
Student Perspective
“The most surprising things I learned so far about research would be
the importance of professional networking and communication. My previ-
ous image of research community is that researchers largely focus on the lab
work and have few contacts with anyone besides their colleagues. Therefore,
I used to believe academic ability should outweigh any other abilities, and my
main focus in the school had always been school work and grades. It was not
until I [began a research position] that I realized what I believed was wrong.
As I learned more about how research [is] conducted, I found professional
networking and communication much more important than I thought.”
How do you create a professional network for yourself? You must make connections with
people. They can range from casual acquaintances to friendships, but the key is to know other
people in your discipline, in other disciplines, and in the community at large. Get to know them
and let them get to know you. Offer your knowledge and expertise when they need it, and they
will be likely to return the favor at some other time. These connections can develop within the
research group through day-to-day contact, or within the program or department through hallway
conversations and interactions at seminars and social functions. Connections can be made on the
bus, at the gym, or in a coffee shop. The key is to talk to people. If you sit down in class or at
a seminar five minutes before it starts, take the opportunity to introduce yourself to the person
next to you. Ask them about themselves and engage in a conversation. For some of us this is
easier said than done, but with a little practice it becomes more comfortable.
As you become further engaged in your discipline you will begin to attend seminars, work-
shops, and conferences associated with the topic area of your research. This is an important part
of your professional development in several respects: you will have an opportunity to learn about
the most recent developments in your field; you will have opportunities to present your own
research in either a poster session or a presentation; and you will have an opportunity to broaden
your professional network. This last piece is often overlooked, but very important. In order to
develop your network, you will need to engage with people informally. This can be done with
the people you sit next to before or after a session, joining a hallway conversation or a coffee
break, asking someone to have lunch or dinner with you, taking part in a student mixer, and
attending luncheons, receptions, or other organized social events. I suggest that students attend
these events with a goal in mind. Maybe it is as simple as deciding that you will try to talk with
five people you have not met before, or that you will seek out advice about a particular aspect
of your research with people that you meet, or that you will inquire about graduate school or
postdoctoral research opportunities at their institution. Giving yourself such a goal can help you
to overcome any reluctance that you might have to engage in these settings and provide you
with a meaningful task that will help you both in terms of building your network and acquiring
information that you need.
With a Little Help from Your Friends
Although I have my own professional network, I have found over the
years that the networks that my students develop can be just as valuable to
our work. I recall a research project where we were stuck on the interpreta-
tion of some data we had obtained. The data was produced by a technique
that my group had less expertise with than most, but it was critical to the
particular experiment we were conducting. One option was to simply repeat
the experiment, but it was still a question as to what that would tell us. In-
stead, we looked for an expert to talk to first. The graduate student working
on the project knew of someone in another lab who had used the technique
extensively, so he dropped by to chat with him. This developed into a half-
hour conversation with several other lab members in this group over coffee.
The graduate student walked away with new ideas as to how to approach the
problem with a different technique that would help us to interpret our data.
This half-hour conversation turned out to be invaluable to the research and
saved us significant time.
Student Perspective
“I had the impression that scientists did most of their work in solitude,
with essentially all contact being with a few nearby colleagues such as collab-
orators, advisors, or lab partners. I understood that the goal of science was
to share knowledge, but I felt that this was done purely through publications
and lectures. I did not notice the existence of any network beyond this. It is
true that a scientist does work alone for much of the data-collecting phase
of a research project. However, I learned that a researcher must be involved
with a larger community to be successful in the profession. Hence, there was
a vast network of connections between researchers that I had not noticed
… Networking with people that have similar interests in the department is a
clear objective, but I learned that it was also beneficial to connect with people
at other campuses all around the world.”
The other critical aspect of developing your professional network involves broadening your
mentoring relationships beyond that of your research mentor, so that you can get different per-
spectives and a range of constructive criticism, advice, and/or support. Often people think of the
mentor-mentee relationship as an exclusive dyad, but in contemporary terms you are seldom the
sole mentee in your mentor’s life and, even if you were, you can’t expect to get everything you
need from one individual. Thus, mentoring should occur on multiple levels with multiple indi-
viduals, including your research mentor, your peers in and outside your research group, other
faculty and staff, and key individuals in your network. You can think of this as a “constellation”
of supporting individuals in a variety of mentor-related roles.
Longitudinal research studying career success has shown that, while the quality of your
primary mentor significantly impacts your short-term career outcomes, it is the “composition
and quality of an individual’s entire constellation of developmental relationships that account for
long-run protégé career outcomes.26” Having many and varied mentors will give you a broader
range of perspectives, a wider reaching network, and more opportunities over the course of your
career. This constellation of mentors is not something that you create overnight and it frequently
grows out of the network connections that you build. You should consider who is already in your
constellation of mentors, and watch for other individuals who you can get good mentoring from.
26Higgins, M. C. and Thomas, D. A., 2001. Constellations and careers: Toward understanding the effects of multiple
developmental relationships, Journal of Organizational Behavior, 22, 223–247.
ASSIGNMENT 3-12:
GROUP ACTIVITY – WHO IS IN YOUR NETWORK?
Individually:
Spend 5 minutes listing the people or groups in your network on a piece of paper.
As a Group:
Discuss the types of people in each individual’s network.
Is your network actually broader than you initially thought?
Brainstorm about what actions you can take immediately to broaden your network
further.
What strategies can you use to maintain your network?
ASSIGNMENT 3-13:
INDIVIDUAL ASSIGNMENT – DEVELOPING YOUR PROFESSIONAL
NETWORK
Identify your goals related to developing your professional network.
Do you want to improve your network to facilitate your research? Do you want to
develop a network that will help you get into graduate school? Or find a job?
Build a list of contacts:
• Identify relevant people.
• If you don’t know individual names, identify the types of people you need in your net-
work and then seek out individuals who are that type.
• Identify professional organizations where you might meet people important for your
network.
Develop a strategy to court these people individually.
• Using online social networks like Facebook, LinkedIn, and other online tools can help
you reach your networking goals.
• BUT meeting someone who is in your network helps to solidify the relationship. How
can you arrange to meet each person face-to-face?
• Don’t ask for something at your first contact with someone you have just met. And
as the relationship develops, take care not to always ask for something every time you
interact with a person.
• Reciprocity is important. Try to figure out a way for you to give something. This is why
it is important to build your network before you need it.
What can you do to follow up occasionally with these people?
• Schedule time each week to tend your network.
• Be reasonable with the frequency of contact. For some people who you have gotten to
know in more depth, your contact may be quite regular but for others it may be as little
as once a year.
What courtesies should you follow?
• Respect the time of others.
• Send a thank-you note when someone has provided you with something you truly
appreciate.
• Be prepared and willing to reciprocate.
• Ask permission before you use someone as a reference.
ASSIGNMENT 3-14:
INDIVIDUAL ASSIGNMENT – DEVELOPING YOUR PROFESSIONAL
NETWORK
Departments on campus and professional conferences often hold social events such as a mixer
or reception. These are great opportunities to broaden your professional network. But, it’s often
helpful to think about how you will start a conversation before you are actually in the position
to do so.
After saying “Hi, my name is…,” what comes next? You need a strategy to engage with
someone you have just met to learn more about them and let them get to know you. So, try
asking questions. Depending on the context, you can start with something simple like: What’s
your major? Or, what brings you to this event/place? You might even ask if they have heard
about a recent article you have read or ask about what courses they are taking/teaching this/next
semester. If you are at a conference you can ask them what sessions they have been attending or
comment on a keynote talk that happened earlier in the day.
Brainstorm three questions you could ask or conversation starters you could use to engage
in a conversation with someone you have just met. Consider three different situations: talking to
a person at a nearby table in a coffee shop or cafeteria; sitting next to someone five minutes before
a class or seminar begins; mingling at a social event associated with a professional function.
Now put it into action. Set some goals for yourself, such as: meet three new people over
the next week; get to know two other people in my major; talk to someone more senior than
myself who is in my professional area.
ASSIGNMENT 3-15:
INDIVIDUAL PROJECT – STARTING YOUR OWN PEER MENTORING
GROUP27
Step 1: Identify a common topic of interest for the group, e.g., fellowship proposal writing,
journal club in your research area, qualifying exam preparation, dissertator support group, etc.
Step 2: Identify a few peers who you would like to invite to join you in the group. Have a con-
versation with each of them about their interest in meeting regularly on this topic. Identify the
best venue for the meetings and timing for the meetings. Take into account that some members
may have other obligations that prevent them from meeting at certain times or on certain days
of the week.
Step 3: Set up a text group or listserv with the initial members and send out a formal announce-
ment, e.g., an email might include the following: “Thank you for agreeing to join me in our
TOPIC group. I have reserved ROOM/BUILDING for our first meeting on DATE/TIME.
At this initial meeting I propose that we develop an agenda for our group and plans for our
future meetings over the semester.”
Step 4: Develop consensus within the group about the formality of meetings, frequency of meet-
ings, optimal size of the group, and responsibilities of the group members.
Step 5: Grow the group to a sustainable size. This can be accomplished through the networks
of the initial group members or talking with a staff member affiliated with your degree program
about other individuals they may know of who would be interested in the group.
Step 6: As the “convener” of the group, you will be responsible for sending out reminders for
meetings and keeping the momentum of the group going. It is good practice to rotate the “con-
vener” responsibility to a new individual for a group that meets for more than a few months.
27Adapted from Crone, W. C., 2010. Survive and Thrive: A Guide for Untenured Faculty, Morgan & Claypool Publishers.
CHAPTER 4
Building on the Research of Others
4.1 THE LITERATURE
Together the collection of journal publications, conference proceedings, handbooks, mono-
graphs, books, and student dissertations/theses is referred to as “the literature” and provides
a foundation of knowledge for you and others to build upon.
Your primary exposure to engineering concepts may have been through textbooks up until
a certain point in your education. As you pursue more advanced study, and particularly research,
you will more regularly gain information from journal articles, along with other sources, such as
conference proceedings, technical handbooks, and edited collections (books where each chapter
is contributed by different authors). The purpose of journal articles is to provide an open report
of findings and new discoveries in a timely manner. These papers will contain details that you
will not be able to find anywhere else. Certainly the most recent findings in a particular research
area will only be available in journal articles and conference proceedings, but you will also find
that journal articles published decades ago may also be critical to you in your research. These are
sometimes referred to as seminal papers if they contain the origins of a research idea, completely
changed the way a topic was understood, or provide results that are continuing to be foundational
to the field.
Ideally, you will want to read and rely only on articles that are published in peer-reviewed,
archival journals and conference proceedings. These may be available in both paper and electronic
formats, but the key issue is the reliability of the information being presented. The archival nature
of journal publications also ensures their longevity and provides a searchable record of findings.
Just because someone has published something does not mean that it is correct. However, you
will find that more reliable information can be found in reputable journals that have a rigorous
peer review process. The “Impact Factor” of a journal will also give you a guide as to the stature
of the journal in its field.1 Regardless of where something is published and by whom, you must
look at all information that you read with a critical eye.
1The Impact Factor of a specific journal is based on a calculation involving the number of times that the papers in that
journal are cited by others. These values vary by field so it may be helpful to look at how the Impact Factors of journals within
a field compare to each other. You can find Impact Factor information from a variety of sources, but they trace back to the
Journal Citation Reports ( JCR) and the information is integrated into Web of Science, as well as other indexing systems.
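As a rough sketch of the idea behind the two-year Impact Factor (the authoritative definition and values come from the Journal Citation Reports; the rules for what counts as a "citable item" involve details omitted here), the calculation for a journal in a given year Y looks approximately like:

\[
\mathrm{IF}_Y \approx \frac{\text{citations received in year } Y \text{ to items the journal published in years } Y-1 \text{ and } Y-2}{\text{number of citable items the journal published in years } Y-1 \text{ and } Y-2}
\]

So a journal whose articles from the previous two years were cited, on average, about three times each during the current year would have an Impact Factor of roughly 3.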
There is a range of different types of journal articles that you will find in the literature
which will vary in length and content. Some journals, and the articles published within, will
specialize in various ways—for instance you may find a journal in your field that specializes in
instrumentation or experimental methods, another that focuses on modeling and computation,
etc. Additionally, there are short communications, some of which are specifically designed for
rapid publication of new findings, full length articles that present original research in full de-
tail, and review articles that synthesize the state of the art concerning a particular topic. You
may find review articles to be very helpful, especially as you are entering into a new area of in-
quiry. A review article, if done well, will not only summarize the research that has come before,
but will also synthesize these results, present open challenges in the research, and provide some
commentary on future directions in the field. Conference papers/proceedings are
also prevalent in some disciplines and in some cases may be the higher profile publication of a
disciplinary area.
The literature in a research subject area can be very challenging to navigate initially and
it is often a good first step to ask your research mentor to suggest key articles that you should
read and indicate which journals are the most relevant to your research. This will give you a good
foundation to start building your familiarity with the literature in your research area. Your initial
exposure will be challenging. But be assured, as you read more, you will understand more of
what you read.
4.2 VALUING WHAT CAME BEFORE YOU
The literature contains a vast amount of information which grows at an ever-increasing pace
each year. You may feel that there is so much to read and learn about that it is pointless to
even begin. But begin you must. With each paper your knowledge and understanding will grow,
along with your ability to discern when you have found an important research contribution that
will influence your own work. You will also develop the ability to occasionally disagree with and
challenge some of the methods, results, and conclusions that have been previously published.
Scientists and engineers read the literature for a number of important reasons: to learn
what others have done so we don’t reinvent the wheel; to build upon the prior published work
in order to advance our own research; to keep abreast of the recent findings from other research
groups; to be able to describe how our own research fits into a broader context of the field; and
to distinguish our contributions from the contributions of others.
Early in your research career, your purposes in reading the literature may be a bit different.
If you are applying to a graduate program and interested in working with a particular faculty
member, it will be important for you to become familiar with their prior research. Certainly you
will look at their webpage, but it is also important to look at what journal articles they have
published in the last few years to get a better idea of the trajectory of their recent research. If
you will be meeting with a faculty member on a campus visit, you may want to choose a journal
article that this faculty member authored recently so that you can read it prior to your visit.
Although you don’t need to understand everything in the paper, you want to read it carefully
enough to have intelligent questions to ask regarding the paper. Keep in mind that this is likely
to be a publication about a research project that is complete and no longer active. Although it is a
good starting point for a discussion, it may or may not be representative of the research that this
faculty member is currently doing. In your conversation you will want to find out the direction
of their current and future research projects. This information will be important for you in order
to determine if there is a good match between your interests and the direction of this faculty
member’s research group.
Know What You Don’t Know
I am always encouraged when a student I have invited to join my re-
search group asks for a few relevant journal articles to read before their po-
sition begins. This usually happens with the best of the undergraduate re-
searchers who will be joining the group for a summer research opportunity
and the graduate students who will be joining our research group in the next
semester. Asking the question is a positive indicator, but having the follow-through to read
the article before they arrive is a good sign that they will
be a successful researcher. I never expect that the student will understand ev-
erything they are reading at this stage. The outcome I like to see is that they
show up on the first day with questions about what they have read and how
these articles relate to what they will be doing.
When you have joined a research group—ideally even before you arrive in that group—
read the papers that the research group has published recently. Not only will this give you some
relevant background on the types of research that the group has undertaken and the techniques
they have used, but the author list of each paper will also give you an idea of who has been work-
ing together—which students, postdocs, and scientists have contributed to which projects, and
which other faculty the research group collaborates with most frequently.
As you embark on a new research area, the literature most relevant to the project you will
be working on will be of particular value to you. Some of this may have been published by the
research group you are joining, but much of it will likely have been published by other research
groups around the world. Your research mentor will be able to help you identify some of the
most relevant journal articles that you should become familiar with initially. Read these articles
carefully and save them—they may be articles that you will want to go back to and reread after
you have begun working on your research project. Pay attention to the author lists and watch
for new papers to come out from these same authors—those new articles may also be relevant
to your work. A later chapter will go into more detail on author order, but it is usually most
relevant to note the first author, last author, and corresponding author(s). Keeping abreast of
the relevant literature in your area is an important aspect of your development as a researcher.
Eventually, you will be the one pointing out new articles that have appeared in the literature to
your research mentor!
Learning How to Read
Over time, everyone develops their own approach to reading journal
articles for both efficiency and getting the most information out of your lit-
erature search. My own style is to first focus on the abstract and conclusion
to decide whether or not I need to spend more time with the article. Then
I usually go to the figures next. If a paper is well done, the figures and their
captions give you an outline in a visual format. Then I will begin at the be-
ginning and give the article a quick read. If I find that I am still interested
and need more detailed information, then I take a second read in a much
more thorough fashion. I will work through equations, delve into the meth-
ods, carefully compare the figures to the text, question the assumptions made,
ponder the conclusions drawn, and decide what it is that I can take away from
this paper that will be useful to me. I also look at the references cited as well
as who has cited the paper, so that I can find other relevant journal papers
to read. For seminal papers on a topic, I may reread them a third, fourth, or
tenth time over the course of years.
Each journal article you read will build your knowledge and make you
more skillful at extracting the key concepts and pieces of information that
you need. Settling on your own approach will come with experience.
You will also need to keep track of the papers you read and the ideas that have come from
them (tools for doing this will be discussed in more detail in the section on Citation Manage-
ment below). When you eventually put together your own research findings for presentation and
publication it will be critical for you to cite the work of others. You will need to show that you
are knowledgeable about the literature in your area of research and you will need to give credit
for the ideas of others that have contributed to your own work. The best researchers are those
who show how their own work builds on and extends the field by acknowledging the work of
others.
Student Perspective
“Information is continuously flowing throughout the globe and with
the elaborate access we have to it, via Internet and also extensive communi-
cation modes; an individual has at his disposal a plethora of work, ideas and
information about almost everything. This can be at times dangerous, as we
as people forget that though we have access to all this information, we don’t
have ownership over it. These works or ideas aren’t ours; they’re someone
else’s intellectual property.”
4.3 READING JOURNAL ARTICLES
Journal articles tend to have a similar structure, although some variations may be found depend-
ing on the style requirements of the particular journal. An important aspect of reading journal
articles is knowing about all the information that is there for you to find. Common components
include: author information; an abstract; an introduction and/or background section; a methods
or techniques section; results; discussion; conclusion; acknowledgments; and references/citation
information. The examples shown on the following pages (Figures 4.1–4.4) come from two sam-
ple articles published in peer-reviewed journals. The paper by Gall et al.2 is a research article
containing new experimental results, whereas the paper by Maboudian and Howe3 is a review
article providing the current state of understanding on a topic.
Every journal article begins with a title. There are many approaches to creating a title, but it
should be descriptive of the content. The title can often give you a hint as to whether or not you
want to read the article, but the most succinct and descriptive piece of the article is the abstract.
It should give you a good idea of whether or not you want to read further.
The first page of the journal article will also contain author names and affiliations. The
affiliations will often give you a hint about the disciplinary background of the authors, given
the department or unit that they are affiliated with, as well as the institution where the research
was conducted. In collaborative projects you will sometimes see authors listed from multiple
institutions. For instance, it may be that experiments were conducted at one institution and the
modeling work was conducted in a different research group at another institution.
Sometimes the journal will also provide information about the publication timeline. For
instance, when the article was first received and when it was accepted for publication. This infor-
mation together with the actual publication date gives you a sense of how quick a turnaround
this journal has in its review process, which may be important to you when you consider
journals to submit your own work to (Figures 4.1 and 4.3).
You will also find the title of the journal, volume and number of the journal, and page
number(s) on the first page. Together with the title and author list, you can compile a complete
citation for the article. For example:
Salick, M. R., Napiwocki, B. N., Sha, J., Knight, G. T., Chindhy, S. A., Kamp, T. J.,
Ashton, R. S., and Crone, W. C., 2014. Micropattern width dependent sarcomere
2Gall, K., Dunn, M. L., Liu, Y., Labossiere, P., Sehitoglu, H., and Chumlyakov, Y. I., 2002. Micro and macro defor-
mation of single crystal NiTi. Journal of Engineering Materials and Technology, 124(2), 238–245.
3Maboudian, R. and Howe, R. T., 1997. Critical review: Adhesion in surface micromechanical structures. Journal of
Vacuum Science and Technology B: Microelectronics and Nanometer Structures Processing, Measurement, and Phenomena, 15(1),
1–20.
Figure 4.1: Title, author information, and abstract for a research article (top) and a review article
(bottom).
development in human ESC-derived cardiomyocytes. Biomaterials, 35(15), 4454–
4464.
It should be noted that there are multiple citation styles and most citation management
systems will output information in whatever style you need. The citation shown above uses
the Chicago style based on The Chicago Manual of Style. Each journal will have a required citation
format, some of which adhere to one of the common styles and some that are unique to the
journal.
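As a small illustrative sketch of this idea (this is not the interface of any particular citation manager, just a toy Python example that reuses the Gall et al. reference from footnote 2; the function names are my own), one stored record can be rendered in more than one style:

# A toy sketch (not any real citation manager's API): one stored record,
# rendered in two different citation styles.
record = {
    "authors": "Gall, K., Dunn, M. L., Liu, Y., Labossiere, P., Sehitoglu, H., and Chumlyakov, Y. I.",
    "year": 2002,
    "title": "Micro and macro deformation of single crystal NiTi",
    "journal": "Journal of Engineering Materials and Technology",
    "volume": "124(2)",
    "pages": "238-245",
}

def chicago_style(r):
    # Author-date format matching the example citation shown above.
    return f'{r["authors"]}, {r["year"]}. {r["title"]}. {r["journal"]}, {r["volume"]}, {r["pages"]}.'

def numbered_style(r):
    # A generic numbered-reference format of the kind some journals require.
    return f'{r["authors"]}, "{r["title"]}," {r["journal"]}, vol. {r["volume"]}, pp. {r["pages"]}, {r["year"]}.'

print(chicago_style(record))
print(numbered_style(record))

In practice, tools such as EndNote, Zotero, or Mendeley store each record once and then apply whichever style the target journal requires.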
Often on the first page, but sometimes at the end of the article, you will find contact
information for the corresponding author (Figure 4.2). This is an individual that you can contact
with questions about the paper if you are delving deeply into its content. This contact information
should not be used frivolously, but if you have a serious question that you have not been able
Figure 4.2: First page of a sample review journal article.
to find the answer to in any other way, it is reasonable to make contact with the corresponding
author.
Although some short papers will not separate out the article into sections, most will have
some variation of the following section titles: Abstract, Introduction, Background, Methods,
Results, Discussion, Conclusion, Acknowledgments, References. You will notice throughout,
and particularly in the Introduction/Background sections, that prior published work related to
the subject has been cited (Figure 4.3). Even if the current article does not contain exactly the
information you are looking for, these citations will refer you to other references that may be
useful. Following this trail of breadcrumbs can sometimes be more fruitful than a search engine
because the work being cited has been read by these authors and deemed useful and relevant
enough to include in their article. In a way, this is an additional level of review and thus a reason
why these papers are worth taking a look at beyond the many others that may pop up in a
literature search on the topic.
Depending on what you are trying to glean from a particular article, you may spend more
or less time with it or delve into more detailed reading of particular sections.
In many cases, you are simply interested in the findings and what the authors have contributed
to the field. The Discussion and Conclusion are likely the sections of the article that you will
read more than once. Or, if you are an experimentalist, and you are specifically looking for a
Figure 4.3: First page of a sample research journal article.
method relevant to a process or technique you need to accomplish in the lab, then you may be
most interested in the Methods section. In a different situation you could be trying to figure out
the best way to show the data that you have collected or produced, so the figures in the Results
section would be your focus. Keep track of the papers you have read—ideally using a citation
management system like what is discussed later in this chapter—it is likely you may want to cite
the paper in something that you will write later (e.g., a report, thesis, or journal article of your
own) and you may want to reread that paper again at a later date. It is often the case that you
get more from an article on a second (or tenth) reading, especially when you have had a chance
to learn more about the topic through your own research and other things you have read in the
literature.
Usually at the end of the paper you will find an Acknowledgments section. This might
include individuals or facilities that aided the research, but whose contributions did not rise to
the level of authorship. It will also include the funding source(s) that made this research possible
(Figure 4.4). In some cases this might indicate a possible conflict of interest, or bias, in the
interpretations of the findings because of this support (for instance, researchers supported by
Google have been accused of writing about Google in articles without acknowledging that they
had received funding from the company). The funding sources listed will also give you an idea
of what federal agencies or foundations are interested in supporting this type of work. It is
Figure 4.4: Conclusions, Acknowledgment, and References sections of two sample journal ar-
ticles.
possible that you may want to apply for funding from one of these sources in the future,
through fellowship or grant opportunities.
The References section gives you the complete information for all of the citations in the
paper (Figure 4.4). This is a valuable resource for you because the paper has identified other
relevant literature which you may be interested in reading.
Finally, the last item to be aware of is Supplemental Material. A growing expectation in
publication is for the authors/journal to provide additional online supplemental content that
is relevant to the article. Usually a link will be given somewhere in the article that sends you
to a page with additional content on the publisher’s website. You might find in the supplement
that more results are available, details of the methods are given, or code used in the research is
being provided for others to download and implement. Sometimes critical information for your
research will be found in the Supplemental Material, so do not forget to watch for a link to it!
Remember that for an important article with high relevance to your research, you should
expect that you will need to read it more than once. To really understand what you need to, you
may also need to do some additional learning outside of the article and talk to others about the
meaning of certain aspects.
Student Perspective
“Learning how to effectively locate relevant papers and navigate
through scientific literature is a critical skill to develop. Simply finding the
right journal articles is only part of the process of conducting a thorough
literature search. In order to actually gain understanding and benefit from
scientific publications, I need to develop skills in critically analyzing the re-
search methods and conclusions which are presented. This skill is developed
with practice in reading journal articles, which will help with gaining fa-
miliarity with projects related to mine and their associated terminology. As
my research advisor has pointed out, fully comprehending the meaning of a
certain publication often requires more than one reading, even for someone
well versed in the subject matter. It is therefore best to be honest with myself
about what I do and do not understand, and to give myself time to become
knowledgeable.”
ASSIGNMENT 4-1:
INDIVIDUAL ASSIGNMENT – SUMMARIZING WHAT YOU HAVE READ
Choose a journal article of interest to you. Read the article and become familiar with the main
points being put forward by the author(s). Summarize the article in a short paragraph, high-
lighting the main points that you have identified. Refrain from just rewording the abstract that
was written by the author(s). Write from your own understanding of the article, even if you feel
that understanding is incomplete. Use your own words, even if they are not as technical as the
ones used in the article.
ASSIGNMENT 4-2:
GROUP ACTIVITY – JOURNAL CLUB
A strategy for becoming familiar with and keeping up with the literature is what is commonly
referred to as a Journal Club. These are used more often in some fields than in others. Some
research groups hold their own journal clubs with faculty participating. Some groups of graduate
students take it upon themselves to create a Journal Club group in order to help each other read,
understand, and interpret the literature.
Journal Clubs run in a variety of ways, but they have common features: the group has
a research theme, everyone participates by choosing and reviewing journal articles as well as
commenting on the ones chosen by others, and the end goal is for everyone to increase their
knowledge of the topic area. In many cases the expectation is that everyone has looked at the
article prior to the Journal Club meeting.
K. Barker suggests the following guidance for discussing a paper in Journal Club fashion
in her book At the Helm4:
• “Summarize the main point of the paper.”
• “Describe the paper in detail.”
• “Analyze the data.”
• “Itemize the strengths and flaws in the paper.”
• “Compare the paper to other papers.”
• “Is the paper well written and the data clearly presented?”
• “Predict the next step in the research.”
As a Journal Club presenter you should come well prepared. This is important so that
everyone is making the best use of their time, and it is part of showing that you are a professional
who takes research seriously. Your ability to present a paper successfully will improve over time,
as will the depth of analysis you can bring to each paper you review. As you begin in this process,
choose a paper that is central to the research you are conducting.
When my students present in lab meeting about a paper they have read, I suggest that
they attempt to answer the following questions in their presentation of the work.
• Describe the paper.
– Who authored this journal paper? What institution are they from?
– How is that researcher or group related to your research group?
– What is the problem being studied? How is this problem related to your research?
• Summarize the main point of the paper.
– What are the key methods used?
– What are the main results?
4Barker, K., 2002. At The Helm: A Laboratory Navigator.
• Detail both the strengths and flaws of the paper.
– Is the paper well written and the data clearly presented?
– What are the assumptions in the paper? How realistic are they?
• Take a deep look into the data presented.
– What story does each figure tell?
– Is there supplemental data provided that should also be considered?
– How sensitive are the results to the assumptions?
– What did you learn from this paper? How is this relevant to your research?
• Compare the paper to prior published work and relate it to your own research.
– What are the similarities and differences of this research compared to other related
research?
– How does it connect with your research?
Similar/different approach/methods/findings?
• Discuss potential opportunities for future work based on this paper’s findings.
– What do you think the authors are working on now?
– What would be a natural extension of this work?
Note that you may not be able to address all of these topics given the time constraints, so
you may have to prioritize.
4.4 READING CRITICALLY
It’s important to note, especially for those new to research, that not everything that is published
is perfect, or even correct. There are a variety of reasons for this. Some of them are quite innocent:
maybe the field’s understanding has changed or deepened since the article was written, so its
methods or interpretation are no longer appropriate. Maybe there was an error in the publication
process that made an equation incorrect (check to see if there is an erratum for the paper because
the error may already have been discovered). Maybe the results are reported accurately, but the
interpretation might be made differently by others. Or, maybe the paper is just poorly written
and difficult to read.
Student Perspective
“Before this semester, I generally interpreted published research as
always having the most accurate information about a subject. However, I
learned in class and from my research mentor that sometimes journal articles
contain inaccuracies. My most vivid example of this was when my research
mentor asked me to read an article relating to the project I was interested
in. After reading the article, a theory mentioned still was unclear and I could
not find any background information online. When I asked my mentor about
the theory, he said that the group that published the research was more in-
terested in producing a product than explaining a phenomena and the theory
they proposed was not very sound. Indeed, it seems that some published ar-
ticles offer the chance for the scientific community to debate and arrive at a
conclusion rather than accept an article as fact.”
Unfortunately, there is also the darker side of error, negligence, and misconduct where
what has been published was intentionally misleading or incorrect. This may be the fault of one,
some, or all of the authors. This topic will be discussed in more detail in Chapter 5.
Our role as researchers is to always question rather than take things at face value. This
applies to your own results and conclusions in addition to the results and conclusions of others. As
you read about the work published by others, you are determining if the research is trustworthy
and done in a reliable way by examining the methods used and determining if the conclusions
are supported by evidence. One way to do this is to draw your own conclusions about the work
before reading the conclusions section and then compare yours to those of the authors. This will
be very challenging to do initially, but as you gain more experience it will become easier.
Among the recommendations for analyzing an engineering document in the advice of
Paul, Niewoehner, and Elder5 are the following items that focus on reading critically:
Data
“Is the data accurate? How was accuracy established?”
“Is there data missing? Is there adequate data?”
“Is the data of sufficient quality?”
“What controls were applied to isolate causal factors?”
“Is the entire dataset presented? What criteria were used to select the presented data
sample from the complete set?”
Concepts
“Are the appropriate theories applied?”
5Paul, R., Niewoehner, R., and Elder, L., 2007. The Thinker’s Guide to Engineering Reasoning, The Foundation for Critical
Thinking. www.criticalthinking.org.
“Have alternative concepts been considered?”
“Are concepts used justifiable?”
Point of View
“Are there competing theories that could explain the data?”
“Have alternative ways of looking at the situation been avoided in order to maintain a
particular view?”
Assumptions
“Are the assumptions articulated/acknowledged?”
“Are the assumptions legitimate or necessary?”
“Do the assumptions take into account the problem’s complexity?”
Conclusions
“Are there alternative conclusions?”
“Is speculation misrepresented as fact?”
“Do the conclusions follow from the assumptions?”
“Is further testing required?”
Last, you should determine if the authors or others have found problems with the pa-
pers you are citing. Journal articles can be retracted after they are published. John M. Budd, a
professor in the School of Information Science and Learning Technologies at the University of
Missouri at Columbia, reported that between 1999 and 2009, 1,164 articles were retracted from
biomedical journals and “55% of the articles included in this analysis were retracted for some
type of scientific misconduct.6” Jennifer Howard, a Chronicle of Higher Education reporter,
advises: “Authors, you really ought to take a look at the journal articles you cite. Not only is
it the responsible thing to do, it will save you the embarrassment of discovering after the fact
that you have given a nod to a retracted or discredited paper.7” If you are not downloading and
looking at the article yourself because you are simply copying someone else’s citation you will
not see the retraction notices. You need to get a copy of the original source, read it, and draw
your own conclusions in order to cite the work reliably.
6Budd, J. M., Zach C. C., and Anderson, K. M., 2011. Retracted publications in biomedicine: Cause for concern. In
Association of College and Research Libraries Conference, pp. 390–395.
7Howard, J., 2011. Despite warnings, biomedical scholars cite hundreds of retracted papers. The Chronicle of Higher
Education.
4.5 LITERATURE SEARCH
A literature search begins with a topic and the key terms that are relevant to that topic. An impor-
tant first step is to get familiar with the terminology and jargon that is relevant to your research.
Attend seminars, engage in research group meetings, and talk with others in your research area
to begin becoming fluent with the terminology and jargon commonly used. Wikipedia can be a
useful resource to get a basic understanding of the topic and terms. Your research mentor may
also suggest some textbooks or journal articles that are helpful.
At some point you may be specifically asked to conduct a literature search. Your research
mentor may give specific key words to use, a question to answer, or a general topic area for
you to explore. Regardless of whether you are specifically requested to do so or not, conducting
literature searches is an important part of the research process that you will need to undertake. In
the early stages of your research project you may simply be looking for background information,
and a general idea of what has already been published on your topic area. Later you may need to
go to the literature to answer a specific question, such as the details of an experimental technique
or an algorithm. Literature in your specific research field is important, but also keep in mind that
literature in related fields may also provide you with valuable transferable information. As you
continue on your research project, you will also need to keep up on the most recent publications
periodically so that you can be certain that your work has not been “scooped” by someone else. If
someone publishes specifically on your research topic, you will want to know about it right away
so that you can work with your research mentor to determine if you still have complementary
research to publish in the area or if you will need to adjust the focus of your research so that you
can produce results that distinguish you from the previously published work in the field.
Check with other individuals in your research group and the librarians at your institution
to determine what resources are available to you, both in terms of abstract and indexing databases
where you can search for relevant content and access to electronic copies of the articles you are
interested in. For instance, Web of Science is one of my favorites, but you may find that there
is a different database that is more relevant to the literature search that you need to do in your
research area. Other abstract and indexing databases are freely available like Google Scholar
and PubMed. Some journal articles are available free of charge as open access. However, many
others that you would otherwise have to pay for will be available through your institution’s library,
either directly or through an interlibrary loan function. You will need to talk with others in your
research group and the librarians at your institution to determine what is available to you.
Finding that Golden Nugget
Panning for gold is a good analogy for the literature search; you are
looking for the gold nuggets hidden among a lot of sand and gravel that
you will need to sift through. One of the challenges is distinguishing be-
tween useful and irrelevant papers so that you can identify information that
addresses your needs. This can be harder at first, but once you have built up
some experience you will develop skills that make it quicker. I have always
personally enjoyed the process of the literature search; I love the accomplish-
ment of finding that gold nugget, and using it to help me bring about the
research idea that I have in mind.
Student Perspective
“And even though looking back it took me a laughably long time to
complete this initial literature review, it definitely taught [me] an important
lesson. Of course I learned how to more successfully carry out a literature re-
view; which search engines to use, how to intelligently read papers and look
into other references etc. More than that though, I learned how to break a
task up into manageable chunks; this makes the task seem much less daunt-
ing, which personally gives me much more confidence and optimism.”
There are numerous strategies for approaching the literature search. They fit into two basic
categories: start broad and sift down vs. start narrow and expand up. Each can be employed
successfully and often you will want to use a combination of strategies, but let’s consider one
example of how you might begin. When I am just entering into a new area, I like to start broad,
so that I get a sense of the scope of the literature that is available. Once you include all of the
relevant terms to your search you will likely have a mountain of literature to look at. When
employing this strategy, I will need to quickly eliminate items that have come up in my search
by deciding the things that I am NOT interested in.
Take functionally graded materials for instance. In Google Scholar I might start by simply
typing those three words: functionally graded materials. In Web of Science I might search on
the topic using: functional* grad* material* (the * is a wildcard symbol that tells the database to
include things that have alternate endings like functional and functionally, graded and gradient,
material and materials). In Web of Science this search gives me nearly 20,000 results. Yikes!
It’s a HUGE research area, but it is unlikely that all of these results are relevant to my interest.
Maybe I am most interested in polymeric materials, so I refine my search by using polymer* as
another key word with the AND operator requiring both terms to appear in the database record.
This has already narrowed it down to a little over 1,000. The research application I am interested
in also requires that the material is biocompatible. I’m not interested in the biocompatibility
research itself right now, but I want to make sure the materials can be used in the way I want,
so in that case I might want to include a term like cell culture in order to include only those
materials that are useful in that circumstance. This narrows it down to a more reasonable 45
results. Now I’ll start sorting through titles and abstracts to decide which ones of these may be
relevant to my research. I still have a lot of papers to consider but I’ve at least narrowed down my
search by including a few additional parameters.
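As a rough sketch of the logic at work in that example (a toy Python illustration over invented titles, not how Web of Science itself is implemented; the helper functions and record list are my own), a trailing * broadens what each term matches, while each additional AND term shrinks the result set:

import re

# Invented records standing in for database entries (hypothetical titles, for illustration only).
records = [
    "Functionally graded materials: a review",
    "Gradient material design for aerospace structures",
    "Functionally graded polymeric materials for tissue engineering",
    "Functionally graded polymer materials as cell culture substrates",
]

def term_to_regex(term):
    # A trailing * is truncation: 'grad*' matches graded, gradient, gradation, etc.
    if term.endswith("*"):
        return r"\b" + re.escape(term[:-1]) + r"\w*"
    return r"\b" + re.escape(term) + r"\b"

def matches_all(text, terms):
    # AND search: every term must appear somewhere in the record.
    return all(re.search(term_to_regex(t), text, re.IGNORECASE) for t in terms)

broad_terms = ["functional*", "grad*", "material*"]
narrow_terms = broad_terms + ["polymer*"]
focused_terms = narrow_terms + ["cell culture"]

broad = [r for r in records if matches_all(r, broad_terms)]
narrow = [r for r in records if matches_all(r, narrow_terms)]
focused = [r for r in records if matches_all(r, focused_terms)]

print(len(broad), len(narrow), len(focused))  # 3 2 1: each added term narrows the results

Real databases offer many more capabilities (field tags, phrase searching, citation links), but the narrowing behavior is the same: start broad, then add terms until the result list is small enough to sift by title and abstract.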
When I am looking at the results that a search engine produces for me, I want to consider
the titles first. For the ones that look relevant I will look through the abstract, and I exclude many
of the ones I read because this added information tells me right away that they’re not likely to
be very useful to me. When I do find an abstract that looks relevant, I can then open the PDF of
the article and skim through the paper. I personally tend to focus on the figures first, and if some
of them look relevant, I read the conclusions next to see if there is likely to be useful information
in the article. At this stage I don’t necessarily want to read the whole paper because I’m still in
the middle of literature search, so if the abstract and the conclusions look promising I will save
that PDF and come back to it later for more detailed reading. Even among the ones I have saved,
only a fraction will ultimately be useful to me when I read them more thoroughly.
As you are doing the search it is always important to also think about different ways in
which the terminology may be used. After you have read through a number of titles and abstracts,
you may realize that there is an additional term or refinement of a term that may give you more
relevant results. You are seldom done after just one try and you will probably want to re-run your
search with slightly different key words after you have become more familiar with the topic. It’s
also important to realize that not all databases have access to the same collections of resources.
You may need to use more than one. Often your research mentor, or a reference librarian, can
point you toward the most relevant databases for your discipline. A librarian can also tell you
how to access search engines and databases remotely, so that you can access the literature even
when you are off campus.
Once you find some papers that are particularly relevant to your research question, you
can use those with the search engines to look both forward and backward in time. Start with
an article that you have already found to be relevant, then go backward and forward in time by
looking at the papers that this article cites and at the papers that have cited this article since it was published.
Abstract and indexing databases like Web of Science and Google Scholar have this functionality.
You might find some nice new nuggets this way. You can also set up various alerts
with most of these systems to notify you of future publications when they appear. If you have
developed a set of useful search parameters, you can set an alert to notify you whenever a newly
published article meets those same parameters. A citation alert on a particularly relevant article
that you have already identified will notify you when another new article comes out that cites this
previous one. You can also set up a citation alert on key authors in the field so that you are
notified when they have a new publication appear. However, even with these alerts in place, you
will need to regularly revisit your literature search to identify if new publications have appeared.
Review articles can also be an excellent resource if one has been written in your
research area. Happily, the authors of the review article will have already done the hard work
of pulling together and summarizing the relevant research papers on the topic. However, your
search does not end with the review article. You will want to get a copy of the most relevant
papers that the review article cites and read them yourself. You will also want to see what other
papers have cited the review since it has been published because there may be more recent papers
that are also important for your research. To find review articles, you will want to use the key
words relevant to your topic and include the word "review" if you are searching in an index like
Google Scholar; in a database like Web of Science, there is a Review filter you can click.
This will narrow your search results to only the ones that have been identified as review articles,
so that you can take a look at those in more detail.
You also need to recognize that journal articles may not be the only source of relevant
information on your topic. You will want to look at patents, government reports, handbooks,
and books that are available through the internet, or your institution’s library. Some of these
resources can be a little bit more difficult to identify. I have found over my career that the refer-
ence librarians can be an incredibly valuable resource for helping you to identify how to access
the information that you need. If you have such a person available at your institution, you should
take the opportunity to get to know them.
ASSIGNMENT 4-3:
INDIVIDUAL ASSIGNMENT – LITERATURE SEARCH WITH KEY WORDS
Work with your research mentor to identify key words for a useful literature search that you
can undertake and get their suggestion on which databases to use in your search. Conduct the
literature search using these key words and identify the 5–10 most relevant journal articles on
this topic.
ASSIGNMENT 4-4:
INDIVIDUAL ASSIGNMENT – PATENT SEARCH
Use the United States Patent and Trademark Office website at www.uspto.gov to conduct a
patent search using key words that are relevant to your area of research. (Note that this website
provides a number of resources for how to conduct a search effectively.) Identify 1–2 patents
most closely associated with some aspect of your research and summarize your findings.
ASSIGNMENT 4-5:
INDIVIDUAL ASSIGNMENT – GOVERNMENT REPORT SEARCH
Conduct a search for technical government reports that are relevant to your area of research.
You may need to work with your reference librarian to identify a relevant database, or you may
be able to start with one of the following:
U.S. Department of Defense, Defense Technical Information Center, www.dtic.mil
NASA Technical Reports Server, ntrs.nasa.gov
U.S. Department of Energy Office of Scientific and Technical Information, www.osti.gov
ASSIGNMENT 4-6:
INDIVIDUAL ASSIGNMENT – DISSERTATION SEARCH
Students completing a master’s degree often do so with a research thesis, and all Ph.D. students
produce a dissertation on their research as the main degree requirement. These are often archived
documents, and many are held in a repository such as ProQuest. Identify five relevant thesis or dissertation
titles by conducting a search through your university’s library catalog/database or by using Pro-
quest Dissertation Express at https://dissexpress.proquest.com/search.html. (If you
find that you are interested in learning more about one of these titles, contact your library about
access before ordering a copy. Often you can obtain access to a copy through your institution’s
library.)
ASSIGNMENT 4-7:
INDIVIDUAL ASSIGNMENT – ALTERNATIVE SOURCES SEARCH
Use your institution's library to identify e-books and online handbooks that are relevant to your
area of research. Identify 3–5 relevant resources. Choose one of these and write a one paragraph
summary of the resource and how it is related to your research interests.
4.6 PROPER CITATION
When you discuss someone’s work, either in a presentation or in writing, you need to identify
where those ideas came from originally. By including the citation, you have identified to the
listener or reader that what you are discussing has its origins in the work of others. The format
of the citation may vary depending on the requirements of who you are preparing the work for
(e.g., your instructor or a journal). The easiest way to include a citation in written work is to use
a footnote (the reference is included at the bottom of the page) or an endnote (the reference is
included at the end of the paper).
In the case of a presentation where you have included an image or figure from a source
such as a journal article or a website, it is most common to provide the reference information
directly on the slide. In this case a short citation is fine, but you will need to provide enough
information so that someone else can find the original source. For instance, you might use one
of the following formats:
Mature primary cardiomyocyte image from www.e-heart.org
Image Credit: Srivinasan, Protocol Exchange, 2011.
[Salick, M. R., et al., Biomaterials, 2014]
Even if the material is unpublished, you need to provide credit to the source. For instance,
you might credit a colleague with: J. Rogers, with permission. Or for your own work not yet
published: B. N. Napiwocki, et al., in preparation.
For written work, there are a number of ways in which the citation may appear. As you
are reading journal papers you will have noticed superscript numbers, numbers in brackets, or
parenthetical notations that include author names: superscript1, some type of parentheses (1),
the name of the author (Crone, 2010) or authors (De, et al. 2002). These citations identify con-
cepts and results that are culled from other sources and the notations refer you to the particular
source in the References section at the end of the article.
For instance, the following sentence appears in the Johnson, et al., 2004 article whose
citation is given above:
“In an aqueous environment, stimuli-responsive hydrogels undergo a reversible phase
transformation that results in dramatic volumetric swelling and shrinking upon ex-
posure and removal of a stimulus. Thus, these materials can be used as muscle like
actuators [1], fluid pumps [2], and valves [3].”
The numbers in brackets, e.g., [3], refer to the reference section of the paper. In this case,
an article detailing the use of a stimuli-responsive hydrogel for each component application is
given. If you were particularly interested in valves, then you would want to look at reference 3.
In other journals, instead of the number appearing in brackets [ ] or parentheses ( ), it
will appear as a superscript as in this example8:
“One of the drawbacks of coil embolization is coil compaction over time, leading to
recanalization of the aneurysm. Some degree of aneurysm recanalization occurs in
as many as 20% of cases.1–3 In larger aneurysms, placement of multiple coils can be
time consuming, and longer procedural times may lead to increased morbidity and
mortality.3 An alternative to coil embolization is the use of liquid embolic agents.
It is thought that filling aneurysms with such polymers will reduce many of the
shortcomings associated with coiling, such as coil compaction.4,5”
The other common style you will see includes the author and year of the publication in the
body of the text. Although this makes the sentence longer, the reader does not have to repeatedly
look back to the references section to see whose work is being referred to. For example9:
“Previous studies have reported lineage reprogramming into a diverse range of differ-
entiated cells types, including neurons (Vierbuchen et al., 2010), hepatocytes (Sekiya
and Suzuki, 2011), and cardiomyocytes (CMs) (Ieda et al., 2010; Song et al., 2012).”
The abbreviation “et al.” will often appear in citations. This refers to the Latin phrase et
alia meaning “and others.” When the author list is long, usually more than two, the first author’s
name is given and followed by et al. to indicate that the paper had multiple authors although
all of their names are not listed. Generally, in the references section, the entire author list is
included. Every citation style is a bit different and some journals will even deviate from the
more common citation styles (e.g., Chicago, IEEE, APA), so you will have to adapt your own
writing depending on the requirements.
4.7 CITATION MANAGEMENT
If you are not already familiar with one, now is the perfect opportunity to learn about citation
management systems. Keeping track of all the journal articles and other references that you will
begin to accumulate on your research project will quickly turn into a big organizational challenge.
Happily, software systems have been developed that can be a huge timesaver for you, allowing
you to easily collect relevant references, organize them, and cite them in your writing.
Examples of such programs include EndNote, Zotero, and RefWorks, but there are many
to choose from. Your institution may provide you access to one of these programs, or you may
decide to purchase one of these pieces of software for yourself. If you are new to a research group,
8Moftakhar, R., Xu, F., Aagaard-Kienitz, B., Consigny, D. W., Grinde, J. R., Hart, K., Flanagan, C. E., Crone, W. C.,
and Masters, K. S., 2015. Preliminary in vivo evaluation of a novel intrasaccular cerebral aneurysm occlusion device. Journal
of Neurointerventional Surgery, 7(8), 584–590.
9Lalit, P. A., Salick, M. R., Nelson, D. O., Squirrell, J. M., Shafer, C. M., Patel, N. G., Saeed, I., et al., 2016. Lineage
reprogramming of fibroblasts into proliferative induced cardiac progenitor cells by defined factors. Cell Stem cell, 18(3), 354–
367.
you should ask your research mentor, or the group members, if the group has a designated citation
management system. In some research groups, citation management system content is shared
among group members. Although the different citation management systems all contain the
same basic functionality, each is a bit different and it may be helpful to look into the details of
the options available or talk to a reference librarian before you choose one for yourself.
Spend Time to Save Time
I was slower than I should have been to adopt a citation management
system. When I finally did so, I realized that I had wasted enormous amounts
of time by not doing it sooner. My suggestion to you is to start early and
save yourself the time upfront! Even so, moving into a citation management
system after you already have a collection of journal articles is not as over-
whelming as it might seem and is fully worth the time investment.
The basic functionality of these systems comes into play beginning with your literature
search as you identify articles that are particularly important to your research. Many of the
databases that you will use (like Google Scholar and Web of Science) have the functionality
to automatically insert the citation into your management system with the click of a button. In
most cases, the citation management systems can be connected with your institution’s library so
that the system can pull the PDF of the article from the library into the management system.
Additionally, you can use the management system to organize these references and make addi-
tional notations and comments about them as you read and utilize them further. Finally, when
it comes time to cite the article as a reference in a paper or report that you are writing, many of
the word processing programs available have add-ons that work with the citation management
system so that you can easily integrate your citations without a lot of extra work. If you have ever
built and formatted a bibliography by hand, you know what a time consuming and irritating task
that can be. With one of these systems fully in place, the insertion of a citation into your paper
is just a few clicks of a button; the citation is attached to the appropriate place in the paper, and
the bibliographic information is included at the end of the paper in your references section.
These systems also allow you to make separate project folders so that you can keep related
references bundled together and easier to find. This may not seem very important when you
are just starting out, but it will be very handy to have your citations grouped as the number of
them grows over time. Using folders or groups also means that you can easily use your citation
management system, not only for your research, but also for your coursework and other projects
that you undertake.
ASSIGNMENT 4-8:
INDIVIDUAL ASSIGNMENT – INVESTIGATING HOW CITATION
MANAGEMENT SYSTEMS WORK
Find a peer or research group member who uses a citation management system. Talk with them
about how they use the system and its functionality. Identify at least two new functions or tips
about usage that you did not previously know about.
ASSIGNMENT 4-9:
INDIVIDUAL ASSIGNMENT – COMPARING CITATION MANAGEMENT
SYSTEMS
Choose two citation management systems and compare their functionality. Describe the pros
and cons of the systems in a two-paragraph summary. Consider at least six of the following
topics in your comparison:
• Cost (short term while you are a student and long term after you have left the university)
• Operating systems requirements
• Plugins available for word processing programs such as Word and LaTex
• Attachment limits for article PDFs
• Ability to annotate with your own notes and PDF markups
• Ability to create new or edit existing citation styles
• Folder organization and sorting capabilities
• Duplicate citation detection
• Capability to collaborate and share with others in your research group
• Export options between other citation managers
4.8 PREPARING A REVIEW
There are a variety of instances in which you may be asked to review the written work of others.
As a student, this most frequently occurs in course settings where you may be asked to do a
peer review on a paper written by another student in the course. Alternatively, an instructor
may assign a journal paper review as an assignment in an advanced engineering course. More
advanced engineering graduate students may even be asked to provide input on a review of a
manuscript submitted to a journal. This might be done in collaboration with your advisor or
through your advisor’s recommendation. Regardless of the particulars of the situation, there are
numerous commonalities to the review process. A later chapter will focus on providing feedback
to a colleague in a classroom setting, such as a writing workshop. Here we will focus on providing
a critical review of a journal article.
Get Critical
The first time I was asked to review a journal article, it was for an
assignment in a first-year graduate course. We were told to choose an article
of interest and turn in a critical review. That was the extent of the instruction
and I had no idea of where to start!
In many ways, this section is written for those of you faced with such
an assignment. However, even if you don’t have to do a critical review for a
course, it is a good habit to always read critically and this information should
help you get started on the path to doing so.
It is important to first understand the expectations of those asking for the review. In the
case of a journal, they are very likely to provide you with a set of criteria, or some brief instruc-
tions, on the feedback that they would like to receive. In addition to looking for a good article,
they want to make sure that the article is a good fit for their journal. You will also have access to
the journal’s scope through its website. You can use the scope information to tell you whether
the manuscript fits with the journal to which it has been submitted.
You will need to begin by reading the manuscript thoroughly. It will likely require multiple
passes through the manuscript in order for you to complete your review, but in the first reading
you will get a general sense of the article. You may also take away an impression of its overall
strengths and weaknesses in this first reading. During this first reading keep in mind a few
questions.
What is the key takeaway message?
Does the abstract give a compelling and yet reasonable summary?
What points did you find initially confusing?
Are the figures clearly presented?
Are the terms defined and the equations understandable?
Do the conclusions follow from the results?
What is the significance of the findings presented?
Is prior published work appropriately cited?
Is the paper written with good organization and grammar?
In your review you will be seeking to help the authors improve their manuscript so that
the future readers can easily understand it. You will need to be on the lookout for both scientific
problems in the methods and analysis, as well as writing issues such as clarity and presentation.
Your review should be detailed enough to help the authors improve their manuscript regardless of
whether or not you recommend to the editor that it be accepted for publication in this particular
journal. The decision of whether or not to publish will ultimately be the editors to make, but you
will need to give an option if it should be accepted (with minor or major revisions) or rejected
based on the quality and impact on the field.
As you get more involved in your research area you will begin to learn which journals are
the most important in your field and you should become familiar with their scope and criteria
for publication as you begin to work toward publishing your own research. Every journal has
defined its scope to identify what research it will publish. For example, the journal Experimental
Mechanics is published by Springer with the Society for Experimental Mechanics, a professional
organization that I have been a member of for 30 years. If you go to the journal website you will
find the following information describing the scope of that journal10:
• Explores experimental mechanics, including its theoretical and computational analysis.
• Addresses research in design and implementation of novel or enhanced experiments to
characterize materials, structures, and systems.
• Spans research in solid and fluid mechanics to fields at the intersection of disciplines
such as physics, chemistry, and biology.
• Extends the frontiers of experimental mechanics at both large and small scales.
Below are some example criteria provided to the reviewers to give you an idea of what is
commonly requested. Each journal will give instructions to its reviewers and may have special-
ized criteria to consider, but the recommendations for reviewing provided by Springer Interna-
tional Publishing are representative of the type of requests you would see.
Evaluating Manuscripts11
When you first receive the manuscript it is recommended that you read it through once
and focus on the wider context of the research.
Springer Publishing recommends that you ask questions such as the following.
10Experimental Mechanics, Springer International Publishing, https://www.springer.com/engineering/mechanics/journal/11340.
11Evaluating Manuscripts, Springer International Publishing, https://www.springer.com/us/authors-editors/authorandreviewertutorials/howtopeerreview/evaluating-manuscripts/10286398.
• What research question(s) do the authors address? Do they make a good argument for
why a question is important?
• What methods do the authors use to answer the question? Are the methods the most
current available or is there a newer more powerful method available? Does their overall
strategy seem like a good one, or are there major problems with their methods? Are
there other experiments that would greatly improve the quality of the manuscript? If
so, are they necessary to make the work publishable? Would any different data help
confirm the presented results and strengthen the paper?
• Were the results analyzed and interpreted correctly? Does the evidence support the
authors’ conclusions?
• Will the results advance your field in some way? If so, how much? Does the importance
of the advance match the standards of the journal?
• Will other researchers be interested in reading the study? If so, what types of re-
searchers? Do they match the journal’s audience? Is there an alternative readership
that the paper would be more suitable for? For example, a study about renal disease in
children might be suitable for either a pediatrics-centric journal or one that is targeted
at nephrologists.
• Does the manuscript fit together well? Does it clearly describe what was done, why it
was done, and what the results mean?
• Is the manuscript written well and easy to read? If the manuscript has many mistakes,
you can suggest that the authors have it checked by a native English speaker. If the lan-
guage quality is so poor that it is difficult to understand, you can ask that the manuscript
be corrected before you review it.
After your first reading, write one or two paragraphs summarizing what the manuscript
is about and how it adds to current knowledge in your field. Mention the strengths of the
manuscript, but also any problems that make you believe it should not be published, or that
would need to be corrected to make it publishable. These summary paragraphs are the start
of your review, and they will demonstrate to the editor and authors that you have read the
manuscript carefully. They will also help the editor, who may not be a specialist in this particu-
lar topic, understand the wider context of the research. Finally, these paragraphs will highlight
the manuscript’s main messages that will be taken away by readers.
You can then proceed in evaluating the individual sections of the paper. (Note that
Springer’s website gives additional detailed questions to consider in each section of the
manuscript.)
Most engineering journals use a “closed” peer review process where you will know the
identities of the authors, but they will not be informed of your identity. Even though your review
will be anonymous in this sense, you should always behave respectfully and professionally in your
review. It is also likely that your criticism will be better received if you note what the authors did
well, in addition to what they need to improve. Additionally, keep in mind that as a reviewer
you are seeing research before it is published and publicly available, but you must keep this
information confidential until the publication is released.
As a future author of journal papers, you will find that the review criteria described above and the practice of
being a reviewer help you to evaluate your own manuscript with a critical eye prior to its submission.
ASSIGNMENT 4-10:
INDIVIDUAL ASSIGNMENT – WRITING A JOURNAL ARTICLE REVIEW
Choose a journal article relevant to your area of research. Conduct a 2-page written review.
Begin with a 1–2 paragraph summary of the paper. Conduct the remainder of the review us-
ing the “Evaluating Manuscripts” guidance from Springer above or the review criteria from
the journal in your area of research (find the journal’s website and look for the guidelines for
referees/reviewers).
ASSIGNMENT 4-11:
INDIVIDUAL ASSIGNMENT – ANNOTATED BIBLIOGRAPHY
Compile an annotated bibliography of a research topic of your choice. This topic may be related
to a seminar that you have attended or a research topic in your own subdiscipline area of
interest.
Your annotated bibliography must contain a minimum of eight journal articles. For each
article you must give the full citation (using a standard style such as APA or Chicago) and a
brief description (roughly 150 words) of the main purpose and findings of the paper. Include a
topic title at the top of the bibliography.
4.9 CREDITING THE WORK OF OTHERS
Very seldom is research done in a vacuum. In the vast majority of cases research is built on prior
work that has been documented by others, often in journal publications. Even truly interdisciplinary
research done at boundary areas not previously explored often borrows the techniques
and approaches from one discipline or the other and applies them in a new field or to ask new
questions. As a member of the research community it is essential that you not only know what
research has been done previously but also cite that prior work as the foundation of your own
when it is appropriate.
Student Perspective
“The whole of the scientific pursuit is based on openness and peer-
review, and it constantly builds on previous discoveries. As the step-by-step
process continues, what was learned before must be acknowledged and re-
spected.”
On the positive side, citing the work of others also helps to build your own credibility.
When you have informal conversations with people in your field, write reports and publications
about your research, present at a conference, or give a formal research talk on your campus, it
is critical that you discuss the background of your research area. In doing so you will need to
identify who made the early findings, who established critical techniques, and who has presented
research results that you have built upon or that contradict your own. Depending on the specific
circumstances, there are a variety of ways in which other people’s ideas are credited. What is
critical is that you find a way to acknowledge the work of others and distinguish it from your
own. If you do not do so you are at risk of committing plagiarism.
Student Perspective
“‘Word-for-word plagiarism’ happens when the method of expression
and sentence structure are largely maintained. ‘Patchwork paraphrase’ is the
paraphrase that contains the language from the author without rephrase and
some writer’s own words. Both situations are considered plagiarism since the
writer has only changed around a few words and phrases or changed the order
of the original’s sentences. For an acceptable paraphrasing, the information
in the original sources is accurately relayed and expressed in the writer’s own
words.”
Sometimes plagiarism is done intentionally. This is a risky proposition, especially given the
techniques that are now available at universities and with publishers for identifying plagiarism.
Even if you think you have found a way to cut corners by taking credit for someone else’s work
and get away with it, don’t do it. It not only carries high risk of repercussions; it also carries a
risk of luring yourself into other dishonest actions that not only jeopardize your career, but also
the careers of those around you and the integrity of the research in your field.12
12Ariely, D., 2012. The (Honest) Truth About Dishonesty, Harper Collins Publishers, New York.
Student Perspective
“This has become a bigger problem because it is so easy to access tons
of information from different sources on the internet and it can be tempting
to copy and paste and then rearrange and change a few words here and there
in sentences. It can also be difficult to find the author information and dates
of publication on some websites so some people may think they either do
not need to cite the source or just simply don’t do it. Another issue is the
availability of online essay writing services. These services will prepare essays
for students for a fee. It is plagiarism to hand in one of these essays because
it is passing work off as your own that you did not do.”
Sometimes plagiarism is committed because the rules are not well understood. However,
ignorance of this in an academic setting will not be excused and often has severe consequences.
If you are unsure how your institution defines plagiarism, look it up. Identify campus resources
(e.g., a Writing Center) and look for guidelines and workshops on how to avoid plagiarism.
ASSIGNMENT 4-12:
INDIVIDUAL ASSIGNMENT – WATCHING THE CREDITS
While reading a journal article or listening to a seminar presentation, pay close attention to how
prior research is credited. How is sentence structure used in combination with the citations? Is
the work of others mentioned with the researcher name(s)? If so, is the first author indicated or
the principal investigator of the research group? Does the author or speaker also cite their own
work? If so, how is this done similarly or differently to citing the work of others?
Write a brief summary about your observations, including a few representative examples.
There may be some variation within your discipline, so continue to pay attention to how
the work of others is credited when you are reading about and listening to research.
CHAPTER 5
Conducting Research
5.1 SCIENTIFIC HABITS OF THE MIND
Much has been written on the scientific method throughout history. Historian of science Daniel
Siegel of the University of Wisconsin–Madison tells us that scientific method is a complex
topic but often philosophers categorize scientific method into three idealized types: empiricist,
rationalist, hypothetical.1
• The empiricist methodology, championed by Sir Francis Bacon during the Scientific
Revolution, is based on experience, observation, and experiment. The basic idea is that
generalizations can be developed from careful observations and categorizations. The
basis of this method is to avoid prejudices and be guided by one’s experience. An ex-
ample of this methodology is taxonomy. However, this method cannot address all areas
of science effectively and answer all questions we may want to pose.
• The rationalist methodology, promoted by René Descartes in the 17th century, is
based on reasoning. The method begins with careful contemplation and consideration
of how things must be. Geometry is an excellent example of this method. You begin
with basic, self-evident axioms and then from there you can reason to a conclusion. If
the appropriate basic principles can be found, then the rest can be developed through
reason. Although its utility in isolation is limited, this is a powerful method in com-
bination with the other methods. Prof. Siegel gives the example of the principle of
conservation of energy, which is a constitutive principle of science that can be applied
to a wide range of topics and can guide thinking and new experimentation.
• Hypothetical methodology is based on suppositions and conjectures. This was a
method frequently used in science historically, but often denied. You imagine possibil-
ities, try out ideas, and ask the question: If my supposition was true, what consequences
would there be to things that I can observe directly? This method allows scientists to
get beyond the limitations of what can be observed directly. For example, the idea of
the atom came well before direct observations were possible, but the hypothesis of the
existence of atoms has logical consequences that can be observed through experiments.
The hypothesis must be testable, or verifiable, in order for it to be a scientific method.
Although this is an indirect method, it is very powerful.
1Siegel, D., video on “The Scientific Method,” in “Professional Development,” Materials Research Science and Engineer-
ing Center, University of Wisconsin-Madison, https://education.mrsec.wisc.edu/professional-development/.
If we look at the practice of science and engineering research today, we find that most
often a combination of these methods is used. We can do research using a mixture of methods
according to the needs of the research problem we are approaching.
ASSIGNMENT 5-1:
GROUP ACTIVITY – DIET COKE AND MENTOS
This activity explores scientific method through two sets of results.
The first is from the popular Discovery Channel T.V. show, MythBusters. In Episode 57:
Mentos and Soda (first aired August 9, 2006), the show investigates the cause of the explosive
reaction between these two ingredients. The results of the reaction can be found in numerous
videos through an Internet search using the key words: diet coke and mentos. See https://go.
discovery.com/tv-shows/mythbusters/ for episodes.
The second set of results comes from the experiments in an article “Diet Coke and Mentos:
What is really behind this physical reaction?” by Tonya Shea Coffey (American Journal of Physics,
76(6), (2008) pp. 551–557).
After viewing the MythBusters episode and reading the journal article, discuss the scientific
method approaches taken in each. Consider the following questions.
MythBusters Episode
• What aspects of the scientific method are incorporated into the experiments presented,
and what aspects are lacking?
• What hypotheses have the hosts made? Do they add new or modify their hypotheses
as they continue with experimentation?
• Do you agree with the conclusions of the hosts?
• Is the question closed, or must further research be done?
Coffey Paper
• What aspects of the scientific method are incorporated into the experiments presented,
and what aspects are lacking?
• What if any issues did you have with the experiments, methods, and analysis that they
chose?
• Do you agree with the conclusions of the author?
• Is the question closed, or must further research be done?
Comparison of the Two Studies
• Compare and contrast the methods, experiments, and conclusions of the Coffey paper
with the MythBusters study.
• Which of the studies better adheres to the scientific method?
• How does presentation style matter to the general public vs. the scientific community
(would you choose to highlight one of these studies over the other when discussing the
results with a friend or colleague)?
ASSIGNMENT 5-2:
GROUP ACTIVITY – MONTY HALL EXPERIMENT
Ken Overway at Bridgewater College developed an activity based on an old television show
called “Let’s Make a Deal,” hosted by Monty Hall.2 In the show, contestants have the chance
to win a big prize if they choose the correct door out of three options. After they have made
their initial choice, the host eliminates one of the remaining doors that does not have a prize. At
that point, the contestant is offered the choice of staying with their original choice or switching
to the one remaining door. What choice should the contestant make in order to have the best
chance of winning the prize?
Develop a hypothesis statement about the anticipated outcome of the relative odds of
winning the prize at the end by choosing to switch vs. stay. Consider why you have decided on
this hypothesis. Then collect empirical evidence, ideally by working with a partner so that you
have a host and a contestant. Complete 20 trials of the game. Analyze the results and determine
if you find the evidence to support or refute your hypothesis. Finally, restate or reaffirm your
hypothesis based on this evidence. (Note: A thorough discussion of the background and activity
is given by Overway.)
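If you would like to supplement your 20 hand-played trials with a much larger sample, a short computer simulation is one way to do it. The sketch below, written in Python, is only an illustration under the rules of the game as described above; the function names and the choice of 10,000 trials are mine, not part of the assignment.

    import random

    def play_round(switch):
        doors = [0, 1, 2]
        prize = random.choice(doors)        # the prize is hidden behind a random door
        first_pick = random.choice(doors)   # the contestant's initial choice
        # The host opens a door that is neither the contestant's pick nor the prize.
        revealed = random.choice([d for d in doors if d != first_pick and d != prize])
        if switch:
            # Switch to the single remaining unopened door.
            final_pick = next(d for d in doors if d != first_pick and d != revealed)
        else:
            final_pick = first_pick
        return final_pick == prize

    def win_rate(switch, trials=10000):
        wins = sum(play_round(switch) for _ in range(trials))
        return wins / trials

    print("Stay:  ", win_rate(switch=False))   # tends toward about 1/3
    print("Switch:", win_rate(switch=True))    # tends toward about 2/3

Comparing the two simulated win rates gives you a second, independent check on the hypothesis you formed and on the empirical evidence you collected by hand.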
ASSIGNMENT 5-3:
INDIVIDUAL ASSIGNMENT – COMPARING THE EVIDENCE
Choose a science question that has received coverage by the media and is perhaps a subject of
controversy. A number of health questions fall under this category, e.g., How does living near
2Overway, K., 2007. Empirical evidence or intuition? An activity involving the scientific method. Journal of Chemical
Education, 84, 606. http://jchemed.chem.wisc.edu/Journal/Issues/2007/Apr/abs606.html.
high voltage power lines affect people? Does cell phone use cause brain tumors? Will ingesting
large amounts of vitamin C help to prevent a cold?
Investigate your topic using a variety of sources, including web searches, online encyclo-
pedias, and scientific journals. Can you find scientific journal articles that disagree with one
another?
After completing your investigation, consider the following questions.
• What types of evidence are available on this question?
• Does the media coverage accurately describe the research reported in journal articles?
• How might inaccuracies in the media portrayal of research findings occur?
• What responsibilities do scientists and engineers have to convey their findings to the
public?
• If scientific journal articles come to opposing conclusions, can you identify flaws in the
methods or conclusions of these articles?
• What conclusion do you draw from your investigation, and why?
5.1.1 OTHER RESOURCES ON SCIENTIFIC METHOD
This section only touches on a very large topic that has been the subject of inquiry for centuries.
For additional content, the following references are suggested.
American Association for the Advancement of Science, 1990. Science for All Americans,
Project 2061, 183–187. See Chapter 12: Habits of Mind.
Niewoehner, R., Paul, R., and Elder, L., 2007. The Thinker’s Guide to Engineering Rea-
soning, The Foundation for Critical Thinking, p. 57.
Paul, R. and Elder, L., 2006. A Miniature Guide for Students and Faculty to Scientific
Thinking, The Foundation for Critical Thinking, p. 49.
Wolfs, F. “Introduction to the Scientific Method,” University of Rochester, http://
teacher.pas.rochester.edu/phy_labs/AppendixE/AppendixE.html
5.2 DEVELOPING A RESEARCH PROPOSAL
Different types of research take different time frames to accomplish. You need to be aware of
this, and you should also engage your research mentor to help you develop a research project that
is appropriate. Depending on whether you are embarking on a summer research experience, a
master’s thesis, or a Ph.D. dissertation, the scope of your research project will necessarily be quite
different. Your project might be quite independent of the work of others, or other researchers
may be depending on the results you produce in order to carry on their work. It may also be the
case that your research is a contribution in a much larger, longer-term effort, but you should have
an idea of what milestone(s) you are trying to achieve. Understanding these aspects up front is
important to appreciating how your research fits into the research group you are joining and the
broader field that your work connects to.
Student Perspective
“I think the most surprising thing I learned about the nature of research
this semester was the time frame on which research is carried out. Although
I wasn’t under the assumption that research is instantaneous or that a project
could be finished in a week, I never really thought about the actual time frame
of it all. But, now I realize that some research projects can take years, even
decades to get publishable results.”
After you have the basic idea of the research you will be undertaking, you need to be able to
express it clearly and succinctly in your own words. This can be harder than it sounds because you
need to have done some reading and had some conversations with your research mentor so that
you can develop a good background understanding of the area of research that you are undertak-
ing. Then you need to write out the research question in a way that everyone will understand—
you, your mentor, others in the field, and people outside the field. Use the least amount of jargon
possible and rework this short statement until you can hand it to someone who does not know
your research area; success is when they understand what you are planning to do and can explain
it back to you based on what you have written. If they can’t, listen carefully to their questions
because this will likely help you to identify where you may have leaps in logic or fuzziness in
your explanation.
Student Perspective
“One of the former … students advised that we should be able to write
down the basic idea of our thesis on a napkin. This doesn’t mean I should ex-
pect to know what results I will achieve or what questions I will answer along
the way, but having a clear question in mind at the start of the project is im-
portant. Before beginning and while performing the research, it is important
also to set realistic goals.”
Often as part of research you will need to take a role in preparing a proposal. This may be
a requirement of your degree program, part of a fellowship application, or something you work
on with your research mentor to submit to a funding agency. There are a variety of styles and
expectations depending on the specific proposal being written. The first step you should take
is to learn about the expectations—check your program requirements, fellowship application
instructions, or funding agency guidelines. The next step is to see if you can find a good example
to help you understand what a successful proposal looks like—use your network to see if you
can find someone willing to share a copy of theirs. These steps will help you prepare to write a
successful proposal of your own.
In some cases, you will need to frame your research with a hypothesis. If you have not
been given a hypothesis to explore by your research mentor, you will have to come up with your
own hypothesis after you have read the literature and immersed yourself in your research group
to learn, explore, and develop your ability to come up with a new idea. Regardless of whether
the idea originates from you or your research mentor, you need to make sure that someone else
has not already answered this question. This means taking a deep dive into the literature (not
just the last 10 years) and looking into both the literature of the specific subdiscipline in which
you are engaged and other related research areas. Use more than one search engine, as each has
slightly different coverage and indexing nuances. Then you can go about fashioning a testable
hypothesis around your idea.
To propose this work, you will not only need a research question or hypothesis, but you
will also need to have a plan for how to go about carrying out the research. Determine what you
will need to do to test this hypothesis. Will you approach it with theory, modeling, experiment,
or some combination? Determine the facilities, tools, materials, and background knowledge you
will need. Do you have access to these or can you find a way to gain access? Develop a plan for
the steps you will take to complete the research. Identify alternative strategies you will use if your
initial approach does not work out as expected. All along the way, get input from your research
mentor, other experts, and others in your research group.
There is a balance, however, and at some point you need to start on the research itself.
This might mean conducting a preliminary experiment or making a simplified model to test out
some of your ideas. If you are able to capture some data or show some initial results this can be
very useful to include in the proposal. It shows that you have already been able to make some
progress on your idea and that you have the fundamental skills needed to carry out the research
you are proposing.
Student Perspective
“One of the tendencies I had (and still pops up at times) is that I wanted
to understand all the theory, papers, and work out there first so I knew what
I was doing in my experiments and not wasting time. I almost feared just
running a trial or two to better understand, and often without doing so I
had no chance. I received advice to spend more time in the lab. That looking
for research papers was good for understanding where things have been, save
time when implementing earlier found discoveries or techniques, and even
inspires new ideas. However, the quickest way to see what works and doesn’t
will be through trial and error, learning from each experiment.”
Figure 5.1: Proposal submission process.
If you are writing a fellowship proposal or working with your research mentor on a pro-
posal to be submitted to a funding agency, it is helpful to understand how the decision process
works once you have submitted the proposal. Most funding agencies and foundations follow the
same general submission process and have the same basic steps. The schematic3 in Figure 5.1
provides a general sequence for how this often occurs. Your research mentor can help you to
determine if this is a good representation for the proposal process that you are currently under-
taking.
5.3 GETTING STARTED AND STAYING MOTIVATED
Research is inherently challenging, but it can also be fun and exciting if you make the commit-
ment and put in the effort. The front-end investment of time is often high and it usually requires
you to do a significant amount of learning and skill building before you can make progress.
Student Perspective
“In the lab, I’ve been given much more independence and responsibil-
ity. This has forced me to take more initiative than I had to previously, but
3Adapted from: Barker, K., 2006. At the Bench: A Laboratory Navigator, Updated Edition, Cold Spring Harbor Laboratory
Press, Cold Spring Harbor, NY.
also allowed me to figure things out for myself. This isn’t to say that I wasn’t
working hard before this, but I had a little bit more of a “safety net” when
doing my work. Although this extra responsibility may entail a little bit work
on my part, it is also much more rewarding.”
Students are used to taking classes and having an externally determined schedule of home-
work deadlines, project due dates, and exams. Sometimes it can be difficult to transition to re-
search when the work schedule and deadlines are most often self-imposed. Your research mentor
does not want to spend the time it would take to micromanage your research project and lay out
every step for you. Initially, you will require more guidance, but quickly you need to take on this
responsibility for yourself. Often students find it easiest to make progress if that have a regular
work schedule for research (i.e., specific hours set aside each day when research will take place).
You need to have a basic plan in mind and list of tasks that need to be accomplished, but you
will also need to be flexible and adjust as the need arises and obstacles pop up.
Student Perspective
“I realized that when taking on a research project, there is no room to
make excuses and if you want to successfully complete a project you must
always take the initiative and go the extra mile.”
Learning to work carefully and with intention is also a critical research skill. Mistakes will
happen, things will break, and that will be forgiven if you own up to the issue quickly and take
action to prevent it from happening again. Time and money can be wasted. The worst thing
you can do is risk the safety of yourself or others, so you must be certain you understand the
potential safety hazards before you take action.
Student Perspective
“The most important lesson that I’ve learned is that it is always im-
portant to be careful. I have experienced several occasions when I didn’t have
time to do things right, so I had to find time to do them twice. Not only does
this cause lost time, but it also incurs financial costs. I must always be sure to
slow down, check, double check, and then proceed. This is the most crucial
skill for an experimentalist.”
Even if you are handed a project concept by your research mentor at the beginning, it
will not only be your responsibility to carry it out but you will also need to take it to the next
level. Because the very nature of research means that it has never been done before, it often does
not go quite as it is planned. Changes in the scope and direction of the research often occur as
the research progresses. Sometimes things don’t work out and you have to develop a new path
forward. If you have been thinking about the research deeply along the way, you may have noted
opportunities for other exploratory work or alternative hypotheses.
Regardless of how much time you invest and your level of perseverance, there will still be
low points. Everyone struggles at some point with either getting started or staying motivated
while conducting experiments, coding, writing, etc. This can happen for a variety of reasons. The
beginning of a project can feel daunting because you don’t know where to begin or you are afraid
you will make a mistake. The middle of a project can feel like chaos sometimes, especially when
the direction you had started in does not work out and you have to rethink things. And there can
be struggles at the end with just getting that last bit done or writing about what you have already
accomplished. These are normal experiences and they are not insurmountable. Often you can
get back on track by asking for advice.
Turning the Tide
In a book I wrote for junior faculty called Survive and Thrive: A Guide
for Untenured Faculty [Morgan & Claypool Publishers, 2010] I talk about
the challenges of doing creative work like research: “I have come to believe
that these ups and downs are a natural part of the cycle of any career that
demands creativity. Smart creative people can't be smart and creative 100% of the
time. I believe that the denial we actively engage in often exacerbates the problem.
So, I’ll admit it to you, I have had slumps. For the most part, I too have
hidden them from my colleagues. For me, I think it is primarily the fear
of being accused of being a fraud—the old “imposter syndrome.” But what I
have discovered over the years is that, if I can find one trusted colleague that I
can feel comfortable talking with honestly and assured in their ability to keep
my confessions confidential, the conversation relieves much of the burden
and is often enough to turn the tide and give me the means to get myself out
of the slump. When I have initiated these conversations, I have also found
my colleague telling me “I’ve been feeling the exact same way recently” or “I
remember feeling the same way you are now back when.” Knowing you are
not alone also relieves some of the self-doubt.”
Student Perspective
“In a naïve way, I previously believed that if you worked hard enough,
progress will always be made. This may be true, but I have learned to gain
motivation in the small accomplishments and even the disappointments that
end up setting you back on track.”
Often people who have not run into serious motivational slumps before are the most
challenged because they have not developed strategies to turn things around. It’s also the case
that strategies which have previously worked for you in other situations or at other times may
fail you at some point. Thus, it is helpful to have a collection of strategies that you can choose
from to help you build or regain your momentum. Here are a few to consider.
• Use your calendar to block out time for specific activities. If you don't actually have
time blocked in your calendar for research and you just expect to "fit it in," then you need to treat
your research time more like an appointment that you must keep. The simple act of
spending time on tasks will help you to make progress and get back to a productive
trajectory. During these blocks of research time you should be free from distraction
(devices off if possible, no additional screens or popups to distract you).
• Break larger tasks down into smaller ones, pick the task that seems easiest and do that
one first, then take each one in turn.
• Create "To Do" lists where you prioritize critical items and focus on those initially.
Using project management strategies is essential (discussed further in the next
section), but you may have to break things down into smaller steps and provide yourself
with a very specific, task-oriented "To Do" list every day. Be realistic about what you
put on the list given the time you have available and pause to take some pleasure in
crossing off an item once it is done.
• Team up with a peer working on a similar kind of project and help each other to set
goals and keep them.
• Sometimes it is as simple as just getting started. Set a timer for 25 minutes. Press start
and work until the timer goes off. After the timer goes off, take a short break—ideally
you will get up and stretch if you have been sitting or sit down and rest if you have been
standing; you might also choose to listen to a piece of music or grab a refreshment—
but this is also a timed event. Your break should be a short one (3–5 minutes) and then
reset the timer for another 25 minutes of work. This is a time management strategy
called the Pomodoro Technique; there are apps available, although a simple timer
is all that is really needed (a minimal sketch of the cycle is given below).
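If you would like to try the cycle without hunting for an app, it can be approximated with a few lines of code. The minimal Python sketch below is only an illustration of the 25-minute work/short break pattern described above; the durations and the number of cycles are parameters you would adjust to suit yourself.

    import time

    def pomodoro(work_min=25, break_min=5, cycles=4):
        for i in range(1, cycles + 1):
            print(f"Cycle {i}: work for {work_min} minutes.")
            time.sleep(work_min * 60)           # work interval
            print(f"Cycle {i}: take a {break_min}-minute break.")
            time.sleep(break_min * 60)          # break interval
        print("Session complete.")

    pomodoro()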
The most important thing to remember is that what gets you started and what keeps you
motivated is a personal thing. There are a number of different factors that may influence your
motivation, including the variety in the tasks you undertake, the flexibility you have in the work,
and the level of responsibility you are given. Consider what factors influence your motivation
and you may be able to talk to your research mentor about making modifications so that you can
maximize these aspects.
Ultimately, you are responsible for understanding what works best for you to keep moti-
vated and then use that strategy. If it stops working, try out a different strategy. The ones given
above are just a small sample; if this is a topic you want to learn more about, there are plenty of
resources available. Consider looking into campus workshops that might be available and books
that might be useful to you. I found The Power of Habit by Charles Duhigg and Drive: The
Surprising Truth About What Motivates Us by Daniel Pink to have some interesting and useful
strategies.
5.4 PROJECT MANAGEMENT
Project management generally involves ideas, people, resources, and time. Often in engineering
research you are handed an idea as you walk into a project; however, this idea is likely to be
incomplete or even contain fundamental flaws that you will only be able to discover as you
engage with the research over time. As a result, you will need to manage the ideas as you go—
altering assumptions, reframing hypotheses, developing ideas to get yourself around roadblocks.
These modified and new ideas may be ones you generate yourself, or they may be ones that you
generate with your research mentor, your collaborators, and/or your research group. For your
research project you will need to track and manage these ideas through the evolution of the
project and check in with your research mentor about how the ideas are modifying your project
over time.
Ideas—You will have to manage ideas as they shift and change over time when new
information and data become available. Write these ideas down. Forcing yourself to
articulate them will help to refine your thinking. Revisit this over time and think
about how things have changed with new information you now have. You may in
fact have to shift your project or redefine your hypothesis.
The ideas themselves have to be actualized by someone—usually you—but often in en-
gineering research there is collaboration and teamwork involved. The people involved may be
simply you and your research mentor or it may include you, your research mentor, and many
other people involved with a large collaborative project. Usually it is somewhere in between—
involving you, your research mentor, other students in your research group, and maybe a key
collaborator. In some cases it may be as simple as working with another graduate student or
technician in the group to learn how to perform a task or use a piece of equipment that you
will need. If everyone works with the same research mentor it is often fairly straightforward; in
other cases you may need to discuss with your research mentor about how to navigate getting
the help you need. A more involved people management aspect of research is working in a more
supervisory role, for instance with an undergraduate research assistant.
People—The people management aspects of a project involve both managing yourself
and the interactions with others you are working with, including your interactions
with your research mentor. Take responsibility and determine how best to interact
in order to move your research forward. Strive to develop productive working rela-
tionships.
You may think it is your research mentor’s responsibility to manage you, but it is also your
responsibility to manage your relationship with your research mentor. As discussed previously in
concepts surrounding “mentoring up,” you need to take ownership in managing this relationship
so it is the most productive possible. Also, you need to understand that your research mentor has a
broad range of responsibilities and you need to take into account the time they will have available
to devote to you. Recognize that the amount of time available, and when it is available, may not
align exactly with what is ideal for you, so you will need to plan ahead and make adjustments. This
means considering their availability for meetings, how quickly they can respond to questions
that arise, and how long it will take them to review drafts of your writing.
As you progress in your graduate degree program you will have a committee that is made
up of several faculty who you will interact with. These individuals may be variously involved
with details of your research, evaluation of your progress, your preliminary examination (the
written and/or oral exam that determines your readiness to become a dissertator in the Ph.D.
program), and the final defense of your thesis/dissertation (i.e., the oral presentation you make
about your research at the end of your degree program). You should be able to determine the
specific requirements of your degree program and who is involved at each stage from handbook
information provided to you by the program and conversations with your research mentor. When
working with your committee and scheduling time for key events such as the final defense of your
thesis/dissertation, you must also take into account their schedules and availability. You will need
to know when they will be in town or otherwise available (via phone call, video conferencing,
or some other electronic means) and how that relates to the timeframe in which you would
like to finish as well as the deadlines set by your university. It will also be important for you to
ensure that your planning timeline provides enough time for them to give you feedback and read
whatever written materials you are expected to provide them. This requires advance planning for
the semester you expect to complete a major milestone and the semester you plan to finish your
degree.
In order to get your project done you will likely need some resources. Even a theoretician,
who only uses pencil and paper to do their research, needs resources (e.g., a paycheck). As a
principal investigator some day you would have to worry about where the money comes from
(i.e., getting grants and contracts) and how to manage the expenditures of money responsibly
to get the research project done. As a student you will need to worry about money in a couple
of ways too. You need money to live on; possibly this comes through a research grant or
a fellowship, so it is less of a worry, but you may have to do other non-research work, like a
teaching assistantship, that will help to pay your bills and cover your tuition.
income through a non-research related position will inevitably cut into the time you spend on
research. At some stage you may decide to seek other alternatives like a student loan so there are
fewer time constraints.
Resources—From personal finance to the resources required of your research project,
you will need to understand the resources available to you. Determine where the
funding for your project comes from and what constraints exist regarding that fund-
ing (how much is available, what it can be spent on, when it expires, and what pro-
posals are being planned for future research funding related to your project).
There are a variety of different kinds of resources that you may need. It may be access to
computational or equipment time; it may be samples, supplies and consumables. It may be bench
space to conduct the work or lab space to build a structure for your research. Determining what
these resources are and their availability will involve a number of questions that your research
mentor can help you to answer. These pieces of information will impact your schedule, planning,
and time to degree.
Student Perspective
“I will be writing a substantial amount of code, and I haven’t quite de-
cided if I should write it on my personal laptop in an integrated development
environment or physically move myself to my office …. While working on
my own computer seems “easier,” it’s probably a more efficient use of time
to move myself to a dedicated working environment. With a schedule and
place in mind, I hope to condition myself to work well and make a consistent
amount of progress on a weekly level.”
The aspect of your project that is often in most short supply is time. Your project does
not happen in isolation and it is not the only thing vying for your time. Early in your career
you are juggling time spent on coursework with time spent on research, not to mention personal
time. All are important. Certainly, the coursework and research will both have deadlines and
expectations for progress associated with them. At some point in your graduate studies you
will likely be done with coursework and “only” have to do your research, but at this point you
are likely to have other responsibilities you are juggling too. This may involve supervising and
training others in the research group for instance. You will also need to balance the time of doing
the research with presenting and writing about the research for conferences, journal publications,
or your thesis/dissertation. As you approach the end of your degree you will also need to devote
time to finding the next position—as an undergraduate you may be applying to graduate school,
as a graduate student you will be applying to postdoctoral or permanent positions in academia,
government labs, or industry. The application and job search processes take time and need
attention while you are busy finishing up the degree you are currently working on.
Time—Time for coursework, research, and yourself. You need to be a healthy per-
son to be an effective researcher. Get enough sleep, exercise, and have time for your
family, friends, and a hobby. Being your best at research will be easier if you are a
healthy, happy person. You will be more easily able to achieve this if you employ
some time management techniques. You also need to communicate about time with
your research mentor: your work schedule, your ability to achieve deadlines, your
need for modification when other obligations and personal needs arise, your plans
for vacation, etc.
Time on task is a critical component to seeing any project through to completion. But
don’t be fooled: just spending lots of time on something does not mean that you are making
progress. Not only do you need to monitor the time you are spending, but also how you are
spending that time and whether this time is productive. In order to do so effectively, you need a good
project plan to guide the use of your time and measure your progress against.
Student Perspective
“In my opinion, to be a successful researcher you don’t simply need to
produce good research (obviously a huge plus), but you need to do it effi-
ciently. Splitting projects up into manageable chunks not only made it seem
less daunting, but also saved me a significant amount of time in the long run.
Putting in the effort to lay out a solid plan before doing the work will save
a lot of time, allowing me to focus on the difficult tasks. Writing down my
thoughts in a notebook will save time by not having to produce the same
thought twice, or completely wasting time by forgetting the thought alto-
gether. Similarly, learning to be more consistent about writings will save time
in the same way. Finally, as I continue to grow as a researcher, I’ll need to
learn when to cut my loses and switch the direction of a project. Every re-
searcher, good or bad, will reach a point where they need to change a project;
a good one will be efficient and not waste time trying to save the idea.”
In a section titled “Professional Success: Project Management,” Prof. Mark Horen-
stein of Boston University gives three “Laws of Time Estimation”4:
1. “Everything takes longer than expected.”
2. “If you’ve worked on something similar before, estimate the amount of time required to
finish the task. The actual amount of time required will be about four times longer.”
3. “If you’ve never worked on something similar before, estimate the amount of time required
to finish the task. The actual amount of time required will be equal to the next highest
time unit. (For example, something estimated to take an hour will take a day; something
estimated to take a day will take a week, etc.)”
Although you might think that these “Laws” are an exaggeration, they can actually fit
reality better than you expect. Certainly, in research it is often true that it takes longer to ac-
complish what you set out to than you would have anticipated. When by definition you are
4Horenstein, M. N., 1999. Design Concepts for Engineers, Prentice Hall, Upper Saddle River, NJ, p. 86.
doing something that has never been done before, it can be very challenging to judge how long
it will take! Even for the mundane everyday tasks that you will have to handle, you probably
already appreciate from experience that you have to give yourself some cushion. Back in the day
when I had a paper calendar where all of my appointments, deadlines, and tasks were written,
the first page contained the following quote from an anonymous source: “WARNING—Dates
in calendar are closer than they appear!!” I found it often to be true, particularly when working
on an immovable deadline for a research proposal.
However, this does not mean that we should throw all planning out the window. In fact,
planning is what makes it all more manageable.
5.4.1 PROJECT MANAGEMENT TOOLS
Often what is critical in project management is not simply understanding all the components
of a project, but how they relate to and are dependent on each other. In a research setting this
is important whether the work be experimental, analytical, theoretical or some combination. It
may also be the case that your project is dependent on other projects or a part of a larger effort
involving a number of other researchers. In that case you would not be responsible for the overall
project management, but you would still have to manage the aspect you are responsible for and
meet the necessary deadlines so that the rest of the project can go as planned.
The project must also not be over-constrained.5 In other words, the scope must be within
your capabilities (or, more likely, the capabilities you will be developing); the resource require-
ments must be within the budget, equipment, and facilities available; and the schedule must be
reasonable, given the amount of time you are able to invest in your research.
In order to fully understand your project and develop a project plan, there are some basic
steps that you can take.6
1. First, clearly identify the project with a succinct problem statement.
2. Identify the tasks that will need to be accomplished in the course of the project with as
much specificity as possible. Define specific milestones for the project that will allow you
to measure your progress.
3. State the objective of each task so that its purpose is clearly delineated. You can think of
these as deliverables.
4. Identify the people who will be involved with each task. You may perform some of them
independently, but other tasks may require the participation of others (e.g., training), or
rely on someone else to provide you something in order to complete the task.
5Kendrick, T., 2009. Identifying and Managing Project Risk: Essential Tool for Failure-Proofing Your Project, 2nd ed., Amer-
ican Management Association, New York.
6Adapted from Ullman, D. G., 2003. The Mechanical Design Process, 3rd ed., McGraw Hill, Boston, MA.
5. Estimate the time it will take to complete each task and identify the distribution of the
time across different phases of the project.
6. Estimate the funding, equipment, computing time, or other resources that will be required
to complete each task.
7. Identify the sequence of the tasks, noting interdependencies between tasks (predecessors and
successors). Note which tasks can be done in parallel. Identify any externally imposed
deadlines.
A number of tools are at your disposal. The most basic is a simple timeline. You have
a target end date to work toward and various milestones to meet along the way. An example
is given in Figure 5.2. However, a simple timeline does not provide information about how
one might be working on multiple objectives simultaneously or show how long each task will
take. A Gantt Chart, like the one shown in Figure 5.3, provides additional information about
when project activities are occurring and shows how they overlap. Additional modifications to
the Gantt Chart can also indicate dependencies and relationships between items, e.g., data must
be collected before analysis can be undertaken, but to fully explore the interdependencies, you
may want to create a structure matrix like the one shown in Figure 5.4. Each task is assigned a
letter name and each row of the chart identifies the other tasks on which each one is dependent.
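If you are comfortable with a little scripting, the same interdependency information can be captured as a simple mapping from each task to its predecessors and checked automatically for a consistent work order. The sketch below is a minimal Python example (it assumes Python 3.9 or later for the standard-library graphlib module) using illustrative task names rather than the exact tasks of Figure 5.4; the topological sort also raises an error if the plan contains a circular dependency.

from graphlib import TopologicalSorter

# Map each task to the set of tasks that must be finished before it can start.
dependencies = {
    "design test rig": set(),
    "order test rig components": {"design test rig"},
    "test rig construction": {"order test rig components"},
    "validation experiments": {"test rig construction"},
    "first data run": {"validation experiments"},
    "data analysis": {"first data run"},
}

# A topological sort yields a work order consistent with every dependency.
print(list(TopologicalSorter(dependencies).static_order()))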
At some point, however, you must stop planning and start doing. The fact is, no matter
how much time and effort you put into your planning, some changes will inevitably arise. Your
ability to handle these changes and quickly develop a revised plan will depend on the thorough-
ness of your original planning effort, built in cushion within the plans, and flexibility with at
least one of the constraints of scope, schedule and resources. As Kendrick summarizes, “Your
primary goal in managing project constraints is to remove, or at least minimize, the differences
between the project objective and your project plan, in terms of scope, schedule, and resources.7”
ASSIGNMENT 5-4:
INDIVIDUAL ASSIGNMENT – COURSEWORK TIMELINE
Create a timeline for completion of the coursework you have remaining in your degree program.
7Kendrick, T., 2009. Identifying and Managing Project Risk: Essential Tool for Failure-Proofing Your Project, 2nd ed., Amer-
ican Management Association, New York, p. 128.
Figure 5.2: A simple timeline identifying critical milestones in a fictitious Master’s degree re-
search project.
(The milestones shown span September of Year 1 through May of Year 2 and include identifying a research project, completing and updating literature searches, training on necessary equipment/software, designing, building, modifying, and validating the test rig, collecting and analyzing data, and drafting, revising, defending, and depositing the thesis.)
Figure 5.3: A Gantt Chart showing when activities in the time line are occurring.
(Each of these activities, from identifying a research project through depositing the written thesis document, is shown as a bar spanning the months of Years 1 and 2 in which it occurs.)
Figure 5.4: A structure matrix showing the interdependencies of project tasks (A: design test rig; B: order test rig components; C: test rig construction; D: design modifications; E: validation experiments on test rig; F: first run of data collected; G: data analysis on first run).
ASSIGNMENT 5-5:
INDIVIDUAL ASSIGNMENT – RESEARCH PROJECT GANTT CHART
Develop a Gantt Chart that will help you to plan and complete your research project by the
deadline you have chosen. Consider all competing demands on your time, such as your course-
work requirements. Consult your research mentor with a draft version and seek input about
whether or not your planning is reasonable.
5.5 SCHEDULING COMMITTEE MEETINGS
It’s likely that you have a busy schedule on a day-to-day basis given your courses, research, and
other personal obligations. The faculty members you interact with also have busy schedules and
are often busiest at just the time of year when you may need their time and attention most.
With your research mentor it is ideal to have regular times when you have the opportunity to
interact so that you do not need to schedule each individual meeting that you two will have.
Your interactions with other faculty will require individual scheduling of meetings and
in the case of a committee meeting, it will require juggling the availability of multiple very
busy people. In these cases, it is critical to plan ahead. If you know you will need to give an
oral presentation to your committee in order to complete your degree requirements in the last
two weeks of the semester, then you should start the planning process more than a month in
advance so that you are certain you can find a time when everyone can meet. There are a variety
of strategies to go about this planning process, but I think the smoothest interactions take place
using the following steps.
• Consult any degree deadlines that apply and ensure that you know when the latest
acceptable meeting date will be.
• With the guidance of your research mentor, determine what weeks would be appropri-
ate for the meeting/presentation and identify the length of time that you will need to
schedule (this could be anywhere from a 1- to 3-hour time block, depending on the specific
circumstances).
• Contact the individuals on your committee independently (by email or a personal visit
to their office) to determine which dates they will be in town and generally available
during the target time frame that you are interested in.
• Compare their availability to your own and make a list of all the potential time blocks
that fit all the criteria (a small sketch of this comparison is given after this list).
• Using a scheduling tool such as Doodle or WhenIsGood can be helpful in narrowing
down workable options. Alternatively, you can list the potential day/date/times in an
email and request they respond with all options that will work for their schedule.
• The above steps should be undertaken as quickly as possible because as time passes
more and more obligations will fill up your research mentor’s and committee members’
calendars. If someone does not respond to an email request, go to their office to ask
about their availability.
• If your scheduling attempt does not work the first time, you will have to start over again
and identify different dates with your research mentor. If that is not an option, it may
be possible to have one member join the meeting by phone or video conference call if
they are out of town, or have you meet with them independent from the remainder of
the committee. If none of these options work you may have to determine if a committee
member can be substituted with a different person. For all these reasons, it is a very
good idea to start the scheduling process early.
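As an illustration of the availability-comparison step above, if you record each person’s workable time blocks as a set, the blocks that work for everyone are simply the intersection of those sets. The names and time blocks in this Python sketch are hypothetical; a shared scheduling tool accomplishes the same thing.

# Hypothetical availability collected from each committee member.
availability = {
    "you":              {"Mon 9-11", "Tue 13-15", "Thu 10-12", "Fri 14-16"},
    "research mentor":  {"Mon 9-11", "Thu 10-12", "Fri 14-16"},
    "committee member": {"Tue 13-15", "Thu 10-12"},
    "outside member":   {"Thu 10-12", "Fri 14-16"},
}

# Time blocks that fit every schedule.
common = set.intersection(*availability.values())
print(sorted(common))  # -> ['Thu 10-12']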
5.6 NAVIGATING ROADBLOCKS AND OBSTACLES
Inevitably there will be roadblocks and setbacks. I can’t think of a single research project that did
not have at least small issues arise. It is the fundamental nature of research, particularly as you
investigate areas that are at the cutting edge of a field. You may find it helpful to not only keep
your overall research goal in mind, but also to break this large goal down into smaller sub-goals
that you can more easily focus on when you run into bumps and hurdles.
Research can create more unknowns in the initial planning process and therefore more
points at which the plan must be revised. In some cases, the constraints are quite hard. For
example, when working on a research contract, certain deliverables are expected within a specific
time frame. The time frame might be modifiable, but usually the deliverables are quite fixed in
nature. In other more open-ended research projects, there may be more room for modification
of the research objectives, particularly if preliminary findings uncover what has the potential to
be a more fruitful line of inquiry. Even if you are not able to pursue these new ideas right away,
it is helpful to write them down so you can explore some of them later or use one as a basis
for a new research proposal. It is important to always be open to opportunities, especially when
faced with challenges. “Opportunity management also may result in a more interesting, more
motivating project….8”
Student Perspective
“My proposal had very lofty goals, and I knew that from the outset. I
was upset on my progress at first, but now I realize that for every problem I
run into, I’m learning more and more about the subject matter. The whole “if
at first you don’t succeed…” motto has some clear consequences …. I know
now that the full scope of my proposal will not be represented in my final
product.”
In some cases, you will have to re-propose or re-negotiate your project with your research
mentor, but you can do so in a way that sets you up for success. To do so, you must develop a
revised plan with an appropriate scope, resources, and schedule which will allow you to accom-
plish the research. Before deciding to simply reduce the scope, think creatively about how the
scope might be shifted to take advantage of what you now know about the project. If reducing
the scope is still required, determine the essential outcomes of the research before making any
cuts. When considering resources, think creatively about what other resources might be avail-
able to you. Examples range from applying for a small seed grant to finding an assistant to take
some of the work burden. Schedule modification will require a careful analysis of critical path
activities and interdependencies in the project. Additional aspects of the project may need to be
conducted in parallel and/or some time frames tightened up to allow for more flexibility in a
different part of the schedule.
Research is inherently challenging because you are trying to do something that has not
been done before. You will inevitably run into roadblocks and obstacles and have to think of
creative ways to get around, over, or through them. This is fundamentally a part of the process
and sometimes these obstacles can be the very thing that lead you to an unexpected and fruitful
outcome.
Student Perspective
“The most surprising thing that I learned about research this year was
that research could be easily delayed by sudden problems in the laboratory.
One of the largest aspects that research deals with is getting the research
systems to not only work, but to work continuously over a certain period of
8Kendrick, T., 2009. Identifying and Managing Project Risk: Essential Tool for Failure-Proofing Your Project, 2nd ed., Amer-
ican Management Association, New York, p. 133.
time. Working in a research lab on campus has shown me the multitude of
failures in machinery, computer code, and other systems that can slow down
the efficiency of a lab and delay the research mission. A failure in a critical
system of the experiment could stop work in the entire laboratory until the
problem with that one system is addressed. I believe that a lot of people have
misconceptions about the research process, and they believe that scientists
and engineers just turn buttons inside of a control room and research hap-
pens. Being exposed to research on campus, has shown me the “dirty” side
of research, where hours upon hours are spent fixing machine failures, de-
signing new systems to replace faulty processes, and brainstorming how to
fix a problem that you encounter that you originally thought not possible….
While, these problems can prove detrimental to the research mission, I be-
lieve that experiencing and overcoming these problems is one of the general
responsibilities of being an engineer in research. Encountering problems al-
low you to design new more efficient systems; as well as take a step back and
think about your research in a different way.”
Consider the following challenge as an example that came up in my own research group a
few years ago. The supplier we had used previously to make a photolithographic master changed
its focus and was no longer supplying what we needed. A new master was a key compo-
nent that we required in order to test out a new design critical to the successful completion of
our project. We had to look for options and develop a plan to find a way to have a new master
made and do the experiments we had planned. In our lab we use the photolithographic master to
make polydimethylsiloxane (PDMS) stamps that can transfer a protein pattern onto a substrate.
This allows us to seed and culture cells in specific pattern designs. The basics of the process are
illustrated in Figure 5.5.
How could we get a new photolithographic master made? First, we considered both off-
campus and on-campus sources. We looked for a new commercial supplier—a Google search
can be helpful if you know the right words to search on, but it can also be helpful to look at
recently published journal articles using the same technique to determine who their supplier
was. We quickly found two potential commercial options and began to make inquiries about
whether they could meet our specifications and how much the cost would be. While doing this,
we also wondered if there was another lab on campus using the same method. We turned to the
professional network of our research group to see if one of us knew someone who could help.
It turned out that one of my students knew a student in another lab who was doing something
similar—their research mentor was a colleague of mine, so it was easy to ask for advice. They
made something similar in their lab and offered to try to make what we needed. Additionally, we
thought about whether this was something we could easily make ourselves. There are methods
published in the literature and we could purchase the photolithographic master solutions. A
sketch of our do-it-yourself approach is shown in Figure 5.6.
Figure 5.5: Use of a photolithographic master to create patterned stamps for protein transfer
onto a substrate and subsequent cell growth on the protein.
Figure 5.6: Do-it-yourself approach to creating a photolithographic master (printing a high-
resolution transparency on a commercial printer, spin coating photoresist onto a Si wafer to the
desired thickness, UV exposure, and developing the photoresist).
We quickly ended up with several options: two potential commercial suppliers, a colleague
willing to try to help us out, and a method for how to approach it if we decided to make it
ourselves. What had seemed like a big obstacle—the loss of a supplier for a critical research
component—had turned into a solvable problem! In the end we tried one from our colleague’s
lab, but it did not quite work with our subsequent processing steps, so we used one of the other
commercial suppliers and the roadblock was cleared. There are several lessons to take away from
this example: there is often more than one solution to a problem; there is often more help with
an issue than you might have initially suspected; and pursuing multiple paths simultaneously
may produce alternatives for you to choose from.
ASSIGNMENT 5-6:
GROUP DISCUSSION – FINDING THE RESOURCES YOU NEED
You have discovered that the key to your research is getting access to a “Widget Measurement
Device.” It is critical to the successful completion of your project but your research group does
not have one. Develop a plan to find such a device and get the measurements you need. Consider
the following questions.
• How would you go about finding such a device?
• How can you get access?
• What if it does not exist on campus?
• If you have to borrow someone else’s widget, what can you give in return?
ASSIGNMENT 5-7:
INDIVIDUAL ASSIGNMENT – OPPORTUNITY MANAGEMENT
Reconsider your research plan in light of an actual or imagined roadblock and look for alternate
opportunities.
5.7 RESEARCH ETHICS (ERROR, NEGLIGENCE, MISCONDUCT)
Research is conducted by human beings, so human failings inevitably enter into the picture.
Sometimes the failing at play is simply error: people make mistakes. The defining moment is
when you discover the error and decide what to do about it. Unfortunately, sometimes an other-
wise honest person might be inclined to hide the error, which is the big mistake. The temptation
to hide it may come from feelings of shame or concern over the repercussions but hiding an er-
ror crosses the line between making a mistake and being dishonest. In my own research group,
I stress with students that it is critical for them to let me know when they make a mistake or
discover an error. The sooner we know about it, the sooner we can resolve things. For instance,
if a piece of equipment gets broken accidentally, we want to know right away so we can repair it
before it is needed again, and we’d like to determine what went wrong so we can prevent it from
happening again. If the error occurred in how an experiment was run for instance, we want to
know as soon as that comes to light so that we can determine the best course of action mov-
ing forward, e.g., repeating the experiment using the correct protocol and revising our training
procedures so that this type of error does not happen in the future.
If the error ends up not getting caught until after publication of the results, there is still
an opportunity to fix things. Journals publish errata to correct errors discovered after publication
and corrigenda to correct errors made in the publication process (like a typographical error in an
equation). In the most extreme cases, when the results and conclusions are significantly altered
by the issue, the article may be withdrawn from publication if the editor(s) agree that this is
the best course of action. You may think this is a horrid outcome, but your scientific colleagues
would rather know that the error exists than to base their ongoing work on something that is
flawed. It will waste your time and waste the funding supporting your research if you pursue
a path that someone has discovered is wrong but has not made the effort to report it. When
Prof. Pamela Ronald discovered that two papers had incorrect data and conclusions because of
lab errors associated with the mislabeling of a bacterial strain and an unreliable protein assay,
she decided to announce it publicly at a Keystone Symposia Meeting and retract the papers.9
Her colleagues applauded her forthrightness about the issue. Honest mistakes happen, and they
should be acknowledged when they are discovered. “Scientists who make such acknowledgments
promptly and openly are rarely condemned by colleagues.10”
In the hierarchy of bad things happening, negligence falls somewhere between error and
misconduct. Haste, carelessness, and inattention must be treated more harshly than an error. In
the case of negligence someone is cutting corners. This is not good science, and the outcomes
may have very negative ramifications for engineering research broadly. Money, structures, and
people’s lives may be put at risk. On the professional side, not only is the negligent individual
placing their own reputation at risk but they can also damage the reputations of their colleagues
and researchers as a whole. Further, “By introducing preventable errors into science, sloppy or
negligent research can do great damage—even if it is eventually uncovered and corrected.11”
Sometimes we don’t realize that there was a problem until we try to repeat the work later. In an
anonymous survey of National Institutes of Health researchers, 6% of the respondents admitted
to “Failing to present data that contradict one’s own previous research.12” This is really problem-
atic. Even though you and others are reading the literature critically, everyone is relying on that
prior research so that they can build upon it.
9Grens, K., 2015. Self correction: What to do when you realize your publication is fatally flawed. The Scientist, December,
29(12).
10On Being a Scientist: Responsible Conduct in Research. National Academies Press, 1995.
11On Being a Scientist: Responsible Conduct in Research, National Academies Press, 1995.
12Martinson, B. C., Anderson, M. S., and De Vries, R. 2005. Scientists behaving badly. Nature, 435.7043, 737.
Misconduct is when the action crosses into the realm of intentionally deceptive. “Making
up data (fabrication), changing or misreporting data or results (falsification), and using the ideas
or words of another person without giving appropriate credit (plagiarism)—all strike at the heart
of values on which science is based.13” A meta-analysis conducted by Daniele Fanelli “…found
that, on average, about 2% of scientists admitted to have fabricated, falsified or modified data or
results at least once—a serious form of misconduct by any standard [10,36,37]—and up to one
third admitted a variety of other questionable research practices including ‘dropping datapoints
based on a gut feeling,’ and ‘changing the design, methodology or results of a study in response
to pressures from a funding source.’14”
So why do people do it? The heart of the matter is that research can involve “…intense
competition, and is further burdened by difficult, sometimes unreasonable, regulatory, social, and
managerial demands.15” But the consequences of succumbing to the temptation can be severe.
In many of the publicly reported cases, careers have been ruined. In some cases people have even
gone to prison.16
For students, I find that their choice to undertake an act of misconduct is frequently an
issue of time. Things don’t go as planned, it takes longer than anticipated, deadlines are ap-
proaching, graduation is near, a job is waiting…and then someone gives in to the temptation to
cut corners by fabricating, falsifying, or plagiarizing. Maybe they did it before and got away with
it the first time, or even the second time, but at some point these indiscretions will get caught.
What then? It can be career ending. Years of effort invested are now wasted. As an example,
the discovery of plagiarism in one Mechanical Engineering master’s thesis led Ohio University
to conduct a broader investigation and ultimately resulted in the university “…taking action against 39 mechanical en-
gineering graduates…. It has ordered them to address plagiarism allegations involving theses
dating back 20 years or risk having their degrees revoked.17”
In some cases, the consequences can be absolutely tragic. The work and research that
engineers do is intrinsically connected to society and often directly related to safety, health, and
environmental issues.
13On Being a Scientist: Responsible Conduct in Research, National Academies Press, 1995.
14Fanelli, D., 2009. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey
data. PloS One, 4(5):e5738.
15Martinson, B. C., Anderson, M. S., and De Vries, R., 2005. Scientists behaving badly. Nature, 435.7043, 737.
16Rogers, G., 2014. Former ISU scientist who faked AIDS research indicted, Des Moines Register.
https://www.desmoinesregister.com/story/news/crime-and-courts/2014/06/19/former-isu-scientist-
who-faked-aids-research-indicted/10881781/.
17Tomsho, R., 2006. Student Plagiarism Stirs Controversy At Ohio University, Wall Street Journal.
https://www.wsj.com/articles/SB115560632839035809.
Most engineering professional organizations have a code of conduct for their members.
The National Society of Professional Engineers (NSPE) Code of Ethics for Engineers includes
the following Fundamental Canons.18
“Engineers, in the fulfillment of their professional duties, shall:
• Hold paramount the safety, health, and welfare of the public.
• Perform services only in areas of their competence.
• Issue public statements only in an objective and truthful manner.
• Act for each employer or client as faithful agents or trustees.
• Avoid deceptive acts.
• Conduct themselves honorably, responsibly, ethically, and lawfully so as to enhance the
honor, reputation, and usefulness of the profession.”
These seem sensible. Engineers should make the world around them a better and safer
place for the benefit of society. If you keep this underlying motivation in mind as you conduct
your engineering research, you will be much more able to do it in an ethical manner.
5.7.1 MISCONDUCT CASE STUDIES AND THE D.I.S.O.R.D.E.R. FRAMEWORK
Lisa Newton, Professor of Philosophy and author of Ethics in America, proposes a decision pro-
cedure for dealing with ethical dilemmas that I find very useful in applying to both case studies
and real-life situations. The basic approach is summarized below.19
“…as participants and decision makers, we should organize our options in the
situation—what alternatives are really open to us? and note the probable outcomes of
each. What, in this situation, is it possible, and reasonable, for us to do? And what
will be the likely results of each of those choices? Which of the outcomes on the
list are totally unacceptable? They should be eliminated, and the rest left for further
consideration at a later stage.”
The acronym developed by Newton is D.I.S.O.R.D.E.R., where we: Define the dilemma
we are trying to address, Investigate or identify the necessary information, identify all of the
Stakeholders that are impacted by the dilemma, explore the Options available to us, spend time
considering the Rights and rules associated with the issue and the individuals involved, make a
18National Society of Professional Engineers, “Code of Ethics,” http://www.nspe.org/resources/ethics/code-
ethics.
19Newton, L. H., 2002. Doing Good and Avoiding Evil, Part I. Principles and Reasoning. http://www.rit.edu/
~w-ethics/resources/manuals/dgae1p6.html.
Decision or determination about what actions should be taken, then Evaluate the effects that
will likely be a consequence of that decision, and at the end take time to Review and reconsider
to make sure we have made a decision that is reasonable. The D.I.S.O.R.D.E.R. framework
provides a logical and yet flexible decision-making procedure that can be used in a variety of
situations.
In applying the D.I.S.O.R.D.E.R. framework we focus on the options available, the rights and
responsibilities of the parties involved, making a preliminary decision, considering what the
effects of that decision will be, and then reconsidering. It is intentionally an iterative process that
asks you to do the thought experiment of what will happen as a consequence of the decisions
that you take, and then to reconsider the situation with those ideas in mind. Once you have been
through the O.R.D.E.R. portion of the framework at least twice you have probably gotten to
the point where you have settled on a reasonable course of action where the probable results are
in alignment with resolving the ethical quandary being dealt with.
Let’s practice by considering a simple case example that came up a number of years ago in
a class I taught. The students were assigned to write a review paper. The first draft went through
a peer review process with the other student colleagues in the class serving as the reviewers.
When reading the paper he was assigned to review, one of the students went to Wikipedia to
read some more background on the topic because it was unfamiliar to him. However, in the
process he discovered that two paragraphs of the paper’s introduction were directly copied from
Wikipedia by the author of the paper. Quotations were not used and Wikipedia was not cited.
Let’s apply the D.I.S.O.R.D.E.R. framework to this case. First, what is the student’s
dilemma? He suspects plagiarism and must decide what to do. He’s effectively already done part
of the investigation by determining that there was copying from Wikipedia, but he might also
want to check the syllabus and/or the University definition of plagiarism to see if this situation
fits. In this case it certainly fits the definition of plagiarism without question—the paper’s author
has tried to pass off the work of someone else as their own. It does not matter that it is freely
available on the internet; the author did not use quotes or paraphrase the source by putting it
in their own words, and the author did not cite the source. Who are the stakeholders? Clearly the
author of the draft that is being reviewed is a key stakeholder, but so is the student who has
identified the suspected plagiarism. The instructor for the course is an obvious stakeholder, but
also the other students taking the course are stakeholders too because they are all expected to do
the same assignment. More broadly, you would consider the University as one of the stakeholders
as well.
The student who discovered the plagiarism has a few options available—say nothing, say
something to the author, or say something to the instructor. If he looks into the University of
Wisconsin-Madison policy that spells out the rules associated with academic integrity, he will
see that the University states that “Students are expected to uphold the core values of academic
integrity which include honesty, trust, fairness, respect, and responsibility.20” With this in mind,
the “say nothing” option could potentially jeopardize him, so that does not seem like a good op-
tion. Alternatively, he could decide to talk with the author and clarify the University policies on
plagiarism, but this is not really his job and it could result in a backlash from the other student
that may be challenging to handle. In fact, the University website on “Academic Integrity” goes
on to say that “As a member of the UW-Madison community, it is your responsibility to help up-
hold the integrity of the university. If you suspect a classmate is cheating or committing another
type of academic dishonesty, notify your instructor, professor, or teaching assistant. Remember
that it is not your responsibility to investigate this. It is the job of the instructor to determine if
misconduct occurred. All you need to do is report what you heard or saw.” After reviewing and
reconsidering the options, the student who conducted the review decided that the best course
of action was to contact the instructor who then went on to take actions in accordance with
University policy.
ASSIGNMENT 5-8:
GROUP ACTIVITY – AN ETHICS CASE STUDY ON DATA FABRICATION
Consider another case study that will allow you to further apply the D.I.S.O.R.D.E.R. frame-
work. Many years ago a colleague contacted me about one of their students asking for my
thoughts on what actions they could and should take in a suspected research misconduct case.
My colleague had a student who was working to finish their Ph.D. and had given a draft chap-
ter of their dissertation to my colleague, their advisor, for review. Some of the data presented
in the chapter looked odd and drew the suspicion of my colleague. He decided to take a look
at the raw data files that are kept by the instrument that was used to take the measurements in
question. Upon investigation he found that although the instrument files corresponded to some
of the data presented in the chapter, large portions of the data had no corresponding files on the
instrument. He suspected that some of the data presented in the chapter had been fabricated.
Apply the D.I.S.O.R.D.E.R. framework to this situation. The dilemma is defined as data
fabrication. The advisor has already found some information about the extent of the data fabri-
cation, but more information is needed about the potential ramifications associated with what
the student has done. Begin by determining your institution’s rules about research misconduct,
in particular, data fabrication. Assume that the research was conducted with federal funding,
choose an agency (such as the National Institutes of Health or the National Science Founda-
tion) and determine what their relevant policies say. Continue through the D.I.S.O.R.D.E.R.
framework by identifying all of the stakeholders and exploring the options, being sure to pay at-
tention to the rights of the student and rules of the institution and agency involved. Once you
20Office of Student Conduct and Community Standards, University of Wisconsin-Madison, “Academic Integrity,”
https://conduct.students.wisc.edu/academic-integrity/.
have all of this information, make a decision about what actions should be taken, then evaluate
the effects that will likely be a consequence of that decision. You may find the that outcome is
more severe or more lax than you expected. This is the time to review and reconsider so you can
make sure that you have come to a decision that is reasonable. Repeat the O.R.D.E.R. portion
of the framework and settle on a reasonable course of action.
5.7.2 OTHER RESOURCES ON RESEARCH ETHICS
Although this chapter touches on some key issues of research ethics, this is a broad field of study
and one that entire books are devoted to. For additional reading on research ethics, the following
references are suggested.
National Academy of Sciences, N. A., 2009. On Being a Scientist: A Guide to Responsible
Conduct in Research. National Academies Press (U.S.).
Shamoo, A. E., and Resnik, D. B., 2009. Responsible Conduct of Research. Oxford Uni-
versity Press.
Lipson, C., 2019. Doing Honest Work in College: How to Prepare Citations, Avoid Pla-
giarism, and Achieve Real Academic Success. University of Chicago Press.
5.8
SAFETY
From the basic ergonomics of your work environment to the issues that arise from the need to
use dangerous substances, safety is a critical issue in your research environment. Even if you are
not conducting experimental work yourself, it is likely that there are labs down the hall, on the
next floor, or someplace in the buildings you frequent. Make sure that you know how to respond
when something dangerous occurs that may impact your own safety and the safety of others.
Basic building safety features are the first thing to become acquainted with. You should
determine where the emergency exits are located, where to go in the case of a fire, how to respond
in the case of a natural disaster, and what actions your campus recommends in the case of an
active shooter. Pay attention to drills and take part in training opportunities.
Be observant about the spaces that you inhabit—read the signage and identify safety
equipment that is readily available (e.g., fire extinguishers, automated external defibrillators,
emergency showers, eye wash stations, etc.). You may not be using chemicals, operating lasers, or
interacting with radiation sources, but this might be a regular activity in a lab down the hall, so
be sure to note any signage about hazards near your work environment. Identify the location of
phones for emergency calls—your cell phone may be most accessible but if a land line is readily
available, use it; the location of the call can then be more easily identified.
In the realm of experimental research, the potential dangers are myriad: chemical splashes,
particle inhalation, burns from heat sources, etc. If your research is conducted in a laboratory
or field environment, you will be required to take part in standard training associated with the
common safety hazards and you will likely receive specific training on procedures you will use so
that you understand and can handle the specific safety hazards. Training is essential and should
occur BEFORE you begin in the research so that you know how to protect yourself and others,
prevent hazardous situations from arising, and respond appropriately if a safety issue occurs. If
you are not offered training, you should request it. If for some reason it is not readily available,
seek out the information and educate yourself.
One of the keys to keeping a safe work environment is to ensure that the entire environ-
ment is a safe one, not just the area and materials that are directly your responsibility. If you see
something, say something. If you notice another person in the lab doing something that puts
them or others in danger, talk to them immediately. If you see a way to improve safety around a
piece of equipment or standard procedure, talk to your research mentor or the laboratory man-
ager. You should also make sure you know who to call if you see an unexpected problem, e.g.,
water leaking from under the door across the hall, or a noxious smell emanating from another
laboratory. On most campuses each building will have a building manager or safety officer. Labs
will have an emergency contact sheet on the door that also includes contact information. But, if
in doubt, call emergency services using 911.
There are even health concerns associated with sitting at your desk. You may spend much of your research
time in front of a computer—coding, collecting data, analyzing data, and writing. It is important
to have a setup that is ergonomic so that you do not develop issues over time—such as back or
neck pain, carpal tunnel syndrome, etc. Maintaining the health of your eyes is also important;
eye strain is something to guard against if your time in front of a screen is lengthy.
ASSIGNMENT 5-9:
INDIVIDUAL ASSIGNMENT – YOUR OWN SAFETY
Determine the three main safety issues relevant to your daily workspace. Investigate them in
more detail to determine: What protections have been put in place to mitigate the safety haz-
ards? What are your responsibilities with respect to these safety issues? How can safety be
improved and what actions can you take to suggest these improvements or make these improve-
ments yourself?
ASSIGNMENT 5-10:
INDIVIDUAL ASSIGNMENT – CASE STUDY
Instructions:
Read the brief case description provided. Reread while noting the important information and
the questions that are raised in your mind about the information provided, the individuals in-
volved, and their situation. Determine both the basic issues and any deeper underlying issues at
play. Consider the questions posed at the end of the case and how you would respond to these
questions as well as other questions that could be asked of this case. Write a one-page response
that includes a brief summary of the case and its issues, your answer to the questions posed, and
recommendations based on your understanding of the situation posed in the case.
Case description:
Mary has done an excellent job in navigating the safety issues associated with her research
project and is recognized in her research group for being adept with the logistics of handling
both the day-to-day safety issues and the associated campus requirements. She is also a student
member of a newly formed safety committee in her department which meets several hours every
month. Her work on the committee has been helpful to her research group because Mary makes
sure that they all stay up to date on the safety issues relevant to their work.
Dr. Smith, her research mentor, has recently taken on a new project and has already in-
dicated that Jonah, a first year graduate student, will use this project for his thesis. The project
is a very interesting one, but it will involve some new safety requirements and consultation with
campus safety experts before it can be started. However, instead of having Jonah coordinate the
safety requirements of the new project, Dr. Smith has asked Mary to take the lead.
Questions to consider:
Is it reasonable for Mary to take on this duty?
Is it possible for Mary to say “No” in the situation?
Will Jonah’s lack of involvement have the potential to compromise safety?
How might Mary work with Jonah to get safety issues of the new project coordinated
without overloading herself?
CHAPTER 6
Documenting Your Research Findings
6.1 KEEPING A RESEARCH NOTEBOOK
Several decades ago, when I worked in the medical device industry, the engineers spent the end
of each day signing and dating their lab notebook pages and providing witness signatures as a
cognizant individual on the notebooks of other engineers. When your notebook was full you
turned it back in to the company librarian and were assigned a new one. If you needed to reference
one of your old notebooks you could check it back out again. These procedures were in place
for data management and patent protection. Your clever ideas could result in patentable work
and the signed, witnessed, and dated pages of your lab notebook might be used to prove that
you were the first to come up with the invention. In 2013 a new patent law change went into
effect in the U.S., moving patent priority from first-to-invent to first-to-file; however, well-kept
research documentation is still critically important today.
These lab notebooks, or research notebooks as I will call them, are an important tool
for every researcher, whether an experimentalist, computationalist, or theoretician. The research
notebook provides a place for you to document your thinking, your results, and your conclusions.
These days your notebook may take the traditional form of a bound paper laboratory notebook
or it may be an electronic document (or some hybrid form). If you have joined a research group,
find out the practices of the group. The researcher in charge of the project (i.e., the principal
investigator, or PI) may provide you with a physical notebook or give you an account for an
electronic notebook. If not, it may be up to you to decide what format works best for you.
Student Perspective
“I work mainly on the computer and did not really understand how
I could possibly keep track of anything outside of the digital medium. Af-
ter looking at examples of lab notebooks in class and discussing what makes
a good lab notebook I realized that my thought process and various other
things to organize my thoughts and results could find a place in my note-
book.”
Ultimately though, it is common practice that the notebook will stay with the research
group when you move on to your next position. This may be a requirement of the funding agency
136
6. DOCUMENTING YOUR RESEARCH FINDINGS
supporting the research on which you are working or an aspect of a broader data management
plan of the research group or institution. You may be allowed to keep a copy for yourself (for
instance a scan or photocopy of a paper notebook or a duplicate copy of an electronic notebook).
The obvious exception to this would be if you are working on a classified project or working for
a company where the intellectual property is owned by the company.
Although you are likely the primary person who will read this notebook other than your
research mentor, keep in mind that it needs to be readable by others and thus must be kept neatly
and completely. Others should be able to reproduce your work based on what they read in your
notebook. For instance, there might be a student who follows on in your research area after you
have left the research group. So, you should keep in mind that you are not the only person you
are writing for. Your research notebook should be clear and understandable to someone working
in the area.
The basic content of your notebook should do all of the following in order to create a
traceable record of research progress/findings in one place where it can be easily accessed.
• Describe your research goal(s).
• Identify methods used.
• Support methods chosen with literature references.
• Include original raw data/images (or references to e-data).
• Include procedures/designs/programs/calculations (or references to e-files).
• Include final results and their interpretation.
• Attach print screens of e-data file directory hierarchies where applicable.
• Provide documentation that others can follow with enough detail that they could recreate the work.
• Describe thought processes, hypotheses, and outcomes.
• Plan future research activities.
• Write out steps to possible solutions for problems encountered.
Beyond the research itself, your notebook is a good place to record other research-related
interactions and information. It is an excellent place to keep notes from lab meeting discussions,
agendas for meetings with your research mentor along with the comments/suggestions made
during those meetings, and a summary of research seminars that you have attended. Being able
to refer back to these additional notes at a later time will become invaluable.
Student Perspective
“The longer that I continue to do research, the more pertinent that
it is to have a good notebook, as I find myself looking back at certain past
experiments and trying to evaluate where we’ve been, thus helping determine
a plan forward.”
You can also use your research notebook as a project management tool. Minimally it should
describe the research goal or hypothesis you are currently testing. You can also use it as a roadmap
for what comes next. This can be particularly valuable if there is a time lag and you will not be
able to get back to your research to make further progress right away.
Student Perspective
“Documenting research properly and storing files in a way that makes
sense has also been a skill I’ve had to develop over the past year. My lab
notebook entries now are more helpful than they were at the start of the
project. At some point I started ending each one with a list of immediate
next steps, and I have found that really helpful, especially during the school
year when I can’t work on the project every day and need my notebook to
remind me where I left off. I have gotten better about naming and storing
files in a way that makes them easy to find again, which is important because
as the project has gone on, I’ve collected a lot of files.”
It may seem like a lot of work to keep a good research notebook and it is. However, doing
so will pay off in a multitude of ways over time. Sometimes it is possible to make keeping a good
research notebook faster and easier by creating procedures that you write once and then refer to
(noting any modifications that you make over time) or making fill-in-the-blank tables for things
that you do routinely.
Before starting your research notebook for a mentored project, talk to your research men-
tor. Your mentor may have expectations about what type of notebook is kept and what informa-
tion must be kept in it. Additionally, if a federal agency or foundation is funding the research
project, they may have requirements of their own (for example, the Nuclear Energy Univer-
sity Program funded by the Department of Energy put out a document titled “Proper Use and
Maintenance of Laboratory Notebooks” with expectations for all the funded projects).
6.1.1 DOCUMENTING YOUR RESEARCH IN A PAPER LABORATORY
NOTEBOOK
The lab notebook in its paper form has been used for hundreds of years. There are good reasons
for this: barring calamities like a fire, paper notebooks are long-lived documents. For example, we still
have Leonardo da Vinci's notebooks to look at today! Paper continues to be the way to record
information for many.
Some basic expectations for a paper notebook are as follows.
• Use a permanently bound notebook with numbered pages.
• Ensure researcher name, contact information, and research group are prominently visible.
• Develop a table of contents as research is documented.
• Include a key to abbreviations used and naming conventions of samples/files.
• Date and sign each page.
• Write neatly in pen with lined-through corrections; X out any skipped/unused pages.
• Secure all additions with tape and sign/date over the edges (no loose pages, no staples).
Plan ahead with a numbering system for each notebook so you can reference between
them. For instance my lab notebooks at the University of Wisconsin–Madison started with
UWL1. If you are working on multiple large projects it may work better to devote different
notebooks to different projects.
Hopefully you have developed some practice through laboratory classes you have taken
in high school or as an undergraduate, but a research notebook is more than just the entries
about experimental procedures/protocols and results. The research notebook is often the key
place where everything gets tied together. For an experimentalist, this includes the reason why
you are conducting the experiment, details about or references to protocols/procedures used,
names of raw data files and output files from analysis, a plot of the results to date, the name of
the folder containing relevant image files, and methods gleaned from a literature citation.
Many computationalists keep notebooks in addition to commenting their code so they
can track their broader thinking about their research beyond the changes in functionality of a
component of the code, along with version number of the code or output file name. Theorists
keep notebooks to track their thought process, identify where ideas are built off of a literature
citation, and capture their evolution in thinking about a topic.
In addition to notes about the research you undertake, you should also capture information
about meetings you take part in, the seminars you attend, and key journal articles you read.
It may seem onerous at first, but the more you capture in your research notebook along
the way, the easier your later work will be. You will find that a well-kept research notebook is
particularly helpful in writing a paper or thesis. You will thank yourself later for developing good
documentation habits early!
You will also want to regularly back up your research notebook. If it is kept on paper, this
simply means making a photocopy or scan of the pages every month (or more frequently). For an
electronic notebook you can export the file or make an electronic duplicate that is kept on a different
server (or alternate storage device) and in a different building. You have to think about the worst-
case scenario—what if there is a fire in the building, or what if your backpack is stolen—it would
be bad enough to lose a month of data but horrible to lose it all. Unfortunately, there are actual
instances of this happening. Make sure you are not the star of the next cautionary tale!
6.1.2 DOCUMENTING YOUR RESEARCH IN AN ELECTRONIC
RESEARCH NOTEBOOK
Some campuses, research institutions, and companies provide access to electronic lab notebook
software. In some cases the use of a specific electronic lab notebook software may be required.
These products are reasonably new and have varied levels of adoption in different places. There
are also a variety of different styles, including blogs, wikis, note taking software, and document
management systems. As people have become more comfortable taking notes directly on an
electronic device, electronic lab notebook products have seen increasing adoption. The advanced
search functions and data management capabilities make these software options very attractive.
Some of this software can also provide you with added ease in connecting from a variety of
devices over the Internet and sharing with other research group members and collaborators.
There can also be added benefits in electronic signing, file versioning, and activity tracking.
As with all software, however, there are some lingering concerns over long-term accessi-
bility with software changes or a software company no longer providing updates to make the
product compatible with new operating systems. Data security can also be a concern. Look into
the software that your campus recommends or supports.
As discussed above, before you decide on how you will record your research activity, check
with your research mentor about the practices and requirements of the research group.
6.1.3 REGULAR EVALUATION OF YOUR RESEARCH NOTEBOOK
Checkups are important and will help you to maintain a good level of completeness with your
documentation of research. Some funding agencies will go so far as to require the researcher
in charge of the project (i.e., the principal investigator, or PI) to regularly review your research
notebook. Ideally this should take place regardless of such requirements. However, even if your
research mentor is not doing regular reviews, it is good practice to periodically review it yourself.
Use the activities below to do so. You can also ask your research mentor for guidance, by asking
them to review your research notebook and provide you with feedback.
ASSIGNMENT 6-1:
INDIVIDUAL ASSIGNMENT – SELF-EVALUATION OF YOUR RESEARCH
NOTEBOOK
Assess the last several months of your research documentation with the rubric below.
Check your PAPER research notebook for the following items:
☐ Name and contact information on the beginning of the notebook.
☐ Date and initial each page.
☐ Write in pen; cross out mistakes (but leave them legible); do not erase; do not tear out pages.
☐ Write neatly (so anyone can read it); leave space between things—do not crowd.
☐ No blank pages between entries.
☐ No loose pages; tape additions to a page.
Check your ELECTRONIC research notebook for the following items:
☐ Name and contact information on the beginning of the notebook.
☐ Logical naming system for each entry/file.
☐ Date and name on each entry/file.
☐ Electronic lock (i.e., archiving) activated on past entries/files.
Consider the following best practices for documenting research. Check for the following:
☐ Thoughts and ideas recorded consistently.
☐ Statement of objective and description of specific work to be performed, or reference to an approved planning document or implementing document that addresses those topics.
☐ Identification of method(s) and computer software used.
☐ Identification of any samples, test equipment, and characterization equipment used.
☐ Description of the work as it was performed and results obtained, including names of individuals performing the work, and dated initials or signature, as appropriate, of other individuals making the entries.
☐ Methods and procedures described in detail and updated as needed.
☐ Description of any problems encountered and their resolution.
☐ Entries clear enough so that the ideas can be reconstructed at a later date if necessary.
☐ Sufficient detail provided to retrace the investigations and confirm the results or repeat the investigation and achieve comparable results independent of the individual investigator.
What are you doing well? What needs improvement? Describe what action steps you will take
in the next month to improve your documentation of research.
ASSIGNMENT 6-2:
INDIVIDUAL ASSIGNMENT – USING YOUR RESEARCH NOTEBOOK
Give an example of how you have used information that was previously recorded in your research
notebook or in someone else’s research notebook. How did you find that prior work? How was
it helpful to you? Were you able to save time by having access to good documentation of prior
work in a research notebook?
6.2 DATA STORAGE AND BACKUP
What would happen if your laptop was stolen or your hard drive crashed and the data was
unrecoverable? Is that the only location your files are stored? If so, the thought should send you
into a panic and make you immediately seek a method for backing up your data.
In research your data is not just yours. You are responsible for the data and its loss could
negatively impact not just you, but also your research mentor, any peers and collaborators you
are working with, and the scientific community as a whole. Funding agencies recognize this
and many require a Data Management Plan to be developed and submitted with the proposal
for funding. This Data Management Plan will include a description of the types of data to be
collected (even file format types in some cases) and how that data will be managed and preserved,
covering not only backup systems to avoid any data loss but also how data will be shared with other
researchers. The National Science Foundation expects the following items to be addressed1:
1National Science Foundation, "Grant Proposal Guide," Chapter II.C.2.j, https://www.nsf.gov/pubs/policydocs/pappguide/nsf15001/gpg_2.jsp#IIC2j.
1. the types of data, samples, physical collections, software, curriculum materials, and other
materials to be produced in the course of the project;
2. the standards to be used for data and metadata format and content (where existing stan-
dards are absent or deemed inadequate, this should be documented along with any pro-
posed solutions or remedies);
3. policies for access and sharing including provisions for appropriate protection of privacy,
confidentiality, security, intellectual property, or other rights or requirements;
4. policies and provisions for re-use, re-distribution, and the production of derivatives; and
5. plans for archiving data, samples, and other research products, and for preservation of
access to them.
The research you are engaged with may have a Data Management Plan in place and/or
standards within the research group for collecting, organizing, storing, and backing up data.
You should begin by asking your research mentor if there are data standards for your research
that you should follow. However, some groups leave much of the detail up to each individual
researcher, so in that case you need to think about some of these key aspects yourself.
"Always keep the original" is a basic rule. If you take an image you should keep an untouched
original and make modifications only to copies of the image file. It's a good idea to lock the
original file so you don't accidentally make changes to it. In some cases a publisher may want
you to submit the original image file along with the version of the image that you will be
putting into the figure. The same rule applies to raw data output from an instrument or code.
Keep the original output and make a copy on which you then conduct analysis.
You will obviously need your files somewhere you can easily access and work on them. This
may be a laptop computer for instance. But you will also want to make sure ALL of your files are
regularly backed up somewhere. This may be an external drive or cloud storage for instance. The
data management professionals suggest that you have your data on three separate storage media
kept in at least two separate locations in case one of them gets corrupted. This may mean that you
have an external drive at home where you back up your files regularly and additionally you use a
cloud data storage system like Box or Dropbox to keep a copy of your files in a separate location.
Ideally these backups occur automatically without you having to manually initiate it, but if not,
you need to get into a backup routine so that the time span between backups is minimized.
Think of the worst-case scenario like your apartment building catching fire in the middle of the
night—you get out safely, but your laptop and external drive are destroyed. However, if you still
have a recent backup in the cloud then your months or years of research work are not lost.
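If your backups are not automatic, even a short script run on a schedule can keep the gap between backups small. Below is a minimal sketch in Python of what such a routine might look like; the source and backup paths are placeholders that you would replace with your own, and a script like this supplements, rather than replaces, an institutional backup or cloud sync service.

# Minimal incremental backup sketch: copies new or modified files from a
# working folder to a backup folder (e.g., an external drive). The paths
# below are placeholders; adjust them for your own system.
import shutil
from pathlib import Path

SOURCE = Path.home() / "Research" / "CreepCrackGrowth"   # working copy (assumed location)
BACKUP = Path("/Volumes/ExternalDrive/ResearchBackup")    # backup drive (assumed location)

def backup(source: Path, backup_root: Path) -> None:
    for src in source.rglob("*"):
        if src.is_file():
            dest = backup_root / src.relative_to(source)
            # Copy only if the backup copy is missing or older than the original.
            if not dest.exists() or dest.stat().st_mtime < src.stat().st_mtime:
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dest)  # copy2 preserves file timestamps

if __name__ == "__main__":
    backup(SOURCE, BACKUP)

Pairing a local routine like this with a cloud service keeps at least one copy in a different building.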
When considering cloud data storage systems look into what is available through your
institution. It is likely free to you and the institution has negotiated a license agreement that
takes into account security and issues surrounding sensitive data. It is also likely that you can easily
share data with your research collaborators and reassign ownership of the folder to your research
mentor when you graduate and leave the institution.
Before you have too many files to deal with, step back and decide how you could best
provide organization for your data. You will want to develop a logical folder system so that all
the files are not jumbled into one place. They should also be named clearly and succinctly. Keep
in mind that this collection of files will likely be accessed by someone else—maybe after you
graduate—in order to extend and build upon your research. You need the structure to be logical
and intuitive. A text README.txt file at the top of the file structure can help you to describe
how things are organized and keep a list of key data attributes that will be helpful not only to
others but to yourself. Here is a simple file structure to illustrate these ideas:
Creep Crack Growth
    README.txt
    Equipment
        Drawings
        Images
        Quotes
        Validation
    Experiments
        CCGProcedures
        CCGTesting
            Alloy617
                Analysis
                CrackGrowthData
                Microscopy
            Alloy800H
                Analysis
                CrackGrowthData
                Microscopy
            HeatTreatCharacterization
                Analysis
                RawData
    Publications
        ExpMech-TechniquesPaper
            Drafts
            Figures
            FinalSubmission
        MetMatTransE-CCGPaper
            Drafts
            Figures
            FinalSubmission
        SEM-ConfProc
    Reports
        AnnualReports
        FinalReport
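Once you have settled on a layout like this, you can create the empty folder skeleton (along with a starter README.txt) before any data accumulates. The short Python sketch below is one way to do so; the folder names simply mirror a portion of the example structure above and are illustrative only.

# Create a project folder skeleton and a starter README.txt describing it.
# The folder names below mirror the example structure and are illustrative only.
from pathlib import Path

FOLDERS = [
    "Equipment/Drawings", "Equipment/Images", "Equipment/Quotes", "Equipment/Validation",
    "Experiments/CCGProcedures",
    "Experiments/CCGTesting/Alloy617/Analysis",
    "Experiments/CCGTesting/Alloy617/CrackGrowthData",
    "Experiments/CCGTesting/Alloy617/Microscopy",
    "Publications/ExpMech-TechniquesPaper/Drafts",
    "Reports/AnnualReports", "Reports/FinalReport",
]

def make_skeleton(root: Path) -> None:
    # Build every folder in the list, creating parents as needed.
    for folder in FOLDERS:
        (root / folder).mkdir(parents=True, exist_ok=True)
    # Write a starter README.txt at the top level if one does not already exist.
    readme = root / "README.txt"
    if not readme.exists():
        readme.write_text(
            "Creep Crack Growth project files.\n"
            "Equipment/    drawings, images, quotes, and validation records\n"
            "Experiments/  procedures and test data, organized by alloy\n"
            "Publications/ drafts, figures, and final submissions by paper\n"
            "Reports/      annual and final reports\n"
        )

if __name__ == "__main__":
    make_skeleton(Path("CreepCrackGrowth"))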
Having an agreed-upon naming convention for experiments or a version control system for
software is especially important when you are working collaboratively. If each part of the name
for the folder/file is defined and everyone has this shared information, then a sample with the
name
U4-H9P63Y98-PS12-PS20-L30-PDMS5M-D4-1Hz8V-D19
will tell everyone that it was part of experiment U4, which involved a specific protocol of that
number, and that the cell type used was H9 passage 63 with a yield purity of 98%. The sample
was pre-seeded on day 12 which ended on day 20 when the cells were put on lanes of 30 micron
width using a PDMS substrate with Young’s modulus of 5 kPa and an extracellular matrix of
Matrigel. Pacing was then begun on day 4 in the lanes at 1 Hz frequency at 8 volts and ended
on day 19. This experiment has a page-long README file that provides detailed information
about the naming system so every researcher on the project knows what is happening at each
stage of the experiment and can add to the naming string as appropriate when they take data
after doing the next step of the protocol with the samples.
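Because the convention is defined field by field, it can also be read by software, which makes it easy to sort or filter samples without retyping their metadata. The sketch below is a minimal illustration in Python; the field names used here are illustrative labels for the example name above, not part of any published convention.

# Split a structured sample name into labeled fields so scripts can filter
# or sort samples by experiment, cell line, substrate, and so on.
# The field names below are illustrative labels for the example name
# U4-H9P63Y98-PS12-PS20-L30-PDMS5M-D4-1Hz8V-D19 described in the text.
FIELDS = [
    "experiment",        # U4
    "cell_line",         # H9 passage 63, yield purity 98%
    "preseed_start",     # PS12
    "preseed_end",       # PS20
    "lane_width",        # L30 (microns)
    "substrate",         # PDMS, 5 kPa modulus
    "pacing_start",      # D4
    "pacing_settings",   # 1 Hz, 8 V
    "pacing_end",        # D19
]

def parse_sample_name(name: str) -> dict:
    parts = name.split("-")
    if len(parts) != len(FIELDS):
        raise ValueError(f"Expected {len(FIELDS)} fields, got {len(parts)}: {name}")
    return dict(zip(FIELDS, parts))

print(parse_sample_name("U4-H9P63Y98-PS12-PS20-L30-PDMS5M-D4-1Hz8V-D19"))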
Software version control allows you to track how software changes over time and keep old
archival versions to return to as needed. Versioning is also critical so that you know what results
were created by which version of the code and so that multiple people can work on the code
simultaneously. Platforms like GitHub allow individuals and groups to deposit their code. Similar to the
README file, a wiki within the shared resource provides people with information about the
code, how to use it, etc. This would be overarching information beyond what would already be
included in the comments within the code.
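One lightweight way to tie results to a code version is to write the current commit identifier into every output file the code produces. The sketch below assumes the analysis script lives in a Git working copy and uses the standard Git command git rev-parse --short HEAD to look up the abbreviated commit hash; the output file name and values are placeholders.

# Record the current Git commit in each results file so that any result can
# later be traced back to the exact version of the code that produced it.
# Assumes the script is run from within a Git repository.
import subprocess
from datetime import datetime

def current_commit() -> str:
    # Returns the abbreviated hash of the commit currently checked out.
    return subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], text=True
    ).strip()

def save_results(values, path: str) -> None:
    with open(path, "w") as f:
        f.write(f"# code version: {current_commit()}\n")
        f.write(f"# generated: {datetime.now().isoformat(timespec='seconds')}\n")
        for v in values:
            f.write(f"{v}\n")

if __name__ == "__main__":
    save_results([1.02, 1.15, 1.31], "crack_growth_run001.txt")  # placeholder values and file name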
Student Perspective
“I’ve collected a lot of files. I started using GitHub for my code, not
because I have to share it with anyone, but because I find it’s helpful for
keeping track of what version is the most current, and I think it’s a good idea
to have a backup copy of it somewhere other than my laptop hard drive. I
should be better about keeping other particularly important files in the cloud
as well, so if my laptop were to die, it wouldn’t be as big of a problem (I do
keep my laptop backed up using an external hard drive, so my files would not
be permanently gone, but in the short term, files there wouldn’t be as easily
accessible as files stored somewhere in the cloud).”
There are a number of resources available on best practices for data management.2 Your cam-
pus will likely have an office of Research Data Services or librarians who can help you with
campus-specific resources and practices.
6.3 AVOIDING DATA MANIPULATION
We may not often think about it, but there are some important ethics issues when it comes to
handling data, figures, and images. As discussed in the previous chapter, this begins by avoiding
error, negligence, and misconduct. As discussed above, the first step is to collect and retain
all original output, raw data, and image files. The original version should be locked and left
untouched. A copy should be made when the file is needed for further analysis.
Obviously, fabricating data is wrong, but the line is sometimes fuzzier when considering
falsification. For instance, you may have what you believe to be outlier data points. If there is a
documented reason (for instance a comment in your lab notebook about high room temperature
due to broken HVAC in the building, a sample contamination, or a power fluctuation during
data collection), then it is reasonable to exclude those data points. However, if you do not have
a known reason for exclusion, you will need to report all the data points. In some cases, you may
be able to show statistically that the outliers do not fall within the data population, but even
in this case you would report any statistical exclusions that you made when writing about your
results.
There are also subtle ethics of presenting data in the best light vs. manipulating the
presentation of the data to make a false impression. For instance, you may see a small trend in your
data ranging between 90 and 100 that is interesting. However, if presented on an axis scaled
between 80 and 100, this will exaggerate the trend to a reader who is not scrutinizing the graph
carefully. When you present your data, you want to do so in a way that is honest and discloses
the full picture. It’s fine to focus in on the small trend, just be sure to do so in a way that is not
misleading.
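To see the effect for yourself, plot the same numbers twice, once on a truncated axis and once on an axis that starts at zero. The short sketch below uses Python with the matplotlib library and made-up placeholder values purely to illustrate how the choice of y-axis limits changes the visual impression of a trend.

# Plot the same data with two different y-axis ranges to show how a
# truncated axis can exaggerate the apparent size of a trend.
# The values below are placeholders purely for illustration.
import matplotlib.pyplot as plt

x = list(range(1, 11))
y = [90 + 0.8 * i for i in range(10)]   # a modest upward trend between 90 and 100

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(x, y, marker="o")
ax1.set_ylim(88, 100)                   # zoomed-in axis: the trend looks dramatic
ax1.set_title("Truncated axis")
ax2.plot(x, y, marker="o")
ax2.set_ylim(0, 100)                    # full axis: the same trend shown in context
ax2.set_title("Axis starting at zero")
plt.tight_layout()
plt.show()

Either view can be appropriate; the ethical point is to label the axes clearly and say when an axis has been truncated.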
With the advent of digital images and the capability to manipulate them, a few ethical
lapses in data manipulation have been made very public. Because of this, some journals regularly
screen submitted images to guard against image manipulation and publish standards of practice.3
There are some general guidelines that you can follow when handling images with software
packages, like Photoshop and ImageJ, that will keep you on the side of good ethics.4 To start
with, keep an archival copy of every image you generate that you never manipulate. If you need
to crop or change contrast, for example, then work on a copy of the original image and log
every change that you make to that image either in your research notebook or a text file stored
with the image(s). If you have made any manipulations beyond simple cropping and changes
in brightness or contrast, then describe what you have done when you present the image. This
can be described in the figure caption or the methods section of your paper. You should avoid
things like modification of a part of an image, aggressive cropping of an image, using extreme
or nonlinear adjustments in intensity, and digital filtering of an image. Furthermore, you should
always present representative examples of the results you have observed. If a particular image
was an outlier, then it must be described as such if you want to present it.
2See for instance, DataONE, a collaborative project on data management funded by the National Science Foundation (NSF). https://www.dataone.org/.
3"Image integrity and standards," Nature Research, Springer Nature Limited. https://www.nature.com/nature-research/editorial-policies/image-integrity.
4Hendrickson, M., 2010. "Digital Images," a talk presented in "Optical Microscopy Course," W. M. Keck Laboratory for Biomedical Imaging, University of Wisconsin–Madison.
C H A P T E R 7
Sharing Your Research via Oral Communication
7.1 INFORMAL CONVERSATIONS WITH OTHER RESEARCHERS
This chapter discusses a variety of different ways in which researchers need to communicate
their work orally. It is an important skill to develop so that you can comfortably talk about
your research with a range of different audiences, at different levels of technical depth, and with
different levels of formality.
Student Perspective
“I was very surprised that the researchers have to care a lot about com-
municating with different groups of people, like the general public, reporters,
students, colleague, and etc. A stereotype of a mad scientist who does not
communicate with other people at all is actually not possible in reality.”
One of the most important audiences with which you will need to communicate is the
other researchers in your field. These may be people you work with every day, a research men-
tor you communicate with regularly, collaborators you communicate with periodically, or other
engineers and scientists that you interact with at meetings and conferences. It may be tempting
to shy away from these sorts of communication initially, but they are truly critical both to your
personal development as a researcher and to the research project you are undertaking. Even if it
pushes you outside your comfort zone a bit, you need to engage in these conversations.
Student Perspective
“Overall, I have learned that perseverance, confidence, and communi-
cation are the most important skills that a researcher can possess. In order
to have a successful project, one must communicate with other scientists to
resolve problems in an appropriate and timely manner as well as be able to
resolve issues when help is not available.”
ASSIGNMENT 7-1:
INDIVIDUAL ASSIGNMENT – RESEARCH MEETING UPDATE
Each research group has its own style and practices involving meetings, but at some point you
can expect that you will need to talk about your work in front of others in your research group.
This may be more or less formal and may or may not involve preparing presentation slides.
Your assignment is to present a short research talk of 5–8 minutes in duration. The talk
must be relevant to your research topic but may range in content from a review of a paper to
an explanation of some aspect of your ongoing research. Formal slides are not required but you
may use them if it is customary in your research group.
7.2 INFORMAL CONVERSATIONS WITH NONSPECIALIST AUDIENCES
It is frequently the case that we have the need or desire to talk about our research with people
who are not specialists in the field or even comfortable with technical subject matter. It may be
important for us to convey the purpose of our specific research project and/or the motivation
behind the general area in which we are doing research. Communicating with people outside
your research group might originate from a need to write a cover letter to a journal editor about
a manuscript you are submitting, discuss your research with a program manager or funder of
your research group, prepare for a job interview, or communicate with the general public about
policy decisions related to your research.1
Certainly, developing these non-specialist communication skills is to your own advan-
tage when it comes to getting your research funded and published or landing a new job when
you complete your degree, but these skills are also important because of their societal benefit.
Your ability and willingness to explain your research has the benefit of promoting a scientifically
literate society, so that people who have an opportunity to influence the paths of future research
and technological uses are doing so with a basic understanding of the related research. Scientists
and engineers also have an obligation to report back to the taxpayers who fund their work.
To do this type of communication effectively you need to invest time in figuring out what it
is you want to convey and the best way to go about doing it. Depending on the specific audience
you will have to change the amount of depth and technical detail you discuss, eliminate the
jargon you might normally use with colleagues, provide more background about the subject
area, and shift your emphasis from details of the project to a more general discussion of its potential
applications.
When you engage with other researchers in your area you have the opportunity to use jar-
gon to speed up the transfer of information in conversations. However, you need to be sure that
1Baron, N., 2010. Escape from the Ivory Tower: A Guide to Making your Science Matter. Island Press.
you know what this jargon means to others so that you don’t have a problematic miscommu-
nication. Some junior researchers try to hide their limited understanding behind jargon—this
may seem to work, but if you really don't know what you are talking about, the improper use of
jargon will soon give you away. When you communicate with nonspecialists you need to remove
the jargon, and this forces you to really understand what’s underneath these words.
The order in which you talk about things will also need to change. The big idea, or headline,
needs to come at the beginning to keep the listener engaged. Narrow the key points that you
will make to only those that are essential for the particular audience you have in mind. Include
some relevant and memorable facts or theories related to each key takeaway message. When you
actually talk to people, don’t be afraid to repeat yourself—the main message should be touched
on multiple times throughout.
Several proponents of informal science communication focus on storytelling as a way to
convey research concepts to general audiences.2 This can be particularly effective when commu-
nicating to public audiences. Traditionally, stories that people tell have a hero and a goal. The
“hero” can be a person (you) or a thing (like a technology) and the goal should be something
we care about (such as an outcome that will make people’s lives better). Like any good story,
there are obstacles along the way that the hero must overcome to succeed. As you tell the story,
your goal is to engage the listener so they want to know what happens next. However, success
is not a requirement for every story… sometimes it is a “tragedy”… for example, the equipment
broke! Even these sorts of stories are important ones to tell; they reflect one of the realities of research
that people don’t often understand.
Although you may not want to craft an entire talk to fit a traditional narrative story arc,
an anecdote can help to “seduce” your audience into paying attention. The key is to make sure
that your anecdote offers a concrete example that is representative of the research.3
ASSIGNMENT 7-2:
INDIVIDUAL ASSIGNMENT – HONING YOUR MESSAGE
Identify a topic area—either a current research project or a topic related to your research
interests—and craft talking points that you would want to convey if you had five minutes to
convince someone to fund this type of research. Don’t think of this as preparing a speech, but
rather as preparing for a conversation in which you want to make some key points and be able
to respond to anticipated questions. Develop a topical “headline” and 3–4 main messages with
1–2 pieces of supporting evidence. Develop a brief closing summary statement that links back
to the headline and captures themes brought up in your main messages.
2Miller, T., 2015. Muse of Fire: Storytelling and The Art of Science Communication, https://www.spokenscience.com/publications/. Olson, R., 2018. Don't be such a Scientist: Talking Substance in an Age of Style. Island Press.
3Laszlo, P., 2007. Communicating Science: A Practical Guide, Springer Science & Business Media.
Headline
Main Message 1
Supporting Evidence
Main Message 2
Supporting Evidence
Main Message 3
Supporting Evidence
Closing Summary Statement
For example, below is a brief example of one main message on the topic of graduate education:
Main Message: Graduate education is integral to university-wide goals
Evidence: Grad students are essential in research and undergrad education
Evidence: Grad students increase the diversity of the campus community
Relevant data would also be identified on the number of research and teaching
assistantships that graduate students hold and the national and international
diversity statistics of the graduate student population at the institution.
ASSIGNMENT 7-3:
INDIVIDUAL ASSIGNMENT – MESSAGE BOX
You can use the Message Box4 shown below as a graphical method (Figure 7.1) to aid in honing
your message. The Message Box is a communication tool developed by Nancy Baron, who has
worked in science outreach with SeaWeb and COMPASS.5 It was created for scientists so that
they can prepare for interactions with the media and policy makers, but it is a generally applicable
communication tool to organize the main points surrounding a technical topic in preparation
for a discussion with a non-expert.
The Message Box helps you to create “talking points,” or key points that you believe are
important to cover in a conversation about your topic. You can use this framework to explain
what you do to those who know little about your area of expertise. It is a flexible tool that can
be used not only to prepare for a verbal conversation but also for written communication, such
as a cover letter, press release, or website.
The Message Box itself is a tool for you to use to organize your thinking. Using a piece
of paper or a PowerPoint slide, divide the page into four quadrants with the Issue in the middle.
Create a list of 2–4 talking points around the Problem, So What, Solutions, and Benefit. Although
4Baron, N. and Weiss, K., 2007. "The message box" preparation for talking with the media, seminar given on September 21.
5Baron, N. and Weiss, K., 2007. "The message box" preparation for talking with the media, seminar given on September 21.
Figure 7.1: Technical presentation of information (left) vs. Message Box (right). Consider your
audience when you translate into Message Box talking points.
it might be helpful to show it to your research mentor for feedback, it is not something you would
show to someone you are speaking with about your research. It is a tool for you to facilitate your
communication.
ASSIGNMENT 7-4:
INDIVIDUAL ASSIGNMENT – VIDEO COMMUNICATION PRACTICUM
It is often useful to have practice presenting in front of a camera so you can see how you sound
and critique your own presentation. This assignment will give you an opportunity to gain expe-
rience talking in front of a video camera in a low-risk setting. Find a place where you can be
undisturbed and use your phone or computer to capture the video.
There are a number of questions you can choose to address during your video session.
Ideally, you should focus on 1–2 questions because the total time of the video should be approx-
imately five minutes. Although you should prepare some talking points in advance, it is best if
you do not read off of notes.
Questions to consider.
Why should someone consider (or not) entering your major?
Why should someone consider (or not) going to graduate school?
What have you learned about the process of research?
What strategies would you suggest for finding a research project and research mentor?
What tips would you give someone just beginning their research project?
What tips would you give to someone about staying on track with research progress?
What advice would you give to someone preparing an oral presentation?
Review the video after you have recorded it. Identify things you have done well and things
you would like to improve.
7.3 ENGINEERING OUTREACH
One specific type of nonspecialist communication comes in the form of "outreach"—those events
that campuses and communities hold to engage the public with science and engineering topics.
Often these are targeted to the K-12 age group and frequently involve some type of hands-
on activities. Even if your campus does not hold an Engineering Expo type of event, there are
frequently opportunities for undergraduate and graduate students to interact with kids about
engineering through K-12 schools and after school programs.
Ideally, you pick a topic to present that you are interested in, maybe even something re-
lated to your research. Engaging more of the senses will make the audience's experience memorable. It is
particularly helpful if people can get their hands into and onto an experiment or demonstration
materials. You may be able to help a group experienced with outreach who already has activities
developed. If not, there are numerous resources available which can provide you with content
that you can use and adapt. If the professional society in your discipline does not already have
materials available, try looking at places like TryEngineering.org or volunteer.asee.org.
Although precocious middle and high school students who voluntarily come to campus
for a science or engineering outreach event can surprise you with the level of their knowledge
about technical topics, many people understand less about basic science concepts than you might
expect. For example, “four out of five Americans do not understand the concept of a scientific
study sufficiently well enough to provide a short sentence or two of explanation.6” Because of
this you need to think about ways to engage with people to understand what they already know
before you begin an explanation. You can do this more readily by asking questions and interacting
with them about the topic area. Ideally, you do this in a fun and engaging way, rather than making
the person feel like they are taking a quiz.
Student Perspective
“In short, I learned it is better to teach someone something simple well
than it is to teach them something more complicated badly.”
When presenting, it is important not to overload people with too much information.
You don’t want to talk down to them, but you need to simplify the ideas that you are trying
to get across without introducing errors or creating misconceptions. Often you must adjust as
you are interacting with people based on how they are responding. It’s like running an ongoing
experiment! You may have to try different things to see what's most effective and be willing to
modify your original plans.
6Knight-Williams, V., Santigian, L., and Williams, D., 2005. An overview and annotated bibliography of the use of analogies, hands-on activities, models, simulations, and metaphors: Part III of front-end analysis in support of nanoscale informal science education network. (Knight-Williams Research Communications.)
Student Perspective
“[Doing outreach] turned out to be a good practice exercise for me,
because I hadn’t anticipated that most of the people that came to the exhibit
were people that had never taken a chemistry class. So for the first hour or so,
I found myself struggling to keep people interested in the topic simply be-
cause I was out of practice explaining things that I wrongly assumed were
common knowledge. For example, many people didn’t understand atoms
bond to one another. This is actually a fairly complex concept for people
who haven’t had a science heavy education like me. Through this, I learned
the miracle of teaching through visual aids. The molecular model kits that
we had were really useful for those complex yet basic concepts that I needed
to explain, especially when I was speaking to younger patrons. I learned that
I really need to improve on how I present technical topics to younger people
and people who are less interested in science in general. Part of it is a matter
of using the right amount of scientific detail, but a lot of it has to do with
how engaging I can make the topic seem as a presenter.”
Below are some basic tips to think about before you interact with the public about a science
or engineering topic in an informal science education setting.7
• Know the intended audience.
• Define a limited set of learning goals (2–3 at most).
• Be aware of the length and attention span of the audience.
• Use multiple modalities to address a range of learning styles.
• Don’t assume prior knowledge.
• Define terms and avoid jargon.
• Avoid graphs, especially multidimensional graphs and log scales.
• Explain what you see in scientific images and diagrams.
• Use metaphors and analogies that explain and enlighten.
• Include personal aspects of the story, not just the scientific facts.
7Crone, W. C., 2006. Bringing nano to the public: A collaboration opportunity for researchers and museums, S. E. Koch,
Ed., Nanoscale Informal Science Education Network, Science Museum of Minnesota, St. Paul, MN.
• Repeat the message, explaining it in multiple ways, but be concise.
• Provide clear directions for an activity.
• Encourage visitor conversation.
• Test for misconceptions.
• Evaluate at every stage.
The arts can also be used to engage public audiences with engineering topics and employed
as an entry point for those who might not be as intrinsically interested in science and engineer-
ing topics. For example, in collaboration with professional artists and science museum exhibit
designers I have worked with other engineers and scientists to engage audiences by using silver
and gold nanoparticles suspended in polymers to make “stained glass” artwork with the public.
It’s amazing what creative minds can come up with!
The most important thing I learned when working with museum exhibit developers was
that you have to make it fun. If it is not fun people can just walk away or walk on by. This gives
you an opportunity to be creative! For example, I worked with a balloon artist several years ago
to develop an interactive balloon model for a carbon nanotube structure that we could build
with kids visiting outreach events. It was a huge success and the activity is now being used by
outreach presenters all over the world.
Hopefully you will engage in outreach yourself at some point if you have not already done
so. Being the presenter can be fun and it will likely remind you of why you got excited about
your area of study in the first place.
ASSIGNMENT 7-5:
INDIVIDUAL ASSIGNMENT – EXPLANATION FOR AN 8-YEAR-OLD
• Pick a topic that you will explain to a group of 8-year-olds (e.g., rainbow, fluorescent
light bulb, hibernation).
• Develop a 30-second explanation (oral, written, movement, and/or illustration).
• Be prepared to share your explanation.
7.4 POSTER PRESENTATIONS
The research poster is a common form of communication, both on campus and in poster sessions
held at research conferences. The poster size is usually designated but is often somewhere between
30 × 40 and 40 × 80 inches; a size large enough to enable viewing by someone standing a few feet
away. In some cases, the poster content and organization are prescribed, but more commonly
they simply follow the general organization of a research article. The poster title, authors, and
their affiliations usually appear across the top, with the title in a larger font (72–100 point font is
common for a title). The content sections of the poster usually feature an abstract, background,
methods, results, and conclusion (the body text is usually 24–32 point font, with larger font
for section headings). References and acknowledgments are usually at the bottom of the poster,
often below the conclusion. The layout should progress logically from left to right and top to
bottom.
Above all, a poster should be designed to be visually appealing, with graphics, figures, and
images as a key focus and large in size. The overall design and color scheme should be harmo-
nious. You can use boxes and borders to set apart different sections of the content. Although it
is important to include some text, it should be carefully chosen to be both informative and brief.
Black or very dark text on a white background is easy to read. You may want to think of the
poster as an advertisement or announcement of your work—thus “It needs good copy.8” Make
“headlines” and easily readable text to go along with great visuals. In many circumstances, you
will accompany the poster to provide a verbal explanation.
Student Perspective
“Through the creation of a [conference] poster… I learned much about
scientific communication. First, many basic skills were cultivated through this
poster designing process such as making something readily understandable,
uses of bullets highlights and bold, use of white space, and knowing your
own poster. Second, I learned that the number one priority when creating
a poster is, making it simple to understand for the audience and aiding in
breaking down any barriers which might have confused yourself. In addition,
the honing of the elevator speech was a process. There was a large amount
learned on what not to say (difficult or misleading concepts) and what to say
(most widely appealing/relevant virtues of my work).”
When preparing a poster, it is critical to determine the requirements before you begin. If it
will be presented in a conference venue, then the conference organization will provide guidelines.
It is essential that you find out the size restrictions and whether the poster will be displayed in a
portrait or landscape format. It is also important to determine what printing process you will be
using so that you can find out if there are restrictions on the size, how much whitespace should
be left around the edges, and how much advance time will be needed to get the poster printed.
You do not want to print out the poster only to have the last three inches cut off, forcing you
to both revise your poster in a hurry and spend money to print it twice. Carefully proofread
your poster before it is printed. Also determine how you will transport the poster—with some
printing processes you can fold the material and pack it in a suitcase, but if it is printed on
paper you will need a poster tube (be sure to label the tube with your name, address, and phone
number).
8Laszlo, P., 2007. Communicating Science: A Practical Guide, Springer Science & Business Media.
You must also consider the verbal part of the poster session. You need to have your talk-
ing points thought out in advance and designed to augment the poster content. As with non-
specialist audiences you need to engage in a conversation and be prepared to discuss your work
at the appropriate level of detail for the people you are discussing it with. This means that you
might actually have two (or more) levels of explanation based on whether you are talking to a
specialist who works in an area close to your own or someone who is interested in the work but
not an expert in your technical specialty. Practice what you will say out loud before the big day.
Ideally, you will do this with your research mentor and/or members of your research group who
can give you feedback on both the poster content and your accompanying explanation.
During the poster session, be open to a conversation as you are getting across your main
points. Ideally, a poster session is an opportunity to engage in discussions that will help you
to move your research to the next stage and you may even have great questions posed by non-
experts that help you to think about your research in new ways. If you are nearing a junction in
your academic career (i.e., applying to graduate schools or job hunting), you should let that be
known in the conversation. You can even bring along business cards and copies of your resume
in case an appropriate opportunity arises.
7.5 THE RESEARCH TALK
Presenting your research may feel overwhelming at first, but it will get easier the more times you
do it, both in terms of the time it takes to prepare your talk and how comfortable you will feel
in giving the talk in front of an audience.
When preparing a research talk, the first thing to determine is who your audience will be.
Your research mentor should be able to tell you the kinds of people who are expected to attend.
This will help you understand the level of jargon that can be used and how thoroughly you will
need to discuss the background for your topic. The second thing to know is the time limit on the
talk. In many cases the amount of time you will have to present will be quite rigid. Conference
talk slots are usually somewhere in the range of 12–18 minutes, whereas a seminar talk could
be 50 minutes. Be certain to ask if the time you are being given is inclusive of questions so that
you know whether you need to make your talk a bit shorter to allow time for questions. With
these two pieces of information you will be able to determine the scope of your talk. You will
likely need to pick and choose what you talk about from the research you have done—don’t try
to cram everything in.
Develop an outline for your talk and select the key visuals that will accompany it. Often the
flow of a talk is similar to a paper with background, methods, results/discussion, and conclusion.
If you have already submitted an abstract to a conference then your topic will be somewhat
fixed, but you may need to do more thinking about how you will motivate the work and what
background you need to provide so the audience understands where your research fits into the
field. Showing that you know the context of your research and have read the related literature
is an important aspect of the talk and will help to convince the audience of the knowledge you
have developed. As with writing, your presentation will also need to give credit to others where
appropriate. This means that you will include the names and/or references to the work of other
researchers/groups on your slides (as well as in the words that you say). In addition to a title
slide at the beginning that may include information about co-authors and funding, you will also
include an acknowledgments slide at the end. Be certain to include your research group, other
collaborators, and all of the funding that supported the research.
Emphasize the visuals in your talk and add a few key phrases or bullet points on each slide.
You will use your spoken words to fill in the details. Having less text will also prevent you from
falling in the trap of simply reading the slides to the audience. You can use figures and images
from the literature as long as you give credit to the original source. This is best done with the
citation information on the slide where it appears (rather than a number and a reference list at
the end). You will also be presenting figures, graphs, images, and/or videos about your results,
but you may find that you need to supplement these with additional images, animations, and
graphics that fill in the gaps and visually explain your approach and methods.
Discuss your outline or draft slides with your research mentor early so that you can de-
termine if you are on the right track. Once you have developed a complete draft of the talk,
practice it independently and in front of others. Many research groups will have practice talks in
their regular group meeting as a conference approaches, but if this is not planned then ask your
research mentor and others in your research group if they would be willing to watch a practice
talk and give you feedback. Be sure to organize this far enough in advance so that you can make
changes and implement the feedback thoroughly. Practice the revised talk again before the big
day.
When you speak, do so facing the audience. If you plan to use a pointer, practice using it
so that your hand is steady. Speak with a volume that will be heard by everyone in the room. If you
will have a microphone, determine how to turn it on and off and check what position will pick up
your voice clearly. As you give your talk be sure to actually explain your slides to the audience.
Although you are very familiar with your research, your audience will need to be oriented so
that they can understand them too. This means that you will need to take time to point out the
axes on a graph, the color coding for a figure, or the definition of the symbols that appear in
equations.
During your practice session, be certain to ask for questions that you might get from the
real audience. This will help you to practice thinking on your feet and constructing some of the
responses you might use. For obvious questions that you just don’t have time to address in detail
in the main talk, you might prepare a few backup slides that you can place at the end of your talk
and refer to as needed. During your actual presentation, it may happen that you get a question
you don’t know how to respond to. Don’t panic. You can reply gracefully with “Thank you for that
question. I’m not certain of the answer at the moment, but that is something I will look into.” If
you get a more aggressive question that calls the underlying basis of your work into question and
you are not certain how to deal with it, you can reply “That is a much broader discussion than
we have time for now, but I would be happy to talk with you in more detail after the session.”
Of course, then you will need to be certain to track down this person after the session and have
that difficult discussion, but at least you can have the discussion without a large audience.
Student Perspective
“Another thing that my research project has helped me improve on has
been my presentation skills. I’ve given short presentations about my project,
or topics closely related to it, a number of different times…. I’ve gotten bet-
ter at putting slides together that are clear and informative, and at judging
how much material I should have for a presentation that is to last a speci-
fied amount of time. I’ve also gotten better at explaining my project in a way
that makes sense to people outside the … field. Increased familiarity with the
field has helped me be better able to explain it as well. Practice presenting to
people is also always helpful for improving my presentation skills. Every time
I present I am less nervous about it, and I think in general my presentations
have gotten better over time.”
Whether you plan to give your talk from your own laptop, submit the file in advance to
conference organizers, have it on a file sharing system, or external memory device, be certain to
have it available in more than one place just in case the primary source does not work for some
reason. Some conferences have a speaker prep room set aside where you can check to make sure
your talk is working correctly and run through it one last time. Make sure that any videos or
animations you have included are functioning properly. If possible, especially for something
like an oral proposal talk or thesis defense talk, practice the talk in the room you will be using
so that you can become familiar and comfortable with the space. At a bare minimum, make sure
that the technology will work in advance of your talk.
Whether the talk will be given in a class or a conference, arrive early so you won’t feel
rushed. Make sure the host, instructor, or session chair knows that you are there. Ask if there
are any timing signals that they plan to provide. If not, you can ask a colleague to give you a
discreet wave when you are within a minute of the end time. Running long is considered rude
so you should avoid doing so.
Take a deep calming breath, release it in a slow steady exhale, and then give a great talk!
ASSIGNMENT 7-6:
INDIVIDUAL ASSIGNMENT – THE FLASH TALK
Sometimes it is actually easier to give a long talk than a short talk, but in some circumstances you
will be given a tight time limit. At a recent conference I attended, the graduate students
presented their research in either a “flash talk” or a traditional poster session. The flash talk had a
strict 3 minute time limit—exactly 180 seconds. These short format presentations vary, but they
are designed to give a large number of speakers time to share a glimpse of their work, usually
prior to some time frame in which people will be able to mingle and follow up with speakers
whose research interested them.
Essentially this talk is an advertisement for your work, so you want to get across a few key
items to encourage follow-up. Be sure to include your name, institution, and email address. After
mentioning the topic, which should be succinctly summarized by the title of your talk, begin
with a brief motivation for the work. Discuss the method of research you are using to approach
the problem. Present a summary of your most important results. Sum up with conclusions you
have been able to draw from the work you have done. In some cases you may be talking about
work in progress. If so, the first half of the talk stays the same. If you have preliminary results to
share, you can include those. You will wrap up with your plans for future work.
Hone your slides to get across key visual information with your talk. There are several
strategies for doing this effectively. One strategy is to use a single slide layout (see quad chart
format below). This gives the speaker a chance to talk without worrying about changing slides on
the computer and allows the audience to look through all the information over the 3-minute time
frame. Alternatively you can break up the information in a traditional slide format—title slide,
motivation, methods, results, conclusions—no more than 5 slides can be reasonably covered in
3 minutes. Ideally you should condense it to fewer if possible (for instance: title and motivation,
methods, results, conclusions).
Once you have the draft talk prepared, do a practice run and time yourself. It is likely you
will have to adjust both the slides and what you say. Practice several times to get your talk to
be exactly 180 seconds. You may want to use an audio recorder so you can listen to your talk
and make adjustments after listening to it critically. You may also want to practice in front of
someone. You may find that you tend to talk faster or slower in front of an audience and it is
important to know this in advance.
ASSIGNMENT 7-7:
INDIVIDUAL ASSIGNMENT – THE QUAD CHART
Another term you will sometimes hear regarding short presentations of research is the “quad
chart.” Sometimes this is intended to be a standalone entity and other times it is used as a back-
drop to a short presentation. It is somewhat reminiscent of a poster in that all of the information
is contained in one view. It is often created using PowerPoint and usually intended as something
that is projected on a screen all at once or printed on a single page. Funding agency program
managers may request a quad chart be produced for a research project that is underway, or it
might be used to summarize a completed project.
It might seem a bit reminiscent of the Message Box discussed earlier but the Message
Box is a tool for you to use and is not shown to someone you are speaking with, whereas the
quad chart is the main product being delivered. Although there is some overlap in the content
between a quad chart and the Message Box, the audience and intent are usually quite different.
There is no single format for a quad chart other than that it is usually broken into four
quadrants. The quad chart is intended for a technical audience and it shares many features with a
poster. It also has the formality of a title, authors (e.g., research project participants), and funding
acknowledgments.
Usually specific instructions are given by the requester (often a funding agency program
manager) about what is to be included. One common format is to use the following content in
the quadrants: (1) an image or graphic that depicts the overall project; (2) a statement about the
objective(s) of the research; (3) a description of the approach being taken to reach the objec-
tive(s); and (4) a timeline that lists milestones and progress to date.
When engaged in a Department of Energy research project several years ago, we were
asked to provide a quad chart for a progress report meeting that involved the principal investi-
gators (PIs) of all the research projects being funded by the program. The instructions were very
explicit about what exactly was to be included in each quadrant, but it basically boiled down to
the following key elements.9
1st quadrant:
Purpose/Objective—“A short description of the major contribution envisioned from
the project.”
2nd quadrant:
Importance/Relevance—“Highlight the relevance of the research being conducted.”
9Nuclear Energy University Program, Department of Energy, https://neup.inl.gov.
Impact Areas—“Identify the areas of impact from the successful completion of the
project. These should provide a broad view of the project’s scope as opposed to the
more technical ‘Purpose/Objective’ section.”
3rd quadrant:
Tools/Methods/Facilities—“Include the details of the various specialized set of
tools/methods and facilities that are being used/developed as part of the project.”
4th quadrant:
Sample Results—“Highlight the key result obtained from the work done to date. A
figure should be included along with a caption that explains the key findings.”
Status of Deliverables—“Include the list of the deliverables submitted in the project
proposal and indicate the status of each deliverable.”
For this individual assignment, use the quad chart format in Figure 7.2 to summarize the
status of the project you are currently working on.
Figure 7.2: Sample quad chart format. [Layout: Title, Authors/Project Researchers, and Institution(s)/Affiliation(s) across the top; Purpose/Objective and Importance/Relevance in the upper quadrants; Tools/Methods/Facilities and Sample Results with Status of Deliverables in the lower quadrants; Citations/References and Acknowledgments/Funding across the bottom.]
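If you happen to prepare documents in LaTeX rather than PowerPoint, the layout of Figure 7.2 can be reproduced with a simple two-column table on a single landscape page. The sketch below is only an illustration (the document class, packages, and filler text are assumptions, not part of the original figure); replace the placeholder text in each cell with your own content.

% A minimal quad chart sketch in LaTeX (assumed workflow; the original
% figure was produced in PowerPoint). The headings mirror Figure 7.2.
\documentclass{article}
\usepackage[landscape,margin=1.5cm]{geometry}
\pagestyle{empty}
\begin{document}
\begin{center}
  {\large\textbf{Project Title}}\\
  Authors / Project Researchers\\
  Institution(s) / Affiliation(s)
\end{center}
\noindent
\begin{tabular}{|p{0.47\textwidth}|p{0.47\textwidth}|}
  \hline
  \textbf{Purpose/Objective:}\newline
  Major contribution envisioned from the project. &
  \textbf{Importance/Relevance:}\newline
  Relevance of the research and its areas of impact.\\
  \hline
  \textbf{Tools/Methods/Facilities:}\newline
  Specialized tools, methods, and facilities used or developed. &
  \textbf{Sample Results:}\newline
  Key result to date, with a figure and caption.\newline
  \textbf{Status of Deliverables:}\newline
  Deliverables from the proposal and the status of each.\\
  \hline
\end{tabular}
\begin{center}
  Citations/References \hspace{2em} Acknowledgments/Funding
\end{center}
\end{document}

Compiling this file produces one landscape page with all of the information visible at once, which matches the intent of projecting or printing the quad chart as a single view.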
ASSIGNMENT 7-8:
INDIVIDUAL ASSIGNMENT – CASE STUDY
Instructions:
Read the brief case description provided. Reread while noting the important information
and questions that are raised in your mind about the information provided, the individuals in-
volved, and their situation. Determine both the basic issues and any deeper underlying issues at
play. Consider the questions posed at the end of the case and how you would respond to these
questions as well as other questions that could be asked of this case. Write a one-page response
that includes a brief summary of the case and its issues, your answer to the questions posed, and
recommendations based on your understanding of the situation posed in the case.
Case description:
Fan has been a graduate student in the Hoffman research group for four years. Her
progress as a graduate student is going very well. She has successfully passed her qualifying exams
and preliminary exam and her research project has been producing excellent results. However,
she’s beginning to feel invisible and worries that no one recognizes her research accomplish-
ments.
In the weekly research group meeting, Prof. Hoffman always asks for a volunteer to give a
more detailed presentation in the following week. Fan faithfully gives her brief research update,
but she never volunteers to give the more detailed presentation. People frequently ask her to
repeat herself when she gives her updates, which makes her self-conscious about her English
proficiency.
This week in the research group meeting several other graduate students in the research
group are presenting draft abstracts for the upcoming annual conference in the field, but Fan’s
advisor did not ask her to prepare one even though Fan thinks her research is ready. In previous
years when other students have come back from the conference, she has heard about the interesting
talks they attended, how well received their own talks were, and senses that they come back even
more energized about their research projects. Fan is beginning to get worried that she will never
get a chance to go to a conference and present her research.
Fan is friendly with Susan, another graduate student in the group who started at the same
time as her. She asks Susan to help her to make a case with Prof. Hoffman about giving her the
chance to attend the annual conference.
Questions to consider:
What might Susan suggest that Fan do to get an opportunity to attend the conference?
What other assistance can Fan seek from campus resources to improve her presentation
skills?
7.6 RESOURCES ON ORAL COMMUNICATION
Although this chapter touches on some key issues related to oral communication, this is a broad
topic that entire courses and books are devoted to. For additional content, the following refer-
ences are suggested.
Baron, N., 2010. Escape from the Ivory Tower: A Guide to Making your Science Matter.
Island Press.
Hayes, R. and Grossman, D., 2006. A Scientist’s Guide to Talking with the Media: Prac-
tical Advice from the Union of Concerned Scientists. Rutgers University Press.
Humphrey, J. D. and Holmes, J. W., 2008. Style and ethics of communication in sci-
ence and engineering. Synthesis Lectures on Engineering, 3(1):1–140.
Miller, T., 2015. Muse of Fire: Storytelling and The Art of Science Communication,
https://www.spokenscience.com/publications/
Olson, R., 2018. Don’t be Such a Scientist: Talking Substance in an Age of Style. Island
Press.
Booth, V., 1993. Communicating in Science: Writing a Scientific Paper and Speaking at
Scientific Meetings. Cambridge University Press.
C H A P T E R 8
Sharing your Research via Written Communication
Although writing might not be the first thing you think of when you imagine the sorts of
things you would do as an engineer, it is an essential aspect of nearly every engineering position
and a skill that you can develop to maximize your career outcomes. In engineering research, the
ability to share your methods and findings with others via written communication is essential.
8.1 TRANSLATING TECHNICAL TOPICS IN WRITTEN FORMATS
Before delving into technical writing, it is helpful to begin by translating technical topics into
more general explanations. This will allow you to use your prior writing experience and help you
hone your explanation skills. Much of this approach echoes the early sections of the previous
chapter. Your goal in writing about technical topics for nonspecialists is to translate the technical
so that it is more broadly understandable. Initially, this can also be a very helpful way for you to
develop a deeper understanding of a new research area you are engaging with.
In the long run you will also need to be able to translate your own research so that broader
audiences can understand what you have done or what you plan to do. This comes up in a variety
of contexts, but commonly it ties into funding your research. Working in industry you would
need to be able to write a memo about your research/development work so that your boss’ boss
can understand its importance and how it impacts the company and its products. Working in
an academic institution often requires communicating with the public through press releases
and updates to both alumni and donors. Many funding agencies also require you to write a
“lay abstract” about your research proposals and often expect short research updates that are
understandable for public consumption.
As discussed earlier, you will need to tailor the depth and technical detail depending on
the specific audience you want to communicate with. You will also need to provide a more gen-
eral discussion of the research, avoid the details, and focus on the potential applications. In
some cases, you will have the benefit of working with a professional communications specialist,
but often we are left to our own devices to figure this out. One way to be successful is to see
how others have done it so you can emulate their approach. The experts in this area are sci-
ence writers, and you can find their work in print and online. Look for good science writing
in places like the Science section of the New York Times, National Geographic Magazine, Wired,
DiscoverMagazine.com, ScientificAmerican.com, and the news written about research at
your own institution by the university communications writers.
ASSIGNMENT 8-1:
INDIVIDUAL ASSIGNMENT – LABORATORY-TO-POPULAR-PRESS1
Current scientific research is often covered by the popular press. What happens to a scientific
idea as it travels from the lab to the newspaper, news blog, or web magazine? How is scientific
information “translated” by the press for the general public? Is press coverage accurate, objective,
and complete?
Look for recent media coverage of research in your area of interest. Sources might include
the business/technology section of a print newspaper, popular science magazines, web mag-
azines, or science news blogs (for example: DiscoverMagazine.com, ScientificAmerican.com,
EurekAlert.com, Spectrum.IEEE.org). After finding an article of interest, use your lit-
erature search skills to find the peer-reviewed journal paper related to this article that has been
published by the researcher(s) in the scientific literature.
Write a 2-page paper about the original research and the media coverage. Begin your
paper with a brief summary of the research and the results based on the journal article. After
this summary, critically consider mass media reporting of the research described in the journal
article. What aspect of the research was emphasized? Was anything important omitted? Were
the results accepted uncritically? Were conflicting opinions discussed?
ASSIGNMENT 8-2:
INDIVIDUAL ASSIGNMENT – SEMINAR PRESS RELEASE
Attend a research seminar and write a summary in the style of a short “press release” for a general
audience. Summarize the seminar talk in 250–500 words using the following structure.
• Include a short and enticing title.
• Use the first few sentences to introduce the speaker, their university affiliation, the
date and title of the talk, and the seminar forum (e.g., the Mechanics Seminar Series
at UW-Madison).
1Adapted from Caitilyn Allen, Department of Plant Pathology, University of Wisconsin–Madison.
• A short description of the main finding(s) and relevance of the work should appear
early in the summary.
• The remainder of the “press release” should provide additional information about the
findings presented, including context for the work that was presented, how the work
advances the field through new findings, new methodologies, or reinforcement of prior
work.
• Minimize the amount of jargon you use and if you must include a technical term be
certain to explain what it means.
• Do not write about the minuscule details of the research presented, and use active voice.
• Include a quote from the speaker if appropriate.
8.2 BASIC PRINCIPLES OF TECHNICAL WRITING
Technical writing, which would include writing of a thesis, technical report, or engineering
journal article, is different in structure, tone, and format from other types of writing.
Initially, it may be challenging and awkward for you to write in this style. Two descriptions of
the same events are given below—the first in a “normal” style that I might write down in a diary
about something that happened in my day, the second in a style more appropriate for a technical
journal.
I came home from work and was greeted at the door by my chatty cat, Marty. Her
insistent meowing made me realize that her food bowl was empty. After setting
down my backpack, I filled her bowl and she immediately began to eat, inhaling half
of what I had fed her.
At 6:20 pm, a vocalization from feline subject #1 (female) was noted upon initial
interaction. Within one minute, 0.5 cups of Cat Chow (Indoor Dry Cat Food, Pu-
rina) was dispensed into the feeding bowl positioned at ground level. The subject
commenced ingestion within 5 seconds, and 0.3 cups was consumed by 6:34 pm.
Occasionally students will question why they must write in a particular style. It is a good
question to pose, because certainly information can be effectively communicated with a variety
of writing styles. However, you will notice that the second of the simple examples above contains more
specific information. If I gave you the second example and asked you to take care of my cat
because I was unexpectedly called away from home, you would know that my cat expects to eat
a little after 6 pm, how much to feed her, where to place the bowl, and what kind of cat food
to buy if none was left. An engineering researcher must be able to master technical communi-
cation techniques to provide the detail necessary to fully describe their research so that it can
be replicated and do it in such a way that their work will be accepted and acknowledged by the
field.
Every discipline has its standards and forms that it uses. You must know how to play
by these rules before you can consider bending or breaking them. For example, if you were an
aspiring screenwriter trying to launch your career in Hollywood, you would want to write your screenplay in
the customary format so that your ideas are presented in a familiar way and the producer does
not discard it as amateurish before even reading it. Once you have hit the big time, and written
several blockbusters, you can choose to write in a different style, but the likelihood is that by
then you will have discovered why the standard style has evolved for this discipline and writing
in that style will no longer feel foreign to you. Whether you are an aspiring screenwriter or
engineer the style in which the people in that field communicate will seem foreign at first, but
ultimately you will learn how to write in that style effectively and it will become more natural.
8.2.1 DEALING WITH WRITER’S BLOCK
Whether a seasoned writer or a novice, nearly everyone gets stuck at some point and finds it
hard to either start writing or make progress on writing. Before launching into the particulars of
specific types of technical writing, it is useful to have some strategies to employ if you run into
a snag with the writing itself. Here are some suggestions that you might try.
• You don’t have to start at the beginning, and usually the abstract is what gets written
last anyway. Try starting with the section that you find easiest first so that you can gain
some momentum.
• Make appointments with yourself for writing. This is very useful when you have a large
writing project that you need to accomplish over a period of time. At the time you have
designated on your calendar, you must write, no excuses.
• Create the diagrams/charts/figures you believe tell the story of your work, put them
in a logical order, and then go about describing them. Describe the methods used to
acquire the data found in your figures, what the reader should see when they look at
the figure, and what conclusions can be drawn from the figure. This text will likely end
up in different sections of the paper (Methods, Results, Discussion), but sometimes it
is easier to write about a figure and move the text that you generate to the appropriate
section at a later time.
• Some people find that developing a progressively more detailed outline is a fruitful
strategy. In this case you would begin with a skeleton outline, then add detailed bullet
points to it until you can eventually turn the bullets into sentences and the sentences
into paragraphs, ultimately fleshing out each section of the paper.
• Try “free writing” where you just capture your stream of consciousness. This means
that you don’t edit along the way or search for just the right word. You don’t even
worry about sentence structure and punctuation, you just get the ideas captured. Later
you can go back to the product of your free writing and begin editing and sculpting
these ideas into the appropriate format.
• Talk to a friend about your work and record the conversation. Encourage them to ask
you probing questions about your research. Afterward, review the audio file and type
the pieces that seem useful, adding more to the verbal description as you are transcrib-
ing it.
8.3 STANDARD FORMATS IN TECHNICAL WRITING
There are several basic mechanisms of communications in the world of research, and in written
communications there are a number of standard formats that are used. Writing can be used for
very different purposes, so it is important to understand both the purpose, and the audience for
whom you are writing. Initially your writing may be intended solely for your research mentor.
As time progresses, however, you may begin writing for venues in which other researchers in
your field will be the primary audience.
Who your audience is will have an impact on a variety of different aspects of your writing.
When writing to be read by members of your research community you can use more technical
jargon; however, it is critical that you understand this jargon rather than hide behind it. Any
missteps in the use of jargon will be quickly identified by experts in the field. When writing to
a more general audience, or a broader technical audience who are not members of your specific
subspecialty area, you will need to be much more limited in your use of technical terms and
jargon. Choose a few of the most important technical terms, and make sure that you clearly
define them through your writing. Layering on additional jargon should be avoided if at all
possible.
8.3.1 ABSTRACTS
The abstract is likely the most ubiquitous format in technical writing. For instance, you may
want to submit your work to a conference for a poster or oral presentation; in most cases this
will require you to provide an abstract that summarizes the work you will present. Abstracts
are also commonly used at the beginning of longer forms of writing such as technical reports,
proposals, theses, and journal articles.
A good abstract will provide motivation for the work, information about the approach
taken in the research, and a summary of the key findings. Ideally, as part of the abstract you will
also indicate how the research findings impact the field. Commonly the abstract will include: the
context for the research, a description of the methods used, the important results/findings, and
the impact/importance of these results. Depending on the context, the length of the abstract may
be prescribed. Commonly abstracts fall in the range of 200–300 words, although an “extended
abstract” will be the longer in length. In general, it is not appropriate to include citations and
abbreviations in an abstract.
Take some time to read abstracts in your discipline area that have been written for a similar
purpose to your current writing task. For example, if you are writing a conference abstract, then
look at example abstracts from the previous conference year. If you are writing an abstract for a
journal article or research paper, then look at the abstracts of recent journal publications in your
discipline. While reviewing these example abstracts, think about the type of information that
is being conveyed by each sentence. Dissecting what others have included in their abstract will
help you decide what readers will expect to see in yours.2
Some publications require what is referred to as a “structured abstract” which includes
specific subheadings.3 This began in the fields of health and medicine to assist clinicians in quickly
identifying methodology and results in journal articles, but this format is being adopted by more
science and engineering journals because it helps the reader to quickly identify relevant content.
Subheadings may include some or all of the following: Introduction, Objectives, Methods, Re-
sults, Discussion, Conclusion. Each subheading will usually have 1–3 sentences of concisely
written content that the reader can easily understand.
As readers of journal articles we usually begin with the abstract, but as the writer of a
manuscript to be submitted as a report or journal article you will generally find it easiest to write
the abstract last (or at least after you have completed a substantial amount of the other writing
involved in your manuscript). These sentences may be the ones you spend the most time crafting
in the entire manuscript!
ASSIGNMENT 8-3:
INDIVIDUAL ASSIGNMENT – ABSTRACT DISSECTION
Identify a journal paper of importance to your research area, preferably one that is considered
to be an influential paper or has been highly cited. Begin by reading the abstract, then read the
rest of the paper. Return to the abstract again and dissect each sentence by identifying how it
addresses one or more of the four components below:
• Motivation/objectives
• Approach/methods
• Results/findings
• Impact/significance
2“Writing an Abstract for Your Research Paper,” UW-Madison Writer’s Handbook, The Writing Center, Uni-
versity of Wisconsin–Madison. https://writing.wisc.edu/handbook/assignments/writing-an-abstract-for-
your-research-paper/.
3National Library of Medicine, “Structured Abstracts,” https://www.nlm.nih.gov/bsd/policy/structured_
abstracts.html.
8.3.2 REPORTS
The technical report is a common form of writing for engineers. This may be required within
your research group as a standard mechanism of keeping your research mentor updated, or as an
expectation of research funding. If you hold a fellowship you may be required to provide a report
at the end of the fellowship year, for instance. Many federal agencies, private foundations, and
industry contracts that provide funding to support research activity also expect regular reports
on the progress being made. These reports may be expected monthly, quarterly, or annually.
Although your particular research contributions may only be a portion of the reported activity,
you will likely be responsible for providing not only data and results, but also a written summary
of the work to date.
Student Perspective
“I had to write quarterly and annual reports for my portion of the
project and was involved in the discussion of how we were going to proceed
with the project for renewal in funding.”
Before you begin your writing, determine the required format and expected detail. Often,
your research mentor or other research group members will be able to provide you with this type
of background information. As you embark on your writing, focus on presenting information
in a logical order, using clear sentence structure. Consider whether figures and tables will assist
you in presenting the information more effectively. Relevant figures might include a schematic
diagram of a process, a photograph of an experimental setup, a graphical depiction of results,
and/or a micrograph.
ASSIGNMENT 8-4:
INDIVIDUAL ASSIGNMENT – WRITTEN RESEARCH UPDATE
Some research mentors expect students to provide weekly or monthly written reports, but even
if that is not the case for you on a regular basis there may be the need for you to occasionally
provide a written research update (e.g., when travel prevents your usual face-to-face meeting).
If your research mentor has asked for a specific format of update, you should provide your update
in that format. If a specific format is not required, then present your update in a logical manner
broken down by project and objectives. Provide actions taken since the last meeting/report,
results obtained, next steps planned, and questions that need to be addressed. In preparation for
your next interaction with your research mentor, draft a written report.
8.3.3 TECHNICAL WRITING FOR A PROPOSAL, THESIS, OR JOURNAL ARTICLE
Entire courses and books are written on the topic of technical writing alone.4;5;6 In this section
we will focus on some key highlights and strategies you can use to help improve your writing. In
addition to this content, you may also want to avail yourself of other resources such as textbooks
on the subject and campus resources (e.g., your university’s Writing Center).
For most formal writing, there are standard format requirements, or at least a few options
for acceptable formats. Begin your writing process by understanding the format expectations.
In the case of a thesis, there may be detailed guidelines provided by your program or institution
on both the structure and layout of the document. If you are writing a journal article, then
you will need to consult the guide for authors that the journal publishes (usually on its web
site). Depending on the journal they will be more or less prescriptive. Regardless, these author
instructions should be followed to the letter. If the macro-scale organization of the document is
not predetermined, then you will need to work with your research mentor to develop the basic
outline. This will usually include: abstract, introduction, methods, results, discussion, conclusion,
acknowledgments, references.
It is also the case that within the discipline a certain style of writing is expected. As
discussed above, you will need to become aware of those stylistic expectations and to use them
in your writing. This is oftentimes easiest to do by looking at examples of prior writing in that
style. Remember that there are both good examples and bad examples available to you, so you
may want to ask for advice about which examples are the ones you should use to guide your
writing.
Whether it be a proposal, thesis, or journal article, you must convince the reader to in-
vest their time in what you have written. Having it clearly written, well organized, and visually
appealing is essential. You also have to anticipate that the reader may not actually read the doc-
ument in the order that you have presented it. For example, if I am deciding whether to invest
time in reading a journal article, I’ll first read the abstract. If the abstract looks worthwhile, then
I’ll glance through figures and jump to the conclusion to see what the key findings were. If my
interest is piqued, then I will invest more time in reading everything in between. In your own
writing process, you will write the abstract and conclusion sections LAST because these get
written after you know everything you want to say. You will likely
spend far more time per word writing and rewriting these sections of the paper, so that they are
as clearly and compellingly written as possible.
4Humphrey, J. D. and Holmes, J. W., 2008. Style and ethics of communication in science and engineering. Synthesis
Lectures on Engineering, 3.1, 1–140.
5Northey, M. and McKibbin, J., 2009. Making Sense: A Student’s Guide to Research and Writing. Oxford University Press.
6Day, R. A. and Gastel, B. How to Write and Publish a Scientific Paper. Cambridge University Press.
8.3.3.1 Persuasive Writing
Although it must always be scientific and objective, technical writing should also be considered
persuasive writing. This is obvious in the case of a proposal where you are trying to pitch an idea
and potentially convince someone to fund that idea, but it is also true for technical writing in
general. You must always persuade your reader of the merits of your work with logical argument,
compelling evidence, and engaging language. You may also need to consider addressing any
counter arguments in order to deal with common objections preemptively.
Choose an introductory paragraph from a prior writing assignment that you have com-
pleted and rewrite it while focusing on persuasion. The original paragraph should be from a
technical writing topic, although it does not have to be related to your research. Include both
the original and revised paragraph in your assignment.
8.4 REFINING YOUR WRITING
Different people vary in their writing speed and effectiveness, but I have never met someone who
was able to sit down and write the final version of their paper on the first try. As Paul Silvia,
professor of psychology, puts it: “Writing productively is a skill, not a genetic gift, and you can
learn how to do it.7”
Good writing requires reworking and editing what you have written (sometimes dozens
of drafts are required). Additionally, when writing with the input of your research mentor or
coauthors for a journal publication, you will need to seek their feedback and incorporate modifi-
cations. The process can sometimes be long and frustrating, but the outcome will be substantially
improved.
Student Perspective
“I suppose I was under the assumption that good experiments and more
so good researchers are those who can finish projects as quickly as possible.
After what I know now, this seems to be true and false at the same time. On
one side, researchers are undoubtedly judged by how many published papers
they have produced, which is directly related to how quickly they can start and
finish projects. But, on the other hand, they are also judged by the substance
of their publications, which could be related to how slowly and carefully they
start and finish projects. It seems that there is a very fine line between this
race to produce as many published papers as you can and trying to maintain
a standard of quality of information being produced.”
Ideally, the process of writing will start well before the research has been completed. There
are a number of ways to go about writing productively while you are conducting research.
7Silvia, P. J., 2007. How to Write a Lot: A Practical Guide to Productive Academic Writing. American Psychological Asso-
ciation.
• When you read an important journal article that you expect to cite, take some time to
write about the key aspects of the article. You may be able to use the notes section of
your citation management system to capture this type of writing.
• As the methodology that you are using to conduct your research becomes solidified, it
is an opportune time to begin crafting your methods section.
• Writing about your results as they begin to accumulate allows you to organize this
information more effectively and this practice can be helpful in identifying previously
unanticipated gaps that you will need to fill.
It is important to work far enough ahead of your deadline to give enough time for you
and the others invested in your writing to make the needed iterations. In the case of a thesis you
may also be expected to provide a complete draft weeks in advance of your defense date.
Writing does not have to be done in isolation. You will likely have peers who are also in
the process of writing (or should be) at the same time you are. Even if it is not helpful for you to
write in the same physical space, it can be helpful to set writing goals that you hold each other
to and provide each other with feedback periodically.
8.4.1 WRITING WORKSHOPS
In some course settings you may be asked to review the written work of a peer in your class and
provide constructive feedback. Ideally you will be given some instruction and a rubric to do so.
If not, the guidance in this section should be helpful to you. Keep in mind that the author will
be reading your review, so you should take pains to be constructive in your criticism and to state
the positives with the negatives. Every author, whether a classmate or a seasoned researcher, is
a person just like you and me! They will be more open to the criticism and making changes to
their writing based on your comments if they are framed constructively.
When giving feedback you might ask the receiver if they want constructive criticism, and
how it can be best delivered. This will show that you are willing to adjust how you provide the
feedback and it will allow the recipient to reflect on how they can best receive it.
Bradley Hughes, Director of the University of Wisconsin–Madison Writing Center,
helped me to develop guidelines for writing workshops that we hold twice a semester in our
research course sequence. Over the years I have asked students for feedback on how to improve
these guidelines for responding to writing on engineering research topics. The resulting sugges-
tions below can be used in a writing workshop forum or when exchanging your writing with a
peer for feedback.
Some Suggestions for Responding to a Colleague’s Draft8
Before reading the draft–
1. Find out what the writer is intending to do in the document and who the intended audience
is.
2. Find out what the writer wants from you as a reviewer at this stage of their writing and
use that information to prioritize your feedback.
When reading and responding–
3. Read the entire draft before commenting.
4. Praise what works well in the draft; point to specific passages; explain why these passages
work well. PICK AT LEAST ONE THING to compliment and begin your response
with that.
5. Describe what you found to be the main point of the draft so that the author can determine
if their intent has been achieved.
(a) Try describing what you see in the draft.
(b) What do you see as the main point?
(c) What do you see as the organizational pattern?
6. When providing criticism, be honest (but polite and constructive) in your response. Try
responding as a reader, in the first person (e.g., “I like _____.” “I got lost here …” “I think
you could help readers follow this if ___________”).
7. Time is limited (for your response and for the author’s revision), so concentrate on the most
important ways the draft could be improved. Comment on large issues first. Consider the
following questions.
(a) Is the purpose of the document clear to a reader? Does the draft achieve its purpose?
i. Is the writing accessible to a scientifically literate audience with some background
in your area of research?
ii. Are ideas presented in an interesting manner?
iii. Can the reader infer what the specific aim of the research is? Are the goals clearly
stated?
8Adapted with permission from Bradley Hughes with modifications and edits from Engineering Physics majors at UW-
Madison. See “Peer Reviews,” UW-Madison Writer’s Handbook, The Writing Center, University of Wisconsin–Madison,
https://writing.wisc.edu/handbook/process/peerreview/.
iv. Is the scope of the project clear? What are the deliverables of the research?
v. For a proposal:
A. Does the writer propose the research in such a way that it appeals both to
a general technical audience and members of the author’s specific research
field?
B. Does the writing provide a compelling argument for the significance of the
proposed research?
vi. For a report, manuscript, or thesis:
A. Is enough background given so the reader understands how the data was
collected, or how a theory was developed?
B. Is the draft convincing in its argument to support the conclusions? Are the
results clearly documented? Is evidence used properly?
(b) Are ideas adequately developed?
i. Are the important ideas of the work presented? Is there a clear focus?
ii. Is the draft effectively organized? Is the sequence of points logical?
iii. Is there an appropriate balance between major and minor points?
iv. Do the author’s ideas flow logically from one paragraph to the next?
v. Were there any paragraphs within the author’s draft that seemed out of place?
vi. Are the transitions between sections strong? Is material from earlier in the doc-
ument built upon and referred to clearly in later sections?
(c) Is prior published work on the topic described in sufficient detail to give context to
the current work? Are the references clearly cited?
(d) Are the figures/tables/equations clear and appropriate?
i. Do the figure captions provide appropriate detail?
ii. Do the figures/tables support the claims that are made in the text?
iii. Are the mathematical equations understandable?
8. Be specific in your response (explain where you get stuck, what you don’t understand)
and in your suggestions for revision. And as much as you can, explain why you’re making
particular suggestions.
9. Identify what’s missing, what needs to be explained more fully. Also identify what can be
cut.
10. Engage in a discussion, but refrain from arguing with the author or with other respondents.
11. Mark proofreading edits (awkward or confusing sentences, style, grammar, word choice)
on a printout to hand to the author rather than spending time on these
details in the discussion.
ASSIGNMENT 8-5:
GROUP ACTIVITY – WRITING WORKSHOP
If you do not already participate in a class or research group that holds Writing Workshops,
you can form your own writing group with peers at your institution. A group of 4–6 people is
ideal. You will need to agree on the frequency of your meetings, how long you will meet, and
what deadlines you will impose on sharing your writing prior to the workshop. A suggested format
follows.
Writing Workshop Group
You will each need to produce a piece of writing by midnight on Monday for the writing work-
shop you will be participating in on Wednesday. Provide a copy of your written piece, including the
cover page information discussed below, to all of the other workshop members. Before meeting,
everyone must read the written pieces of all the other group members and come to the workshop
prepared to discuss the writings. Bring a copy of your own written piece and cover page as well
so that you can reference it and make notes.
Writing Assignment
Choose a research report, journal article manuscript, research proposal, or thesis you are working
on as the subject of your writing piece. Provide 3–5 pages of new writing to your Writing Work-
shop group members. Figures and tables (if needed) as well as references should be attached to
the end and should NOT be counted toward the 3–5 pages. Include an outline of the overall
piece with a description of where this writing will be incorporated.
If you are not actively writing at this time, ask your research mentor to identify a “good”
thesis in the same general field as your research topic. Read this thesis and write a 1-page re-
flection commenting on the organization of the thesis, what you learned about thesis writing
through your reading of this “good” example, what was done well by the author, and what mod-
ifications you would suggest to improve the thesis.
Writing Workshop Cover Page
The following questions should be addressed in the cover page of the writing piece.
1. What part of your proposal/thesis is this draft (for example, the introduction to my thesis;
or the review of technical literature; or the first part of the results section …)?
2. What are your main points in this section?
3. What specifically are you happy with and do you think is working well in this section?
4. What specifically would you especially like some feedback on or help with in this draft?
5. Anything else your readers should know to read this draft in a way that will be helpful to
you?
ASSIGNMENT 8-6:
INDIVIDUAL ASSIGNMENT – WRITING WORKSHOP REFLECTION
Reflect on the Writing Workshop activity. Discuss the parts of the process that worked well
and what could be improved. Consider “Suggestions for Responding to a Colleague’s Draft” in
Section 8.4.1 and how it can be refined for technical writing. What are specific critical questions
that must be asked for the type of writing you reviewed? Should the questions differ when
considering a journal article manuscript vs. a research proposal vs. a thesis?
8.5 ISSUES SURROUNDING AUTHORSHIP
Who is included as an author and the order of the authors can become a contentious subject
because it involves both getting credit for the work and taking responsibility for the work. To
avoid or at least minimize such problems, it can be helpful to talk about authorship when you
are embarking on the research, well before you get to the stage of writing. As an early stage
researcher, it is a natural topic for you to bring up for discussion with your research mentor so
that you better understand the norms within your research area.
Shamoo and Resnik suggest beginning the determination of authorship by identifying
the ways in which individuals have contributed to a research project. They identify the following
areas of research contribution9:
• Defining problems
• Proposing hypotheses
• Summarizing background literature
• Designing experiments
• Developing methodology
• Collecting and recording data
• Providing data
• Managing data
9Shamoo, A. E. and Resnik, D. B., 2009. Responsible Conduct of Research. Oxford University Press.
• Analyzing data
• Interpreting results
• Assisting in technical aspects of research
• Assisting in logistical aspects of research
• Applying for grant/obtaining funding
• Drafting and editing manuscripts
Who appears on the author list can be more complex, particularly in a larger project that
has involved a number of people at different stages of the work. In some cases, the journal
will identify the criteria for authorship. The medical community has spent time wrestling with this
issue as a result of some historical problems where individuals were included on the author
list although they did not contribute to the work. The International Committee of Medical
Journal Editors (ICMJE) proposes the following criteria for inclusion as an author on a journal
publication10:
• substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
• drafting the work or revising it critically for important intellectual content; AND
• final approval of the version to be published; AND
• agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
You will notice here that the last bullet explicitly deals with the authors taking responsi-
bility for the work that is published. In large collaborative projects, and particularly as a junior
colleague, it is difficult for you to know about the details of every aspect of the work. Certainly,
you have responsibility for the aspects of the research and writing that you were directly involved
with, thus you can ensure that those parts are conducted in the most ethical manner possible.
And, if for some reason the publication is called into question, you have the responsibility to
provide information related to the research.
Disciplines and sub-disciplines have different ways of determining author order: whose
name goes first on the author list, whose name goes last, and in what order others appear. In
some disciplines, it is simply alphabetical. In many disciplines, the principal investigator of a
research project is usually the last author. The student or researcher who conducted the majority
10International Committee of Medical Journal Editors (ICMJE), 2019. Defining the Role of Authors and Con-
tributors, http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-
of-authors-and-contributors.html#two.
work and wrote the majority of the paper is usually the first author. A paper that coincides with a
chapter or more of the student's thesis will usually list the student's name first, with other individuals
such as the research mentor on the author list.
ASSIGNMENT 8-7:
INDIVIDUAL ASSIGNMENT – AUTHOR ORDER IN YOUR DISCIPLINE
Identify three journal articles published by your research group and look at the individuals in the
author list. Using the author affiliation information given in the journal article and your per-
sonal knowledge about your research group, identify the roles that each author holds in research
group or in other collaborating research groups (e.g., undergraduate student, graduate student,
postdoctoral researcher, scientist, principal investigator, etc.).
Make an appointment with your research mentor to discuss the common practices of
authorship in your discipline. Using the journal articles that you have identified, discuss author
order on these examples in the context of common practice within your discipline.
8.6 PUBLISHING YOUR RESEARCH
The first question to consider is whether or not your research can be published in a journal. In
some cases, research is done that can’t be published openly or its publication has to be delayed
for some period of time (often referred to as an embargo period). These publication restrictions
usually only come up if you are working on classified research or working under a non-disclosure
agreement. As a student this is not a desirable circumstance because you need to publish your
work to build your resume. If you believe this may be the case with your research, it is important
to talk to your research mentor about what aspects of the work will be publishable, and how you
will be able to build your credentials so that you are ready for the job market when you complete
your degree.
For the majority of work conducted at university campuses, external presentation and
publication restrictions are seldom an issue. However, you and your research mentor may decide
to delay dissemination of your work because of a desire to patent. If this is the case, you will be
working with your campus research office to determine the patentability and submit a patent
application. They will help you determine the appropriate timing of the public disclosure of your
research (e.g., a conference presentation or journal article submission).
Aside from the cases above, journal publication is the primary outcome of the engineering
research that you will do (as well as conference proceedings publications in some fields). This
allows other researchers, and people interested in the field, to learn from and use your findings.
Adding to the body of knowledge in the open literature helps everyone to move the field forward,
and often enables future advances in technology and products that benefit society.
The point at which the research is ready for publication is a judgment call that your re-
search mentor will help you determine. But often, there is a desire to get your work published
sooner rather than later, especially if you are working in a fast moving and competitive field
of research. You would also prefer to have publications listed on your resume when you apply
for a job, so it is in your best interest to help get the research completed and the manuscript
submitted for publication. However, in the end, it will be up to your research mentor (or the
principal investigator of the project) to make the determination of when the research is ready
for dissemination.
Student Perspective
“My previous understanding about publication of research was that
it is important but not essential. I thought that getting published was not a
necessary condition for career advancement. I thought that other methods of
disseminating information like conferences, colloquia, and informal group
meetings and conversations with other institutions were of equal importance
to being published. I assumed that conferences were the best way of spreading
ideas, since those ideas are being told by the originator with the opportunity
for immediate questions and/or feedback. Conferences are important, but the
most important way of spreading information and ideas is through journal
publications.”
Publishers use similar review processes for evaluating manuscripts that they receive. The
schematic11 in Figure 8.1 gives the general flow of the decision-making process—both from
the journal’s perspective and the options you and your co-authors have once a decision has been
rendered. Your research mentor will likely provide guidance on both choosing an appropriate
journal to submit your work to, and the details of how to go about submission.
It is essential to keep in mind that for a coauthored paper everyone must agree on the final
version prior to it being submitted.
Although the review process can seem adversarial, the ultimate goal is to assure that the
research being published has been rigorously conducted, well documented, and written about in
a clear manner. Usually the comments that come back from a reviewer will help you to improve
the writing and clarify what you have done and the conclusions you have drawn from your
outcomes. Sometimes the review may identify additional work that should be completed prior
to publication, e.g., an additional control experiment or a validation run that was missing. Other
times the reviewer may ask for something that is out of scope of this manuscript or it may seem
that the reviewer does not understand the fundamentals of the work you are doing. When this
happens, it is possible to write a rebuttal to the editor asking that a particular review or portions
of the review be set aside. You will need to work with your coauthors to determine the best
11Adapted from: Barker, K., 2006. At the Bench: A Laboratory Navigator, Updated Edition, Cold Spring Harbor Laboratory
Press, Cold Spring Harbor, NY.
Figure 8.1: Flow of the review and decision-making process in taking a manuscript to journal
publication.
course of action once you receive your reviews. It is important to act quickly, though; often the
response to reviews must be submitted within a deadline period.
Student Perspective
“The process of getting published involves a fairly rigorous (when done
correctly, anyway) peer review process. The data is scrutinized, the ideas an-
alyzed, and conclusions examined before the information is ever released to
the scientific community. This generally prevents bad data and poor science
from being published, thus preventing wasted time and funds by other sci-
entists attempting to build on others’ work.”
Because the process is rigorous, depending on many people doing a variety of difficult
tasks, and involves multiple levels of communication, the publication process is time consuming.
Doing the research and writing the manuscript are certainly the majority of the work and time
spent, but completing the manuscript submission, responding to reviews, and making revisions
will take weeks or months. You need to be prepared for this additional work.
Student Perspective
“I was very surprised to find out how long it takes for a journal to ac-
cept and publish an article. After submitting an article for publication, it can
take months to hear back on whether your article was accepted or not. Then,
if your article does get accepted, it can take even longer for it to actually be
published. The longest wait I was able to find when looking through papers
this semester was around 13 months, which might have been the most sur-
prising thing I learned all semester. But, with a little more searching I found
that many journals are starting to post accepted articles online before their
actual publication in the journal. I think this is a step in the right direction,
as it will definitely help get published articles to the community faster…”
For nearly every journal you will need to do some writing beyond the manuscript itself.
The guide to authors published on the journal’s website will detail the additional items required
for submission. Often a cover letter to the editor is expected—the journal may prescribe the
contents, but it often is expected to include information about the importance of your findings,
how your work fits into the scope of the journal, the most appropriate associate editor to handle
your manuscript, and assurances that the manuscript is not under consideration at another
journal. The response-to-reviewers stage of the process will also require writing—usually this
includes a letter that only the editor sees, as well as a detailed written accounting of how you are
responding to each of the reviewers’ points that is usually seen by both the editors and reviewers.
Because the reviewers will also have access to this written response to the reviews, it is critical to
be respectful and use carefully crafted language when you are in disagreement with a reviewer’s
point.
ASSIGNMENT 8-8:
INDIVIDUAL ASSIGNMENT – WHERE TO PUBLISH
Identify potential journals where you might publish the research you are currently working on.
Begin by looking at the papers that you are currently citing, and journals that they are published
in. Also consider other key journals in your research area that you may be familiar with, or that
your research mentor has mentioned. Determine whether your topic area is a good fit for the aims
and scope of these journals. Identify the Impact Factor of these journals and other relevant
statistics provided, such as the timeframe between submission and publication. Look at each of
the journals’ web pages and identify the information/guide for authors.
After you have considered several journals, identify the top three candidates and summa-
rize why you think these journals would be a good fit for your research.
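If it helps to keep the comparison organized, a few lines of code can tabulate the candidates as you collect information. The sketch below is only illustrative: the journal names, Impact Factors, timeline figures, and fit notes are hypothetical placeholders to be replaced with what you actually find.

# Minimal sketch (Python) for organizing a journal comparison; all entries are hypothetical.
candidates = [
    {"journal": "Journal A", "impact_factor": 3.2, "months_to_publication": 8,
     "fit": "scope matches my experimental methods"},
    {"journal": "Journal B", "impact_factor": 5.1, "months_to_publication": 13,
     "fit": "broad readership, but my topic sits at the edge of its scope"},
    {"journal": "Journal C", "impact_factor": 2.4, "months_to_publication": 5,
     "fit": "specialized venue; several of my key references appear here"},
]

# Sort by Impact Factor as one possible criterion; fit and timeline matter at least as much.
for c in sorted(candidates, key=lambda c: c["impact_factor"], reverse=True):
    print(f'{c["journal"]}: IF {c["impact_factor"]}, '
          f'~{c["months_to_publication"]} months to publication ({c["fit"]})')

A spreadsheet works just as well; the point is simply to record the same fields for every candidate so that your top three choices are easy to justify.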
8.7 RESOURCES ON WRITTEN COMMUNICATION
Although this chapter touches on some key issues related to written communication, this is a
broad topic that entire courses and books are devoted to. For additional content, the following
references are suggested.
Humphrey, J. D. and Holmes, J. W., 2008. Style and ethics of communication in sci-
ence and engineering. Synthesis Lectures on Engineering, 3(1):1–140.
Northey, M. and Jewinski, J., 2012. Making Sense in Engineering and the Technical Sci-
ences: Making Sense in Engineering and the Technical Sciences: A Student’s Guide to Research
and Writing. OUP Canada.
Day, R. A. and Gastel, B., 2006. How to Write and Publish a Scientific Paper. Cambridge
University Press.
Silvia, P. J., 2007. How to Write a Lot: A Practical Guide to Productive Academic Writing.
American Psychological Association.
Sternberg, D., 2014. How to Complete and Survive a Doctoral Dissertation. St. Martin’s
Griffin.
Luey, B., 2002. Handbook for Academic Authors. Cambridge University Press.
CHAPTER 9
Safeguarding Your Personal Health and Happiness
9.1 THE CHALLENGES YOU MAY FACE IN GRADUATE SCHOOL
Life brings us different challenges at different times. Some of us live many years or even the bulk
of our lives without experiencing much difficulty in our personal relationships, work, or health.
For others, hardship is something that comes earlier, and potentially, more often. Regardless
of your level of experience with adversity, each experience sharpens your ability to overcome
obstacles and brings new opportunities for learning about yourself.
For the average person, the speed of everyday life has quickened to a frenzy. Many of us
are continuously digitally connected with seemingly endless new information being thrown at
us. IFLScience reports that “Ninety percent of the data in the world today has been created in
the last two years alone.1” Exposure to the bombardment of information leads to distraction and
mind wandering that can negatively impact our attention and our day-to-day fulfillment.2 On
top of all the usual demands that the average person has to deal with, graduate school amplifies
things further, with higher stress levels associated with education-related deadlines and
expectations.3
The next few paragraphs are going to dwell on the negatives of graduate school, but I’d
like to pause here and give advance notice that there are concrete steps you can take to mitigate
and even eliminate these issues. In fact, if you are already well aware of the issues, please feel
free to skip ahead to the next section!
Graduate study is different from undergraduate study and requires a student to make a
transition in their approach to education and scholarship. Your experience will become more
centered on your research activities, particularly as you progress in a Ph.D. program. It is also
common for a graduate student’s experience in their program to be punctuated with critical
deadlines and exams that have broader career implications with limited opportunities for a “do-
1IFLScience, “How Much Data Does The World Generate Every Minute?” http://www.iflscience.com/
technology/how-much-data-does-the-world-generate-every-minute/.
2Killingsworth, M. A. and Gilbert, D. T., 2010. A wandering mind is an unhappy mind. Science, 330.6006, 932–932.
3Hyun, J. K., Quinn, B. C., Madon T., and Lustig, S., 2006. Graduate student mental health: Needs assessment and
utilization of counseling services. Journal of College Student Development, 47(3):247–66.
over.” These exams can produce high levels of stress that have measurable biological impacts on
your body.4
Although over 80% of full-time Ph.D. students in engineering disciplines are funded by an
assistantship, these are not high-paying positions. Additionally, part-time
students may be part time simply because they can’t secure an assistantship. For these reasons,
financial pressures can be another source of stress for graduate students.
Because graduate school is generally attended by individuals in their prime child-bearing
years, graduate students often have partners, spouses, and children. In combination with a lower
income and a demanding program of scholarship, family responsibilities can be challenging to
juggle. Sometimes our families and friends may not be as supportive as we would like; often this
is rooted in a lack of understanding about what we are doing in graduate school and what we
are trying to achieve.
The graduate student experience often has similar traits to an apprenticeship. There can be
negatives that arise from having strong or even singular ties to one individual research advisor.5
Occasionally the student’s committee can help to buffer the situation, but strong committee
engagement is not always present for students. Furthermore, during the course of their graduate
studies, a student should undergo a transition to a junior colleague. For a variety of reasons this
transition may be arrested, and the student may be trapped in a role where they have little or
no say over their activities even though they have established significant expertise. As a result,
a low level of autonomy can be a major source of dissatisfaction.
9.1.1 GRADUATE STUDENT MENTAL HEALTH
Engineering Ph.D. students spend an average of 6.7 years in graduate school to complete
their degree.6 It is a long and intellectually strenuous process. Students sometimes experience
“slumps” that can lead to depression. Research published in Nature Biotechnology reported that
“…graduate students are more than six times as likely to experience depression and anxiety as
compared to the general population.7” There has been quite a bit of research on the topic of
depression in graduate school and some findings point to causes such as “…social isolation, the
often abstract nature of the work and feelings of inadequacy…8” If you are dealing with mental
health issues it is important to seek out help sooner rather than later.
4Lacey, K., Zaharia, M., Griffiths, J., Ravindran, A., Merali, Z., and Anisman, H., 2000. A prospective study of neu-
roendocrine and immune alterations associated with the stress of an oral academic examination among graduate students.
Psychoneuroendocrinology, 25(4):339–56.
5Martin, M. M., Goodboy, A. K., and Johnson, Z. D., 2015. When professors bully graduate students: Effects on student
interest, instructional dissent, and intentions to leave graduate education. Communication Education, 64(4):438–54.
6National Science Board, “Science and Engineering Indicators 2018,” https://www.nsf.gov/statistics/2018/
nsb20181/.
7Evans, B., Gastelum, B., and Weiss, V., 2018. Evidence for a mental health crisis in graduate education, Nature Biotech-
nology, 36, 282–284.
8Flaherty, C., 2018. Mental health crisis for grad students, Inside Higher Education.
Perfectionism can also be an issue that some struggle with, particularly because of the need
for validation and fear of criticism that can go along with it. Perfectionism can manifest itself
differently9: as a personal demand and expectation of oneself, as a perception that others expect
perfection in you, and as an expectation that others perform to unreasonably high standards. For
graduate studies, the issue of “self-oriented perfectionism” can cause a number of problems that
can interfere with progress. Certainly holding yourself to high standards is good, but when those
high standards require you to always portray an image of perfection to others, conceal problems
and struggles that you may be having, and avoid asking for help when you need it, they become
a detriment to your success. Some graduate students suffer from the “imposter
syndrome,” the feeling that someone made a mistake by letting them into graduate school and
at any moment they will be found out as a fraud. This can lead to the need to appear perfect
to others and to conceal any flaws or perceived inadequacies. A recent research study of graduate
students showed that “…avoiding outward displays of imperfection was the strongest and most
consistent predictor of academic problems….10” Whether independently or with the help of
counselling, if you consider yourself a perfectionist or identify with the imposter syndrome, you
need to accept the reality that everyone is imperfect. Ask for the help you need, so that you can
be successful.
These previous paragraphs may sound dismal, but it is important to recognize that if you
are experiencing issues you are not the only one.11 If you find at some point in your graduate
career that you are struggling—facing one or more of the above issues—it is important to seek
out help. Not only will you find that other graduate students experience similar issues, but there
are also people, strategies, and resources available to you, if you are willing to reach out for some
help. Because universities now better recognize the issues faced by students at the undergraduate
and graduate levels, there are often campus resources available. You may have university health
services that you can turn to, and you are likely to have a graduate school or dean of graduate
studies office on your campus that can help you to identify the resources that are available on
campus.
9.2 STEPS YOU CAN TAKE TO BE HEALTHIER AND HAPPIER
The challenging aspects of graduate school can have a detrimental effect on you as a person, but
you have control over more than you may think. In particular, you have the ability to organize
9Hewitt, P. L., Flett, G. L., Sherry, S. B., Habke, M., Parkin, M., Lam, R. W., McMurtry, B., Ediger, E., Fairlie, P., and
Stein, M. B., 2003. The interpersonal expression of perfection: Perfectionistic self-presentation and psychological distress.
Journal of Personality and Social Psychology, 84(6):1303.
10Cowie, M. E., Nealis, L. J., Sherry, S. B., Hewitt, P. L., and Flett, G. L., 2018. Perfectionism and academic difficulties
in graduate students: Testing incremental prediction and gender moderation. Personality and Individual Differences, 123, 223–
228.
11Evans, T. M., Bira, L., Gastelum, J. B., Weiss, L. T., and Vanderford, N. L., 2018. Evidence for a mental health crisis
in graduate education. Nat. Biotechnol., 36(3):282.
your schedule and set priorities so that you have the opportunity to be both healthy and happy
as you pursue your graduate studies.
For graduate students, just like other professionals, “Work-life balance is associated with
physical and mental well-being…”12 The next several sections identify some of these work-life
balance topics—such as getting exercise and eating healthy—and the strategies that can help
you achieve your goals—such as mindfulness practice and time management.
Some of what you need to do is simple. In Claire Potter’s essay outlining “The Ten
Commandments of Graduate School,”13 her second commandment, after “Thou shalt not rack up
unnecessary credit card debt,” advises that you not neglect your dental and health care. If you
move to a new location for graduate school, you will need to set up new doctors and dentists
for yourself. Determine what your insurance benefits are—at some institutions you will have
coverage—and find out who the providers are. Don’t wait until you have a crisis; get established
with a new doctor and a new dentist early. Additionally, there are numerous campus resources
that can help you to navigate specific health issues that you are already aware of, or may arise in
the future.
Part of your baseline for happiness in graduate school is the people you interact with
on a day-to-day basis. Think back to Chapter 2 on Finding the Right Research Position for You.
It has been shown that having a “…strong, supportive and positive mentoring relationships
between graduate students and their PI/advisors correlate significantly with less anxiety and
depression.14” Knowing that is helpful for making a good choice at the start, but even if you
find that you don’t have the kind of relationship you would have wished for with your research
mentor, it is not the end of hope. Focus on broadening your constellation of mentors to find the
support you need to succeed. On occasion, however, some graduate students find themselves in
a position that is negative and destructive, and a change of research mentor is needed. If the
relationship is one that you need to remove yourself from, it does not mean that you have to give
up your goals for achieving your Ph.D. Work with trusted colleagues on your campus to help
you identify a better path forward (for instance, many campuses have an Ombudsperson who
you can consult with confidentially).
Changing Course
I have had at least one case in my research group where in the process
of mentoring we discovered that the alignment between the student’s newly
refined goals and the research that was being conducted in my lab was not
as good as we once thought it was. In this case, I helped the student to
identify the new direction that they would like to take and assisted them in
12Evans, B., Gastelum, B., and Weiss, V., 2018. Evidence for a mental health crisis in graduate education, Nature Biotech-
nology, 36, 282–284.
13Potter, C., 2013. The ten commandments of graduate school, Chronicle of Higher Education.
14Flaherty, C., 2018. Mental health crisis for grad students, Inside Higher Education.
getting to where they wanted to be. Although this is a problem in the short
term for me, and a loss because I have invested both time and funding into
their training, I have found that in the long run it’s best for both the people
and the project to make the change.
I know that changing research groups was a difficult conversation for
my student to initiate with me. Part of what helped to make it work was their
willingness to help us complete our short-term goals on the research project
while we were looking for a better long-term path for the student.
ASSIGNMENT 9-1:
INDIVIDUAL ASSIGNMENT – IDENTIFYING SUPPORT RESOURCES
Nearly every graduate school in the U.S. has support resources that their graduate students can
take advantage of.15
This may include access to workshops on stress management, child care sharing groups,
individual mental health counselling sessions, support groups or boot camps on dissertation
writing, athletic facilities, non-credit classes offered by the union or continuing studies, just to
name a few. Investigate the resources available to you on your campus (use websites, graduate
student coordinators, and fellow students). Identify at least two resources that you would find
personally beneficial immediately, and two additional resources that you could envision benefit-
ing from in the future when you are at a different stage in your graduate career or experiencing
a specific struggle. Choose one of these resources that you can utilize this week and make an
appointment and/or schedule time for it in your calendar.
9.3 GETTING SLEEP
It’s easy to let the end of the term, a looming deadline, or an important degree program exam
get your schedule out of whack. But that’s actually the worst time to get less sleep. Having a
consistent sleep routine and sleep schedule are important for both your physical and mental
health.
Lacey et al. found that “…during the course of lengthy anticipatory periods preceding a
scheduled oral examination, graduate students reported more frequent malaise (e.g., headaches,
sore throat, fatigue) than did controls.” Furthermore, “…anticipation of an imminent oral aca-
15Bird, L. T. and Sheryl, A. Principles of Good Practice in Dealing with Students in Distress: Council of Graduate
Schools. Available from: http://cgsnet.org/principles-good-practice-dealing-students-distress-0.
demic examination was also associated with increased cortisol levels16”—a hormone that regu-
lates important bodily functions like metabolism and immune system response. Even pulling one
all-nighter or getting minimal sleep before an exam can be detrimental. Alterations of immune
function occur after only a modest loss of sleep.17
Good “sleep hygiene” begins with taking care of your body during the day and allowing
your brain to cool down before you turn off the light to go to sleep. You have likely heard much
of this advice before, but it is important to avoid caffeine in the later part of the day and alcohol
before bedtime. You also need to put away the screens and do a relaxing activity like meditation,
journaling, or reading (from paper) before you turn off the lights. Make sure your sleeping sit-
uation is comfortable, dark, and quiet (if not, an eye mask and earplugs can be helpful). Once
you have developed a routine stick with it. Some people have no trouble going to sleep at the
beginning of the night but can’t get a full night’s sleep because they wake before they intend to
and have difficulty going back to sleep. If this happens to you and your mind is racing, it may
be helpful to keep a notepad by your bedside to write down what you are thinking about so you
can let go of it for now and get back to sleep. You might also find that re-engaging with aspects
of your bedtime routine, like meditation or reading, may help you to return to restful sleep.
ASSIGNMENT 9-2:
INDIVIDUAL ASSIGNMENT – PERSONAL SLEEP LOG
You may not realize how irregular your sleep pattern is if you are not attuned to the issue. Place
a paper calendar and pencil next to your bedside. Each morning jot down the approximate time
you fell asleep the previous night, the time at which you woke up, the total hours of sleep, and
a quality rating of your sleep between 1 and 10. Do this during a representative week of the
semester. For the next week identify a target time at which you will go to bed every night and
try to maintain that nighttime routine while continuing to record data. At the end of the second
week identify whether your sleep quality improved. Use subsequent weeks to experiment with
other sleep improvement techniques, such as limiting exposure to TV/computer/phone screens
before bedtime, making modifications to your sleep environment to ensure that it is dark and
quiet, and avoiding caffeine during the second half of the day.18
16Lacey, K., Zaharia, M., Griffiths, J., Ravindran, A., Merali, Z., and Anisman, H., 2000. A prospective study of neu-
roendocrine and immune alterations associated with the stress of an oral academic examination among graduate students.
Psychoneuroendocrinology, 25(4):339–56.
17Irwin, M., Mcclintick, J., Costlow, C., Fortner, M., White, J., and Gillin, J. C., 1996. Partial night sleep deprivation
reduces natural killer and cellular immune responses in humans. FASEB 10, 643–653.
18Mayo Clinic, “Sleep tips: 6 steps to better sleep,” https://www.mayoclinic.org/healthy-lifestyle/adult-
health/in-depth/sleep/art-20048379.
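If you prefer to tally the log from Assignment 9-2 digitally at the end of each week, a few lines of code are enough. The sketch below assumes hypothetical entries of hours slept and a 1–10 quality rating for one week; substitute the numbers from your own paper log.

# Minimal sketch (Python) summarizing one week of a sleep log; the data here are made up.
week = [
    ("Mon", 6.5, 5), ("Tue", 7.0, 6), ("Wed", 5.5, 4), ("Thu", 8.0, 8),
    ("Fri", 6.0, 5), ("Sat", 9.0, 9), ("Sun", 7.5, 7),
]  # (day, hours of sleep, quality rating 1-10)

avg_hours = sum(hours for _, hours, _ in week) / len(week)
avg_quality = sum(quality for _, _, quality in week) / len(week)

print(f"Average sleep: {avg_hours:.1f} hours per night")
print(f"Average quality rating: {avg_quality:.1f} out of 10")
# Compare these averages from week to week as you try each sleep improvement technique.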
9.4 GETTING EXERCISE
It’s important to get exercise, particularly if your coursework and research keep you pinned at a
desk most of the day. In addition to being good for your body, exercise can help you reduce stress
and can be beneficial for your brain. Depending on what you decide to do for exercise, it also
has the potential to help you meet new people beyond your research group and graduate program
and develop new friendships. These can be a valuable support network for you.
If you have moved to a new location for your graduate studies, the types of exercise you
used to rely on may not be as readily available because of a different climate or limited access to facilities.
Take the opportunity to expand your horizons—try out new sports, identify clubs, and test
different activities. Many campuses have club sports, leagues, and even non-credit classes that
can help you to test out something new and develop skills. Minimally, most campuses have some
sort of gym/pool access available to students.
Think broadly about what might be available locally: running and/or biking trails, hik-
ing trails, ice skating, cross-country ski trails, downhill skiing, sailing, rowing, kayaking, rock
climbing, swimming, dancing, etc. Consider club and sport teams like baseball/softball, volley-
ball, lacrosse, kickball, and even quidditch.
For some people exercise is already a part of their daily routine, but for others it is something
they have to push themselves to do regularly. If you are in the second category, there are a
number of strategies that might work for you. Try scheduling exercise into your calendar just
like you would do for a course, sign up for a non-credit class that meets regularly, or find an
“exercise buddy” who will help you get out to exercise regularly.
If you’re starting a new exercise routine or sport, start out realistically and slowly ramp up
to a level that is healthy and sustainable. If you have concerns about how exercise may impact
a past injury or another health condition, consult with your physician before embarking on anything
strenuous.
ASSIGNMENT 9-3:
INDIVIDUAL ASSIGNMENT – CAMPUS SPORT AND RECREATION
RESOURCES
Access your institution’s website and determine what campus resources are available to you for
getting/staying fit. Search on terms like “recreational sports” and “club teams.” Identify several
that would be of interest to you and choose one to check out in person. Get a facility tour or
meet with someone who will provide you with an orientation.
9.5 EATING HEALTHY
One of the challenges of being a student, particularly if you live on or near campus, is finding
ways to consistently eat healthy. Pizza deliveries, fast food restaurants, and sandwich shops are
readily available, but these do not generally provide the kind of food that will help you stay
healthy and fuel your brain effectively. Unfortunately, many campuses meet the definition of a
food desert, making it difficult to obtain healthy, fresh food. In response, some campuses
have welcomed area farmers to hold small farmers’ markets on campus during the growing
season; this can be a great way to add more fresh produce to your diet.
In some areas of the country, community-supported agriculture (CSA) provides a way for
you to both support a local farm by buying directly from them and receive a box of fresh produce
weekly during the growing season. Depending on the particular CSA you join, you may be able
to choose the size of the box, the delivery frequency, and even the choice of items. Some farms have
drop-off locations on university campuses.
There may also be opportunities to buy fresh ingredients for healthier eating just a bus ride
away. Look into the area’s grocery stores and the public transportation options
that connect to campus.
If you are on a land-grant campus or a university with a large agricultural program you may
also have the opportunity to buy food from campus sources. At the University of Wisconsin-
Madison for instance, the Meat Sciences Laboratory has Bucky’s Butchery shop (an under-
graduate operated store that sells meat one day a week), the Babcock Dairy Store has award-
winning cheese (and phenomenal ice cream), and the UW Poultry Science Club sells turkeys
each Thanksgiving. Take a look at what your campus has to offer; you might be surprised at
what you find.
You may have to do a bit of internet sleuthing, but there are likely some good options to
help you eat healthy that are more easily accessible than you might have initially appreciated!
ASSIGNMENT 9-4:
INDIVIDUAL ASSIGNMENT – HEALTHY FOOD EXPLORATION
Identify a fresh vegetable available to you locally that you have never tried before or don’t nor-
mally eat. Use this vegetable as a search term in your favorite cookbook or in an online recipe
resource (e.g., www.allrecipes.com). Find a recipe that looks appealing, buy the ingredients,
and give it a try.
9.6 CREATIVE OUTLETS
The creative nature of engineers enables them to innovate, to discover, and to imagine what
might be possible.
Engineers often express their creativity in a number of ways outside of their engineering
practice. If you ask your engineering colleagues you may find that you are among musicians,
dancers, writers, painters, potters, woodworkers, and more. If you are one of these engineer
artists, be sure to allow time for your creativity both inside and outside of engineering. Not
only is it a good stress reliever, but you may also find that it is a helpful way to practice achieving
the state of immersion that you also need to be productive with your engineering data analysis, technical
writing, etc.
Your arts practice may also ultimately link with your engineering work in ways you may
not have initially anticipated.19 The more you focus on observing, the more you will see. Your
thinking, skills, and creativity will be enhanced not only by improving your math and language
skills, but also by improving your perceptual skills.20
For example, engagement with visual representation has been a crucial aspect of my professional
career. My training as a visual artist has provided essential skill building that has helped me
to understand and interpret the images obtained from a variety of microscopy instrumentation.
The ability to see detail and attend to subtle changes in images is critical to my engineering
research. I believe that these skills are enhanced by my artistic practice with painting. In my
teaching, I use visual representations of engineering elements and concepts. They are integrated
throughout the courses that I teach at the undergraduate and graduate levels to provide learners
with additional ways of interacting with complex concepts.
The Pause that Refreshes
I have always enjoyed art as a hobby and had taken painting and sculp-
ture classes prior to attending graduate school. During my Ph.D. program I
enrolled in a ceramics class offered through the Art Department (it was pretty
intense, so I chose the pass/fail option for the course even though I probably
could have earned a good grade). After taking the class I realized that being
able to immerse myself in art periodically was reducing my stress overall and
helping me to be more focused when I came back to my engineering studies
and research. I discovered that the campus student union also had non-credit
classes and you could have access to the studio, pottery wheels, and kilns by
paying a small fee even if you were not enrolled in a class. In the studio I
bumped into a fellow graduate student studying chemistry who enjoyed ce-
ramics as well. She and I began meeting regularly at the studio to work on
our pottery. It was a wonderful complement to my engineering work that I
continued throughout my Ph.D. program.
19Walesh, S. G., 2019. Can creating art make you a more effective engineer?, PE Magazine, National Society of Profes-
sional Engineers, pp. 24–27.
20Edwards, B., 2008. Drawing on the Artist Within, Simon and Schuster.
9.7 EMPLOYING MINDFULNESS PRACTICES
There is ample scientific research showing that regular mindfulness practices, such as meditation,
can have a positive impact on our bodies and change how our minds work.21
What are mindfulness practices? “An operational working definition of mindfulness is:
the awareness that emerges through paying attention on purpose, in the present moment, and
nonjudgmentally to the unfolding of experience moment by moment.22” One way to achieve
the qualities of attention and awareness, thought of as being characteristic of mindfulness, is the
practice of meditation. Historically, meditation has been connected to Hinduism and Buddhism,
but more recently it has been Westernized and converted into a secular practice.
Mindfulness practices involve two basic components: “The first component involves the
self-regulation of attention so that it is maintained on immediate experience, thereby allow-
ing for increased recognition of mental events in the present moment. The second component
involves adopting a particular orientation toward one’s experiences in the present moment, an
orientation that is characterized by curiosity, openness, and acceptance.23” Mindfulness prac-
tices are broader than just meditation. Other mindfulness practices you might be familiar with
include yoga and Tai Chi.
There is mounting scientific evidence that regular mindfulness practice such as meditation
can change your brain and your body. Studies that ask participants to employ daily meditation
show that individuals can manage chronic pain, reduce stress hormones, and improve their re-
silience. There is a growing literature showing that activities like Tai Chi, Qigong, yoga, and
meditation can alter inflammatory gene expression and change cellular markers of inflamma-
tion, even after just 6–8 weeks of training and practice.24
You should not feel that a major lifestyle change is required to achieve some benefit. Small
amounts of regular meditation can also be helpful to reduce stress and improve your capacity for
creative thinking.25 One simple mindfulness practice is a focus on the breath. For example:
Sitting in a comfortable position, you close your eyes and notice your breath. It is
sometimes easier to focus by using a count with your breathing. Breathe in counting to
one, and breathe out to one; breathe in counting to two and breathe out to two; breathe
in counting to three and breathe out to three; and so on. There will be a point where
you find the count length to be uncomfortable, so then reverse your count until you
21Davidson, R. J. and Begley, S., 2012. The Emotional Life of Your Brain: How Its Unique Patterns Affect the Way You Think,
Feel, and Live—and How You Can Change Them, Plume, New York.
22Kabat-Zinn, J., 2003. Mindfulness-based interventions in context: past, present, and future. Clinical Psychology: Science
and Practice, 10(2), 144–156.
23Bishop, S. R., Lau, M., Shapiro, S., Carlson, L., Anderson, N. D., Carmody, J., Segal, Z. V., et al., 2004. Mindfulness:
A proposed operational definition. Clinical Psychology: Science and Practice, 11(3):230–241.
24For example: Ader, R., Cohen, N., and Felten, D. L., 1987. Brain, behavior, and immunity. Brain, Behavior, and Im-
munity 1(1):1–6. And Bower, J. E. and Irwin, M. R., 2016. Mind—body therapies and control of inflammatory biology: a
descriptive review. Brain, Behavior, and Immunity, 51, 1–11.
25Schootstra, E., Deichmann, D., and Dolgova, E., 2017. Can ten minutes of meditation make you more creative? Harvard
Business Review.
find a comfortable count length for your breath and continue breathing in and out
at this comfortable rate. As you continue to breathe in and out, things will pop into
your mind and that’s OK. Just take note of them, let each idea pass without dwelling on it,
and then refocus your mind on your breath. When you feel you are ready to stop,
simply open your eyes.
This sort of practice allows you to enhance your focus, select what you choose to focus
on in the moment, and build the ability to notice your thoughts objectively without further
elaboration.
There are a wide range of skills and techniques that you can use to build mindfulness into
your everyday life. I highly recommend that students consider getting training in one or more
mindfulness techniques. There are often opportunities available on or near university campuses,
as well as a plethora of online resources26 and apps that you can use.27
ASSIGNMENT 9-5:
INDIVIDUAL ASSIGNMENT – MINDFUL RESET, MINDFUL RECHARGE,
MINDFUL REFRESH
Use brief mindfulness activities during your day to reduce your stress and increase your produc-
tivity.
Make a copy of the activity chart (Table 9.1). Cut along the lines and place the pieces in
an envelope. The next time you are feeling stuck, or stressed, or just need a break, pick an activity
card at random from the envelope and spend some time practicing your mindfulness skills with
the activity described.
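If you would rather not cut up a paper copy, the same idea can be scripted: list short labels for the cards in Table 9.1 and draw one at random whenever you need a break. This is only a sketch, and the abbreviated labels below stand in for the full activity descriptions in the table.

# Minimal sketch (Python) of a digital "envelope" of mindfulness activity cards.
import random

cards = [
    "Meditate: 10 minutes on the present moment",
    "Sense: translate music into a drawing",
    "Draw: blind contour drawing of a familiar object",
    "Journal: write about advancing the common good",
    "Exercise: walk 5+ flights of stairs, counting each step",
    "Connect: talk to someone you have not spoken to in a while",
    "Compose: create a theme song for your day",
]

print(random.choice(cards))  # Draw one card at random, just like pulling from the envelope.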
9.8 MAKING TIME FOR IT ALL
In Chapter 5 we discussed strategies for project management as it relates to your research. For
some of us it might be helpful to employ project management tools with aspects of our personal
lives too.
26For example:
- Center for Healthy Minds, University of Wisconsin–Madison (resources on cultivating wellbeing and relieving suffering through a scientific understanding of the mind, including some guided practices): http://centerhealthyminds.org/join-the-movement/workplace
- Center for Advanced Studies in Business, University of Wisconsin–Madison (guided audio practices): http://www.uwcultivatingwellbeing.com/guided-audio-practices
- Inner Sense Consulting, Bev Hays (guided mindfulness meditations): http://www.innersenseconsulting.com/meditations.html.
27Two of my current favorites:
- Simply Being
- Stop, Breathe and Think
Table 9.1: Mindful Reset, Mindful Recharge, Mindful Refresh Activities

Meditate: Meditate for 10 minutes on the present moment. Find a comfortable position, sitting upright if possible, and bring yourself into stillness. Pay close attention to your breath. Note, but don't dwell on, the thoughts, emotions, and sensations that occur.
Sense: Translate music into drawing. Listen to a piece of music that you enjoy. Translate the feeling that music gives you into a drawing, abstract or realism.
Draw: Find an object in your environment that is familiar to you. While looking at the object, do a line drawing of the object while NOT looking at the paper. Imagine that your pencil or pen is actually touching the contour of the object as you draw.
Journal: Think about ways in which you are committed to the common good. Choose one of these and write about what you will do in the future to advance that common good.
Exercise: Walk 5+ flights of stairs. Count each step. (Pay attention to your body and stop if you feel pain.)
Meditate: Close your eyes, take slow deep breaths, and count 20 of them. If your mind wanders, take note of what you were thinking about, and then bring yourself gently back to focus on your breath.
Journal: Write about 3 things that give you a genuine smile.
Connect: Spend some time talking to someone you have not spoken to in a while.
Journal: Write down 20 things that you are grateful for.
Draw: Fill an entire piece of paper with repetitions of a shape or pattern.
Journal: What was the best thing that happened to you today? Take a few minutes to write about it.
Sense: Eat a healthy snack while giving the experience your full attention. Focus on the taste, texture, and aroma.
Connect: Send a brief note/message of appreciation to a colleague, friend, or family member.
Sense: Take a walk in nature. Find a small space or a large expanse where you can observe the flora and fauna around you.
Journal: When did you feel the most proud of yourself today? Take a few minutes to write about it.
Meditate: Take a mindful walk in a safe place. Walk a bit slower than your usual pace. Focus your awareness on your movements, balance, and the rhythm of your steps. If your mind wanders, take note of what you were thinking about, and then bring yourself gently back to focusing on your walk.
Compose: Create a theme song for your day. Consider which words or topics repeat as you think about the positive ways in which you would like your day to progress. Use these words or topics to create a chorus for your theme song.
Exercise: Do sitting isometric exercises. For instance, while sitting and keeping your knees bent at a right angle, pick up one foot off the floor for a count of 10. Switch and lift the other foot for a count of 10. Pay attention to your body during each exercise and stop if you feel pain.
Compose: Translate feeling into rhythm. Compose a short rhythm expressing a feeling you have chosen. Express it with finger snapping, toe tapping, tongue clicking, etc.
Journal: Think of a time when you were resilient in the face of adversity, small or large. Focus on that resiliency and write about how you can use this resiliency in the future.
Sense: Find a pleasant scent in your environment. Close your eyes and inhale deeply. Concentrate on the sensations and thoughts that spring to mind.
Sense: Find something that feels cool or warm to the touch. Place your hands on the object. Close your eyes and engage in the moment.
Draw: Do a sketch of something in your field of view. Focus on the energy or mood of what you are drawing.
Exercise: Do yoga stretches for 5 minutes. Three poses. (Pay attention to your body and stop if you feel pain.)
Different people have different challenges when it comes to making time for it all. Some
of us have too little “me time,” while others have too much “play time.” All of
us need a balance. You need to devote substantial time to making progress in your graduate
education, and you will be able to more effectively do so, if you have a healthy mind and body.
It is also important to resist making comparisons between yourself and other students in your
degree program. Each person’s path is a unique one. Your goal is to find the most effective and
efficient path for yourself, one that allows you to achieve your goals while remaining a well-balanced
individual over time.
ASSIGNMENT 9-6:
INDIVIDUAL ASSIGNMENT – PLANNING YOUR WEEK
In reality, every week of your life will be different from the next and you will have to be flexible
as a deadline or special event approaches. However, you can develop some principles that you
would like to follow in managing your time on a regular basis. Begin by creating a list of the
major activities that you undertake regularly: coursework (if you are taking classes); research
activities, including writing; personal time, including exercise and other activities that make
you feel relaxed or energized; social/family time; and sleep. Now set out goals for the number
of hours a week you feel you should devote to each activity. Schedule how this should look on a
weekly calendar for the “average” week (Table 9.2).
Now plan your actual calendar for the coming week. Live your life, do your work, and at
the end of each day jot down the number of hours you spent on each activity category. At the end
of the week total up how you spent your time and compare it to your goals. You may find that
you have not set your goals realistically, or you may find that this week had unexpected events.
Respond to this by balancing out how you spend your time in the following week. Continue this
planning activity for four weeks.
This process is designed to help you find a routine that includes all of the activities you
should spend time on, and a sufficient amount of time on them to make progress and meet your
deadlines. Be sure you have not eliminated sleep, personal, and social/family time to make time
for everything else—without balance you will be less productive overall.
Table 9.2: Weekly calendar used to set goals for the number of hours planned for each activity
and actual time spent on each activity daily

Columns (each activity category split into a Planned column and an Actual column): Research; Course/Teaching; Communication/Networking/Service; Relax/Exercise/Recharge Activities; Family/Personal Responsibilities; Sleep.
Rows: Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday, Total.
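At the end of the week, the comparison asked for in Assignment 9-6 amounts to a simple subtraction for each category in Table 9.2. The sketch below uses hypothetical planned and actual weekly hours; replace them with your own totals.

# Minimal sketch (Python) comparing planned vs. actual weekly hours; the numbers are hypothetical.
planned = {"Research": 25, "Course/Teaching": 15, "Communication/Networking/Service": 5,
           "Relax/Exercise/Recharge": 10, "Family/Personal": 20, "Sleep": 52}
actual = {"Research": 20, "Course/Teaching": 18, "Communication/Networking/Service": 3,
          "Relax/Exercise/Recharge": 6, "Family/Personal": 22, "Sleep": 49}

for activity, goal in planned.items():
    difference = actual[activity] - goal
    print(f"{activity}: planned {goal} h, actual {actual[activity]} h ({difference:+d} h)")
# Use the signed differences to rebalance how you schedule the following week.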
Afterword
This book is based on my experiences as a research mentor, graduate advisor, instructor,
and administrator in the Graduate School of the University of Wisconsin–Madison. I am grate-
ful to all of the undergraduate and graduate research assistants who have worked with me over
the years, not only for their research contributions, but also for how they helped me to develop
and learn as a mentor. My teaching in the Engineering Physics undergraduate program and
the phenomenal undergraduate honors students I have worked with have helped me to better
understand what novice researchers want and need to know as they begin their research career.
I also had the pleasure of serving in several different administrative roles in the University of
Wisconsin–Madison Graduate School for five years, where I provided leadership for all aspects
of the graduate student experience, including admissions, academic services, academic analysis,
funding, professional development, and diversity. I learned an immense amount from my col-
leagues in the Graduate School and my faculty and staff colleagues across the University who
devote time and energy to graduate education. Those experiences and interactions also allowed
me to see graduate education from a broader perspective, beyond that of the graduate programs
in the College of Engineering where I serve as a graduate advisor and research mentor. This
book draws from this range of experiences and offers guidance and advice to those entering
engineering research as an undergraduate or a new graduate student.
Author’s Biography
WENDY C. CRONE
Wendy C. Crone is the Karen Thompson Medhi Professor in
the Department of Engineering Physics, with affiliate faculty
appointments in the Department of Biomedical Engineering
and the Department of Materials Science and Engineering,
and she holds the honor of Discovery Fellow with the Wis-
consin Institute for Discovery at the University of Wisconsin–
Madison.
Her research is in the area of solid mechanics, and many
of the topics she has investigated are connected with nanotech-
nology and biotechnology. She has applied her technical ex-
pertise to improving fundamental understanding of mechani-
cal response of materials, enhancing material behavior through surface modification and nanos-
tructuring, exploring the interplay between cells and the mechanics of their surroundings, and
developing new material applications and medical devices. Her research has been funded by the
National Institutes of Health, National Science Foundation, Department of Energy, Air Force
Office of Scientific Research, and Whitaker Foundation.
She teaches courses in the areas of engineering mechanics, engineering physics, and in-
formal science education. Over the last two decades, Prof. Crone has trained over two dozen
graduate students and postdocs in engineering mechanics, materials science, biomedical engi-
neering, and engineering education. Her former students hold positions in academia, national
laboratories, and industry.
Prof. Crone has received awards for research, teaching, and mentoring. In addition to
numerous peer reviewed journal publications, dozens of explanatory education products, and
four patents, she is the author of the book Survive and Thrive: A Guide for Untenured Faculty.
She has also served in several leadership roles over the course of her career, including Interim
Dean and Associate Dean of the Graduate School at UW-Madison and President of the Society
for Experimental Mechanics.
Index
abstracts, 89, 162, 169
authorship, 178
corresponding author, 78
bias
and research, 60
working to avoid, 60
careers, research, 3, 10, 46, 69, 100
citation, 78, 92
and plagiarism, 100
citation management, 93
crediting others, 99
formats for, 92
collaboration, 48
interdependencies, 56
with diverse team, 58
communication, oral, 147
and jargon, 148
engineering outreach, 152
flash talk, 159
informal research interactions, 147
poster presentations, 154
quad chart, 160
research talk, 156
with nonspecialists, 148
communication, written, 165
abstracts. See abstracts
email, 13
for nonexpert audiences, 165
journal articles. See journal articles
persuasive writing, 173
popular press, 166
press release, 166
proposals. See proposals
providing feedback to others, 174
publishing, 180
revising and editing, 173
technical reports, 171
technical writing, 167
thesis/dissertation.
See thesis/dissertation
writers block, 168
writing workshops, 174
conferences, research
and networking, 67
corresponding author, 78
data management plan, 141
data, research, 141
backup, 142
data management plan, 141
file naming, 144
manipulation, 145
organization, 143
original archival copy, 142
security, 142
storage, 142
version control, 144
dissertation. See thesis/dissertation
diversity
and research, 58
doctor of philosophy, Ph.D. See graduate school
National Academy of Engineering, 7
ethics, 2, 126
data manipulation, 145
D.I.S.O.R.D.E.R. framework, 129
misconduct, 127
negligence, 127
plagiarism. See plagiarism
Evaluation of Research Progress and
Researcher Development rubric, 48,
54
expertise, developing, 41, 43
evaluation of progress, 48
Individual Development Plan, 46
SMART Goal Strategy, 46
tracking progress of, 45
fellowships, 29
application, 32
finances
funding, 29
student loans, 31, 114
funding of research, 28
proposals, 108
global competency, 62
International Experience Indicators
rubric, 64–66
graduate school, 4, 19
and mental health, 186
and personal health, 185
application, 20, 22
application timeline, 21
committee, 114, 121
fit, 19, 20, 24
funding for, 24, 29
getting accepted, 27
visiting prospective programs, 24
grand challenges for engineering, 7
health and wellbeing
challenges, 185
creative outlets, 192
eating healthy, 192
exercise, 191
mental health, 185
mindfulness practices, 194
sleep, 189
strategies to support, 188
support resources, 189
time management, 195
identity as a researcher, 43
impact factor, 73
Individual Development Plan (IDP).
See expertise, developing
International Experience Indicators rubric,
64–66
intuition, developing, 43
journal articles, 73, 74
abstract, 77
acknowledgments, 80
authorship, 178
citation management. See citation
citation. See citation
citing, 76
evaluating, 97
guide to authors, 183
journal club, 83
journal review process, 181
literature search, 87
organization of, 79
publishing, 180
reading, 75, 77
reading critically, 84
review articles, 89
reviewing, 95
submission, 181
supplemental materials, 81
types of, 74
writing, 172
journal club, 82
laboratory notebook. See research notebook
literature, 73, 74
citation, 92
citation. See citation
dissertations, 91
government reports, 91
indexing databases, 87
patents, 90
review, 95
search, 87
search strategies, 89
see also journal articles, 73
manuscript. See journal articles
master’s degree, MS. See graduate school
meetings
leading, 57
preparation for, 37
scheduling, 121
mental health, 186
mentor(s), 35
and time, 28, 56
choosing, 10
constellation of mentors, 11, 69, 188
expectations, 40
fit, 10
identifying, 8, 14
meetings with, 37
mentoring up, 36
multiple perspectives, 69
non-faculty, 1
peer mentoring, 72
mentoring. See mentor(s)
mindfulness practices, 194
network, professional, 67
and conferences, 67
peer mentoring group, 72
strategies for building, 68
networking. See network, professional
peer mentoring group, 72
plagiarism, 100, 128
poster presentations, 154
principal investigator (PI), 35
project management, 113
resources, 114
timeline, 118
tools, 118
proposals, 106
hypothesis, 108
research plan, 107
submission, 109
writing, 172
research assistantships, 30
research group, 32, 35
collaboration, 56
data practices, 142
inclusive strategies for, 61
interactions, 56
meetings, 32
research notebook, 135
and project management, 137
bound paper notebook, 137
contents, 135, 136
electronic notebook, 139
evaluation, 139
ownership, 135
research project
changing, 10
choosing, 8
fit, 8, 9
getting started, 110
identifying, 8
making progress, 112
navigating obstacles, 122
scheduling time to work, 110
staying motivated, 112
research, engineering, 1, 2
and global competency, 62
and scientific method, 103
and undergraduates, 15
careers, 3
data. See data, research
documentation. See research notebook
ethics. See ethics
funding of, 28
misconduct, 127
project management. See project
management
publishing. See journal articles
research group. See research group
safety, 132
societal needs, 129
teams, 58
responsible conduct of research. See ethics
safety, 132
scientific method, 103
self authorship, 44
and global competency, 63
skills, developing, 43
Evaluation of Research Progress and
Researcher Development rubric,
49–55
tracking of, 48
societal needs, engineering and, 7
software version control, 144
summer undergraduate research programs,
17
teaching assistantships, 30
thesis/dissertation
and literature search, 91
defense talk, 156, 158
proposal, 106
writing, 172
undergraduate research experience, 14
identifying opportunities, 15
professionalism, 18
summer programs, 18
writers block, 168
writing workshops, 174, 177
providing feedback in, 174
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=6813191.pdf&bkn=6813190&pdfType=book
|
Essentials of Applied Mathematics
for Scientists and Engineers
Copyright © 2007 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations
in printed reviews, without the prior permission of the publisher.
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
www.morganclaypool.com
ISBN: 1598291866 paperback
ISBN: 9781598291865 paperback
ISBN: 1598291874 ebook
ISBN: 9781598291872 ebook
DOI 10.2200/S00082ED1V01Y200612ENG003
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING SEQUENCE IN SERIES #3
Lecture #3
Series ISSN: 1559-811X print
Series ISSN: 1559-8128 electronic
First Edition
10 9 8 7 6 5 4 3 2 1
Essentials of Applied Mathematics
for Scientists and Engineers
Robert G. Watts
Tulane University
SYNTHESIS LECTURES ON ENGINEERING SEQUENCE IN SERIES #3
M&C Morgan & Claypool Publishers
ABSTRACT
This is a book about linear partial differential equations that are common in engineering and
the physical sciences. It will be useful to graduate students and advanced undergraduates in
all engineering fields as well as students of physics, chemistry, geophysics and other physical
sciences and professional engineers who wish to learn about how advanced mathematics can
be used in their professions. The reader will learn about applications to heat transfer, fluid
flow, and mechanical vibrations. The book is written in such a way that solution methods and
application to physical problems are emphasized. There are many examples presented in detail
and fully explained in their relation to the real world. References to suggested further reading
are included. The topics that are covered include classical separation of variables and orthogonal
functions, Laplace transforms, complex variables and Sturm-Liouville transforms.
KEYWORDS
Engineering mathematics, separation of variables, orthogonal functions, Laplace transforms,
complex variables and Sturm-Liouville transforms, differential equations.
Contents

1. Partial Differential Equations in Engineering
   1.1 Introductory Comments
   1.2 Fundamental Concepts
       Problems
   1.3 The Heat Conduction (or Diffusion) Equation
       1.3.1 Rectangular Cartesian Coordinates
       1.3.2 Cylindrical Coordinates
       1.3.3 Spherical Coordinates
       The Laplacian Operator
       1.3.4 Boundary Conditions
   1.4 The Vibrating String
       1.4.1 Boundary Conditions
   1.5 Vibrating Membrane
   1.6 Longitudinal Displacements of an Elastic Bar
   Further Reading
2. The Fourier Method: Separation of Variables
   2.1 Heat Conduction
       2.1.1 Scales and Dimensionless Variables
       2.1.2 Separation of Variables
       2.1.3 Superposition
       2.1.4 Orthogonality
       2.1.5 Lessons
       Problems
       2.1.6 Scales and Dimensionless Variables
       2.1.7 Separation of Variables
       2.1.8 Choosing the Sign of the Separation Constant
       2.1.9 Superposition
       2.1.10 Orthogonality
       2.1.11 Lessons
       2.1.12 Scales and Dimensionless Variables
       2.1.13 Getting to One Nonhomogeneous Condition
       2.1.14 Separation of Variables
       2.1.15 Choosing the Sign of the Separation Constant
       2.1.16 Superposition
       2.1.17 Orthogonality
       2.1.18 Lessons
       2.1.19 Scales and Dimensionless Variables
       2.1.20 Relocating the Nonhomogeneity
       2.1.21 Separating Variables
       2.1.22 Superposition
       2.1.23 Orthogonality
       2.1.24 Lessons
       Problems
   2.2 Vibrations
       2.2.1 Scales and Dimensionless Variables
       2.2.2 Separation of Variables
       2.2.3 Orthogonality
       2.2.4 Lessons
       Problems
   Further Reading
3. Orthogonal Sets of Functions
   3.1 Vectors
       3.1.1 Orthogonality of Vectors
       3.1.2 Orthonormal Sets of Vectors
   3.2 Functions
       3.2.1 Orthonormal Sets of Functions and Fourier Series
       3.2.2 Best Approximation
       3.2.3 Convergence of Fourier Series
       3.2.4 Examples of Fourier Series
       Problems
   3.3 Sturm–Liouville Problems: Orthogonal Functions
       3.3.1 Orthogonality of Eigenfunctions
       Problems
   Further Reading
4. Series Solutions of Ordinary Differential Equations
   4.1 General Series Solutions
       4.1.1 Definitions
       4.1.2 Ordinary Points and Series Solutions
       4.1.3 Lessons: Finding Series Solutions for Differential Equations with Ordinary Points
       Problems
       4.1.4 Regular Singular Points and the Method of Frobenius
       4.1.5 Lessons: Finding Series Solutions for Differential Equations with Regular Singular Points
       4.1.6 Logarithms and Second Solutions
       Problems
   4.2 Bessel Functions
       4.2.1 Solutions of Bessel's Equation
       Here are the Rules
       4.2.2 Fourier–Bessel Series
       Problems
   4.3 Legendre Functions
   4.4 Associated Legendre Functions
       Problems
   Further Reading
5. Solutions Using Fourier Series and Integrals
   5.1 Conduction (or Diffusion) Problems
       5.1.1 Time-Dependent Boundary Conditions
   5.2 Vibrations Problems
       Problems
   5.3 Fourier Integrals
       Problem
   Further Reading
6. Integral Transforms: The Laplace Transform
   6.1 The Laplace Transform
   6.2 Some Important Transforms
       6.2.1 Exponentials
       6.2.2 Shifting in the s-domain
       6.2.3 Shifting in the Time Domain
       6.2.4 Sine and Cosine
       6.2.5 Hyperbolic Functions
       6.2.6 Powers of t: t^m
       6.2.7 Heaviside Step
       6.2.8 The Dirac Delta Function
       6.2.9 Transforms of Derivatives
       6.2.10 Laplace Transforms of Integrals
       6.2.11 Derivatives of Transforms
   6.3 Linear Ordinary Differential Equations with Constant Coefficients
   6.4 Some Important Theorems
       6.4.1 Initial Value Theorem
       6.4.2 Final Value Theorem
       6.4.3 Convolution
   6.5 Partial Fractions
       6.5.1 Nonrepeating Roots
       6.5.2 Repeated Roots
       6.5.3 Quadratic Factors: Complex Roots
       Problems
   Further Reading
7. Complex Variables and the Laplace Inversion Integral
   7.1 Basic Properties
       7.1.1 Limits and Differentiation of Complex Variables: Analytic Functions
       Integrals
       7.1.2 The Cauchy Integral Formula
       Problems
8. Solutions with Laplace Transforms
   8.1 Mechanical Vibrations
       Problems
   8.2 Diffusion or Conduction Problems
       Problems
   8.3 Duhamel's Theorem
       Problems
   Further Reading
9. Sturm–Liouville Transforms
   9.1 A Preliminary Example: Fourier Sine Transform
   9.2 Generalization: The Sturm–Liouville Transform: Theory
   9.3 The Inverse Transform
       Problems
   Further Reading
10. Introduction to Perturbation Methods
    10.1 Examples from Algebra
        10.1.1 Regular Perturbation
        10.1.2 Singular Perturbation
Appendix A: The Roots of Certain Transcendental Equations
Appendix B
Author Biography
CHAPTER 1
Partial Differential Equations in Engineering
1.1 INTRODUCTORY COMMENTS
This book covers the material presented in a course in applied mathematics that is required
for first-year graduate students in the departments of Chemical and Mechanical Engineering
at Tulane University. A great deal of material is presented, covering boundary value problems,
complex variables, and Fourier transforms. Therefore the depth of coverage is not as extensive
as in many books. Our intent in the course is to introduce students to methods of solving
linear partial differential equations. Subsequent courses such as conduction, solid mechanics,
and fracture mechanics then provide necessary depth.
The reader will note some similarity to the three books, Fourier Series and Boundary
Value Problems, Complex Variables and Applications, and Operational Mathematics, originally by
R. V. Churchill. The first of these has been recently updated by James Ward Brown. The
current author greatly admires these works, and studied them during his own tenure as a
graduate student. The present book is more concise and leaves out some of the proofs in an
attempt to present more material in a way that is still useful and is acceptable for engineering
students.
First we review a few concepts about differential equations in general.
1.2 FUNDAMENTAL CONCEPTS
An ordinary differential equation expresses a dependent variable, say u, as a function of one
independent variable, say x, and its derivatives. The order of the differential equation is given
by the order of the highest derivative of the dependent variable. A boundary value problem
consists of a differential equation that is defined for a given range of the independent variable
(domain) along with conditions on the boundary of the domain. In order for the boundary value
problem to have a unique solution the number of boundary conditions must equal the order of
the differential equation. If the differential equation and the boundary conditions contain only
terms of first degree in u and its derivatives the problem is linear. Otherwise it is nonlinear.
A partial differential equation expresses a dependent variable, say u, as a function of more than one independent variable, say x, y, and z. Partial derivatives are normally written as ∂u/∂x. This is the first-order derivative of the dependent variable u with respect to the independent variable x. Sometimes we will use the notation u_x or, when the derivative is an ordinary derivative, we use u′. Higher order derivatives are written as ∂²u/∂x² or u_xx. The order of the differential
equation now depends on the orders of the derivatives of the dependent variables in terms of
each of the independent variables. For example, it may be of order m for the x variable and of
order n for the y variable. A boundary value problem consists of a partial differential equation
defined on a domain in the space of the independent variables, for example the x, y, z space,
along with conditions on the boundary. Once again, if the partial differential equation and the
boundary conditions contain only terms of first degree in u and its derivatives the problem is
linear. Otherwise it is nonlinear.
A differential equation or a boundary condition is homogeneous if it contains only terms
involving the dependent variable.
Examples
Consider the ordinary differential equation
a(x)u″ + b(x)u = c(x),   0 < x < A    (1.1)

Two boundary conditions are required because the order of the equation is 2. Suppose

u(0) = 0   and   u(A) = 1.    (1.2)
The problem is linear. If c (x) is not zero the differential equation is nonhomogeneous. The first
boundary condition is homogeneous, but the second boundary condition is nonhomogeneous.
Next consider the ordinary differential equation
a(u)u″ + b(x)u = c,   0 < x < A    (1.3)

Again two boundary conditions are required. Regardless of the forms of the boundary conditions, the problem is nonlinear because the first term in the differential equation is not of first degree in u and u″, since the leading coefficient is a function of u. It is homogeneous only if c = 0.
Now consider the following three partial differential equations:
u_x + u_xx + u_xy = 1    (1.4)
u_xx + u_yy + u_zz = 1    (1.5)
uu_x + u_yy = 0    (1.6)
The first equation is linear and nonhomogeneous. The third term is a mixed partial derivative.
Since it is of second order in x two boundary conditions are necessary on x. It is first order
in y, so that only one boundary condition is required on y. The second equation is linear and
homogeneous and is of second order in all three variables. The third equation is nonlinear
because the first term is not of first degree in u and u x. It is of order 1 in x and order 2 in y.
In this book we consider only linear equations. We will now derive the partial differential
equations that describe some of the physical phenomena that are common in engineering
science.
Problems
Tell whether the following are linear or nonlinear and tell the order in each of the independent
variables:
1. u″ + xu′ + u² = 0
2. tan(y)u_y + u_yy = 0
3. tan(u)u_y + 3u = 0
4. u_yyy + u_yx + u = 0
1.3 THE HEAT CONDUCTION (OR DIFFUSION) EQUATION
1.3.1 Rectangular Cartesian Coordinates
The conduction of heat is only one example of the diffusion equation. There are many other
important problems involving the diffusion of one substance in another. One example is the
diffusion of one gas into another if both gases are motionless on the macroscopic level (no
convection). The diffusion of heat in a motionless material is governed by Fourier’s law which
states that heat is conducted per unit area in the negative direction of the temperature gradient
in the (vector) direction n in the amount ∂u/∂n, that is
q^n = −k ∂u/∂n    (1.7)

where q^n denotes the heat flux in the n direction (not the nth power). In this equation u is the
local temperature and k is the thermal conductivity of the material. Alternatively u could be the
partial fraction of a diffusing material in a host material and k the diffusivity of the diffusing
material relative to the host material.
Consider the diffusion of heat in two dimensions in rectangular Cartesian coordinates.
Fig. 1.1 shows an element of the material of dimension Δx by Δy by Δz. The material has a specific heat c and a density ρ. Heat is generated in the material at a rate q per unit volume. Performing a heat balance on the element, the time (t) rate of change of thermal energy within the element, ρc ΔxΔyΔz ∂u/∂t, is equal to the rate of heat generated within the element
FIGURE 1.1: An element in three dimensional rectangular Cartesian coordinates
q‴ΔxΔyΔz minus the rate at which heat is conducted out of the material. The flux of heat conducted into the element at the x face is denoted by q^x while at the y face it is denoted by q^y. At x + Δx the heat flux (i.e., per unit area) leaving the element in the x direction is q^x + Δq^x while at y + Δy the heat flux leaving in the y direction is q^y + Δq^y. Similarly for q^z. Expanding the latter three terms in Taylor series, we find that q^x + Δq^x = q^x + (∂q^x/∂x)Δx + (1/2)(∂²q^x/∂x²)(Δx)² + terms of order (Δx)³ or higher order. Similar expressions are obtained for q^y + Δq^y and q^z + Δq^z. Completing the heat balance,

ρc ΔxΔyΔz ∂u/∂t = q‴ΔxΔyΔz + q^x ΔyΔz + q^y ΔxΔz + q^z ΔxΔy
    − [q^x + (∂q^x/∂x)Δx + (1/2)(∂²q^x/∂x²)(Δx)² + · · ·]ΔyΔz
    − [q^y + (∂q^y/∂y)Δy + (1/2)(∂²q^y/∂y²)(Δy)² + · · ·]ΔxΔz
    − [q^z + (∂q^z/∂z)Δz + (1/2)(∂²q^z/∂z²)(Δz)² + · · ·]ΔxΔy    (1.8)

The terms q^x ΔyΔz, q^y ΔxΔz, and q^z ΔxΔy cancel. Taking the limit as Δx, Δy, and Δz approach zero, noting that the terms multiplied by (Δx)², (Δy)², and (Δz)² may be neglected, dividing through by ΔxΔyΔz, and noting that according to Fourier's law q^x = −k ∂u/∂x, q^y = −k ∂u/∂y, and q^z = −k ∂u/∂z, we obtain the time-dependent heat conduction equation in rectangular Cartesian coordinates:

ρc ∂u/∂t = k(∂²u/∂x² + ∂²u/∂y²) + q    (1.9)
The equation is first order in t, and second order in both x and y. If the property values
ρ, c and k and the heat generation rate per unit volume q are independent of the dependent
FIGURE 1.2: An element in cylindrical coordinates
variable, temperature, the partial differential equation is linear. If q is zero, the equation is
homogeneous. It is easy to see that if a third dimension, z, were included, the term k ∂²u/∂z² must be added to the right-hand side of the above equation.
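As a quick check of Eq. (1.9) (not part of the original derivation), a separated product u = exp(−αλ²t) sin(λx), with α = k/ρc and no heat generation, can be substituted into the equation symbolically. The following Python sketch assumes the SymPy library; the particular solution form is chosen only for illustration.

import sympy as sp

x, t, alpha, lam = sp.symbols('x t alpha lam', positive=True)
u = sp.exp(-alpha * lam**2 * t) * sp.sin(lam * x)

# residual of u_t = alpha*u_xx, i.e., Eq. (1.9) with q = 0 and alpha = k/(rho*c)
residual = sp.diff(u, t) - alpha * sp.diff(u, x, 2)
print(sp.simplify(residual))   # prints 0, so the product satisfies the equation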
1.3.2 Cylindrical Coordinates
A small element of volume r Δθ Δr Δz is shown in Fig. 1.2.
The method of developing the diffusion equation in cylindrical coordinates is much the same as for rectangular coordinates except that the heat conducted into and out of the element depends on the area as well as the heat flux as given by Fourier's law, and this area varies in the r-direction. Hence the heat conducted into the element at r is q^r r Δθ Δz, while the heat conducted out of the element at r + Δr is q^r r Δθ Δz + [∂(q^r r Δθ Δz)/∂r]Δr when terms of order (Δr)² are neglected as Δr approaches zero. In the z- and θ-directions the area does not change. Following the same procedure as in the discussion of rectangular coordinates, expanding the heat values on the three faces in Taylor series, and neglecting terms of order (Δθ)² and (Δz)² and higher,

ρc r Δθ Δr Δz ∂u/∂t = −[∂(q^r r Δθ Δz)/∂r]Δr − [∂(q^θ Δr Δz)/∂θ]Δθ − [∂(q^z r Δθ Δr)/∂z]Δz + q r Δθ Δr Δz    (1.10)
FIGURE 1.3: An element in spherical coordinates
Dividing through by the volume, we find after using Fourier’s law for the heat fluxes
ρc ∂u/∂t = k[(1/r)∂(r ∂u/∂r)/∂r + (1/r²)∂²u/∂θ² + ∂²u/∂z²] + q    (1.11)
1.3.3 Spherical Coordinates
An element in a spherical coordinate system is shown in Fig. 1.3. The volume of the element is r sin θ Δφ Δr · r Δθ = r² sin θ Δr Δθ Δφ. The net heat flows out of the element in the r, θ, and φ directions are respectively

q^r r² sin θ Δθ Δφ    (1.12)
q^θ r sin θ Δr Δφ    (1.13)
q^φ r Δθ Δr    (1.14)

It is left as an exercise for the student to show that

ρc ∂u/∂t = k[(1/r²)∂/∂r(r² ∂u/∂r) + (1/r² sin²θ)∂²u/∂φ² + (1/r² sin θ)∂(sin θ ∂u/∂θ)/∂θ] + q    (1.15)
The Laplacian Operator
The linear operator on the right-hand side of the heat equation is often referred to as the Laplacian operator and is written as ∇².
1.3.4 Boundary Conditions
Four types of boundary conditions are common in conduction problems.
a) Heat flux prescribed, in which case k∂u/∂n is given.
b) Heat flux is zero (perhaps just a special case of (a)), in which case ∂u/∂n is zero.
c) Temperature u is prescribed.
d) Convection occurs at the boundary, in which case k∂u/∂n = h(U − u).
Here n is a length in the direction normal to the surface, U is the temperature of the fluid
next to the surface that is heating or cooling the surface, and h is the coefficient of convective
heat transfer. Condition (d) is sometimes called Newton’s law of cooling.
1.4 THE VIBRATING STRING
Next we consider a tightly stretched string on some interval of the x-axis. The string is vibrating
about its equilibrium position so that its departure from equilibrium is y(t, x). The string is
assumed to be perfectly flexible with mass per unit length ρ.
Fig. 1.4 shows a portion of such a string that has been displaced upward. We assume
that the tension in the string is constant. However the direction of the tension vector along the
string varies. The tangent of the angle α(t, x) that the string makes with the horizontal is given
by the slope of the wire, ∂ y/∂ x,
V (x)/H = tan α(t, x) = ∂ y/∂ x
(1.16)
If we assume that the angle α is small then the horizontal tension force is nearly equal to
the magnitude of the tension vector itself. In this case the tangent of the slope of the wire
FIGURE 1.4: An element of a vibrating string
at x + Δx is

V(x + Δx)/H = tan α(x + Δx) = ∂y/∂x (x + Δx).    (1.17)

The vertical force V is then given by H ∂y/∂x. The net vertical force is the difference between the vertical forces at x and x + Δx, and must be equal to the mass times the acceleration of that portion of the string. The mass is ρΔx and the acceleration is ∂²y/∂t². Thus

ρΔx ∂²y/∂t² = H[∂y/∂x (x + Δx) − ∂y/∂x (x)]    (1.18)

Expanding ∂y/∂x (x + Δx) in a Taylor series about Δx = 0 and neglecting terms of order (Δx)² and smaller, we find that
ρ y_tt = H y_xx    (1.19)

which is the wave equation. Usually it is presented as

y_tt = a² y_xx    (1.20)

where a² = H/ρ is a wave speed term.
Had we included the weight of the string there would have been an extra term on the
right-hand side of this equation, the acceleration of gravity (downward). Had we included a
damping force proportional to the velocity of the string, another negative term would result:
ρ y_tt = H y_xx − b y_t − g    (1.21)
1.4.1 Boundary Conditions
The partial differential equation is linear and if the gravity term is included it is nonhomo-
geneous. It is second order in both t and x, and requires two boundary conditions (initial
conditions) on t and two boundary conditions on x. The two conditions on t are normally
specifying the initial displacement and velocity. The conditions on x normally specify the conditions at the ends of the string, i.e., at x = 0 and x = L.
1.5 VIBRATING MEMBRANE
The partial differential equation describing the motion of a vibrating membrane is simply an
extension of the right-hand side of the equation of the vibrating string to two dimensions.
Thus,
ρ y_tt + b y_t = −g + ∇²y    (1.22)

In this equation, ρ is the density per unit area and ∇²y is the Laplacian operator in either rectangular or cylindrical coordinates.
1.6 LONGITUDINAL DISPLACEMENTS OF AN ELASTIC BAR
The longitudinal displacements of an elastic bar are described by Eq. (1.20) except that in this case a² = E/ρ, where ρ is the density and E is Young's modulus.
FURTHER READING
V. Arpaci, Conduction Heat Transfer. Reading, MA: Addison-Wesley, 1966.
J. W. Brown and R. V. Churchill, Fourier Series and Boundary Value Problems. 6th edition. New
York: McGraw-Hill, 2001.
P. V. O’Neil, Advanced Engineering Mathematics. 5th edition. Pacific Grove, CA: Brooks/Cole-
Thomas Learning, 2003.
CHAPTER 2
The Fourier Method: Separation of Variables
In this chapter we will work through a few example problems in order to introduce the general
idea of separation of variables and the concept of orthogonal functions before moving on to a
more complete discussion of orthogonal function theory. We will also introduce the concepts
of nondimensionalization and normalization.
The goal here is to use the three theorems stated below to walk the student through the
solution of several types of problems using the concept of separation of variables and learn some
early lessons on how to apply the method without getting too much into the details that will
be covered later, especially in Chapter 3.
We state here without proof three fundamental theorems that will be useful in finding
series solutions to partial differential equations.
Theorem 2.1. Linear Superposition: If a group of functions u_n, n = m through n = M, are all solutions to some linear differential equation then

Σ_{n=m}^{M} c_n u_n

is also a solution.
Theorem 2.2. Orthogonal Functions: Certain sets of functions φ_n defined on the interval (a, b) possess the property that

∫_a^b φ_n φ_m dx = constant, n = m
∫_a^b φ_n φ_m dx = 0, n ≠ m
These are called orthogonal functions. Examples are the sine and cosine functions. This idea is discussed
fully in Chapter 3, particularly in connection with Sturm–Liouville equations.
Theorem 2.3. Fourier Series: A piecewise continuous function f(x) defined on (a, b) can be represented by a series of orthogonal functions φ_n(x) on that interval as

f(x) = Σ_{n=0}^{∞} A_n φ_n(x)

where

A_n = [∫_{x=a}^{b} f(x)φ_n(x) dx] / [∫_{x=a}^{b} φ_n(x)φ_n(x) dx]
These properties will be used in the following examples to introduce the idea of solution of partial
differential equations using the concept of separation of variables.
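As a numerical illustration of Theorems 2.2 and 2.3 (a sketch, not part of the text), the coefficients A_n can be computed by quadrature for a simple choice of f(x) and the orthogonal set sin(nπx) on (0, 1). The function f(x) = x(1 − x), the grid size, and the 29-term truncation are arbitrary choices; Python with NumPy is assumed.

import numpy as np

x = np.linspace(0.0, 1.0, 2001)
f = x * (1.0 - x)

def A(n):
    # Theorem 2.3: A_n = integral(f*phi_n) / integral(phi_n*phi_n)
    phi = np.sin(n * np.pi * x)
    return np.trapz(f * phi, x) / np.trapz(phi * phi, x)

series = sum(A(n) * np.sin(n * np.pi * x) for n in range(1, 30))
print(np.max(np.abs(series - f)))   # small; the series converges to f on (0, 1)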
2.1 HEAT CONDUCTION
We will first examine how Theorems 1, 2, and 3 are systematically used to obtain solutions
to problems in heat conduction in the forms of infinite series. We set out the methodology
in detail, step-by-step, with comments on lessons learned in each case. We will see that the
mathematics often serves as a guide, telling us when we make a bad assumption about solution
forms.
Example 2.1. A Transient Heat Conduction Problem
Consider a flat plate occupying the space between x = 0 and x = L. The plate stretches out
in the y and z directions far enough that variations in temperature in those directions may be
neglected. Initially the plate is at a uniform temperature u0. At time t = 0⁺ the wall at x = 0 is raised to u1 while the wall at x = L is insulated. The boundary value problem is then

ρc u_t = k u_xx,   0 < x < L,   t > 0    (2.1)
u(t, 0) = u1
u_x(t, L) = 0
u(0, x) = u0    (2.2)
2.1.1 Scales and Dimensionless Variables
When it is possible it is always a good idea to write both the independent and dependent
variables in such a way that they range from zero to unity. In the next few problems we shall
show how this can often be done.
We first note that the problem has a fundamental length scale, so that if we define another
space variable ξ = x/L, the partial differential equation can be written as
ρc u_t = L⁻²k u_ξξ,   0 < ξ < 1,   t > 0    (2.3)
Next we note that if we define a dimensionless time-like variable as τ = αt/L2, where α = k/ρc
is called the thermal diffusivity, we find
uτ = uξ ξ
(2.4)
We now proceed to nondimensionalize and normalize the dependent variable and the boundary
conditions. We define a new variable
U = (u − u1)/(u0 − u1)    (2.5)
Note that this variable is always between 0 and 1 and is dimensionless. Our boundary value
problem is now devoid of constants.
U_τ = U_ξξ    (2.6)
U(τ, 0) = 0
U_ξ(τ, 1) = 0
U(0, ξ) = 1    (2.7)
All but one of the boundary conditions are homogeneous. This will prove necessary in our analysis.
2.1.2 Separation of Variables
Begin by assuming U = Θ(τ)Φ(ξ). Insert this into the differential equation and obtain

Φ(ξ)Θ_τ(τ) = Θ(τ)Φ_ξξ(ξ)    (2.8)

Next divide both sides by U = ΦΘ,

Θ_τ/Θ = Φ_ξξ/Φ = ±λ²    (2.9)
The left-hand side of the above equation is a function of τ only while the right-hand side is a
function only of ξ . This can only be true if both are constants since they are equal to each other.
λ² is always positive, but we must decide whether to use the plus sign or the minus sign. We have two ordinary differential equations instead of one partial differential equation. Solution for Θ gives a constant times either exp(−λ²τ) or exp(+λ²τ). Since we know that U is always between 0 and 1, we see immediately that we must choose the minus sign. The second ordinary
differential equation is

Φ_ξξ = −λ²Φ    (2.10)

and we deduce that the two homogeneous boundary conditions are

Φ(0) = 0
Φ_ξ(1) = 0    (2.11)

Solving the differential equation we find

Φ = A cos(λξ) + B sin(λξ)    (2.12)
where A and B are constants to be determined. The first boundary condition requires that
A = 0.
The second boundary condition requires that either B = 0 or cos(λ) = 0. Since the former
cannot be true (U is not zero!) the latter must be true. λ can take on any of an infinite number of values λ_n = (2n − 1)π/2, where n is an integer between negative and positive infinity. Equation (2.10) together with boundary conditions (2.11) is called a Sturm–Liouville problem. The solutions are called eigenfunctions and the λ_n are called eigenvalues. A full discussion of
Sturm–Liouville theory will be presented in Chapter 3.
Hence the apparent solution to our partial differential equation is any one of the following:
U_n = B_n exp[−(2n − 1)²π²τ/4] sin[π(2n − 1)ξ/2].    (2.13)
2.1.3 Superposition
Linear differential equations possess the important property that if each solution Un satisfies
the differential equation and the boundary conditions then the linear combination
Σ_{n=1}^{∞} B_n exp[−(2n − 1)²π²τ/4] sin[π(2n − 1)ξ/2] = Σ_{n=1}^{∞} U_n    (2.14)
also satisfies them, as stated in Theorem 2. Can we build this into a solution that satisfies the one
remaining boundary condition? The final condition (the nonhomogeneous initial condition)
states that
1 = Σ_{n=1}^{∞} B_n sin(π(2n − 1)ξ/2)    (2.15)
This is called a Fourier sine series representation of 1. The topic of Fourier series is further discussed in
Chapter 3.
2.1.4 Orthogonality
It may seem hopeless at this point when we see that we need to find an infinite number of
constants Bn. What saves us is a concept called orthogonality (to be discussed in a more general
way in Chapter 3). The functions sin(π (2n − 1)ξ/2) form an orthogonal set on the interval
0 < ξ < 1, which means that
∫₀¹ sin(π(2n − 1)ξ/2) sin(π(2m − 1)ξ/2) dξ = 0 when m ≠ n
                                            = 1/2 when m = n    (2.16)
Hence if we multiply both sides of the final equation by sin(π (2m − 1)ξ/2)d ξ and integrate
over the interval, we find that all of the terms in which m ≠ n are zero, and we are left with one term, the general term for the nth coefficient, B_n:
B_n = 2 ∫₀¹ sin(π(2n − 1)ξ/2) dξ = 4/[π(2n − 1)]    (2.17)
Thus
U = Σ_{n=1}^{∞} {4/[π(2n − 1)]} exp[−π²(2n − 1)²τ/4] sin[π(2n − 1)ξ/2]    (2.18)
satisfies both the partial differential equation and the boundary and initial conditions, and
therefore is a solution to the boundary value problem.
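The series (2.18) converges rapidly for τ > 0 and is easy to evaluate numerically. The following sketch (Python with NumPy assumed; the truncation level N = 200 is an arbitrary choice) sums the first N terms and confirms that U(τ, 0) = 0 and that the interior temperature decays toward zero as τ increases.

import numpy as np

def U(tau, xi, N=200):
    n = np.arange(1, N + 1)
    lam = (2 * n - 1) * np.pi / 2           # the eigenvalues lambda_n
    coef = 4.0 / (np.pi * (2 * n - 1))      # the coefficients B_n of Eq. (2.17)
    return np.sum(coef * np.exp(-lam**2 * tau) * np.sin(lam * xi))

for tau in (0.001, 0.01, 0.1):
    print(tau, U(tau, 0.0), U(tau, 0.5), U(tau, 1.0))
# U(tau, 0) is always 0; the interior values start near 1 and decay as tau grows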
2.1.5 Lessons
We began by assuming a solution that was the product of two variables, each a function of only
one of the independent variables. Each of the resulting ordinary differential equations was then
solved. The two homogeneous boundary conditions were used to evaluate one of the constant
coefficients and the separation constant λ. It was found to have an infinite number of values.
These are called eigenvalues and the resulting functions sin(λ_n ξ) are called eigenfunctions. Linear
superposition was then used to build a solution in the form of an infinite series. The infinite
series was then required to satisfy the initial condition, the only nonhomogeneous condition.
The coefficients of the series were determined using the concept of orthogonality stated in
Theorem 3, resulting in a Fourier series. Each of these concepts will be discussed further in
Chapter 3. For now we state that many important functions are members of orthogonal sets.
The method would not have worked had the differential equation not been homoge-
neous. (Try it.) It also would not have worked if more than one boundary condition had been
nonhomogeneous. We will see how to get around these problems shortly.
Problems
1. Equation (2.9) could just as easily have been written as
Θ_τ/Θ = Φ_ξξ/Φ = +λ²
Show two reasons why this would reduce to the trivial solution or a solution for which
(cid:11) approaches infinity as τ approaches infinity, and that therefore the minus sign must
be chosen.
2. Solve the above problem with boundary conditions
Uξ (τ, 0) = 0
and U (τ, 1) = 0
using the steps given above.
Hint: cos(nπ x) is an orthogonal set on (0, 1). The result will be a Fourier cosine series
representation of 1.
3. Plot U versus ξ for τ = 0.001, 0.01, and 0.1 in Eq. (2.18). Comment.
Example 2.2. A Steady Heat Transfer Problem in Two Dimensions
Heat is conducted in a region of height a and width b. Temperature is a function of two space
dimensions and independent of time. Three sides are at temperature u0 and the fourth side is
at temperature u1. The formulation is as follows:
∂²u/∂x² + ∂²u/∂y² = 0    (2.19)

with boundary conditions

u(0, x) = u(b, x) = u(y, a) = u0
u(y, 0) = u1    (2.20)
2.1.6 Scales and Dimensionless Variables
First note that there are two obvious length scales, a and b. We can choose either one of them
to nondimensionalize x and y. We define
ξ = x/a
and η = y/b
(2.21)
so that both dimensionless lengths are normalized.
To normalize temperature we choose

U = (u − u0)/(u1 − u0)    (2.22)

The problem statement reduces to

U_ξξ + (a/b)² U_ηη = 0    (2.23)
U(0, ξ) = U(1, ξ) = U(η, 1) = 0
U(η, 0) = 1    (2.24)
2.1.7 Separation of Variables
As before, we assume a solution of the form U(ξ, η) = X(ξ)Y(η). We substitute this into the differential equation and obtain

Y(η)X_ξξ(ξ) = −(a/b)² X(ξ)Y_ηη(η)    (2.25)

Next we divide both sides by U(ξ, η) and obtain

X_ξξ/X = −(a/b)² Y_ηη/Y = ±λ²    (2.26)

In order for the function only of ξ on the left-hand side of this equation to be equal to the function only of η on the right-hand side, both must be constant.
= −
2.1.8 Choosing the Sign of the Separation Constant
However in this case it is not as clear as the case of Example 1 what the sign of this constant
must be. Hence we have designated the constant as ±λ2 so that for real values of λ the ± sign
determines the sign of the constant. Let us proceed by choosing the negative sign and see where
this leads.
Thus

X_ξξ = −λ²X    (2.27)

Y(η)X(0) = 1
Y(η)X(1) = 0

or

X(0) = 1
X(1) = 0    (2.28)
and

Y_ηη = ∓(b/a)² λ²Y    (2.29)

X(ξ)Y(0) = X(ξ)Y(1) = 0   or   Y(0) = Y(1) = 0    (2.30)

The solution of the differential equation in the η direction is

Y(η) = A cosh(bλη/a) + B sinh(bλη/a)    (2.31)
Applying the first boundary condition (at η = 0) we find that A = 0. When we apply the
boundary condition at η = 1 however, we find that it requires that
0 = B sinh(bλ/a)
(2.32)
so that either B = 0 or λ = 0. Neither of these is acceptable since either would require that
Y (η) = 0 for all values of η.
We next try the positive sign. In this case

X_ξξ = λ²X    (2.33)
Y_ηη = −(b/a)² λ²Y    (2.34)

with the same boundary conditions given above. The solution for Y(η) is now

Y(η) = A cos(bλη/a) + B sin(bλη/a)    (2.35)

The boundary condition at η = 0 requires that

0 = A cos(0) + B sin(0)    (2.36)

so that again A = 0. The boundary condition at η = 1 requires that

0 = B sin(bλ/a)    (2.37)

Since we don't want B to be zero, we can satisfy this condition if

λ_n = anπ/b,   n = 0, 1, 2, 3, . . .    (2.38)

Thus

Y(η) = B sin(nπη)    (2.39)
Solution for X(ξ ) yields hyperbolic functions.
X(ξ) = C cosh(λ_n ξ) + D sinh(λ_n ξ)    (2.40)

The boundary condition at ξ = 1 requires that

0 = C cosh(λ_n) + D sinh(λ_n)    (2.41)

or, solving for C in terms of D,

C = −D tanh(λ_n)    (2.42)

One solution of our problem is therefore

U_n(ξ, η) = K_n sin(nπη)[sinh(anπξ/b) − cosh(anπξ/b) tanh(anπ/b)]
(2.43)
2.1.9 Superposition
According to the superposition theorem (Theorem 2) we can now form a solution as
U(ξ, η) = Σ_{n=0}^{∞} K_n sin(nπη)[sinh(anπξ/b) − cosh(anπξ/b) tanh(anπ/b)]    (2.44)

The final boundary condition (the nonhomogeneous one) can now be applied,

1 = −Σ_{n=1}^{∞} K_n sin(nπη) tanh(anπ/b)    (2.45)
2.1.10 Orthogonality
We have already noted that the sine function is an orthogonal function as defined on (0, 1).
Thus, we multiply both sides of this equation by sin(mπ η)d η and integrate over (0, 1), noting
that according to the orthogonality theorem (Theorem 3) the integral is zero unless n = m.
The result is
∫_{η=0}^{1} sin(nπη) dη = −K_n tanh(anπ/b) ∫_{η=0}^{1} sin²(nπη) dη    (2.46)

(1/nπ)[1 − (−1)ⁿ] = −(1/2) K_n tanh(anπ/b)    (2.47)

K_n = −2[1 − (−1)ⁿ]/[nπ tanh(anπ/b)]    (2.48)
The solution is represented by the infinite series
U(ξ, η) = Σ_{n=1}^{∞} {2[1 − (−1)ⁿ]/[nπ tanh(anπ/b)]} sin(nπη)[cosh(anπξ/b) tanh(anπ/b) − sinh(anπξ/b)]    (2.49)
2.1.11 Lessons
The methodology for this problem is the same as in Example 1.
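A numerical evaluation of Eq. (2.49) makes a useful check (a sketch; Python with NumPy, a = b = 1, and N = 200 terms are arbitrary choices). To avoid overflow of cosh and sinh at large n, the bracket is rewritten in the algebraically equivalent form sinh[nπa(1 − ξ)/b]/cosh(nπa/b).

import numpy as np

def U(eta, xi, a=1.0, b=1.0, N=200):
    total = 0.0
    for n in range(1, N + 1):
        Z = a * n * np.pi / b                       # argument at xi = 1
        z = Z * xi
        coef = 2.0 * (1 - (-1)**n) / (n * np.pi * np.tanh(Z))
        # overflow-safe form of cosh(z)*tanh(Z) - sinh(z)
        bracket = (np.exp(-z) - np.exp(z - 2.0 * Z)) / (1.0 + np.exp(-2.0 * Z))
        total += coef * np.sin(n * np.pi * eta) * bracket
    return total

print(U(0.5, 0.0))                 # ~1: the nonhomogeneous side xi = 0
print(U(0.5, 1.0))                 # ~0: the side xi = 1
print(U(0.0, 0.3), U(1.0, 0.3))    # 0 at eta = 0 and eta = 1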
Example 2.3. A Steady Conduction Problem in Two Dimensions: Addition of Solutions
We now illustrate a problem in which two of the boundary conditions are nonhomogeneous.
Since the problem and the boundary conditions are both linear we can simply break the problem
into two problems and add them. Consider steady conduction in a square region L by L in size.
Two sides are at temperature u0 while the other two sides are at temperature u1.
u_xx + u_yy = 0    (2.50)
We need four boundary conditions since the differential equation is of order 2 in both inde-
pendent variables.
u(0, y) = u(L, y) = u0    (2.51)
u(x, 0) = u(x, L) = u1    (2.52)
2.1.12 Scales and Dimensionless Variables
The length scale is L, so we let ξ = x/L and η = y/L. We can make the first two bound-
ary conditions homogeneous while normalizing the second two by defining a dimensionless
temperature as
U = (u − u0)/(u1 − u0)    (2.53)

Then

U_ξξ + U_ηη = 0    (2.54)
U(0, η) = U(1, η) = 0    (2.55)
U(ξ, 0) = U(ξ, 1) = 1    (2.56)
2.1.13 Getting to One Nonhomogeneous Condition
There are two nonhomogeneous boundary conditions, so we must find a way to only have one.
Let U = V + W so that we have two problems, each with one nonhomogeneous boundary
condition.
W_ξξ + W_ηη = 0    (2.57)
W(0, η) = W(1, η) = W(ξ, 0) = 0
W(ξ, 1) = 1    (2.58)

V_ξξ + V_ηη = 0    (2.59)
V(0, η) = V(1, η) = V(ξ, 1) = 0
V(ξ, 0) = 1    (2.60)
(It should be clear that these two problems are identical if we put V(ξ, η) = W(ξ, 1 − η). We will
therefore only need to solve for W.)
2.1.14 Separation of Variables
Separate variables by letting W(ξ, η) = P (ξ )Q(η).
P_ξξ/P = −Q_ηη/Q = ±λ²    (2.61)
2.1.15 Choosing the Sign of the Separation Constant
Once again it is not immediately clear whether to choose the plus sign or the minus sign. Let’s
see what happens if we choose the plus sign.
P_ξξ = λ²P    (2.62)

The solution is exponentials or hyperbolic functions.

P = A sinh(λξ) + B cosh(λξ)    (2.63)
Applying the boundary condition on ξ = 0, we find that B = 0. The boundary condition on
ξ = 1 requires that A sinh(λ) = 0, which can only be satisfied if A = 0 or λ = 0, which yields
a trivial solution, W = 0, and is unacceptable. The only hope for a solution is thus choosing
the minus sign.
If we choose the minus sign in Eq. (2.61) then

P_ξξ = −λ²P    (2.64)
Q_ηη = λ²Q    (2.65)

with solutions

P = A sin(λξ) + B cos(λξ)    (2.66)
and
Q = C sinh(λη) + D cosh(λη)
(2.67)
respectively. Remembering to apply the homogeneous boundary conditions first, we find that for
W(0, η) = 0, B = 0 and for W(1, η) = 0, sin(λ) = 0. Thus, λ = nπ , our eigenvalues cor-
responding to the eigenfunctions sin(nπ ξ ). The last homogeneous boundary condition is
W(ξ, 0) = 0, which requires that D = 0. There are an infinite number of solutions of the form
(P Q)_n = K_n sinh(nπη) sin(nπξ)    (2.68)
2.1.16 Superposition
Since our problem is linear we apply superposition.
W = Σ_{n=1}^{∞} K_n sinh(nπη) sin(nπξ)    (2.69)

Applying the final boundary condition, W(ξ, 1) = 1,

1 = Σ_{n=1}^{∞} K_n sinh(nπ) sin(nπξ).    (2.70)
2.1.17 Orthogonality
Multiplying both sides of Eq. (2.70) by sin(mπ ξ ) and integrating over the interval (0, 1)
∫₀¹ sin(mπξ) dξ = Σ_{n=1}^{∞} K_n sinh(nπ) ∫₀¹ sin(nπξ) sin(mπξ) dξ    (2.71)

The orthogonality property of the sine eigenfunction states that

∫₀¹ sin(nπξ) sin(mπξ) dξ = 0, m ≠ n
                          = 1/2, m = n    (2.72)

Thus,

K_n = 2[1 − (−1)ⁿ]/[nπ sinh(nπ)]    (2.73)

and

W = Σ_{n=1}^{∞} {2[1 − (−1)ⁿ]/[nπ sinh(nπ)]} sinh(nπη) sin(nπξ)    (2.74)
Recall that

V = W(ξ, 1 − η)   and   U = V + W
2.1.18 Lessons
If there are two nonhomogeneous boundary conditions break the problem into two problems
that can be added (since the equations are linear) to give the complete solution. If you are
unsure of the sign of the separation constant just assume a sign and move on. Listen to what the
mathematics is telling you. It will always tell you if you choose wrong.
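The complete solution of Example 2.3 can also be verified numerically (a sketch; Python with NumPy and a 200-term truncation are assumed). W is summed from Eq. (2.74), V is obtained as W(ξ, 1 − η), and the boundary values of U = V + W are checked.

import numpy as np

def W(xi, eta, N=200):
    total = 0.0
    for n in range(1, N + 1):
        coef = 2.0 * (1 - (-1)**n) / (n * np.pi * np.sinh(n * np.pi))
        total += coef * np.sinh(n * np.pi * eta) * np.sin(n * np.pi * xi)
    return total

def U(xi, eta):
    return W(xi, eta) + W(xi, 1.0 - eta)

print(U(0.5, 1.0), U(0.5, 0.0))   # ~1 on the two nonhomogeneous sides
print(U(0.0, 0.5), U(1.0, 0.5))   # 0 on the two homogeneous sides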
Example 2.4. A Non-homogeneous Heat Conduction Problem
Consider now the arrangement above, but with a heat source, and with both boundaries held
at the initial temperature u0. The heat source is initially zero and is turned on at t = 0⁺. The exercise illustrates the method of solving the problem when the single nonhomogeneous condition is in the partial differential equation rather than one of the boundary conditions.

ρc u_t = k u_xx + q    (2.75)
u(0, x) = u0
u(t, 0) = u0
u(t, L) = u0    (2.76)
2.1.19 Scales and Dimensionless Variables
Observe that the length scale is still L, so we define ξ = x/L. Recall that k/ρc = α is the
diffusivity. How shall we nondimensionalize temperature? We want as many ones and zeros
in coefficients in the partial differential equation and the boundary conditions as possible.
Define U = (u − u0)/S, where S stands for “something with dimensions of temperature” that
we must find. Dividing both sides of the partial differential equation by q and substituting ξ for x,

Sρc U_t/q = kS U_ξξ/(qL²) + 1    (2.77)

Letting S = qL²/k leads to one as the coefficient of the first term on the right-hand side. Choosing the same dimensionless time as before, τ = αt/L², results in one as the coefficient of
the time derivative term. We now have
Uτ = Uξ ξ + 1
U (0, ξ ) = 0
U (τ, 0) = 0
U (τ, 1) = 0
(2.78)
(2.79)
2.1.20 Relocating the Nonhomogeneity
We have only one nonhomogeneous condition, but it’s in the wrong place. The differential
equation won’t separate. For example if we let U (ξ, τ ) = P (ξ )G(τ ) and insert this into the
partial differential equation and divide by P G, we find
G′(τ)/G = P″(ξ)/P + 1/(PG)    (2.80)
The technique to deal with this is to relocate the nonhomogeneous condition to the initial
condition. Assume a solution in the form U = W(ξ ) + V (τ, ξ ). We now have
Vτ = Vξ ξ + Wξ ξ + 1
(2.81)
If we set Wξ ξ = −1, the differential equation for V becomes homogeneous. We then set
both W and V equal to zero at ξ = 0 and 1 and V (0, ξ ) = −W(ξ )
W_ξξ = −1
W(0) = W(1) = 0    (2.82)

and

V_τ = V_ξξ    (2.83)
V(0, ξ) = −W(ξ)    (2.84)
V(τ, 0) = 0
V(τ, 1) = 0    (2.85)

The solution for W is parabolic

W = (1/2) ξ(1 − ξ)    (2.86)
2.1.21 Separating Variables
We now solve for V using separation of variables.
V = P(τ)Q(ξ)    (2.87)

P_τ/P = Q_ξξ/Q = ±λ²    (2.88)
We must choose the minus sign once again (see Problem 1 above) to have a negative exponential
for P (τ ). (We will see later that it’s not always so obvious.) P = exp(−λ2τ ).
The solution for Q is once again sines and cosines.
Q = A cos(λξ ) + B sin(λξ )
(2.89)
The boundary condition V (τ, 0) = 0 requires that Q(0) = 0. Hence, A = 0. The boundary
condition V (τ, 1) = 0 requires that Q(1) = 0. Since B cannot be zero, sin(λ) = 0 so that our
eigenvalues are λ = nπ and our eigenfunctions are sin(nπ ξ ).
2.1.22 Superposition
Once again using linear superposition,
V = Σ_{n=1}^{∞} B_n exp(−n²π²τ) sin(nπξ)    (2.90)

Applying the initial condition

(1/2) ξ(ξ − 1) = Σ_{n=1}^{∞} B_n sin(nπξ)    (2.91)

This is a Fourier sine series representation of (1/2)ξ(ξ − 1). We now use the orthogonality of
the sine function to obtain the coefficients Bn.
2.1.23 Orthogonality
Using the concept of orthogonality again, we multiply both sides by sin(mπ ξ )d ξ and integrate
over the space noting that the integral is zero if m is not equal to n. Thus, since
∫₀¹ sin²(nπξ) dξ = 1/2    (2.92)

B_n = ∫₀¹ ξ(ξ − 1) sin(nπξ) dξ    (2.93)
2.1.24 Lessons
When the differential equation is nonhomogeneous use the linearity of the differential equation
to transfer the nonhomogeneous condition to one of the boundary conditions. Usually this will
result in a homogeneous partial differential equation and an ordinary differential equation.
We pause here to note that while the method of separation of variables is straightforward
in principle, a certain amount of intuition or, if you wish, cleverness is often required in order
to put the equation and boundary conditions in an appropriate form. The student working
diligently will soon develop these skills.
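For Example 2.4 the coefficients B_n of Eq. (2.93) can be evaluated by quadrature and the solution U = W + V reassembled; the sketch below (Python with NumPy; the 50-term truncation and the grid size are arbitrary choices) confirms that U vanishes at τ = 0 and relaxes to the steady parabolic profile W as τ grows.

import numpy as np

xi = np.linspace(0.0, 1.0, 1001)
W = 0.5 * xi * (1.0 - xi)                 # Eq. (2.86)

def B(n):
    # Eq. (2.93) evaluated numerically
    return np.trapz(xi * (xi - 1.0) * np.sin(n * np.pi * xi), xi)

def U(tau, N=50):
    V = sum(B(n) * np.exp(-(n * np.pi)**2 * tau) * np.sin(n * np.pi * xi)
            for n in range(1, N + 1))
    return W + V

print(np.max(np.abs(U(0.0))))       # ~0: the initial condition U(0, xi) = 0
print(np.max(np.abs(U(1.0) - W)))   # ~0: the transient part has died away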
Problems
1. Using these ideas obtain a series solution to the boundary value problem
u_t = u_xx
u(t, 0) = 0
u(t, 1) = 0
u(0, x) = 1

2. Find a series solution to the boundary value problem

u_t = u_xx + x
u_x(t, 0) = 0
u(t, 1) = 0
u(0, x) = 0
2.2 VIBRATIONS
In vibrations problems the dependent variable appears in the differential equation as a second-order derivative with respect to the time variable t. The methodology is, however, essentially the same
as it is in the diffusion equation. We first apply separation of variables, then use the boundary
conditions to obtain eigenfunctions and eigenvalues, and use the linearity and orthogonality
principles and the single nonhomogeneous condition to obtain a series solution. Once again, if
there is more than one nonhomogeneous condition we use the linear superposition principle
to obtain solutions for each nonhomogeneous condition and add the resulting solutions. We
illustrate these ideas with several examples.
Example 2.5. A Vibrating String
Consider a string of length L fixed at the ends. The string is initially held in a fixed position
y(0, x) = f (x), where it is clear that f (x) must be zero at both x = 0 and x = L. The boundary
value problem is as follows:
y_tt = a² y_xx    (2.94)
y(t, 0) = 0
y(t, L) = 0
y(0, x) = f(x)
y_t(0, x) = 0    (2.95)
2.2.1 Scales and Dimensionless Variables
The problem has the obvious length scale L. Hence let ξ = x/L. Now let τ = ta/L and the
equation becomes
yτ τ = yξ ξ
(2.96)
One could now nondimensionalize y, for example, by defining a new variable as
f (x)/ fmax, but it wouldn’t simplify things. The boundary conditions remain the same except t
and x are replaced by τ and ξ .
2.2.2 Separation of Variables
You know the dance. Let y = P (τ )Q(ξ ). Differentiating and substituting into Eq. (2.96),
Pτ τ Q = P Qξ ξ
(2.97)
Dividing by P Q and noting that Pτ τ /P and Qξ ξ /Q cannot be equal to one another unless
they are both constants, we find
Pτ τ /P = Qξ ξ /Q = ±λ2
(2.98)
It should be physically clear that we want the minus sign. Otherwise both solutions will be
hyperbolic functions. However if you choose the plus sign you will immediately find that
the boundary conditions on ξ cannot be satisfied. Refer back to (2.63) and the sentences
following.
The two ordinary differential equations and homogeneous boundary conditions are
Pτ τ + λ2 P = 0
Pτ (0) = 0
(2.99)
and

Q_ξξ + λ²Q = 0
Q(0) = 0
Q(1) = 0    (2.100)

The solutions are

P = A sin(λτ) + B cos(λτ)    (2.101)
Q = C sin(λξ) + D cos(λξ)    (2.102)
The first boundary condition of Eq. (2.100) requires that D = 0. The second requires that C sin(λ) be zero. Our eigenvalues are again λ_n = nπ. The boundary condition at τ = 0, that P_τ = 0, requires that A = 0. Thus

(P Q)_n = K_n sin(nπξ) cos(nπτ)    (2.103)

The final form of the solution is then

y(τ, ξ) = Σ_{n=1}^{∞} K_n sin(nπξ) cos(nπτ)    (2.104)
2.2.3 Orthogonality
Applying the final (nonhomogeneous) boundary condition (the initial position),

f(ξ) = Σ_{n=1}^{∞} K_n sin(nπξ)    (2.105)

In particular, if

f(x) = h x,   0 < x < 1/2
     = h(1 − x),   1/2 < x < 1    (2.106)

then

∫₀¹ f(x) sin(nπx) dx = ∫₀^{1/2} h x sin(nπx) dx + ∫_{1/2}^{1} h(1 − x) sin(nπx) dx
                     = [2h/(n²π²)] sin(nπ/2)    (2.107)

which is zero for even n, and

∫₀¹ K_n sin²(nπx) dx = K_n/2    (2.108)
so that
y = (4h/π²) Σ_{n=1}^{∞} [(−1)^{n+1}/(2n − 1)²] sin[(2n − 1)πξ] cos[(2n − 1)πτ]    (2.109)
2.2.4 Lessons
The solutions are in the form of infinite series. The coefficients of the terms of the series
are determined by using the fact that the solutions of at least one of the ordinary differential
equations are orthogonal functions. The orthogonality condition allows us to calculate these
coefficients.
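The plucked-string series (2.109) can be checked numerically (a sketch; Python with NumPy, h = 1, and a 200-term truncation are arbitrary choices). At τ = 0 it should reproduce the triangular shape of Eq. (2.106); at τ = 1 every odd-mode cosine equals −1, so the string should be the inverted shape y = −f(ξ).

import numpy as np

h, N = 1.0, 200
xi = np.linspace(0.0, 1.0, 1001)
f = np.where(xi < 0.5, h * xi, h * (1.0 - xi))      # Eq. (2.106)

def y(tau):
    total = np.zeros_like(xi)
    for n in range(1, N + 1):
        k = 2 * n - 1                               # only odd multiples contribute
        total += ((-1)**(n + 1) / k**2) * np.sin(k * np.pi * xi) * np.cos(k * np.pi * tau)
    return 4.0 * h / np.pi**2 * total

print(np.max(np.abs(y(0.0) - f)))   # small: the series reproduces the initial shape
print(np.max(np.abs(y(1.0) + f)))   # small: at tau = 1 the string is inverted, y = -f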
Problem
1. Solve the boundary value problem
u_tt = u_xx
u(t, 0) = u(t, 1) = 0
u(0, x) = 0
u_t(0, x) = f(x)
Find the special case when f (x) = sin(π x).
FURTHER READING
V. Arpaci, Conduction Heat Transfer. Reading, MA: Addison-Wesley, 1966.
J. W. Brown and R. V. Churchill, Fourier Series and Boundary Value Problems. 6th edition. New
York: McGraw-Hill, 2001.
CHAPTER 3
Orthogonal Sets of Functions
In this chapter we elaborate on the concepts of orthogonality and Fourier series. We begin
with the familiar concept of orthogonality of vectors. We then extend the idea to orthogonality
of functions and the use of this idea to represent general functions as Fourier series—series of
orthogonal functions.
Next we show that solutions of a fairly general linear ordinary differential equation—the
Sturm–Liouville equation—are orthogonal functions. Several examples are given.
3.1 VECTORS
We begin our study of orthogonality with the familiar topic of orthogonal vectors. Suppose u(1),
u(2), and u(3) are the three rectangular components of a vector u in an ordinary three-dimensional
space. The norm of the vector (its length), ||u||, is

\|u\| = \left[u(1)^2 + u(2)^2 + u(3)^2\right]^{1/2}                          (3.1)

If \|u\| = 1, u is said to be normalized. If \|u\| = 0, then u(r) = 0 for each r and u is the zero vector.
A linear combination of two vectors u_1 and u_2 is

u = c_1 u_1 + c_2 u_2                                                        (3.2)

The scalar or inner product of the two vectors u_1 and u_2 is defined as

(u_1, u_2) = \sum_{r=1}^{3} u_1(r)\,u_2(r) = \|u_1\|\,\|u_2\|\cos\theta      (3.3)

3.1.1 Orthogonality of Vectors
If neither u_1 nor u_2 is the zero vector and if

(u_1, u_2) = 0                                                               (3.4)

then \theta = \pi/2 and the vectors are orthogonal. The norm of a vector u is

\|u\| = (u, u)^{1/2}                                                         (3.5)
3.1.2 Orthonormal Sets of Vectors
The vector \hat{u}_n = u_n/\|u_n\| has magnitude unity, and if u_1 and u_2 are orthogonal then \hat{u}_1 and
\hat{u}_2 are orthonormal and their inner product is

(\hat{u}_n, \hat{u}_m) = \delta_{nm} = 0, \quad m \ne n
                                  = 1, \quad m = n                            (3.6)

where \delta_{nm} is called the Kronecker delta.
If \hat{u}_1, \hat{u}_2, and \hat{u}_3 are three vectors that are mutually orthogonal to each other, then every
vector in three-dimensional space can be written as a linear combination of \hat{u}_1, \hat{u}_2, and \hat{u}_3;
that is,

f(r) = c_1\hat{u}_1 + c_2\hat{u}_2 + c_3\hat{u}_3                             (3.7)

Note that due to the fact that the vectors \hat{u}_n form an orthonormal set,

(f, \hat{u}_1) = c_1, \quad (f, \hat{u}_2) = c_2, \quad (f, \hat{u}_3) = c_3   (3.8)

Simply put, suppose the vector f is

f = 2\hat{u}_1 + 4\hat{u}_2 + \hat{u}_3                                       (3.9)

Taking the inner product of f with \hat{u}_1 we find that

(f, \hat{u}_1) = 2(\hat{u}_1, \hat{u}_1) + 4(\hat{u}_1, \hat{u}_2) + (\hat{u}_1, \hat{u}_3)          (3.10)

and according to Eq. (3.8) c_1 = 2. Similarly, c_2 = 4 and c_3 = 1.
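The following small Python sketch (ours, not the book's) makes the same point numerically, with the ordinary Cartesian unit vectors standing in for an orthonormal set:

import numpy as np

e1, e2, e3 = np.eye(3)          # an orthonormal set in three dimensions
f = 2*e1 + 4*e2 + 1*e3          # the vector of Eq. (3.9)

# Each component is recovered as an inner product, as in Eq. (3.8).
print([np.dot(f, e) for e in (e1, e2, e3)])   # [2.0, 4.0, 1.0]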
3.2 FUNCTIONS
3.2.1 Orthonormal Sets of Functions and Fourier Series
Suppose there is a set of orthonormal functions \Phi_n(x) defined on an interval a < x < b
(\sqrt{2}\sin(n\pi x) on the interval 0 < x < 1 is an example). A set of orthonormal functions is
defined as one whose inner product, \int_a^b \Phi_n(x)\Phi_m(x)\,dx, is

(\Phi_n, \Phi_m) = \int_a^b \Phi_n\Phi_m\,dx = \delta_{nm}                    (3.11)

Suppose we can express a function as an infinite series of these orthonormal functions,

f(x) = \sum_{n=0}^{\infty} c_n\Phi_n \quad \text{on } a < x < b               (3.12)

Equation (3.12) is called a Fourier series of f(x) in terms of the orthonormal function set \Phi_n(x).
If we now form the inner product of \Phi_m with both sides of Eq. (3.12) and use the
definition of an orthonormal function set as stated in Eq. (3.11), we see that the inner product
of f(x) and \Phi_n(x) is c_n:

c_n\int_a^b \Phi_n^2(\xi)\,d\xi = c_n = \int_a^b f(\xi)\,\Phi_n(\xi)\,d\xi    (3.13)

In particular, consider a set of functions \psi_n that are orthogonal on the interval (a, b) so that

\int_a^b \psi_n(\xi)\psi_m(\xi)\,d\xi = 0, \quad m \ne n
                                      = \|\psi_n\|^2, \quad m = n             (3.14)

where \|\psi_n\|^2 = \int_a^b \psi_n^2(\xi)\,d\xi is called the square of the norm of \psi_n. The functions

\Phi_n = \frac{\psi_n}{\|\psi_n\|}                                            (3.15)

then form an orthonormal set. We now show how to form the series representation of the
function f(x) as a series expansion in terms of the orthogonal (but not orthonormal) set of
functions \psi_n(x):

f(x) = \sum_{n=0}^{\infty} \frac{\psi_n}{\|\psi_n\|}\int_a^b f(\xi)\,\frac{\psi_n(\xi)}{\|\psi_n\|}\,d\xi
     = \sum_{n=0}^{\infty} \psi_n\int_a^b f(\xi)\,\frac{\psi_n(\xi)}{\|\psi_n\|^2}\,d\xi          (3.16)

This is called a Fourier series representation of the function f(x).
As a concrete example, the square of the norm of the sine function on the interval (0, \pi) is

\|\sin(nx)\|^2 = \int_0^{\pi}\sin^2(n\xi)\,d\xi = \frac{\pi}{2}               (3.17)

so that the corresponding orthonormal function is

\Phi_n = \sqrt{\frac{2}{\pi}}\,\sin(nx)                                        (3.18)

A function can be represented by a series of sine functions on the interval (0, \pi) as

f(x) = \sum_{n=0}^{\infty}\sin(nx)\,\frac{2}{\pi}\int_0^{\pi}\sin(n\varsigma)\,f(\varsigma)\,d\varsigma          (3.19)

This is a Fourier sine series.
3.2.2 Best Approximation
We next ask whether, since we can never sum to infinity, the values of the constants c n in
Eq. (3.13) give the most accurate approximation of the function. To illustrate the idea we return
to the idea of orthogonal vectors in three-dimensional space. Suppose we want to approximate a
three-dimensional vector with a two-dimensional vector. What will be the components of the
two-dimensional vector that best approximate the three-dimensional vector?
Let the three-dimensional vector be f = c_1\hat{u}_1 + c_2\hat{u}_2 + c_3\hat{u}_3. Let the two-dimensional
vector be k = a_1\hat{u}_1 + a_2\hat{u}_2. We wish to minimize \|k - f\|:

\|k - f\| = \left[(a_1 - c_1)^2 + (a_2 - c_2)^2 + c_3^2\right]^{1/2}          (3.20)

It is clear from the above equation (and also from Fig. 3.1) that this will be minimized when
a_1 = c_1 and a_2 = c_2.
Turning now to the orthogonal function series, we attempt to minimize the difference
between the function with an infinite number of terms and the summation only to some finite
value m. The square of the error is

E^2 = \int_a^b\left(f(x) - K_m(x)\right)^2 dx = \int_a^b\left[f^2(x) + K_m^2(x) - 2f(x)K_m(x)\right]dx          (3.21)

where

f(x) = \sum_{n=1}^{\infty} c_n\Phi_n(x)                                       (3.22)

and

K_m = \sum_{n=1}^{m} a_n\Phi_n(x)                                             (3.23)

FIGURE 3.1: Best approximation of a three-dimensional vector in two dimensions
Noting that

\int_a^b K_m^2(x)\,dx = \sum_{n=1}^{m}\sum_{j=1}^{m} a_n a_j\int_a^b \Phi_n(x)\Phi_j(x)\,dx = \sum_{n=1}^{m} a_n^2 = a_1^2 + a_2^2 + a_3^2 + \cdots + a_m^2          (3.24)

and

\int_a^b f(x)K_m(x)\,dx = \sum_{n=1}^{\infty}\sum_{j=1}^{m} c_n a_j\int_a^b \Phi_n(x)\Phi_j(x)\,dx = \sum_{n=1}^{m} c_n a_n = c_1 a_1 + c_2 a_2 + \cdots + c_m a_m          (3.25)

E^2 = \int_a^b f^2(x)\,dx + a_1^2 + \cdots + a_m^2 - 2a_1 c_1 - \cdots - 2a_m c_m          (3.26)

Now add and subtract c_1^2, c_2^2, \ldots, c_m^2. Thus Eq. (3.26) becomes

E^2 = \int_a^b f^2(x)\,dx - c_1^2 - c_2^2 - \cdots - c_m^2 + (a_1 - c_1)^2 + (a_2 - c_2)^2 + \cdots + (a_m - c_m)^2          (3.27)

which is clearly minimized when a_n = c_n.
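A numerical check of this statement is straightforward. The Python sketch below is our own illustration; the choices f(x) = x, the orthonormal set \sqrt{2}\sin(n\pi x) on (0, 1), and m = 5 are arbitrary. It computes the truncation error with the Fourier coefficients and then with perturbed coefficients:

import numpy as np

x = np.linspace(0.0, 1.0, 2001)
f = x
m = 5

def phi(n, x):                                  # orthonormal set on (0, 1)
    return np.sqrt(2.0) * np.sin(n * np.pi * x)

# Fourier coefficients c_n = (f, Phi_n), by numerical quadrature.
c = np.array([np.trapz(f * phi(n, x), x) for n in range(1, m + 1)])

def sq_error(a):
    """Integral of (f - sum a_n Phi_n)^2 over (0, 1)."""
    K = sum(a[n - 1] * phi(n, x) for n in range(1, m + 1))
    return np.trapz((f - K) ** 2, x)

print(sq_error(c))                              # error with the Fourier coefficients
print(sq_error(c + 0.05 * np.random.randn(m)))  # perturbing the coefficients increases it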
3.2.3 Convergence of Fourier Series
We briefly consider the question of whether the Fourier series actually converges to the function
f (x) for all values, say, on the interval a ≤ x ≤ b. The series will converge to the function if
the value of E defined in (3.21) approaches zero as m approaches infinity. Suffice it to say that
this is true for functions that are continuous with piecewise continuous first derivatives, that
is, most physically realistic temperature distributions, displacements of vibrating strings and
bars. In each particular situation, however, one should use the various convergence theorems
that are presented in most elementary calculus books. Uniform convergence of Fourier series
is discussed extensively in the book Fourier Series and Boundary Value Problems by James Ward
Brown and R. V. Churchill. In this chapter we give only a few physically clear examples.
3.2.4 Examples of Fourier Series
Example 3.1. Determine a Fourier sine series representation of f(x) = x on the interval
(0, 1). The series will take the form

x = \sum_{j=1}^{\infty} c_j\sin(j\pi x)                                       (3.28)

Since the functions \sin(j\pi x) form an orthogonal set on (0, 1), multiply both sides by \sin(k\pi x)\,dx and
integrate over the interval on which the functions are orthogonal:

\int_0^1 x\sin(k\pi x)\,dx = \sum_{j=1}^{\infty}\int_0^1 c_j\sin(j\pi x)\sin(k\pi x)\,dx          (3.29)

Noting that all of the terms on the right-hand side of (3.29) are zero except the one for which
k = j,

\int_0^1 x\sin(j\pi x)\,dx = c_j\int_0^1\sin^2(j\pi x)\,dx                    (3.30)

After integrating we find

\frac{(-1)^{j+1}}{j\pi} = \frac{c_j}{2}                                        (3.31)

Thus,

x = \sum_{j=1}^{\infty} \frac{2(-1)^{j+1}}{j\pi}\sin(j\pi x)                   (3.32)
This is an alternating sign series in which the coefficients always decrease as j increases, and
it therefore converges. The sine function is periodic and so the series must also be a periodic
function beyond the interval (0, 1). The series outside this interval forms the periodic continuation
of the series. Note that the sine is an odd function so that sin( j π x) = − sin(− j π x). Thus the
periodic continuation looks like Fig. 3.2. The series converges everywhere, but at x = 1 it is
identically zero instead of one. It converges to 1 − ε arbitrarily close to x = 1.
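This behavior is easy to see numerically. The sketch below (ours; the sample points and the numbers of terms are arbitrary choices) sums the series (3.32) with 10, 100, and 1000 terms:

import numpy as np

def sine_series_x(x, n_terms):
    j = np.arange(1, n_terms + 1)
    c = 2.0 * (-1.0) ** (j + 1) / (j * np.pi)   # coefficients from (3.31)
    return np.sum(c[:, None] * np.sin(np.outer(j, np.pi * x)), axis=0)

x = np.array([0.25, 0.5, 0.9, 0.99, 1.0])
for n in (10, 100, 1000):
    print(n, sine_series_x(x, n))
# The partial sums approach x at interior points but are identically zero at x = 1.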
FIGURE 3.2: The periodic continuation of the function x represented by the sine series
Example 3.2. Find a Fourier cosine series for f(x) = x on the interval (0, 1). In this case

x = \sum_{n=0}^{\infty} c_n\cos(n\pi x)                                       (3.33)

Multiply both sides by \cos(m\pi x)\,dx and integrate over (0, 1):

\int_0^1 x\cos(m\pi x)\,dx = \sum_{n=0}^{\infty} c_n\int_0^1\cos(m\pi x)\cos(n\pi x)\,dx          (3.34)

Noting that the \cos(n\pi x) form an orthogonal set on (0, 1), all terms in (3.34) are zero except when
n = m. Evaluating the integrals,

\frac{c_n}{2} = \frac{(-1)^n - 1}{(n\pi)^2}                                    (3.35)

There is a problem when n = 0: both the numerator and the denominator are zero there.
However we can evaluate c_0 by noting that, from Eq. (3.34) with m = 0,

\int_0^1 x\,dx = c_0 = \frac{1}{2}                                             (3.36)

and the cosine series is therefore

x = \frac{1}{2} + \sum_{n=1}^{\infty} \frac{2\left[(-1)^n - 1\right]}{(n\pi)^2}\cos(n\pi x)          (3.37)

The series converges to x everywhere. Since \cos(n\pi x) = \cos(-n\pi x) it is an even function and
its periodic continuation is shown in Fig. 3.3. Note that the sine series is discontinuous at x = 1,
while the cosine series is continuous everywhere. (Which is the better representation?)
FIGURE 3.3: The periodic continuation of the series in Example 3.2
It should be clear from the above examples that in general a Fourier sine/cosine series of
a function f (x) defined on 0 ≤ x ≤ 1 can be written as
f(x) = \frac{c_0}{2} + \sum_{n=1}^{\infty} c_n\cos(n\pi x) + \sum_{n=1}^{\infty} b_n\sin(n\pi x)          (3.38)

where

c_n = \frac{\int_0^1 f(x)\cos(n\pi x)\,dx}{\int_0^1 \cos^2(n\pi x)\,dx}, \qquad n = 0, 1, 2, 3, \ldots
b_n = \frac{\int_0^1 f(x)\sin(n\pi x)\,dx}{\int_0^1 \sin^2(n\pi x)\,dx}, \qquad n = 1, 2, 3, \ldots          (3.39)

Problems
1. Show that

\int_0^{\pi}\sin(nx)\sin(mx)\,dx = 0

when n \ne m.
2. Find the Fourier sine series for f (x) = 1 − x on the interval (0, 1). Sketch the periodic
continuation. Sum the series for the first five terms and sketch over two periods. Discuss
convergence of the series, paying special attention to convergence at x = 0 and x = 1.
3. Find the Fourier cosine series for 1 − x on (0, 1). Sketch the periodic continuation.
Sum the first two terms and sketch. Sum the first five terms and sketch over two periods.
Discuss convergence, paying special attention to convergence at x = 0 and x = 1.
3.3 STURM–LIOUVILLE PROBLEMS: ORTHOGONAL FUNCTIONS
We now proceed to show that solutions of a certain ordinary differential equation with certain
boundary conditions (called a Sturm–Liouville problem) are orthogonal functions with respect to a
weighting function, and that therefore a well-behaved function can be represented by an infinite
series of these orthogonal functions (called eigenfunctions), as in Eqs. (3.12) and (3.16).
Recall that the problem
X_{xx} + \lambda^2 X = 0, \qquad X(0) = 0, \qquad X(1) = 0, \qquad 0 \le x \le 1          (3.40)

has solutions only for \lambda = n\pi and that the solutions, \sin(n\pi x), are orthogonal on the interval
(0, 1). The sine functions are called eigenfunctions and \lambda = n\pi are called eigenvalues.
As another example, consider the problem

X_{xx} + \lambda^2 X = 0                                                      (3.41)

with boundary conditions

X(0) = 0, \qquad X(1) + H X_x(1) = 0                                          (3.42)

The solution of the differential equation is

X = A\sin(\lambda x) + B\cos(\lambda x)                                        (3.43)

The first boundary condition guarantees that B = 0. The second boundary condition is satisfied
by the equation

A\left[\sin(\lambda) + H\lambda\cos(\lambda)\right] = 0                         (3.44)

Since A cannot be zero, this implies that

-\tan(\lambda) = H\lambda                                                       (3.45)

The eigenfunctions are \sin(\lambda x) and the eigenvalues are solutions of Eq. (3.45). This is illustrated
graphically in Fig. 3.4.
We will generally be interested in the fairly general linear second-order differential
equation and boundary conditions given in Eqs. (3.46) and (3.47).
\frac{d}{dx}\left[r(x)\frac{dX}{dx}\right] + \left[q(x) + \lambda p(x)\right]X = 0, \qquad a \le x \le b          (3.46)

a_1 X(a) + a_2\,dX(a)/dx = 0
b_1 X(b) + b_2\,dX(b)/dx = 0                                                  (3.47)
FIGURE 3.4: Eigenvalues of − tan(λ) = Hλ
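The eigenvalues can also be found numerically rather than graphically. The Python sketch below is ours (H = 1 and the number of roots are arbitrary choices); it brackets one root of sin(λ) + Hλ cos(λ) = 0 in each branch of the tangent and refines it with a standard root finder:

import numpy as np
from scipy.optimize import brentq

H = 1.0

def f(lam):                       # Eq. (3.44); avoids the poles of tan(lambda)
    return np.sin(lam) + H * lam * np.cos(lam)

eigenvalues = []
for k in range(1, 6):             # one root in each ((2k-1)pi/2, (2k+1)pi/2)
    a = (2*k - 1) * np.pi / 2 + 1e-6
    b = (2*k + 1) * np.pi / 2 - 1e-6
    eigenvalues.append(brentq(f, a, b))
print(eigenvalues)

Working with sin(λ) + Hλ cos(λ) rather than with tan(λ) itself keeps the bracketing intervals free of singularities.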
Solutions exist only for discrete values \lambda_n, the eigenvalues. The corresponding solutions X_n(x)
are the eigenfunctions.
3.3.1 Orthogonality of Eigenfunctions
Consider two solutions of (3.46) and (3.47), X_n and X_m, corresponding to eigenvalues \lambda_n and
\lambda_m. The primes denote differentiation with respect to x.

(r X_m')' + q X_m = -\lambda_m p X_m                                           (3.48)
(r X_n')' + q X_n = -\lambda_n p X_n                                           (3.49)

Multiply the first by X_n and the second by X_m and subtract, obtaining the following:

(r X_n X_m' - r X_m X_n')' = (\lambda_n - \lambda_m)\, p X_m X_n               (3.50)

Integrating both sides,

r\left(X_m' X_n - X_n' X_m\right)\Big|_a^b = (\lambda_n - \lambda_m)\int_a^b p(x) X_n X_m\,dx          (3.51)

Inserting the boundary conditions into the left-hand side of (3.51),

X_m'(b)X_n(b) - X_n'(b)X_m(b) - X_m'(a)X_n(a) + X_n'(a)X_m(a)
  = -\frac{b_1}{b_2}X_m(b)X_n(b) + \frac{b_1}{b_2}X_n(b)X_m(b) + \frac{a_1}{a_2}X_m(a)X_n(a) - \frac{a_1}{a_2}X_n(a)X_m(a) = 0          (3.52)
Thus

(\lambda_n - \lambda_m)\int_a^b p(x) X_n X_m\,dx = 0, \qquad m \ne n            (3.53)

Notice that X_m and X_n are orthogonal with respect to the weighting function p(x) on the interval
(a, b). Obvious examples are the sine and cosine functions.
Example 3.3. Example 2.1 in Chapter 2 is an example in which the eigenfunctions are \sin(\lambda_n\xi)
and the eigenvalues are \lambda_n = (2n-1)\pi/2.

Example 3.4. If the boundary conditions in Example 2.1 in Chapter 2 are changed to

\Phi'(0) = 0, \qquad \Phi(1) = 0                                               (3.54)

we note that the general solution of the differential equation is

\Phi(\xi) = A\cos(\lambda\xi) + B\sin(\lambda\xi)                               (3.55)

The boundary conditions require that B = 0 and \cos(\lambda) = 0. The values of \lambda can take on
any of the values \pi/2, 3\pi/2, 5\pi/2, \ldots, (2n-1)\pi/2. The eigenfunctions are \cos(\lambda_n\xi) and the
eigenvalues are \lambda_n = (2n-1)\pi/2.

Example 3.5. Suppose the boundary conditions in the original problem (Example 2.1, Chapter 2)
take on the more complicated form

\Phi(0) = 0, \qquad \Phi(1) + h\Phi'(1) = 0                                     (3.56)

The first boundary condition requires that B = 0. The second boundary condition requires that

\sin(\lambda_n) + h\lambda_n\cos(\lambda_n) = 0                                  (3.57)

or

\lambda_n = -\frac{1}{h}\tan(\lambda_n)                                          (3.58)

which is a transcendental equation that must be solved for the eigenvalues. The eigenfunctions
are, of course, \sin(\lambda_n x).
Example 3.6. A Physical Example: Heat Conduction in Cylindrical Coordinates
The heat conduction equation in cylindrical coordinates is

\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r}, \qquad 0 < r < 1          (3.59)

with boundary conditions at r = 0 and r = 1 and initial condition u(0, r) = f(r).
Separating variables as u = R(r)T(t),

\frac{1}{T}\frac{dT}{dt} = \frac{1}{R}\frac{d^2R}{dr^2} + \frac{1}{rR}\frac{dR}{dr} = -\lambda^2, \qquad 0 \le r \le 1, \quad 0 \le t          (3.60)

(Why the minus sign?)
The equation for R(r) is

(r R')' + \lambda^2 r R = 0                                                    (3.61)

which is a Sturm–Liouville equation with weighting function r. It is an eigenvalue problem with
an infinite number of eigenfunctions corresponding to the eigenvalues \lambda_n. There will be two
solutions R_1(\lambda_n r) and R_2(\lambda_n r) for each \lambda_n. The solutions are called Bessel functions, and they
will be discussed in Chapter 4.

R_n(\lambda_n r) = A_n R_1(\lambda_n r) + B_n R_2(\lambda_n r)                  (3.62)

The boundary conditions on r are used to determine a relation between the constants A and
B. For solutions R(\lambda_n r) and R(\lambda_m r),

\int_0^1 r R(\lambda_n r) R(\lambda_m r)\,dr = 0, \qquad n \ne m                (3.63)

is the orthogonality condition.
The solution for T(t) is the exponential e^{-\lambda_n^2 t} for all n. Thus, the solution of (3.60),
because of superposition, can be written as an infinite series in a form something like

u = \sum_{n=0}^{\infty} K_n e^{-\lambda_n^2 t} R(\lambda_n r)                   (3.64)

and the orthogonality condition is used to find K_n as

K_n = \int_0^1 f(r) R(\lambda_n r)\,r\,dr \Big/ \int_0^1 R^2(\lambda_n r)\,r\,dr          (3.65)
Problems
1. For Example 2.1 in Chapter 2 with the new boundary conditions described in Example
3.2 above, find Kn and write the infinite series solution to the revised problem.
FURTHER READING
J. W. Brown and R. V. Churchill, Fourier Series and Boundary Value Problems, 6th edition. New
York: McGraw-Hill, 2001.
P. V. O’Neil, Advanced Engineering Mathematics. 5th edition. Brooks/Cole Thompson, Pacific
Grove, CA, 2003.
CHAPTER 4
Series Solutions of Ordinary
Differential Equations
4.1 GENERAL SERIES SOLUTIONS
The purpose of this chapter is to present a method of obtaining solutions of linear second-order
ordinary differential equations in the form of Taylor series. The methodology is then used to
obtain solutions of two special differential equations, Bessel’s equation and Legendre’s equation.
Properties of the solutions—Bessel functions and Legendre functions—which are extensively
used in solving problems in mathematical physics, are discussed briefly. Bessel functions are
used in solving both diffusion and vibrations problems in cylindrical coordinates. The functions
R(\lambda_n r) in Example 3.6 at the end of Chapter 3 are called Bessel functions. Legendre functions
are useful in solving problems in spherical coordinates. Associated Legendre functions, also
useful in solving problems in spherical coordinates, are briefly discussed.
4.1.1 Definitions
In this chapter we will be concerned with linear second-order equations. A general case is

a(x)u'' + b(x)u' + c(x)u = f(x)                                               (4.1)

Division by a(x) gives

u'' + p(x)u' + q(x)u = r(x)                                                   (4.2)

Recall that if r(x) is zero the equation is homogeneous. The solution can be written as the sum of
a homogeneous solution u_h(x) and a particular solution u_p(x). If r(x) is zero, u_p = 0. The nature
of the solution and the solution method depend on the nature of the coefficients p(x) and q(x).
If each of these functions can be expanded in a Taylor series about a point x0 the point is said
to be an ordinary point and the function is analytic at that point. If either of the coefficients is
not analytic at x0, the point is a singular point. If x0 is a singular point and if (x − x0) p(x) and
(x − x0)2q (x) are analytic, then the singularities are said to be removable and the singular point
is a regular singular point. If this is not the case the singular point is irregular.
4.1.2 Ordinary Points and Series Solutions
If the point x_0 is an ordinary point the dependent variable has a solution in the neighborhood
of x_0 of the form

u(x) = \sum_{n=0}^{\infty} c_n (x - x_0)^n                                     (4.3)
We now illustrate the solution method with two examples.

Example 4.1. Find a series solution in the form of Eq. (4.3) about the point x = 0 of the
differential equation

u'' + x^2 u = 0                                                                (4.4)

The point x = 0 is an ordinary point so at least near x = 0 there is a solution in the form of
the above series. Differentiating (4.3) twice and inserting it into (4.4),

u' = \sum_{n=0}^{\infty} n c_n x^{n-1}
u'' = \sum_{n=0}^{\infty} n(n-1) c_n x^{n-2}

\sum_{n=0}^{\infty} n(n-1) c_n x^{n-2} + \sum_{n=0}^{\infty} c_n x^{n+2} = 0   (4.5)

Note that the first term in the u' series is zero while the first two terms in the u'' series are zero.
We can shift the indices in both summations so that the power of x is the same in both series
by setting n - 2 = m in the first series:

\sum_{n=0}^{\infty} n(n-1) c_n x^{n-2} = \sum_{m=-2}^{\infty} (m+2)(m+1) c_{m+2} x^m = \sum_{m=0}^{\infty} (m+2)(m+1) c_{m+2} x^m          (4.6)

Noting that m is a "dummy variable" and that the first two terms in the series are zero, the series
can be written as

\sum_{n=0}^{\infty} (n+2)(n+1) c_{n+2} x^n                                     (4.7)

In a similar way we can write the second term as

\sum_{n=0}^{\infty} c_n x^{n+2} = \sum_{n=2}^{\infty} c_{n-2} x^n              (4.8)

We now have

\sum_{n=0}^{\infty} (n+2)(n+1) c_{n+2} x^n + \sum_{n=2}^{\infty} c_{n-2} x^n = 0          (4.9)

which can be written as

2c_2 + 6c_3 x + \sum_{n=2}^{\infty} \left[(n+2)(n+1) c_{n+2} + c_{n-2}\right] x^n = 0          (4.10)

Each coefficient of x^n must be zero in order to satisfy Eq. (4.10). Thus c_2 and c_3 must be zero
and

c_{n+2} = -\frac{c_{n-2}}{(n+2)(n+1)}                                           (4.11)

while c_0 and c_1 remain arbitrary.
Setting n = 2, we find that c_4 = -c_0/12, and setting n = 3, c_5 = -c_1/20. Since c_2 and
c_3 are zero, so are c_6, c_7, c_{10}, c_{11}, etc. Also, c_8 = -c_4/(8)(7) = c_0/(4)(3)(8)(7) and
c_9 = -c_5/(9)(8) = c_1/(5)(4)(9)(8).
The first few terms of the series are

u(x) = c_0\left(1 - x^4/12 + x^8/672 - \cdots\right) + c_1\left(x - x^5/20 + x^9/1440 - \cdots\right)          (4.12)

These are both alternating sign series with each term smaller than the previous term, at least for
x \le 1, and the series is therefore convergent at least under these conditions.
The constants c_0 and c_1 can be determined from appropriate boundary conditions. For
example, if u(0) = 0 then c_0 = 0, and if also u(1) = 1,

c_1\left[1 - 1/20 + 1/1440 - \cdots\right] = 1
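The recurrence (4.11) is also convenient for computation. The Python sketch below is ours (the 40-term truncation and the initial conditions u(0) = 1, u'(0) = 0 are arbitrary); it builds the series coefficients and compares the partial sum with a direct numerical integration of the same equation:

import numpy as np
from scipy.integrate import solve_ivp

def series_coeffs(c0, c1, n_max=40):
    c = np.zeros(n_max + 1)
    c[0], c[1] = c0, c1           # c_2 = c_3 = 0 automatically
    for n in range(2, n_max - 1):
        c[n + 2] = -c[n - 2] / ((n + 2) * (n + 1))   # recurrence (4.11)
    return c

def u_series(x, c):
    return sum(cn * x**n for n, cn in enumerate(c))

c = series_coeffs(c0=1.0, c1=0.0)

# Independent check: integrate u'' + x^2 u = 0 with u(0) = 1, u'(0) = 0.
sol = solve_ivp(lambda x, y: [y[1], -x**2 * y[0]], (0.0, 1.0), [1.0, 0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)
for x in (0.25, 0.5, 1.0):
    print(x, u_series(x, c), sol.sol(x)[0])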
Example 4.2. Find a series solution in the form of Eq. (4.3) of the differential equation

u'' + x u' + u = x^2                                                           (4.13)

valid near x = 0.
Assuming a solution in the form of (4.3), differentiating and inserting into (4.13),

\sum_{n=0}^{\infty} (n-1)n c_n x^{n-2} + \sum_{n=0}^{\infty} n c_n x^n + \sum_{n=0}^{\infty} c_n x^n - x^2 = 0          (4.14)

Shifting the indices as before,

\sum_{n=0}^{\infty} (n+2)(n+1) c_{n+2} x^n + \sum_{n=0}^{\infty} n c_n x^n + \sum_{n=0}^{\infty} c_n x^n - x^2 = 0          (4.15)

Once again, each of the coefficients of x^n must be zero:

n = 0: \quad 2c_2 + c_0 = 0, \qquad c_2 = -c_0/2
n = 1: \quad 6c_3 + 2c_1 = 0, \qquad c_3 = -c_1/3
n = 2: \quad 12c_4 + 3c_2 - 1 = 0, \qquad c_4 = (1 + 3c_0/2)/12
n > 2: \quad c_{n+2} = -\frac{c_n}{n+2}                                        (4.16)

The last of these is called a recurrence formula.
Thus,

u = c_0\left(1 - x^2/2 + x^4/8 - x^6/(8)(6) + \cdots\right)
  + c_1\left(x - x^3/3 + x^5/(3)(5) - x^7/(3)(5)(7) + \cdots\right)
  + x^4\left(1/12 - x^2/(12)(6) + \cdots\right)                                 (4.17)

Note that the series on the third line of (4.17) is the particular solution of (4.13). The constants
c_0 and c_1 are to be evaluated using the boundary conditions.
4.1.3 Lessons: Finding Series Solutions for Differential Equations
with Ordinary Points
If x0 is an ordinary point assume a solution in the form of Eq. (4.3) and substitute into
the differential equation. Then equate the coefficients of equal powers of x. This will give a
recurrence formula from which two series may be obtained in terms of two arbitrary constants.
These may be evaluated by using the two boundary conditions.
Problems
1. The differential equation

u'' + x u' + x u = x

has ordinary points everywhere. Find a series solution near x = 0.

2. Find a series solution of the differential equation

u'' + (1 + x^2)u = x

near x = 0 and identify the particular solution.
3. The differential equation

(1 - x^2)u'' + u = 0

has singular points at x = \pm 1, but is analytic near x = 0. Find a series solution that is
valid near x = 0 and discuss the radius of convergence.
4.1.4 Regular Singular Points and the Method of Frobenius
If x_0 is a singular point in (4.2) there may not be a power series solution of the form of Eq. (4.3).
In such a case we proceed by assuming a solution of the form

u(x) = \sum_{n=0}^{\infty} c_n (x - x_0)^{n+r}                                 (4.18)

in which c_0 \ne 0 and r is any constant, not necessarily an integer. This is called the method of
Frobenius and the series is called a Frobenius series. The Frobenius series need not be a power
series because r may be a fraction or even negative. Differentiating once,

u' = \sum_{n=0}^{\infty} (n+r) c_n (x - x_0)^{n+r-1}                           (4.19)

and differentiating again,

u'' = \sum_{n=0}^{\infty} (n+r-1)(n+r) c_n (x - x_0)^{n+r-2}                   (4.20)

These are then substituted into the differential equation, shifting is done where required so
that each term contains x raised to the power n, and the coefficients of x^n are each set equal
to zero. The coefficient associated with the lowest power of x will be a quadratic equation that
can be solved for the index r. It is called an indicial equation. There will therefore be two roots
of this equation corresponding to two series solutions. The values of c_n are determined as above
by a recurrence equation for each of the roots. Three possible cases are important: (a) the roots
are distinct and do not differ by an integer, (b) the roots differ by an integer, and (c) the roots
are coincident, i.e., repeated. We illustrate the method by a series of examples.
Example 4.3 (distinct roots). Solve the equation

x^2 u'' + x(1/2 + 2x)u' + (x - 1/2)u = 0                                       (4.21)

The coefficient of the u' term is

p(x) = \frac{1/2 + 2x}{x}                                                       (4.22)

and the coefficient of the u term is

q(x) = \frac{x - 1/2}{x^2}                                                      (4.23)
Both have singularities at x = 0. However, multiplying p(x) by x and q(x) by x^2 removes the
singularities. Thus x = 0 is a regular singular point. Assume a solution in the form of the
Frobenius series u = \sum_{n=0}^{\infty} c_n x^{n+r}, differentiate twice and substitute into (4.21), obtaining

\sum_{n=0}^{\infty} (n+r)(n+r-1) c_n x^{n+r} + \sum_{n=0}^{\infty} \tfrac{1}{2}(n+r) c_n x^{n+r} + \sum_{n=0}^{\infty} 2(n+r) c_n x^{n+r+1}
  + \sum_{n=0}^{\infty} c_n x^{n+r+1} - \sum_{n=0}^{\infty} \tfrac{1}{2} c_n x^{n+r} = 0          (4.24)

The indices of the third and fourth summations are now shifted as in Example 4.1 and we find

\left[r(r-1) + \tfrac{1}{2}r - \tfrac{1}{2}\right] c_0 x^r + \sum_{n=1}^{\infty} \left[(n+r)(n+r-1) + \tfrac{1}{2}(n+r) - \tfrac{1}{2}\right] c_n x^{n+r}
  + \sum_{n=1}^{\infty} \left[2(n+r-1) + 1\right] c_{n-1} x^{n+r} = 0            (4.25)

Each coefficient must be zero for the equation to be true. Thus the coefficient of the c_0 term
must be zero since c_0 itself cannot be zero. This gives a quadratic equation to be solved for r,
and this is called an indicial equation (since we are solving for the index, r):

r(r-1) + \tfrac{1}{2}r - \tfrac{1}{2} = 0                                        (4.26)

with r = 1 and r = -1/2. The coefficients of x^{n+r} must also be zero. Thus

\left[(n+r)(n+r-1) + \tfrac{1}{2}(n+r) - \tfrac{1}{2}\right] c_n + \left[2(n+r-1) + 1\right] c_{n-1} = 0          (4.27)

The recurrence equation is therefore

c_n = -\frac{2(n+r-1) + 1}{(n+r)(n+r-1) + \tfrac{1}{2}(n+r) - \tfrac{1}{2}}\, c_{n-1}          (4.28)

For the case of r = 1,

c_n = -\frac{2n+1}{n\left(n + \tfrac{3}{2}\right)}\, c_{n-1}                     (4.29)
Computing a few of the coefficients,

c_1 = -\frac{3}{5/2}\, c_0 = -\frac{6}{5} c_0
c_2 = -\frac{5}{7}\, c_1 = \frac{6}{7} c_0
c_3 = -\frac{7}{27/2}\, c_2 = -\frac{4}{9} c_0

etc., and the first Frobenius series is

u_1 = c_0\left(x - \frac{6}{5}x^2 + \frac{6}{7}x^3 - \frac{4}{9}x^4 + \cdots\right)          (4.30)

Setting r = -1/2 in the recurrence equation (4.28) and using b_n instead of c_n to distinguish it
from the first case,

b_n = -\frac{2n - 2}{n\left(n - \tfrac{3}{2}\right)}\, b_{n-1}                   (4.31)

Noting that in this case b_1 = 0, all the following b_n must be zero and the second Frobenius
series has only one term: b_0 x^{-1/2}. The complete solution is

u = c_0\left(x - \frac{6}{5}x^2 + \frac{6}{7}x^3 - \frac{4}{9}x^4 + \cdots\right) + b_0 x^{-1/2}          (4.32)
Example 4.4 (repeated roots). Next consider the differential equation

x^2 u'' - x u' + (x + 1)u = 0                                                   (4.33)

There is a regular singular point at x = 0, so we attempt a Frobenius series around x = 0.
Differentiating (4.18) and substituting into (4.33),

\sum_{n=0}^{\infty} (n+r-1)(n+r) c_n x^{n+r} - \sum_{n=0}^{\infty} (n+r) c_n x^{n+r} + \sum_{n=0}^{\infty} c_n x^{n+r} + \sum_{n=0}^{\infty} c_n x^{n+r+1} = 0          (4.34)

or

\left[r(r-1) - r + 1\right] c_0 x^r + \sum_{n=1}^{\infty} \left[(n+r-1)(n+r) - (n+r) + 1\right] c_n x^{n+r} + \sum_{n=1}^{\infty} c_{n-1} x^{n+r} = 0          (4.35)

where we have shifted the index in the last sum.
The indicial equation is

r(r-1) - r + 1 = 0                                                              (4.36)
and the roots of this equation are both r = 1. Setting the last two sums to zero we find the
recurrence equation

c_n = -\frac{1}{(n+r-1)(n+r) - (n+r) + 1}\, c_{n-1}                              (4.37)

and since r = 1,

c_n = -\frac{1}{n(n+1) - (n+1) + 1}\, c_{n-1}                                    (4.38)

c_1 = -c_0
c_2 = \frac{-1}{6 - 3 + 1}\, c_1 = \frac{1}{4} c_0
c_3 = \frac{-1}{12 - 4 + 1}\, c_2 = -\frac{1}{36} c_0

etc.
The Frobenius series is

u_1 = c_0\left(x - x^2 + \frac{1}{4}x^3 - \frac{1}{36}x^4 + \cdots\right)        (4.39)

In this case there is no second solution in the form of a Frobenius series because of the repeated
root. We shall soon see what form the second solution takes.
Example 4.5 (roots differing by an integer 1). Next consider the equation

x^2 u'' - 2x u' + (x + 2)u = 0                                                  (4.40)

There is a regular singular point at x = 0. We therefore expect a solution in the form of the
Frobenius series (4.18). Substituting (4.18), (4.19), (4.20) into our differential equation, we
obtain

\sum_{n=0}^{\infty} (n+r)(n+r-1) c_n x^{n+r} - \sum_{n=0}^{\infty} 2(n+r) c_n x^{n+r} + \sum_{n=0}^{\infty} 2 c_n x^{n+r} + \sum_{n=0}^{\infty} c_n x^{n+r+1} = 0          (4.41)

Taking out the n = 0 term and shifting the last summation,

\left[r(r-1) - 2r + 2\right] c_0 x^r + \sum_{n=1}^{\infty} \left[(n+r)(n+r-1) - 2(n+r) + 2\right] c_n x^{n+r}
  + \sum_{n=1}^{\infty} c_{n-1} x^{n+r} = 0                                      (4.42)
The first term gives the indicial equation

r(r-1) - 2r + 2 = 0                                                             (4.43)

There are two distinct roots, r_1 = 2 and r_2 = 1. However they differ by an integer, r_1 - r_2 = 1.
Substituting r_1 = 2 into (4.42) and noting that each coefficient of x^{n+r} must be zero,

\left[(n+2)(n+1) - 2(n+2) + 2\right] c_n + c_{n-1} = 0                           (4.44)

The recurrence equation is

c_n = \frac{-c_{n-1}}{(n+2)(n-1) + 2}                                            (4.45)

c_1 = -\frac{c_0}{2}
c_2 = -\frac{c_1}{6} = \frac{c_0}{12}
c_3 = -\frac{c_2}{12} = -\frac{c_0}{144}

The first Frobenius series is therefore

u_1 = c_0\left(x^2 - \frac{1}{2}x^3 + \frac{1}{12}x^4 - \frac{1}{144}x^5 + \cdots\right)          (4.46)

We now attempt to find the Frobenius series corresponding to r_2 = 1. Substituting into (4.42)
we find that

\left[n(n+1) - 2(n+1) + 2\right] c_n = -c_{n-1}                                  (4.47)

When n = 1, c_0 must be zero. Hence c_n must be zero for all n and the attempt to find a second
Frobenius series has failed. This will not always be the case when roots differ by an integer, as
illustrated in the following example.
Example 4.6 (roots differing by an integer 2). Consider the differential equation

x^2 u'' + x^2 u' - 2u = 0                                                       (4.48)

You may show that the indicial equation is r^2 - r - 2 = 0 with roots r_1 = 2, r_2 = -1, and the
roots differ by an integer. When r = 2 the recurrence equation is

c_n = -\frac{n+1}{n(n+3)}\, c_{n-1}                                              (4.49)
The first Frobenius series is

u_1 = c_0 x^2\left(1 - \frac{1}{2}x + \frac{3}{20}x^2 - \frac{1}{30}x^3 + \cdots\right)          (4.50)

When r = -1 the recurrence equation is

\left[(n-1)(n-2) - 2\right] b_n + (n-2) b_{n-1} = 0                              (4.51)

When n = 3 this results in b_2 = 0. Thus b_n = 0 for all n \ge 2 and the second series terminates:

u_2 = b_0\left(\frac{1}{x} - \frac{1}{2}\right)                                   (4.52)
4.1.5 Lessons: Finding Series Solutions for Differential Equations with Regular
Singular Points
1. Assume a solution of the form

   u = \sum_{n=0}^{\infty} c_n x^{n+r}, \qquad c_0 \ne 0                          (4.53)

   Differentiate term by term and insert into the differential equation. Set the coefficient
   of the lowest power of x to zero to obtain a quadratic equation on r.
   If the indicial equation yields two roots that do not differ by an integer there will
   always be two Frobenius series, one for each root of the indicial equation.
2. If the roots are the same (repeated roots) the form of the second solution will be

   u_2 = u_1\ln(x) + \sum_{n=1}^{\infty} b_n x^{n+r_1}                            (4.54)

   This equation is substituted into the differential equation to determine b_n.
3. If the roots differ by an integer, choose the largest root to obtain a Frobenius series for
   u_1. The second solution may be another Frobenius series. If the method fails, assume a
   solution of the form

   u_2 = u_1\ln(x) + \sum_{n=1}^{\infty} b_n x^{n+r_2}                            (4.55)

   This equation is substituted into the differential equation to find b_n.
This is considered in the next section.
4.1.6 Logarithms and Second Solutions
Example 4.7. Reconsider Example 4.4 and assume a solution in the form of (4.54). Recall
that in Example 4.4 the differential equation was

x^2 u'' - x u' + (1 + x)u = 0                                                    (4.56)

and the indicial equation yielded a double root at r = 1. A single Frobenius series was

u_1 = x - x^2 + \frac{x^3}{4} - \frac{x^4}{36} + \cdots

Now differentiate Eq. (4.54):

u_2' = u_1'\ln x + \frac{u_1}{x} + \sum_{n=1}^{\infty} (n+r) b_n x^{n+r-1}

u_2'' = u_1''\ln x + \frac{2}{x}u_1' - \frac{u_1}{x^2} + \sum_{n=1}^{\infty} (n+r-1)(n+r) b_n x^{n+r-2}          (4.57)

Inserting this into the differential equation gives

\ln(x)\left[x^2 u_1'' - x u_1' + (1+x)u_1\right] + 2(x u_1' - u_1)
  + \sum_{n=1}^{\infty} \left[b_n(n+r-1)(n+r) x^{n+r} - b_n(n+r) x^{n+r} + b_n x^{n+r}\right] + \sum_{n=1}^{\infty} b_n x^{n+r+1} = 0          (4.58)

The first term on the left-hand side of (4.58) is clearly zero because the term in brackets is the
original equation. Noting that r = 1 in this case and substituting from the Frobenius series for
u_1, we find (c_0 can be set equal to unity without losing generality)

2\left(-x^2 + \frac{x^3}{2} - \frac{x^4}{12} + \cdots\right) + \sum_{n=1}^{\infty} \left[n(n+1) - (n+1) + 1\right] b_n x^{n+1} + \sum_{n=2}^{\infty} b_{n-1} x^{n+1} = 0          (4.59)

or

-2x^2 + x^3 - \frac{x^4}{6} + \cdots + b_1 x^2 + \sum_{n=2}^{\infty} \left[n^2 b_n + b_{n-1}\right] x^{n+1} = 0          (4.60)

Equating coefficients of like powers of x, we find that b_1 = 2.
For n \ge 2,

1 + 4b_2 + b_1 = 0, \qquad b_2 = -\frac{3}{4}
-\frac{1}{6} + 9b_3 + b_2 = 0, \qquad b_3 = \frac{11}{108}

etc. Thus

u_2 = u_1\ln x + \left(2x^2 - \frac{3}{4}x^3 + \frac{11}{108}x^4 - \cdots\right)          (4.61)

The complete solution is

u = \left[C_1 + C_2\ln x\right] u_1 + C_2\left(2x^2 - \frac{3}{4}x^3 + \frac{11}{108}x^4 - \cdots\right)          (4.62)
(4.62)
Example 4.8. Reconsider Example 4.5 in which a second Frobenius series could not be found
because the roots of the indicial equation differed by an integer. We attempt a second solution
in the form of (4.55).
The differential equation in Example 4.5 was
x2u
(cid:1)(cid:1) − 2xu
(cid:1) + (x + 2)u = 0
and the roots of the indicial equation were r = 2 and r = 1, and are therefore separated by an
integer. We found one Frobenius series
= x2 − 1
2
x4 − 1
144
x3 + 1
12
x5 + · · ·
u1
for the root r = 2, but were unable to find another Frobenius series for the case of r = 1.
Assume a second solution of the form in Eq. (4.55). Differentiating and substituting into
(4.40)
(cid:1) + (x + 2)u] ln(x) + 2xu
(cid:1) − 3u1
(cid:1)(cid:1)
− 2xu
1
∞(cid:1)
bn[(n + r )(n + r − 1) − 2(n + r ) + 2]xn+r
[x2u
+
+
n=1
∞(cid:1)
n=1
bnxn+r +1 = 0
Noting that the first term in the brackets is zero, inserting u1 and u
that r2
= 1
x2 − 3
2
x3 + 5
12
x4 − 7
144
x5 + . . . + b0x2 +
∞(cid:1)
n=2
{[n(n − 1)]bn
+ bn−1
}xn+1 = 0
(4.64)
(4.63)
(cid:1)
1 from (4.50) and noting
Equating x^2 terms, we find that b_0 = -1. For higher order terms,
taking b_1 = 0,

\frac{3}{2} = 2b_2 + b_1, \qquad b_2 = \frac{3}{4}
-\frac{5}{12} = 6b_3 + b_2, \qquad b_3 = -\frac{7}{36}

The second solution is

u_2 = u_1\ln(x) - \left(x - \frac{3}{4}x^3 + \frac{7}{36}x^4 - \cdots\right)      (4.65)

The complete solution is therefore

u = \left[C_1 + C_2\ln x\right] u_1 - C_2\left(x - \frac{3}{4}x^3 + \frac{7}{36}x^4 - \cdots\right)          (4.66)
Problems
1. Find two Frobenius series solutions of

x^2 u'' + 2x u' + (x^2 - 2)u = 0

2. Find two Frobenius series solutions of

x^2 u'' + x u' + \left(x^2 - \frac{1}{4}\right)u = 0

3. Show that the indicial equation for the differential equation

x u'' + u' + x u = 0

has a repeated root s = 0 and that the differential equation has only one Frobenius series
solution. Find that solution. Then find another solution in the form

u = \ln(x)\sum_{n=0}^{\infty} c_n x^{n+s} + \sum_{m=0}^{\infty} a_m x^{s+m}

where the first summation above is the first Frobenius solution.
4.2 BESSEL FUNCTIONS
A few differential equations are so widely useful in applied mathematics that they have been
named after the mathematician who first explored their theory. Such is the case with Bessel's
equation. It occurs in problems involving the Laplacian \nabla^2 u in cylindrical coordinates when
variables are separated. Bessel's equation is a Sturm–Liouville equation of the form

\rho^2\frac{d^2u}{d\rho^2} + \rho\frac{du}{d\rho} + (\lambda^2\rho^2 - \nu^2)u = 0          (4.67)

Changing the independent variable to x = \lambda\rho, the equation becomes

x^2 u'' + x u' + (x^2 - \nu^2)u = 0                                              (4.68)
4.2.1 Solutions of Bessel's Equation
Recalling the standard forms (4.1) and (4.2) we see that it is a linear homogeneous equation
with variable coefficients and with a regular singular point at x = 0. We therefore assume a
solution in the form of a Frobenius series (4.18):

u = \sum_{j=0}^{\infty} c_j x^{j+r}                                              (4.69)

Upon differentiating twice and substituting into (4.68) we find

\sum_{j=0}^{\infty} \left[(j+r-1)(j+r) + (j+r) - \nu^2\right] c_j x^{j+r} + \sum_{j=0}^{\infty} c_j x^{j+r+2} = 0          (4.70)

In general \nu can be any real number. We will first explore some of the properties of the solution
when \nu is a nonnegative integer n = 0, 1, 2, 3, \ldots. First note that

(j+r-1)(j+r) + (j+r) = (j+r)^2                                                   (4.71)

Shifting the exponent in the second summation and writing out the first two terms in the first,

(r-n)(r+n) c_0 x^r + (r+1-n)(r+1+n) c_1 x^{r+1} + \sum_{j=2}^{\infty} \left[(r+j-n)(r+j+n) c_j + c_{j-2}\right] x^{j+r} = 0          (4.72)

In order for the coefficient of the x^r term to vanish, r = n or r = -n. (This is the indicial
equation.) In order for the coefficient of the x^{r+1} term to vanish, c_1 = 0. For each term in the
summation to vanish,

c_j = \frac{-1}{(r+j-n)(r+j+n)}\, c_{j-2} = \frac{-1}{j(2n+j)}\, c_{j-2}, \qquad r = n, \quad j = 2, 3, 4, \ldots          (4.73)

This is the recurrence relation. Since c_1 = 0, all c_j = 0 when j is an odd number. It is therefore
convenient to write j = 2k and note that

c_{2k} = \frac{-1}{2^2 k(n+k)}\, c_{2k-2}                                        (4.74)

so that

c_{2k} = \frac{(-1)^k}{k!(n+1)(n+2)\cdots(n+k)\,2^{2k}}\, c_0                    (4.75)

The Frobenius series is

u = c_0 x^n\left[1 + \sum_{k=1}^{\infty} \frac{(-1)^k}{k!(n+1)(n+2)\cdots(n+k)}\left(\frac{x}{2}\right)^{2k}\right]          (4.76)

Now c_0 is an arbitrary constant so we can choose it to be c_0 = 1/(n!\,2^n), in which case the above
equation reduces to

J_n = u = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!(n+k)!}\left(\frac{x}{2}\right)^{n+2k}          (4.77)

The usual notation is J_n and the function is called a Bessel function of the first kind of order n.
Note that we can immediately conclude from (4.77) that

J_n(-x) = (-1)^n J_n(x)                                                           (4.78)

Note that the roots of the indicial equation differ by an integer. When r = -n, the recurrence
(4.73) does not yield a useful second solution since the denominator is zero for j = 2n. In any
case it is easy to show that J_{-n}(x) = (-1)^n J_n(x), so when \nu is an integer the two solutions are
not independent.
A second solution is determined by the methods detailed above and involves natural
logarithms. The details are very messy and will not be given here. The result is

Y_n(x) = \frac{2}{\pi}\left\{ J_n(x)\left[\ln\left(\frac{x}{2}\right) + \gamma\right]
  + \sum_{k=1}^{\infty} \frac{(-1)^{k+1}\left[\phi(k) + \phi(k+n)\right]}{2^{2k+n+1}\,k!(k+n)!}\, x^{2k+n}\right\}
  - \frac{2}{\pi}\sum_{k=0}^{n-1} \frac{(n-k-1)!}{2^{2k-n+1}\,k!}\, x^{2k-n}      (4.79)

In this equation \phi(k) = 1 + 1/2 + 1/3 + \cdots + 1/k and \gamma is Euler's constant, 0.5772156649\ldots
FIGURE 4.1: Bessel functions of the first kind
Bessel functions of the first and second kinds of order zero are particularly useful in
solving practical problems (Fig. 4.1). For these cases

J_0(x) = \sum_{k=0}^{\infty} \frac{(-1)^k}{(k!)^2}\left(\frac{x}{2}\right)^{2k}   (4.80)

and

Y_0 = J_0(x)\ln(x) + \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{2^{2k}(k!)^2}\,\phi(k)\,x^{2k}          (4.81)
The case of \nu \ne n. Recall that in (4.70), if \nu is not an integer, a part of the denominator is

(1+\nu)(2+\nu)(3+\nu)\cdots(n+\nu)                                                (4.82)

We were then able to use the familiar properties of factorials to simplify the expression for
J_n(x). If \nu \ne n we can use the properties of the gamma function to the same end. The gamma
function is defined as

\Gamma(\nu) = \int_0^{\infty} t^{\nu-1} e^{-t}\,dt                                (4.83)

Note that

\Gamma(\nu+1) = \int_0^{\infty} t^{\nu} e^{-t}\,dt                                (4.84)

and integrating by parts,

\Gamma(\nu+1) = \left[-t^{\nu} e^{-t}\right]_0^{\infty} + \nu\int_0^{\infty} t^{\nu-1} e^{-t}\,dt = \nu\,\Gamma(\nu)          (4.85)

and (4.82) can be written as

(1+\nu)(2+\nu)(3+\nu)\cdots(n+\nu) = \frac{\Gamma(n+\nu+1)}{\Gamma(\nu+1)}        (4.86)

so that when \nu is not an integer

J_\nu(x) = \sum_{n=0}^{\infty} \frac{(-1)^n}{2^{2n+\nu}\,n!\,\Gamma(n+\nu+1)}\, x^{2n+\nu}          (4.87)

Fig. 4.3 is a graphical representation of the gamma function.
Here are the rules:
1. If 2\nu is not an integer, J_\nu and J_{-\nu} are linearly independent and the general solution of
   Bessel's equation of order \nu is

   u(x) = A J_\nu(x) + B J_{-\nu}(x)                                              (4.88)

   where A and B are constants to be determined by boundary conditions.
2. If 2\nu is an odd positive integer, J_\nu and J_{-\nu} are still linearly independent and the solution
   form (4.88) is still valid.
3. If 2\nu is an even integer, J_\nu(x) and J_{-\nu}(x) are not linearly independent and the solution
   takes the form

   u(x) = A J_\nu(x) + B Y_\nu(x)                                                 (4.89)

Bessel functions are tabulated functions, just as are exponentials and trigonometric functions.
Some examples of their shapes are shown in Figs. 4.1 and 4.2.
Note that both J_\nu(x) and Y_\nu(x) have an infinite number of zeros and we denote them as
\lambda_j, j = 0, 1, 2, 3, \ldots
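In practice one rarely sums these series by hand; standard libraries tabulate the Bessel functions and their zeros. The Python sketch below is ours (it assumes SciPy is available); it evaluates J_0 and Y_0, lists the first zeros of J_0, and spot-checks one of the recurrence relations collected in Table 4.1 below:

import numpy as np
from scipy.special import jv, yv, jn_zeros

x = np.linspace(0.1, 10.0, 5)
print(jv(0, x))           # J_0(x)
print(yv(0, x))           # Y_0(x); unbounded as x -> 0
print(jn_zeros(0, 5))     # first zeros of J_0: about 2.405, 5.520, 8.654, ...

# Spot-check of relation 3 in Table 4.1: J_{nu-1} + J_{nu+1} = (2 nu / x) J_nu
nu = 2.0
print(np.allclose(jv(nu - 1, x) + jv(nu + 1, x), 2 * nu / x * jv(nu, x)))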
FIGURE 4.2: Bessel functions of the second kind
FIGURE 4.3: The gamma function
Some important relations involving Bessel functions are shown in Table 4.1. We will
derive only the first, namely
\frac{d}{dx}\left(x^{\nu} J_\nu(x)\right) = x^{\nu} J_{\nu-1}(x)                  (4.90)

\frac{d}{dx}\left(x^{\nu} J_\nu(x)\right) = \frac{d}{dx}\left[\sum_{n=0}^{\infty} \frac{(-1)^n}{2^{2n+\nu}\,n!\,\Gamma(n+\nu+1)}\, x^{2n+2\nu}\right]          (4.91)
TABLE 4.1: Some Properties of Bessel Functions

1. \left[x^{\nu} J_\nu(x)\right]' = x^{\nu} J_{\nu-1}(x)
2. \left[x^{-\nu} J_\nu(x)\right]' = -x^{-\nu} J_{\nu+1}(x)
3. J_{\nu-1}(x) + J_{\nu+1}(x) = \frac{2\nu}{x} J_\nu(x)
4. J_{\nu-1}(x) - J_{\nu+1}(x) = 2 J_\nu'(x)
5. \int x^{\nu} J_{\nu-1}(x)\,dx = x^{\nu} J_\nu(x) + \text{constant}
6. \int x^{-\nu} J_{\nu+1}(x)\,dx = -x^{-\nu} J_\nu(x) + \text{constant}
= \sum_{n=0}^{\infty} \frac{(-1)^n\,2(n+\nu)}{2^{2n+\nu}\,n!\,(n+\nu)\,\Gamma(n+\nu)}\, x^{2n+2\nu-1}          (4.92)

= x^{\nu}\sum_{n=0}^{\infty} \frac{(-1)^n}{2^{2n+\nu-1}\,n!\,\Gamma(n+\nu)}\, x^{2n+\nu-1} = x^{\nu} J_{\nu-1}(x)          (4.93)

These will prove important when we begin solving partial differential equations in cylindrical
coordinates using separation of variables.
Bessel's equation is of the form of a Sturm–Liouville equation and the functions J_n(x) are
orthogonal with respect to a weight function \rho (see Eqs. (3.46) and (3.53), Chapter 3).
Note that Bessel's equation (4.67) with \nu = n is

\rho^2 J_n'' + \rho J_n' + (\lambda^2\rho^2 - n^2) J_n = 0                        (4.94)

which can be written as

\frac{d}{d\rho}\left(\rho J_n'\right)^2 + (\lambda^2\rho^2 - n^2)\frac{d}{d\rho} J_n^2 = 0          (4.95)

Integrating, we find that

\left[(\rho J_n')^2 + (\lambda^2\rho^2 - n^2) J_n^2\right]_0^1 - 2\lambda^2\int_0^1 \rho J_n^2\,d\rho = 0          (4.96)

Thus,

2\lambda^2\int_0^1 \rho J_n^2\,d\rho = \lambda^2\left[J_n'(\lambda)\right]^2 + (\lambda^2 - n^2)\left[J_n(\lambda)\right]^2          (4.97)
Thus, we note that if the eigenvalues \lambda_j are the roots of J_n(\lambda) = 0, the orthogonality
condition is, according to Eq. (3.53) in Chapter 3,

\int_0^1 \rho\,J_n(\lambda_j\rho) J_n(\lambda_k\rho)\,d\rho = 0, \qquad j \ne k
                                                  = \frac{1}{2}\left[J_{n+1}(\lambda_j)\right]^2, \qquad j = k          (4.98)

On the other hand, if the eigenvalues are the roots of the equation

H J_n(\lambda_j) + \lambda_j J_n'(\lambda_j) = 0

then

\int_0^1 \rho\,J_n(\lambda_j\rho) J_n(\lambda_k\rho)\,d\rho = 0, \qquad j \ne k
                                                  = \frac{(\lambda_j^2 - n^2 + H^2)\left[J_n(\lambda_j)\right]^2}{2\lambda_j^2}, \qquad j = k          (4.99)

Using the equations in the table above and integrating by parts, it is not difficult to show that

\int_0^x s^n J_0(s)\,ds = x^n J_1(x) + (n-1)x^{n-1} J_0(x) - (n-1)^2\int_0^x s^{n-2} J_0(s)\,ds          (4.100)
4.2.2 Fourier–Bessel Series
Owing to the fact that Bessel's equation with appropriate boundary conditions is a Sturm–
Liouville system, it is possible to use the orthogonality property to expand any piecewise
continuous function on the interval 0 < x < 1 as a series of Bessel functions. For example,
let

f(x) = \sum_{n=1}^{\infty} A_n J_0(\lambda_n x)                                   (4.101)

Multiplying both sides by x J_0(\lambda_k x)\,dx and integrating from x = 0 to x = 1 (recall that the
weighting function x must be used to insure orthogonality) and noting the orthogonality
property, we find that

f(x) = \sum_{j=1}^{\infty} \frac{\int_0^1 x f(x) J_0(\lambda_j x)\,dx}{\int_0^1 x\left[J_0(\lambda_j x)\right]^2 dx}\, J_0(\lambda_j x)          (4.102)
Example 4.9. Derive a Fourier–Bessel series representation of 1 on the interval 0 < x < 1.
We note that with J_0(\lambda_j) = 0,

\int_0^1 x\left[J_0(\lambda_j x)\right]^2 dx = \frac{1}{2}\left[J_1(\lambda_j)\right]^2          (4.103)

and

\int_0^1 x J_0(\lambda_j x)\,dx = \frac{J_1(\lambda_j)}{\lambda_j}                 (4.104)

Thus

1 = 2\sum_{j=1}^{\infty} \frac{J_0(\lambda_j x)}{\lambda_j J_1(\lambda_j)}         (4.105)
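A quick numerical check of (4.105) (ours, not the text's; the 200-term truncation and the sample points are arbitrary) uses SciPy's tabulated zeros of J_0:

import numpy as np
from scipy.special import j0, j1, jn_zeros

lam = jn_zeros(0, 200)                   # first 200 roots of J_0
x = np.array([0.1, 0.5, 0.9])
partial = np.sum(2.0 * j0(np.outer(lam, x)) / (lam * j1(lam))[:, None], axis=0)
print(partial)   # approaches 1 at interior points as more terms are included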
Example 4.10 (A problem in cylindrical coordinates). A cylinder of radius r_1 is initially at
a temperature u_0 when its surface temperature is increased to u_1. It is sufficiently long that
variation in the z direction may be neglected and there is no variation in the \theta direction. There
is no heat generation. From Chapter 1, Eq. (1.11),

u_t = \frac{\alpha}{r}(r u_r)_r
u(0, r) = u_0
u(t, r_1) = u_1                                                                   (4.106)
u \text{ is bounded}                                                               (4.107)

The length scale is r_1 and the time scale is r_1^2/\alpha. A dimensionless dependent variable that
normalizes the problem is (u - u_1)/(u_0 - u_1) = U. Setting \eta = r/r_1 and \tau = t\alpha/r_1^2,

U_\tau = \frac{1}{\eta}(\eta U_\eta)_\eta
U(0, \eta) = 1
U(\tau, 1) = 0                                                                     (4.108)
U \text{ is bounded}                                                                (4.109)

Separate variables as U = T(\tau)R(\eta). Substitute into the differential equation and divide by TR:

\frac{T_\tau}{T} = \frac{1}{\eta R}(\eta R_\eta)_\eta = \pm\lambda^2               (4.110)
where the minus sign is chosen so that the solution remains bounded in time. The solution for T is
exponential, and we recognize the equation for R as Bessel's equation with \nu = 0:

\frac{1}{\eta}(\eta R_\eta)_\eta + \lambda^2 R = 0                                  (4.111)

The solution is a linear combination of the two Bessel functions of order 0:

R = C_1 J_0(\lambda\eta) + C_2 Y_0(\lambda\eta)                                     (4.112)

Since we have seen that Y_0 is unbounded as \eta approaches zero, C_2 must be zero. Furthermore,
the boundary condition at \eta = 1 requires that J_0(\lambda) = 0, so that our eigenfunctions are J_0(\lambda_n\eta)
and the corresponding eigenvalues \lambda_n are the roots of J_0(\lambda_n) = 0.

U_n = K_n e^{-\lambda_n^2\tau} J_0(\lambda_n\eta), \qquad n = 1, 2, 3, 4, \ldots    (4.113)

Summing (linear superposition),

U = \sum_{n=1}^{\infty} K_n e^{-\lambda_n^2\tau} J_0(\lambda_n\eta)                 (4.114)

Using the initial condition,

1 = \sum_{n=1}^{\infty} K_n J_0(\lambda_n\eta)                                      (4.115)

Bessel functions are orthogonal with respect to the weighting factor \eta since they are solutions to a
Sturm–Liouville system. Therefore when we multiply both sides of this equation by \eta J_0(\lambda_m\eta)\,d\eta
and integrate over (0, 1), all of the terms in the summation are zero except when m = n. Thus,
\int_0^1 J_0(\lambda_n\eta)\,\eta\,d\eta = K_n\int_0^1 J_0^2(\lambda_n\eta)\,\eta\,d\eta          (4.116)

but

\int_0^1 \eta J_0^2(\lambda_n\eta)\,d\eta = \frac{J_1^2(\lambda_n)}{2}, \qquad \int_0^1 \eta J_0(\lambda_n\eta)\,d\eta = \frac{1}{\lambda_n} J_1(\lambda_n)          (4.117)
Thus
U(\tau, \eta) = \sum_{n=1}^{\infty} \frac{2}{\lambda_n J_1(\lambda_n)}\, e^{-\lambda_n^2\tau} J_0(\lambda_n\eta)          (4.118)
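The series (4.118) is readily evaluated numerically. The sketch below is ours (the 100-term truncation and the sample times are arbitrary); it uses SciPy for the Bessel functions and their zeros:

import numpy as np
from scipy.special import j0, j1, jn_zeros

def U(tau, eta, n_terms=100):            # partial sum of (4.118)
    lam = jn_zeros(0, n_terms)
    coef = 2.0 / (lam * j1(lam)) * np.exp(-lam**2 * tau)
    return np.sum(coef[:, None] * j0(np.outer(lam, eta)), axis=0)

eta = np.linspace(0.0, 1.0, 6)
for tau in (0.01, 0.1, 0.5):
    print(tau, U(tau, eta))              # decays toward zero; U = 0 at eta = 1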
Example 4.11 (Heat generation in a cylinder). Reconsider the problem of heat transfer in a
long cylinder, but with heat generation and at a normalized initial temperature of zero.

u_\tau = \frac{1}{r}(r u_r)_r + q_0                                                (4.119)
u(\tau, 1) = u(0, r) = 0, \qquad u \text{ bounded}                                  (4.120)

Our experience with the above example hints that the solution may be of the form

u = \sum_{j=1}^{\infty} A_j(\tau) J_0(\lambda_j r)                                  (4.121)
This equation satisfies the boundary condition at r = 1, and A_j(\tau) is to be determined. Substituting
into the partial differential equation gives

\sum_{j=1}^{\infty} A_j'(\tau) J_0(\lambda_j r) = \sum_{j=1}^{\infty} A_j(\tau)\,\frac{1}{r}\frac{d}{dr}\left(r\frac{dJ_0}{dr}\right) + q_0          (4.122)

In view of Bessel's differential equation, the first term on the right can be written as

-\sum_{j=1}^{\infty} \lambda_j^2 J_0(\lambda_j r) A_j(\tau)                         (4.123)

The second term can be represented as a Fourier–Bessel series as follows:

q_0 = q_0\sum_{j=1}^{\infty} \frac{2 J_0(\lambda_j r)}{\lambda_j J_1(\lambda_j)}    (4.124)

as shown in Example 4.9 above.
Equating coefficients of J_0(\lambda_j r), we find that A_j(\tau) must satisfy the ordinary differential
equation

A_j'(\tau) + \lambda_j^2 A_j(\tau) = \frac{2 q_0}{\lambda_j J_1(\lambda_j)}         (4.125)

with the initial condition A_j(0) = 0.
Solution of this simple first-order linear differential equation yields

A_j(\tau) = \frac{2 q_0}{\lambda_j^3 J_1(\lambda_j)} + C\exp(-\lambda_j^2\tau)      (4.126)
After applying the initial condition,

A_j(\tau) = \frac{2 q_0}{\lambda_j^3 J_1(\lambda_j)}\left[1 - \exp(-\lambda_j^2\tau)\right]          (4.127)

The solution is therefore

u(\tau, r) = \sum_{j=1}^{\infty} \frac{2 q_0}{\lambda_j^3 J_1(\lambda_j)}\left[1 - \exp(-\lambda_j^2\tau)\right] J_0(\lambda_j r)          (4.128)
Example 4.12 (Time dependent heat generation). Suppose that instead of constant heat
generation, the generation is time dependent, q(\tau). The differential equation for A_j(\tau) then
becomes

A_j'(\tau) + \lambda_j^2 A_j(\tau) = \frac{2 q(\tau)}{\lambda_j J_1(\lambda_j)}     (4.129)

An integrating factor for this equation is \exp(\lambda_j^2\tau), so that the equation can be written as

\frac{d}{d\tau}\left[A_j\exp(\lambda_j^2\tau)\right] = \frac{2 q(\tau)}{\lambda_j J_1(\lambda_j)}\exp(\lambda_j^2\tau)          (4.130)

Integrating, and introducing t as a dummy variable,

A_j(\tau) = \frac{2}{\lambda_j J_1(\lambda_j)}\int_0^{\tau} q(t)\exp\left(-\lambda_j^2(\tau - t)\right)dt          (4.131)
Problems
1. By differentiating the series form of J_0(x) term by term, show that

   J_0'(x) = -J_1(x)

2. Show that

   \int x J_0(x)\,dx = x J_1(x) + \text{constant}

3. Using the expression for \int_0^x s^n J_0(s)\,ds, show that

   \int_0^x s^5 J_0(s)\,ds = x(x^2 - 8)\left[4x J_0(x) + (x^2 - 8) J_1(x)\right]

4. Express 1 - x as a Fourier–Bessel series.
4.3 LEGENDRE FUNCTIONS
We now consider another second-order linear differential equation that is common for problems
involving the Laplacian in spherical coordinates. It is called Legendre's equation,

(1 - x^2)u'' - 2x u' + k u = 0                                                     (4.132)

This is clearly a Sturm–Liouville equation, and we will seek a series solution near the origin,
which is an ordinary point. We therefore assume a solution in the form of (4.3):

u = \sum_{j=0}^{\infty} c_j x^j                                                     (4.133)
Differentiating (4.133) and substituting into (4.132), we find

\sum_{j=0}^{\infty} \left[j(j-1) c_j x^{j-2}(1 - x^2) - 2j c_j x^j + k c_j x^j\right] = 0          (4.134)

or

\sum_{j=0}^{\infty} \left\{\left[k - j(j+1)\right] c_j x^j + j(j-1) c_j x^{j-2}\right\} = 0          (4.135)

On shifting the last term,

\sum_{j=0}^{\infty} \left\{(j+2)(j+1) c_{j+2} + \left[k - j(j+1)\right] c_j\right\} x^j = 0          (4.136)

The recurrence relation is

c_{j+2} = \frac{j(j+1) - k}{(j+1)(j+2)}\, c_j                                       (4.137)

There are thus two independent series. It can be shown that they both diverge at x = 1 unless
they terminate at some point. It is easy to see from (4.137) that they do in fact terminate if
k = n(n+1) for integer n, since then c_{n+2} = 0 and consequently c_{n+4}, c_{n+6}, etc. are all
zero. Therefore the solutions, which depend on n (i.e., the eigenfunctions), are polynomials,
series that terminate at j = n. For example, if n = 0, c_2 = 0 and the solution is a constant. If
n = 1, c_3 = 0 and the polynomial is x. In general,

P_n(x) = c_n\left[x^n - \frac{n(n-1)}{2(2n-1)}x^{n-2} + \frac{n(n-1)(n-2)(n-3)}{2(4)(2n-1)(2n-3)}x^{n-4} - \cdots\right]
       = \sum_{k=0}^{m} \frac{(-1)^k}{2^n\,k!}\,\frac{(2n-2k)!}{(n-2k)!\,(n-k)!}\, x^{n-2k}          (4.138)

where m = n/2 if n is even and (n-1)/2 if n is odd.
The coefficient c_n is of course arbitrary. It turns out to be convenient to choose it to be

c_0 = 1, \qquad c_n = \frac{(2n-1)(2n-3)\cdots 1}{n!}                               (4.139)

The first few polynomials are

P_0 = 1, \quad P_1 = x, \quad P_2 = (3x^2 - 1)/2, \quad P_3 = (5x^3 - 3x)/2, \quad P_4 = (35x^4 - 30x^2 + 3)/8

Successive Legendre polynomials can be generated by the use of Rodrigues' formula:

P_n(x) = \frac{1}{2^n n!}\frac{d^n}{dx^n}(x^2 - 1)^n                                (4.140)

For example,

P_5 = (63x^5 - 70x^3 + 15x)/8

Fig. 4.4 shows graphs of several Legendre polynomials.
FIGURE 4.4: Legendre polynomials
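Numerically, the Legendre polynomials and the orthogonality relation quoted below are available through NumPy. The sketch that follows is our own illustration; it reproduces P_5 and checks (4.144) for n = 3, m = 4 and n = m = 3:

import numpy as np
from numpy.polynomial import legendre as L

# P_5 in ordinary power-series form; leg2poly converts Legendre-series
# coefficients (here only c_5 = 1) to powers of x, lowest power first.
print(L.leg2poly([0, 0, 0, 0, 0, 1]) * 8)    # [0, 15, 0, -70, 0, 63]

# Orthogonality check of (4.144) by Gauss-Legendre quadrature.
x, w = L.leggauss(50)
def P(n, x):
    return L.legval(x, [0]*n + [1])
print(np.dot(w, P(3, x) * P(4, x)))          # ~0 for n != m
print(np.dot(w, P(3, x) ** 2), 2.0/7.0)      # ~2/(2n+1) for n = m = 3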
The second solution of Legendre's equation can be found by the method of variation of
parameters. The result is

Q_n(x) = P_n(x)\int \frac{d\zeta}{P_n^2(\zeta)(1 - \zeta^2)}                        (4.141)

It can be shown that this generally takes on a logarithmic form involving \ln[(x+1)/(x-1)],
which goes to infinity at x = 1. In fact it can be shown that the first two of these
functions are

Q_0 = \frac{1}{2}\ln\frac{1+x}{1-x} \qquad\text{and}\qquad Q_1 = \frac{x}{2}\ln\frac{1+x}{1-x} - 1          (4.142)

Thus the complete solution of the Legendre equation is

u = A P_n(x) + B Q_n(x)                                                             (4.143)

where P_n(x) and Q_n(x) are Legendre functions of the first and second kind. If we require
the solution to be finite at x = 1, B must be zero.
Referring back to Eqs. (3.46) through (3.53) in Chapter 3, we note that the eigenvalues are
\lambda = n(n+1) and the eigenfunctions are P_n(x) and Q_n(x). We further note from (3.46) and
(3.47) that the weight function is one and that the orthogonality condition is

\int_{-1}^{1} P_n(x) P_m(x)\,dx = \frac{2}{2n+1}\,\delta_{mn}                        (4.144)

where \delta_{mn} is Kronecker's delta, 1 when n = m and 0 otherwise.
Example 4.13. Steady heat conduction in a sphere
Consider heat transfer in a solid sphere whose surface temperature is a function of \theta, the angle
measured downward from the z-axis (see Fig. 1.3, Chapter 1). The problem is steady and there
is no heat source.

r\frac{\partial^2}{\partial r^2}(r u) + \frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial u}{\partial\theta}\right) = 0          (4.145)

u(r = 1) = f(\theta), \qquad u \text{ is bounded}
Substituting x = \cos\theta,

r\frac{\partial^2}{\partial r^2}(r u) + \frac{\partial}{\partial x}\left[(1 - x^2)\frac{\partial u}{\partial x}\right] = 0          (4.146)

We separate variables by assuming u = R(r)X(x). Substituting into the equation and dividing
by RX, we find

\frac{r}{R}(r R)'' = -\frac{\left[(1 - x^2)X'\right]'}{X} = \pm\lambda^2            (4.147)

or

r(r R)'' \mp \lambda^2 R = 0
\left[(1 - x^2)X'\right]' \pm \lambda^2 X = 0                                       (4.148)
The second of these is Legendre's equation, and we have seen that it has solutions bounded
at x = \pm 1 when \lambda^2 = n(n+1). The first equation is of the Cauchy–Euler type with
solution

R = C_1 r^n + C_2 r^{-n-1}                                                           (4.149)

Noting that the constant C_2 must be zero to obtain a bounded solution at r = 0, and using
superposition,

u = \sum_{n=0}^{\infty} K_n r^n P_n(x)                                               (4.150)

and using the condition at r = 1 and the orthogonality of the Legendre polynomials,

\int_0^{\pi} f(\theta) P_n(\cos\theta)\sin\theta\,d\theta = \int_0^{\pi} K_n P_n^2(\cos\theta)\sin\theta\,d\theta = \frac{2 K_n}{2n+1}          (4.151)
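For a concrete surface temperature the coefficients K_n in (4.151) can be computed by quadrature. The sketch below is ours (the choice f(theta) = cos^2(theta) and the 20-term truncation are arbitrary); it works in x = cos(theta) so the integrals run over (-1, 1):

import numpy as np
from numpy.polynomial.legendre import legval, leggauss

def coefficients(f_of_theta, n_max=20):
    x, w = leggauss(64)                        # Gauss-Legendre nodes on (-1, 1)
    fx = f_of_theta(np.arccos(x))
    return np.array([(2*n + 1) / 2.0 * np.dot(w, fx * legval(x, [0]*n + [1]))
                     for n in range(n_max + 1)])

def u(r, theta, K):
    x = np.cos(theta)
    return sum(Kn * r**n * legval(x, [0]*n + [1]) for n, Kn in enumerate(K))

K = coefficients(lambda th: np.cos(th)**2)     # sample surface temperature
print(K[:4])                                   # only n = 0 and n = 2 survive
print(u(1.0, np.pi/3, K), np.cos(np.pi/3)**2)  # boundary value recovered at r = 1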
4.4 ASSOCIATED LEGENDRE FUNCTIONS
Equation (1.15) in Chapter 1 can be put in the form

\frac{1}{\alpha}\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial r^2} + \frac{2}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial}{\partial\mu}\left[(1 - \mu^2)\frac{\partial u}{\partial\mu}\right] + \frac{1}{r^2(1 - \mu^2)}\frac{\partial^2 u}{\partial\phi^2}          (4.152)

by substituting \mu = \cos\theta.
We shall see later that on separating variables in the case where u is a function of r, \theta, \phi,
and t, we find the following differential equation in the \mu variable:

\frac{d}{d\mu}\left[(1 - \mu^2)\frac{df}{d\mu}\right] + \left[n(n+1) - \frac{m^2}{1 - \mu^2}\right] f = 0          (4.153)

We state without proof that the solution is the associated Legendre function P_n^m(\mu). The
associated Legendre polynomial is given by

P_n^m(\mu) = (1 - \mu^2)^{m/2}\frac{d^m}{d\mu^m} P_n(\mu)                            (4.154)

The orthogonality conditions are

\int_{-1}^{1} \left[P_n^m(\mu)\right]^2 d\mu = \frac{2(n+m)!}{(2n+1)(n-m)!}          (4.155)

and

\int_{-1}^{1} P_n^m P_{n'}^m\,d\mu = 0, \qquad n \ne n'                               (4.156)

The associated Legendre function of the second kind is singular at x = \pm 1 and may be
computed by the formula

Q_n^m(x) = (1 - x^2)^{m/2}\frac{d^m Q_n(x)}{dx^m}                                     (4.157)
Problems
1. Find and carefully plot P_6 and P_7.
2. Perform the integral above and show that
Q_0(x) = C P_0(x) ∫_{ξ=0}^{x} dξ / [(1 − ξ²)P_0²(ξ)] = (C/2) ln[(1 + x)/(1 − x)]
and that
Q_1(x) = C x ∫_{ξ=0}^{x} dξ / [ξ²(1 − ξ²)] = (Cx/2) ln[(1 + x)/(1 − x)] − 1
3. Using the equation above find Q_0^0(x) and Q_1^1(x).
FURTHER READING
J. W. Brown and R. V. Churchill, Fourier Series and Boundary Value Problems. New York:
McGraw-Hill, 2001.
C. F. Chan Man Fong, D. DeKee, and P. N. Kaloni, Advanced Mathematics for Engineering
and Science. 2nd edition. Singapore: World Scientific, 2004.
P. V. O’Neil, Advanced Engineering Mathematics. 5th edition. Brooks/Cole Thompson, Pacific
Grove, CA, 2003.
C H A P T E R 5
Solutions Using Fourier Series
and Integrals
We have already demonstrated solution of partial differential equations for some simple cases
in rectangular Cartesian coordinates in Chapter 2. We now consider some slightly more
complicated problems as well as solutions in spherical and cylindrical coordinate systems to
further demonstrate the Fourier method of separation of variables.
5.1 CONDUCTION (OR DIFFUSION) PROBLEMS
Example 5.1 (Double Fourier series in conduction). We now consider transient heat conduction in two dimensions. The problem is stated as follows:
u_t = α(u_xx + u_yy)
u(t, 0, y) = u(t, a, y) = u(t, x, 0) = u(t, x, b) = u_0
u(0, x, y) = f(x, y)   (5.1)
That is, the sides of a rectangular area with initial temperature f(x, y) are kept at a constant temperature u_0. We first attempt to scale and nondimensionalize the equation and boundary conditions. Note that there are two length scales, a and b. We can choose either, but there will remain an extra parameter, either a/b or b/a, in the equation. If we take ξ = x/a and η = y/b then (5.1) can be written as
(a²/α) u_t = u_ξξ + (a²/b²) u_ηη   (5.2)
The time scale is now chosen as a²/α and the dimensionless time is τ = αt/a². We also choose a new dependent variable U(τ, ξ, η) = (u − u_0)/(f_max − u_0). The now nondimensionalized system is
U_τ = U_ξξ + r² U_ηη
U(τ, 0, η) = U(τ, 1, η) = U(τ, ξ, 0) = U(τ, ξ, 1) = 0
U(0, ξ, η) = (f − u_0)/(f_max − u_0) = g(ξ, η)   (5.3)
We now proceed by separating variables. Let
U(τ, ξ, η) = T(τ)X(ξ)Y(η)   (5.4)
Differentiating, inserting into (5.3), and dividing by (5.4) we find
T′/T = (X″Y + r²XY″)/(XY)   (5.5)
where the primes indicate differentiation with respect to the variable in question and r = a/b. Since the left-hand side of (5.5) is a function only of τ and the right-hand side is only a function of ξ and η, both sides must be constant. If the solution is to be finite in time we must choose the constant to be negative, −λ². Replacing T′/T by −λ² and rearranging,
−λ² − X″/X = r² Y″/Y   (5.6)
Once again we see that both sides must be constants. How do we choose the signs? It should be clear by now that if either of the constants is positive, solutions for X or Y will take the form of hyperbolic functions or exponentials and the boundary conditions on ξ or η cannot be satisfied. Thus,
T′/T = −λ²   (5.7)
X″/X = −β²   (5.8)
r² Y″/Y = −γ²   (5.9)
Note that X and Y are eigenfunctions of (5.8) and (5.9), which are Sturm–Liouville equations, and β and γ are the corresponding eigenvalues.
Solutions of (5.7), (5.8), and (5.9) are
T = A exp(−λ²τ)   (5.10)
X = B_1 cos(βξ) + B_2 sin(βξ)   (5.11)
Y = C_1 cos(γη/r) + C_2 sin(γη/r)   (5.12)
Applying the first homogeneous boundary condition, we see that X(0) = 0, so that B_1 = 0. Applying the third homogeneous boundary condition we see that Y(0) = 0, so that C_1 = 0. The second homogeneous boundary condition requires that sin(β) = 0, or β = nπ. The last homogeneous boundary condition requires sin(γ/r) = 0, or γ = mπr. According to (5.6), λ² = β² + γ². Combining these solutions and inserting into (5.4) we have one solution in the form
U_mn(τ, ξ, η) = K_nm e^{−(n²π² + m²π²r²)τ} sin(nπξ) sin(mπη)   (5.13)
for all m, n = 1, 2, 3, 4, 5, . . .
Superposition now tells us that
U(τ, ξ, η) = Σ_{n=1}^{∞} Σ_{m=1}^{∞} K_nm e^{−(n²π² + m²π²r²)τ} sin(nπξ) sin(mπη)   (5.14)
Using the initial condition,
g(ξ, η) = Σ_{n=1}^{∞} Σ_{m=1}^{∞} K_nm sin(nπξ) sin(mπη)   (5.15)
We have a double Fourier series, and since both sin(nπξ) and sin(mπη) are members of orthogonal sequences we can multiply both sides by sin(nπξ) sin(mπη) dξ dη and integrate over the domains.
∫_{ξ=0}^{1} ∫_{η=0}^{1} g(ξ, η) sin(nπξ) sin(mπη) dξ dη = K_nm ∫_{ξ=0}^{1} sin²(nπξ) dξ ∫_{η=0}^{1} sin²(mπη) dη = K_nm/4   (5.16)
Our solution is
U(τ, ξ, η) = Σ_{n=1}^{∞} Σ_{m=1}^{∞} 4 [∫_{ξ=0}^{1} ∫_{η=0}^{1} g(ξ, η) sin(nπξ) sin(mπη) dξ dη] e^{−(n²π² + m²π²r²)τ} sin(nπξ) sin(mπη)   (5.17)
Example 5.2 (A convection boundary condition). Reconsider the problem defined by (2.1) in Chapter 2, but with different boundary and initial conditions,
u(t, 0) = u(0, x) = u_0   (5.18)
k u_x(t, L) − h[u_1 − u(t, L)] = 0   (5.19)
The physical problem is a slab with conductivity k initially at a temperature u_0 suddenly exposed at x = L to a fluid at temperature u_1 through a heat transfer coefficient h, while the x = 0 face is maintained at u_0.
The length and time scales are clearly the same as in the problem in Chapter 2. Hence, τ = tα/L² and ξ = x/L. If we choose U = (u − u_0)/(u_1 − u_0) we make the boundary condition at x = 0 homogeneous, but the condition at x = L is not. We have the same situation that we had in Section 2.3 of Chapter 2. The differential equation, one boundary condition, and the initial condition are homogeneous. Proceeding, we find
U_τ = U_ξξ
U(τ, 0) = U(0, ξ) = 0
U_ξ(τ, 1) + B[U(τ, 1) − 1] = 0   (5.20)
where B = hL/k. It is useful to relocate the nonhomogeneous condition as the initial condition. As in the previous problem we assume U(τ, ξ) = V(τ, ξ) + W(ξ).
V_τ = V_ξξ + W_ξξ
W(0) = 0
W_ξ(1) + B[W(1) − 1] = 0
V(τ, 0) = 0
V_ξ(τ, 1) + BV(τ, 1) = 0
V(0, ξ) = −W(ξ)   (5.21)
Set W_ξξ = 0. Integrating twice and using the two boundary conditions on W,
W(ξ) = Bξ/(B + 1)   (5.22)
The initial condition on V becomes
V(0, ξ) = −Bξ/(B + 1)   (5.23)
Assume V(τ, ξ) = P(τ)Q(ξ), substitute into the partial differential equation for V, and divide by PQ as usual.
P′/P = Q″/Q = ±λ²   (5.24)
We must choose the minus sign for the solution to be bounded. Hence,
P = A e^{−λ²τ}
Q = C_1 sin(λξ) + C_2 cos(λξ)   (5.25)
FIGURE 5.1: The eigenvalues of λ_n = −B tan(λ_n)
Applying the boundary condition at ξ = 0, we find that C_2 = 0. Now applying the boundary condition on V at ξ = 1,
C_1 λ cos(λ) + C_1 B sin(λ) = 0   (5.26)
or
λ = −B tan(λ)   (5.27)
This is the equation for determining the eigenvalues, λ_n. It is shown graphically in Fig. 5.1.
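Because (5.27) is transcendental, the eigenvalues must be found numerically. A minimal sketch follows (not from the text; the Biot number B = 2 and the use of SciPy's brentq root finder are illustrative assumptions). One root lies in each branch of the tangent:

```python
import numpy as np
from scipy.optimize import brentq

B = 2.0                                   # assumed Biot number hL/k

def f(lam):
    # Eq. (5.27) written as f(lambda) = lambda + B tan(lambda) = 0
    return lam + B * np.tan(lam)

# f runs from -inf to +inf on each interval ((2n-1)pi/2, (2n+1)pi/2), so each holds one root.
roots = []
for n in range(1, 6):
    a = (2 * n - 1) * np.pi / 2 + 1e-6
    b = (2 * n + 1) * np.pi / 2 - 1e-6
    roots.append(brentq(f, a, b))

print(np.round(roots, 4))                 # first five eigenvalues lambda_n
```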
Example 5.3 (Superposition of several problems). We’ve seen now that in order to apply
separation of variables the partial differential equation itself must be homogeneous and we have
also seen a technique for transferring the inhomogeneity to one of the boundary conditions or to
the initial condition. But what if several of the boundary conditions are nonhomogeneous? We
demonstrate the technique with the following problem. We have a transient two-dimensional
problem with given conditions on all four faces.
u_t = u_xx + u_yy
u(t, 0, y) = f_1(y)
u(t, a, y) = f_2(y)
u(t, x, 0) = f_3(x)
u(t, x, b) = f_4(x)
u(0, x, y) = g(x, y)   (5.28)
The problem can be broken down into five problems: u = u_1 + u_2 + u_3 + u_4 + u_5.
u_1t = u_1xx + u_1yy
u_1(0, x, y) = g(x, y)
u_1 = 0 on all boundaries   (5.29)
u_2xx + u_2yy = 0
u_2(0, y) = f_1(y)
u_2 = 0 on all other boundaries   (5.30)
u_3xx + u_3yy = 0
u_3(a, y) = f_2(y)
u_3 = 0 on all other boundaries   (5.31)
u_4xx + u_4yy = 0
u_4(x, 0) = f_3(x)
u_4 = 0 on all other boundaries   (5.32)
u_5xx + u_5yy = 0
u_5(x, b) = f_4(x)
u_5 = 0 on all other boundaries   (5.33)
5.1.1 Time-Dependent Boundary Conditions
We will explore this topic when we discuss Laplace transforms.
Example 5.4 (A finite cylinder). Next we consider a cylinder of finite length 2L and radius r_1. As in the first problem in this chapter, there are two possible length scales and we choose r_1. The cylinder has temperature u_0 initially. The ends at z = ±L are suddenly insulated while the sides are exposed to a fluid at temperature u_1. The differential equation with no variation in the θ direction and the boundary conditions are
(1/α) u_t = (1/r)(r u_r)_r + u_zz
u_z(t, r, −L) = u_z(t, r, +L) = 0
k u_r(r_1) + h[u(r_1) − u_1] = 0
u(0, r, z) = u_0
u is bounded   (5.34)
If we choose the length scale as r_1 then we define η = r/r_1, ζ = z/L, and τ = αt/r_1². The normalized temperature can be chosen as U = (u − u_1)/(u_0 − u_1). With these we find that
U_τ = (1/η)(ηU_η)_η + (r_1/L)² U_ζζ
U_ζ(ζ = ±1) = 0
U_η(η = 1) + BU(η = 1) = 0
U(τ = 0) = 1   (5.35)
where B = hr_1/k.
Let U = T(τ)R(η)Z(ζ). Insert into the differential equation and divide by U.
T′/T = (1/(ηR))(ηR′)′ + (r_1/L)² Z″/Z   (5.36)
Z_ζ(ζ = ±1) = 0
R_η(η = 1) + BR(η = 1) = 0
U(τ = 0) = 1
Again, the dance is the same. The left-hand side of Eq. (5.36) cannot be a function of η or ζ, so each side must be a constant. The constant must be negative for the time term to be bounded. Experience tells us that Z″/Z must be a negative constant because otherwise Z would be exponential functions and we could not simultaneously satisfy the boundary conditions at ζ = ±1. Thus, we have
T′ = −λ²T
η²R″ + ηR′ + β²η²R = 0
Z″ = −γ²(L/r_1)² Z   (5.37)
with solutions
T = A e^{−λ²τ}
Z = C_1 cos(γLζ/r_1) + C_2 sin(γLζ/r_1)
R = C_3 J_0(βη) + C_4 Y_0(βη)   (5.38)
It is clear that C_4 must always be zero when the cylinder is not hollow because Y_0 is unbounded at η = 0. The boundary conditions at ζ = ±1 imply that Z is an even function, so that C_2
must be zero. The boundary condition at ζ = 1 is
Z_ζ = −C_1(γL/r_1) sin(γL/r_1) = 0,  or  γL/r_1 = nπ   (5.39)
The boundary condition at η = 1 requires
C_3[J_0′(β) + B J_0(β)] = 0  or  B J_0(β) = β J_1(β)   (5.40)
which is the transcendental equation for finding β_m. Also note that
λ² = γ_n² + β_m²   (5.41)
By superposition we write the final form of the solution as
U(τ, η, ζ) = Σ_{n=0}^{∞} Σ_{m=0}^{∞} K_nm e^{−(γ_n² + β_m²)τ} J_0(β_m η) cos(nπζ)   (5.42)
K_nm is found using the orthogonality properties of J_0(β_m η) and cos(nπζ) after using the initial condition:
∫_{η=0}^{1} η J_0(β_m η) dη ∫_{ζ=−1}^{1} cos(nπζ) dζ = K_nm ∫_{η=0}^{1} η J_0²(β_m η) dη ∫_{ζ=−1}^{1} cos²(nπζ) dζ   (5.43)
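The radial eigenvalues β_m of (5.40) must also be found numerically. A short sketch follows (not from the text; the value B = 5 is an assumed Biot number, and SciPy's Bessel routines are used as a convenient reference). Consecutive zeros of J_0 bracket the roots:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import j0, j1, jn_zeros

B = 5.0                                    # assumed Biot number h*r1/k

def f(beta):
    # Eq. (5.40) written as f(beta) = B*J0(beta) - beta*J1(beta) = 0
    return B * j0(beta) - beta * j1(beta)

# f changes sign between consecutive zeros of J0, so bracket each root there.
z = np.concatenate(([1e-9], jn_zeros(0, 8)))
betas = [brentq(f, z[i] + 1e-9, z[i + 1] - 1e-9) for i in range(len(z) - 1)]
print(np.round(betas, 4))                  # first eight roots beta_m
```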
Example 5.5 (Heat transfer in a sphere). Consider heat transfer in a solid sphere whose surface temperature is a function of θ, the angle measured downward from the z-axis (see Fig. 1.3, Chapter 1). The problem is steady and there is no heat source.
∂²(ru)/∂r² + (1/(r sin θ)) ∂/∂θ [sin θ ∂u/∂θ] = 0
u(r = 1) = f(θ)
u is bounded   (5.44)
Substituting x = cos θ,
∂²(ru)/∂r² + (1/r) ∂/∂x [(1 − x²) ∂u/∂x] = 0   (5.45)
We separate variables by assuming u = R(r)X(x). Substitute into the equation, divide by RX, and find
r(rR)″/R = −[(1 − x²)X′]′/X = ±λ²   (5.46)
or
r(rR)″ ∓ λ²R = 0
[(1 − x²)X′]′ ± λ²X = 0   (5.47)
The second of these is Legendre's equation, and we have seen that it has bounded solutions at x = ±1 when λ² = n(n + 1). The first equation is of the Cauchy–Euler type with solution
R = C_1 rⁿ + C_2 r^{−n−1}   (5.48)
Noting that the constant C_2 must be zero to obtain a bounded solution at r = 0, and using superposition,
u = Σ_{n=0}^{∞} K_n rⁿ P_n(x)   (5.49)
and using the condition at r = 1 and the orthogonality of the Legendre polynomials,
∫_{θ=0}^{π} f(θ) P_n(cos θ) dθ = ∫_{θ=0}^{π} K_n P_n²(cos θ) dθ = 2K_n/(2n + 1)   (5.50)
K_n = [(2n + 1)/2] ∫_{θ=0}^{π} f(θ) P_n(cos θ) dθ   (5.51)
5.2 VIBRATIONS PROBLEMS
We now consider some vibrations problems. In Chapter 2 we found a solution for a vibrating
string initially displaced. We now consider the problem of a string forced by a sine function.
Example 5.6 (Resonance in a vibration problem). Equation (1.21) in Chapter 1 is
y_tt = a² y_xx + A sin(ηt)   (5.52)
Select a length scale as L, the length of the string, and a time scale L/a. Defining ξ = x/L and τ = ta/L,
y_ττ = y_ξξ + C sin(ωτ)   (5.53)
where ω is a dimensionless frequency, ηL/a, and C = AL²/a².
The boundary conditions and initial velocity and displacement are all zero, so the bound-
ary conditions are all homogeneous, while the differential equation is not. Back in Chapter 2 we
saw one way of dealing with this. Note that it wouldn't have worked had q been a function of time. We approach this problem somewhat differently. From experience, we expect a solution of the form
y(ξ, τ) = Σ_{n=1}^{∞} B_n(τ) sin(nπξ)   (5.54)
where the coefficients B_n(τ) are to be determined. Note that the equation above satisfies the end conditions. Inserting this series into the differential equation and using the Fourier sine series of C,
C = Σ_{n=1}^{∞} {2C[1 − (−1)ⁿ]/(nπ)} sin(nπξ)   (5.55)
Σ_{n=1}^{∞} B_n″(τ) sin(nπξ) = Σ_{n=1}^{∞} [−(nπ)² B_n(τ)] sin(nπξ) + C Σ_{n=1}^{∞} {2[1 − (−1)ⁿ]/(nπ)} sin(nπξ) sin(ωτ)   (5.56)
Thus
B_n″ = −(nπ)² B_n + C {2[1 − (−1)ⁿ]/(nπ)} sin(ωτ)   (5.57)
subject to initial conditions y = 0 and y_τ = 0 at τ = 0. When n is even the solution is zero. That is, since the right-hand side is zero when n is even,
B_n = C_1 cos(nπτ) + C_2 sin(nπτ)   (5.58)
But since both B_n(0) and B_n′(0) are zero, C_1 = C_2 = 0. When n is odd we can write
B_{2n−1}″ + [(2n − 1)π]² B_{2n−1} = {4C/[(2n − 1)π]} sin(ωτ)   (5.59)
(2n − 1)π is the natural frequency of the system, ω_n. The homogeneous solution of the above equation is
B_{2n−1} = D_1 cos(ω_n τ) + D_2 sin(ω_n τ)   (5.60)
To obtain the particular solution we assume a solution in the form of sines and cosines.
B_P = E_1 cos(ωτ) + E_2 sin(ωτ)   (5.61)
Differentiating and inserting into the differential equation we find
−E_1 ω² cos(ωτ) − E_2 ω² sin(ωτ) + ω_n²[E_1 cos(ωτ) + E_2 sin(ωτ)] = (4C/ω_n) sin(ωτ)   (5.62)
Equating coefficients of sine and cosine terms
E_1(ω_n² − ω²) cos(ωτ) = 0
E_2(ω_n² − ω²) sin(ωτ) = (4C/ω_n) sin(ωτ)   (5.63)
Thus
E_1 = 0
E_2 = 4C/[ω_n(ω_n² − ω²)],   ω ≠ ω_n   (5.64)
Combining the homogeneous and particular solutions,
B_{2n−1} = D_1 cos(ω_n τ) + D_2 sin(ω_n τ) + {4C/[ω_n(ω_n² − ω²)]} sin(ωτ)   (5.65)
The initial conditions at τ = 0 require that
D_1 = 0
D_2 = −4C(ω/ω_n)/[ω_n(ω_n² − ω²)]   (5.66)
The solution for B_{2n−1} is
B_{2n−1} = {4C/[ω_n(ω² − ω_n²)]} [(ω/ω_n) sin(ω_n τ) − sin(ωτ)],   ω ≠ ω_n   (5.67)
The solution is therefore
y(ξ, τ) = 4C Σ_{n=1}^{∞} {sin(ω_n ξ)/[ω_n(ω² − ω_n²)]} [(ω/ω_n) sin(ω_n τ) − sin(ωτ)]   (5.68)
When ω = ω_n the above is not valid. The form of the particular solution should be chosen as
B_P = E_1 τ cos(ωτ) + E_2 τ sin(ωτ)   (5.69)
Differentiating and inserting into the differential equation for B_{2n−1},
[E_1 τω_n² + 2E_2 ω_n − E_1 τω_n²] cos(ω_n τ) + [E_2 τω_n² − E_2 τω_n² − 2E_1 ω_n] sin(ω_n τ) = (4C/ω_n) sin(ω_n τ)   (5.70)
Thus
E_2 = 0
E_1 = −4C/(2ω_n²)   (5.71)
and the solution when ω = ω_n is
B_{2n−1} = C_1 cos(ω_n τ) + C_2 sin(ω_n τ) − (2C/ω_n²) τ cos(ω_n τ)   (5.72)
The initial condition on position implies that C_1 = 0. The initial condition that the initial velocity is zero gives
ω_n C_2 − 2C/ω_n² = 0   (5.73)
The solution for B_{2n−1} is
B_{2n−1} = (2C/ω_n³)[sin(ω_n τ) − ω_n τ cos(ω_n τ)]   (5.74)
Superposition now gives
y(ξ, τ) = Σ_{n=1}^{∞} (2C/ω_n³) sin(ω_n ξ)[sin(ω_n τ) − ω_n τ cos(ω_n τ)]   (5.75)
An interesting feature of the solution is that there are an infinite number of natural frequencies,
η = (a/L)[π, 3π, 5π, . . . , (2n − 1)π, . . .]   (5.76)
If the system is excited at any of these frequencies, the magnitude of the oscillation will grow (theoretically) without bound. The smaller natural frequencies will cause the growth to be fastest.
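Both forms of the solution can be evaluated by summing partial series. The sketch below (illustrative only; C = 1, the driving frequency, and the truncation level are assumed values) evaluates the off-resonance series (5.68) and the resonant series (5.75); the latter exhibits the growth proportional to τ just described:

```python
import numpy as np

C, Nterms = 1.0, 50
wn = lambda n: (2 * n - 1) * np.pi            # natural frequencies, Eq. (5.76), in dimensionless time

def y_off_resonance(xi, tau, w):
    # Partial sum of Eq. (5.68); valid only for w != wn
    n = np.arange(1, Nterms + 1)
    W = wn(n)
    return np.sum(4 * C * np.sin(W * xi) / (W * (w**2 - W**2))
                  * ((w / W) * np.sin(W * tau) - np.sin(w * tau)))

def y_resonant(xi, tau):
    # Partial sum of Eq. (5.75); the tau*cos term grows without bound
    n = np.arange(1, Nterms + 1)
    W = wn(n)
    return np.sum(2 * C / W**3 * np.sin(W * xi) * (np.sin(W * tau) - W * tau * np.cos(W * tau)))

for tau in (1.0, 10.0, 100.0):
    print(tau, y_off_resonance(0.5, tau, w=2.0), y_resonant(0.5, tau))
```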
Example 5.7 (Vibration of a circular membrane). Consider now a circular membrane (like a drum). The partial differential equation describing the displacement y(t, r, θ) was derived in Chapter 1.
a^{−2} ∂²y/∂t² = (1/r) ∂/∂r (r ∂y/∂r) + (1/r²) ∂²y/∂θ²   (5.77)
Suppose it has an initial displacement of y(0, r, θ) = f(r, θ) and the velocity y_t = 0. The displacement at r = r_1 is also zero and the displacement must be finite for all r, θ, and t. The length scale is r_1 and the time scale is r_1/a. Let r/r_1 = η and ta/r_1 = τ. We have
∂²y/∂τ² = (1/η) ∂/∂η (η ∂y/∂η) + (1/η²) ∂²y/∂θ²   (5.78)
Separation of variables as y = T(τ)R(η)S(θ), substituting into the equation and dividing by TRS,
T″/T = (1/(ηR))(ηR′)′ + (1/η²) S″/S = −λ²   (5.79)
The negative sign is because we anticipate sine and cosine solutions for T. We also note that
λ²η² + (η/R)(ηR′)′ = −S″/S = ±β²   (5.80)
To avoid exponential solutions in the θ direction we must choose the positive sign. Thus we have
T″ = −λ²T
S″ = −β²S
η(ηR′)′ + (η²λ² − β²)R = 0   (5.81)
The solutions of the first two of these are
T = A_1 cos(λτ) + A_2 sin(λτ)
S = B_1 cos(βθ) + B_2 sin(βθ)   (5.82)
The boundary condition on the initial velocity guarantees that A_2 = 0. β must be an integer so that the solution comes around to the same place after θ goes from 0 to 2π. Either B_1 or B_2 can be chosen to be zero because it doesn't matter where θ begins (we can adjust f(r, θ)).
T(τ)S(θ) = AB cos(λτ) sin(nθ)   (5.83)
The differential equation for R should be recognized from our discussion of Bessel functions. The solution with β = n is the Bessel function of the first kind of order n. The Bessel function of the second kind may be omitted because it is unbounded at r = 0. The condition that R(1) = 0 means that λ is the mth root of J_n(λ_mn) = 0. The solution can now be completed using superposition and the orthogonality properties.
y(τ, η, θ) = Σ_{n=0}^{∞} Σ_{m=1}^{∞} K_nm J_n(λ_mn η) cos(λ_mn τ) sin(nθ)   (5.84)
Using the initial condition
f(η, θ) = Σ_{n=0}^{∞} Σ_{m=1}^{∞} K_nm J_n(λ_mn η) sin(nθ)   (5.85)
and the orthogonality of sin(nθ) and J_n(λ_mn η),
∫_{η=0}^{1} ∫_{θ=0}^{2π} f(η, θ) η J_n(λ_mn η) sin(nθ) dθ dη = K_nm ∫_{θ=0}^{2π} sin²(nθ) dθ ∫_{η=0}^{1} η J_n²(λ_mn η) dη = (K_nm/4) J_{n+1}²(λ_mn)   (5.86)
K_nm = [4/J_{n+1}²(λ_nm)] ∫_{η=0}^{1} ∫_{θ=0}^{2π} f(η, θ) η J_n(λ_nm η) sin(nθ) dθ dη   (5.87)
Problems
1. The conduction equation in one dimension is to be solved subject to an insulated surface at x = 0 and a convective boundary condition at x = L. Initially the temperature is u(0, x) = f(x), a function of position. Thus
u_t = α u_xx
u_x(t, 0) = 0
k u_x(t, L) = −h[u(t, L) − u_1]
u(0, x) = f(x)
First nondimensionalize and normalize the equations. Then solve by separation of variables. Find a specific solution when f(x) = 1 − x².
2. Consider the diffusion problem
u_t = α u_xx + q(x)
u_x(t, 0) = 0
u_x(t, L) = −h[u(t, L) − u_1]
u(0, x) = u_1
Define time and length scales and define a u scale such that the initial value of the dependent variable is zero. Solve by separation of variables and find a specific solution for q(x) = Q, a constant. Refer to Problem 2.1 in Chapter 2.
3. Solve the steady-state conduction
u_xx + u_yy = 0
u_x(0, y) = 0
u(a, y) = u_0
u(x, 0) = u_1
u_y(x, b) = −h[u(x, b) − u_1]
Note that one could choose as a length scale either a or b. Choose a. Note that if you choose
U = (u − u_1)/(u_0 − u_1)
there is only one nonhomogeneous boundary condition and it is normalized. Solve by separation of variables.
5.3 FOURIER INTEGRALS
We consider now problems in which one dimension of the domain is infinite in extent. Recall that a function defined on an interval (−c, c) can be represented as a Fourier series
f(x) = (1/2c) ∫_{ς=−c}^{c} f(ς) dς + (1/c) Σ_{n=1}^{∞} ∫_{ς=−c}^{c} f(ς) cos(nπς/c) dς cos(nπx/c) + (1/c) Σ_{n=1}^{∞} ∫_{ς=−c}^{c} f(ς) sin(nπς/c) dς sin(nπx/c)   (5.88)
which can be expressed using trigonometric identities as
f(x) = (1/2c) ∫_{ς=−c}^{c} f(ς) dς + (1/c) Σ_{n=1}^{∞} ∫_{ς=−c}^{c} f(ς) cos[(nπ/c)(ς − x)] dς   (5.89)
We now formally let c approach infinity. If ∫_{−∞}^{∞} f(ς) dς exists, the first term vanishes. Let Δα = π/c. Then
f(x) = (2/π) Σ_{n=1}^{∞} ∫_{ς=0}^{c} f(ς) cos[nΔα(ς − x)] dς Δα   (5.90)
or, with
g_c(nΔα, x) = ∫_{ς=0}^{c} f(ς) cos[nΔα(ς − x)] dς   (5.91)
we have
f(x) = (2/π) Σ_{n=1}^{∞} g_c(nΔα, x) Δα   (5.92)
As c approaches infinity we can imagine that Δα approaches dα and nΔα approaches α, whereupon the equation for f(x) becomes an integral expression
f(x) = (2/π) ∫_{α=0}^{∞} ∫_{ς=0}^{∞} f(ς) cos[α(ς − x)] dς dα   (5.93)
which can alternatively be written as
f(x) = ∫_{α=0}^{∞} [A(α) cos αx + B(α) sin αx] dα   (5.94)
where
A(α) = (2/π) ∫_{ς=0}^{∞} f(ς) cos ας dς   (5.95)
and
B(α) = (2/π) ∫_{ς=0}^{∞} f(ς) sin ας dς   (5.96)
Example 5.8 (Transient conduction in a semi-infinite region). Consider the boundary value
problem
u_t = u_xx   (x ≥ 0, t ≥ 0)
u(0, t) = 0
u(x, 0) = f(x)   (5.97)
This represents transient heat conduction with an initial temperature f (x) and the boundary
at x = 0 suddenly reduced to zero. Separation of variables as T(t)X(x) would normally yield a
solution of the form
B_n exp(−λ²t) sin(λx/c)   (5.98)
for a region of x on the interval (0, c). Thus, for x on the interval 0 ≤ x ≤ ∞ we have
B(α) = (2/π) ∫_{ς=0}^{∞} f(ς) sin ας dς   (5.99)
and the solution is
u(x, t) = (2/π) ∫_{λ=0}^{∞} exp(−λ²t) sin(λx) ∫_{s=0}^{∞} f(s) sin(λs) ds dλ   (5.100)
Noting that
2 sin αs sin αx = cos α(s − x) − cos α(s + x)   (5.101)
and that
∫_{0}^{∞} exp(−γ²α) cos(γb) dγ = (1/2)√(π/α) exp(−b²/4α)   (5.102)
we have
u(x, t) = [1/(2√(πt))] ∫_{0}^{∞} f(s) {exp[−(s − x)²/4t] − exp[−(s + x)²/4t]} ds   (5.103)
Substituting into the first of these integrals σ² = (s − x)²/4t and into the second integral σ² = (s + x)²/4t,   (5.104)
u(x, t) = (1/√π) ∫_{−x/2√t}^{∞} f(x + 2σ√t) e^{−σ²} dσ − (1/√π) ∫_{x/2√t}^{∞} f(−x + 2σ√t) e^{−σ²} dσ   (5.105)
In the special case where f(x) = u_0,
u(x, t) = (2u_0/√π) ∫_{0}^{x/2√t} exp(−σ²) dσ = u_0 erf[x/(2√t)]   (5.106)
where erf(p) is the Gauss error function defined as
erf(p) = (2/√π) ∫_{0}^{p} exp(−σ²) dσ   (5.107)
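The reduction of (5.105) to the error-function form (5.106) is easy to confirm numerically. The following sketch (illustrative only; u_0 = 1 and the evaluation point are assumed, and SciPy is used for the quadrature and for erf) evaluates (5.105) directly and compares it with (5.106):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

u0 = 1.0
f = lambda x: u0                       # the special case f(x) = u0

def u(x, t):
    # Direct numerical evaluation of Eq. (5.105)
    a = lambda s: f(x + 2 * s * np.sqrt(t)) * np.exp(-s**2)
    b = lambda s: f(-x + 2 * s * np.sqrt(t)) * np.exp(-s**2)
    i1, _ = quad(a, -x / (2 * np.sqrt(t)), np.inf)
    i2, _ = quad(b, x / (2 * np.sqrt(t)), np.inf)
    return (i1 - i2) / np.sqrt(np.pi)

x, t = 0.7, 0.2
print(u(x, t), u0 * erf(x / (2 * np.sqrt(t))))   # the two values agree, per Eq. (5.106)
```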
Example 5.9 (Steady conduction in a quadrant). Next we consider steady conduction in the
region x ≥ 0, y ≥ 0 in which the face at x = 0 is kept at zero temperature and the face at
y = 0 is a function of x: u = f (x). The solution is also assumed to be bounded.
u_xx + u_yy = 0   (5.108)
u(x, 0) = f(x)   (5.109)
u(0, y) = 0   (5.110)
Since u(0, y) = 0 the solution should take the form e^{−αy} sin αx, which is, according to our experience with separation of variables, a solution of the equation ∇²u = 0. We therefore assume a solution of the form
u(x, y) = ∫_{0}^{∞} B(α) e^{−αy} sin αx dα   (5.111)
with
B(α) = (2/π) ∫_{0}^{∞} f(ς) sin ας dς   (5.112)
The solution can then be written as
u(x, y) = (2/π) ∫_{ς=0}^{∞} ∫_{α=0}^{∞} f(ς) e^{−αy} sin αx sin ας dα dς   (5.113)
Using the trigonometric identity 2 sin αx sin ας = cos α(ς − x) − cos α(ς + x) and noting that
∫_{0}^{∞} e^{−αy} cos αβ dα = y/(β² + y²)   (5.114)
we find
u(x, y) = (y/π) ∫_{0}^{∞} f(ς) [1/((ς − x)² + y²) − 1/((ς + x)² + y²)] dς   (5.115)
Problem
Consider the transient heat conduction problem
u_t = u_xx + u_yy,   x ≥ 0, 0 ≤ y ≤ 1, t ≥ 0
with boundary and initial conditions
u(t, 0, y) = 0
u(t, x, 0) = 0
u(t, x, 1) = 0
u(0, x, y) = u_0
and u(t, x, y) is bounded.
Separate the problem into two problems, u(t, x, y) = v(t, x) w(t, y), and give appropriate boundary conditions. Show that the solution is given by
u(t, x, y) = (4/π) erf[x/(2√t)] Σ_{n=1}^{∞} [sin((2n − 1)πy)/(2n − 1)] exp[−(2n − 1)²π²t]
FURTHER READING
V. Arpaci, Conduction Heat Transfer. Reading, MA: Addison-Wesley, 1966.
J. W. Brown and R. V. Churchill, Fourier Series and Boundary Value Problems. 6th edition. New
York: McGraw-Hill, 2001.
C H A P T E R 6
Integral Transforms: The Laplace
Transform
Integral transforms are a powerful method of obtaining solutions to both ordinary and partial
differential equations. They are used to change ordinary differential equations into algebraic
equations and partial differential into ordinary differential equations. The general idea is to
multiply a function f (t) of some independent variable t (not necessarily time) by a Kernel
function K (t, s ) and integrate over some t space to obtain a function F(s ) of s which one hopes
is easier to solve. Of course one must then invert the process to find the desired function f(t). In general,
F(s) = ∫_{t=a}^{b} K(t, s) f(t) dt   (6.1)
6.1 THE LAPLACE TRANSFORM
A useful and widely used integral transform is the Laplace transform, defined as
L[f(t)] = F(s) = ∫_{t=0}^{∞} f(t) e^{−st} dt   (6.2)
Obviously, the integral must exist. The function f(t) must be sectionally continuous and of exponential order, which is to say |f(t)| ≤ M e^{kt} when t > 0 for some constants M and k. For example, neither the Laplace transform of t^{−1} nor that of exp(t²) exists.
The inversion formula is
L^{−1}[F(s)] = f(t) = (1/2πi) lim_{L→∞} ∫_{γ−iL}^{γ+iL} F(s) e^{ts} ds   (6.3)
in which γ – iL and γ + iL are complex numbers. We will put off using the inversion integral
until we cover complex variables. Meanwhile, there are many tables giving Laplace transforms
and inverses. We will now spend considerable time developing the theory.
6.2 SOME IMPORTANT TRANSFORMS
6.2.1 Exponentials
First consider the exponential function:
L[e^{−at}] = ∫_{t=0}^{∞} e^{−at} e^{−st} dt = ∫_{t=0}^{∞} e^{−(s+a)t} dt = 1/(s + a)   (6.4)
If a = 0, this reduces to
L[1] = 1/s   (6.5)
6.2.2 Shifting in the s-domain
L[e^{at} f(t)] = ∫_{t=0}^{∞} e^{−(s−a)t} f(t) dt = F(s − a)   (6.6)
6.2.3 Shifting in the time domain
Consider a function defined as
f(t) = 0,   t < a
f(t) = f(t − a),   t > a   (6.7)
Then
∫_{τ=0}^{∞} e^{−sτ} f(τ − a) dτ = ∫_{τ=0}^{a} 0 dτ + ∫_{τ=a}^{∞} e^{−sτ} f(τ − a) dτ   (6.8)
Let τ − a = t. Then
∫_{t=0}^{∞} e^{−s(t+a)} f(t) dt = F(s) e^{−as} = L[f(t − a)]   (6.9)
the shifted function described above.
6.2.4 Sine and cosine
Now consider the sine and cosine functions. We shall see in the next chapter (and you should already know) that
e^{ikt} = cos(kt) + i sin(kt)   (6.10)
Thus the Laplace transform is
L[e^{ikt}] = L[cos(kt)] + iL[sin(kt)] = 1/(s − ik) = (s + ik)/[(s + ik)(s − ik)] = s/(s² + k²) + i k/(s² + k²)   (6.11)
so
L[sin(kt)] = k/(s² + k²)   (6.12)
L[cos(kt)] = s/(s² + k²)   (6.13)
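These transform pairs can be verified by evaluating the defining integral (6.2) numerically. A brief sketch (not from the text; the values of k and s and the truncation of the infinite integral are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import quad

def laplace(f, s, T=40.0):
    # Numerical version of Eq. (6.2); T truncates the infinite integral (e^{-sT} is negligible here).
    val, _ = quad(lambda t: f(t) * np.exp(-s * t), 0.0, T, limit=500)
    return val

k, s = 3.0, 1.5
print(laplace(lambda t: np.sin(k * t), s), k / (s**2 + k**2))   # check of (6.12)
print(laplace(lambda t: np.cos(k * t), s), s / (s**2 + k**2))   # check of (6.13)
```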
6.2.5 Hyperbolic functions
Similarly for hyperbolic functions,
L[sinh(kt)] = L[(1/2)(e^{kt} − e^{−kt})] = (1/2)[1/(s − k) − 1/(s + k)] = k/(s² − k²)   (6.14)
Similarly,
L[cosh(kt)] = s/(s² − k²)   (6.15)
6.2.6 Powers of t: t^m
We shall soon see that the Laplace transform of t^m is
L[t^m] = Γ(m + 1)/s^{m+1},   m > −1   (6.16)
Using this together with the s-domain shifting result,
L[t^m e^{−at}] = Γ(m + 1)/(s + a)^{m+1}   (6.17)
Example 6.1. Find the inverse transform of the function
F(s) = 1/(s − 1)³
This is a function that is shifted in the s-domain and hence Eq. (6.6) is applicable. Noting that L^{−1}(1/s³) = t²/Γ(3) = t²/2 from Eq. (6.16),
f(t) = (t²/2) e^{t}
Or we could use Eq. (6.17) directly.
Example 6.2. Find the inverse transform of the function
F(s) = [3/(s² + 4)] e^{−s}
The inverse transform of
F(s) = 3/(s² + 4) = (3/2)[2/(s² + 4)]
is, according to Eq. (6.11),
f(t) = (3/2) sin(2t)
The exponential term implies shifting in the time domain by 1. Thus
f(t) = 0,   t < 1
f(t) = (3/2) sin[2(t − 1)],   t > 1
Example 6.3. Find the inverse transform of
F(s) = s/[(s − 2)² + 1]
The denominator is shifted in the s-domain. Thus we shift the numerator term and write F(s) as two terms
F(s) = (s − 2)/[(s − 2)² + 1] + 2/[(s − 2)² + 1]
Equations (6.6), (6.12), and (6.13) are applicable. The inverse transform of the first of these is a shifted cosine and the second is a shifted sine. Therefore each must be multiplied by exp(2t). The inverse transform is
f(t) = e^{2t} cos(t) + 2e^{2t} sin(t)
FIGURE 6.1: The Heaviside step
6.2.7 Heaviside step
A frequently useful function is the Heaviside step function, defined as
U_k(t) = 0,   0 < t < k
U_k(t) = 1,   k < t   (6.18)
It is shown in Fig. 6.1. The Laplace transform is
L[U_k(t)] = ∫_{t=k}^{∞} e^{−st} dt = (1/s) e^{−ks}   (6.19)
The Heaviside step (sometimes called the unit step) is useful for finding the Laplace transforms of periodic functions.
Example 6.4 (Periodic functions). For example, consider the periodic function shown in Fig. 6.2. It can be represented by an infinite series of shifted Heaviside functions as follows:
f(t) = U_0 − 2U_k + 2U_{2k} − 2U_{3k} + ··· = U_0 + Σ_{n=1}^{∞} (−1)ⁿ 2U_{nk}   (6.20)
FIGURE 6.2: A periodic square wave
FIGURE 6.3: The Dirac delta function
The Laplace transform is found term by term,
L[f(t)] = (1/s){1 − 2e^{−sk}[1 − e^{−sk} + e^{−2sk} − e^{−3sk} + ···]} = (1/s)[1 − 2e^{−sk}/(1 + e^{−sk})] = (1/s)(1 − e^{−sk})/(1 + e^{−sk})   (6.21)
6.2.8 The Dirac delta function
Consider a function defined by
lim_{h→0} (U_{t_0−h} − U_{t_0})/h = δ(t_0)   (6.22)
L[δ(t_0)] = e^{−st_0}   (6.23)
The function, without taking limits, is shown in Fig. 6.3.
6.2.9 Transforms of derivatives
L[df/dt] = ∫_{t=0}^{∞} (df/dt) e^{−st} dt = ∫_{t=0}^{∞} e^{−st} df   (6.24)
and integrating by parts,
L[df/dt] = f(t)e^{−st}|_{0}^{∞} + s ∫_{t=0}^{∞} f(t) e^{−st} dt = sF(s) − f(0)   (6.25)
To find the Laplace transform of the second derivative we let g(t) = f′(t). Taking the Laplace transform,
L[g′(t)] = sG(s) − g(0)
and with
G(s) = L[f′(t)] = sF(s) − f(0)
we find that
L[d²f/dt²] = s²F(s) − sf(0) − f′(0)   (6.26)
In general
L[dⁿf/dtⁿ] = sⁿF(s) − s^{n−1}f(0) − s^{n−2}f′(0) − ··· − d^{n−1}f/dt^{n−1}(0)   (6.27)
The Laplace transform of t^m may be found by using the gamma function,
L[t^m] = ∫_{t=0}^{∞} t^m e^{−st} dt   (6.28)
Letting x = st,
L[t^m] = ∫_{x=0}^{∞} (x/s)^m e^{−x} dx/s = (1/s^{m+1}) ∫_{x=0}^{∞} x^m e^{−x} dx = Γ(m + 1)/s^{m+1}   (6.29)
which is true for all m > −1, even for nonintegers.
6.2.10 Laplace Transforms of Integrals
L[∫_{τ=0}^{t} f(τ) dτ] = L[g(t)]
where dg/dt = f(t). Thus L[dg/dt] = sL[g(t)]. Hence
L[∫_{τ=0}^{t} f(τ) dτ] = (1/s) F(s)   (6.30)
6.2.11 Derivatives of Transforms
F(s) = ∫_{t=0}^{∞} f(t) e^{−st} dt   (6.31)
so
dF/ds = −∫_{t=0}^{∞} t f(t) e^{−st} dt   (6.32)
and in general
dⁿF/dsⁿ = L[(−t)ⁿ f(t)]   (6.33)
For example
L[t sin(kt)] = −d/ds [k/(s² + k²)] = 2sk/(s² + k²)²   (6.34)
6.3 LINEAR ORDINARY DIFFERENTIAL EQUATIONS WITH CONSTANT COEFFICIENTS
Example 6.5. A homogeneous linear ordinary differential equation
Consider the differential equation
y″ + 4y′ + 3y = 0
y(0) = 0
y′(0) = 2   (6.35)
Therefore
L[y″] = s²Y − sy(0) − y′(0) = s²Y − 2   (6.36)
L[y′] = sY − y(0) = sY   (6.37)
(s² + 4s + 3)Y = 2   (6.38)
Y = 2/[(s + 1)(s + 3)] = A/(s + 1) + B/(s + 3)   (6.39)
To solve for A and B, note that clearing fractions,
[A(s + 3) + B(s + 1)]/[(s + 1)(s + 3)] = 2/[(s + 1)(s + 3)]   (6.40)
Equating the numerators,
A + B = 0,  3A + B = 2:  A = 1,  B = −1   (6.41)
and from Eq. (6.8),
Y = 1/(s + 1) − 1/(s + 3)   (6.42)
y = e^{−t} − e^{−3t}   (6.43)
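A quick numerical check (not in the text) confirms that the result satisfies the original differential equation and both initial conditions:

```python
import numpy as np

# Verify that y = exp(-t) - exp(-3t) solves y'' + 4y' + 3y = 0 with y(0) = 0, y'(0) = 2.
t = np.linspace(0.0, 5.0, 7)
y   = np.exp(-t) - np.exp(-3 * t)
yp  = -np.exp(-t) + 3 * np.exp(-3 * t)
ypp =  np.exp(-t) - 9 * np.exp(-3 * t)
print(np.allclose(ypp + 4 * yp + 3 * y, 0.0))   # True
print(y[0], yp[0])                               # 0.0 and 2.0
```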
6.4 SOME IMPORTANT THEOREMS
6.4.1 Initial Value Theorem
lim_{s→∞} ∫_{t=0}^{∞} f′(t) e^{−st} dt = lim_{s→∞} [sF(s) − f(0)] = 0
Thus
lim_{s→∞} sF(s) = lim_{t→0} f(t)   (6.44)
6.4.2 Final Value Theorem
As s approaches zero the above integral approaches the limit as t approaches infinity minus f(0). Thus
lim_{s→0} sF(s) = lim_{t→∞} f(t)   (6.45)
6.4.3 Convolution
A very important property of Laplace transforms is the convolution integral. As we shall see
later, it allows us to write down solutions for very general forcing functions and also, in the
case of partial differential equations, to treat both time dependent forcing and time dependent
boundary conditions.
Consider the two functions f (t) and g (t). F(s ) = L[ f (t)] and G(s ) = L[g (t)]. Because
of the time shifting feature,
e^{−sτ} G(s) = L[g(t − τ)] = ∫_{t=0}^{∞} e^{−st} g(t − τ) dt   (6.46)
F(s)G(s) = ∫_{τ=0}^{∞} f(τ) e^{−sτ} G(s) dτ   (6.47)
But
e^{−sτ} G(s) = ∫_{t=0}^{∞} e^{−st} g(t − τ) dt   (6.48)
so that
F(s)G(s) = ∫_{t=0}^{∞} e^{−st} ∫_{τ=0}^{t} f(τ) g(t − τ) dτ dt   (6.49)
where we have used the fact that g(t − τ) = 0 when τ > t. The inverse transform of F(s)G(s) is
L^{−1}[F(s)G(s)] = ∫_{τ=0}^{t} f(τ) g(t − τ) dτ   (6.50)
6.5 PARTIAL FRACTIONS
In the example differential equation above we determined two roots of the polynomial in the
denominator, then separated the two roots so that the two expressions could be inverted in
forms that we already knew. The method of separating out the expressions 1/(s + 1) and
1/(s + 3) is known as the method of partial fractions. We now develop the method into a more
user friendly form.
6.5.1 Nonrepeating Roots
Suppose we wish to invert the transform F(s ) = p(s )/q (s ), where p(s ) and q (s ) are polynomi-
als. We first note that the inverse exists if the degree of p(s ) is lower than that of q (s ). Suppose
q (s ) can be factored and a nonrepeated root is a.
F(s) = φ(s)/(s − a)   (6.51)
According to the theory of partial fractions there exists a constant C such that
φ(s)/(s − a) = C/(s − a) + H(s)   (6.52)
Multiply both sides by (s − a) and take the limit as s → a and the result is
C = φ(a)   (6.53)
Note also that the limit of
(s − a) p(s)/q(s)
as s approaches a is simply p(a)/q′(a).   (6.54)
If q(s) has no repeated roots and is of the form
q(s) = (s − a_1)(s − a_2)(s − a_3)···(s − a_n)   (6.55)
then
L^{−1}[p(s)/q(s)] = Σ_{m=1}^{n} [p(a_m)/q′(a_m)] e^{a_m t}   (6.56)
Example 6.6. Find the inverse transform of
F(s) = (4s + 1)/[(s² + s)(4s² − 1)]
First separate out the roots of q(s):
q(s) = 4s(s + 1)(s + 1/2)(s − 1/2)
q(s) = 4s⁴ + 4s³ − s² − s
q′(s) = 16s³ + 12s² − 2s − 1
Thus
q′(0) = −1,  p(0) = 1
q′(−1) = −3,  p(−1) = −3
q′(−1/2) = 1,  p(−1/2) = −1
q′(1/2) = 3,  p(1/2) = 3
f(t) = e^{−t} − e^{−t/2} + e^{t/2} − 1
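The sum-over-roots formula (6.56) is easy to apply numerically. The sketch below (illustrative only; NumPy's poly1d objects are used simply as a convenient way to evaluate p, q′, and the roots) reproduces Example 6.6 and compares the result with the closed form quoted above:

```python
import numpy as np

p = np.poly1d([4, 1])                 # p(s) = 4s + 1
q = np.poly1d([4, 4, -1, -1, 0])      # q(s) = (s^2 + s)(4s^2 - 1) = 4s^4 + 4s^3 - s^2 - s
roots = q.roots                       # 0, -1, 1/2, -1/2
dq = q.deriv()

def f_series(t):
    # Eq. (6.56): f(t) = sum over roots of p(a)/q'(a) * exp(a t)
    return sum(p(a) / dq(a) * np.exp(a * t) for a in roots).real

def f_closed(t):
    return np.exp(-t) - np.exp(-t / 2) + np.exp(t / 2) - 1

t = np.linspace(0, 2, 5)
print(np.allclose([f_series(ti) for ti in t], f_closed(t)))   # True
```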
Example 6.7. Solve the differential equation
y″ − y = 1 − e^{3t}
subject to initial conditions
y′(0) = y(0) = 0
Taking the Laplace transform,
(s² − 1)Y = 1/s − 1/(s − 3)
Y(s) = 1/[s(s² − 1)] − 1/[(s − 3)(s² − 1)] = 1/[s(s + 1)(s − 1)] − 1/[(s − 3)(s + 1)(s − 1)]
First find the inverse transform of the first term.
q = s³ − s,  q′ = 3s² − 1
q′(0) = −1,  p(0) = 1
q′(1) = 2,  p(1) = 1
q′(−1) = 2,  p(−1) = 1
The inverse transform is
−1 + (1/2)e^{t} + (1/2)e^{−t}
Next consider the second term.
q = s³ − 3s² − s + 3,  q′ = 3s² − 6s − 1
q′(3) = 8,  p(3) = 1
q′(1) = −4,  p(1) = 1
q′(−1) = 8,  p(−1) = 1
The inverse transform is
(1/8)e^{3t} − (1/4)e^{t} + (1/8)e^{−t}
Thus
y(t) = (3/4)e^{t} + (3/8)e^{−t} − (1/8)e^{3t} − 1
6.5.2 Repeated Roots
We now consider the case when q(s) has a repeated root (s − a)^{n+1}. Then
F(s) = p(s)/q(s) = φ(s)/(s − a)^{n+1} = A_0/(s − a) + A_1/(s − a)² + ··· + A_n/(s − a)^{n+1} + H(s),   n = 1, 2, 3, . . .   (6.57)
It follows that
φ(s) = A_0(s − a)ⁿ + ··· + A_m(s − a)^{n−m} + ··· + A_n + (s − a)^{n+1} H(s)   (6.58)
By letting s → a we see that A_n = φ(a). To find the remaining A's, differentiate φ (n − r) times and take the limit as s → a.
φ^{(n−r)}(a) = (n − r)! A_r   (6.59)
Thus
F(s) = Σ_{r=0}^{n} [φ^{(n−r)}(a)/(n − r)!] [1/(s − a)^{r+1}] + H(s)   (6.60)
If the inverse transform of H(s) (the part containing no repeated roots) is h(t), it follows from the shifting theorem and the inverse transform of 1/s^m that
f(t) = Σ_{r=0}^{n} [φ^{(n−r)}(a)/((n − r)! r!)] t^r e^{at} + h(t)   (6.61)
Example 6.8. Inverse transform with repeated roots
F(s) = s/[(s + 2)³(s + 1)] = A_0/(s + 2) + A_1/(s + 2)² + A_2/(s + 2)³ + C/(s + 1)
Multiply by (s + 2)³:
s/(s + 1) = A_0(s + 2)² + A_1(s + 2) + A_2 + C(s + 2)³/(s + 1) = φ(s)
Take the limit as s → −2,
A_2 = 2
Differentiate once and twice:
φ′(s) = 1/(s + 1)²
φ″(s) = −2/(s + 1)³
so that
φ′(−2) = 1 = A_1
φ″(−2) = 2,  A_0 = φ″(−2)/2! = 1
To find C, multiply by (s + 1) and take s = −1 (in the original equation): C = −1.
Thus
F(s) = 1/(s + 2) + 1/(s + 2)² + 2/(s + 2)³ − 1/(s + 1)
and noting the shifting theorem and the theorem on t^m,
f(t) = e^{−2t} + t e^{−2t} + t² e^{−2t} − e^{−t}
6.5.3 Quadratic Factors: Complex Roots
If q(s) has complex roots and all the coefficients are real, this part of q(s) can always be written in the form
(s − a)² + b²   (6.62)
This is a shifted form of
s² + b²   (6.63)
This factor in the denominator leads to sines or cosines.
Example 6.9. Quadratic factors
Find the inverse transform of
F(s) = (2s − 1)/(s² + 2s + 5) = 2s/[(s + 1)² + 4] − 1/[(s + 1)² + 4]
Because of the shifted s in the denominator the numerator of the first term must also be shifted to be consistent. Thus we rewrite as
F(s) = 2(s + 1)/[(s + 1)² + 4] − 3/[(s + 1)² + 4]
The inverse transform of
2s/(s² + 4)
is
2 cos(2t)
and the inverse of
−3/(s² + 4) = −(3/2)[2/(s² + 4)]
is
−(3/2) sin(2t)
Thus
f(t) = 2e^{−t} cos(2t) − (3/2)e^{−t} sin(2t)
Tables of Laplace transforms and inverse transforms can be found in many books such as the book by Arpaci and in the Schaum's Outline referenced below. A brief table is given here in Appendix A.
Problems
1. Solve the problem
y‴ − 2y″ + 5y′ = 0
y(0) = y′(0) = 0
y″(0) = 1
using Laplace transforms.
2. Find the general solution using Laplace transforms:
y″ + k²y = a
3. Use convolution to find the solution to the following problem for general g(t). Then find the solution for g(t) = t².
y″ + 2y′ + y = g(t)
y′(0) = y(0) = 0
4. Find the inverse transforms.
(a) F(s) = (s + c)/[(s + a)(s + b)²]
(b) F(s) = 1/[(s² + a²)s³]
(c) F(s) = (s² − a²)/(s² + a²)²
5. Find the periodic function whose Laplace transform is
F(s) = (1/s²)[(1 − e^{−s})/(1 + e^{−s})]
and plot your results for f(t) for several periods.
FURTHER READING
M. Abramowitz and I. A. Stegun, Eds., Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover Publications, 1974.
V. S. Arpaci, Conduction Heat Transfer. Reading, MA: Addison-Wesley, 1966.
R. V. Churchill, Operational Mathematics, 3rd edition. New York: McGraw-Hill, 1972.
I. H. Sneddon, The Use of Integral Transforms. New York: McGraw-Hill, 1972.
C H A P T E R 7
Complex Variables and the Laplace
Inversion Integral
7.1 BASIC PROPERTIES
A complex number z can be defined as an ordered pair of real numbers, say x and y, where x is the real part of z and y is the real value of the imaginary part:
z = x + iy   (7.1)
where i = √(−1).
I am going to assume that the reader is familiar with the elementary properties of addition, subtraction, multiplication, etc. In general, complex numbers obey the same rules as real numbers. For example
(x_1 + iy_1)(x_2 + iy_2) = x_1x_2 − y_1y_2 + i(x_1y_2 + x_2y_1)   (7.2)
The conjugate of z is
z̄ = x − iy   (7.3)
It is often convenient to represent complex numbers on Cartesian coordinates with x and y as the axes. In such a case, we can represent the complex number (or variable) z as
z = x + iy = r(cos θ + i sin θ)   (7.4)
as shown in Fig. 7.1. We also define the exponential function of a complex number as cos θ + i sin θ = e^{iθ}, which is suggested by replacing x in the series e^x = Σ_{n=0}^{∞} xⁿ/n! by iθ.
Accordingly,
e^{iθ} = cos θ + i sin θ   (7.5)
and
e^{−iθ} = cos θ − i sin θ   (7.6)
FIGURE 7.1: Polar representation of a complex variable z
Addition gives
cos θ = (e^{iθ} + e^{−iθ})/2 = cosh(iθ)   (7.7)
and subtraction gives
sin θ = (e^{iθ} − e^{−iθ})/(2i) = −i sinh(iθ)   (7.8)
Note that
cosh z = (1/2)(e^{x+iy} + e^{−x−iy}) = (1/2){e^x[cos y + i sin y] + e^{−x}[cos y − i sin y]} = [(e^x + e^{−x})/2] cos y + i[(e^x − e^{−x})/2] sin y = cosh x cos y + i sinh x sin y   (7.9)
The reader may show that
sinh z = sinh x cos y + i cosh x sin y   (7.10)
Trigonometric functions are defined in the usual way:
sin z = (e^{iz} − e^{−iz})/(2i)
cos z = (e^{iz} + e^{−iz})/2
tan z = sin z / cos z   (7.11)
Two complex numbers are equal if and only if their real parts are equal and their imaginary parts are equal.
Noting that
z² = r²(cos²θ − sin²θ + i 2 sin θ cos θ) = r²[(1/2)(1 + cos 2θ) − (1/2)(1 − cos 2θ) + i sin 2θ] = r²[cos 2θ + i sin 2θ]
we deduce that
z^{1/2} = r^{1/2}(cos θ/2 + i sin θ/2)   (7.12)
In fact in general
z^{m/n} = r^{m/n}[cos(mθ/n) + i sin(mθ/n)]   (7.13)
Example 7.1. Find i^{1/2}.
Note that when z = i, r = 1 and θ = π/2, with m = 1 and n = 2. Thus
i^{1/2} = 1^{1/2}[cos(π/4) + i sin(π/4)] = (1/√2)(1 + i)
Note, however, that if
w = cos(π/4 + π) + i sin(π/4 + π)
then w² = i. Hence (1/√2)(−1 − i) is also a solution. The roots are shown in Fig. 7.2.
FIGURE 7.2: Roots of i^{1/2}
FIGURE 7.3: The roots of 1^{1/2}
In fact in this example θ is also π/2 + 2kπ. Using the fact that
z = r e^{i(θ+2kπ)},   k = 1, 2, 3, . . .
it is easy to show that
z^{1/n} = r^{1/n} [cos((θ + 2πk)/n) + i sin((θ + 2πk)/n)]   (7.14)
This is De Moivre's theorem. For example when n = 2 there are two solutions and when n = 3 there are three solutions. These solutions are called branches of z^{1/n}. A region in which the function is single valued is indicated by forming a branch cut, which is a line stretching from the origin outward such that the region between the positive real axis and the line contains only one solution. In the above example, a branch cut might be a line from the origin out the negative real axis.
Example 7.2. Find 1^{1/2} and represent it on the polar diagram.
1^{1/2} = 1·[cos(θ/2 + kπ) + i sin(θ/2 + kπ)]
and since θ = 0 in this case
1^{1/2} = cos kπ + i sin kπ
There are two distinct roots, at z = +1 for k = 0 and −1 for k = 1. The two values are shown in Fig. 7.3. The two solutions are called branches of √1, and an appropriate branch cut might be from the origin out the positive imaginary axis, leaving as the single solution 1.
Example 7.3. Find the roots of (1 + i)^{1/4}.
Making use of Eq. (7.13) with m = 1 and n = 4, r = √2, θ = π/4, we find that
(1 + i)^{1/4} = (√2)^{1/4} [cos(π/16 + 2kπ/4) + i sin(π/16 + 2kπ/4)],   k = 0, 1, 2, 3
FIGURE 7.4: The roots of (1 + i)^{1/4}
Hence, the four roots are as follows:
(1 + i)^{1/4} = 2^{1/8}[cos(π/16) + i sin(π/16)]
= 2^{1/8}[cos(π/16 + π/2) + i sin(π/16 + π/2)]
= 2^{1/8}[cos(π/16 + π) + i sin(π/16 + π)]
= 2^{1/8}[cos(π/16 + 3π/2) + i sin(π/16 + 3π/2)]
The locations of the roots are shown in Fig. 7.4.
The natural logarithm can be defined by writing z = r e^{iθ} for −π ≤ θ < π and noting that
ln z = ln r + iθ   (7.15)
and since z is not affected by adding 2nπ to θ this expression can also be written as
ln z = ln r + i(θ + 2nπ),  with n = 0, 1, 2, . . .   (7.16)
When n = 0 we obtain the principal branch. All of the single valued branches are analytic for r > 0 and θ_0 < θ < θ_0 + 2π.
7.1.1 Limits and Differentiation of Complex Variables: Analytic Functions
Consider a function of a complex variable f (z). We generally write
f (z) = u(x, y) + iv(x, y)
where u and v are real functions of x and y. The derivative of a complex variable is defined as follows:
f′(z) = lim_{Δz→0} [f(z + Δz) − f(z)]/Δz   (7.17)
or
f′(z) = lim_{Δx,Δy→0} [u(x + Δx, y + Δy) + iv(x + Δx, y + Δy) − u(x, y) − iv(x, y)]/(Δx + iΔy)   (7.18)
Taking the limit on Δx first, we find that
f′(z) = lim_{Δy→0} [u(x, y + Δy) + iv(x, y + Δy) − u(x, y) − iv(x, y)]/(iΔy)   (7.19)
and now taking the limit on Δy,
f′(z) = (1/i) ∂u/∂y + ∂v/∂y = ∂v/∂y − i ∂u/∂y   (7.20)
Conversely, taking the limit on Δy first,
f′(z) = lim_{Δx→0} [u(x + Δx, y) + iv(x + Δx, y) − u(x, y) − iv(x, y)]/Δx = ∂u/∂x + i ∂v/∂x   (7.21)
The derivative exists only if
∂u/∂x = ∂v/∂y  and  ∂u/∂y = −∂v/∂x   (7.22)
These are called the Cauchy–Riemann conditions, and in this case the function is said to be analytic. If a function is analytic for all x and y it is entire.
Polynomials are entire, as are trigonometric and hyperbolic functions and exponential functions. We note in passing that analytic functions share the property that both real and imaginary parts satisfy the equation ∇²u = ∇²v = 0 in two-dimensional space. It should be obvious at this point that this is important in the solution of the steady-state diffusion equation
in two dimensions. We mention here that it is also important in the study of incompressible, inviscid fluid mechanics and in other areas of science and engineering. You will undoubtedly meet with it in some of your courses.
FIGURE 7.5: Integration of an analytic function along two paths
Example 7.4.
f = z²,  f′ = 2z
f = sin z,  f′ = cos z
f = e^{az},  f′ = a e^{az}
Integrals
Consider the line integral along a curve C defined as x = 2y from the origin to the point x = 2, y = 1, path OB in Fig. 7.5.
∫_C z² dz
We can write
z² = x² − y² + 2ixy = 3y² + 4y²i  and  dz = (2 + i) dy
Thus
∫_{y=0}^{1} (3y² + 4y²i)(2 + i) dy = (3 + 4i)(2 + i) ∫_{y=0}^{1} y² dy = 2/3 + (11/3)i
On the other hand, if we perform the same integral along the x axis to x = 2 and then along the vertical line x = 2 to the same point, path OAB in Fig. 7.5, we find that
∫_{x=0}^{2} x² dx + ∫_{y=0}^{1} (2 + iy)² i dy = 8/3 + i ∫_{y=0}^{1} (4 − y² + 4iy) dy = 2/3 + (11/3)i
This happened because the function z2 is analytic within the region between the two curves.
In general, if a function is analytic in the region contained between the curves, the integral
∫_C f(z) dz   (7.23)
is independent of the path of C. Since any two integrals are the same, and since if we integrate
the first integral along BO only the sign changes, we see that the integral around the closed
contour is zero.
∮_C f(z) dz = 0   (7.24)
This is called the Cauchy–Goursat theorem and is true as long as the region R within the
closed curve C is simply connected and the function is analytic everywhere within the region. A
simply connected region R is one in which every closed curve within it encloses only points in R.
The theorem can be extended to allow for multiply connected regions. Fig. 7.6 shows
a doubly connected region. The method is to make a cut through part of the region and to
integrate counterclockwise around C1, along the path C2 through the region, clockwise around
the interior curve C3, and back out along C4. Clearly, the integral along C2 and C4 cancels, so
that
∮_{C_1} f(z) dz + ∮_{C_3} f(z) dz = 0   (7.25)
where the first integral is counterclockwise and second clockwise.
7.1.2 The Cauchy Integral Formula
Now consider the following integral:
∮_C f(z) dz/(z − z_0)   (7.26)
If the function f(z) is analytic then the integrand is also analytic at all points except z = z_0. We now form a circle C_2 of radius r_0 around the point z = z_0 that is small enough to fit inside
FIGURE 7.6: A doubly connected region
FIGURE 7.7: Derivation of Cauchy's integral formula
the curve C_1 as shown in Fig. 7.7. Thus we can write
∮_{C_1} f(z)/(z − z_0) dz − ∮_{C_2} f(z)/(z − z_0) dz = 0   (7.27)
where both integrations are counterclockwise. Let r_0 now approach zero so that in the second integral z approaches z_0, z − z_0 = r_0 e^{iθ}, and dz = r_0 i e^{iθ} dθ. The second term is
−∮_{C_2} [f(z_0)/(r_0 e^{iθ})] r_0 i e^{iθ} dθ = −f(z_0) i ∫_{θ=0}^{2π} dθ = −2πi f(z_0)
Thus, Cauchy's integral formula is
f(z_0) = (1/2πi) ∮_C f(z)/(z − z_0) dz   (7.28)
where the integral is taken counterclockwise and f(z) is analytic inside C.
We can formally differentiate the above equation n times with respect to z_0 and find an extension as
f^{(n)}(z_0) = (n!/2πi) ∮_C f(z)/(z − z_0)^{n+1} dz   (7.29)
Problems
1. Show that
(a) sinh z = sinh x cos y + i cosh x sin y
(b) cos z = cos x cosh y − i sin x sinh y
and show that each is entire.
2. Find all of the values of
(a) (−1 + i√3)^{3/2}
(b) 8^{1/6}
3. Find all the roots of the equation
sin z = cosh 4
4. Find all the zeros of
(a) sinh z
(b) cosh z
C H A P T E R 8
Solutions with Laplace Transforms
In this chapter, we present detailed solutions of some boundary value problems using the Laplace
transform method. Problems in both mechanical vibrations and diffusion are presented along
with the details of the inversion method.
8.1 MECHANICAL VIBRATIONS
Example 8.1. Consider an elastic bar with one end of the bar fixed and a constant force F per
unit area at the other end acting parallel to the bar. The appropriate partial differential equation
and boundary and initial conditions for the displacement y(x, t) are as follows:
y_ττ = y_ζζ,   0 < ζ < 1,  τ > 0
y(ζ, 0) = y_τ(ζ, 0) = 0
y(0, τ) = 0
y_ζ(1, τ) = F/E = g
We obtain the Laplace transform of the equation and boundary conditions as
s²Y = Y_ζζ
Y(s, 0) = 0
Y_ζ(s, 1) = g/s
Solving the differential equation for Y(s, ζ),
Y(s) = A sinh(ζs) + B cosh(ζs)
Applying the boundary conditions we find that B = 0 and
g/s = As cosh s
A = g/(s² cosh s)
Y(s) = g sinh(ζs)/(s² cosh s)
Since (1/s) sinh(ζs) = ζ + s²ζ³/3! + s⁴ζ⁵/5! + ··· , the function (1/s) sinh(ζs) is analytic and Y(s) can be written as the ratio of two analytic functions
Y(s) = [(1/s) sinh(ζs)] / (s cosh s)
Y(s) therefore has a simple pole at s = 0 and the residue there is
R(s = 0) = lim_{s→0} sY(s) = lim_{s→0} g[ζ + s²ζ³/3! + ···]/cosh s = gζ
The remaining poles are the singularities of cosh s. But cosh s = cosh x cos y + i sinh x sin y, so the zeros of this function are at x = 0 and cos y = 0.
Hence, s_n = i(2n − 1)π/2. The residues at these points are
R(s = s_n) = lim_{s→s_n} {g sinh(ζs)/[s d/ds(s cosh s)]} e^{sτ} = (g/s_n²) [sinh(ζs_n)/sinh s_n] e^{s_n τ}   (n = ±1, ±2, ±3, . . .)
Since
sinh[i(2n − 1)πζ/2] = i sin[(2n − 1)πζ/2]
and
sin[(2n − 1)π/2] = (−1)^{n+1}
we have
R(s = s_n) = −{g i sin[(2n − 1)πζ/2] / ([(2n − 1)π/2]² i sin[(2n − 1)π/2])} exp[i(2n − 1)πτ/2]
The exponential function can be written as
exp[i(2n − 1)πτ/2] = cos[(2n − 1)πτ/2] + i sin[(2n − 1)πτ/2]
Note that for the poles on the negative imaginary axis (n < 0) this expression can be written as
exp[i(2m − 1)πτ/2] = cos[(2m − 1)πτ/2] − i sin[(2m − 1)πτ/2]
where m = −n > 0. This corresponds to the conjugate poles.
Thus for each of the sets of poles we have
R(s = s_n) = [4g(−1)ⁿ/(π²(2n − 1)²)] sin[(2n − 1)πζ/2] exp[(2n − 1)πτi/2]
Now adding the residues corresponding to each pole and its conjugate we find that the final solution is as follows:
y(ζ, τ) = g{ζ + (8/π²) Σ_{n=1}^{∞} [(−1)ⁿ/(2n − 1)²] sin[(2n − 1)πζ/2] cos[(2n − 1)πτ/2]}
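The series just obtained converges quickly and is easy to evaluate. The following sketch (illustrative only; g = 1 and the truncation level are assumed values) sums the partial series and checks that the bar has zero displacement at τ = 0, as the initial condition requires:

```python
import numpy as np

g, Nterms = 1.0, 2000

def y(zeta, tau):
    # Partial sum of the constant-force solution for the elastic bar (Example 8.1)
    n = np.arange(1, Nterms + 1)
    k = (2 * n - 1) * np.pi / 2
    series = np.sum((-1) ** n / (2 * n - 1) ** 2 * np.sin(k * zeta) * np.cos(k * tau))
    return g * (zeta + 8 / np.pi**2 * series)

# At tau = 0 the bar has not yet moved, so y should be approximately zero (up to truncation).
for zeta in (0.25, 0.5, 0.75, 1.0):
    print(zeta, y(zeta, 0.0))
```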
Suppose that instead of a constant force at ζ = 1, we allow g to be a function of τ. In this case, the Laplace transform of y(ζ, τ) takes the form
Y(ζ, s) = G(s) sinh(ζs)/(s cosh s)
The simple pole with residue gζ is not present. However, the other poles are still at the same s_n values. The residues at each of the conjugate poles of the function
F(s) = sinh(ζs)/(s cosh s)
are
[2(−1)ⁿ/(π(2n − 1))] sin[(2n − 1)πζ/2] sin[(2n − 1)πτ/2] = f(ζ, τ)
According to the convolution theorem,
y(ζ, τ) = ∫_{τ′=0}^{τ} f(ζ, τ − τ′) g(τ′) dτ′
y(ζ, τ) = (4/π) Σ_{n=1}^{∞} [(−1)ⁿ/(2n − 1)] sin[(2n − 1)πζ/2] ∫_{τ′=0}^{τ} g(τ − τ′) sin[(2n − 1)πτ′/2] dτ′
In the case that g = constant, integration recovers the previous equation.
Example 8.2. An infinitely long string is initially at rest when the end at x = 0 undergoes
a transverse displacement y(0, t) = f (t). The displacement is described by the differential
book
Mobk070
March 22, 2007
11:7
124 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS
equation and boundary conditions as follows:
∂²y/∂t² = ∂²y/∂x²
y(x, 0) = y_t(x, 0) = 0
y(0, t) = f(t)
y is bounded
Taking the Laplace transform with respect to time and applying the initial conditions yields
s²Y(x, s) = d²Y(x, s)/dx²
The solution may be written in terms of exponential functions
Y(x, s) = A e^{−sx} + B e^{sx}
In order for the solution to be bounded, B = 0. Applying the condition at x = 0 we find
A = F(s)
where F(s) is the Laplace transform of f(t).
Writing the solution in the form
Y(x, s) = s F(s) e^{−sx}/s
and noting that the inverse transform of e^{−sx}/s is the Heaviside step function U_x(t), where
U_x(t) = 0,   t < x
U_x(t) = 1,   t > x
and that the inverse transform of s F(s) is f′(t), we find using convolution that
y(x, t) = ∫_{μ=0}^{t} f′(t − μ) U_x(μ) dμ = f(t − x),   x < t
        = 0,   x > t
For example, if f(t) = sin ωt,
y(x, t) = sin ω(t − x),   x < t
        = 0,   x > t
Problems
1. Solve the above vibration problem when
y(0, τ ) = 0
y(1, τ ) = g (τ )
Hint: To make use of convolution see Example 8.3.
2. Solve the problem
∂²y/∂t² = ∂²y/∂x²
yx(0, t) = y(x, 0) = yt(x, 0) = 0
y(1, t) = h
yx(0, t) = y(x, 0) = yt(x, 0) = 0
y(1, t) = h
using the Laplace transform method.
8.2 DIFFUSION OR CONDUCTION PROBLEMS
We now consider the conduction problem
Example 8.3.
uτ = uς ς
u(1, τ ) = f (τ )
u(0, τ ) = 0
u(ς, 0) = 0
Taking the Laplace transform of the equation and boundary conditions and noting that u(ζ, 0) = 0,
s U(ζ, s) = U_ζζ
U(0, s) = 0
U(1, s) = F(s)
Solution of the differential equation yields
U = A sinh(√s ζ) + B cosh(√s ζ)
The first condition implies that B = 0 and the second gives
F(s) = A sinh(√s)
and so
U = F(s) sinh(√s ζ)/sinh(√s).
If f(τ) = 1, F(s) = 1/s, and a particular solution, V, is
V = sinh(√s ζ) / ( s sinh √s )
where
v = L^{-1}{V(s)}
Now,
sinh(√s ζ)/sinh(√s) = [ ζ√s + (ζ√s)³/3! + (ζ√s)⁵/5! + ··· ] / [ √s + (√s)³/3! + (√s)⁵/5! + ··· ]
and so there is a simple pole of V e^{sτ} at s = 0. Also, since sinh(ζ√s) is not necessarily zero when sinh√s = 0, there are simple poles at sinh√s = 0, or s = −n²π². The residue at the pole s = 0 is
lim_{s→0} s V(s) e^{sτ} = ζ
and since V(s) e^{sτ} has the form P(s)/Q(s), the residue of the pole at −n²π² is
[ P(ζ, −n²π²)/Q′(−n²π²) ] e^{−n²π²τ} = [ sinh(ζ√s) / ( (√s/2) cosh√s + sinh√s ) ] e^{−n²π²τ} |_{s = −n²π²}
 = [ 2 sin(nπζ)/(nπ cos(nπ)) ] e^{−n²π²τ}
The solution for v(ζ, τ) is then
v(ζ, τ) = ζ + Σ_{n=1}^∞ [ 2(−1)^n/(nπ) ] e^{−n²π²τ} sin(nπζ)
The solution for the general case as originally stated with u(1, τ) = f(τ) is obtained by first differentiating the equation for v(ζ, τ) and then noting the following:
U(ζ, s) = s F(s) sinh(ζ√s)/( s sinh√s )
and
L{ f′(τ) } = s F(s) − f(τ = 0)
so that
U(ζ, s) = f(τ = 0) V(ζ, s) + L{ f′ }(s) V(ζ, s)
Consequently
u(ζ, τ) = f(τ = 0) v(ζ, τ) + ∫_{τ′=0}^{τ} f′(τ − τ′) v(ζ, τ′) dτ′
 = ζ f(τ) + (2 f(0)/π) Σ_{n=1}^∞ [ (−1)^n/n ] e^{−n²π²τ} sin(nπζ)
   + (2/π) Σ_{n=1}^∞ [ (−1)^n/n ] sin(nπζ) ∫_{τ′=0}^{τ} f′(τ − τ′) e^{−n²π²τ′} dτ′
This series converges rapidly for large values of τ . However for small values of τ , it
converges slowly. There is another form of solution that converges rapidly for small τ .
The Laplace transform of v(ζ, τ) can be written as
sinh(ζ√s)/( s sinh√s ) = ( e^{ζ√s} − e^{−ζ√s} ) / [ s( e^{√s} − e^{−√s} ) ]
 = (1/s) e^{−(1−ζ)√s} ( 1 − e^{−2ζ√s} ) / ( 1 − e^{−2√s} )
 = (1/s) [ e^{−(1−ζ)√s} − e^{−(1+ζ)√s} ] [ 1 + e^{−2√s} + e^{−4√s} + e^{−6√s} + ··· ]
 = (1/s) Σ_{n=0}^∞ [ e^{−(1+2n−ζ)√s} − e^{−(1+2n+ζ)√s} ]
−6
The inverse Laplace transform of e^{−k√s}/s is the complementary error function, defined by
erfc( k/(2√τ) ) = 1 − (2/√π) ∫_{x=0}^{k/(2√τ)} e^{−x²} dx
Thus we have
v(ζ, τ) = Σ_{n=0}^∞ [ erfc( (1 + 2n − ζ)/(2√τ) ) − erfc( (1 + 2n + ζ)/(2√τ) ) ]
and this series converges rapidly for small values of τ .
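The two representations of v(ζ, τ) can be compared numerically. The sketch below is an illustration (not part of the original text) with arbitrarily chosen truncation levels; both sums should agree at any τ.

import numpy as np
from scipy.special import erfc

def v_large_tau(zeta, tau, N=200):
    # v = zeta + sum_n 2(-1)^n/(n pi) exp(-n^2 pi^2 tau) sin(n pi zeta)
    n = np.arange(1, N + 1)
    return zeta + np.sum(2 * (-1.0) ** n / (n * np.pi)
                         * np.exp(-n ** 2 * np.pi ** 2 * tau)
                         * np.sin(n * np.pi * zeta))

def v_small_tau(zeta, tau, N=50):
    # v = sum_n [erfc((1+2n-zeta)/(2 sqrt(tau))) - erfc((1+2n+zeta)/(2 sqrt(tau)))]
    n = np.arange(0, N)
    return np.sum(erfc((1 + 2 * n - zeta) / (2 * np.sqrt(tau)))
                  - erfc((1 + 2 * n + zeta) / (2 * np.sqrt(tau))))

for tau in (0.01, 0.1, 1.0):
    print(tau, v_large_tau(0.5, tau), v_small_tau(0.5, tau))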
Example 8.4. Next we consider a conduction problem with a convective boundary condition:
uτ = uς ς
u(τ, 0) = 0
uς (τ, 1) + Hu(τ, 1) = 0
u(0, ς ) = ς
Taking the Laplace transform
s U − ζ = U_ζζ
U(0, s) = 0
U_ζ(1, s) + H U(1, s) = 0
The differential equation has a homogeneous solution
U_h = A cosh(√s ζ) + B sinh(√s ζ)
and a particular solution
U_p = ζ/s
so that
U = ζ/s + A cosh(√s ζ) + B sinh(√s ζ)
Applying the boundary conditions, we find A = 0 and
B = −(1 + H) / { s [ √s cosh(√s) + H sinh(√s) ] }
The Laplace transform of the solution is as follows:
U = ζ/s − (1 + H) sinh(√s ζ) / { s [ √s cosh(√s) + H sinh(√s) ] }
The inverse transform of the first term is simply ζ. For the second term, we must first find the poles. There is an isolated pole at s = 0. To obtain the residue of this pole note that
lim_{s→0} [ −(1 + H) sinh(ζ√s) / ( √s cosh√s + H sinh√s ) ] e^{sτ} = lim_{s→0} [ −(1 + H)(ζ√s + ···) / ( √s + H(√s + ···) ) ] = −ζ
canceling the first residue. To find the remaining residues let √s = x + iy. Then
(x + iy)[cosh x cos y + i sinh x sin y] + H[sinh x cos y + i cosh x sin y] = 0
Setting real and imaginary parts equal to 0 yields
x cosh x cos y − y sinh x sin y + H sinh x cos y = 0
and
y cosh x cos y + x sinh x sin y + H cosh x sin y = 0
which yields
x = 0
y cos y + H sin y = 0
The solution for the second term of U is
lim_{s→−y²} ( s + y² )(1 + H) sinh(√s ζ) e^{sτ} / { s [ √s cosh(√s) + H sinh(√s) ] }
or
[ P(ζ, s) e^{sτ} / Q′(ζ, s) ]_{s = −y²}
where
Q = s [ √s cosh√s + H sinh√s ]
Q′ = (3/2)√s cosh√s + (s/2) sinh√s + H sinh√s + (H√s/2) cosh√s
Evaluating at s = −y_n² (so that √s = i y_n) and using y_n cos y_n + H sin y_n = 0,
Q′(s = −y_n²) = { [ H(H + 1) + y_n² ] / (2H) } i y_n cos(y_n)
while
P(s = −y_n²) = −(1 + H) i sin(y_n ζ) e^{−y_n² τ}
so that
u_n(ζ, τ) = −2H(H + 1) sin(y_n ζ) e^{−y_n² τ} / { [ H(H + 1) + y_n² ] y_n cos(y_n) }
         = [ 2(H + 1) / ( H(H + 1) + y_n² ) ] [ sin(y_n ζ)/sin(y_n) ] e^{−y_n² τ}
The solution is therefore
u(ζ, τ) = Σ_{n=1}^∞ [ 2(H + 1) / ( H(H + 1) + y_n² ) ] [ sin(y_n ζ)/sin(y_n) ] e^{−y_n² τ}
Note that as a partial check on this solution, we can evaluate the result when H → ∞ as
u(ζ, τ) = Σ_{n=1}^∞ [ −2/( y_n cos y_n ) ] sin(y_n ζ) e^{−y_n² τ} = Σ_{n=1}^∞ [ 2(−1)^{n+1}/(nπ) ] sin(nπζ) e^{−n²π²τ}
in agreement with the separation of variables solution. Also, letting H → 0 we find
u(ζ, τ) = Σ_{n=1}^∞ ( 2/y_n² ) [ sin(y_n ζ)/sin(y_n) ] e^{−y_n² τ}
with y_n = (2n − 1)π/2, again in agreement with the separation of variables solution.
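The eigenvalue condition y cos y + H sin y = 0 (equivalently y cot y + H = 0, the equation tabulated in Table A.2) can also be checked numerically. The sketch below is an illustration, not part of the original text; the values of H are chosen arbitrarily to show the two limits discussed above.

import numpy as np
from scipy.optimize import brentq

def eigenvalues(H, count=5):
    # Roots of y*cos(y) + H*sin(y) = 0; for H > 0 there is one root in each
    # interval ((n - 1/2)*pi, n*pi), which gives a safe bracket for brentq.
    f = lambda y: y * np.cos(y) + H * np.sin(y)
    eps = 1e-9
    return [brentq(f, (n - 0.5) * np.pi + eps, n * np.pi - eps)
            for n in range(1, count + 1)]

# As H -> 0 the roots approach (2n - 1)*pi/2; as H grows they approach n*pi
for H in (0.01, 1.0, 100.0):
    print(H, [round(y, 4) for y in eigenvalues(H)])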
Example 8.5. Next we consider a conduction (diffusion) problem with a transient source q (τ ).
(Nondimensionalization and normalization are left as an exercise.)
uτ = uς ς + q (τ )
u(ς, 0) = 0 = uς (0, τ )
u(1, τ ) = 1
Obtaining the Laplace transform of the equation and boundary conditions we find
s U = U_ζζ + Q(s)
U_ζ(0, s) = 0
U(1, s) = 1/s
A particular solution is
U_P = Q(s)/s
and the homogeneous solution is
U_H = A sinh(ζ√s) + B cosh(ζ√s)
Hence the general solution is
U = Q/s + A sinh(ζ√s) + B cosh(ζ√s)
Using the boundary conditions
U_ζ(0, s) = 0:   A = 0
U(1, s) = 1/s:   B = (1 − Q)/[ s cosh(√s) ]
so that
U = Q/s + [ (1 − Q)/s ] cosh(ζ√s)/cosh(√s)
The poles are (with √s = x + iy) at cos y = 0, that is
√s = ±(2n − 1)πi/2,   s = −[ (2n − 1)/2 ]²π² = −λ_n²,   n = 1, 2, 3, ...
or when s = 0.
When s = 0 the residue is
Res = lim_{s→0} s U(s) e^{sτ} = 1
The denominator of the second term is s cosh√s and its derivative with respect to s is
cosh√s + (√s/2) sinh√s
When s = −λ_n², we have for the residue of the second term
lim_{s→−λ_n²} [ (1 − Q) cosh(ζ√s) / ( cosh√s + (√s/2) sinh√s ) ] e^{sτ}
and since
and
we have
−1 cosh(ς
s cosh
L
√
√
s )
s
√
s = i sin
sinh
(cid:7)
(cid:6)
2n − 1
2
π = i(−1)n+1
cosh(ς
√
s ) = cos
(cid:6)
2n − 1
2
(cid:7)
ς π
(cid:17)
(cid:18)
2n−1
2
ς π
π i 2(−1)n+1
e
= cos
(cid:18)
(cid:17)
2n−1
2
2 )2π 2τ = 2(−1)n cos
−( 2n−1
(cid:17)
2n−1
2
(2n − 1)π
(cid:18)
ς π
−( 2n−1
2 )2π 2τ
e
We now use the convolution principle to evaluate the solution for the general case of q(τ). We are searching for the inverse transform of
(1/s) cosh(ζ√s)/cosh(√s) + [ Q(s)/s ] [ 1 − cosh(ζ√s)/cosh(√s) ]
The inverse transform of the first term is given above. As for the second term, the inverse transform of Q(s) is simply q(τ), and the inverse transform of the factor multiplying Q(s), namely (1/s)[ 1 − cosh(ζ√s)/cosh(√s) ], is
Σ_{n=1}^∞ [ 2(−1)^{n+1}/λ_n ] cos(λ_n ζ) e^{−λ_n² τ}
According to the convolution principle, and summing over all poles,
u(ζ, τ) = 1 + Σ_{n=1}^∞ [ 2(−1)^n/λ_n ] cos(λ_n ζ) e^{−λ_n² τ}
        + Σ_{n=1}^∞ [ 2(−1)^{n+1}/λ_n ] cos(λ_n ζ) ∫_{τ′=0}^{τ} q(τ − τ′) e^{−λ_n² τ′} dτ′
with λ_n = (2n − 1)π/2.
Example 8.6. Next consider heat conduction in a semi-infinite region x > 0, t > 0. The initial temperature is zero and the wall is subjected to a temperature u(0, t) = f(t) at the x = 0 surface.
u_t = u_xx
u(x, 0) = 0
u(0, t) = f(t)
and u is bounded.
Taking the Laplace transform and applying the initial condition
s U = U_xx
Thus
U(x, s) = A sinh(x√s) + B cosh(x√s)
Both functions are unbounded for x → ∞. Thus it is more convenient to use the equivalent solution
U(x, s) = A e^{−x√s} + B e^{x√s} = A e^{−x√s}
in order for the function to be bounded. Applying the boundary condition at x = 0,
F(s) = A
Thus we have
U(x, s) = F(s) e^{−x√s}
Multiplying and dividing by s gives
U(x, s) = s F(s) e^{−x√s}/s
The inverse transform of e^{−x√s}/s is
L^{-1}[ e^{−x√s}/s ] = erfc( x/(2√t) )
and we have seen that
L{ f′ } = s F(s) − f(0)
Thus, making use of convolution, we find
u(x, t) = f(0) erfc( x/(2√t) ) + ∫_{μ=0}^{t} f′(t − μ) erfc( x/(2√μ) ) dμ
Example 8.7. Now consider a problem in cylindrical coordinates. An infinite cylinder is
initially at dimensionless temperature u(r, 0) = 1 and dimensionless temperature at the surface
u(1, t) = 0. We have
∂u/∂t = (1/r) ∂/∂r ( r ∂u/∂r )
u(1, t) = 0
u(r, 0) = 1
u bounded
The Laplace transform with respect to time yields
s U(r, s) − 1 = (1/r) d/dr ( r dU/dr )
with
U(1, s) = 0
Obtaining the homogeneous and particular solutions yields
U(r, s) = 1/s + A J_0(i√s r) + B Y_0(i√s r)
The boundedness condition requires that B = 0, while the condition at r = 1 gives
A = −1/( s J_0(i√s) )
Thus
U(r, s) = 1/s − J_0(i√s r)/( s J_0(i√s) )
The inverse transform is as follows:
u(r, t) = 1 − Σ [ residues of e^{st} J_0(i√s r)/( s J_0(i√s) ) ]
Poles of the function occur at s = 0 and at J_0(i√s) = 0, or i√s = λ_n, the roots of the Bessel function of the first kind of order zero. Thus, they occur at s = −λ_n². The residues are
lim_{s→0} [ e^{st} J_0(i√s r)/J_0(i√s) ] = 1
and
lim_{s→−λ_n²} [ e^{st} J_0(i√s r) / ( d/ds [ s J_0(i√s) ] ) ] = lim_{s→−λ_n²} [ e^{st} J_0(i√s r) / ( (i√s/2)( −J_1(i√s) ) ) ] = −2 e^{−λ_n² t} J_0(λ_n r)/( λ_n J_1(λ_n) )
The two unity residues cancel and the final solution is as follows:
u(r, t) = 2 Σ_{n=1}^∞ e^{−λ_n² t} J_0(λ_n r)/( λ_n J_1(λ_n) )
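The series is easily evaluated with standard Bessel routines. The sketch below is an illustration (not part of the original text), truncated at an arbitrary 50 terms; near t = 0 it should reproduce the initial temperature u = 1 away from the surface.

import numpy as np
from scipy.special import jn_zeros, j0, j1

def u_cyl(r, t, N=50):
    # u(r,t) = 2 * sum_n exp(-lam_n^2 t) J0(lam_n r) / (lam_n J1(lam_n)), J0(lam_n) = 0
    lam = jn_zeros(0, N)
    return 2.0 * np.sum(np.exp(-lam ** 2 * t) * j0(lam * r) / (lam * j1(lam)))

print(round(u_cyl(0.3, 1e-4), 3), round(u_cyl(0.5, 0.1), 4))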
Problems
1. Consider a finite wall with initial temperature zero and the wall at x = 0 insulated.
The wall at x = 1 is subjected to a temperature u(1, t) = f (t) for t > 0. Find u(x, t).
2. Consider a finite wall with initial temperature zero and with the temperature at x =
0 u(0, t) = 0. The temperature gradient at x = 1 suddenly becomes u x(1, t) = f (t)
for t > 0. Find the temperature when f (t) = 1 and for general f (t).
3. A cylinder is initially at temperature u = 1 and the surface is subject to a convective
boundary condition ur (t, 1) + Hu(t, 1) = 0. Find u(t, r ).
8.3 DUHAMEL’S THEOREM
We are now prepared to solve the more general problem
∇²u + g(r, t) = ∂u/∂t    (8.1)
where r may be considered a vector, that is, the problem is in three dimensions. The general boundary conditions are
∂u/∂n_i + h_i u = f_i(r, t) on the boundary S_i    (8.2)
and
u(r, 0) = F(r)    (8.3)
initially. Here ∂u/∂n_i represents the normal derivative of u at the surface. We present Duhamel’s theorem without proof.
Consider the auxiliary problem
∇²P + g(r, λ) = ∂P/∂t    (8.4)
where λ is a timelike constant with boundary conditions
∂P/∂n_i + h_i P = f_i(r, λ) on the boundary S_i    (8.5)
and initial condition
P(r, 0) = F(r)    (8.6)
The solution of Eqs. (8.1), (8.2), and (8.3) is as follows:
u(x, y, z, t) = (∂/∂t) ∫_{λ=0}^{t} P(x, y, z, λ, t − λ) dλ = F(x, y, z) + ∫_{λ=0}^{t} (∂/∂t) P(x, y, z, λ, t − λ) dλ    (8.7)
This is Duhamel’s theorem. For a proof, refer to the book by Arpaci.
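As a schematic illustration of Eq. (8.7), not from the original text, consider a scalar surrogate problem du/dt = −u + g(t), u(0) = F, for which the auxiliary solution with g frozen at time λ is known in closed form. The second form of (8.7) then reconstructs u(t) by quadrature; the function names and the choice g(t) = sin t below are assumptions made only for the demonstration.

import numpy as np

F = 2.0
g = lambda t: np.sin(t)
# Auxiliary solution with the source frozen at time lambda:
# P(t; lam) = F*exp(-t) + g(lam)*(1 - exp(-t)), so dP/dt = (g(lam) - F)*exp(-t)

def u_duhamel(t, n=4000):
    # Second form of (8.7): u(t) = F + int_0^t dP/dt(t - lam; lam) d lam
    lam = np.linspace(0.0, t, n)
    dPdt = (g(lam) - F) * np.exp(-(t - lam))
    h = lam[1] - lam[0]
    return F + h * (dPdt[0] / 2 + dPdt[1:-1].sum() + dPdt[-1] / 2)

t = 1.5
exact = (F + 0.5) * np.exp(-t) + 0.5 * (np.sin(t) - np.cos(t))   # direct solution of the ODE
print(round(u_duhamel(t), 5), round(exact, 5))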
Example 8.8. Consider now the following problem with a time-dependent heat source:
u_t = u_xx + x e^{−t}
u(0, t) = u(1, t) = 0
u(x, 0) = 0
We first solve the problem
P_t = P_xx + x e^{−λ}
P(0, t) = P(1, t) = 0
P(x, 0) = 0
while holding λ constant.
Recall from Chapter 2 that one technique in this case is to assume a solution of the form
P(x, λ, t) = X(x) + W(x, λ, t)
so that
W_t = W_xx
W(0, λ, t) = W(1, λ, t) = 0
W(x, λ, 0) = −X(x, λ)
and
X_xx + x e^{−λ} = 0
X(0) = X(1) = 0
Separating variables in the equation for W(x, t), we find that for W(x, λ, t) = S(x)Q(t)
Q_t/Q = S_xx/S = −β²
The minus sign has been chosen so that Q remains bounded. The boundary conditions on S(x) are as follows:
S(0) = S(1) = 0
The solution gives
S = A sin(βx) + B cos(βx)
Q = C e^{−β² t}
Applying the boundary condition at x = 0 requires that B = 0 and applying the boundary condition at x = 1 requires that sin(β) = 0 or β = nπ.
Solving for X(x) and applying the boundary conditions gives
X = (x/6)(1 − x²) e^{−λ} = −W(x, λ, 0)
The solution for W(x, t) is then obtained by superposition:
W(x, t) = Σ_{n=1}^∞ K_n e^{−n²π²t} sin(nπx)
and using the orthogonality principle
e^{−λ} ∫_{x=0}^{1} (x/6)(x² − 1) sin(nπx) dx = K_n ∫_{x=0}^{1} sin²(nπx) dx = (1/2) K_n
so
W(x, t) = Σ_{n=1}^∞ [ e^{−λ} ∫_{x=0}^{1} (x/3)(x² − 1) sin(nπx) dx ] e^{−n²π²t} sin(nπx)
and
P(x, λ, t) = (x/6)(1 − x²) e^{−λ} + Σ_{n=1}^∞ [ e^{−λ} ∫_{x=0}^{1} (x/3)(x² − 1) sin(nπx) dx ] sin(nπx) e^{−n²π²t}
P(x, λ, t − λ) = (x/6)(1 − x²) e^{−λ} + Σ_{n=1}^∞ [ ∫_{x=0}^{1} (x/3)(x² − 1) sin(nπx) dx ] sin(nπx) e^{−n²π²t} e^{(n²π² − 1)λ}
(∂/∂t) P(x, λ, t − λ) = Σ_{n=1}^∞ n²π² [ ∫_{x=0}^{1} (x/3)(1 − x²) sin(nπx) dx ] sin(nπx) e^{−n²π²t} e^{(n²π² − 1)λ}
According to Duhamel’s theorem, the solution for u(x, t) is then
u(x, t) = Σ_{n=1}^∞ [ ∫_{x=0}^{1} (x/3)(1 − x²) n²π² sin(nπx) dx ] sin(nπx) ∫_{λ=0}^{t} e^{−n²π²(t − λ)} e^{−λ} dλ
        = Σ_{n=1}^∞ [ n²π²/(n²π² − 1) ] [ ∫_{x=0}^{1} (x/3)(1 − x²) sin(nπx) dx ] [ e^{−t} − e^{−n²π²t} ] sin(nπx)
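This series is straightforward to evaluate numerically. The sketch below is an illustration (not part of the original text); the integral coefficient is computed by quadrature rather than in closed form, and the truncation level is arbitrary. At t = 0 the sum vanishes, as the initial condition requires.

import numpy as np

def u_series(x, t, N=40, m=2001):
    # u(x,t) = sum_n [n^2 pi^2/(n^2 pi^2 - 1)] * I_n * (exp(-t) - exp(-n^2 pi^2 t)) * sin(n pi x)
    # with I_n = integral_0^1 (x/3)(1 - x^2) sin(n pi x) dx
    xi = np.linspace(0.0, 1.0, m)
    h = xi[1] - xi[0]
    total = 0.0
    for n in range(1, N + 1):
        w = (xi / 3.0) * (1.0 - xi ** 2) * np.sin(n * np.pi * xi)
        I_n = h * (w[0] / 2 + w[1:-1].sum() + w[-1] / 2)
        k = n ** 2 * np.pi ** 2
        total += k / (k - 1.0) * I_n * (np.exp(-t) - np.exp(-k * t)) * np.sin(n * np.pi * x)
    return total

print(round(u_series(0.5, 0.0), 6), round(u_series(0.5, 0.2), 6))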
Example 8.9. Reconsider Example 8.6 in which u_t = u_xx on the half space, with
u(x, 0) = 0
u(0, t) = f (t)
To solve this using Duhamel’s theorem, we first set f(t) = f(λ) with λ a timelike constant.
Following the procedure outlined at the beginning of Example 8.6, we find
U(x, s) = f(λ) e^{−x√s}/s
The inverse transform is as follows:
u(x, t, λ) = f(λ) erfc( x/(2√t) )
Using Duhamel’s theorem,
u(x, t) = (∂/∂t) ∫_{λ=0}^{t} f(λ) erfc( x/(2√(t − λ)) ) dλ
which is a different form of the solution given in Example 8.6.
Problems
1. Show that the solutions given in Examples 8.6 and 8.9 are equivalent.
2. Use Duhamel’s theorem along with Laplace transforms to solve the following conduc-
tion problem on the half space:
= u xx
ut
u(x, 0) = 0
u x(0, t) = f (t)
3. Solve the following problem first using separation of variables:
∂u/∂t = ∂²u/∂x² + sin(πx)
u(t, 0) = 0
u(t, 1) = 0
u(0, x) = 0
4. Consider now the problem
∂u/∂t = ∂²u/∂x² + sin(πx) t e^{−t}
with the same boundary conditions as Problem 7. Solve using Duhamel’s theorem.
FURTHER READING
V. S. Arpaci, Conduction Heat Transfer, Reading, MA: Addison-Wesley, 1966.
R. V. Churchill, Operational Mathematics, 3rd ed. New York: McGraw-Hill, 1972.
I. H. Sneddon, The Use of Integral Transforms, New York: McGraw-Hill, 1972.
C H A P T E R 9
Sturm–Liouville Transforms
Sturm–Liouville transforms include a variety of examples of choices of the kernel function
K (s , t) that was presented in the general transform equation at the beginning of Chapter 6. We
first illustrate the idea with a simple example of the Fourier sine transform, which is a special
case of a Sturm–Liouville transform. We then move on to the general case and work out some
examples.
9.1 A PRELIMINARY EXAMPLE: FOURIER SINE TRANSFORM
Example 9.1. Consider the boundary value problem
u_t = u_xx,   0 ≤ x ≤ 1
with boundary conditions
u(0, t) = 0
u_x(1, t) + H u(1, t) = 0
and initial condition
u(x, 0) = 1
Multiply both sides of the differential equation by sin(λx) dx and integrate over the interval 0 ≤ x ≤ 1.
∫_{x=0}^{1} sin(λx) (∂²u/∂x²) dx = (d/dt) ∫_{x=0}^{1} u(x, t) sin(λx) dx
Integration of the left hand side by parts yields
∫_{x=0}^{1} (d²/dx²)[sin(λx)] u(x, t) dx + [ sin(λx) (du/dx) − u (d/dx)[sin(λx)] ]₀¹
and applying the boundary conditions and noting that
(d²/dx²)[sin(λx)] = −λ² sin(λx)
we have
−λ² ∫_{x=0}^{1} sin(λx) u(x, t) dx + [ u_x sin(λx) − λ u cos(λx) ]₀¹ = −λ² U(λ, t) − u(1)[ λ cos λ + H sin λ ]
Defining
S_λ{u(x, t)} = ∫_{x=0}^{1} u(x, t) sin(λx) dx = U(λ, t)
as the Fourier sine transform of u(x, t) and setting
λ cos λ + H sin λ = 0
we find
U_t(λ, t) = −λ² U(λ, t)
whose solution is
U(λ, t) = A e^{−λ² t}
The initial condition of the transformed function is
U(λ, 0) = ∫_{x=0}^{1} sin(λx) dx = (1/λ)[ 1 − cos(λ) ]
Applying the initial condition we find
U(λ, t) = (1/λ)[ 1 − cos(λ) ] e^{−λ² t}
It now remains to find from this the value of u(x, t).
Recall from the general theory of Fourier series that any odd function of x defined on 0 ≤ x ≤ 1 can be expanded in a Fourier sine series in the form
u(x, t) = Σ_{n=1}^∞ [ sin(λ_n x)/‖sin(λ_n x)‖² ] ∫_{ξ=0}^{1} u(ξ, t) sin(λ_n ξ) dξ
and this is simply
u(x, t) = Σ_{n=1}^∞ [ sin(λ_n x)/‖sin(λ_n x)‖² ] U(λ_n, t)
with λ_n given by the transcendental equation above. The final solution is therefore
u(x, t) = Σ_{n=1}^∞ { 2(1 − cos λ_n) / ( λ_n [ 1 − sin(2λ_n)/(2λ_n) ] ) } sin(λ_n x) e^{−λ_n² t}
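The sketch below is an illustration only (not part of the original text): it computes the eigenvalues λ_n numerically, evaluates the series for an assumed H = 1, and checks that at t = 0 the sum is close to the initial temperature u = 1.

import numpy as np
from scipy.optimize import brentq

def fourier_sine_solution(x, t, H=1.0, N=40):
    # lambda_n are the roots of lam*cos(lam) + H*sin(lam) = 0, one per interval ((n-1/2)pi, n*pi)
    f = lambda lam: lam * np.cos(lam) + H * np.sin(lam)
    lams = np.array([brentq(f, (n - 0.5) * np.pi + 1e-9, n * np.pi - 1e-9)
                     for n in range(1, N + 1)])
    coeff = 2.0 * (1.0 - np.cos(lams)) / (lams * (1.0 - np.sin(2 * lams) / (2 * lams)))
    return np.sum(coeff * np.sin(lams * x) * np.exp(-lams ** 2 * t))

print(round(fourier_sine_solution(0.4, 0.0), 3))   # should be close to u(x, 0) = 1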
9.2 GENERALIZATION: THE STURM–LIOUVILLE TRANSFORM: THEORY
Consider the differential operator D
D[f(x)] = A(x) f″ + B(x) f′ + C(x) f,   a ≤ x ≤ b    (9.1)
with boundary conditions of the form
N_α[f(x)]_{x=a} = f(a) cos α + f′(a) sin α
N_β[f(x)]_{x=b} = f(b) cos β + f′(b) sin β    (9.2)
where the symbols N_α and N_β are differential operators that define the boundary conditions. For example the differential operator might be
D[f(x)] = f_xx
and the boundary conditions might be defined by the operators
N_α[f(x)]_{x=a} = f(a) = 0
and
N_β[f(x)]_{x=b} = f(b) + H f′(b) = 0
We define an integral transformation
T[f(x)] = ∫_a^b f(x) K(x, λ) dx = F(λ)    (9.3)
We wish to transform these differential forms into algebraic forms. First we write the differential operator in standard form. Let
r(x) = exp[ ∫_a^x B(ξ)/A(ξ) dξ ],   p(x) = r(x)/A(x),   q(x) = −p(x) C(x)    (9.4)
Then
D[f(x)] = (1/p(x)) [ (r f′)′ − q f ] = (1/p(x)) ℓ[f(x)]    (9.5)
where ℓ is the Sturm–Liouville operator.
Let the kernel function K(x, λ) in Eq. (9.3) be
K(x, λ) = p(x) Φ(x, λ)    (9.6)
Then
T[D[f(x)]] = ∫_a^b Φ(x, λ) ℓ[f(x)] dx = ∫_a^b f(x) ℓ[Φ(x, λ)] dx + [ (Φ f_x − Φ_x f) r(x) ]_a^b    (9.7)
while
N_α[f(a)] = f(a) cos α + f′(a) sin α
N′_α[f(a)] = (d/dα){ f(a) cos α + f′(a) sin α } = −f(a) sin α + f′(a) cos α    (9.8)
so that
f(a) = N_α[f(a)] cos α − N′_α[f(a)] sin α
f′(a) = N′_α[f(a)] cos α + N_α[f(a)] sin α    (9.9)
where the prime indicates differentiation with respect to α.
The lower boundary condition at x = a is then
[ Φ(a, λ) f′(a) − Φ′(a, λ) f(a) ] r(a)
 = { Φ(a, λ) N′_α[f(a)] cos α + Φ(a, λ) N_α[f(a)] sin α − Φ′(a, λ) N_α[f(a)] cos α + Φ′(a, λ) N′_α[f(a)] sin α } r(a)    (9.10)
But if Φ(x, λ) is chosen to satisfy the Sturm–Liouville equation and the boundary conditions then
N_α[Φ(x, λ)]_{x=a} = Φ(a, λ) cos α + Φ′(a, λ) sin α
N_β[Φ(x, λ)]_{x=b} = Φ(b, λ) cos β + Φ′(b, λ) sin β    (9.11)
and
Φ(a, λ) = N_α[Φ(a, λ)] cos α − N′_α[Φ(a, λ)] sin α
Φ′(a, λ) = N′_α[Φ(a, λ)] cos α + N_α[Φ(a, λ)] sin α    (9.12)
and we have
[ Φ(a, λ) f′(a) − Φ′(a, λ) f(a) ] r(a) = { N′_α[f(a)] N_α[Φ(a, λ)] − N_α[f(a)] N′_α[Φ(a, λ)] } r(a)    (9.13)
If the kernel function is chosen so that N_α[Φ(a, λ)] = 0, for example, the lower boundary condition is
−N_α[f(a)] N′_α[Φ(a, λ)] r(a)    (9.14)
Similarly, at x = b
[ Φ(b, λ) f′(b) − Φ′(b, λ) f(b) ] r(b) = −N_β[f(b)] N′_β[Φ(b, λ)] r(b)    (9.15)
Since Φ(x, λ) satisfies the Sturm–Liouville equation, there are n solutions forming a set of orthogonal functions with weight function p(x) and
ℓΦ_n(x, λ_n) = −λ_n² p(x) Φ_n(x, λ_n)    (9.16)
so that
T{D[f(x)]} = −λ_n² ∫_{x=a}^{b} p(x) f(x) Φ_n(x, λ) dx + N_α[f(a)] N′_α[Φ_n(a, λ)] r(a) − N_β[f(b)] N′_β[Φ_n(b, λ)] r(b)    (9.17)
where
λ_n² ∫_a^b p(x) f(x) Φ_n(x, λ_n) dx = λ_n² F_n(λ_n)    (9.18)
9.3 THE INVERSE TRANSFORM
The great thing about Sturm–Liouville transforms is that the inversion is so easy. Recall that the generalized Fourier series of a function f(x) is
f(x) = Σ_{n=1}^∞ [ Φ_n(x, λ_n)/‖Φ_n‖² ] ∫_a^b f(ξ) p(ξ) Φ_n(ξ, λ_n) dξ = Σ_{n=1}^∞ [ Φ_n(x)/‖Φ_n‖² ] F(λ_n)    (9.19)
where the functions Φ_n(x, λ_n) form an orthogonal set with respect to the weight function p(x).
Example 9.2 (The cosine transform). Consider the diffusion equation
y_t = y_xx,   0 ≤ x ≤ 1,   t > 0
y_x(0, t) = y(1, t) = 0
y(x, 0) = f(x)
To find the proper kernel function K(x, λ) we note that according to Eq. (9.16) Φ_n(x, λ_n) must satisfy the Sturm–Liouville equation
ℓ[Φ_n(x, λ)] = −λ_n² p(x) Φ_n(x, λ)
where for the current problem
ℓ[Φ_n(x, λ)] = (d²/dx²)[Φ_n(x, λ)]
and
p(x) = 1
along with the boundary conditions (9.11)
N_α[Φ(x, λ)]_{x=a} = Φ_x(0, λ) = 0
N_β[Φ(x, λ)]_{x=b} = Φ(1, λ) = 0
Solution of this differential equation and applying the boundary conditions yields an infinite number of functions (as in any Sturm–Liouville problem)
Φ(x, λ_n) = A cos(λ_n x)
with
cos(λ_n) = 0,   λ_n = (2n − 1)π/2
Thus, the appropriate kernel function is K(x, λ_n) = cos(λ_n x) with λ_n = (2n − 1)π/2.
Using this kernel function in the original partial differential equation, we find
dY/dt = −λ_n² Y
where C_λ{y(x, t)} = Y(t, λ_n) is the cosine transform of y(t, x). The solution gives
Y(t, λ_n) = B e^{−λ_n² t}
and applying the cosine transform of the initial condition
B = ∫_{x=0}^{1} f(x) cos(λ_n x) dx
According to Eq. (9.19) the solution is as follows:
y(x, t) = Σ_{n=1}^∞ [ cos(λ_n x)/‖cos(λ_n x)‖² ] [ ∫_{x=0}^{1} f(x) cos(λ_n x) dx ] e^{−λ_n² t}
Example 9.3 (The Hankel transform). Next consider the diffusion equation in cylindrical coordinates.
u_t = (1/r) d/dr ( r du/dr )
Boundary and initial conditions are prescribed as
u_r(t, 0) = 0
u(t, 1) = 0
u(0, r) = f(r)
First we find the proper kernel function
ℓ[Φ(r, λ_n)] = d/dr ( r dΦ/dr ) = −λ_n² r Φ
with boundary conditions
Φ_r(λ_n, 0) = 0
Φ(λ_n, 1) = 0
The solution is the Bessel function J_0(λ_n r) with λ_n given by J_0(λ_n) = 0. Thus the transform of u(t, r) is as follows:
H_λ{u(t, r)} = U(t, λ_n) = ∫_{r=0}^{1} r J_0(λ_n r) u(t, r) dr
This is called a Hankel transform. The appropriate differential equation for U(t, λ_n) is
dU_n/dt = −λ_n² U_n
so that
U_n(t, λ_n) = B e^{−λ_n² t}
Applying the initial condition, we find
B = ∫_{r=0}^{1} r f(r) J_0(λ_n r) dr
and from Eq. (9.19)
u(t, r) = Σ_{n=1}^∞ [ ∫_{r=0}^{1} r f(r) J_0(λ_n r) dr / ‖J_0(λ_n r)‖² ] J_0(λ_n r) e^{−λ_n² t}
Example 9.4 (The sine transform with a source). Next consider a one-dimensional transient diffusion with a source term q(x):
u_t = u_xx + q(x)
u(0, x) = u(t, 0) = u(t, π) = 0
First we determine that the sine transform is appropriate. The operator ℓ is such that
ℓΦ = Φ_xx = λΦ
and according to the boundary conditions we must choose Φ = sin(nx) and λ = −n². The sine transform of q(x) is Q(λ).
U_t = −n² U + Q(λ),   U = U(λ, t)
The homogeneous and particular solutions give
U_n = C e^{−n² t} + Q_n/n²
When t = 0, U = 0 so that
C = −Q_n/n²
where Q_n is given by
Q_n = ∫_{x=0}^{π} q(x) sin(nx) dx
Since U_n = (Q_n/n²)[ 1 − e^{−n² t} ] the solution is
u(x, t) = Σ_{n=1}^∞ (Q_n/n²)[ 1 − e^{−n² t} ] sin(nx)/‖sin(nx)‖²
Note that Qn is just the nth term of the Fourier sine series of q (x). For example, if
q (x) = x,
Q_n = (π/n)(−1)^{n+1}
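This coefficient is easy to confirm by quadrature. The snippet below is an illustration only (not part of the original text).

import numpy as np

# Check Q_n = integral_0^pi x sin(nx) dx = (pi/n) * (-1)^(n+1)
x = np.linspace(0.0, np.pi, 20001)
h = x[1] - x[0]
for n in (1, 2, 3):
    w = x * np.sin(n * x)
    Qn = h * (w[0] / 2 + w[1:-1].sum() + w[-1] / 2)
    print(n, round(Qn, 6), round(np.pi / n * (-1) ** (n + 1), 6))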
Example 9.5 (A mixed transform). Consider steady temperatures in a half cylinder of infinite length with internal heat generation, q(r), that is a function of the radial position. The appropriate differential equation is
u_rr + (1/r) u_r + (1/r²) u_θθ + u_zz + q(r) = 0,   0 ≤ r ≤ 1,  0 ≤ z < ∞,  0 ≤ θ ≤ π
with boundary conditions
u(1, θ, z) = 1
u(r, 0, z) = u(r, π, z) = u(r, θ, 0) = 0
Let the sine transform of u be denoted by S_n{u(r, θ, z)} = U_n(r, n, z) with respect to θ on the interval (0, π). Then
∂²U_n/∂r² + (1/r) ∂U_n/∂r − (n²/r²) U_n + ∂²U_n/∂z² + q(r) S_n(1) = 0
where S_n(1) is the sine transform of 1, and the boundary conditions for u(r, θ, z) on θ have been used.
Note that the operator on Φ in the r coordinate direction is
ℓ[Φ(r, μ_j)] = (1/r) d/dr ( r dΦ/dr ) − (n²/r²) Φ = −μ_j² Φ
With the boundary condition at r = 1 chosen as Φ(1, μ_j) = 0, this gives the kernel function as r J_n(μ_j r), with eigenvalues determined by J_n(μ_j) = 0.
We now apply the finite Hankel transform to the above partial differential equation and denote the Hankel transform of U_n by U_jn.
After applying the boundary condition on r we find, after noting that
N_β[U_n(z, 1)] = S_n(1)
N′_β[Φ(1, z)] = −μ_j J_{n+1}(μ_j)
that
−μ_j² U_jn + μ_j J_{n+1}(μ_j) S_n(1) + d²U_jn/dz² + Q_j(μ_j) S_n(1) = 0.
Here Q_j(μ_j) is the Hankel transform of q(r).
Solving the resulting ordinary differential equation and applying the boundary condition at z = 0,
U_jn(μ_j, n, z) = S_n(1) [ Q_j(μ_j) + μ_j J_{n+1}(μ_j) ] / μ_j² · [ 1 − exp(−μ_j z) ]
We now invert the transform for the sine and Hankel transforms according to Eq. (9.19) and find that
u(r, θ, z) = (4/π) Σ_{n=1}^∞ Σ_{j=1}^∞ { U_jn(μ_j, n, z) / [ J_{n+1}(μ_j) ]² } J_n(μ_j r) sin(nθ)
Note that
S_n(1) = [ 1 − (−1)^n ]/n
Problems
Use an appropriate Sturm–Liouville transform to solve each of the following problems:
1. Chapter 3, Problem 1.
2. Chapter 2, Problem 2.
3. Chapter 3, Problem 3.
4. ∂u/∂t = (1/r) ∂/∂r ( r ∂u/∂r ) + G   (G constant)
u(r, 0) = 0
u(1, t) = 0
u bounded
5. Solve the following using an appropriate Sturm–Liouville transform:
∂u/∂t = ∂²u/∂x²
u(t, 0) = 0
u(t, 1) = 0
u(0, x) = sin(π x)
6. Find the solution for general ρ(t):
∂u/∂t = ∂²u/∂x²
u(t, 0) = 0
u(t, 1) = ρ(t)
u(0, x) = 0
FURTHER READING
V. S. Arpaci, Conduction Heat Transfer, Reading, MA: Addison-Wesley, 1966.
R. V. Churchill, Operational Mathematics, 3rd ed. New York: McGraw-Hill, 1972.
I. H. Sneddon, The Use of Integral Transforms, New York: McGraw-Hill, 1972.
C H A P T E R 10
Introduction to Perturbation Methods
Perturbation theory is an approximate method of solving equations which contain a parameter
that is small in some sense. The method should result in an approximate solution that may
be termed “precise” in the sense that the error (the difference between the approximate and
exact solutions) is understood and controllable and can be made smaller by some rational
technique. Perturbation methods are particularly useful in obtaining solutions to equations that
are nonlinear or have variable coefficients. In addition, it is important to note that if the method
yields a simple, accurate approximate solution of any problem it may be more useful than an
exact solution that is more complicated.
10.1 EXAMPLES FROM ALGEBRA
We begin with examples from algebra in order to introduce the ideas of regular perturbations
and singular perturbations. We start with a problem of extracting the roots of a quadratic
equation that contains a small parameter ε ≪ 1.
10.1.1 Regular Perturbation
Consider, for example, the equation
x² + εx − 1 = 0    (10.1)
The exact solution for the roots is, of course, simply obtained from the quadratic formula:
x = −ε/2 ± ( 1 + ε²/4 )^{1/2}    (10.2)
which yields exact solutions
x = 0.962422837
and
x = −1.062422837
Equation (10.2) can be expanded for small values of ε in the rapidly convergent series
x = 1 − ε/2 + ε²/8 − ε⁴/128 + ···    (10.3)
or
x = −1 − ε/2 − ε²/8 + ε⁴/128 − ···    (10.4)
To apply perturbation theory we first note that if ε = 0 the two roots of the equation, which we will call the zeroth-order solutions, are x₀ = ±1. We assume a solution of the form
x = x₀ + a₁ε + a₂ε² + a₃ε³ + a₄ε⁴ + ···    (10.5)
Substituting (10.5) into (10.1),
1 + (2a₁ + 1)ε + (a₁² + 2a₂ + a₁)ε² + (2a₁a₂ + 2a₃ + a₂)ε³ + ··· − 1 = 0    (10.6)
where we have substituted x₀ = 1. Each of the coefficients of εⁿ must be zero. Solving for aₙ we find
a₁ = −1/2
a₂ = 1/8
a₃ = 0    (10.7)
so that the approximate solution for the root near x = 1 is
x = 1 − ε/2 + ε²/8 + O(ε⁴)    (10.8)
The symbol O(ε⁴) means that the next term in the series is of order ε⁴.
Performing the same operation with x₀ = −1,
1 − (1 + 2a₁)ε + (a₁² − 2a₂ + a₁)ε² + (2a₁a₂ − 2a₃ + a₂)ε³ + ··· − 1 = 0    (10.9)
Again setting the coefficients of εⁿ equal to zero,
a₁ = −1/2
a₂ = −1/8
a₃ = 0    (10.10)
so that the root near x₀ = −1 is
x = −1 − ε/2 − ε²/8 + O(ε⁴)    (10.11)
The first three terms in (10.8) give x = 0.951249219, accurate to within 1.16% of the exact
value while (10.11) gives the second root as x = −1.051249219, which is accurate to within
1.05%.
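The comparison is simple to reproduce. The snippet below is an illustration only (not part of the original text); it assumes ε = 0.1 and prints both the quadratic-formula roots of (10.1) and the three-term expansions (10.8) and (10.11).

import numpy as np

eps = 0.1                                           # assumed small parameter
exact = np.roots([1.0, eps, -1.0])                  # roots of x^2 + eps*x - 1 = 0
approx_plus = 1 - eps / 2 + eps ** 2 / 8            # Eq. (10.8)
approx_minus = -1 - eps / 2 - eps ** 2 / 8          # Eq. (10.11)
print(sorted(exact), approx_plus, approx_minus)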
Next suppose the small parameter occurs multiplied by the squared term,
εx² + x − 1 = 0    (10.12)
Using the quadratic formula gives the exact solution
x = −1/(2ε) ± [ 1/(4ε²) + 1/ε ]^{1/2}    (10.13)
If ε = 0.1, (10.13) gives two solutions:
x = 0.916079783
and
x = −10.91607983
We attempt to follow the same procedure to obtain an approximate solution. If ε = 0 identically, x₀ = 1. Using (10.5) with x₀ = 1 and substituting into (10.12) we find
(1 + a₁)ε + (2a₁ + a₂)ε² + (2a₂ + a₁² + a₃)ε³ + ··· = 0    (10.14)
Setting the coefficients of εⁿ = 0, solving for aₙ, and substituting into (10.5),
x = 1 − ε + 2ε² − 5ε³ + ···    (10.15)
gives x = 0.915, close to the exact value. However Eq. (10.12) clearly has two roots, and the method cannot give an approximation for the second root.
The essential problem is that the second root is not small. In fact (10.13) shows that as ε → 0, |x| → 1/(2ε), so that the term εx² is never negligible.
10.1.2 Singular Perturbation
Arranging (10.12) in a normal form
x² + (x − 1)/ε = 0    (10.12a)
and the equation is said to be singular as ε → 0. If we set xε = u we find an equation for u as
u² + u − ε = 0    (10.16)
With ε identically zero, u = 0 or −1. Assuming that u may be approximated by a series like (10.5) we find that
(−a₁ − 1)ε + (a₁² − a₂)ε² + (2a₁a₂ − a₃)ε³ + ··· = 0    (10.17)
a₁ = −1
a₂ = 1
a₃ = −2    (10.18)
so that
x = −1/ε − 1 + ε − 2ε² + ···    (10.19)
The three-term approximation of the negative root is therefore x = −10.92, within 0.03% of
the exact solution.
As a third algebraic example consider
x² − 2εx − ε = 0    (10.20)
This seems at first glance to be a harmless problem, amenable to a regular perturbation expansion since the x² term is not lost when ε → 0. We proceed optimistically by taking
x = x₀ + a₁ε + a₂ε² + a₃ε³ + ···    (10.21)
Substituting into (10.20) we find
x₀² + (2x₀a₁ − 2x₀ − 1)ε + (a₁² + 2x₀a₂ − 2a₁)ε² + ··· = 0    (10.22)
from which we find
x₀ = 0
2x₀a₁ − 2x₀ − 1 = 0
a₁² + 2x₀a₂ − 2a₁ = 0    (10.23)
From the second of these we conclude that either 0 = −1 or that there is something wrong. That is, (10.21) is not an appropriate expansion in this case.
Note that (10.20) tells us that as ε → 0, x → 0. Moreover, in writing (10.21) we have essentially assumed that ε → 0 in such a manner that x/ε → constant. Let us suppose instead
that as ε → 0
x(ε)/ε^p → constant    (10.24)
We then define a new variable
x = ε^p v(ε)    (10.25)
such that v(0) ≠ 0. Substitution into (10.20) yields
ε^{2p} v² − 2ε^{p+1} v − ε = Q    (10.26)
where Q must be identically zero. Note that since Q is identically zero, Q/ε must also be zero no matter how small ε becomes, as long as it is not identically zero.
Now, if p > 1/2, then 2p − 1 > 0 and in the limit as ε → 0, ε^{2p−1} v² − 2ε^{p} v − 1 → −1, which cannot be true given that Q = 0 identically. Next suppose p < 1/2. Again, Q/ε^{2p} is identically zero for all ε including the limit as ε → 0. In the limit as ε → 0, v(ε)² − 2ε^{1−p} v(ε) − ε^{1−2p} → v(0)² ≠ 0. p = 1/2 is the only possibility left, so we attempt a solution with this value. Hence
x = ε^{1/2} v(ε)    (10.27)
Substitution into (10.20) gives
v² − 2√ε v − 1 = 0    (10.28)
and this can now be solved by a regular perturbation assuming β = √ε ≪ 1. Hence,
v = v₀ + a₁β + a₂β² + a₃β³ + ···    (10.29)
Inserting this into (10.28) with β = √ε,
v₀² − 1 + (2v₀a₁ − 2v₀)β + (a₁² + 2v₀a₂ − 2a₁)β² + ··· = 0    (10.30)
Thus
v₀ = ±1
a₁ = 1
a₂ = +1/2 or −1/2    (10.31)
Thus the two solutions are
x = √ε + ε + (1/2) ε^{3/2} + ···
and
x = −√ε + ε − (1/2) ε^{3/2} + ···
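A quick numerical check of this scaling is sketched below; it is an illustration only (not part of the original text) and simply compares the exact roots of (10.20) with the two truncated expansions for several values of ε.

import numpy as np

for eps in (0.1, 0.01, 0.001):
    roots = np.roots([1.0, -2.0 * eps, -eps])       # x^2 - 2*eps*x - eps = 0
    series = np.array([np.sqrt(eps) + eps + 0.5 * eps ** 1.5,
                       -np.sqrt(eps) + eps - 0.5 * eps ** 1.5])
    print(eps, np.sort(roots), np.sort(series))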
Appendix A: The Roots of Certain
Transcendental Equations
TABLE A.1: The first six roots, † α
n, of
C
0
0.001
0.002
0.004
0.006
0.008
0.01
0.02
0.04
0.06
0.08
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
α
1
0
0.0316
0.0447
0.0632
0.0774
0.0893
0.0998
0.1410
0.1987
0.2425
0.2791
0.3111
0.4328
0.5218
0.5932
0.6533
0.7051
0.7506
0.7910
α
2
3.1416
3.1419
3.1422
3.1429
3.1435
3.1441
3.1448
3.1479
3.1543
3.1606
3.1668
3.1731
3.2039
3.2341
3.2636
3.2923
3.3204
3.3477
3.3744
α tan α + C = 0.
α
α
3
4
6.2832
6.2833
6.2835
6.2838
6.2841
6.2845
6.2848
6.2864
6.2895
6.2927
6.2959
6.2991
6.3148
6.3305
6.3461
6.3616
6.3770
6.3923
6.4074
9.4248
9.4249
9.4250
9.4252
9.4254
9.4256
9.4258
9.4269
9.4290
9.4311
9.4333
9.4354
9.4459
9.4565
9.4670
9.4775
9.4879
9.4983
9.5087
α
5
12.5664
12.5665
12.5665
12.5667
12.5668
12.5670
12.5672
12.5680
12.5696
12.5711
12.5727
12.5743
12.5823
12.5902
12.5981
12.6060
12.6139
12.6218
12.6296
α
6
15.7080
15.7080
15.7081
15.7082
15.7083
15.7085
15.7086
15.7092
15.7105
15.7118
15.7131
15.7143
15.7207
15.7270
15.7334
15.7397
15.7460
15.7524
15.7587
TABLE A.1: (continued)
C
0.9
1.0
1.5
2.0
3.0
4.0
5.0
6.0
7.0
8.0
9.0
10.0
15.0
20.0
30.0
40.0
50.0
60.0
80.0
100.0
∞
α
1
0.8274
0.8603
0.9882
1.0769
1.1925
1.2646
1.3138
1.3496
1.3766
1.3978
1.4149
1.4289
1.4729
1.4961
1.5202
1.5325
1.5400
1.5451
1.5514
1.5552
1.5708
α
2
3.4003
3.4256
3.5422
3.6436
3.8088
3.9352
4.0336
4.1116
4.1746
4.2264
4.2694
4.3058
4.4255
4.4915
4.5615
4.5979
4.6202
4.6353
4.6543
4.6658
4.7124
α tan α + C = 0.
α
α
3
4
6.4224
6.4373
6.5097
6.5783
6.7040
6.8140
6.9096
6.9924
7.0640
7.1263
7.1806
7.2281
7.3959
7.4954
7.6057
7.6647
7.7012
7.7259
7.7573
7.7764
7.8540
9.5190
9.5293
9.5801
9.6296
9.7240
9.8119
9.8928
9.9667
10.0339
10.0949
10.1502
10.2003
10.3898
10.5117
10.6543
10.7334
10.7832
10.8172
10.8606
10.8871
10.9956
† The roots of this equation are all real if C > 0.
α
5
12.6375
12.6453
12.6841
12.7223
12.7966
12.8678
12.9352
12.9988
13.0584
13.1141
13.1660
13.2142
13.4078
13.5420
13.7085
13.8048
13.8666
13.9094
13.9644
13.9981
14.1372
α
6
15.7650
15.7713
15.8026
15.8336
15.8945
15.9536
16.0107
16.0654
16.1177
16.1675
16.2147
16.2594
16.4474
16.5864
16.7691
16.8794
16.9519
17.0026
17.0686
17.1093
17.2788
TABLE A.2: The first six roots, † α
n, of
C
−1.0
−0.995
−0.99
−0.98
−0.97
−0.96
−0.95
−0.94
−0.93
−0.92
−0.91
−0.90
−0.85
−0.8
−0.7
−0.6
−0.5
−0.4
−0.3
−0.2
−0.1
0
0.1
0.2
0.3
0.4
α
1
0
0.1224
0.1730
0.2445
0.2991
0.3450
0.3854
0.4217
0.4551
0.4860
0.5150
0.5423
0.6609
0.7593
0.9208
1.0528
1.1656
1.2644
1.3525
1.4320
1.5044
1.5708
1.6320
1.6887
1.7414
1.7906
α
2
4.4934
4.4945
4.4956
4.4979
4.5001
4.5023
4.5045
4.5068
4.5090
4.5112
4.5134
4.5157
4.5268
4.5379
4.5601
4.5822
4.6042
4.6261
4.6479
4.6696
4.6911
4.7124
4.7335
4.7544
4.7751
4.7956
α cotα + C = 0.
C
α
3
7.7253
7.7259
7.7265
7.7278
7.7291
7.7304
7.7317
7.7330
7.7343
7.7356
7.7369
7.7382
7.7447
7.7511
7.7641
7.7770
7.7899
7.8028
7.8156
7.8284
7.8412
7.8540
7.8667
7.8794
7.8920
7.9046
10.9041
10.9046
10.9050
10.9060
10.9069
10.9078
10.9087
10.9096
10.9105
10.9115
10.9124
10.9133
10.9179
10.9225
10.9316
10.9408
10.9499
10.9591
10.9682
10.9774
10.9865
10.9956
11.0047
11.0137
11.0228
11.0318
α
1
14.0662
14.0666
14.0669
14.0676
14.0683
14.0690
14.0697
14.0705
14.0712
14.0719
14.0726
14.0733
14.0769
14.0804
14.0875
14.0946
14.1017
14.1088
14.1159
14.1230
14.1301
14.1372
14.1443
14.1513
14.1584
14.1654
α
2
17.2208
17.2210
17.2213
17.2219
17.2225
17.2231
17.2237
17.2242
17.2248
17.2254
17.2260
17.2266
17.2295
17.2324
17.2382
17.2440
17.2498
17.2556
17.2614
17.2672
17.2730
17.2788
17.2845
17.2903
17.2961
17.3019
TABLE A.2: (continued)
C
0.5
0.6
0.7
0.8
0.9
1.0
1.5
2.0
3.0
4.0
5.0
6.0
7.0
8.0
9.0
10.0
15.0
20.0
30.0
40.0
50.0
60.0
80.0
α
1
1.8366
1.8798
1.9203
1.9586
1.9947
2.0288
2.1746
2.2889
2.4557
2.5704
2.6537
2.7165
2.7654
2.8044
2.8363
2.8628
2.9476
2.9930
3.0406
3.0651
3.0801
3.0901
3.1028
α
2
4.8158
4.8358
4.8556
4.8751
4.8943
4.9132
5.0037
5.0870
5.2329
5.3540
5.4544
5.5378
5.6078
5.6669
5.7172
5.7606
5.9080
5.9921
6.0831
6.1311
6.1606
6.1805
6.2058
α cotα + C = 0.
C
α
3
7.9171
7.9295
7.9419
7.9542
7.9665
7.9787
8.0385
8.0962
8.2045
8.3029
8.3914
8.4703
8.5406
8.6031
8.6587
8.7083
8.8898
9.0019
9.1294
9.1987
9.2420
9.2715
9.3089
11.0409
11.0498
11.0588
11.0677
11.0767
11.0856
11.1296
11.1727
11.2560
11.3349
11.4086
11.4773
11.5408
11.5994
11.6532
11.7027
11.8959
12.0250
12.1807
12.2688
12.3247
12.3632
12.4124
α
1
14.1724
14.1795
14.1865
14.1935
14.2005
14.2075
14.2421
14.2764
14.3434
14.4080
14.4699
14.5288
14.5847
14.6374
14.6870
14.7335
14.9251
15.0625
15.2380
15.3417
15.4090
15.4559
15.5164
α
2
17.3076
17.3134
17.3192
17.3249
17.3306
17.3364
17.3649
17.3932
17.4490
17.5034
17.5562
17.6072
17.6562
17.7032
17.7481
17.7908
17.9742
18.1136
18.3018
18.4180
18.4953
18.5497
18.6209
3.1105
6.2211
100.0
∞
18.8496
† The roots of this equation are all real if C > −1. These negative values of C arise in
connection with the sphere, §9.4.
15.7080
12.5664
15.5537
12.4426
18.6650
3.1416
6.2832
9.3317
9.4248
TABLE A.3: The first six roots α
n, of
C
0
0.01
0.02
0.04
0.06
0.08
0.1
0.15
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
1.5
2.0
3.0
4.0
5.0
6.0
7.0
8.0
9.0
10.0
15.0
20.0
30.0
40.0
50.0
60.0
80.0
100.0
∞
α
1
0
0.1412
0.1995
0.2814
0.3438
0.3960
0.4417
0.5376
0.6170
0.7465
0.8516
0.9408
1.0184
1.0873
1.1490
1.2048
1.2558
1.4569
1.5994
1.7887
1.9081
1.9898
2.0490
2.0937
2.1286
2.1566
2.1795
2.2509
2.2880
2.3261
2.3455
2.3572
2.3651
2.3750
2.3809
2.4048
α J 1(α) − C J 0(α) = 0
α
4
α
3
7.0156
7.0170
7.0184
7.0213
7.0241
7.0270
7.0298
7.0369
7.0440
7.0582
7.0723
7.0864
7.1004
7.1143
7.1282
7.1421
7.1558
7.2233
7.2884
7.4103
7.5201
7.6177
7.7039
7.7797
7.8464
7.9051
7.9569
8.1422
8.2534
8.3771
8.4432
8.4840
8.5116
8.5466
8.5678
8.6537
10.1735
10.1745
10.1754
10.1774
10.1794
10.1813
10.1833
10.1882
10.1931
10.2029
10.2127
10.2225
10.2322
10.2419
10.2516
10.2613
10.2710
10.3188
10.3658
10.4566
10.5423
10.6223
10.6964
10.7646
10.8271
10.8842
10.9363
11.1367
11.2677
11.4221
11.5081
11.5621
11.5990
11.6461
11.6747
11.7915
α
2
3.8317
3.8343
3.8369
3.8421
3.8473
3.8525
3.8577
3.8706
3.8835
3.9091
3.9344
3.9594
3.9841
4.0085
4.0325
4.0562
4.0795
4.1902
4.2910
4.4634
4.6018
4.7131
4.8033
4.8772
4.9384
4.9897
5.0332
5.1773
5.2568
5.3410
5.3846
5.4112
5.4291
5.4516
5.4652
5.5201
α
5
13.3237
13.3244
13.3252
13.3267
13.3282
13.3297
13.3312
13.3349
13.3387
13.3462
13.3537
13.3611
13.3686
13.3761
13.3835
13.3910
13.3984
13.4353
13.4719
13.5434
13.6125
13.6786
13.7414
13.8008
13.8566
13.9090
13.9580
14.1576
14.2983
14.4748
14.5774
14.6433
14.6889
14.7475
14.7834
14.9309
α
6
16.4706
16.4712
16.4718
16.4731
16.4743
16.4755
16.4767
16.4797
16.4828
16.4888
16.4949
16.5010
16.5070
16.5131
16.5191
16.5251
16.5312
16.5612
16.5910
16.6499
16.7073
16.7630
16.8168
16.8684
16.9179
16.9650
17.0099
17.2008
17.3442
17.5348
17.6508
17.7272
17.7807
17.8502
17.8931
18.0711
Appendix B
In this table q = (p/a)^{1/2}; a and x are positive real; α, β, γ are unrestricted; k is a finite integer; n is a finite integer or zero; v is a fractional number; 1·2·3···n = n!; 1·3·5···(2n − 1) = (2n − 1)!!; nΓ(n) = Γ(n + 1) = n!; Γ(1) = 0! = 1; Γ(v)Γ(1 − v) = π/sin(vπ); Γ(1/2) = π^{1/2}.
NO.
1
2
3
4
5
6
7
8
9
10
11
12
13
14
TRANSFORM
1
p
1
p 2
1
p k
1
p 1/2
1
p 3/2
1
p k+1/2
1
p v
p 1/2
p 3/2
p k−1/2
p n−v
1
p + α
1
( p + α)( p + β)
1
( p + α)2
FUNCTION
1
t
tk−1
(k − 1)!
1
(πt)1/2
(cid:6)
(cid:7) 1
2
2
t
π
2k
π 1/2(2k − 1)!!
tk−1/2
−
v−1
t
(cid:11)(v)
1
2π 1/2t5/2
3
4π 1/2t5/2
(−1)k (2k − 1)!!
2k π 1/2tk+1/2
v−n−1
t
(cid:11)(v − n)
−α t
e
e
−α t
−β t − e
α − β
−α t
te
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
(γ − β)e
e
1
( p + α)( p + β)( p + γ )
1
( p + α)2( p + β)
1
( p + α)3
1
( p + α)k
p
( p + α)( p + β)
p
( p + α)2
−γ t
−βt + (β − α)e
−αt + (α − γ )e
(α − β)(β − γ )(γ − α)
−αt[1 − (β − α)t]
−βt − e
(β − α)2
1
2
tk−1e
−αt
(k − 1)!
−αt − βe
α − β
t2e
αe
−βt
−αt
(1 − αt)e
−αt
p
( p + α)( p + β)( p + γ )
α(β − γ )e
−βt + γ (α − β)e
−γ t
−αt + β(γ − α)e
(α − β)(β − γ )(γ − α)
−αt − βe
[β − α(β − α)t]e
−βt
p
( p + α)2( p + β)
p
( p + α)3
α
p 2 + α2
p
p 2 + α2
α
p 2 − α2
p
p 2 − α2
−q x
−q x
q
−q x
p
−q x
q p
−q x
e
e
e
e
e
p 2
−q x
e
p 1+n/2
(cid:6)
(cid:6)
(β − α)2
(cid:7)
1 − 1
2
αt
t
−αt
e
sin αt
cos αt
sinh αt
cosh αt
−x2/4αt
−x2/4αt
x
2(πα t3)1/2 e
(cid:5)
(cid:4) α
π t
1/2
e
(cid:14)
erfc
x
2(αt)1/2
(cid:15)
(cid:14)
(cid:15)
(cid:7)
1/2
(cid:6)
2
αt
π
(cid:7)
−x2/4αt − xerfc
(cid:6)
(cid:15)
e
(cid:14)
x
2(αt)1/2
(cid:7)
1/2
t + x2
2α
(γ − β)e
erfc
− x
x
2(αt)1/2
−αt + (α − γ )e
(α − β)(β − γ )(γ − α)
t
απ
−βt + (β − α)e
e
−x2/4αt
−γ t
−βt − e
e
−αt[1 − (β − α)t]
(β − α)2
−αt
t2e
1
2
tk−1e
−αt
(k − 1)!
−αt − βe
α − β
αe
−βt
(1 − αt)e
−αt
α(β − γ )e
−βt + γ (α − β)e
−γ t
−αt + β(γ − α)e
(α − β)(β − γ )(γ − α)
−αt − βe
[β − α(β − α)t]e
−βt
(cid:6)
(β − α)2
(cid:7)
1 − 1
2
αt
t
−αt
e
sin αt
(cid:14)
(cid:7)
1/2
(cid:6)
α
γ
γ t
1
2
e
−x(γ /α)1/2
e
erfc
+e x(γ /α)1/2
(cid:15)
erfc
x
2(α t)1/2
x
2(α t)1/2
t +
−x(γ /α)1/2
e
(cid:15)
erfc
e x(γ /α)1/2
γ t
1
2
e
(cid:14)
t −
(cid:14)
+
x
2(α t)1/2
(cid:14)
x
2(α t)1/2
(cid:14)
x
2(α t)1/2
(cid:14)
x
2(α t)1/2
erfc
(cid:14)
− (γ t)1/2
+ (γ t)1/2
(cid:15)
(cid:15)
(cid:15)
(cid:15)
(cid:15)
− (γ t)1/2
+ (γ t)1/2
(cid:15)
− (γ t)1/2
+ (γ t)1/2
γ t
e
+
αβ
αβ 2 − γ e
−x(γ /α)1/2
α1/2
α1/2β + γ 1/2 e
α1/2
α1/2β − γ 1/2 e x(γ /α)1/2
x
βx+αβ2terfc
2(α t)1/2
(cid:4)
(cid:12)
1/2
(cid:5)
(cid:14)
erfc
erfc
x
2(α t)1/2
(cid:14)
x
2(α t)1/2
(cid:15)
+ β(α t)1/2
(cid:13)
x
t
I1
2(xt)1/2
(cid:12)
I0
2(xt)1/2
(cid:13)
(cid:6)
(cid:7)
t
x
(v−1)/2
(cid:12)
2(xt)1/2
(cid:13)
Iv−1
e
−q x
( p − γ )(q + β)
γ (cid:3)= αβ 2
,
1
2
−
e x/ p − 1
e x/ p
1
p
1
p y e x/ p
e
e
e
−q x
e
p 3/4
−q x
e
q + β
−q x
q (q + β)
−q x
p(q + β)
−q x
q p(q + β)
−q x
q n+1(q + β)
−q x
(q + β)2
−q x
p(q + β)2
−q x
e
p − γ
e
e
e
e
−q x
q ( p − γ )
e
−q x
( p − γ )2
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
K0(q x)
1
p 1/2 K2v(q x)
v/2−1 Kv(q x)
p
v/2 Kv(q x)
p
(cid:12)
p − ( p 2 − x2)1/2
(cid:13)v
e x[( p+α)1/2−( p+β)1/2]z − 1
e x[ p−( p+α)1/2( p+β)1/2]
( p + α)1/2( p + β)1/2
e x[( p+α)1/2−( p+β)1/2]2
(cid:12)
( p + α)1/2( p + β)1/2
( p + α)1/2 + ( p + β)1/2
(cid:13)
2v
−x2/4α t
1
2t
e
1
2(πt)1/2 e
v−1
−vαv/22
−x28α t Kv
(cid:3) ∞
x2/4α t e
x
(cid:7)
(cid:6)
x2
8α t
−u u
v−1d u
−x2/4α t
v
x
αv/2(2t)v+1 e
v x
t
(cid:12)
−(α+β)t/2 I1
1
v
Iv(xt)
t1/2(t + 4x)1/2
2 (α − β)t1/2(t + 4x)1/2
(cid:13)
x(α − β)e
−(α+β)(t+x)/2 I0
e
v/2e
−(α+β)t/2 Iv
t
(cid:12)
(cid:12)
1
2 (α − β)t1/2(t + 2x)1/2
1
2 (α − β)t1/2(t + 4x)1/2
(α − β)v(t + 4x)v/2
(cid:13)
(cid:13)
Author Biography
Dr. Robert G. Watts is the Cornelia and Arthur L. Jung Professor of Mechanical Engineering
at Tulane University. He holds a BS (1959) in mechanical engineering from Tulane, an
MS(1960) in nuclear engineering from the Massachusetts Institute of Technology and a PhD
(1965) from Purdue University in mechanical engineering. He spent a year as a Postdoctoral
associate studying atmospheric and ocean science at Harvard University. He has taught advanced
applied mathematics and thermal science at Tulane for most of his 43 years of service to that
university.
Dr. Watts is the author of Keep Your Eye on the Ball: The Science and Folklore of Baseball
(W. H. Freeman) and the editor of Engineering Response to Global Climate Change (CRC Press)
and Innovative Energy Strategies for CO2 Stabilization (Cambridge University Press) as well
as many papers on global warming, paleoclimatology, energy, and the physics of sport. He is a
Fellow of the American Society of Mechanical Engineers.
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=7042954.pdf&bkn=7042953&pdfType=book
S Y N T H E S I S L E C T U R E S O N E N G I N E E R I N G
Series ISSN: 1939-5221
SERIES EDITOR: Stephen F. Barrett, University of Wyoming
THE CAPTAINS OF ENERGY
Systems Dynamics from an Energy Perspective
Vincent C. Prantil, Milwaukee School of Engineering
Timothy Decker, Milwaukee Area Technical College, University of Wisconsin Milwaukee
In teaching an introduction to transport or systems dynamics modeling at the undergraduate level, it is possible to lose pedagogical traction in a sea of abstract mathematics. What the mathematical modeling of time-dependent system behavior offers is a venue in which students can be taught that physical analogies exist between what they likely perceive as distinct areas of study in the physical sciences. We introduce a storyline whose characters are superheroes that store and dissipate energy in dynamic systems. Introducing students to the overarching conservation laws helps develop the analogy that ties the different disciplines together under a common umbrella of system energy. In this book, we use the superhero cast to present the effort-flow analogy and its relationship to the conservation principles of mass, momentum, energy, and electrical charge. We use a superhero movie script common to mechanical, electrical, fluid, and thermal engineering systems to illustrate how to apply the analogy to arrive at governing differential equations describing the systems’ behavior in time. Ultimately, we show how only two types of differential equation, and therefore, two types of system response are possible. This novel approach of storytelling and a movie script is used to help make the mathematics of lumped system modeling more approachable for students.
ABOUT SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis Digital Library of Engineering and Computer Science. Synthesis Lectures provide concise, original presentations of important research and development topics, published quickly, in digital and print formats. For more information visit www.morganclaypool.com
ISBN: 978-1-62705-588-8
VINCENT C. PRANTIL • TIMOTHY DECKER
S Y N T H E S I S L E C T U R E S O N E N G I N E E R I N G
Stephen F. Barrett, SERIES EDITOR
The Captains of Energy
Systems Dynamics from an Energy Perspective
Synthesis Lectures on
Engineering
Each book in the series is written by a well known expert in the field. Most titles cover subjects
such as professional development, education, and study skills, as well as basic introductory
undergraduate material and other topics appropriate for a broader and less technical audience.
In addition, the series includes several titles written on very specific topics not covered elsewhere
in the Synthesis Digital Library.
The Captains of Energy: Systems Dynamics from an Energy Perspective
Vincent C. Prantil and Timothy Decker
2015
Lying by Approximation: The Truth about Finite Element Analysis
Vincent C. Prantil, Christopher Papadopoulos, and Paul D. Gessler
2013
Simplified Models for Assessing Heat and Mass Transfer in Evaporative Towers
Alessandra De Angelis, Onorio Saro, Giulio Lorenzini, Stefano D’Elia, and Marco Medici
2013
The Engineering Design Challenge: A Creative Process
Charles W. Dolan
2013
The Making of Green Engineers: Sustainable Development and the Hybrid Imagination
Andrew Jamison
2013
Crafting Your Research Future: A Guide to Successful Master’s and Ph.D. Degrees in
Science & Engineering
Charles X. Ling and Qiang Yang
2012
Fundamentals of Engineering Economics and Decision Analysis
David L. Whitman and Ronald E. Terry
2012
A Little Book on Teaching: A Beginner’s Guide for Educators of Engineering and Applied
Science
Steven F. Barrett
2012
Engineering Thermodynamics and 21st Century Energy Problems: A Textbook
Companion for Student Engagement
Donna Riley
2011
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo
2011
Systems Engineering: Building Successful Systems
Howard Eisner
2011
Fin Shape Thermal Optimization Using Bejan's Constructal Theory
Giulio Lorenzini, Simone Moretti, and Alessandra Conti
2011
Geometric Programming for Design and Cost Optimization (with illustrative case study
problems and solutions), Second Edition
Robert C. Creese
2010
Survive and Thrive: A Guide for Untenured Faculty
Wendy C. Crone
2010
Geometric Programming for Design and Cost Optimization (with Illustrative Case Study
Problems and Solutions)
Robert C. Creese
2009
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey and Jeffrey W. Holmes
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Analog Multimedia
Explorations
Lina J. Karam and Naji Mounsef
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Digital Multimedia and
Robotics Explorations
Lina J. Karam and Naji Mounsef
2008
CAD/CAM of Sculptured Surfaces on Multi-Axis NC Machine: The DG/K-Based
Approach
Stephen P. Radzevich
2008
Tensor Properties of Solids, Part Two: Transport Properties of Solids
Richard F. Tinder
2007
Tensor Properties of Solids, Part One: Equilibrium Tensor Properties of Solids
Richard F. Tinder
2007
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
2007
Project Management for Engineering Design
Charles Lessard and Joseph Lessard
2007
Relativistic Flight Mechanics and Space Travel
Richard F. Tinder
2006
Copyright © 2015 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations
in printed reviews, without the prior permission of the publisher.
The Captains of Energy: Systems Dynamics from an Energy Perspective
Vincent C. Prantil and Timothy Decker
www.morganclaypool.com
ISBN: 9781627055888 paperback
ISBN: 9781627055895 ebook
DOI 10.2200/S00610ED1V01Y201410ENG024
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING
Lecture #24
Series ISSN
Print 1939-5221 Electronic 1939-523X
The Captains of Energy
Systems Dynamics from an Energy Perspective
Vincent C. Prantil
Milwaukee School of Engineering
Timothy Decker
Milwaukee Area Technical College
University of Wisconsin Milwaukee
SYNTHESIS LECTURES ON ENGINEERING #24
Morgan & Claypool Publishers

ABSTRACT
In teaching an introduction to transport or systems dynamics modeling at the undergraduate level,
it is possible to lose pedagogical traction in a sea of abstract mathematics. What the mathematical
modeling of time-dependent system behavior offers is a venue in which students can be taught that
physical analogies exist between what they likely perceive as distinct areas of study in the physical
sciences. We introduce a storyline whose characters are superheroes that store and dissipate energy
in dynamic systems. Introducing students to the overarching conservation laws helps develop the
analogy that ties the different disciplines together under a common umbrella of system energy. In
this book, we use the superhero cast to present the effort-flow analogy and its relationship to the
conservation principles of mass, momentum, energy, and electrical charge. We use a superhero
movie script common to mechanical, electrical, fluid, and thermal engineering systems to illustrate
how to apply the analogy to arrive at governing differential equations describing the systems’
behavior in time. Ultimately, we show how only two types of differential equation, and therefore,
two types of system response are possible. This novel approach of storytelling and a movie script is
used to help make the mathematics of lumped system modeling more approachable for students.
KEYWORDS
mathematical modeling, systems dynamics, transport modeling, lumped system anal-
ysis, engineering mechanics, systems modeling, modeling approximation, energy,
storage, effort, flow, multi-disciplinary systems
Contents
Preface . . . xiii
Language of Mathematics . . . xiv
The Language of Experts . . . xv
The Importance of Triangulation . . . xvi
The Captains of Energy Story . . . xvii
Outline of Book . . . xviii
Acknowledgments . . . xxi
1 If You Push It, It Will Flow . . . 1
1.1 The Effort-Flow Analogy . . . 2
1.1.1 System Elements . . . 4
1.1.2 The Energy Balance Principle . . . 12
2 Governing Dynamics . . . 13
2.1 Deriving a Governing Differential Equation . . . 15
2.2 The Four Casts . . . 18
2.3 System Order . . . 18
2.4 Linearity . . . 19
3 The Electrical Cast . . . 21
3.1 Effort and Flow Variables . . . 21
3.2 Storage Elements . . . 22
3.2.1 Potential Energy Storage Character . . . 22
3.2.2 Kinetic Energy Storage Character . . . 23
3.3 Dissipative Elements . . . 23
3.4 Single Storage Element Scripts . . . 25
3.4.1 RC Circuits . . . 25
3.4.2 RL Circuits . . . 28
3.4.3 A Generalized Mathematical Form for the Single Storage Element Script . . . 30
3.5 Multiple Storage Element Scripts . . . 32
3.5.1 Series RLC Circuits . . . 32
3.5.2 Parallel RLC Circuits . . . 34
3.5.3 Idealized LC Circuits . . . 37
3.5.4 A Generalized Mathematical Form for the Dual Storage Element Script . . . 38
3.6 Chapter Activities . . . 40
4 The Mechanical Cast . . . 43
4.1 Effort and Flow Variables . . . 43
4.2 Storage Elements . . . 44
4.2.1 Potential Energy Storage Character . . . 44
4.2.2 Kinetic Energy Storage Character . . . 45
4.3 Dissipative Elements . . . 46
4.4 Single Storage Element Scripts . . . 47
4.4.1 Spring-Damper Systems . . . 47
4.4.2 Mass-Damper Systems . . . 51
4.4.3 A Generalized Mathematical Form for the Single Storage Element Script . . . 53
4.5 Multiple Storage Element Scripts . . . 54
4.5.1 The Classical Mass-Spring-Damper System . . . 54
4.5.2 Idealized Mass-Spring Systems . . . 56
4.5.3 A Generalized Mathematical Form for the Dual Storage Element Script . . . 57
4.6 Rotational Mechanical Systems . . . 58
4.6.1 Effort and Flow Variables . . . 58
4.6.2 Storage Elements . . . 58
4.6.3 Dissipative Elements . . . 60
4.6.4 The Simple Pendulum . . . 61
4.7 Chapter Activities . . . 64
5 A Common Notion . . . 71
5.1 Time Domain Solutions of 1st Order Systems . . . 72
5.1.1 Transient Response . . . 74
5.1.2 Forced Response . . . 75
5.1.3 Dimensionless Solutions for 1st Order Systems . . . 80
5.1.4 Universal Truths for 1st Order System Response in the Time Domain . . . 81
5.2 Time Domain Solutions of 2nd Order Systems . . . 83
5.2.1 Free Response . . . 84
5.2.2 Forced Response . . . 89
5.2.3 Dimensionless Solutions for 2nd Order Systems . . . 95
5.2.4 Characteristic Times for Transients in 2nd Order Systems . . . 96
5.2.5 Universal Truths for 2nd Order System Response in the Time Domain . . . 96
5.2.6 Energy Storage and Dissipation for 2nd Order System Response in the Time Domain . . . 98
5.3 Chapter Activities . . . 103
6 Going Nowhere? . . . 113
6.1 Frequency Domain Solutions of 1st Order Systems . . . 114
6.1.1 Transfer Function Analysis for Harmonic Input . . . 114
6.1.2 Steady-State Response and Bode Plot Analysis . . . 116
6.1.3 An Interpretation of Dimensionless Frequency Ratio . . . 118
6.1.4 Filtering Characteristics of 1st Order Systems . . . 120
6.1.5 Universal Truths for 1st Order Systems Subject to Harmonic Input . . . 127
6.1.6 Energy Storage and Dissipation in 1st Order Systems Subject to Harmonic Input Excitation . . . 127
6.2 Frequency Domain Solutions of 2nd Order Systems . . . 130
6.2.1 Transfer Function Analysis for Harmonic Input . . . 131
6.2.2 Steady-State Response and Bode Plot Analysis . . . 132
6.2.3 Universal Truths for 2nd Order Systems Subject to Harmonic Input . . . 144
6.3 Redesigning Systems for Steady-State Behaviors . . . 144
6.4 Energy Storage and Dissipation in 2nd Order Systems Subject to Harmonic Input Excitation . . . 146
6.5 Chapter Activities . . . 149
7 The Fluid and Thermal Casts . . . 159
7.1 Fluid Systems . . . 159
7.1.1 Fluid Effort and Flow Variables . . . 159
7.1.2 Storage Elements . . . 160
7.1.3 Dissipative Elements . . . 163
7.1.4 Single Storage Element Scripts . . . 166
7.1.5 Multiple Storage Element Scripts . . . 168
7.2 Thermal Systems . . . 171
7.2.1 Thermal Effort and Flow Variables . . . 172
7.2.2 Storage Elements . . . 172
7.2.3 Dissipative Elements . . . 174
7.2.4 Single Storage Element Scripts . . . 176
7.3 Chapter Activities . . . 181
8 Summary . . . 189
Afterword . . . 191
Bibliography . . . 193
Authors' Biographies . . . 195
Preface
If I make a mark in time,
I can’t say the mark is mine;
I’m only the underline
Of the word.
Like everybody else, I’m searchin’ through
All I’ve heard.
Cat Stevens
“Tuesday’s Dead”
There is a transparency to my accumulated writing. When I look deep
beneath my declarations, I see the underlying thoughts of others. I
realize now how much of what I have said is neither original nor unique.
Thought is forever being revived, recycled and renewed.
Robert Fulghum
Words I Wish I Wrote
The technical content in this book is based on disciplinary physics whose mathematical
modeling is well-known. The overarching concepts of effort and flow variables have been pre-
sented before in a variety of ways [6–8, 18, 19]. Personally, I wish I’d been taught this way of
analogical thinking in my undergraduate studies. Only recently was I taken by the power in the
analogy when tasked to teach a course in systems dynamics. In the course of teaching, I developed
a story to accompany the analogy. What is offered here is this story. The mathematical relations
are not new, but the story is. Like Cat Stevens and Robert Fulghum, I still find value in this
interpretation of “words said before.” As the physicist Janna Levin admits in her interview for
the book Einstein’s God [17], “I have not changed the facts; I’ve only changed the approach to the
facts.”
THE LANGUAGE OF MATHEMATICS
Schooling, Frey asserts, discriminates against right-brained functions in
favor of left-brain functions. Analogical thinking should be done BEST
by right-brain-dominant individuals, but transport processes are often
taught in an abstract, mathematically oriented manner. Thus, people
who should be best able to understand transport process applications
must struggle to learn them in the abstract.
Arthur T. Johnson
Biological Process Modeling: An Analogical Approach
Mathematics is the language of modeling. Richard Feynman has called it “the language
Mother Nature speaks” [5]. Therefore, it does no good to try to understand her without it. In
the business of mathematically modeling material behavior, it turns out that polymer transport
of embedded fibers, stresses in dry, densely packed granular materials, and anisotropy in crys-
talline metals have something in common. Mathematical models for all of these physical phe-
nomena share a common mathematical formulation based on the discipline-specific underlying
physics. e ultimate commonality between different physical systems is how they are represented
mathematically. What can make studying these fields daunting is the level of abstraction in the
mathematics. is mathematics can seem cumbersome, but it is also the single underlying story-
line, the common thread for which each of the individual applications is but one manifestation.
Mathematics can be like the DNA that is common to two people who are more alike than they
appear.
In using mathematics to model, we draw a unique picture of what is inherently similar about
distinct scientific disciplines under a wide modeling umbrella. Previous treatments have success-
fully applied the principles of mathematical modeling to draw the boundaries of this umbrella.
But mathematical abstraction has often kept the umbrella at bay for those who think less “left-
brained.” In today’s digital world, more and more is done on our behalf by models and simulations
entrusted to the computer and crunching “big data.”
Students don't understand numbers as well as they once did. They rely on
the computer’s perfection, and they are unable to check its answers in
case they type the numbers in wrong. Perhaps our society will decide
that the average person does not need to understand numbers and that
we can entrust this knowledge to an elite caste (the computer) [but
either way] there is a catch. In order to say anything about the universe
with mathematics, we have to construct a mathematical model. And
models are always imperfect. They always oversimplify reality, and every
mathematical model begins with assumptions. Sometimes we forget
these are only assumptions. We fall in love with our models. Major
trauma ensues when we have to modify or discard them.
Dana MacKenzie
The Universe in Zero Words
Dana MacKenzie may be right that we are possibly moving to a world where mathematics
may be the machine behind the curtain. But engineers will still have to build, maintain, and
ultimately understand the machine. So, math matters! Ultimately what is essential for today’s
engineering student is to understand the implications of mathematical simulation performed on
their behalf. How that is done is not necessarily the end of the story, but it may be finding the
path of least abstract resistance. We ultimately need a way to introduce the mathematics at an
appropriate level for new learners. We too often trudge through a nest of complexity trying to
find the kernel of wisdom that excites. Complexity is often left to “the experts to explain.” The
problem is we don’t often enough pull it off. Fortunately, complex systems have always been, on
some level, simplified through the telling of stories.
THE LANGUAGE OF EXPERTS
Students are challenged by important aspects of engineering that can
seem obvious and easy to experts, the so called “expert-blind-spot”
which can impede effective classroom instruction.
Susan Singer and Karl Smith
Understanding and Improving Learning in Undergraduate Science and
Engineering
Singer and Smith [13] make a salient point: that experts have too often forgotten more than
students have yet to learn. We’re so far into the forest, we may have forgotten how to describe
the trees. The reason some experts fail to communicate is that they've been trained to talk in
jargon and unnecessary precision which begets complexity without understanding. This sentiment
is passionately outlined by Tyler DeWitt, an MIT doctoral student in microbiology and high
school teacher: that good science communication can cut through exhaustive detail by telling a
good story.
In the communication of science, there is this obsession with
seriousness. Science communication has taken on this idea I call the
tyranny of precision where you can’t just tell a story. Good storytelling is
not about detail; it’s all about emotional connection! We have to
convince our audience that what we’re talking about matters by knowing
which details to leave out so that the main point still comes across! The
great architect Mies van der Rohe said “Sometimes you have to learn to
lie in order to tell the truth.” I understand the importance of detailed,
specific scientific communication between experts. But not when we’re
trying to teach young learners. (In this case) leave out the seriousness,
leave out the jargon, leave out those annoying details, and just get to the
point! Make me laugh. Make me care. How should you start? How
about saying “Listen let me tell you a story”?
Tyler DeWitt
TED Talk: Hey Science Teachers, Make It Fun!
So we set out to tell a story. A story where animation, characters, roles, and a script of-
fer a less formal introduction to the common story of energy storage and loss. The way around
abstraction is through metaphor and analogy.
THE IMPORTANCE OF TRIANGULATION
It is of first rate importance that you know how to “triangulate” – that
is, to know how to figure something out from what you already know.
R.P. Feynman
Tips on Physics
Analogical reasoning is based on the brain’s ability to form patterns by
association. A new idea is compared to an idea that is already
well-understood. The brain may be able to understand (these) new
concepts more easily if they are perceived as being part of a pattern.
Jonah Lehrer
How We Decide
Educators can help students change misconceptions by using “bridging
analogies” that link students’ correct knowledge with the situation about
which they harbor false beliefs. Using multiple representations in
instruction is one way to move students to expertise.
Susan Singer and Karl Smith
Understanding and Improving Learning in Undergraduate Science and
Engineering
The common theme of bridging disciplines shines forth. This can be accomplished pow-
erfully by employing analogies. An emphasis on analogical thinking is adopted throughout this
book. The concepts are not new. Only the presentation. There is a common story, “a single script
to essentially the same movie.” The movie can be set in a variety of stages: electrical current flow,
fluid mass transport, heat flow, and momentum transfer. In each of these distinct applications, we
are essentially watching remakes of this same underlying movie: same script, same characters, but
different actors playing the roles. These different actors bring their own nuanced interpretation
to the specific characters they play.
If you’ve ever seen a re-make of an old movie, you’ve experienced this sort of thinking.
You’ve seen the story told before through the eyes of one director and a specific cast of actors. In
what follows, our pedagogical approach is simply to view the common script through the eyes of
four distinct casts. We’ll see that the story told is the same, but each cast brings its own distinct
feel to the common script. Also, as is the case whenever one is presented with two tellings of
essentially the same story, we tend to prefer one cast. People often relate all other interpretations
to this favorite telling.
THE CAPTAINS OF ENERGY STORY
Storytelling provides a method for scholarly discourse in engineering
education to make implicit knowledge more explicit, promote reflective
practice, and provide entry points into a community of practice.
C.J. Atman, et al.
Enabling Engineering Student Success
This book uses storytelling to unify the concepts that underlie transport modeling, and
make that modeling come alive. In IMAGINE: How Creativity Works, Jonah Lehrer [9] describes
how such a premise can dramatically awaken the reader:
Our breakthroughs often arrive when we apply old solutions to new
situations. The best way to understand this conceptual blending is to look
at the classic children's book Harold and the Purple Crayon. The premise
of the book is simple: Harold has a magic crayon. When he draws with
this purple crayon, the drawing becomes real. If Harold wants to go for a
walk, he simply draws a path with his crayon. But here’s the twist that
makes Harold and the Purple Crayon (so) engaging: it blends together two
distinct concepts of the world. Although the magic crayon is a fantastical
invention Harold still has to obey the rules of reality. When Harold
draws a mountain and tries to climb it, gravity still exists in the crayon
universe. The book is a delicate balance of the familiar and the fictional.
Jonah Lehrer
IMAGINE: How Creativity Works
One of the problems with math is that we learn to speak the language on time scales that are
not always aligned with our understanding of unifying physical concepts such as energy. Energy
is a great unifier of discussions on physical systems, but has not always been exploited as the
storyteller that it can be. Energy illustrates a common pattern in each story. In this book, you
will be introduced to the Captains of Energy who are at work in engineering systems that are
excited by a world outside of themselves, a world controlled by Father Force. Father Force will
deliver energy to the system. The Captains will play a game of catch with the imparted energy, a
game of monkey-in-the-middle where the Evil Dr. Friction eats away at the energy cache as it
is exchanged between Captains Potential and Kinetic Energy again and again. The familiar and
fictional are used to unify the mathematical abstraction in an exercise in conceptual blending. The
purpose is to convince you that there are only three characters, four casts, one script, and only two
equations you need to understand. The purple crayon is tied to reality and made familiar in an
attempt to foster longer lasting learning. We script the movie and screenplay with the different
actors that appear on the mechanical, electrical, fluid, and thermal stages. One important result
of thinking in this way is that you can learn that “breadth at the expense of depth” has inherent
advantages for life-long learning. e ability to see how “different things look alike” will equip
you with the tools that allow you to adapt to other applications whose underlying physics may be
distinctly different, but whose mathematical formulation you have “already seen” before.
OUTLINE OF THE BOOK
Chapter 1 addresses the overarching analogy of all systems variables as belonging to one of two
categories:
1. Effort variables and
2. Flow variables
We introduce characters that represent the three key system elements in any transport sys-
tem: inertia, stiffness, and friction. Thereby, we cast several simple systems in this analogical
framework and set the stage for the analogy’s universality among separate engineering disciplines.
We summarize well-known and essential mathematical relations that correspond to each of the
system elements and their respective characters. We introduce the idea that there are separate
casts of players in each engineering discipline, but they play the same three roles of the system
elements. e script is, in this sense, always the same. Only the actors playing the roles are differ-
ent. As when any movie is cast with different actors, the same script, when played out, can have
a quite different feel, but the storyline remains unchanged.
In Chapter 2, we use the mathematical relations for system elements directly in a conserva-
tion principle resulting in a governing differential equation. We provide an example of how this
is accomplished for an electrical system, as this is most often the discipline to which all others are
made analogous.
In Chapter 3, we illustrate several examples of electrical systems and derive their respective
governing differential equations. We examine several possible systems, but stress the procedure
more than the system specifics. We do this to emphasize that the specifics can be viewed as
incidental. Here, we provide reasoning for students to understand when governing equations will
be first order and second order. We also introduce the notion of a normalized form of these
equations and their solutions.
Chapter 4 presents the mechanical analog of systems similar to those examined in Chap-
ter 3. Actors in the mechanical cast are presented and their roles in specific systems are offered as
examples. We present single and dual energy character scripts that result in first and second order
differential equations, respectively.
In Chapter 5, we exploit linearity to find solutions to the normalized governing differential
equations in the time domain. We offer an examination of dimensionless solutions as a means to
illustrate the concept of a master curve that cements the analogy mathematically. We present the
forms of master curves for first and second order systems and set the stage for analogies in fluid
and thermal systems.
In Chapter 6, we present classical solutions for systems in steady state that are excited by
harmonic loads. Classically referred to as system response in the frequency domain, solutions
are obtained via use of Laplace transforms and sinusoidal transform functions. It is typical to see
these solutions already in dimensionless form rendering total system solutions that are entirely di-
mensionless. We explain why casting models in dimensionless form is serendipitous for predictive
capability.
In Chapter 7, we present the system analogy for fluid and thermal systems. We illustrate
several examples of where first and second order systems arise and the nonexistence of second
order thermal systems.
Throughout this book, our intention is to provide an analogous procedure whereby
students can see that deriving governing differential equations is a task accomplished always in
the same manner, independent of the system’s discipline. In the Chapter Activities following
Chapters 2–7, we present a small series of applications whereby the analogy can be used to
construct equivalent systems that should now “look familiar.” We hope this belies a complexity
that is born of specific detail, a detail which we argue does not actually exist when one approaches
the mathematical model from the perspective of a common movie script merely played out by
new and different casts of actors.
Vincent C. Prantil and Timothy Decker
January 2015
Acknowledgments
We greatly appreciate the contributions of Drs. John E. Pakkala and Hope L. Weiss at the
Milwaukee School of Engineering for their meticulous reviewing, proofing, and vetting of this
manuscript. Their perspectives bring a clarity and consistency to the presentation that it otherwise
might not have found. We are grateful to them for providing a review of an earlier draft of this
work which helped us to polish and refine many details.
From Vincent C. Prantil: I wish to dedicate this book to my uncle and life-long mentor in all
things academic, Dr. Carl Calliari, retired professor of Education at Rowan University. I also wish
to thank the enormous vat of patience exhibited by my partner in life and crime, Laurna, and my
children, Carmen and Lorin. Their support, laughter, and love continue to carry me through my
journey with more encouragement, enthusiasm, and sanity than it otherwise would possess. They
have unselfishly encouraged and supported the many adventures in my calling as a teacher. I would
like to thank my parents, Dolores and Joseph Prantil, for rearing me in a home with much room
for laughter and looking at the world in unconventional ways. They let me find my own way and
have always been there to support even the craziest of ideas.
I am grateful for the likes of Steven Strogatz, Michael Guillen, and Bill Nye of Cornell,
along with Tyler DeWitt of MIT for their testimony to the art of writing beautifully about sci-
ence and to the pedagogical power in making science fun. I dedicate this book to my students who
adopted the energy characters as routes to analogical peg points in the mind’s eye. My students
doubt, prod, question, and keep me young. We travel through the forest together. I am grateful
to Sandy Haeger and her coterie at the One Way Cafe in Wauwatosa, Wisconsin. Sandy weekly
allowed me to nurse a bottomless cup of coffee and a hard roll, the smallest tab in the Midwest,
while penning these pages and perusing Tim’s drawings. I am blessed to have been provided
with an amazingly understanding publisher in Joel Claypool and the ever accommodating editor,
Andrea Koprowicz, whose encouragement and upbeat demeanor saved many of my faltering mo-
ments. Finally, I am forever grateful to my Creator who blesses me every day with a mysterious
mix of skepticism, faith, failure, humility, humor, energy, and imagination. Ego adhuc cognita.
From Timothy Decker: I dedicate this book to my son Evan, a constant source of support,
strength, and hockey. He centers me and makes me remember daily what is important in life.
I also dedicate this book to my many students who keep me young, and keep me guessing, laugh-
ing, and learning.
Vincent C. Prantil and Timothy Decker
January 2015
C H A P T E R 1
If You Push It, It Will Flow
Lenny: “What makes things move, George?”
George: “Forces do, Lenny.”
Lenny: “What makes things stop moving, George?”
George: “Forces do, Lenny.”
Leonard Susskind and George Hrabovsky
The Theoretical Minimum:
What You Need to Know to Start Doing Physics
At first glance, it is not often evident that individual disciplines in the physical sciences
exhibit a fascinating similitude. That is, behaviors in distinct fields share a unifying theme. For
instance, a voltage drop across a circuit causes charge or current to flow. Similarly, a temperature
difference causes heat to flow from hot to cold. The windfall for engineers is that the mathematical
models for these transport processes, either for current or heat, are identical! Richard Feynman
has said that “mathematics are the eyes with which we see physics” [3, 5]. To the more physically
inclined, this may appear to be placing the cart ahead of the horse. But when we view the world
this way, models allow us to “see” a unifying theme that underlies what we physically observe.
Mother Nature, in her sense of orderliness, has chosen to sing a similar song in different keys.
The music is mathematics [2]. But mathematics can be a double-edged sword. While it can help
us to see patterns and maybe even search for physical insight through patterns, it can be abstract
and elusive for the new learner with less experience using their newly acquired tools of calculus.
Here, we define a movie script that has only four character roles. These will be a character
putting energy into the system, two characters who store energy, and an energy eater. These roles
will be played by a new and different cast in each discipline (the electrical, mechanical, fluid, and
thermal worlds). When a movie is remade with new actors portraying the characters, often people
will take a liking to one cast over another. In other words, one particular cast of actors bring the
screenplay to life in a particularly more meaningful way for them. So the relationship between
voltage and current above is analogous to an identical relationship between temperature and heat
flow. Often engineers with a propensity for viewing the world “electrically” can translate a thermal
system into an equivalent electrical one for the purposes of understanding “the movie” with a new
and different cast. The reason this is so is that there is a common framework in which current
and heat flow may be cast where the characters are the same; they are merely portrayed by different
actors. This analogical thinking is a formidably powerful tool for fostering learning.
1.1 THE EFFORT-FLOW ANALOGY
All learning is by analogy.
Albert Einstein
No set of engineering principles is more useful or pervasive than the
concepts of effort variables and flow variables. By analogy, these can be
applied to almost any situation involving transfer of something from one
location or situation to another.
A.T. Johnson
University of Maryland, College Park
The substrate of analogical thinking involves recognizing a commonality between what,
on the surface, may initially appear to be unrelated. For instance, the flow of mass, momentum,
heat, and electrical charge are not as independent as they may appear at first glance. In fact, a
powerful unifying theme or analogy exists linking the transport models in these otherwise distinct
disciplines.
Figure 1.1: A force applied to a mechanical system causes motion to occur. Force must continually
be applied in the presence of friction if motion is to continue.
Effort variables represent the force-like quantities, forces in and on a system. Flow variables
are quantities that change in response to the applied effort. The effort and flow are called con-
jugate pairs because they are necessarily married in a description of work and energy. Consider
the example of a force applied to a block along a frictional surface. If there is sufficient force, the
block will move. The block is a system characterized by its inertia and the friction between the
block and the floor. A character we will call Father Force provides an externally applied effort
to the block. Father Force lives in a world outside of the system. The external force or effort he
supplies, if high enough to overcome the friction force, will cause a change in the block’s velocity
or flow.
The force on the block and the resulting motion cannot be specified independently, i.e.,
there is an explicit relation between these two quantities. We can associate motions with requisite
forces or, just the same, forces with the ensuing motion. While causality is, in some sense, in the
mind of the observer, we can agree from this point on that a force applied to a system causes
motion to take place. It is these quantities of force and subsequent motion that will form the
basis for an elementary analogy. Consider now an electrical analog to this mechanical system. If
you place a voltage difference across a resistor, current will flow through the resistor. For a known
amount of resistance, you cannot specify the voltage difference applied and the resulting current
independently. They are related. The electrical voltage difference acts like a net force. This net
force pushes electrical charge through the resistor. The resistor represents an electrical analog to
friction, if you will. And the current is a rate of change of electrical charge with time, just as the
velocity of the block is a rate of change of displacement with time. What remains the same is
that when you place an effort difference or a net effort across a system, flow occurs through the
system.
Of Special Note
In any transport process, a difference in an effort variable across a small region
of a system drives a transport or flow of some quantity through the small region.
So, a force difference or net force across a mass will cause a change in its momentum.
A difference in electrical potential (or voltage) causes current to flow. A temperature difference
causes heat to flow while a pressure difference causes fluid to flow. In a unifying template, force,
voltage, temperature, and pressure play analogous roles. They are the effort driving the flow of,
respectively, momentum, (electrical) charge, (heat) energy, and mass. These are the four quantities
that are classically conserved or balanced in all systems. These are four quantities you can neither
create nor destroy. Effort always drives flow. And what flows is usually related to whatever is
conserved. Learn to think this way and almost everything you will learn in engineering will abide
by this same set of rules wherever transport or dynamics are involved.
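As a concrete illustration of this rule, here is a minimal Python sketch (the function name and numerical values are assumptions made only for this example, and each resistive element is taken to be linear): dividing an effort difference by a resistance gives the flow, whatever the discipline.

# Minimal sketch: an effort difference across a linear resistance drives a flow.
# The template flow = (effort difference) / resistance is the same in every discipline.

def flow_from_effort(delta_effort, resistance):
    """Return the flow driven through a linear resistive element."""
    return delta_effort / resistance

# Electrical: 12 V across a 6-ohm resistor drives 2 A of current.
current = flow_from_effort(delta_effort=12.0, resistance=6.0)

# Thermal: a 20 K temperature difference across a 0.5 K/W thermal resistance
# drives 40 W of heat flow.
heat_flow = flow_from_effort(delta_effort=20.0, resistance=0.5)

# Mechanical: a 10 N net force pushing against a 2 N-s/m viscous resistance
# sustains a steady velocity of 5 m/s.
velocity = flow_from_effort(delta_effort=10.0, resistance=2.0)

print(current, heat_flow, velocity)   # 2.0 40.0 5.0

Only the units attached to effort, flow, and resistance change from one cast to the next.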
We list the conjugate effort and flow variables for the four separate disciplines in Table 1.1.
Here, the four disciplines have fostered models that describe how mechanical momentum, fluid
mass, electrical charge, and thermal heat flow under the influence of force, pressure, voltage, and
temperature differences, respectively.

Table 1.1: Effort and flow variables used to describe transport of momentum, mass, heat, and charge

Discipline     Effort         Flow
Electrical     Voltage        Current
Mechanical     Force          Velocity
Fluid          Pressure       Mass Flow Rate
Thermal        Temperature    Heat Flow Rate
In the course of your education, you may come across the nomenclature of a generalized
force. A simple description in the current context is that a generalized force acts through a gen-
eralized displacement to produce work [16]. In a mechanical system, forces act through displace-
ments to do work. In rotational mechanical systems, torque acts through an angular displacement
to perform work (see Table 1.2). Note the units of force multiplied by displacement, e.g., N-m
or joules, J, is the same as the product of torque and angular displacement, Nm-rad or N-m or
J, units of work (and energy). In electrical systems, the product of voltage and charge is given by
the product of volts and coulombs. By definition, this product is also measured in joules, J. We
have chosen to associate flow with the time rate of change of a displacement-like quantity, e.g.,
velocity, angular velocity, or current. As such, we will work with the following convention: effort
is a generalized force, while flow is the derivative of a generalized displacement. e product of
effort and flow will result in power or the rate at which work is performed on or energy is input
to a system.
Table 1.2: Concept of generalized force and motion in mechanical systems

Discipline       Effort               Flow
Mechanical       Generalized Force    Generalized Motion
Translational    Force                Velocity
Rotational       Torque               Angular Velocity
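As a quick numerical check that conjugate effort-flow products carry units of power, the following minimal Python sketch (the values are assumed, purely for illustration) multiplies effort by flow for the translational, rotational, and electrical pairs; each product comes out in watts.

# Minimal sketch: the product of a conjugate effort and flow is power in watts,
# regardless of which discipline the pair comes from.

conjugate_pairs = {
    "translational mechanical": {"effort": 50.0, "flow": 0.2},  # N and m/s
    "rotational mechanical":    {"effort": 10.0, "flow": 3.0},  # N-m and rad/s
    "electrical":               {"effort": 12.0, "flow": 1.5},  # V and A
}

for discipline, pair in conjugate_pairs.items():
    power = pair["effort"] * pair["flow"]   # watts in every case
    print(f"{discipline}: {power} W")       # 10.0 W, 30.0 W, 18.0 W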
Of Special Note
Because we follow the flow of a conserved quantity, most often the flow variable
is the time rate of change of the conserved quantity.
1.1.1 SYSTEM ELEMENTS
The screenplay of the transport process movie is written in terms of energy which is always con-
served. As we will see soon, the concept of conservation plays a critical role in modeling. Since
all real systems involve losses in energy, it would be more correct to say that energy is always bal-
anced. The balance is composed of two types of stored energy pitted against the eventual losses.
So our movie has two types of characters or elements: those that store energy and those that
dissipate energy. Further, there are two elementary storage characters: those that store potential
energy and those that store kinetic energy.
Storage Elements
The key role of the system elements or components in modeling is that they represent explicit
relations between the effort and flow. A system element will be portrayed by a character in our
movie. Any transport process can, at any moment in time, store energy by virtue of its effort
variable or its flow variable. We call energy stored by virtue of a system’s effort variable potential
energy. Any system element that stores potential energy will play the role of Captain Potential
Energy. This energy is locked inside a system by way of an effort difference that can be relaxed to
allow the energy to be released in a form evidenced by the system’s flow variable. Energy stored
by virtue of a system’s flow variable is kinetic energy. Any system element that stores kinetic
energy will play the role of Captain Kinetic Energy. In what follows, we will write mathematical
expressions for the energy storage that will have analogs in each discipline of study. They will
always look the same. To plant the analogy, we choose the electrical and mechanical disciplines
to demonstrate examples of the system elements or characters. We will also use these disciplinary
examples to attempt to shed light on “how” potential and kinetic energy are stored and “who”
stores them.
Potential Energy Storage Elements   Those elements of transport that store potential energy do
so by virtue of building up an effort difference that can be released to perform useful work. In
these cases, flow is always proportional to a time derivative of effort:

FLOW \propto \frac{d}{dt}[EFFORT]     (1.1)
where the proportionality constant determines the specific amount of flow released upon relax-
ation of an effort difference or the capacity of the process to perform work. As such, we term this
constant the system capacity or capacitance, C.
Of Special Note
Energy stored by virtue of stored differences in effort is potential energy.
Characters that store potential energy follow the equation:

FLOW = C \frac{d}{dt}[EFFORT]     (1.2)

that defines the character's capacitance.
Recall that in electrical circuits, capacitors are system elements that store energy through
voltage differences across dielectric plates. Upon discharge, a flow of charge or current is released.
For this process:
i(t) = C \frac{dV(t)}{dt}     (1.3)
Analogously, we may ask which system element stores potential energy in a mechanical
system. We typically recall from elementary mechanics that this is a spring. Potential energy is
stored by virtue of a stored effort or mechanical force in the deformed spring. Recall that the force
and displacement are related by Hooke’s law for simple, linear springs.
F = kx     (1.4)
No differential relation is evident, so let’s examine our storage a bit more closely. Recall
that the flow variable is velocity, the time derivative of displacement. In fact, flow variables are
often related to conserved or balanced quantities. In mechanical systems, momentum is balanced.
In systems where mass is constant, this implies that velocity is the appropriate flow variable when
linear momentum is conserved. Using the definition of velocity as the time derivative of displace-
ment relates velocity to a time derivative of force:
x(t) = \frac{1}{k} F(t) \;\Rightarrow\; v = \frac{dx(t)}{dt} = \frac{1}{k} \frac{dF(t)}{dt}     (1.5)
Then, by analogy, the mechanical capacitance is given by the reciprocal of the spring stiff-
ness:

C_{MECH} = \frac{1}{k}     (1.6)

Generalizing, by analogy, a transport process exhibits a capacitance given by:

C = \frac{1}{EFFORT} \int (FLOW)\, dt     (1.7)
When the energy is stored by virtue of effort, it is potential energy. Energy is given by an
integral of power expended in a process. Thereby, the potential energy stored in a capacitor would
be given as:

\int V(t)\, i(t)\, dt = \int \left(\frac{1}{C} \int i(t)\, dt\right) i(t)\, dt = \int \left(\frac{1}{C} q(t)\right) i(t)\, dt = \int \left(\frac{1}{C} q\right) dq = \frac{1}{2} \left(\frac{1}{C}\right) q^2     (1.8)

where the definition of current is:

i(t) = \frac{dq(t)}{dt}     (1.9)
Once again, analogously in a mechanical system, the potential energy stored by a spring by
virtue of the force within it is given as:
\int F\, v\, dt = \int (kx)\, v\, dt = \int (kx)\, dx = \frac{1}{2} k x^2     (1.10)
You may recall from your elementary physics courses that this is the expression for potential
energy in a deformed spring. To begin our energy story, any system element who stores potential
energy is Captain Potential Energy. He possesses energy by virtue of effort or the force contained
in his springs!
Figure 1.2: Captain Potential Energy stores energy by virtue of effort in his compressed springs. The
containment vessel for the effort is the stiffness or capacitance. Captain Potential Energy is distin-
guished by his possession of a system’s capacitance.
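A small numerical sketch in Python (with parameter values assumed purely for illustration) makes the analogy explicit: whether Captain Potential Energy appears as a charged capacitor or as a deformed spring, the stored energy is one half the capacitance times the square of the stored effort.

# Minimal sketch: Captain Potential Energy in two casts.
# Electrical cast: a charged capacitor stores E = q^2 / (2C) = C V^2 / 2.
# Mechanical cast: a deformed spring stores E = k x^2 / 2 = F^2 / (2k),
# i.e., one half of capacitance times effort squared in both cases (C_mech = 1/k).

C = 100e-6              # capacitance in farads (assumed value)
V = 12.0                # voltage (effort) across the plates, volts
q = C * V               # stored charge, coulombs
E_capacitor = 0.5 * q**2 / C    # joules; equivalently 0.5 * C * V**2

k = 2000.0              # spring stiffness, N/m (assumed value)
x = 0.05                # deflection, m
F = k * x               # stored effort (force), newtons
E_spring = 0.5 * k * x**2       # joules; equivalently 0.5 * F**2 / k

print(E_capacitor, E_spring)    # 0.0072 J and 2.5 J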
Kinetic Energy Storage Elements   Those elements of transport that store kinetic energy do so
by virtue of their flow variable. As a result, effort differential is related to a time derivative of flow:

EFFORT \propto \frac{d}{dt}[FLOW]     (1.11)

Where the proportionality constant determines the specific amount of effort difference
required to cause the prescribed rate of change of flow. This term is referred to as the system
inductance, L.
Of Special Note
Energy stored by virtue of stored flow is kinetic energy. Characters that store
kinetic energy follow the equation:

EFFORT = L \frac{d}{dt}[FLOW]     (1.12)

that defines the character's inductance.
Recall that in electrical circuits, inductors are system elements across whose terminals a
voltage drop is related to a time rate of change of current.
V = L \frac{di(t)}{dt}     (1.13)
Analogously, we may ask which system element stores kinetic energy in a mechanical sys-
tem. We typically recall from elementary mechanics that this is the mass or inertia of the system.
This comes naturally from Newton's Second Law relating the net force on a mass to its time
rate of change of linear momentum. When mass is constant, the rate of change of momentum is
proportional to the mass's acceleration.

F = ma = m \frac{dv(t)}{dt}     (1.14)
Thus, when a mass exhibits some non-zero speed, it possesses kinetic energy by virtue of
its speed and in proportion to its mass or inertia. If there were no mass, there would be no entity
to have speed! In this interesting way, we can learn to say that the mass stores the kinetic energy
in the form of its speed. Because the mass stores the kinetic energy in a mechanical system, we
can say that

L_{MECH} = m     (1.15)

Generalizing, by analogy, a transport process exhibits an inductance given by:

L = \frac{1}{FLOW} \int (EFFORT)\, dt     (1.16)
With energy being an integral of power, we can work in terms of the flow variable to define
the kinetic energy:
\int V(t)\, i(t)\, dt = \int \left(L \frac{di(t)}{dt}\right) i(t)\, dt = \int (L i)\, di = \frac{1}{2} L i^2     (1.17)
Once again, analogously in a mechanical system
\int F\, v(t)\, dt = \int \left(m \frac{dv(t)}{dt}\right) v\, dt = \int (m v)\, dv = \frac{1}{2} m v^2     (1.18)
This may look familiar to you as the kinetic energy in a moving mass. Continuing our energy story,
any system element who stores kinetic energy is Captain Kinetic Energy. He possesses energy by
virtue of flow or the velocity associated with his mass or inertia!
Figure 1.3: Captain Kinetic Energy stores energy by virtue of his speed. The containment vessel for
the speed is the inertia or mechanical inductance. Captain Kinetic Energy is distinguished by his
possession of a system’s inductance.
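A matching Python sketch for the kinetic side (again with assumed, illustrative values) shows the same form in both casts: one half the inductance times the square of the flow.

# Minimal sketch: Captain Kinetic Energy in two casts.
# Electrical cast: an inductor carrying current stores E = L i^2 / 2.
# Mechanical cast: a moving mass stores E = m v^2 / 2,
# i.e., one half of inductance times flow squared in both cases (L_mech = m).

L = 0.5     # inductance in henries (assumed value)
i = 2.0     # current (flow), amperes
E_inductor = 0.5 * L * i**2     # joules

m = 3.0     # mass, kg (assumed value)
v = 4.0     # speed (flow), m/s
E_mass = 0.5 * m * v**2         # joules

print(E_inductor, E_mass)       # 1.0 J and 24.0 J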
So let’s remember that mathematically, both storage elements (or characters) relate flow or
effort to the derivative of the other. Differential relations imply energy storage.
Of Special Note
Differential mathematical relationships for system elements imply energy
storage.
Dissipative Elements
In dynamic transport, dissipative elements relate flow and effort strictly algebraically. Algebraic
relations imply energy dissipation. Resistive elements, in principle, play a part in every disciplinary
story. Flow experiences resistance under the action of any difference in effort that drives it:
EFFORT \propto FLOW     (1.19)

Here the proportionality constant determines the specific amount of resistance that must
be overcome by a given net effort to drive a given amount of flow. This term is referred to as the
system resistance, R.
Of Special Note
Characters that dissipate energy follow the equation:

EFFORT = R \times FLOW     (1.20)

that defines the character's resistance, where \times indicates multiplication.
Figure 1.4: The Evil Dr. Friction dissipates energy as flow occurs under the driver of an effort dif-
ference across some resistive element. The Evil Dr. Friction is distinguished by his possession of a
system’s resistance to flow.
Recall that in electrical circuits, resistors are system elements across whose terminals a volt-
age drop is proportional to the current flowing through it as prescribed by Ohm's law:

V = iR     (1.21)

Similarly in a mechanical system, force can be applied to produce motion by overcoming
the effects of friction. For example, when viscous forces oppose the motion, a good representation
of the force required to overcome this resistance is given by:

F = bv     (1.22)

where clearly, by analogy,

R_{MECH} = b     (1.23)

Of Special Note
Algebraic mathematical relations imply energy dissipation. Often, these dissi-
pative relations bear someone's name, e.g., Ohm, Fourier, Newton, Torricelli,
etc.
Generalizing, by analogy, a transport process exhibits a resistance given by:

R = \frac{EFFORT}{FLOW}     (1.24)

The energy "eaten by" any resistive element is equivalent to the work done by the dissipating
agent, e.g., friction in a mechanical system. When energy is "eaten" it is no longer available to
be stored in potential and/or kinetic forms. We say it is effectively "lost." The lost or dissipated
energy is quantified by the work done by the force or effort across the element:
\int \frac{dW}{dt}\, dt = \int dW = \int \left(V(t)\, i(t)\right) dt = \int V(q)\, dq = \int \left(i(t)^2 R\right) dt     (1.25)

Analogously in a mechanical system, the lost energy is given by:

\int \frac{dW}{dt}\, dt = \int dW = \int F_{FRICTION}\, v\, dt = \int F_{FRICTION}\, dx     (1.26)
You may recall from undergraduate engineering dynamics that this expression is equivalent
to the work done or energy dissipated by friction acting on a moving mass. As energy is trans-
ported and exchanged between potential and kinetic forms, resistive agents essentially steal part of
the transfer. There is a balance between energy transferred and energy lost. The resistive element
acts to transform energy to a form not useful by the particular system, i.e., resistors transform
useful electrical energy to heat, a loss by-product of current flow in a real circuit. Similarly, fric-
tion in mechanical systems steals energy, transforming it to sound and heat, no longer useful for
producing motion. The Evil Dr. Friction is the character who irreversibly robs energy in a system
as it is being released from either potential or kinetic forms and in any transfer between the two.
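The short Python sketch below (values assumed for illustration) tallies the Evil Dr. Friction's take over a fixed interval in the electrical and mechanical casts using the algebraic resistive relations above; in both, the dissipated power is the resistance times the square of the flow.

# Minimal sketch: the Evil Dr. Friction's take over a short interval.
# Electrical cast: a resistor dissipates power P = i^2 R.
# Mechanical cast: a viscous damper dissipates power P = F v = b v^2.

R = 10.0    # resistance, ohms (assumed value)
i = 1.5     # current, amperes
P_resistor = i**2 * R           # watts converted to heat

b = 4.0     # viscous damping coefficient, N-s/m (assumed value)
v = 0.8     # velocity, m/s
P_damper = b * v**2             # watts converted to heat and sound

dt = 2.0    # seconds of steady operation
print(P_resistor * dt, P_damper * dt)   # energy "eaten" in joules: 45.0 and 5.12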
1.1.2 THE ENERGY BALANCE PRINCIPLE
In any given system, transport occurs when effort drives flow of some quantity. By defining a
control volume into and out of which flow occurs, one can create a balance of any quantity, Q, by
stating that
\dot{Q}_{IN} - \dot{Q}_{OUT} = \dot{Q}_{STORED}     (1.27)

where \dot{Q}_{IN} = \frac{dQ_{IN}}{dt}. As we will see, this simple balance principle is the precursor to every differ-
ential equation governing the transport of conserved or balanced quantities. Generally, input and
output transport will require overcoming some resistance to flow with the net inflow resulting in
storage.
CHAPTER 2
Governing Dynamics
Governing dynamics, gentlemen; it’s all governing dynamics
John Nash
In all of science we take certain axioms as given and proceed to model behavior from there.
One such premise is that there is a collection of quantities whose behavior is governed by principles
of conservation and balance. Among these are mass, momentum, energy, and electrical charge.
We accept that these quantities can neither be created nor destroyed. Therefore, the amount of any one of these quantities is a function of how much you begin with and how much is either transported to you or from you. There can be external sources and sinks and repositories where
the quantities can be stored. To write a statement that balances any conserved quantity at a point,
we isolate an infinitesimally small volume with inlet and exit windows through which our quantity
of choice can be transported in and out.
It is perhaps best at this point to proceed by example. In an attempt to be consistent with
other treatments of the effort-flow analogy, let’s consider a volume, e.g., a bank vault, into which
money may enter and exit through different ports in the volume boundary, e.g., the bank doors.
In order to introduce the parameter of time, imagine that dollars enter the bank at a rate of Q̇_IN dollars per day. Say a different amount may be flowing out of the bank at a rate of Q̇_OUT dollars per day. A balance principle is as simple as tallying how much money enters vs. how much exits. If the amount entering is greater than the amount exiting in any given interval of time, there is a net accrual and the amount of money in the bank increases over time. In other words, there is a net amount of storage of money in the bank. Contrarily, if the amount exiting exceeds the amount entering in any time interval, the amount of money in the bank will decrease. One can then conclude that the amount stored in this time interval is a negative value, i.e., there is a net loss of money when Q̇_STORED < 0 over this time interval. So the statement of balance (in rate form) is simply

Q̇_IN − Q̇_OUT = Q̇_STORED    (2.1)

In the absence of a storage mechanism, the amount of the quantity already stored in this volume remains constant and we say this quantity is conserved. When this is the case, any net inflow must be accompanied by an equal amount of outflow.
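The balance principle is easy to exercise numerically. The short Python sketch below (not from the text; the daily dollar rates are invented purely for illustration) tallies a hypothetical vault over a few days and shows that the stored amount is just the accumulation of the net rate Q̇_IN − Q̇_OUT.

# Hypothetical daily money flows for the bank-vault example (illustrative numbers only).
q_in_per_day  = [100.0, 100.0, 100.0, 100.0]   # dollars per day entering
q_out_per_day = [ 40.0,  60.0, 120.0, 100.0]   # dollars per day exiting

stored = 0.0                                    # amount held in the vault
for day, (q_in, q_out) in enumerate(zip(q_in_per_day, q_out_per_day), start=1):
    net_rate = q_in - q_out                     # rate form of the balance: Q_IN_dot - Q_OUT_dot = Q_STORED_dot
    stored += net_rate * 1.0                    # accumulate over a one-day interval
    print(f"day {day}: net rate = {net_rate:+.1f} $/day, stored = {stored:.1f} $")

On day 3 the outflow exceeds the inflow, the net rate goes negative, and the stored amount drops, exactly as described above.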
Figure 2.1: A repository for a balanced quantity that allows inflow and exit from it through virtual
windows and storage on the interior.
2.1 DERIVING A GOVERNING DIFFERENTIAL EQUATION
We will need to explore some of the more germane properties of differential equations here to
establish the utility of our analogy. When we let the size of the volume into which a balanced or
conserved quantity flows shrink to a point, the balance or net storage principle becomes a differ-
ential equation governing the amount of the quantity present at that point at any given moment
in time. Once again, to make the mathematics more approachable, let’s proceed by example. Let’s
start with a simple story of a passive electrical circuit that contains an energy dissipator, the resis-
tor, connected in series with an electrical potential energy storage device known as the capacitor
as shown in Figure 2.2. At some point in time, a DC battery is connected across the circuit. In
our cartoon version in Figure 2.3, Father Force represents an excitation from the outside world,
an externally applied effort. We can often understand system behavior by associating this force
with a driving external agent from the outside world perturbing the system. Here, the agent of
the outside world, the battery, imposes a voltage difference across the circuit. In our story, we
call this agent of the outside world Father Force because he represents an externally applied effort
difference. He hurls electric charge at the circuit, our system. This electrical voltage difference drives electrical charge at some rate (known as current) through the resistor. The resistor is an energy dissipator, the Evil Dr. Friction in our cartoon. His snake “eats charge” and thus electrical energy. He steals it from the system. This electrical energy will be lost mostly in the form of heat.
What is left exits the resistor and can be stored across the plates of an electrical capacitor. Here
we see Captain Potential Energy as the storage agent in the capacitor.
Figure 2.2: A series RC circuit as typically represented with a standard circuit diagram. A represen-
tative volume element of the circuit is examined under the magnifying glass. Given that the circuit is
grounded at the lower battery post, a voltage at the prescribed node represents the voltage drop across
the capacitor, V1.
At this point, we should point out that the system is defined exclusively by the system element characters, i.e., one character that eats energy and one that stores energy. The battery is “the outside world.” This agent imposes a voltage difference on the circuit that causes current to flow. Father Force lives in the outside world and delivers an input to the system whose characters are Captain Potential Energy and the Evil Dr. Friction!

Figure 2.3: An electrical effort (voltage difference provided by a DC battery) throws charge flow through a resistor, an energy dissipator, which eats part of the input charge only to have the amount that gets through be stored by a storage element or character. As the voltage difference across the capacitor grows, storing electrical potential energy, the voltage drop across the resistor decreases and, along with it, the current in the circuit.
To derive any governing differential equation, we isolate a small part of the system. Con-
sider a point in the circuit between the resistor and the capacitor (the node under the magnifying
glass in Figure 2.2). This choice of representative volume element (RVE) is somewhat arbitrary.
Since the only charge storage element in our example is the capacitor (and this character is outside
of the RVE), the amount of charge per unit time (or current) flowing into the node must exactly
equal or balance the current flowing out because there is no means by which to allow charge to
accumulate on a wire alone.
This concept must now be rendered mathematically. The current IN must equal current OUT, or the current that passed through the energy dissipator must equal that flowing into the energy storage element. Material laws usually govern the inflow and outflow of the balanced quantity. These material laws are hypotheses based on observation and measurement [1, 2, 12, 14].
Q̇_IN − Q̇_OUT = Q̇_STORED = 0
iR − iC = 0
iR = (VO − V1) / R ,  iC = C·d(V1 − VREF)/dt
RC·dV1/dt + V1 = VO(t)    (2.2)
where VO is the battery voltage and V1 is the voltage drop across the capacitor.
Figure 2.4: The abstraction of the mathematical governing equation exhibits “sides” belonging to the system and the outside world, which acts to excite the system into some manner of dynamic response.
Because the reference voltage is chosen to be grounded, that is zero voltage, we say that
this differential equation governs the voltage drop, V1 , across the potential energy storage device
or capacitor. It is important to note that the resistor, capacitor, and the interior system voltage,
V1, lie mathematically on one side of the equation while the driver from the outside world lies on
the other side of the equation. The left side contains all system parameters and quantities while the right-hand side represents a “forcing function” that drives the flow.
RC·dV1/dt + V1 = VO(t)    (2.3)
This will be a constant theme in our development. The movie characters and their behavior live “on the left” while the circumstances presented to them by the outside world (and to which they must respond) will lie “on the right” (see Figure 2.4).
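To see the two “sides” interact, the sketch below steps the governing equation RC·dV1/dt + V1 = VO(t) forward in time with a simple forward-Euler update. The resistor and capacitor values and the constant battery voltage are invented for illustration and are not taken from the text.

# Forward-Euler integration of RC*dV1/dt + V1 = VO(t) for the series RC circuit.
R, C = 1.0e3, 1.0e-6          # ohms, farads (illustrative values); tau = RC = 1 ms
VO = 5.0                      # volts, constant battery voltage applied at t = 0
tau = R * C

dt, t_end = 1.0e-5, 5 * tau   # integrate over five time constants
V1, t = 0.0, 0.0              # capacitor initially uncharged
while t < t_end:
    dV1_dt = (VO - V1) / tau  # governing equation solved for the derivative
    V1 += dV1_dt * dt
    t += dt
print(f"V1 after five time constants ≈ {V1:.3f} V (battery voltage {VO} V)")

The capacitor voltage (the system side) climbs toward the battery voltage (the outside-world side), which is the behavior examined analytically in Chapter 5.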
2.2 THE FOUR CASTS
Movies are always open to being remade ... I think of it like the James
Bond movies. Different actors can play the same role.
Steve Martin
I think the movie business is all movies that you’ve seen before.
Everything’s a remake; people want things that are familiar.
Graydon Carter
So the story of the evolution of quantity, Q, over time is governed by the balance

Q̇_IN − Q̇_OUT = Q̇_STORED = Q̇_STORED^POTENTIAL + Q̇_STORED^KINETIC    (2.4)

Here, we must make a statement, though, that while it is allowable, there need not be two storage characters. Recall, the example given in Section 2.1 had only a potential energy storage element in its cast. So the transport processes of interest are those that contain:
1. Dissipative elements and a potential energy storage element
2. Dissipative elements and a kinetic energy storage element
3. Dissipative elements and both potential and kinetic energy storage elements
4. Both potential and kinetic energy storage elements and no dissipative elements
It will turn out that systems with only one type of storage element character in their script
are always governed by first order ordinary differential equations in time. Alternatively, systems
whose script contains two types of storage element will always be characterized by second order
ordinary differential equations in time. These are important characteristics to be aware of before
we discuss the nature of their solutions.
The focus of the analogical approach is its power in describing the similitude between
systems transporting conserved quantities in four otherwise distinct disciplines of engineering.
Dynamic differential equations are statements of how conserved quantities change in time. In
electrical systems, we will always balance electrical charge; in mechanical systems, momentum;
in fluid systems, mass; and in thermal systems, heat energy. These are summarized in Table 2.1.

Table 2.1: Conserved quantities

Discipline                  Conserved Quantity
Electrical                  Charge
Translational Mechanical    Linear Momentum
Rotational Mechanical       Angular Momentum
Fluid                       Mass
Thermal                     Internal Energy
2.3 SYSTEM ORDER
In our analogy to a screenplay, we have limited our discussion to two scripts: those associated
with first order governing equations and those associated with second order governing equations.
While it is seldom said in this way, the order of the governing differential equation is defined as
the difference between the highest order derivative appearing in the equation and the lowest order
derivative appearing in the equation. It will be shown that the system order is the most important
determinant of the system behavior. We will have more to say about this as we set about solving
these equations in Chapters 5 and 6.
2.4 LINEARITY
It will suffice to say that a differential equation is linear when the system variable and all its deriva-
tives on the left side of the equation, i.e., those associated with the capacitor voltage, V1, in the
example of the previous section, appear only to the first power and there are no transcendental or
trigonometric terms on the left side, e.g., exponential functions, natural logarithms, or periodic
functions of the dependent variable. While analytical, functional solutions do exist for some so-called nonlinear systems, they are rare and often difficult to obtain. Therefore, solutions to nonlinear systems often require numerical solution techniques. Solutions to governing differential equations are the
mathematical representations of physical system behavior. We will concern ourselves with linear
systems only in this book. We can use the linear story to help us visualize and understand non-
linear behavior once we master a linear understanding. When appropriate, we can then linearize
nonlinear systems to find a simpler story over a limited range of behavior.
CHAPTER 3
The Electrical Cast
An electron’s journey through a circuit can be described as a zigzag path
that results from countless collisions with the atoms of the conducting
wire. Each collision results in the alteration of the path, thus leading to a
zigzag type motion. While the electric potential difference across the
two ends of a circuit encourages the flow of charge, it is the collisions of
charge carriers with atoms of the wire that discourages the flow of
charge.
The Physics Classroom
In Chapter 2, our example system was a passive RC circuit, a system whose script contains
only two character “types”: a potential energy storage character and a dissipative character. In this
system, the battery is an agent of the outside world that continually hurls charge through the
resistive element that “eats charge” and turns its electrical energy to heat or thermal energy. It is
important to note that the energy is not destroyed, but merely transformed to another form that is
no longer available as electrical potential causing current flow through the circuit. This represents a loss of so-called electrical energy to other non-useful forms (in terms of hurling electrons through the electrical circuit). The heat in a light bulb is a necessary loss incurred as current flows through
a resistive filament which produces heat AND light. Perhaps ironically, the light is a useful “by-
product” of the circuit, but, from a purely electrical perspective, it represents a loss of electrical
energy that forces the circuit to require constant energy input.
3.1 EFFORT AND FLOW VARIABLES
If you push charge, it will flow. e flow of charge is, by definition, an electrical current. How
you push charge is by creating a difference in electrical potential (or voltage). The electrical po-
tential or voltage drop along a portion of a circuit drives the charge to flow from higher to lower
electrical potential. Electrical charge can be difficult for mechanical engineers to grasp if we look
at the world as driven by external forces that require contact to initiate motion. Electrical charge
responds to force-at-a-distance, force fields that are many orders of magnitude larger than, say,
gravitational forces in mechanical systems. The reason forces are not always seen as this large is
that many, many positive and negative electrical charges end up cancelling each other out. It is
only the sparse imbalances in charge that occasionally occur that tip the balance and end up creating a difference in electrical potential. This difference is not an equilibrium state and charges
tend to move to reduce this difference. So charge moves in response to the electromagnetic field,
a force felt by charge when all charges are not paired up [4]. But it is also a known condition
that, like mass and energy, no one can create or destroy charge. Positive and negative charges
exist. On the whole, they cannot be created or destroyed, but they can be collected in such states
that differences in net amounts drive flow of unlike charges toward one another. All governing
equations are based on writing mathematical statements of this conservation of electrical charge.
Given that charge is conserved, governing equations of motion arise out of balancing electrical
“forces” that drive charge to move toward an equilibrium state.
Table 3.1: Effort, flow, and conserved quantities for electrical systems

Conserved Quantity   Units      Symbol
Charge               Coulombs   q

Variable   Quantity                          Units     Symbol
Effort     Electrical Potential (Voltage)    Volts     V
Flow       Current                           Amperes   i
3.2 STORAGE ELEMENTS
All such powered passive electrical circuits can, at most, contain three system element characters.
Recall that two of the characters are capable of storing energy, one in the form of potential energy,
the other in terms of kinetic energy.
3.2.1 POTENTIAL ENERGY STORAGE CHARACTER
Potential energy storage devices store energy in the form of the effort variable. The electrical cast member who plays the role of Captain Potential Energy is a device that stores a differential of electrical effort or voltage. This is the capacitor.
The potential energy storage character is described mathematically in the same way for every disciplinary system. The governing mathematical expression of the storage by virtue of effort is

FLOW = C·d(EFFORT)/dt ,
iC = C·d(V1(t) − VREF)/dt ,

where capacitance is measured in farads or ampere-seconds/volt (f = A·s/V).
Figure 3.1: The electrical potential energy storage character is played by the capacitor.
3.2.2 KINETIC ENERGY STORAGE CHARACTER
Kinetic energy storage devices store energy in the form of the flow variable. The electrical cast member who plays the role of Captain Kinetic Energy is that device that stores energy by virtue of electrical flow or current. This is the inductor.
The kinetic energy storage character is described mathematically in the same way for every disciplinary system. The governing mathematical expression of the storage by virtue of flow is

EFFORT = L·d(FLOW)/dt ,
V1(t) − V2(t) = L·d(iL(t))/dt ,

where inductance is measured in henries or volt-seconds/ampere (H = V·s/A).
3.3 DISSIPATIVE ELEMENTS
Energy losses occur at the hand of a dissipative element or a character that “eats energy.” The role
of the Evil Dr. Friction in the electrical script is played by the resistor.
Recall, the governing mathematical expression of the dissipation is always algebraic rather
than differential:
EFFORT = R × FLOW ,
V1 − V2 = R·iR ,
Figure 3.2: The electrical kinetic energy storage character is played by the inductor.

Figure 3.3: The electrical energy dissipative character is played by the resistor.
where resistance is measured in ohms or volts/ampere (Ω = V/A).
Although there is no hard and fast rule about this, the mathematical expression governing en-
ergy loss is very often characterized as “someone’s law.” Here, Ohm’s law governs the electrical
energy dissipated in a resistor. Because dissipative elements result in energy losses, a requisite effort differential, here the voltage drop, V1 − V2, is necessary to drive a current, iR, through the element.
A summary of the electrical cast and the roles they play is given in Figure 3.4 and a summary
of the relevant system element relations is summarized in Table 3.2.
Table 3.2: Relevant system element relations for electrical systems

Field        Effort Variable   Flow Variable
Electrical   Voltage           Current

Relation                            Form                                                    Analogy
Dissipative Material Property Law   Effort = Resistance × Flow;  (V1 − V2) = R·i            Resistance = Resistance
Energy Storage in Effort Variable   Flow = Capacitance × d(Effort)/dt;  i = C·d(V1 − V2)/dt Capacitance = Capacitance
Energy Storage in Flow Variable     Effort = Inductance × d(Flow)/dt;  (V1 − V2) = L·di/dt  Inductance = Inductance
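The three element relations in Table 3.2 can be exercised numerically. The sketch below imposes an assumed sinusoidal voltage drop across each element and recovers the corresponding current from the resistor, capacitor, and inductor laws; the element values and the signal are invented for illustration only.

import math

R, C, L = 100.0, 1.0e-6, 1.0e-3                   # ohms, farads, henries (illustrative)
f = 1000.0                                        # Hz
dt = 1.0e-6
t = [k * dt for k in range(2000)]                 # two periods of samples
dV = [math.sin(2 * math.pi * f * tk) for tk in t] # assumed voltage drop, volts

i_R = [v / R for v in dV]                                          # EFFORT = R x FLOW
i_C = [C * (dV[k + 1] - dV[k]) / dt for k in range(len(t) - 1)]    # i = C d(V)/dt
i_L, acc = [], 0.0
for v in dV:                                                       # V = L di/dt  ->  i = (1/L) * integral of V dt
    acc += v * dt / L
    i_L.append(acc)

print(f"peak resistor  current ≈ {max(i_R):.4f} A")    # about 1/R = 0.01 A
print(f"peak capacitor current ≈ {max(i_C):.4f} A")    # about 2*pi*f*C ≈ 6.3 mA
print(f"peak inductor  current ≈ {max(i_L):.4f} A")    # larger, and offset, because the integration starts from rest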
3.4 SINGLE STORAGE ELEMENT SCRIPTS
Recall, single energy storage scripts are capable of storing only one type of system energy, poten-
tial or kinetic. When coupled with an energy dissipating agent, first order ordinary differential
equations are the result. These first order equations in time govern the system effort and flow behavior(s). For the electrical cast, the simplest examples are the series RC and RL circuits.
3.4.1 RC CIRCUITS
In the case of the series RC circuit (Section 2.1), electrical energy is provided from “the outside
world” by the electrical potential boost of the DC voltage source or battery. Some energy is lost
through the resistor, yet enough gets through so that a potential difference or voltage builds up
Figure 3.4: The electrical cast of characters.
across the plates of the potential energy storage element thereby charging the capacitor through
a differential of voltage or effort. Recall, what results mathematically is a differential equation for
the voltage stored across the capacitor plates that is linear and first order. Therefore, the resulting
differential equation is written for the effort variable.
Figure 3.5: The electrical cast of characters playing out a series RC circuit.
Here, the quantity RC possesses units of time:

RC:  Ω · f = (V/A) · (A·s/V) = s.
It is known as the system time constant, RC = τ. Because time is the independent variable and all responses are time histories, it is of primary importance that the system is characterized by this special amount of time. Effort (voltage) and flow (current) will change over time. Because the time constant, τ, enters the governing differential equation explicitly, it flavors the entire system response. How fast or slow effort and flow evolve in time in the system will always be in quanta of time constants. That is, system variables will change explicitly in “chunks” of time of τ seconds. We will learn to talk in these terms. The amount of time it will take for any change to occur in a first order system will be N time constants. The number, N, of course, will depend on what phenomenon we are discussing. We will call the time constant, τ, the system parameter. It parameterizes how fast the system responds to external stimuli.
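A standard property of first order step responses (the exponential form developed in Chapter 5) makes the “chunks of τ” idea concrete: after N time constants, a step change is a fixed fraction (1 − e^(−N)) complete, whatever the actual value of τ. The tiny sketch below simply tabulates that fraction; it assumes the exponential response and is illustrative only.

import math

# Fraction of a step change completed after N time constants of a first-order system,
# assuming the standard response 1 - exp(-t/tau) developed in Chapter 5.
for N in range(1, 6):
    fraction = 1.0 - math.exp(-N)
    print(f"after {N} time constant(s): {100.0 * fraction:.1f}% of the total change")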
Electrical systems, it turns out, are “mathematically versatile” in that the resulting ordi-
nary differential equations will as often govern the behavior of the effort variable, V1, as the flow
variable, i. The equation governing the capacitor voltage can be recast as a 1st order ordinary differential equation governing the system current, i:

(VO(t) − V1(t)) / R = i(t).
Inverting the potential energy storage relation for the capacitor will give an expression for the
voltage, V1(t), which may be substituted into this relation:

i(t) = [ VO(t) − (1/C)·∫ i(t) dt ] / R
i(t)·R = VO(t) − (1/C)·∫ i(t) dt
RC·di(t)/dt + i(t) = C·dVO(t)/dt ,

where C·dVO(t)/dt is an equivalent input signal current seen as a forcing function by the differential equation governing the system current. It is associated with the time rate of change of the imposed battery voltage, VO(t). It is important to note that the same system time constant, RC = τ, appears in and characterizes the solution(s) of both differential equations: those governing the capacitor voltage (effort) and the circuit current (flow)!
3.4.2 RL CIRCUITS
We might as easily let the current that passes through the resistor be stored in the form of electrical
kinetic energy. The cast member that plays the character storing kinetic energy is the inductor.
Figure 3.6: An electrical effort (voltage difference) drives charge flow through an energy dissipator,
the resistor, only to have the amount that gets through be stored by a storage element or character, the
inductor, in the form of the flow variable.
We are used to representing this system by a circuit diagram as in Figure 3.7.
Figure 3.7: An electrical RL circuit as typically represented. Upon applying the battery (an external
voltage difference) across the circuit, charge will respond to the electromagnetic force and flow through
the circuit.
At this point, we should point out that the system is defined only by the element characters,
i.e., the characters that eat energy and those that store energy. The battery is an agent of the outside world. This agent imposes a voltage differential, i.e., an imposed effort, on the circuit that
causes current to flow. As in Chapter 2, let’s choose a node in the circuit between the resistor
and the storage device, the inductor. Since the only charge storage element is the inductor (and
this character is outside of the RVE), the current flowing into the node must exactly balance the
current flowing out.
This concept must now be rendered mathematically. The current that passed through the energy dissipator must equal that flowing into the kinetic energy storage element:

(VO(t) − V1(t)) / R = iR(t) = iL(t).

We now introduce the relation corresponding to the storage of kinetic energy by virtue of flow:

V1(t) − VREF = V1(t) = L·di(t)/dt.

Substituting for V1(t) using the dissipation element relation:

V1(t) − VREF = V1(t) = VO(t) − i(t)·R = L·di(t)/dt
(L/R)·di(t)/dt + i(t) = (1/R)·VO(t).
This differential equation governs the circuit current, i. It is important to note at this point that all the resistance, inductance, and interior system current again lie mathematically on one side of the equation while the driver from the outside world, the battery voltage, lies on the other side of the equation. The left side contains all system parameters and quantities while the right-hand side represents a forcing function that drives the flow.
Figure 3.8: The electrical cast of characters playing out a series RL circuit.
Of Special Note
This will be a constant theme in our development. The movie characters and their behavior live “on the left” while the circumstances presented to them by the outside world will lie “on the right.”

Here, the mathematical term on the right-hand side, (1/R)·VO(t), is a flow-like external signal input to the system supplied by the battery.
3.4.3 A GENERALIZED MATHEMATICAL FORM FOR THE SINGLE
STORAGE ELEMENT SCRIPT
If we observe the general nature of the governing differential equations for both the RC and RL
circuits, there is a distinct one-to-one correspondence of terms. Single storage element scripts
are characterized by 1st order ordinary differential equations in time. We further see that these
equations can be cast in a form wherein:
(a) Either the effort or flow variable appears isolated with a coefficient of unity and
(b) The coefficient of the effort or flow derivative term (RC or L/R) has units of time.

We generalize the governing ODE for any single storage element script or 1st order system as:
τ·dψ(t)/dt + ψ(t) = Ψ_O(t) = G × I(t)

where τ is the system time constant and ψ is either an effort or flow variable in the system. Here, we have already seen cases where τ = RC and τ = L/R. In general, the time constant will be some function of the system element parameters, τ = f(L, C, R). The forcing function will be some normalized form of the actual physical input excitation that renders an equivalent effort or flow driving function. The generalized forcing function is often represented as the actual physically imposed agent of excitation scaled by a factor called the static gain, G, where Ψ_O(t) = G × I(t).

For the series RC circuit, we have the relations summarized in Table 3.3.
Table 3.3: Parts of 1st order governing differential equations for a series RC circuit

Response Variable      Capacitor Voltage, V1(t)      Circuit Current, i(t)
System Parameter       τ = RC                        τ = RC
External Excitation    Ψ_O(t) = VO(t)                Ψ_O(t) = C·dVO(t)/dt
                       G = 1; I(t) = VO(t)           G = C·D (1); I(t) = VO(t)

(1) Here, we use the differential operator where, by example, for an arbitrary variable, p: Dp ≡ ṗ ≡ dp/dt.

While for the series RL circuit, we arrive at the results summarized in Table 3.4.

Table 3.4: Parts of 1st order governing differential equations for a series RL circuit

Response Variable      Inductor Voltage, V1(t)       Circuit Current, i(t)
System Parameter       τ = L/R                       τ = L/R
External Excitation    Ψ_O(t) = ?? (2)               Ψ_O(t) = VO(t)/R
                       G = ??; I(t) = ??             G = 1/R; I(t) = VO(t)

(2) These quantities will be asked of the reader in the Chapter Activities following this chapter.
One of the most powerful aspects of the analogical approach is that when systems behave
linearly, the solutions to any equation expressed in this generalized form are essentially equivalent,
i.e., ALL linear first order systems share inherent and important common characteristics in their
system response to input or excitation from “the outside world.” We will examine these common
characteristics in detail when we address time domain solutions in Chapter 5.
Of Special Note
Universal Truths for 1st Order Systems
(a) They are comprised of system elements (or characters) that store ONLY ONE form of energy, either potential or kinetic forms of energy, but not both.
(b) Their behavior is characterized by a single system parameter called the system time constant, τ, where
(c) τ = f(R, C) or τ = g(L, R)
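As a sketch of how the generalized form collapses both circuits into one script, the code below evaluates the standard first-order step response ψ(t) = G·I·(1 − e^(−t/τ)) (anticipating the solutions of Chapter 5) for a series RC case and a series RL case. The element values and the step size are invented; only τ and G differ between the two cases, exactly as Tables 3.3 and 3.4 suggest.

import math

def first_order_step_response(tau, G, I_step, t):
    # Standard step response of tau*dpsi/dt + psi = G*I(t), starting from rest (see Chapter 5).
    return G * I_step * (1.0 - math.exp(-t / tau))

R, C, L = 1.0e3, 1.0e-6, 0.1          # ohms, farads, henries (made-up values)
VO_step = 10.0                        # volts, applied at t = 0

cases = {
    "series RC, capacitor voltage V1": dict(tau=R * C, G=1.0,     I_step=VO_step),
    "series RL, circuit current i":    dict(tau=L / R, G=1.0 / R, I_step=VO_step),
}
for name, p in cases.items():
    psi = first_order_step_response(p["tau"], p["G"], p["I_step"], 3.0 * p["tau"])
    print(f"{name}: tau = {p['tau']:.2e} s, response at 3*tau = {psi:.4g}")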
3.5 MULTIPLE STORAGE ELEMENT SCRIPTS
The story changes when a system can store energy in more than one form. A more general circuit would be able to store electrical energy in both potential and kinetic forms as well as dissipate energy. The multiple storage element character script involves a capacitor, inductor, and resistor.
3.5.1 SERIES RLC CIRCUITS
Such a system is characterized by system capacitance, inductance, and electrical resistance. Con-
sider a circuit where these elements are connected in series.
Figure 3.9: A series electrical RLC circuit. Upon applying the battery to the circuit, current is driven in a clockwise sense around the circuit.
In this script, the battery hurls charge at the resistor which “eats” a portion, allowing some
residue of the charge through to the inductor and capacitor. Charge build-up across the capacitor
provides a voltage drop whose time rate of change corresponds to a time rate of charge across the
capacitor plates. What occurs physically is that charge accumulates on one side of the capacitor.
If the rate is sufficient to cause a rate of change of the voltage drop across the capacitor, charge
at the other plate changes over time. Mathematically, at least, this dictates a current or effective
movement of charge. This charge then “gets a boost from the battery” and starts the process all
over again.
Focus on the voltage drop across the capacitor as the relevant system variable whose re-
sponse we desire. Writing a current balance on node 2:
Q̇_IN − Q̇_OUT = Q̇_STORED
iL(t) = iC(t)
(V1(t) − V2(t)) / (LD) = C·d(V2(t) − VREF)/dt
LC·d²V2(t)/dt² + V2(t) = V1(t).
D
But we do not know the other system voltage, V1.t/. is is because there is now more than one
way to store energy! erefore, we must investigate a second current (or charge) balance at node 1.
QIN
P
(cid:0) P
QOUT
iR.t/
QSTORED
iL.t/:
D P
D
From the relation governing effort and flow through the resistor:

VO(t) − V1(t) = R·iR(t)

and

iR(t) = iL(t)

but

iL(t) = iC(t) = C·dV2(t)/dt

so

V1(t) = VO(t) − RC·dV2(t)/dt.
Substituting this into the relation obtained at the first node and rearranging terms:

LC·d²V2(t)/dt² + RC·dV2(t)/dt + V2(t) = VO(t).

Once again, all the system parameters (R, L, C) and a voltage internal to the system, V2(t), are all on one side of the equation while the excitation “force” or effort supplying charge to the system “from the outside world” appears on the other side of the equation.
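The second-order character of this equation can be seen by stepping it numerically. The sketch below integrates LC·d²V2/dt² + RC·dV2/dt + V2 = VO(t) for an assumed step in battery voltage using a simple semi-implicit Euler update; the component values are invented and deliberately underdamped so the exchange between the two storage characters shows up as overshoot and ringing.

import math

R, L, C = 10.0, 1.0e-3, 1.0e-6        # ohms, henries, farads (illustrative, underdamped)
VO = 5.0                               # volts, step applied at t = 0
wn = 1.0 / math.sqrt(L * C)            # natural frequency, rad/s
zeta = (R / 2.0) * math.sqrt(C / L)    # damping ratio (these formulas reappear in Table 3.5)

dt = 1.0e-7
V2, dV2, peak = 0.0, 0.0, 0.0          # start from rest
for _ in range(int(0.002 / dt)):       # simulate 2 ms
    ddV2 = (VO - V2 - R * C * dV2) / (L * C)   # solve the ODE for the highest derivative
    dV2 += ddV2 * dt                   # update the rate first, then the voltage
    V2 += dV2 * dt
    peak = max(peak, V2)

print(f"wn = {wn:.0f} rad/s, zeta = {zeta:.3f}")
print(f"capacitor voltage overshoots to {peak:.2f} V before settling toward {VO} V")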
Figure 3.10: A series RLC electrical circuit character equation.
3.5.2 PARALLEL RLC CIRCUITS
One may also investigate a branched loop over which the charge will “choose the path of least
resistance,” or, more properly, the path of least impedance. The impedance is nothing more than a dynamic resistance. Using the definition of resistance as the ratio of effort/flow:

For the inductor:

R_DYNAMIC^INDUCTOR = ΔV / i(t) = L·D·i(t) / i(t) = LD

For the capacitor:

R_DYNAMIC^CAPACITOR = ΔV(t) / i(t) = ΔV(t) / (C·D·ΔV(t)) = 1/(CD)

where, again, we are using the differential operator, D(•) ≡ d(•)/dt.
Let’s next consider a parallel RLC circuit in Figure 3.11.
3.5. MULTIPLE STORAGE ELEMENT SCRIPTS 35
Figure 3.11: A parallel electrical RLC circuit. Upon applying the battery to the circuit, current
is driven in a clockwise sense around the circuit, but must now “choose the path of least impedance”
at the branch point.
Performing a charge balance over the system at internal node 1:
Q̇_IN − Q̇_OUT = Q̇_STORED
(VO(t) − V1(t)) / R = iR(t) = iL(t) + iC(t) = V1(t)/(LD) + C·dV1(t)/dt.

Applying the operator LD to both sides of the equation:

LC·d²V1(t)/dt² + (L/R)·dV1(t)/dt + V1(t) = (L/R)·dVO(t)/dt.

All the system parameters (R, L, C) and a voltage internal to the system, V1(t), are all on one
side of the equation while the excitation “force” supplying charge to the system “from the outside
world” appears on the other side of the equation.
In this script, the battery hurls charge at the resistor (Evil Dr. Friction). Evil Dr. Friction
eats some charge allowing less out which then is stored in the system inductor (in the form of
electrical kinetic energy) and/or the capacitor (in the form of electrical potential energy). How
much is stored in each of these storage elements depends on their impedance or instantaneous
(dynamic) electrical resistance with more energy being stored in the path with least impedance.
When the storage characters dominate over friction, they will pass energy back and forth
with friction eating away at each transfer. In Figure 3.13, a system imparted with potential energy
Figure 3.12: A parallel electrical RLC circuit character equation.
(A) will pass it on to Captain Kinetic Energy. Dissipation is eating energy during this transfer
as evidenced by Evil Dr. Friction fighting Captain Kinetic Energy (B). Dissipation continues to
degrade the energy cache during each subsequent exchange back to Captain Potential Energy (C)
and back to Captain Kinetic Energy (D) until all the electrical energy has been consumed. In the
case where an input signal delivers energy continually to the system, eventually the amount stored
in potential and kinetic forms reaches a steady state while the energy losses continue to accrue
with time.
Figure 3.13: A second order system with dissipation results in energy being “consumed” within each
exchange from kinetic to potential and back to kinetic.
3.5.3 IDEALIZED LC CIRCUITS
Consider the first series RLC circuit. In the limit as the resistance vanishes, the differential equa-
tion for the capacitor voltage becomes:
LC·d²V2(t)/dt² + V2(t) = VO(t).
In this script, the battery provides a voltage or effort difference that drives charge at the inductor.
A charge difference causes a rate of change of current passing through the inductor. This process creates kinetic energy that is present in the system owing to the presence of the inductor. Because charge must be conserved, the rate of change of current in the inductor results in a charge difference across the capacitor that varies in time, i.e., changes in stored potential energy in the capacitor. Charge differences that change in time across the plates of the capacitor result in a flow of charge. This charge flows back to the battery for “a boost” from the “outside world.” The electrical energy is simply transferred from kinetic to potential and back with no dissipation ad infinitum. We will “see” this behavior in the structure of the mathematical solutions described in Chapter 5. This system is a simple frictionless harmonic oscillator where the harmonic response occurs in the system voltage or effort variable as well as the current or flow variable. This is analo-
gous to motion in a simple, frictionless pendulum where a similar simple harmonic motion results
for the angular velocity and position (flow variables). We can also show that harmonic variation
also occurs in the component of the gravitational force that produces the internal torque driving
the system back again.
Figure 3.14: A second order system without dissipation results in energy simply being transferred
between potential and kinetic forms, but otherwise being conserved in total. The simple pendulum is
an analog to the LC circuit in the absence of any electrical resistance.
3.5.4 A GENERALIZED MATHEMATICAL FORM FOR THE DUAL
STORAGE ELEMENT SCRIPT
If we examine the governing ordinary differential equations for the system voltages in Sec-
tions 3.5.1, 3.5.2, and 3.5.3, we see that dual storage element scripts are always characterized
by 2nd order ordinary differential equations in time. We can further see that the resulting gov-
erning differential equations can be cast in a form where:
(a) the effort or flow variable appears isolated with a coefficient of unity on “the system side”
of the ODE,
(b) the coefficient of the effort or flow derivative term (RC or L/R) has units of time, and
(c) the coefficient of the effort or flow second derivative term (LC) has units of [T]².
One can then generalize the governing ODE for any dual storage element script or 2nd order system as:

(1/ω_N²)·d²ψ(t)/dt² + (2ζ/ω_N)·dψ(t)/dt + ψ(t) = Ψ_O(t) = G × I(t),
where ω_N is the system natural frequency, ζ is the dimensionless system damping ratio, and ψ is either an effort or flow variable in the system. Here, we have already seen a similar situation when the equation is 1st order. In that case, we saw τ = RC or τ = L/R. For 2nd order systems, there are two system parameters: the natural frequency and damping ratio will be functions of the system element parameters, {ω_N, ζ} = f(L, C, R). The forcing function will be some normalized form of the actual physical input excitation that renders an equivalent effort or flow driving function. The generalized forcing function is often represented as the actual physically imposed agent of excitation scaled by a factor called the static gain, G, where Ψ_O(t) = G × I(t).

For the second order RLC circuits, the results obtained are summarized in Table 3.5.
Table 3.5: Parts of 2nd order governing differential equations for series and parallel RLC circuits

RLC Circuits          Series Circuit                        Parallel Circuit
Response Variable     Capacitor Voltage, V2(t)              Capacitor/Inductor Voltage, V1(t)
System Parameters     ω_N = 1/√(LC); ζ = (R/2)·√(C/L)       ω_N = 1/√(LC); ζ = (1/(2R))·√(L/C)
External Excitation   Ψ_O(t) = VO(t)                        Ψ_O(t) = (L/R)·D·VO(t)
                      G = 1; I(t) = VO(t)                   G = LD/R; I(t) = VO(t)

Of Special Note
Universal Truths for 2nd Order Systems
(a) They are comprised of system elements (or characters) that store BOTH potential AND kinetic forms of energy.
(b) Their behavior is characterized by a pair of system parameters, {ω_N, ζ}, where
(c) ω_N = f(L, C) and ζ = g(L, C, R)
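A small sketch makes the table entries concrete: for one set of invented element values it computes the natural frequency and damping ratio of the series and parallel arrangements, showing that the same R, L, and C give very different damping in the two scripts.

import math

R, L, C = 100.0, 1.0e-3, 1.0e-6    # ohms, henries, farads (illustrative values)

wn = 1.0 / math.sqrt(L * C)                            # identical for both circuits
zeta_series   = (R / 2.0) * math.sqrt(C / L)           # series RLC (Table 3.5)
zeta_parallel = (1.0 / (2.0 * R)) * math.sqrt(L / C)   # parallel RLC (Table 3.5)

print(f"natural frequency wn = {wn:.0f} rad/s for both circuits")
print(f"series damping ratio   = {zeta_series:.3f}")
print(f"parallel damping ratio = {zeta_parallel:.3f}")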
One of the most powerful aspects of the analogical approach is that when systems behave
linearly, the solutions to any equation expressed in this generalized form are essentially equivalent,
i.e., ALL linear second order systems share inherent and important common characteristics in
their system response to input or excitation from “the outside world.” We will examine these
common characteristics in detail when we address time domain solutions in Chapter 5.
3.6 CHAPTER ACTIVITIES
Problem 1 Perform a charge balance over an appropriate circuit node in the series RC circuit in
Figure 2.2 to derive a governing differential equation for the circuit’s current (flow variable)
instead of the circuit voltage drop across the capacitor (effort variable).
Problem 2 Perform a charge balance over an appropriate circuit node in the RL circuit in Fig-
ure 3.7 to derive a governing differential equation for the circuit’s voltage drop (effort vari-
able) across the inductor instead of the circuit current (flow variable). Fill in the missing
normalized excitation signal input in Table 3.4.
Problem 3 Recast the nodal balance over the representative circuit nodes in the series RLC circuit
to derive a governing differential equation for the circuit’s current (flow variable) instead of
the system voltage drop across the capacitor (effort variable).
Problem 4 Recast the nodal balance over the representative circuit nodes in the parallel RLC
circuit in to derive a governing differential equation for the circuit’s current (flow variable)
instead of the system voltage drop across the inductor/capacitor pair (effort variable).
Problem 5 Consider the series circuit for which the capacitor and resistor are swapped resulting
in a series CR circuit shown here:
Perform a charge balance at an appropriate node and derive the differential equation gov-
erning the voltage drop across the resistor.
Problem 6 An actual inductor will often contain non-negligible resistance because it is a long coiled piece of wire. For this more realistic version of the parallel RLC circuit shown here:
perform a charge balance at an appropriate node and re-derive the governing differential
equation for the voltage drop across the capacitor.
Problem 7 Consider the circuit shown with parallel system capacitors. At t = 0, a step voltage, V0, is applied to the circuit by connecting it suddenly across a battery:

e0(0) = 12 V
iR(t = 0) = 40 milliamps.
(a) On the circuit diagram label the relevant nodes and apply the necessary conservation
principles to derive the differential equation governing the response of the voltage drop
across the pair of capacitors in the circuit.
(b) What order is the equation? Use the potential energy storage system element equation
to find the relevant initial condition or initial conditions for the system effort variable.
(c) Compare the governing equation with that from the simple series RC circuit in Fig-
ure 2.2. What conclusion can you draw about the effective capacitance of a pair of
capacitors in parallel?
CHAPTER 4
The Mechanical Cast
A body perseveres in its state of being at rest or of moving uniformly
straight forward, except insofar as it is compelled to change its state by
forces impressed.
Sir Isaac Newton
This transfer of knowledge from one branch of science, electrical network
theory, to another branch of science dealing with mechanical structures
is one of a long line of such interchanges (that are) made possible by
fundamental analogies which rest finally on the fact that electrical and
mechanical motions satisfy the same type of differential equations. Since
these interchanges have been going on for hundreds of years, it seems
worthwhile to examine their foundation and development.
W.P. Mason
“Electrical and Mechanical Analogies”
Bell System Technical Journal
In Chapter 1, we examined the concepts of effort and flow which continue to guide and
build our analogy between different disciplines. Recall that we posited that force acts as an effort to
cause motion. Thereby, the flow variable can be represented by either the displacement or velocity
variable depending on whether one wishes to choose the motion variable or its rate of change in
time as the pertinent flow variable. With these choices made, rectilinear forces that act on a mass
cause changes to the directional momentum of the mass. As per Newton, the net force acting on a mass equals the net time rate of change of its momentum. In the absence of a net force, the linear momentum
of a mass or particle is conserved.
4.1 EFFORT AND FLOW VARIABLES
If you push inertia, it will flow. The flow of mass is, by our definition, velocity. How you push a mass is by creating a force differential or a net force across the mass. This is clearly evidenced in a free body diagram. According to Newton’s second law of motion, the net force applied to an inertial element results in a time rate of change in its linear momentum. This is a statement of the
balance of linear momentum, as summarized in Table 4.1.
Table 4.1: Effort, flow, and conserved quantities for translational mechanical systems

Conserved Quantity   Units    Symbol
Linear momentum      kg-m/s   p

Variable   Quantity   Units        Symbol
Effort     Force      N ; lb       F
Flow       Velocity   m/s ; ft/s   υ
4.2 STORAGE ELEMENTS
It is now time to identify the mechanical cast that will play the roles of energy storage and dis-
sipation in mechanical systems. Typically, the motion can be separated into translational and
rotational components. These can be analyzed separately in linear systems.
4.2.1 POTENTIAL ENERGY STORAGE CHARACTER
The mechanical cast member who plays the role of Captain Potential Energy is that device that stores a force internally that may, at some later time, be released to perform useful mechanical work on the system. This potential energy storage character is played by the simple spring.
Figure 4.1: The mechanical potential energy storage character is played by the spring. It embodies
the mechanical capacitance of the system.
The governing mathematical expression of the storage by virtue of effort is
FLOW = C_MECH·d(EFFORT)/dt
υ = C_MECH·d(F_NET)/dt.

Integrating both sides over time results in an expression for the mechanical analog to an electrical system’s capacitance:

F_NET = (1/C_MECH)·∫ υ(t) dt = kx
C_MECH ≡ 1/k.
4.2.2 KINETIC ENERGY STORAGE CHARACTER
Using the rate form, we address the flow rate of position or velocity. The mechanical cast member who plays the role of Captain Kinetic Energy is that device that stores energy by virtue of its flow or velocity. The mechanical actor who stores kinetic energy by virtue of velocity is the system’s
inertia.
Figure 4.2: The mechanical kinetic energy storage character is played by the system’s inertia. Inertia
is embodied in a system’s mass.
The governing mathematical expression of the storage by virtue of flow is

EFFORT = L·d(FLOW)/dt
F_NET = L_MECH·dυ/dt = ma.
Since this relation gives us Newton’s second law of motion, one concludes that the mechanical analog to an electrical system’s inductance is the inertia or mass of the mechanical system:

L_MECH ≡ m.

4.3 DISSIPATIVE ELEMENTS
The role of the Evil Dr. Friction in a mechanical script is played by the physical presence of
friction. Friction, in a sense, eats energy, reducing the amount available to produce motion.
Figure 4.3: The mechanical energy character that dissipates energy is played by any form of friction.
Here the friction acts physically along the surface of some inertia with the floor on which it is sliding.
Father Force performs work on the mass which it can store as kinetic energy of motion thwarted by
the Evil Dr. Friction who eats a portion of the input work done by Father Force.
The governing mathematical expression of the dissipation is always algebraic rather than differential. Consider the source of the friction to be viscous friction as, say, would result from a thin layer of viscous oil between the box and the floor. Alternatively, the same force would result from a mechanical damper in which the same shear force is developed in a cylindrical dashpot. The viscous force resisting the relative motion of the ends of the dashpot is proportional to the relative velocity:

F_NET = b·υ.
Figure 4.4: The friction force is modeled by the net force across a mechanical dashpot; this viscous
force is proportional to the relative velocity. For many applications, a viscous representation of friction
may suffice.
The resistive effort-flow relation is also algebraic:

EFFORT = R × FLOW
F_NET = R_MECH·υ.

This relation dictates that the mechanical analog to the electrical system’s resistance is the viscous friction or damping coefficient, b:

R_MECH ≡ b.
Alternatively, the friction could result from other physical sources such as dry friction, often
termed Coulomb friction. Many systems, however, have friction forces that may be described
as viscous-like in nature, enough so that the algebraic relation between the dissipation and flow
holds. A summary of the mechanical cast and the roles they play is given in Figure 4.5. A list of
corresponding system element equations is given in Table 4.2.
4.4 SINGLE STORAGE ELEMENT SCRIPTS

4.4.1 SPRING-DAMPER SYSTEMS
An idealized case often studied is that of the mass-less spring-damper system. This represents
the bound on behavior of a system with negligible inertia that is dominated by its elasticity and
friction. In the case of the spring-damper system, mechanical energy is lost through the damper
while the residue is stored by virtue of a net force inside the spring.
Figure 4.5: The mechanical cast of characters.

Table 4.2: Relevant system element relations for translational mechanical systems

Field        Effort Variable   Flow Variable
Mechanical   Force             Velocity

Relation                            Form                                                 Analogy
Dissipative Material Property Law   Effort = Resistance × Flow;  F = b·(ẋ2 − ẋ1)         Resistance = Friction; Damping coefficient, b
Energy Storage in Effort Variable   Flow = Capacitance × d(Effort)/dt;  υ = (1/k)·dF/dt  Capacitance = 1/k = 1/stiffness
Energy Storage in Flow Variable     Effort = Inductance × d(Flow)/dt;  F = m·dυ/dt       Inductance = Mass/Inertia, m
Figure 4.6: An idealized mass-less spring-damper system under the influence of an externally applied
force, F .t/.
In order to balance linear momentum of the inertia-less plate, we perform a force balance
on a representative piece of the system, i.e., the plate. For mechanical systems, this part of the
system is that on which all forces act, the mass. The result is a free body diagram (FBD).
Figure 4.7: Free body diagram (FBD) for a mass-less spring-damper system.
Of Special Note
Free body diagrams are the representative volume elements (RVE) for all me-
chanical systems.
Summing all forces and assuming the mass is constant:
Q̇_IN − Q̇_OUT = Q̇_STORED
F_O(t) − k·x − b·ẋ = dp/dt = m·dυ/dt = 0.
Rearranging terms results in the differential equation governing position of the plate:

b·ẋ + k·x = F_O(t)
(b/k)·ẋ + x = (1/k)·F_O(t).

This differential equation is linear and first order. Appealing to our analogy with electrical systems:

(b/k)·ẋ + x = (1/k)·F_O(t)
R_MECH·C_MECH·ẋ + x = C_MECH·F_O(t)

where

R_MECH = b
C_MECH = 1/k.

A similar system character equation results analogous to the electrical RC circuit in Figure 4.8.
Figure 4.8: The mass-less spring-damper mechanical system is a purely mechanical analog to the series RC circuit as evidenced by the character equation.
Unlike electrical systems, mechanical systems’ differential equations are not typically
“mathematically versatile” in that they will almost exclusively appear with the flow variable as
the dependent variable. The equation governing the plate displacement could be re-cast in terms
of the reaction force necessary to cause a given displacement, but this is often relegated to post-
processing the displacement solution, i.e., one typically does NOT see differential equations for
the force stored in or transmitted by the spring where force is solved as the primary variable. In
most, if not all, mechanical systems, the primary solution variable is the flow variable.
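Because the spring-damper script maps directly onto the series RC script, a tiny sketch can make the correspondence explicit. The stiffness, damping, and force values are invented for illustration; the mapping R_MECH = b, C_MECH = 1/k and the time constant τ = b/k follow the equations above.

# Mechanical analog of the series RC circuit for the mass-less spring-damper system.
k = 200.0            # spring stiffness, N/m (illustrative)
b = 50.0             # viscous damping coefficient, N-s/m (illustrative)
F0 = 10.0            # constant applied force, N (illustrative)

R_mech = b           # mechanical "resistance"
C_mech = 1.0 / k     # mechanical "capacitance"
tau = R_mech * C_mech        # system time constant, b/k, seconds
x_steady = C_mech * F0       # steady-state displacement, F0/k

print(f"tau = {tau:.3f} s, steady-state displacement = {x_steady:.3f} m")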
4.4.2 MASS-DAMPER SYSTEMS
What if we wanted to examine a system that stored its energy solely in kinetic form? The alternative two-character script with a single energy storage character would involve Captain Kinetic Energy battling the Evil Dr. Friction! Consider a parachutist diving out of an airplane and suddenly pulling their chute cord. They’re subject to a step input force from gravity. Father Force is instantaneously pulling them toward the Earth, as illustrated in Figure 4.9. The Evil Dr. Friction
is also pushing back the parachute with a drag force due to the air in the parachute. In this case,
one may argue that friction is not so evil as it fights gravity. But if we view motion of the mass as
giving the system kinetic energy, then friction continues to eat that energy away from the diver.
In this case, friction happens to be our friend (if we desire a safe landing), but it is still the enemy
of speed. Aerodynamic drag forces are always functions of the skydiver’s downward velocity. For
simplicity, let’s say the drag force is linearly proportional to the velocity. We begin modeling this
system with a free body diagram shown in Figure 4.9.
Figure 4.9: Upon diving from an airplane, a skydiver experiences a sudden step input force exerted by
gravity. The parachute provides a velocity-dependent drag force opposing the gravitational force. The net force results in the diver’s acceleration.
At this point, we should point out that the system is defined only by the element characters.
Father Force is Planet Earth, providing a driving effort that is an energy supply to the system from
the outside world. Normally this energy would turn entirely into kinetic energy of the skydiver
with potentially fatal results. But the Evil Dr. Friction consumes part of the energy. The rest is
stored by way of velocity in the mass of the diver by Captain Kinetic Energy.
Each system element character exhibits its own characteristic effort-flow equation. So by
balancing linear momentum:
Q̇_IN − Q̇_OUT = Q̇_STORED
mg − b·ẋ(t) = dp(t)/dt = m·dυ(t)/dt = m·ẍ(t).

Recognizing that this differential equation is actually first order in velocity:

(m/b)·ẍ(t) + ẋ(t) = (1/b)·mg = υ_TERMINAL
(m/b)·υ̇(t) + υ(t) = (1/b)·mg = υ_TERMINAL.
This differential equation governs the system flow variable or velocity of the mass, υ. The left side contains all system parameters and variables while the right-hand side represents a scaled forcing function that drives the flow to its steady-state value, the so-called terminal velocity! So we don’t need to solve the equation to see where it’s heading. The equation can be written in an effectively identical form to that governing the electrical RL circuit in Figure 3.8.
(m/b)·dυ(t)/dt + υ(t) = (1/b)·F_O(t).
This equation, in fact, takes on a form identical to the RL circuit when the analogous mechanical parameters are introduced:

(L_MECH / R_MECH)·d(FLOW)/dt + FLOW = (1/R_MECH)·F_O(t) = EXTERNAL EFFORT / R_MECH = FLOW_SS.
Figure 4.10: The skydiving mass-damper mechanical system is a purely mechanical analog to the series RL circuit as evidenced by the character equation.
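The skydiver equation can be checked in a few lines. The mass and drag coefficient below are invented for illustration (a real parachute’s drag is generally not linear in velocity), but the simulation shows the flow variable settling to the terminal value mg/b with time constant m/b, just as the character equation predicts.

# First-order mass-damper (skydiver) model: (m/b)*dv/dt + v = mg/b.
m, b, g = 80.0, 160.0, 9.81      # kg, N-s/m, m/s^2 (illustrative values)
v_terminal = m * g / b           # steady-state (terminal) velocity
tau = m / b                      # time constant, s

dt, t, v = 0.01, 0.0, 0.0        # start from rest (illustrative initial condition)
while t < 5 * tau:               # five time constants is essentially steady state
    dv_dt = g - (b / m) * v      # from m*dv/dt = mg - b*v
    v += dv_dt * dt
    t += dt

print(f"terminal velocity mg/b = {v_terminal:.2f} m/s, tau = {tau:.2f} s")
print(f"simulated velocity after 5*tau = {v:.2f} m/s")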
4.4.3 A GENERALIZED MATHEMATICAL FORM FOR THE SINGLE
STORAGE ELEMENT SCRIPT
If we observe the governing differential equations for both the spring-damper and mass-damper
mechanical systems, we see that single storage element scripts are characterized by the same 1st
order ordinary differential equations in time as 1st order electrical systems:
τ·dψ(t)/dt + ψ(t) = Ψ_O(t) = G × I(t)
where τ is the system time constant and ψ is either an effort or flow variable in the system. The forcing function will be some normalized form of the actual physical input excitation that renders an equivalent effort or flow driving function.
For the parallel spring-damper system, the first order time constant and signal excitation
are summarized in Table 4.3.
Table 4.3: Parts of 1st order governing differential equations for a parallel k–b system

Response Variable     Platform Position, x(t)
System Parameter      τ = R_MECH·C_MECH = b/k
External Excitation   Ψ_O(t) = F_O(t)/k
                      G = 1/k; I(t) = F_O(t)

While for the mass-damper system, an analogous set of relations is summarized in Table 4.4.

Table 4.4: Parts of 1st order governing differential equations for a mass-damper system

Response Variable     Platform Position, x(t)
System Parameter      τ = L_MECH/R_MECH = m/b
External Excitation   Ψ_O(t) = F_O(t)/b = mg/b
                      G = 1/b; I(t) = F_O(t) = mg
One of the most powerful aspects of the analogical approach is that when systems behave
linearly, the solutions to any equation in this generalized form are essentially equivalent, i.e.,
ALL linear first order systems share inherent characteristics in their system response to input or
excitation from “the outside world.”
4.5 MULTIPLE STORAGE ELEMENT SCRIPTS
4.5.1 THE CLASSICAL MASS-SPRING-DAMPER SYSTEM
Introducing non-negligible inertia to the platform in Section 4.4.1, the two-storage-element-
character script now has the ability to store kinetic as well as potential energy as depicted in
Figure 4.11.
Figure 4.11: A parallel m–k–b mechanical system. Upon applying the external force to the inertial
element, flow or motion of the mass is driven.
Writing a linear momentum balance on the mass:
Q̇_IN − Q̇_OUT = Q̇_STORED
F_O(t) − k·x(t) − b·ẋ(t) = dp(t)/dt = m·dυ(t)/dt = m·ẍ(t).
Rearranging terms and normalizing the equation:

m·ẍ(t) + b·ẋ(t) + k·x(t) = F_O(t).

After scaling the entire equation by the stiffness to normalize the flow variable term:

(m/k)·ẍ(t) + (b/k)·ẋ(t) + x(t) = (1/k)·F_O(t).
Using the mechanical analogs for the electrical system element parameters:

L_MECH·C_MECH·ẍ(t) + R_MECH·C_MECH·ẋ(t) + x(t) = (1/k)·F_O(t)

where

L_MECH = m
C_MECH = 1/k
R_MECH = b.
In this script, the external excitation provided by Father Force translates into a change of mo-
mentum of the inertia in the system. The resistance acting against the mass in the form of the
Evil Dr. Friction eats some work performed on the block with the residual work being stored as
potential energy that stretches the spring and kinetic energy stored by virtue of the velocity of the
mass.
Figure 4.12: A classical mass-spring-damper system is equivalent to the series RLC circuit when
placed in the form of a character equation.
4.5.2
IDEALIZED MASS-SPRING SYSTEMS
An idealized case can be illustrated when the resistance becomes “infinitesimally small” and ALL
of the energy input by the driving force is stored as potential energy in the spring and kinetic
energy in the mass. This is the idealized case of a system without losses. In this case:
$$\frac{m}{k}\,\ddot{x}(t) + x(t) = \frac{1}{k}\,F_O(t)$$
$$L_{MECH}C_{MECH}\,\ddot{x}(t) + x(t) = C_{MECH}\,F_O(t).$$
In this script, the external force drives a momentum change in the mass which is slowed down by
the spring opposing its motion. As the kinetic energy imparted to the mass by the force is reduced,
an equivalent amount of potential energy is stored in the spring. The mechanical energy is simply
transferred from kinetic to potential and back with no dissipation ad infinitum. In this sense,
Captain Potential Energy and Captain Kinetic Energy "have a catch" with a ball of energy while
the Evil Dr. Friction gets none. This is the translational mechanical analog to the equivalent LC
electrical circuit. Recall, we made an appeal to our intuition about a simple frictionless pendulum
system at the end of Chapter 3. The frictionless pendulum is a rotational mechanical analog
of the simple mass-spring harmonic oscillator discussed here and the idealized LC circuit of
Section 3.5.3.
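As a quick numerical illustration of this lossless exchange (a sketch with assumed values, not an example from the text), one can integrate m ẍ + kx = 0 and confirm that the sum of kinetic and potential energy stays constant while the two forms trade back and forth.

```python
# Lossless mass-spring "catch": integrate m*x'' + k*x = 0 with a symplectic
# (semi-implicit) Euler step and check that KE + PE is conserved.
import math

m, k = 2.0, 8.0                    # assumed values; omega_N = sqrt(k/m) = 2 rad/s
x, v = 0.5, 0.0                    # released from an initial stretch, at rest
dt = 1e-4
E0 = 0.5 * k * x**2                # all energy starts as spring potential energy

for _ in range(int(2 * math.pi / math.sqrt(k / m) / dt)):   # one full period
    v += dt * (-k * x) / m         # update the flow (velocity) from the spring effort
    x += dt * v                    # update the position from the new velocity

ke, pe = 0.5 * m * v**2, 0.5 * k * x**2
print(round(ke + pe, 6), "vs initial", E0)   # equal to integrator accuracy
```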
It is now a natural excursion to relate the effort-flow story for such rotational mechanical
systems in Section 4.6.
Figure 4.13: The simple frictionless pendulum is an analog system to the LC circuit in the absence
of any electrical resistance.
4.5.3 A GENERALIZED MATHEMATICAL FORM FOR THE DUAL
STORAGE ELEMENT SCRIPT
All mechanical scripts in which two distinct energy storage characters appear are always char-
acterized by the same 2nd order ordinary differential equations in time as electrical 2nd order
systems:
$$\frac{1}{\omega_N^2}\,\frac{d^2\Lambda(t)}{dt^2} + \frac{2\zeta}{\omega_N}\,\frac{d\Lambda(t)}{dt} + \Lambda(t) = \Psi_O(t) = G \ast I(t)$$

where ω_N is the system natural frequency, ζ is the dimensionless system damping ratio, and
{ω_N, ζ} = f(L, C, R). The dependent variable, Λ, is either an effort or flow variable in the system.
The forcing function will be some normalized form of the actual physical input excitation that
renders an equivalent effort or flow driving function.
Table 4.5 summarizes the results for the second order mechanical systems discussed so far.
Table 4.5: Parts of a 2nd order governing differential equation for a classical m–k–b system

    Response variable:    position, x(t)
    System parameters:    ω_N = √(k/m),  ζ = b / (2√(km))
    External excitation:  Ψ_O(t) = F_O(t)/k,  with G = 1/k and I_O(t) = F_O(t)
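The following sketch (with assumed element values) simply evaluates the Table 4.5 entries both ways, from the mechanical parameters (m, k, b) and from their electrical analogs L_MECH = m, C_MECH = 1/k, R_MECH = b, to confirm the two routes give the same ω_N and ζ.

```python
# A quick check (assumed numbers) that the mechanical and electrical-analog
# forms of the system parameters agree: L_MECH = m, C_MECH = 1/k, R_MECH = b.
import math

m, k, b = 3.0, 27.0, 4.0                    # illustrative values only

omega_N_mech = math.sqrt(k / m)             # natural frequency from (m, k)
zeta_mech = b / (2.0 * math.sqrt(k * m))    # damping ratio from (m, k, b)

L, C, R = m, 1.0 / k, b                     # analogous electrical parameters
omega_N_elec = 1.0 / math.sqrt(L * C)       # 1/sqrt(LC)
zeta_elec = (R / 2.0) * math.sqrt(C / L)    # (R/2)*sqrt(C/L), series-RLC form

print(omega_N_mech, omega_N_elec)           # identical
print(zeta_mech, zeta_elec)                 # identical
```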
4.6 ROTATIONAL MECHANICAL SYSTEMS
One example is torque, moment of inertia, angular momentum, vs force,
mass and momentum. The possible undistinguishability of translation
and rotation would seem to indicate that they are really two guises for
the same set of phenomena.
The Physics Stack Exchange
4.6.1 EFFORT AND FLOW VARIABLES
If you push a mass with a rectilinear or translational force, translational velocity will evolve over
time as the mass accelerates. You push a mass by creating a difference in rectilinear force across
the mass, i.e., applying a net force. It is precisely analogous to note that if you twist a rotational
inertia, such as a massive disk, for example, it will develop an angular velocity. To do this, you
need to apply a net rotational force or a net torque. All governing equations are based on writing
mathematical statements of the conservation of angular momentum as summarized in Table 4.6.
Table 4.6: Effort, flow, and conserved quantities for rotational mechanical systems

    Conserved quantity:  angular momentum, L  (kg·m²/s)
    Effort variable:     torque, T  (N·m; ft·lb)
    Flow variable:       angular velocity, ω  (rad/s)
So, for rotational mechanical systems, one still draws an appropriately labeled free body
diagram, only now one must sum the external torques and relate this to a net change in angular
momentum according to Newton's laws. This is done in a manner strictly analogous with trans-
lational mechanical systems.
4.6.2
STORAGE ELEMENTS
The energy storage occurs through the same actors: springs for potential energy and masses for
kinetic energy, but these must now be an angular or torsional spring, (cid:20), and a measure of inertial
resistance to angular motion or a mass moment of inertia, J .
Potential Energy Storage Character
The torsional potential energy storage devices store energy in the form of the effort variable or
torque. The mechanical cast member who plays the role of Captain Potential Energy is that device
that stores a torque internally that may, at some later time, be released to perform a useful rotational
form of mechanical work on the system. This potential energy storage character is played by the
torsional spring.
Figure 4.14: The rotational mechanical potential energy storage character is played by the torsional
spring. It embodies the rotational mechanical capacitance of the system.
The mathematical expression of the storage by virtue of effort is

$$\mathrm{FLOW} = C_{ROT\_MECH}\,\frac{d(\mathrm{EFFORT})}{dt} \quad\Longrightarrow\quad \omega(t) = C_{ROT\_MECH}\,\frac{dT_{NET}(t)}{dt}.$$
Integrating both sides results in an expression for a rotational mechanical analog to electrical
capacitance:

$$\frac{1}{C_{ROT\_MECH}}\int \omega(t)\,dt = T_{NET}(t) = \kappa\,\theta(t) \quad\Longrightarrow\quad C_{ROT\_MECH} \equiv \frac{1}{\kappa}.$$
Kinetic Energy Storage Character
The mechanical cast member who plays the role of Captain Kinetic Energy is that device that
stores energy internally by virtue of its rotational speed. This kinetic energy storage character is
played by the rotational or mass moment of inertia.
You may recall from your undergraduate dynamics course that the rotational form of New-
ton’s Second Law states that a net torque applied to a system is equal to the time rate of change
Figure 4.15: The rotational mechanical kinetic energy storage character is played by the system's rotary
or mass moment of inertia. Inertia is embodied in a system's mass weighted by the square of its distance
from the axis of rotation.
of the system's angular momentum, H. The mathematical expression of the storage by virtue of
flow is

$$\mathrm{EFFORT} = L_{ROT\_MECH}\,\frac{d(\mathrm{FLOW})}{dt} \quad\Longrightarrow\quad T_{NET}(t) = \frac{dH}{dt} = \frac{d\big(J\,\omega(t)\big)}{dt} = J\,\frac{d\omega(t)}{dt} = J\,\alpha(t) = J\,\ddot{\theta}(t).$$
Again, applying the effort-flow analogy, one observes that the mass moment of inertia is the
mechanical analog of an electrical inductance
$$L_{ROT\_MECH} \equiv J \equiv \int r^2\,dm.$$
4.6.3 DISSIPATIVE ELEMENTS
The role of the Evil Dr. Friction in our rotational mechanical script is played by any physical
presence of friction about the axis of rotation. Let’s consider the source of the friction to be
viscous friction as would result from a thin layer of viscous oil between two rotational elements
as in a sleeve bearing.
Figure 4.16: The friction force is modeled by the net torque across a mechanical cylindrical dash-
pot; this viscous force is proportional to the relative angular velocity. For many applications, a viscous
representation of friction may suffice.
For which

$$\mathrm{EFFORT} = R_{ROT\_MECH} \ast \mathrm{FLOW} \quad\Longrightarrow\quad T_{NET} = R_{ROT\_MECH}\,\omega(t) = \beta\,\omega(t), \qquad R_{ROT\_MECH} \equiv \beta$$
where β is a torsional damping coefficient relating the torque necessary to sustain an angular
velocity differential across a rotational frictional element. A summary of the rotational mechan-
ical cast and the roles they play is given in Figure 4.17. A list of corresponding system element
equations is given in Table 4.7.
4.6.4 THE SIMPLE PENDULUM
Consider the swinging pendulum shown in Figure 4.18. If we perform an angular momentum
balance about the pivot point:
$$\dot{Q}_{IN} - \dot{Q}_{OUT} = \dot{Q}_{STORED}$$
$$T_O(t) - mgL\,\sin\theta(t) - \beta\,\dot{\theta}(t) = \frac{dH(t)}{dt} = J\,\frac{d\omega(t)}{dt} = J\,\alpha(t) = J\,\ddot{\theta}(t).$$
Assuming small angles of rotation linearizes the system:

$$\sin\theta(t) \approx \theta(t) \quad\Longrightarrow\quad J\,\ddot{\theta}(t) + \beta\,\dot{\theta}(t) + mgL\,\theta(t) = T_O(t).$$
Figure 4.17: The rotational mechanical cast of characters.
Table 4.7: Relevant system element relations for rotational mechanical systems

    Field: rotational mechanical;  effort variable: torque, T;  flow variable: angular velocity, ω
    Dissipative (material property law):   Effort = Resistance × Flow;   T = β(ω₂ − ω₁);   Resistance = friction (damping coefficient, β)
    Energy storage in effort variable:     Flow = Capacitance × d(Effort)/dt;   ω = (1/κ) dT/dt;   Capacitance = 1/κ = 1/stiffness
    Energy storage in flow variable:       Effort = Inductance × d(Flow)/dt;   T = J dω/dt;   Inductance = rotary mass/inertia, J
Rearranging terms and normalizing by the effective torsional stiffness:

$$\frac{J}{\kappa_{EFF}}\,\ddot{\theta} + \frac{\beta}{\kappa_{EFF}}\,\dot{\theta} + \theta = \frac{1}{\kappa_{EFF}}\,T_O(t), \qquad \kappa_{EFF} = mgL.$$
All the system parameters (J, β, κ_EFF) and the flow variable internal to the system, θ, are on one
side of the equation while the excitation effort, now an applied torque, appears on the other side.
If we scale the entire equation by the torsional stiffness to normalize the flow variable term:
$$L_{ROT\_MECH}C_{ROT\_MECH}\,\ddot{\theta}(t) + R_{ROT\_MECH}C_{ROT\_MECH}\,\dot{\theta}(t) + \theta(t) = \frac{1}{\kappa_{EFF}}\,T_O(t).$$
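Comparing the linearized pendulum equation with the generalized second-order form of Section 4.5.3 identifies the pendulum's system parameters. The following is a sketch of that algebra, not a step carried out in the text at this point:

$$\omega_N = \sqrt{\frac{\kappa_{EFF}}{J}} = \sqrt{\frac{mgL}{J}}, \qquad \zeta = \frac{\beta}{2\sqrt{J\,\kappa_{EFF}}} = \frac{\beta}{2\sqrt{J\,mgL}}.$$

For a point mass hung from a cable of length L (so that J = mL²), the natural frequency reduces to the familiar ω_N = √(g/L).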
Note that while you might not see a torsional spring here, there is one! By virtue of hanging from
a cable of length, L, in a gravitational field, the mass may store maximum potential energy at the
ends of each swing where the height of the mass provides potential energy due to the work of the
gravitational field. Gravity is our spring! Possible sources of damping are provided by air resistance
during the swing and friction at the pivot. Rotational inertia is provided by the mass being lumped
a finite distance from the pivot, the center of rotation. This is illustrated in Figure 4.18. Here,
Father Force provides the effort or torque to drive the swinging angular motion. At the ends of
the swing, all energy is potential in form. As the bob gains speed on the downswings, the system
Figure 4.18: The rotational pendulum.
gains rotational kinetic energy at the expense of potential energy. All the while, the Evil Dr.
Friction, acting in the air flowing around the bob and in resistance at the pivot, eats away at each
exchange.
4.7 CHAPTER ACTIVITIES
Problem 1 It is somewhat intriguing and not often discussed what the mechanical system analo-
gous to the parallel RLC circuit (discussed in Section 3.4.2) would be. Identify this mechan-
ical system whose governing differential equation would be analogous with that obtained
for the parallel RLC circuit. Draw the system elements and their relative connectivity and
derive the governing differential equation.
Problem 2 Consider the plate-damper mechanical system from which the spring has been re-
moved. The system is turned vertically and subject to a step input gravitational force as
shown. (The two thin, viscous fluid layers result in a total damping coefficient b.)
If the mass is dropped from the position x_O = 0 m from rest, write the differential equation
governing the system plate velocity. What order is the equation (and system)? What system
parameter(s) characterize the system?
Problem 3 You're escaping the East India Trading Company in your trusty vessel "The Black
Pearl." The Pearl's sails generate thrust in the following relationship:

$$F_{Sail} = C_S\,(V_W - V_P)$$

where V_P is the velocity of the Pearl, V_W is the velocity of the wind, and C_S is a constant.
The drag on the Pearl's hull is linearly proportional to her velocity:

$$F_{Drag} = C_D\,V_P$$

where C_D and the Pearl's mass, m, are constant.
Use an appropriately labeled free body diagram to derive the differential equation governing
the Pearl’s velocity. Determine an algebraic expression for the Pearl’s terminal, i.e., steady
state, velocity. Identify the time constant, τ, for the ship's velocity response.
Problem 4 Consider the mass-spring-damper system subjected to a ramp input platform dis-
placement, y(t) = 5t, as shown:
(a) Draw an appropriately labeled free body diagram and derive the governing differential
equation for the displacement of the mass.
(b) What order are the equation and the system?
(c) What is/are the relevant system parameter(s)?
Problem 5 Consider the downhill skier pictured here:
The total drag on the skier, F_D, is a combination of man-made-snow surface resistance and
aerodynamic drag resulting in the following relationship for the drag force:

$$\vec{F}_D = C_D\,\vec{V}$$

where C_D is the coefficient of drag, V⃗ is the velocity of the skier down the inclined slope, and
C_D is constant. Draw an appropriately labeled free body diagram and derive the equation
governing the skier's velocity. Determine the relevant system parameter(s) for the model.
Problem 6 A pressure-compensating hydraulic spool valve consists of a bar-bell-like mass in a
cylindrical sleeve (shown below). The valve is moved horizontally by a solenoid that applies
a step input force to the mass. A spring at the far end provides an opposing force. Hydraulic
fluid in a tight clearance of width, h, provides a viscous friction force resisting the motion
and given by the relation:

$$F_\nu = \frac{C\,\nu}{h}$$

where C is a constant.

Show that a balance of forces in the horizontal direction gives:

$$m\,\frac{d^2x}{dt^2} = F(t) - k\,x - \frac{C\,\nu}{h}.$$

This equation physically represents a statement of what balance principle? Write algebraic
expressions for the system natural frequency and damping ratio in terms of the provided
quantities.
Problem 7 Consider the angular position of a 100 kg winter Olympic snowboarder on a circular
pipe of radius, R. The total drag on the snowboarder, F_D, is a combination of man-made-
snow surface resistance and aerodynamic drag resulting in the following relationship for the
drag force: F⃗_D = C_D V⃗, where C_D is the coefficient of drag, V⃗ is the tangential velocity
of the snowboarder, and C_D is constant. Use I = mR².

Using an appropriately labeled free body diagram and applying a balance of torques, show
that the differential equation governing the angular position of our snowboarder with re-
spect to time is given by

$$mR^2\,\ddot{\theta} + C_D R^2\,\dot{\theta} + mgR\,\sin\theta = 0.$$

If the skier could enter the pipe at an angle of 30 degrees and remain at angles equal to
or lower than this, linearize the equation to obtain a linear, ordinary differential equation
governing the skier's angular position.
Problem 8 Consider the mass-less-platform-spring-damper system subjected to a ramp input
platform displacement, y(t) = 5t, as shown:
(a) Draw an appropriately labeled free body diagram and derive the governing differential
equation for the displacement of the mass.
(b) What order are the equation and the system?
(c) What is/are the relevant system parameter(s)?
C H A P T E R 5
A Common Notion
Euclid’s first common notion is this: things that are equal to the same
thing are equal to each other. That's a rule of mathematical reasoning.
It’s true because it works. Has done and always will do. Euclid says this
truth is self-evident. You see … there it is, even in that 2000 year old
book of mechanical law. It is a self-evident truth that things which are
equal to the same thing are equal to each other.
Abraham Lincoln quoting Euclid’s
Book of Common Notions
I understand what an equation means if I have a way to figure out the
characteristics of its solution without solving it.
Richard P. Feynman
quoting Paul Dirac
Mr. Lincoln read Euclid wisely: two things equal to the same thing are equal to each other.
This basic premise lies at the heart of the notion of a common solution for linear ordinary differential equa-
tions. What we will learn here is that all solutions for first order systems look like "the same thing."
This will also hold for all 2nd order systems. In Chapters 2 and 3, we motivated the independent
physical principles of inertia, stiffness, and friction (or alternatively inductance, capacitance, and
resistance) by linking them with a cartoon-like characterization in an attempt to illustrate the
analogous roles these play in mechanical and electrical systems. We further made this charac-
terization to create a mnemonic device by which the abstract mathematics used to model such
systems may be more approachable and less daunting. In fact, because the mathematics is essen-
tially “always the same thing,” the analogy serves to teach us that there’s less to learn than we
might otherwise have thought.
We further associated principles of inertia, stiffness, and friction with their physical roles
as agents of storage of kinetic energy, potential energy, and the dissipation of energy, respectively.
We then followed a universal principle of balancing or conserving a basic quantity entering and
leaving a volume element of the system. When we introduce mathematical relations for Captains
Potential and Kinetic Energy and the Evil Dr. Friction using effort and flow variables, it is a
relatively painless procedure to write a governing ordinary differential equation for a system. So
far, the common axioms of systems in different disciplines are:
(a) Each system contains elements represented by characters that:
(a) Store kinetic energy, e.g., inertia or inductance
(b) Store potential energy, e.g., stiffness or capacitance
(c) Dissipate energy, e.g., friction or electrical resistance.
(b) The governing differential equation results from expressing conservation or balance of ele-
mental quantities, e.g., momentum in a mechanical system, or charge through a represen-
tative electrical circuit node, and
(c) Very specific and critically important quantities called system parameters arise out of various
ratios, products, and sums of the system elements, e.g.,
(a) The time constant, τ, for linear, 1st order, ordinary differential equations
(b) The natural frequency, ω_N, and the damping ratio, ζ, for linear, 2nd order, ordinary
differential equations.
What makes these quantities so crucial is that they characterize everything interesting about
the mathematical solutions. In the following sections, we will discuss and dissect these solutions
for linear, first and second order differential equations in terms of the system parameters. Re-
member, the specific mathematical form of the system parameters, the time constant, natural
frequency, and damping ratio, arises from the individual discipline-specific actors playing out a
common movie script.
5.1 TIME DOMAIN SOLUTIONS OF 1st ORDER SYSTEMS
Consider the movie scripts we discussed in Chapter 2 that correspond to 1st order systems. First
order systems result when the script involves a single type of storage element or character (either
potential or kinetic energy storage) along with dissipative elements. Note there may be multiple
agents of storage, but they must store only one type of energy. So far, we’ve been introduced to:
(a) A single electrical capacitor with a resistor, e.g., a series RC circuit with battery
(b) A single electrical inductor with a resistor, e.g., a series RL circuit with battery
(c) A single mechanical spring with a dashpot or friction element arranged in parallel, e.g., the
idealized, mass-less spring-dashpot system
(d) A single mechanical inertia with a friction element, e.g., the parachutist in free-fall.
In all these cases, the governing differential equation has the form shown in Section 3.4.3:
$$\tau\,\frac{d\Lambda(t)}{dt} + \Lambda(t) = \Psi(t), \qquad \Lambda(t = 0) = \Lambda_O$$
Table 5.1: Relation of the system time constant to the system element parameters

    System                       Time constant    Electrical analogy
    Series RC circuit            R·C              —
    Series RL circuit            L/R              —
    Massless spring-dashpot      b/k              product R_MECH · C_MECH
    Freefall parachutist         m/b              ratio L_MECH / R_MECH
where Λ is either the effort or flow that is stored in the system and the time constant, τ, depends on
the individual inertial, stiffness, or friction quantities.
Therefore, one need only identify the storage and dissipative elements and their structural
arrangement to conclude the relevant time constant. Recall, this is illustrated in Figure 2.4. The
system response for Λ(t) is then driven by the system's initial condition and the forcing function
or signal input, Ψ(t).
For linear systems, solutions for Λ(t) may be obtained by either use of Laplace transforms
in the complex plane or a superposition of homogeneous and particular solutions in the time
domain. Laplace transform solutions are available in a number of good texts on systems dynamics
[10, 11, 19]. For the purposes of physical interpretation, we choose here to restrict ourselves to
solutions strictly in the time domain. By doing this, we hope to replace the mathematical jargon
with the physical meaning underlying the math.
From courses in elementary differential equations, we recall that any linear, ordinary, first
order differential equation in a single independent variable exhibits a solution that can be posed
as the sum of the response, Λ_h(t), to the corresponding homogeneous differential equation

$$\tau\,\frac{d\Lambda_h(t)}{dt} + \Lambda_h(t) = 0$$
and the particular response, Λ_p(t), to the differential equation driven by the external signal input
or forcing function, Ψ(t):

$$\tau\,\frac{d\Lambda_p(t)}{dt} + \Lambda_p(t) = \Psi(t)$$

where the total solution, via linear superposition, is given by:

$$\Lambda(t) = \Lambda_h(t) + \Lambda_p(t).$$
You probably were shown this in your earlier courses in differential equations. The homogeneous
solution is often referred to as the natural or free response as this portion of the solution solves
the equation where only the system parameters appear and there is no forcing function or agent
of change from the outside world. Father Force is AWOL in this part of the response. It’s all
about the system on the left side of the equation. Recall this from the illustration given by the
character equation in Figure 2.4 for the RC circuit. This part of the solution prescribes how the
system will react when free of external forces or inputs, i.e., how a system responds essentially
to initial conditions. Therefore, the natural response will be a function of the system parameters
ONLY, i.e., in the case of a first order system, the time constant, τ.
The portion of the solution that responds directly to the excitation from the outside world
is the so-called particular solution. An agent external to the system is forcing the system to respond
to it. We can understand this distinction even more clearly once we have solved both differential
equations.
5.1.1 TRANSIENT RESPONSE
Mathematicians postulate forms for solutions to differential equations …
well, let’s face it, they guess.
P.E. Wellstead
Introduction to Physical System Modeling
While there is, of course, more to it than that, we, as engineers, rather than mathemati-
cians, are happy to take the nod on the form of the solution. Many real physical systems exhibit
exponential behavior. They can be modeled as first order ordinary differential equations because
an exponential solution works to "solve" it:

$$\Lambda_h(t) = A\,e^{\lambda t}$$

where the unknown quantity, λ, results from satisfying the homogeneous form of the governing
differential equation:

$$\tau A\lambda\,e^{\lambda t} + A\,e^{\lambda t} = 0.$$

Dividing through by Ae^{λt} renders the characteristic equation:

$$\tau\lambda + 1 = 0 \quad\Longrightarrow\quad \lambda = -1/\tau.$$

So solutions like Λ_h(t) = Ae^{λt} work when λ = −1/τ. So we have

$$\Lambda_h(t) = A\,e^{-t/\tau}.$$
The value of the constant, A, is determined by applying the system's initial conditions after the com-
plete or total solution is found. The natural response is an exponential decay over the dimensionless
time, t/τ, and represents the part of the solution that responds to the system's initial conditions.
Of Special Note
The free response is a response to initial conditions in the absence of any external
forcing from the outside world. We can associate this response with the transient
response of the system. As it is purely exponential in nature, it effectively "dies out"
within a finite amount of time we call the settling time.
5.1.2 FORCED RESPONSE
The mathematical particular solution, Λ_p(t), responds directly to the forcing function imposed
by the outside world. The proper form for this response is a function that is, in some sense, the
most general form of the function driving the system. Some familiar forms of input excitations
are shown in Table 5.2.
Table 5.2: General forms of particular solutions corresponding to a variety of input excitations

    Input excitation                          General form of Ψ_p(t)
    Step                                      Constant: K
    Ramp or step-ramp                         Linear: Ct + K
    Polynomial                                Similar-order polynomial: Ψ_p(t) = At^N + Bt^(N−1) + ··· + Ct + K
    Harmonic, Ψ(t) = A_IN cos(ωt + α)         Harmonic: Ψ_p(t) = A_OUT cos(ωt + α + φ)
    Arbitrary function                        Truncated polynomial (Taylor series)
We can more clearly show the physical interpretation of the forced response by performing
a full solution for a few simple examples.
Step Input
Consider the example of the mass-less plate discussed in Section 4.3 wherein a constant force is
instantaneously applied to the plate and maintained:

$$F_O(t) = \begin{cases} 0 & t < 0 \\ P & t \ge 0 \end{cases}$$

for which the appropriate forced response is a constant:

$$x_p(t) = K = \text{constant}.$$
This function must now satisfy the inhomogeneous or forced version of the differential equation:

$$\tau\,\frac{d}{dt}(K) + K = P/k \quad\Longrightarrow\quad K = P/k.$$

So the particular solution is that amount of deformation the spring would experience under a
purely static load P, i.e., P/k = δ_STATIC. The forced system response is then simply the static
deflection of the spring alone. To understand this in more detail, let's compose the total solution
for the position of the plate:

$$x(t) = x_h(t) + x_p(t) = A\,e^{-t/\tau} + P/k$$
$$x(0) = x_0 = A + P/k \quad\Longrightarrow\quad A = x_0 - P/k$$

or

$$x(t) = (x_0 - P/k)\,e^{-t/\tau} + P/k.$$
Notice that since the transient, by definition, decays away at long times compared with the system
time constant, τ, the particular solution must represent that part of the solution that remains at
long times or the steady state. This solution is shown graphically in Figure 5.1.
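The short sketch below evaluates this closed-form solution at whole multiples of τ, using the parameter values quoted in the Figure 5.1 caption; the code itself is an illustration added here, not part of the original example.

```python
# Step response of the mass-less plate, with the Figure 5.1 caption values
# (x0 = 2 m, k = 5 N/m, b = 10 Ns/m, P = 60 N -> tau = 2 s, x_SS = 12 m).
import math

x0, k, b, P = 2.0, 5.0, 10.0, 60.0
tau, x_ss = b / k, P / k

def x(t):
    """Closed-form solution x(t) = (x0 - P/k)*exp(-t/tau) + P/k."""
    return (x0 - x_ss) * math.exp(-t / tau) + x_ss

for n in range(5):                      # sample at t = 0, tau, 2tau, 3tau, 4tau
    t = n * tau
    frac = (x(t) - x0) / (x_ss - x0)    # fraction of the total change completed
    print(f"t = {t:4.1f} s   x = {x(t):6.3f} m   {100*frac:5.1f}% of the change")
# The last line lands near 98%, the "four time constant" rule of thumb.
```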
We learn several interesting characteristics from this response that, it turns out, are char-
acteristic of all first order responses. Since for our case:
$$x(0) = x_0 = 2\ \mathrm{m}, \qquad x_{SS} = P/k = 12\ \mathrm{m}$$
$$x(t) = (x_0 - P/k)\,e^{-t/\tau} + P/k \quad\Longrightarrow\quad x(t \ge 4\tau) \approx P/k.$$

The response to the step input force proceeds exponentially from the initial value of 2 meters to
the final value of x_SS = P/k = δ_STATIC in approximately four time constants. As engineers, we
choose a somewhat arbitrary datum for the time at which the exponential decay is sufficiently
complete. Here, we adopt as a reference point the time by which 98% of the change from the
initial value to the steady-state value takes place. This is four time constants because 98% of the
exponential decay has occurred within this time frame:
$$e^{-t/\tau} = e^{-4\tau/\tau} = e^{-4} = 0.018 \approx 0.02.$$
We often refer to this regime as the transient because the plate position is changing throughout
this time range. Thereafter the response is in the steady-state at a value equaling that given by
the static deflection of the spring alone because the plate is effectively no longer moving and the
internal force in the damper has decayed to some negligibly small value.
Figure 5.1: The response of a mass-less plate with spring and damper to a step input force of 60 N
(x_0 = 2 m; k = 5 N/m; b = 10 Ns/m). The time constant is given by b/k = 2 seconds. The response
is characterized by an exponential approach from an initial value to a final value of δ_STATIC = P/k.
Of Special Note
The forced response is a response specifically to the external forcing from the out-
side world. This response is present long after the transient or free response has
decayed away. For this reason, the forced response is often referred to as the
steady-state response.
When one examines these regimes along with the mathematics of the homogeneous and
particular solutions, one can list several observations that are universally true for all first order
systems.
Of Special Note
Observations regarding solutions to all 1st order differential equations
(a) The homogeneous solution responds to the initial conditions and repre-
sents the mathematical structure of the physical transient from initial
to steady-state values.
(b) The particular solution responds specifically to the forcing function im-
posed upon the system by some external agent. It is the only portion of
the solution that survives after the exponential decay of the transient.
As such, Λ_p(t) represents the response of the system in steady state.
(c) In the parlance of a movie script, from beginning (initial) to end (steady
state) values, the transient part of the movie lasts roughly 4 time con-
stants. Admittedly, this number is somewhat arbitrary and can be ad-
justed to match the precision with which one needs to attain steady
state. What doesn't change is that the steady state is effectively attained
in quanta of time constants.
(d) Lastly, the entire response can be cast in dimensionless form. This is al-
ways an appealing feature in predictive models because it points toward
physically motivated model parameters.
In this language, the steady state is generally a function of time when the input signal is time-dependent.
To see this last point, one can reformulate the solution to take the form of a dimensionless
response variable, Λ̂(t), where

$$\hat{\Lambda}(t) = \frac{\Lambda(t) - \Lambda_{SS}}{\Lambda_0 - \Lambda_{SS}} = e^{-\hat{t}} \qquad \text{where} \qquad \hat{t} = \frac{t}{\tau}$$
which is plotted in Figure 5.2. Often, students will only first see this dimensionless form of solu-
tions to first order differential equations in their undergraduate heat transfer courses. As you may
not have yet had such a course, what is important to point out is that the term Λ(t) − Λ_SS is the
driving agent that causes the variable Λ(t) to change over time. When the variable eventually
reaches its steady-state value, this driving agent vanishes and the transient is complete. So the
main "take away" concept here is that the driver for dynamic response is the measure by which
the current value of the variable is different from its eventual steady-state value. It is precisely this
difference in values that actually exponentially decays away in time. Because all systems, regard-
less of their initial conditions or forcing function, can be cast in this form, we can refer to this
dimensionless form as a master curve for first order systems. A master curve is a function onto
which all solutions fall when appropriately normalized or non-dimensionalized. Master curves
are appealing in predictive mathematical modeling because of the physical interpretation given
to the normalizing quantities. Here, these are the difference between the value of the dependent
system variable and its eventual steady-state value, i.e., the driving force, and the system time
constant.
Figure 5.2: The response of a mass-less plate with spring and damper to a step input force in dimen-
sionless form.
The difference between a system response variable, Λ(t), and its value in steady state is the
driver causing the dynamic response. As for most meaningful dimensionless parameters in models,
Λ̂(t) represents a ratio between two physical quantities: the ratio of the current driving agent to
the initial driving agent. Therefore, this particular ratio of differences always decays exponentially
in first order systems over time regimes measured in quanta of system time constants.
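A small numerical sketch of the master-curve idea: two first-order systems with entirely different time constants, initial conditions, and steady states produce identical values of Λ̂ at the same dimensionless time. The numbers are illustrative assumptions.

```python
# Master-curve collapse: normalize two different first-order step responses and
# compare them with exp(-t_hat) at matching dimensionless times.
import math

def normalized_response(lam0, lam_ss, tau, t):
    lam = (lam0 - lam_ss) * math.exp(-t / tau) + lam_ss   # dimensional solution
    return (lam - lam_ss) / (lam0 - lam_ss)               # dimensionless form

for t_hat in [0.5, 1.0, 2.0, 4.0]:
    a = normalized_response(lam0=2.0,  lam_ss=12.0, tau=2.0,  t=t_hat * 2.0)
    b = normalized_response(lam0=-7.0, lam_ss=3.0,  tau=0.25, t=t_hat * 0.25)
    print(t_hat, round(a, 6), round(b, 6), round(math.exp(-t_hat), 6))  # all equal
```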
Ramp Input
We can maintain that the generalization holds when the system is exposed to a time-dependent
forcing function. Consider the ramp input signal:
$$F_O(t) = \begin{cases} 0 & t < 0 \\ 10\,t & t \ge 0 \end{cases}$$
for which the appropriate normalized signal input in our example is F(t)/k = 10t/k. The most
general form of a linear function is then presumed for the particular solution:

$$x_p(t) = Ct + K.$$
Substituting this into the differential equation:

$$\tau C + K + Ct = (10/k)\,t$$

and upon setting like terms equal to one another:

$$C = 10/k, \qquad \tau C + K = 0 \quad\Longrightarrow\quad K = -\tau C = -\frac{10}{k}\,\tau$$

or

$$x_p(t) = -\frac{10}{k}\,\tau + \frac{10}{k}\,t = \frac{10}{k}\,(t - \tau).$$
It is important to note that while the forcing function is a straight line with zero intercept, the
eventual steady-state solution has an intercept. This implies there is an offset in time between the
forcing function and the steady response. This steady solution, given by x_p(t) = (10/k)(t − τ), is the
straight dotted line in Figure 5.3.
Compiling the total solution and applying the initial conditions:

$$x(t) = x_h(t) + x_p(t) = A\,e^{-t/\tau} + \frac{10}{k}\,(t - \tau)$$
$$x(0) = x_0 = A - \frac{10\,\tau}{k} \quad\Longrightarrow\quad A = x_0 + \frac{10\,\tau}{k}$$
$$x(t) = \left(x_0 + \frac{10\,\tau}{k}\right)e^{-t/\tau} + \frac{10}{k}\,(t - \tau)$$

which is plotted in Figure 5.3 for several distinct initial displacements along with the asymptotic
steady-state line.
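The following sketch (assumed numbers) checks this ramp-input result numerically: once the transient has died away, the response tracks the steady solution (10/k)(t − τ), i.e., it lags the un-shifted ramp by one time constant.

```python
# Ramp response of the mass-less plate: verify the long-time lag of one time constant.
import math

k, b, x0 = 5.0, 10.0, 0.0            # illustrative values; tau = b/k = 2 s
tau = b / k

def x(t):
    A = x0 + 10.0 * tau / k
    return A * math.exp(-t / tau) + (10.0 / k) * (t - tau)

t = 20.0 * tau                                   # well past the transient
print(round(x(t) - (10.0 / k) * (t - tau), 6))   # ~0: response sits on the steady line
print(round((10.0 / k) * t - x(t), 4), "=", round((10.0 / k) * tau, 4))
# The offset between the un-shifted ramp (10/k)*t and the response equals (10/k)*tau.
```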
5.1.3 DIMENSIONLESS SOLUTIONS FOR 1st ORDER SYSTEMS
We note that even when the steady state is time-dependent, the entire response can still, for every
first order system, be cast in dimensionless form:

$$\hat{\Lambda}(t) = \frac{\Lambda(t) - \Lambda_{SS}(t)}{\Lambda_0 - \Lambda_{SS}(0)} = e^{-\hat{t}} \qquad \text{where} \qquad \hat{t} = \frac{t}{\tau}.$$

This dimensionless solution is plotted in Figure 5.4. Note the form is identical with that in Fig-
ure 5.2.
Figure 5.3: The response of a mass-less plate with spring and damper to a ramp input force in dimen-
sional form. The response is characterized by an exponential approach or transient from an initial value
to a final value that, like the forcing function, increases linearly in time.
5.1.4 UNIVERSAL TRUTHS FOR 1st ORDER SYSTEM RESPONSE IN THE
TIME DOMAIN
We can now add several observations to our list of universal truths that always characterize how 1st
order systems respond to their environment. We note that 1st order systems always approach a
steady-state response monotonically from their initial condition, and the response never over-
shoots this steady response. The steady response behaves like "a fence" that bounds the total
response. is total response approaches the steady solution “from one side” where the initial
conditions reside, as observed in Figure 5.3 for the ramp input example. We also note that even
when the steady-state solution is time-dependent, the appropriate non-dimensionalization de-
livers a master curve that is identical for all initial conditions, or starting points, and steady-state
solutions or ending points as shown in Figure 5.4.
Figure 5.4: The response of a mass-less plate with spring and damper to a ramp input force in dimen-
sionless form.
Of Special Note
Universal Truths for 1st Order Systems
(a) They are comprised of system elements (or characters) that store only a single form
of energy, either potential or kinetic energy (but not both).
(b) Their behavior is characterized by a single system parameter called the system time
constant, τ, where τ = f_1(R, C) = f_2(b, k) or τ = g_1(L, R) = g_2(m, b).
(c) The system is identified with one characteristic time given by the time constant.
(d) The system transient decays away in a small multiple of the system time constant.
(e) The system response approaches steady state monotonically from one side.
(f) The system is never capable of overshooting the eventual steady-state response.
(g) The system response can be universally placed in a dimensionless form normalized
by the driving agent, Λ_0 − Λ_SS, and the characteristic time constant, τ.
5.2 TIME DOMAIN SOLUTIONS OF 2nd ORDER SYSTEMS
Again, as discussed in detail in Chapter 2, 2nd order systems result when the script involves both
distinct types of storage character, i.e., both potential and kinetic energy storage. Note there may
be multiple agents of storage, but they must be capable of storing both types of energy. So far,
we’ve been introduced to:
(a) An electrical capacitor and inductor with a resistor, e.g., series or parallel RLC circuits with
an external passive power supply, i.e., battery.
(b) An electrical capacitor and inductor with no systemic damping, e.g., a series/parallel LC
circuit with an external passive power supply, i.e., battery.
(c) A mass with mechanical spring and dashpot connected in series or parallel, e.g., the ideal-
ized, mass-spring-dashpot system.
(d) An idealized, undamped mass-spring harmonic oscillator.
In these cases, the normalized governing differential equation has the form:

$$\frac{1}{\omega_N^2}\,\frac{d^2\Lambda(t)}{dt^2} + \frac{2\zeta}{\omega_N}\,\frac{d\Lambda(t)}{dt} + \Lambda(t) = \Psi(t)$$

where Λ represents the dependent effort or flow variable in the system and the system is charac-
terized by a pair of parameters, the damping ratio and natural frequency, {ζ, ω_N}, where each
depends on the individual inertial, stiffness, or friction quantities. Examining the systems in Sec-
tions 3.5 and 4.5, we arrived at the results summarized in Table 5.3.
When viewed from the perspective of the effort-flow analogy with electrical systems, i.e.,
considering the system damping ratio and natural frequency, a parallel mass-spring-damper sys-
tem should behave similarly to the series RLC circuit (see Table 5.3). Therefore, one need only
identify the storage and dissipative elements and how they are structured in the system to know
that the relevant natural frequency and damping ratio are particular products and/or ratios of the
respective system element parameters. Recall, this is illustrated in Tables 3.3 and 3.4 for 1st order
electrical systems. The system response for Λ(t) is driven by one or both of the system's initial
conditions and the forcing function or signal input, Ψ(t).
The corresponding transient and steady-state parts of the solution satisfy:

$$\frac{1}{\omega_N^2}\,\frac{d^2\Lambda_h(t)}{dt^2} + \frac{2\zeta}{\omega_N}\,\frac{d\Lambda_h(t)}{dt} + \Lambda_h(t) = 0$$

and

$$\frac{1}{\omega_N^2}\,\frac{d^2\Lambda_p(t)}{dt^2} + \frac{2\zeta}{\omega_N}\,\frac{d\Lambda_p(t)}{dt} + \Lambda_p(t) = \Psi(t)$$
Table 5.3: Analogous representations for system natural frequency and damping ratio

    System                        Natural frequency, ω_N (rad/s)            Damping ratio, ζ
    Series RLC circuit            ω_N = 1/√(LC)                             ζ = (R/2)·√(C/L)
    Parallel RLC circuit          ω_N = 1/√(LC)                             ζ = (1/(2R))·√(L/C)
    Parallel mass-spring-damper   ω_N = √(k/m) = 1/√(L_MECH·C_MECH)         ζ = b/(2√(km)) = (R_MECH/2)·√(C_MECH/L_MECH)
    Parallel mass-spring          ω_N = √(k/m) = 1/√(L_MECH·C_MECH)         ζ = 0
respectively, where the total solution, via linear superposition, is given by:

$$\Lambda(t) = \Lambda_h(t) + \Lambda_p(t).$$
The transient response will be a function of the system parameters only, i.e., the system natural
frequency, ω_N, and system damping ratio, ζ, and is that portion of the solution that responds
directly to the initial conditions. The portion of the solution that responds directly to the excitation
from the outside world is the particular solution, Λ_p(t).
5.2.1 FREE RESPONSE
Similar to first order equations, exponential functions also satisfy the second order equation:

$$\Lambda_h(t) = A\,e^{\lambda t}$$

where the unknown exponents result from satisfying the ODE:

$$\frac{1}{\omega_N^2}\,A\lambda^2 e^{\lambda t} + \frac{2\zeta}{\omega_N}\,A\lambda e^{\lambda t} + A\,e^{\lambda t} = 0.$$

Dividing through by Ae^{λt} renders the characteristic equation for the ODE:

$$\frac{1}{\omega_N^2}\,\lambda^2 + \frac{2\zeta}{\omega_N}\,\lambda + 1 = 0 \quad\Longrightarrow\quad \text{two solutions, } \lambda_{1,2}.$$
Because this equation is quadratic, it exhibits two roots, λ_{1,2}. Depending on the sign of the dis-
criminant, pairs of roots to this equation correspond to three distinctly different physical regimes
of behavior as in Table 5.4.
Table 5.4: The three physical scenarios for transient solutions of second order systems

    Scenario 1:  ζ < 1,  pair of complex conjugate roots,   λ_{1,2} = −ζω_N ± jω_N√(1 − ζ²)   (under-damped)
    Scenario 2:  ζ = 1,  pair of real, equal roots,         λ_1 = λ_2 = −ζω_N                  (critically damped)
    Scenario 3:  ζ > 1,  pair of real, distinct roots,      λ_{1,2} = −ζω_N ± ω_N√(ζ² − 1)     (over-damped)
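A short sketch of Table 5.4 in action: solving the quadratic characteristic equation for an assumed ω_N and three assumed damping ratios reproduces the three families of roots.

```python
# Roots of (1/wN^2)*lam^2 + (2*zeta/wN)*lam + 1 = 0 for the three damping regimes.
import cmath

def char_roots(omega_n, zeta):
    a, b, c = 1.0 / omega_n**2, 2.0 * zeta / omega_n, 1.0
    disc = cmath.sqrt(b * b - 4.0 * a * c)          # complex sqrt handles zeta < 1
    return (-b + disc) / (2.0 * a), (-b - disc) / (2.0 * a)

omega_n = 4.0                                        # assumed natural frequency
for zeta in (0.3, 1.0, 2.0):                         # under-, critically, over-damped
    r1, r2 = char_roots(omega_n, zeta)
    print("zeta =", zeta, " lambda_1 =", r1, " lambda_2 =", r2)
# zeta < 1: complex-conjugate pair  -zeta*wN +/- j*wN*sqrt(1 - zeta^2)
# zeta = 1: repeated real root      -wN
# zeta > 1: two distinct real roots -zeta*wN +/- wN*sqrt(zeta^2 - 1)
```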
Underdamped Systems
For the first scenario, the system is under-damped. Mathematically, these result when the damp-
ing ratio, ζ, is less than one. Physically, this happens when elasticity and inertia dominate friction
in a system (see Figure 5.5). Recall, ζ = b/(2√(km)). If the stiffness and inertia, √(km), dominate
relative to dissipation, b, then ζ < 1. Energy storage, in some sense, is strong enough to overcome
energy dissipation allowing for a transfer back and forth between potential and kinetic forms of
energy in the system.
Figure 5.5: Strength of energy characters in an under-damped 2nd order system.
This is rendered mathematically by the solutions of the characteristic equation. The tran-
sient solution is given by:

$$\Lambda_h(t) = C_1\,e^{\lambda_1 t} + C_2\,e^{\lambda_2 t}$$
C
where C1 and C2 are constants to be determined later by imposing the initial conditions. When
the roots are complex, they contain a negative real portion corresponding to the exponential de-
cay caused by the energy dissipating character, and a purely imaginary portion that corresponds
to a harmonic oscillation that occurs “inside the decaying envelope” of the exponential part of
the solution. These are the energy storage characters transferring energy back and forth between
potential and kinetic forms. Algebraic manipulation results in a solution of the form

$$\Lambda_h(t) = e^{-\zeta\omega_N t}\left[A\cos(\omega_d t) + B\sin(\omega_d t)\right]$$

where ω_d = ω_N√(1 − ζ²) is the damped natural frequency, and A and B are constants determined
by applying the initial conditions. The system will oscillate in the transient with this characteristic
frequency in the presence of energy dissipation. And the natural response decays away exponen-
tially in a time frame prescribed by the system damping ratio and natural frequency. There is
no time constant, per se, for a second order system. The constants, A and B, are determined by
applying the system's initial conditions. The under-damped transient response is an exponentially
decaying harmonic that decays over the dimensionless time, t̂ = t/(1/ζω_N).
Of Special Note
It is interesting to note that while resistance is fairly straightforward to quantify in
electrical systems, damping coefficients in mechanical systems have a somewhat higher
degree of uncertainty associated with them. You will not find a value for the damping
coefficient stamped on the container for a damping element. Friction always has an
inherent uncertainty about its actual mathematical representation.
Because of this, and because it is the damping ratio, ζ, and not the damping
coefficient, b, that matters in our solutions, we point out that there is a straightforward
way to determine the damping ratio directly from experimental data. For this purpose,
imagine that you perturb an under-damped second order system from rest with an initial
displacement and let the system's free response decay away. It is easy to
show that the ratio of successive peaks is given by:
$$\frac{x_{N+1}}{x_N} = e^{-\zeta\omega_N T_d}$$
$$\delta \equiv \ln\!\left(\frac{x_N}{x_{N+1}}\right) = \zeta\omega_N T_d = \frac{2\pi\zeta}{\sqrt{1-\zeta^2}}.$$
Thereby, the log decrement, δ, a quantity readily measured from experiment, is a func-
tion solely of the system's damping ratio. Inverting this relation:

$$\zeta = \frac{\delta}{\sqrt{4\pi^2 + \delta^2}}.$$
So the damping ratio is easily determined by measuring the log decrement, δ, or loga-
rithm of ratios of successive peaks. The damped period of the free decaying oscillation
is also easy to measure. With the period known, it is straightforward to compute the
system's natural frequency:
$$\omega_N = \frac{2\pi/T_d}{\sqrt{1-\zeta^2}}.$$
So the system parameters can be computed directly from simple experimental measure-
ments.
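The procedure is easy to try numerically. The sketch below (with an assumed ω_N and ζ) generates a free under-damped decay, samples it one damped period apart, and recovers the damping ratio and natural frequency from the log decrement exactly as described above.

```python
# Recover (zeta, omega_N) from a simulated free decay using the log decrement.
import math

omega_n_true, zeta_true, x0 = 5.0, 0.15, 1.0       # assumed system and release
omega_d = omega_n_true * math.sqrt(1.0 - zeta_true**2)
T_d = 2.0 * math.pi / omega_d

def x(t):
    # One valid free response: exponentially decaying cosine at the damped frequency.
    return x0 * math.exp(-zeta_true * omega_n_true * t) * math.cos(omega_d * t)

x_N, x_N1 = x(T_d), x(2.0 * T_d)          # samples one damped period apart
delta = math.log(x_N / x_N1)              # log decrement
zeta_est = delta / math.sqrt(4.0 * math.pi**2 + delta**2)
omega_n_est = (2.0 * math.pi / T_d) / math.sqrt(1.0 - zeta_est**2)

print(round(zeta_est, 4), "vs", zeta_true)
print(round(omega_n_est, 4), "vs", omega_n_true)
```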
Critically Damped Systems
The second scenario is basically a fence between the 1st and 3rd scenarios. The system is called
critically damped. Physically, this corresponds to a system where the energy storage and dissipa-
tion "have equal strength," if you will, and b = 2√(km) (see Figure 5.6). The ability of the system's
elasticity and inertia to store potential and kinetic energy, respectively, is "equal," in some sense,
to the ability of the system to dissipate energy. Energy storage, then, just rivals energy dissipation.
In this limit, there is just enough friction or dissipation to prevent anything other than a single
transfer of energy between the kinetic and potential forms. As such, there is just enough
energy dissipation to prevent oscillatory response from occurring.
Figure 5.6: Strength of energy characters in a critically damped 2nd order system.
The solutions contain two negative, equal real parts, λ_1 = λ_2 = λ = −ζω_N, corresponding
to the exponential decay caused by the energy dissipating characters. The transient solution is
given by:

$$\Lambda_h(t) = C_1\,e^{\lambda t} + C_2\,t\,e^{\lambda t} = C_1\,e^{-\zeta\omega_N t} + C_2\,t\,e^{-\zeta\omega_N t}.$$
The constants, C_1 and C_2, are determined by the system's initial conditions. The critically damped
transient response is a pure exponential decay over the dimensionless time, t̂ = t/(1/ζω_N). As
we will soon observe, this decay is the fastest decay that does not allow for oscillatory behavior
in the transient. This makes the critically damped response case an important limit solution for
engineering design as there are a number of physical situations in which one desires as fast a decay
as possible without oscillation from a given prescribed set of initial conditions, e.g., response of a
mass-spring-damper automobile strut to an imposed initial compression.
Overdamped Systems
Physically, the final scenario corresponds to a system where the energy dissipation dominates
the response at the expense of the ability of the system's elasticity and inertia to store potential
and kinetic energy, respectively. Mathematically, b > 2√(km) ⟹ ζ > 1. In this limit, there is
more energy dissipation than is necessary to prevent oscillatory response from occurring (see
Figure 5.7).
Figure 5.7: Strength of energy characters in an overdamped 2nd order system.
This is rendered mathematically by the solutions of the characteristic equation: two nega-
tive and distinct real parts, λ_{1,2} = −ζω_N ± ω_N√(ζ² − 1), corresponding to two distinct rates of
exponential decay caused by the energy dissipating characters. The transient solution is given by:

$$\Lambda_h(t) = C_1\,e^{\lambda_1 t} + C_2\,e^{\lambda_2 t} = C_1\,e^{\left(-\zeta\omega_N + \omega_N\sqrt{\zeta^2-1}\right)t} + C_2\,e^{\left(-\zeta\omega_N - \omega_N\sqrt{\zeta^2-1}\right)t}.$$
The constants, C_1 and C_2, are determined by the system's initial conditions. The over-damped
transient response is a pair of pure exponential decays over two distinct dimensionless times:

$$\hat{t}_1 = \frac{t}{1\big/\!\left(\zeta\omega_N - \omega_N\sqrt{\zeta^2-1}\right)} \qquad \text{and} \qquad \hat{t}_2 = \frac{t}{1\big/\!\left(\zeta\omega_N + \omega_N\sqrt{\zeta^2-1}\right)}.$$
The over-damped response is identified by two physical time scales for decay:
(a) one decay time that is larger than that in the critically damped case
(b) a distinct second decay time that is smaller than that in the critically damped case.
Thus, superposing both solutions results in an overall decay time longer than that observed in the
critically damped case. The more damping or friction added to a system beyond this limit, the
slower the decay to steady state.
5.2.2 FORCED RESPONSE
The handling of the mathematical particular solution, Λ_p(t), is no different than that for 1st order
systems. The solution to the inhomogeneous differential equation responds directly to the forcing
function. The form for this response is the most general form of the function driving the system
as outlined in Table 5.2. Again, the physical interpretation of the forced response is best shown
by performing several simple examples.
Step Input to an Underdamped System
Consider the example of the parallel mass-spring-damper discussed in Section 4.4 wherein a
constant force is instantaneously applied. The classic step input signal is simply a constant input
suddenly applied:

$$F_O(t) = \begin{cases} 0 & t < 0 \\ P & t \ge 0 \end{cases}$$

for which the appropriate forced response is

$$x_p(t) = K = \text{constant}.$$

This function must now satisfy the inhomogeneous or forced version of the ODE:
$$\frac{1}{\omega_N^2}\,\ddot{K} + \frac{2\zeta}{\omega_N}\,\dot{K} + K = \frac{P}{k} \quad\Longrightarrow\quad K = P/k = \delta_{STATIC}.$$
The forced system response is then simply the static deflection of the spring alone. To understand
this in more detail, let's compose the total solution for the position of the plate:

$$x(t) = x_h(t) + x_p(t) = e^{-\zeta\omega_N t}\left[A\cos(\omega_d t) + B\sin(\omega_d t)\right] + P/k.$$
Applying the initial conditions:

$$x(0) = x_0 \quad\Longrightarrow\quad A = x_0 - P/k$$
$$\dot{x}(0) = \nu_0 \quad\Longrightarrow\quad B = \frac{\nu_0 + \zeta\omega_N\,(x_0 - P/k)}{\omega_N\sqrt{1-\zeta^2}}.$$
Finally, upon substitution:

$$x(t) = e^{-\zeta\omega_N t}\left[(x_0 - P/k)\cos(\omega_d t) + \frac{\nu_0 + \zeta\omega_N\,(x_0 - P/k)}{\omega_N\sqrt{1-\zeta^2}}\,\sin(\omega_d t)\right] + P/k.$$
This solution is shown graphically in Figure 5.8 for several sets of initial displacements.
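The sketch below evaluates this total solution numerically. The stiffness, damping, force, and initial displacement follow the Figure 5.8 caption; the mass value is an assumption (it is not quoted there), chosen so the system is under-damped.

```python
# Under-damped step response of the parallel mass-spring-damper.
import math

m, k, b = 50.0, 5.0, 10.0                    # m assumed; k, b from the caption
F, x0, v0 = 60.0, 2.0, 0.0
x_ss = F / k                                 # delta_STATIC = P/k = 12 m

omega_n = math.sqrt(k / m)
zeta = b / (2.0 * math.sqrt(k * m))
omega_d = omega_n * math.sqrt(1.0 - zeta**2)

A = x0 - x_ss
B = (v0 + zeta * omega_n * (x0 - x_ss)) / omega_d

def x(t):
    return math.exp(-zeta * omega_n * t) * (A * math.cos(omega_d * t)
                                            + B * math.sin(omega_d * t)) + x_ss

peak = max(x(0.01 * i) for i in range(10000))     # crude search over ~100 s
print("steady state:", x_ss, " first overshoot peaks near:", round(peak, 2))
print("value after four decay times:", round(x(4.0 / (zeta * omega_n)), 2))
```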
This response exhibits several features characteristic of all under-damped 2nd order re-
sponses: an exponentially decaying, oscillatory, harmonic response that overshoots the eventual
Figure 5.8: The response of a lumped mass with parallel spring and damper to a step input force of
60 N (x_{1,2,3} = 2 m, 8 m, 14 m; ν_0 = 0 m/s; k = 5 N/m; b = 10 Ns/m). The response is characterized
by a decaying oscillation from an initial value that overshoots its steady-state value. It oscillates about
and ultimately decays to the steady-state value of δ_STATIC = F_0/k.
steady-state solution at x_SS = F_0/k = δ_STATIC. This eventual steady state is then reached in ap-
proximately four characteristic decay times parameterized by ω_N and ζ. This is the transient
regime where the response is characterized by two characteristic times: the decay time of the
envelope bounding the oscillations and the period of the oscillations as outlined in Table 5.5.
Table 5.5: Characteristic transient time scales for under-damped second order systems

    Solution feature                        Characteristic time
    Exponential decay                       1/(ζω_N)
    Period of damped harmonic response      2π/ω_d = 2π / (ω_N√(1 − ζ²))
Following the exponential decay, the response is in the steady state at a value equaling that
given by the static deflection of the spring alone because the inertial mass is effectively no longer
moving and the internal force in the damper has decayed to some negligibly small value. Again,
precisely as for first order systems, we observe characteristics common to the solutions of all 2nd
order systems.
Of Special Note
Observations regarding solutions to all 2nd order differential equations
(a) The homogeneous solution responds to the initial conditions and represents the
mathematical structure of the physical transient from initial to steady-state values.
(b) The particular solution responds specifically to the signal input or forcing function
imposed upon the system by some external agent, i.e., the outside world. It is
the only portion of the solution that survives after the exponential decay of the
transient. As such, Λ_p(t) represents the response of the system in steady state.
(c) In the parlance of a movie script, from beginning (initial) to end (steady-state)
values, the movie lasts effectively 4 characteristic decay times as prescribed in Ta-
ble 5.5. So the steady state is effectively attained in quanta of exponential decay
times.
(d) Lastly, the entire response can, for every second order system, be cast in dimen-
sionless form.
To see this last point, one can reformulate the solution to take the form of a dimensionless
response variable, Λ̂(t), where

$$\hat{\Lambda}(t) = \frac{\Lambda(t) - \Lambda_{SS}}{\Lambda_0 - \Lambda_{SS}} = G\,e^{-\hat{t}_1}\cos\!\left(2\pi\,\hat{t}_2 - \tan^{-1}\!\left\{\frac{\zeta\,(\eta + 1)}{\sqrt{1-\zeta^2}}\right\}\right)$$

$$G \equiv \sqrt{\frac{1 + 2\eta\zeta^2 + \eta^2\zeta^2}{1-\zeta^2}}, \qquad \eta \equiv \frac{\nu_0}{\zeta\omega_N\,(x_0 - x_{SS})}$$

$$\hat{t}_1 = \frac{t}{1/(\zeta\omega_N)}, \qquad \hat{t}_2 = \frac{t}{2\pi/\omega_d} = \frac{t}{T_d}$$
which is plotted in Figure 5.9. Because all systems, regardless of their initial conditions or forcing
function, can be cast in this form, we can refer to the function in Figure 5.9 as a master curve for
under-damped second order systems.
The difference between the current system response variable, Λ(t), and its value in steady
state is the driver causing the dynamic response. In dimensionless form, Λ̂(t) represents the ra-
tio of the current driving agent to the initial driving agent. The master curves are representative
for any initial displacement, any set of system parameters for which the system remains under-
damped, and any step input force. This master representation shows explicitly that with a sufficient
Figure 5.9: The response of a lumped mass with parallel spring and damper to a step input force from
rest. The curves that are distinct in Figure 5.8 all collapse to the same curve (Curve 1). The rest of the
curves correspond to increasing amounts of viscous damping in the system.
amount of damping, any second order system’s response will be nearly indistinguishable from a
corresponding first order-like response.
Further, since there are two initial conditions necessary for second order systems, there
is a natural scaling of the initial velocity with ν* = ζω_N (x_0 − x_SS) such that the response is
characterized by a dimensionless form of the initial velocity:
$$\hat{\nu}_0 = \nu_0/\nu^* = \nu_0\,\big/\,\big(\zeta\omega_N\,(x_0 - x_{SS})\big).$$

The response of the original system is plotted in dimensionless form for a variety of initial velocities
in Figure 5.10.
Ramp Input to an Over-damped System
We can maintain that the generalization holds when the system is exposed to a time-dependent
forcing function. Consider the ramp input signal:
$$F_O(t) = \begin{cases} 0 & t < 0 \\ D\,t & t \ge 0 \end{cases}$$
The most general form of a linear function is then presumed for the particular solution:

$$x_p(t) = K + Ct.$$
Figure 5.10: The response of a lumped mass with parallel spring and damper to a step input force for
a variety of initial velocities.
Substituting this into the differential equation and setting like terms equal to one another:

$$\frac{1}{\omega_N^2}\,(0) + \frac{2\zeta}{\omega_N}\,C + K + Ct = \frac{D}{k}\,t$$
$$C = D/k, \qquad \frac{2\zeta}{\omega_N}\,C + K = 0 \quad\Longrightarrow\quad K = -\frac{2\zeta}{\omega_N}\,\frac{D}{k}$$

or

$$x_p(t) = -\frac{2\zeta}{\omega_N}\,\frac{D}{k} + \frac{D}{k}\,t = \frac{D}{k}\left(t - \frac{2\zeta}{\omega_N}\right).$$
Compiling the total solution and applying the initial conditions:

$$x(t) = x_h(t) + x_p(t) = C_1\,e^{\left(-\zeta\omega_N + \omega_N\sqrt{\zeta^2-1}\right)t} + C_2\,e^{\left(-\zeta\omega_N - \omega_N\sqrt{\zeta^2-1}\right)t} + \frac{D}{k}\left(t - \frac{2\zeta}{\omega_N}\right).$$
Applying initial conditions:

$$x(0) = x_0 \quad\Longrightarrow\quad x_0 = C_1 + C_2 - \frac{2D\zeta}{k\,\omega_N}$$
$$\dot{x}(0) = \dot{x}_0 \quad\Longrightarrow\quad \dot{x}_0 = C_1\left(-\zeta\omega_N + \omega_N\sqrt{\zeta^2-1}\right) + C_2\left(-\zeta\omega_N - \omega_N\sqrt{\zeta^2-1}\right) + \frac{D}{k}.$$
Figure 5.11: The response of an over-damped mass-spring-damper system to a ramp input force in
dimensional form. The response is characterized by an exponential approach or transient from an initial
value at rest to a final value that, like the forcing function, increases linearly in time.
Figure 5.12: The response of an overdamped mass-spring-damper system to a ramp input force in
dimensional form. With increasing values for the damping ratio, the response eventually appears first-
order-like.
After resolving the values of C_1 and C_2, we plot the total response in Figure 5.11 for several distinct initial displacements. As in the under-damped case, systems with increasing damping ratios eventually respond in a manner that "looks" first-order-like (see Figure 5.12). Finally, solving for the constants C_1 and C_2 for various initial velocities gives the responses shown in Figure 5.13.
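As a purely illustrative check of this bookkeeping (the numerical values below are assumptions, not taken from the text), the two initial-condition equations above can be solved for C_1 and C_2 and the total ramp response evaluated directly:

```python
import numpy as np

# Assumed illustrative parameters (over-damped, so zeta > 1)
wn, zeta = 2.0, 1.5
D, k = 4.0, 10.0            # ramp slope and stiffness
x0, v0 = -2.0, 0.0          # initial displacement and velocity

s1 = -zeta*wn + wn*np.sqrt(zeta**2 - 1.0)   # first characteristic exponent
s2 = -zeta*wn - wn*np.sqrt(zeta**2 - 1.0)   # second characteristic exponent

# Initial-condition equations from the text:
#   x0 = C1 + C2 - 2*D*zeta/(k*wn)
#   v0 = C1*s1 + C2*s2 + D/k
A = np.array([[1.0, 1.0], [s1, s2]])
rhs = np.array([x0 + 2.0*D*zeta/(k*wn), v0 - D/k])
C1, C2 = np.linalg.solve(A, rhs)

t = np.linspace(0.0, 20.0, 500)
x = C1*np.exp(s1*t) + C2*np.exp(s2*t) + (D/k)*(t - 2.0*zeta/wn)   # total response
print(C1, C2, x[-1])        # late-time response tracks the ramp (D/k)*(t - 2*zeta/wn)
```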
Figure 5.13: The response of an overdamped mass-spring-damper system to a step input force for various values of the initial velocity.
5.2.3 DIMENSIONLESS SOLUTIONS FOR 2nd ORDER SYSTEMS
Again, even when the steady state is time-dependent, the entire response can still, for every second order system, be cast in dimensionless form:

Ψ̂(t) = [Ψ(t) − Ψ_SS(t)] / [Ψ_0 − Ψ_SS(0)] = Ψ̂(t̂_1, t̂_2)

with

Ψ̂(t̂_1, t̂_2) = e^(−t̂_1) [A cos(2π t̂_2) + B sin(2π t̂_2)]    for ζ < 1
Ψ̂(t̂_1, t̂_2) = A e^(−t̂_1) + B t̂ e^(−t̂_2)                    for ζ = 1
Ψ̂(t̂_1, t̂_2) = A e^(−t̂_1) + B e^(−t̂_2)                      for ζ > 1
where the respective characteristic times are given in Table 5.6.
These solutions for the under- and over-damped systems, respectively (in Section 5.2.2), are plotted in dimensionless form in Figure 5.14. Note the limit behaviors of the under-damped and over-damped systems.
Figure 5.14: The response of a mass-spring-damper to an arbitrary input force in dimensionless form.
5.2.4 CHARACTERISTIC TIMES FOR TRANSIENTS IN 2nd ORDER
SYSTEMS
The characteristic time for transients in any first order system corresponds directly with its system parameter, τ. Alternatively, the characteristic times associated with the transient response in 2nd order systems are functions of the system parameters, as outlined in Table 5.6.
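For reference, the entries of Table 5.6 are simple functions of ζ and ω_N; the sketch below (the function name and the sample values are illustrative assumptions) evaluates them for one under-damped and one over-damped case.

```python
import numpy as np

def characteristic_times(wn, zeta):
    """Characteristic times of the second order transient (cf. Table 5.6)."""
    if zeta < 1.0:                                   # under-damped
        return {"exponential decay": 1.0/(zeta*wn),
                "damped period": 2.0*np.pi/(wn*np.sqrt(1.0 - zeta**2))}
    if np.isclose(zeta, 1.0):                        # critically damped
        return {"exponential decay": 1.0/wn}
    return {"first exponential decay": 1.0/(zeta*wn + wn*np.sqrt(zeta**2 - 1.0)),
            "second exponential decay": 1.0/(zeta*wn - wn*np.sqrt(zeta**2 - 1.0))}

print(characteristic_times(wn=2.0, zeta=0.2))        # under-damped example
print(characteristic_times(wn=2.0, zeta=3.0))        # over-damped example
```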
5.2.5 UNIVERSAL TRUTHS FOR 2nd ORDER SYSTEM RESPONSE IN THE
TIME DOMAIN
We can now add several observations to our list of universal truths that always characterize how
2nd order systems respond to their environment. We note that 2nd order systems always ap-
proach a steady-state response from their initial state, and the response overshoots this steady re-
sponse for under-damped systems and does not overshoot for over-damped systems. The steady response behaves like "a fence" that bounds the total response only when the system is over-damped. This over-damped response approaches the steady solution "from one side" as observed
in Figure 5.11 for the ramp input example. We also note that even when the steady-state so-
lution is time-dependent, the appropriate non-dimensionalization delivers a master curve that
is identical for all initial conditions, i.e., starting points, and steady-state solutions, i.e., ending
points.
Of Special Note
Universal Truths for 2nd Order Systems
(a) They are comprised of system elements (or characters) that store both potential and kinetic forms of energy
(b) Their behavior is characterized by a pair of system parameters, {ω_N, ζ}, where
(c) ω_N = f_1(L, C) = f_2(m, k) and ζ = g_1(L, C, R) = g_2(m, k, b)
(d) The system transients are identified by two characteristic times
(e) The system, when underdamped, is capable of overshooting the eventual steady-state response.
(f) With an appropriate amount of damping, the system response is nearly indistinguishable from that of an appropriately parameterized first order system
(g) The system response can be universally placed in dimensionless form, normalized by two characteristic times.
Table 5.6: Characteristic times for transient solutions of second order systems

Scenario 1, ζ < 1 (under-damped):
    1/(ζω_N)                      exponential decay
    2π/(ω_N √(1 − ζ²))            damped period
Scenario 2, ζ = 1 (critically damped):
    1/ω_N                         exponential decay
Scenario 3, ζ > 1 (over-damped):
    1/(ζω_N + ω_N √(ζ² − 1))      first exponential decay
    1/(ζω_N − ω_N √(ζ² − 1))      second exponential decay
5.2.6 ENERGY STORAGE AND DISSIPATION FOR 2nd ORDER SYSTEM
RESPONSE IN THE TIME DOMAIN
Let's continue with the example of the mass-spring-damper system. The system stores both kinetic and potential energy. Now that we have resolved the resultant motion and velocity of the lumped mass analytically in Section 5.2.2, we may compute the energy partition that results from an imposed step input force applied to the mass when the system is underdamped (Figure 5.15).
The early transient behavior shows clearly that peak potential energy caches coincide with the absence of kinetic energy when the mass is at rest at peak values of displacement as shown in Figure 5.16. Behavior in the steady state shows the continued decay to a state of steady potential energy corresponding to the spring extended to its static deflection where motion ceases and kinetic energy decays to zero as shown in Figure 5.17. All the energy is eventually stored in the spring as the displacement converges on the static value. All the while, an order of magnitude more energy is dissipated in the damper throughout the transient as evidenced in Figure 5.18.
Note that the dissipated energy only ever increases. The work done by friction, as plotted in Figure 5.18, can never decrease and only ever accumulates.
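A rough numerical tally of these energy caches can be made with a few lines of Python; the sketch below (the integration scheme and time step are assumptions made only for illustration) uses the parameter values quoted in the caption of Figure 5.15 and confirms that the friction work only ever accumulates.

```python
m, k, b, F = 30.0, 125.0, 5.0, 60.0     # values from the Figure 5.15 caption
x, v = 2.0, 0.0                          # initial displacement (m) and velocity (m/s)
dt, t_end = 1e-3, 120.0                  # crude fixed-step integration
dissipated = 0.0

for _ in range(int(t_end/dt)):
    a = (F - k*x - b*v) / m              # Newton's second law for the lumped mass
    v += a*dt                            # semi-implicit Euler update
    x += v*dt
    dissipated += b*v*v*dt               # friction work: never decreases

kinetic = 0.5*m*v*v                      # essentially zero once motion has ceased
potential = 0.5*k*x*x                    # spring held near its static deflection F/k
print(kinetic, potential, dissipated)
```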
This is perhaps more evident in an under-damped system that is given an initial displacement and released from rest. Here the entire response is simply a transient decay from the initial conditions. Recall from our discussion in Chapter 3 that in this case of a damped harmonic oscillator, the kinetic and potential caches are passed back and forth to one another while friction eats away during each transfer as shown in Figure 5.19. The energy story for each of the three characters (inertia, stiffness, and friction) is shown for a typical case in Figure 5.20.
In the resulting free response, energy is "consumed" within each exchange from kinetic to potential and back to kinetic. With each "pass of the energy ball" the total amplitude of stored energy is decreased by precisely the amount eaten away by friction as shown in Figure 5.21. Negligible energy is dissipated as the potential energy peaks, i.e., where the kinetic energy (and, therefore, velocity) is minimal. Most of the energy is dissipated where the kinetic energy (and velocity) reach their respective maxima.
Finally, consider the case of the over-damped system subjected to a ramp input. We solved for the inertial displacement and velocity in Section 5.2.2. Here, owing to the slope of the ramp input force, the net kinetic energy stored plateaus at a relatively small value while the spring continues to stretch, storing the lion's share of the imparted energy as potential. The dissipated energy also accounts for a substantial energy cache. These are shown in the early transient in Figure 5.22. Later, in the steady state, the displacement becomes linear in time, resulting in a potential energy cache that accumulates quadratically in time. The friction work is the integral of an F-v curve in the damper when the force approaches a constant value. In this case, the friction work increases linearly over long times. The stored kinetic energy plateaus along with the velocity at long times. Here, we recognize features of the solution without showing its explicit functional form. As Feynman correctly noted, "(We can) understand what an equation means if (we) have a way to figure out the characteristics of its solution without solving it."
Figure 5.15: Energy partition for a mass-spring-damper system subject to a step input force of 60 N (x_0 = 2 m; ν_0 = 0 m/s; k = 125 N/m; b = 5 Ns/m; m = 30 kg).
Figure 5.16: Energy partition for a mass-spring-damper system subject to a step input force of 60 N (x_0 = 2 m; ν_0 = 0 m/s; k = 125 N/m; b = 5 Ns/m; m = 30 kg).
Figure 5.17: Steady-state energy partition for a mass-spring-damper system subject to a step input force of 60 N (x_0 = 2 m; ν_0 = 0 m/s; k = 125 N/m; b = 5 Ns/m; m = 30 kg).
Figure 5.18: Total energy and dissipated energy for a mass-spring-damper system subject to a step
input force.
Figure 5.19: The second order free response is the story of an energy catch between Captains Potential and Kinetic Energy while the Evil Dr. Friction "steals away" energy with each transfer.
Figure 5.20: A second order system with dissipation is excited by an initial displacement from rest
with no external forces applied.
Figure 5.21: In the free response of an under-damped second order system with dissipation, with
each “pass of the energy ball” the total amplitude of stored energy is decreased by precisely the amount
eaten away by friction.
Figure 5.22: An over-damped second order system with ramp input experiences a continual intro-
duction of energy to the system.
Figure 5.23: An over-damped second order system with ramp input as it enters steady state. Here, one
can reason the forms of the steady dependence of energy dissipation (linear) and storage (quadratic)
without actually solving the explicit equations.
5.3 CHAPTER ACTIVITIES
Problem 1 Consider the plate-damper mechanical system shown:
If the mass is initially moving to the right with a velocity of 1 m/s from the position x_0 = −2 m and a constant, horizontal force is suddenly applied to the mass, as shown, write the differential equation governing the system plate displacement. What are the system's natural frequency and damping ratio? Sketch the system position response as a function of time. Be sure to specifically label initial conditions, steady-state response, transient response, and the settling time with numerical values where possible. Use m = 0.1 kg; k = 40 N/m; b = 6 Ns/m.
Plot time histories for the system potential and kinetic energy caches as well as the energy
dissipated over time.
Problem 2 Consider the plate-damper mechanical system from which the spring has been removed. The system is turned vertically and subject to a step input gravitational force as shown:
If the mass is dropped from the position x_0 = 0 m from rest, write the differential equation governing the system plate velocity. Sketch the system response as a function of time. Be sure to specifically label initial conditions, steady-state response, transient response, and the settling time with numerical values where possible. Use m = 4 kg; b = 6 Ns/m; g ≈ 10 m/s².
Problem 3 Consider the mass-spring-damper system shown subject to a ramp input displacement of y(t) = 5t:
Derive the governing differential equation for the displacement of the mass. Solve the equation using m = 10 kg; k = 40 N/m; b = 25 Ns/m. Plot the response, labeling the transient and steady regimes. Plot the displacement response in dimensionless form and compare with Figure 5.11.
Problem 4 Consider the downhill skier pictured here:
The total drag on the skier, F_D, is a combination of man-made-snow surface resistance and aerodynamic drag, resulting in the following relationship for the drag force:

F_D = C_D V

where C_D is the coefficient of drag, V is the velocity of the skier down the inclined slope, and C_D = constant. Draw an appropriately labeled free body diagram and derive the equation governing the skier's velocity.
If the skier jumps out of a gate and starts ideally from rest, determine:
(a) the skier's eventual terminal downhill velocity
(b) how long it will take to effectively attain this speed.
Use m = 80 kg; b = 16 Ns/m; g ≈ 10 m/s². Plot the energy stored and dissipated in the system over a relevant time scale. What story do they tell?
Problem 5 Consider the idealized windshield wiper mechanism illustrated here.
A mass-less blade is rigidly attached to the disk of radius R. Use I = (1/2)mR² for the disk and wiper blade assembly for all calculations. Assuming the angular rotation of the disk remains "small," derive the differential equation governing the sweep of the wiper blade. Based on your differential equation, derive theoretical expressions for the system's natural frequency and damping ratio. What damping coefficient is required to critically damp the system?
Solve for the total response when the platform is subject to a step input displacement of y(t) = Y_IN = 1.2 inches using: Y_IN = 1.2 inches; R = 0.5 inches; k = 1 lb/ft; m = 0.01 slug; b = 0.25 lb-s/ft.
Problem 6 Consider the angular position of a 100 kg winter Olympic snowboarder on a circular pipe of radius, R. The total drag on the snowboarder, F_D, is a combination of man-made-snow surface resistance and aerodynamic drag, resulting in the following relationship for the drag force: F_D = C_D V, where C_D is the coefficient of drag, V is the tangential velocity of the snowboarder, and C_D = constant.
Use I = mR²; R = 10 m; g = 10 m/s² for all calculations.
Assuming that the snowboarder enters the pipe at an initial position of θ = 30° and begins his angular descent from rest, show that the differential equation governing the angular position of our snowboarder with respect to time is given by

mR² θ̈ + C_D R² θ̇ + mgR sin θ = 0.

Consider that the small angle approximation is valid and that on two successive passes five seconds apart, the maximum angular values are:

θ_N = 30° and θ_{N+1} = 25°.

Using the log decrement, compute the system's natural frequency and damping ratio. Make a theoretically informed estimate of the drag coefficient, C_D, based on these measurements. From an initial angular entry point at θ_0 = 30°, how long would it take the snowboarder to effectively come to rest? Present your solution in dimensionless form and compare a graph of the dimensionless position with Figure 5.11.
Problem 7 You're escaping the East India Trading Company in your trusty vessel "The Black Pearl." The Pearl's sails generate thrust in the following relationship:

F_Sail = C_S (V_W − V_P)

where V_P is the velocity of the Pearl, V_W is the velocity of the wind, and C_S is a constant. The drag on the Pearl's hull is linearly proportional to her velocity:

F_Drag = C_D V_P

where C_D and the Pearl's mass, m, are constant.
Use an appropriately labeled free body diagram to derive the differential equation governing the Pearl's velocity. Determine an algebraic expression for the Pearl's terminal, i.e., steady state, velocity. Determine an algebraic expression for how long it will take the Pearl to "effectively" attain its terminal velocity. Write out a functional solution for the velocity of the Pearl. Assume the initial velocity is given by V_P0. Sketch the solution for the Pearl's velocity. Identify the time constant, τ, and the corresponding terminal velocity, ν_SS, on the graph.
Problem 8 A pressure-compensating hydraulic spool valve consists of a bar-bell-like mass in a cylindrical sleeve (shown below). The valve is moved horizontally by a solenoid that applies a step input force to the mass. A spring at the far end provides an opposing force. Hydraulic fluid in a tight clearance of width, h, provides a viscous friction force resisting the motion and given by the relation:

F_ν = Cν/h

where C is a constant. A balance of forces in the horizontal direction gives:

m d²x/dt² = F(t) − kx − Cν/h.

F(t) = 0 for t < 0,   F(t) = 1 N for t ≥ 0
m = 0.01 kg
k = 100 N/m
h = 20 × 10⁻⁶ m

Upon step-input application of the solenoid force, the valve is designed to move horizontally as fast as possible to its equilibrium position without overshooting it and without oscillating.
(a) The governing equation m d²x/dt² = F(t) − kx − Cν/h physically represents a statement of what balance principle?
(b) What value of C must be used for the steady-state amount of valve travel to be achieved
in the minimum time without oscillation?
(c) What is the steady-state amount of horizontal travel realized by the valve under this
step input force?
(d) Roughly how long will it take for the valve to travel to its equilibrium position?
(e) Plot the system’s total energy stored and dissipated over time.
(f ) Often hydraulic fluid becomes contaminated as wear particles accumulate in the clear-
ance between the spool and its housing. Such particles often jam in the clearance ef-
fectively reducing the clearance width. Using arguments supported by the form of the
solution for the valve motion, explain the effect the particulate contamination will have
on the time necessary to move the valve to its steady-state position.
(g) If the value of the oil drag coefficient, C , used in part (a), were reduced to half its
original value, would the system overshoot and oscillate about its eventual steady state?
If so, with what frequency would it do so?
Problem 9 Consider the circuit shown with parallel system capacitors. At t = 0, a step voltage, V_0, is applied to the circuit by connecting it suddenly across a battery:

V_0 = 12 V
i_R(t = 0) = 40 milliamps

On the circuit diagram label the relevant nodes and apply the necessary conservation principles to derive the differential equation governing the response of the voltage drop across the pair of capacitors in the circuit. Use the potential energy storage system element equation to find the relevant initial condition or initial conditions for the system effort variable.
Sketch the system response as a function of time, labeling the output variable (on the vertical axis), and the transient and steady-state regimes of behavior using R = 100 Ω; C_1 = 25 μF; C_2 = 100 μF.
Problem 10 Consider the system presented below in which the cord is wrapped around a solid disc with mass moment of inertia, (1/2)MR². The cord sticks to the disc without slipping. The disc is subjected to a ramp input torque, T(t) = At, applied about the fixed pivot at its center. The disc starts from rest at θ(0) = 0 rad. Assume the disk rotation remains "small" and use an appropriately labeled free body diagram to derive the differential equation governing the disc's angular position, θ(t). Solve for the functional form of the disc position. At what time will the assumption of "small" angles break down? Assume angles of 30 degrees or less are reasonably "small." Express your answer in terms of M, R, A, k_1, and k_2.
Problem 11 Consider the situation of drug absorption into a human being. The human body is your system and a drug is administered by the outside world at a rate given by f(t). For such a case, the differential equation governing the amount of medicine in the blood stream, m, is given by:

dm/dt + rm = f(t),   t in hours

where r = 0.0833 hr⁻¹.
The drugs are to be administered by injection, which may be modeled as a non-zero initial condition: f(t) = 0, and m(t = 0) = m_0 = 7 mg.
(a) Compute the solution for the presence of drug in the body over a representative time
scale.
(b) What is the settling time for the drug to wear off?
(c) How many drug storage agent types are present in the system? Why?
(d) How many drug dissipation agent types are present in the system? Why?
Problem 12 Consider the mechanical system of the idealized building model below:
Take m = 0.5 slug; k = 8 lb/ft; b = 1 lb-s/ft; F(t) = F_0 = 32 lb. If the mass is initially at rest at the position x(0) = 0 ft and a constant, horizontal 32 lb force is suddenly applied to the right, as shown:
(a) What are the system’s natural frequency and damping ratio?
(b) Sketch the system response as a function of time. Be sure to specifically label initial
conditions, steady-state response, transient response, and the settling time with nu-
merical values where possible.
(c) What internal damping coefficient would be needed in the column walls to “just” make
the building’s lateral motion response behave “1st -order-like”?
CHAPTER 6
Going Nowhere?
Going from home to work to home to work, I am moving, but in the
end I haven’t gone anywhere … vibrating strings move but go nowhere
… drawers open, close, open, close—all that motion and nothing to
show for it. Oscillatory motion is interesting. Doing the same thing over
and over and going nowhere is pretty important.
The Physics Hypertext Book

The conversion of circular motion into sine waves is a pervasive part of
our daily lives. Sine waves are the atoms of structure. They're nature's
building blocks. Primordial sine waves spanned the stuff of the cosmos.
The ripples of a pond and the ridges of sand dunes are manifestations of
the emergence of sinusoidal structure from a background of bland
uniformity. There's something almost spiritual about them.
Steven Strogatz
The Joy of X
We've examined polynomial functions as input signals to dynamic systems. The category of harmonic functions is a special class unto itself and deserves individual treatment. Going to work and returning home, swinging on a swing in a playground, rotating a drum in a washing machine, spinning tires on an automobile—all are pervasive manifestations of periodicity in the world around us. And while one can admit the nature of periodicity is that one "goes nowhere," the energy story tells us something different. There is "something to show for it" in the energy tale. Response of a building to earthquake loading easily reminds us that only in one peculiar sense does the building "go nowhere." The ability of the building to absorb, store, or dissipate the input energy convinces us there is another side of the story.
There are myriad examples of periodic input that excite dynamics. Because the periodicity appears in the forcing function or excitation, we are interested in the steady-state solution long after the transient has decayed away. Normally such treatments are referred to as the frequency response of systems because the response is dependent on the frequency of the input excitation relative to the system. These solutions naturally appear in terms of the system parameters, where the specific mathematical form of the system parameters arises from the individual movie script.
6.1 FREQUENCY DOMAIN SOLUTIONS OF 1st ORDER SYSTEMS
First order systems result when the script involves a single type of storage element or character: either Captain Potential Energy or Captain Kinetic Energy. There may be multiple storage elements, but they must store only one type of energy. There can only be one storage superhero. In these cases, the governing differential equation has the form:

τ dΨ(t)/dt + Ψ(t) = Ψ_0(t) = Ψ_IN(t) = Ψ_IN cos(ωt).
In the time domain, we used the superposition of homogeneous and particular solutions to determine a total solution composed of both transient and steady state. For the unique case of periodic loading, the input excitation "never goes away." Therefore, one must be cognizant of the nature of the steady-state solution because it is the specific response to this ever-present input.
The nature of the steady-state response to periodic input is captured in three characteristics:
(a) the solution to a periodic excitation of frequency, ω, is also a periodic function with the same frequency, ω
(b) the magnitude of the steady-state solution is a scale multiple of the input magnitude of the excitation and
(c) the solution is shifted in time from the input signal.
As such, the steady-state solution is always of the form:

Ψ_SS(t) = Ψ_OUT cos(ωt + φ)

and we need only determine the magnitude, Ψ_OUT = AΨ_IN, and phase shift, φ, in order to completely determine the periodic steady-state response of the system.
6.1.1 TRANSFER FUNCTION ANALYSIS FOR HARMONIC INPUT
Consider the case where the magnitude of the excitation, Ψ_IN, is constant, i.e., it is not a function of the excitation frequency. Because the steady state has no memory of the system's initial conditions, we assume zero initial conditions and apply the Laplace operator:

L{τ dΨ(t)/dt + Ψ(t) = Ψ_IN(t) = Ψ_IN cos(ωt)}

τsΨ(s) + Ψ(s) = Ψ_IN(s)

(τs + 1)Ψ(s) = Ψ_IN(s).
Since Ψ(s) represents the total output of the system when subject to input Ψ_IN(s), we do not lose any generality by referring to it as Ψ_OUT(s), giving

(τs + 1)Ψ_OUT(s) = Ψ_IN(s)

⇒ G(s) = Ψ_OUT(s)/Ψ_IN(s) = 1/(τs + 1).
The parameter s is a quantity in the complex plane where s = a + jω. In the case where the time domain function is periodic and representable by trigonometric functions, the real portion of s dictates the exponential rate of decay for constant magnitude input. Since there is no decay in a pure sinusoid, in our case a = 0. The complex part remaining for the steady state is then simply s = jω. Using this simplification, we arrive at what is often called the sinusoidal transfer function (STF). For the remainder of this chapter, we will confine our discussions to STFs only. Making this substitution:

G(s = jω) = Ψ_OUT(jω)/Ψ_IN(jω) = 1/(1 + τωj).
Now the STF is a function whose numerator and denominator, in general, can be thought of as vectors in the complex plane

G(s = jω) = (A + Bj)/(C + Ej)

where A = 1, B = 0, C = 1, E = τω for a first order system subject to constant magnitude periodic input. The STF can be used to easily compute the magnitude and phase shift of the resultant periodic response. The numerator and denominator vectors of the STF can be represented graphically in the complex plane (Figure 6.1), where:

G(s = jω) = N/D = (A + Bj)/(C + Ej) = (N e^(jα))/(D e^(jβ))

and

N = √(A² + B²),   α = tan⁻¹(B/A)
D = √(C² + E²),   β = tan⁻¹(E/C).
Figure 6.1: Graphical representation of the numerator and denominator vectors of the STF in the
complex plane.
Now we may illustrate the utility of the Laplace approach for periodic input excitations. The STF, G(jω), in this form can be used to readily obtain the magnitude and phase shift:

G(jω) = Ψ_OUT(jω)/Ψ_IN(jω) ⇒
   Ψ_OUT/Ψ_IN = ‖Ψ_OUT(jω)‖/‖Ψ_IN(jω)‖ = N/D = √(A² + B²)/√(C² + E²)
   φ = ∠Ψ_OUT(jω) − ∠Ψ_IN(jω) = ∠N − ∠D = α − β.

From this result:

Ψ_SS(t) = Ψ_OUT cos(ωt + φ) = AΨ_IN cos(ωt + φ)

where A, the amplification ratio, and φ, the phase shift, of the response relative to the input excitation are given above as functions of the excitation frequency, ω, and the system parameters (here, τ).
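The same arithmetic can be done directly on the complex number G(jω); the short sketch below (the time constant and the frequencies swept are arbitrary illustrative values) evaluates the amplification ratio and phase shift of the first order STF derived above.

```python
import numpy as np

def first_order_stf(omega, tau):
    """Sinusoidal transfer function G(jw) = 1 / (1 + j*tau*w)."""
    return 1.0 / (1.0 + 1j * tau * omega)

tau = 0.5                                   # assumed system time constant (s)
for omega in (0.1, 1.0/tau, 50.0):          # low, moderate, and high excitation frequencies
    G = first_order_stf(omega, tau)
    A = abs(G)                              # amplification ratio, sqrt(A^2+B^2)/sqrt(C^2+E^2)
    phi = np.angle(G)                       # phase shift (rad), alpha - beta
    print(f"tau*omega = {tau*omega:6.2f}:  A = {A:.3f}  phi = {np.degrees(phi):7.2f} deg")
```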
6.1.2 STEADY-STATE RESPONSE AND BODE PLOT ANALYSIS
Frequency response is entirely characterized by the degree to which the output response is am-
plified and the degree to which the output response lags the input signal. Let’s examine how this
plays out for the simple case of the series RC circuit. Recall this circuit in Figure 6.2.
Consider the case where the input battery voltage or effort differential placed on the system is periodic with input magnitude, V_IN = constant, such that

V_0(t) = V_IN cos(ωt)
Figure 6.2: The series RC circuit and its governing differential equation, RC dV_1/dt + V_1 = V_0(t).
where the magnitude V_IN ≠ f(ω). Then, following the development of Section 6.1.1,

L{τ dV_1(t)/dt + V_1(t) = V_IN cos(ωt)}

τsV_1(s) + V_1(s) = V_IN(s)

⇒ (τs + 1)V_1(s) = V_IN(s).

Since V_1(s) represents the output voltage of the system, we can refer to it as V_OUT(s), giving

(τs + 1)V_OUT(s) = V_IN(s)

G(s) = V_OUT(s)/V_IN(s) = 1/(τs + 1)

which results in a sinusoidal transfer function

G(s = jω) = V_OUT(jω)/V_IN(jω) = 1/(1 + τωj).
From here, it is straightforward to calculate the amplification ratio

A = ‖V_OUT(jω)‖/‖V_IN(jω)‖ = √(1² + 0²)/√(1² + (τω)²) = 1/√(1 + (τω)²)

and the phase shift

φ = ∠N(jω) − ∠D(jω) = 0 − tan⁻¹(τω)
of the system response

V_1SS(t) = A V_IN cos(ωt + φ) = [V_IN/√(1 + (τω)²)] cos(ωt − tan⁻¹(τω)).

Note that all characteristics of the steady solution are only functions of the dimensionless quantity, τω. Plots of the amplification ratio (or alternatively the output response magnitude) and the phase shift as functions of the dimensionless quantity, τω, are known as the Bode plots. These are shown in Figures 6.3 and 6.4, respectively, below.
6.1.3 AN INTERPRETATION OF DIMENSIONLESS FREQUENCY RATIO
Often Bode plots are presented simply as a function of the dimensionless parameter, τω, which is sometimes referred to as the dimensionless frequency ratio. Whenever dimensionless parameters appear in a model, such parameters can often be placed in the form of a ratio of two physical quantities at play in the model. Let's examine how one may ascribe a physical interpretation to this dimensionless frequency ratio.
Consider the dimensionless parameter written as a ratio

τω = ω/(1/τ) = input excitation frequency / equivalent system frequency.
The input signal excites the system at an imposed frequency, ω. Alternatively, the "outside world" bombards the system with an imposed effort or flow at a rate of f = ω/2π cycles of input per second. This excitation is characterized by a characteristic time called its period, T = 1/f = 2π/ω. So we see that the frequency can be interpreted as the reciprocal of the characteristic time. The larger the input signal frequency, the smaller its characteristic time. A similar interpretation can be had for the system. Since the system is characterized by its time constant, one can understand the time constant to be a measure of the system's response time, the time it takes the system to respond to external stimuli.
Now the dimensionless parameter, τω, as written above can be physically interpreted as a
dimensionless frequency ratio: the ratio of the input excitation frequency to the frequency with
which the system can respond to any input. When the excitation frequency is large compared
to the frequency to which the system is capable of responding, then the excitation frequency is
termed “high” in this relative sense. When the equivalent system frequency is large compared to
the frequency imposed on it by “the outside world,” then the excitation frequency is considered
“low.” When the ratio is of order unity, the frequency can be termed “moderate.”
Summarizing,

τω = ω/(1/τ):   ≫ 1 ⇒ high excitation frequency
                ≈ 1 ⇒ moderate excitation frequency
                ≪ 1 ⇒ low excitation frequency.
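A trivial helper makes this interpretation mechanical; the cutoffs of 10 and 0.1 below are arbitrary illustrative thresholds for "much greater" and "much less" than unity, not values taken from the text.

```python
def excitation_regime(tau, omega):
    """Classify the excitation relative to the system using the product tau*omega."""
    ratio = tau * omega
    if ratio >= 10.0:       # tau*omega >> 1
        return "high excitation frequency (system is slow by comparison)"
    if ratio <= 0.1:        # tau*omega << 1
        return "low excitation frequency (system is fast by comparison)"
    return "moderate excitation frequency"

print(excitation_regime(tau=0.5, omega=100.0))
print(excitation_regime(tau=0.5, omega=0.05))
```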
Figure 6.3: Amplification ratio as a function of dimensionless frequency ratio.
Figure 6.4: Phase shift as a function of dimensionless frequency ratio.
Similarly, consider the dimensionless parameter written as a ratio

τω = τ/(1/ω) = system characteristic time / input excitation characteristic time.

The system time constant is the characteristic response time of the system to external stimuli. The input signal bombards the system with an imposed effort or flow at a rate of f = ω/2π cycles of input per second, or one dose every T seconds, where T is the input period, T = 1/f = 2π/ω. If we consider the input excitation characteristic time to be a scaled quantity, 1/ω, we see that the excitation characteristic time can be interpreted as the reciprocal of the imposed frequency. The larger the input signal frequency, the smaller its characteristic time.
Now the dimensionless parameter, τω, as written above can be physically interpreted as
a dimensionless characteristic response time ratio: the ratio of the time it takes the system to
respond to an external stimulus to the characteristic time over which that stimulus is delivered by
some external agent. When the system time constant is large compared to this characteristic time
over which an excitation is delivered, the system is considered “slow to respond” or alternatively,
the input is “coming at the system” faster than it can respond!
When the system time constant is small compared to this characteristic time over which an
excitation is delivered, then the system response time is small relative to how often the stimulus
is delivered. In this limit, the system is considered “fast to respond” or alternatively, the input is
"coming at the system" slower than the rate at which the system can respond!
When the ratio is of order unity, the system can respond on time scales commensurate with
those over which the excitation is being delivered.
Summarizing,

τω = τ/(1/ω):   ≫ 1 ⇒ SLOW system relative to the "outside world"
                ≈ 1 ⇒ system of similar relative "speed" as the "outside world"
                ≪ 1 ⇒ FAST system relative to the "outside world."

These interpretations are summarized in Table 6.1.
6.1.4 FILTERING CHARACTERISTICS OF 1st ORDER SYSTEMS
In the classic sense of a frequency response, Bode plots show an infinite number of potential
steady-state solutions each at a different imposed excitation frequency. e plots, because they
are characterized by the dimensionless parameter, (cid:28)!, exhibit unique behavior in the relatively
low, moderate, and high frequency regimes.
Low Pass Filters
For the series RC circuit, the Bode plots are illustrated in Figures 6.3 and 6.4. In the low frequency
regime, the amplitude ratio approaches unity and the output is negligibly shifted in time. In other
words, the magnitude of the output voltage across the capacitor is nearly the same value as that
input to the system by the external battery. In this limit, the steady-state output precisely mimics
the input signal as shown in Figure 6.5.
For moderate excitation frequencies, the amplitude ratio approaches 1/√2 and the phase shift approaches 45 degrees as shown in Figure 6.6.
In the high frequency regime, the amplitude ratio approaches zero and the phase shift approaches 90 degrees, making the output a sine wave response to a cosine input. The output has negligible magnitude and lags the input signal as much as possible as shown in Figure 6.7.
The series RC circuit passes through all of the input excitation to the system at low input frequencies and passes none of the input signal and lags as much as possible at high input frequencies. For this reason the system is referred to as a low pass filter.
High Pass Filters
That the series RC circuit happened to behave as a low pass filter is entirely a result of its transfer function. It depends on both the nature of the excitation, the numerator in the transfer function, and the system itself, the denominator in the transfer function. Change either the system, its elements or their structure, or the nature of the input excitation and you necessarily change the transfer function, the representative Bode plots, and the filtering characteristics of the excited system.
So let’s consider an alternate mechanical system with a mass-less platform sandwiched
between a linear spring and damper as shown in Figure 6.8.
Table 6.1: Physical interpretations of the dimensionless frequency ratio, τω

Dimensionless frequency ratio, ωτ = ω/(1/τ):
    high input excitation frequency   ωτ ≫ 1
    low input excitation frequency    ωτ ≪ 1
Dimensionless characteristic time ratio, τω = τ/(1/ω):
    fast system response              τω ≪ 1
    slow system response              τω ≫ 1
Figure 6.5: Series RC circuit response to low frequency excitation. This system is characterized by a time constant of 1 second and a transient of approximately 4 seconds, after which time the response is predominantly steady state.
Figure 6.6: Series RC circuit response to moderate frequency excitation. Once again, the system
settling time is roughly 4 seconds.
Figure 6.7: Series RC circuit response to high frequency excitation.
Figure 6.8: The mechanical spring-damper system with an interposed mass-less platform.
Operating on the governing differential equation with the Laplace operator,

L{τ dx(t)/dt + x(t) = τ dy(t)/dt}

τsX(s) + X(s) = τsY(s).

We understand the input to the system is the displacement of the end of the damper, Y(s), and the output is the displacement of the mass-less platform, X(s), giving

(τs + 1)X_OUT(s) = τsY_IN(s)

G(s) = X_OUT(s)/Y_IN(s) = τs/(τs + 1).

Calculating the amplification ratio

A = ‖X_OUT(jω)‖/‖Y_IN(jω)‖ = √(0² + (τω)²)/√(1² + (τω)²) = τω/√(1 + (τω)²)

and the phase shift

φ = ∠N(jω) − ∠D(jω) = π/2 − tan⁻¹(τω),

the system response is given by

X_SS(t) = A Y_IN cos(ωt + φ) = [τω Y_IN/√(1 + (τω)²)] cos(ωt + π/2 − tan⁻¹(τω)).
Again, all characteristics of the steady solution are only functions of the dimensionless quantity, τω. Plots of the amplification ratio and the phase shift are shown in Figures 6.9 and 6.10, respectively, below.
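Sweeping the dimensionless frequency ratio for both first order transfer functions places the two filtering behaviors side by side; the sketch below is illustrative only (the sweep limits are arbitrary) and reproduces the trends of Figures 6.3 and 6.9.

```python
import numpy as np
import matplotlib.pyplot as plt

tw = np.logspace(-3, 3, 400)               # dimensionless frequency ratio tau*omega
A_low  = 1.0 / np.sqrt(1.0 + tw**2)        # series RC circuit (low-pass behavior)
A_high = tw  / np.sqrt(1.0 + tw**2)        # spring-damper platform (high-pass behavior)

plt.semilogx(tw, A_low, label="RC circuit (low pass)")
plt.semilogx(tw, A_high, label="platform (high pass)")
plt.xlabel("Dimensionless frequency ratio")
plt.ylabel("Amplification ratio")
plt.legend()
plt.show()
```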
The mass-less platform exhibits quite different behavior. Here it is in the high frequency
regime that the amplitude ratio approaches unity and the phase shift approaches zero degrees. In
other words, the steady-state platform displacement precisely mimics the input signal as shown
in Figure 6.11.
It is good to ask what is happening physically in this limit. When the right end of the
damper is displaced at very high frequency, one is essentially applying a large periodic velocity
here. When a large velocity differential is applied across a damper, it locks up and behaves as if
it is rigid. e displacement time histories of both the input excitation and the platform motion
should be identical in this limit.
Alternatively, when the damper’s right end is harmonically displaced at extremely low fre-
quency, it is the same as applying an infinitesimal velocity differential across the damper or negli-
gible force. In this limit, the lion’s share of the displacement across the damper occurs at the right
6.1. FREQUENCY DOMAIN SOLUTIONS OF 1st ORDER SYSTEMS 125
Figure 6.9: Amplification ratio as a function of dimensionless frequency ratio.
Figure 6.10: Phase shift as a function of dimensionless frequency ratio.
10-310-210-110010110210300.10.20.30.40.50.60.70.80.91Dimensionless Frequency RatioDimensionless Amplification Ratio10-310-210-110010110210300.20.40.60.811.21.41.6Dimensionless frequency ratioPhase angle (radians)126
6. GOING NOWHERE?
Figure 6.11: Mass-less platform response to high frequency excitation. The settling time for this system is approximately 2.5 seconds.
Figure 6.12: Mass-less platform response to low frequency excitation.
end while the magnitude of displacement of the system platform is negligible. Also the platform
displacement lags the input displacement history by “as much as possible” or 90 degrees resulting
in a platform displacement that is a sine wave response to a cosine input as shown in Figure 6.12.
Because this mechanical system passes all of the input excitation to the system at high
input frequencies and passes none of the input signal and lags as much as possible at low input
frequencies, the system is referred to as a high pass filter.
6.1.5 UNIVERSAL TRUTHS FOR 1st ORDER SYSTEMS SUBJECT TO
HARMONIC INPUT
Of Special Note
For all first order systems, certain steady-state behaviors are characteristic:
(a) Their steady-state behavior is a function of a single dimensionless parameter, τω
(b) Dimensionless amplification ratios can never exceed a value of unity
(c) Phase shifts can never exceed 90 degrees
(d) Bode plots contain, at most, a single inflection point:
    (i) Order 1 equation ⇒ 1 inflection point
    (ii) One inflection point ⇒ amplification ratio and phase monotonically either increase or decrease with τω.
(e) Systems may only ever be low-pass or high-pass filters.
6.1.6 ENERGY STORAGE AND DISSIPATION IN 1st ORDER SYSTEMS
SUBJECT TO HARMONIC INPUT EXCITATION
Let’s continue with the example of the mass-less spring-damper system discussed in Section 6.1.4.
Because the idealized platform has negligible mass, no kinetic energy can be stored by the system.
We know that energy can only be stored in the form of potential energy in the spring or dissipated
by the damper. Now that we have resolved the resultant motion and velocity of the platform, we
may compute the energy partition that results from an imposed harmonic input to the damper.
For zero initial conditions, the total platform displacement can be written as

x(t) = x_TRANSIENT(t) + x_STEADY STATE(t)
     = (x_0 − x_SS_0) e^(−t/τ) + A Y_IN cos(ωt + φ)
     = (x_0 − x_SS_0) e^(−t/τ) + [τω Y_IN/√(1 + (τω)²)] cos(ωt + π/2 − tan⁻¹(τω)).
The potential energy is simply

V_SYSTEM(t) = (1/2) k x²,
while the energy dissipated in the damper is equal to the friction work performed by the damper:

W_FRICTION(t) = ∫₀ᵗ F_FRICTION(t) dx = ∫₀ᵗ b (ẏ(t) − ẋ(t))² dt.
These quantities are shown graphically in Figures 6.13 and 6.14, respectively.
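Both quantities can be evaluated numerically from the closed-form displacement above; the sketch below does so for one assumed set of parameter values (chosen only for illustration, not taken from the text), accumulating the friction work with a simple trapezoidal rule.

```python
import numpy as np

# Assumed illustrative parameters
k, b = 200.0, 50.0                         # spring constant (N/m) and damping coefficient (Ns/m)
tau = b / k                                # system time constant
Yin, omega = 0.1, 2.0                      # imposed displacement amplitude (m) and frequency (rad/s)

A   = tau*omega / np.sqrt(1.0 + (tau*omega)**2)      # amplification ratio
phi = np.pi/2.0 - np.arctan(tau*omega)               # phase shift

t    = np.linspace(0.0, 30.0, 30001)
x_ss = A*Yin*np.cos(omega*t + phi)                   # steady-state platform motion
x    = -(A*Yin*np.cos(phi))*np.exp(-t/tau) + x_ss    # total motion for x(0) = 0
xdot = np.gradient(x, t)                             # platform velocity
ydot = -omega*Yin*np.sin(omega*t)                    # imposed damper-end velocity

V_system = 0.5*k*x**2                                # stored potential energy
power    = b*(ydot - xdot)**2                        # instantaneous friction dissipation
W_friction = np.concatenate(([0.0], np.cumsum(0.5*(power[:-1] + power[1:])*np.diff(t))))
print(V_system[-1], W_friction[-1])                  # stored vs. dissipated at the end of the run
```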
The energy story tells an interesting tale that is potentially belied by the frequency response alone. At low input frequency, there is negligible movement of the platform. While the platform displacement is relatively low compared with the damper stroke displacement, it is not zero. As a result, the spring potential energy is, relatively speaking, low. The relative velocity across the damper, however, results in energy dissipation that dominates the energy story. It is nearly two orders of magnitude larger than the potential energy stored in the system.
At high frequency, the damper appears effectively locked, but there remains a relative velocity across the damper that can be relatively large owing to the high frequency of the damper stroke displacement. Therefore, the energy dissipated in the damper still dominates, only now it is half an order of magnitude larger. The relative amount of energy stored has increased compared with the case at low frequency.
It is important to note that this result depends explicitly on the values of the spring constant and damping coefficient and not simply on their ratio, the time constant. Therefore, the energy story of two systems with the same time constant will not necessarily be the same, as it is for effort and flow. But the relative amounts of energy stored and spent will potentially be a deciding factor in system design.
This is an important issue not often discussed in elementary courses in systems dynamics. It
plays a significant role in that while one needs to know the flow variables of velocity and displace-
ment to calculate the kinetic and potential energies stored by the system, it may be the energy
storage versus dissipation that is the deciding factor in the feasibility of the design. An analo-
gous issue arises in finite element analysis where the primary solution variables are a set of nodal
point displacements in a loaded structure. While this is true, it is often the internal stresses that
Figure 6.13: Energy partition for a mass-less platform response for low input frequency excitation.
Figure 6.14: Energy partition for a mass-less platform response for high input frequency excitation.
are the determining factor in design. The internal stresses are calculated by using displacements to compute strains and strains to compute stresses. That is, the displacements or flow variables, by themselves, are incidental. The corresponding transmitted forces or effort variables internal to the system and energies stored are primary factors in system design. In engineering system design, engineers often redesign systems to lower transmitted forces or internally stored energy. This is accomplished by either altering the geometric structure of the system, i.e., whether system elements are connected in series or parallel, or by altering the material properties, i.e., the system capacitances, inductances, or resistances in any given geometric configuration. It's not unlike playing with Lego bricks. They can be put together in an infinite number of ways and we can choose different sizes of bricks. When the spring and damper are placed on either side of the platform and the outside world is stroking the damper, as shown here, the requisite energy losses are substantial. As we will see, such is not always the case.
6.2 FREQUENCY DOMAIN SOLUTIONS OF 2nd ORDER SYSTEMS
If you wish to understand the universe, think of energy, frequency, and
vibration.
Nikola Tesla
Second order systems result when the script involves multiple types of storage elements or
characters, i.e., Captains Potential and Kinetic Energy both appear in the movie. Note there may
be many storage elements, but they must collectively store both types of energy. In these cases,
the governing differential equation has the form:
(1/ω_N²) d²Ψ(t)/dt² + (2ζ/ω_N) dΨ(t)/dt + Ψ(t) = Ψ_0(t) = Ψ_IN(t) = Ψ_IN cos(ωt)
where Ψ represents the pertinent effort or flow variable that characterizes the system and ω_N and ζ are the system natural frequency and damping ratio, respectively. Again, for periodic loading, the input excitation "never goes away." The steady-state solution is a response specifically to this omnipresent input excitation. Just as with 1st order systems, the nature of the steady-state response to periodic input is captured in three characteristics:
(a) the solution to a periodic excitation of frequency, ω, is also a periodic function with the same frequency, ω
(b) the magnitude of the steady-state solution is a scale multiple of the input magnitude of the excitation and
(c) the solution is shifted in time from the input signal.
Therefore, the steady-state solution is always of the form:

Ψ_SS(t) = Ψ_OUT cos(ωt + φ)

and we need only determine the magnitude, Ψ_OUT = AΨ_IN, and phase shift, φ, in order to completely determine the periodic steady-state response of the system. Because the steady-state solution has no memory of the system's initial conditions, we again use Laplace transforms to examine a system's steady-state response to periodic input.
6.2.1 TRANSFER FUNCTION ANALYSIS FOR HARMONIC INPUT
Consider the case where the magnitude of the excitation, Ψ_IN, is constant, i.e., it is not a function of the excitation frequency. Again, assuming zero initial conditions and applying the Laplace operator to the differential equation:

L{(1/ω_N²) d²Ψ/dt² + (2ζ/ω_N) dΨ/dt + Ψ = Ψ_0(t) = Ψ_IN(t) = Ψ_IN cos(ωt)}

(1/ω_N²) s²Ψ(s) + (2ζ/ω_N) sΨ(s) + Ψ(s) = Ψ_IN(s)

((1/ω_N²) s² + (2ζ/ω_N) s + 1) Ψ_OUT(s) = Ψ_IN(s)

G(s) = Ψ_OUT(s)/Ψ_IN(s) = 1 / ((1/ω_N²) s² + (2ζ/ω_N) s + 1).
For periodic input, s = jω, rendering the second order sinusoidal transfer function:

G(s = jω) = Ψ_OUT(jω)/Ψ_IN(jω) = 1 / ((1 − ω²/ω_N²) + (2ζ/ω_N)ωj).
Now

G(s = jω) = (A + Bj)/(C + Ej)

where A = 1, B = 0, C = (1 − ω²/ω_N²), E = 2ζω/ω_N for a second order system subject to constant magnitude periodic input.
6. GOING NOWHERE?
e amplification ratio and phase shift follow:
G .j!/
(cid:9)OUT .j!/
(cid:9)IN.j!/
D
8
<
:
)
(cid:9)OUT
(cid:9)IN D
k
k
(cid:9)OUT .j!/
(cid:9)OUT .j!/
(cid:9)IN .j!/
k
D
k
(cid:9)IN .j!/
N
D D
(cid:0) (cid:134)
D (cid:134)
pA2
pC 2
N
B 2
E2 D
D
(cid:11)
D
A
(cid:12)
(cid:0)
C
C
(cid:0) (cid:134)
’
D (cid:134)
and
SS.t /
where A and ’ are functions of !N and (cid:16).
(cid:9)OUT cos.!t
D
’/
C
D
A(cid:9)IN cos.!t
’/
C
6.2.2 STEADY-STATE RESPONSE AND BODE PLOT ANALYSIS
For second order systems, the concept of a frequency ratio is explicit as the system is characterized
by its natural frequency as opposed to a time parameter as in first order systems. Again, the specific
instances of periodic signal inputs are best shown by specific examples.
Periodic Input Signal of Constant Magnitude
Consider the classical mass-spring-damper system from Section 4.4.1 and illustrated in Figure 4.12. Let's restrict ourselves to the case where the externally applied input force or effort placed on the system is periodic with input magnitude, F_IN = constant, such that F_0(t) = F_IN cos(ωt), where the magnitude F_IN ≠ f(ω). Then, following the development of Section 6.1.1,
((1/ω_N²) s² + (2ζ/ω_N) s + 1) X(s) = F_IN(s)/k

G(s) = X(s)/(F_IN(s)/k) = 1 / ((1/ω_N²) s² + (2ζ/ω_N) s + 1)

which results in a sinusoidal transfer function

G(jω) = X_OUT(jω)/(F_IN(jω)/k) = 1 / ((1 − ω²/ω_N²) + (2ζ/ω_N)ωj).
The resulting amplification ratio is given by:

A = ‖X_OUT(jω)‖/‖F_IN(jω)/k‖ = √(1² + 0²) / √((1 − ω²/ω_N²)² + (2ζω/ω_N)²) = 1 / √((1 − r²)² + (2ζr)²)

where r = ω/ω_N is known as the dimensionless frequency ratio. The corresponding phase shift is given by:

φ = ∠N(jω) − ∠D(jω) = 0 − tan⁻¹(2ζr/(1 − r²)).

So, finally, in steady state

X_SS(t) = A (F_IN/k) cos(ωt + φ) = [(F_IN/k) / √((1 − r²)² + (2ζr)²)] cos(ωt − tan⁻¹(2ζr/(1 − r²))).
Note that all characteristics of the steady solution are only functions of the dimensionless quantities, ζ and r. Plots of the amplification ratio and the phase shift as functions of the dimensionless quantities, ζ and r, are known as the Bode plots or surfaces for second order systems. These are shown in Figures 6.15 and 6.16, respectively, for the case of constant magnitude input signal.
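A quick numerical sweep of these two expressions (the damping ratios and frequency ratios below are arbitrary illustrative choices) reproduces the character of the Bode surfaces, including the resonant growth near r = 1.

```python
import numpy as np

def amp_phase(r, zeta):
    """Amplification ratio and phase shift for constant-magnitude harmonic forcing."""
    A = 1.0 / np.sqrt((1.0 - r**2)**2 + (2.0*zeta*r)**2)
    phi = -np.arctan2(2.0*zeta*r, 1.0 - r**2)   # arctan2 keeps the lag on the correct branch past r = 1
    return A, phi

for zeta in (0.05, 0.2, 0.7, 2.0):
    for r in (0.1, 1.0, 5.0):
        A, phi = amp_phase(r, zeta)
        print(f"zeta = {zeta:4.2f}  r = {r:3.1f}   A = {A:7.3f}   phi = {np.degrees(phi):8.2f} deg")
```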
We should note several observations for this specific case of a constant force amplitude periodic signal input to a parallel mass-spring-damper system:
(a) At low frequency ratio, all of the signal input is passed onto the system, with an amplification ratio of unity and zero phase shift.
(b) At frequency ratios near unity, where the input signal frequency equals the system natural frequency, the amplification ratio can become much larger than one. For an undamped system, the output system response magnitude will grow unbounded at r = 1. This is known as resonance.
(c) At high frequency ratio, the amplification ratio falls off monotonically and asymptotically to zero at sufficiently high frequency ratio.
(d) The amplification ratio always decreases with increasing damping for all frequency ratios.
(e) At sufficiently high damping ratio, the system appears first-order-like and behaves like a low pass filter.
In most cases, one cannot make generalizations about the behavior of any one system from a
different system. To see how any periodically excited system behaves in the steady state, one must
derive the transfer function and examine the behavior in the Bode plots. The transfer function
depends both on the system parameters and features of the forcing function. Whenever either is
altered, the transfer function and steady-state behavior can be altered. Each system under specific
signal inputs must be examined on its own merits. Considering a second example will make this
point unambiguous.
Periodic Input Signal from a Rotating Imbalance
When rotating machinery is subject to an imbalance about the axis of rotation, such as happens
when wet clothes shift to one side of a spinning basin in a washing machine, the washing machine
Figure 6.15: Amplification ratio as a function of frequency and damping ratios.
Figure 6.16: Phase shift as a function of frequency and damping ratios.
is excited into motion. Similarly, an automobile exhibiting a wheel imbalance will have observable
and detrimental vibration imparted to the car axle and frame. A simple lumped model of such an
inertial imbalance is illustrated in Figure 6.17.
Figure 6.17: A second order system subject to a rotating imbalance.
The system is characterized by some frictional losses that we assume can be modeled effectively as viscous dissipation with damping coefficient, b, and a system stiffness, k, whereby the system stores potential energy. The relatively small imbalance (m ≪ M) is spinning about a frame rigidly attached to the mass, M, at constant angular velocity, ω. The imbalance, m, is spinning at a prescribed rotational speed, thus imparting an eccentric load on the inertial mass, M, that is sinusoidal with a magnitude that is dependent on the spinning speed. Because the spinning speed is prescribed, the system has only a single degree of freedom. This is often called a classical rotating imbalance. Consider the location of the mass imbalance relative to the center of the lumped system mass to be given by a vector, R(t) = R cos(ωt) î + R sin(ωt) ĵ, where R is the magnitude of the eccentricity of the imbalance. If the block is constrained in the vertical (ĵ) direction, a free body diagram on the inertial block renders the following governing differential equation for the horizontal motion of the mass, M:

M d²x(t)/dt² + b dx(t)/dt + kx(t) = −m R̈(t) = mRω² cos(ωt).
Normalizing the equation by the system stiffness, k, and assuming M ≫ m,

(M/k) d²x(t)/dt² + (b/k) dx(t)/dt + x(t) = (mRω²/k) cos(ωt)

(1/ω_N²) d²x(t)/dt² + (2ζ/ω_N) dx(t)/dt + x(t) = (mRω²/(Mω_N²)) cos(ωt).
Note here that the magnitude of the forcing function is dependent on the frequency of rotation of
the imbalance. e magnitude of the imbalance increases as the square of the spinning frequency.
bkx(t)mRωM6.2. FREQUENCY DOMAIN SOLUTIONS OF 2nd ORDER SYSTEMS 137
We will see that this is a crucial feature of this excitation. Applying the Laplace operator to the differential equation:

$$\mathcal{L}\left(\frac{M}{k}\,\frac{d^2x(t)}{dt^2} + \frac{b}{k}\,\frac{dx(t)}{dt} + x(t)\right) = \mathcal{L}\left(-\frac{m}{k}\ddot{R}(t)\right)$$

$$\left(\frac{1}{\omega_N^2}s^2 + \frac{2\zeta}{\omega_N}s + 1\right)X(s) = -\frac{m}{k}\,s^2R(s) = -\frac{m}{M\omega_N^2}\,s^2R(s)$$

$$G(s) = \frac{X(s)}{R(s)} = \frac{-ms^2/M\omega_N^2}{\dfrac{1}{\omega_N^2}s^2 + \dfrac{2\zeta}{\omega_N}s + 1}$$
which results in a sinusoidal transfer function

$$G(j\omega) = \frac{X_{OUT}(j\omega)}{R_{IN}(j\omega)} = \frac{m\omega^2/M\omega_N^2}{\left(1 - \dfrac{\omega^2}{\omega_N^2}\right) + \dfrac{2\zeta}{\omega_N}\omega j}$$

and amplification ratio

$$A = \frac{M\,X_{OUT}(j\omega)}{m\,R_{IN}(j\omega)} = \frac{\omega^2/\omega_N^2}{\sqrt{\left(1 - \dfrac{\omega^2}{\omega_N^2}\right)^2 + \left(2\zeta\dfrac{\omega}{\omega_N}\right)^2}} = \frac{r^2}{\sqrt{(1 - r^2)^2 + (2\zeta r)^2}}$$

where r = ω/ω_N and, again, A = A(r, ζ).
The corresponding phase shift is given by:

$$\varphi = \angle N(j\omega) - \angle D(j\omega) = 0 - \tan^{-1}\!\left(\frac{2\zeta r}{1 - r^2}\right).$$

So, finally

$$X_{SS}(t) = A\,\frac{mR_{IN}}{M}\cos(\omega t + \varphi) = \frac{(mR_{IN}/M)\,r^2}{\sqrt{(1 - r^2)^2 + (2\zeta r)^2}}\,\cos\!\left(\omega t - \tan^{-1}\!\left(\frac{2\zeta r}{1 - r^2}\right)\right).$$
While the phase shift is identical to that for the constant magnitude forcing function, the presence of r² in the numerator changes the amplification ratio in significant ways. The resultant Bode plot of the amplification ratio is shown in Figure 6.18.

Figure 6.18: Amplification ratio for a rotating imbalance.

For the specific case of a periodic signal input from a rotating imbalance to a parallel spring-damper-mass system:
(a) at low frequency ratio, none of the signal input is passed on to the system: the amplification is zero and the phase shift is zero.
(b) At near resonant frequencies, the amplification ratio can become much larger than one. For an undamped system, the output system response magnitude will grow unbounded at r = 1.
(c) At high frequency ratio, the amplification ratio converges to a value of one.
(d) The amplification ratio always decreases with increasing damping for all frequency ratios.
(e) At sufficiently high damping ratio, the system appears first-order-like and behaves like a high pass filter.
Significant changes are evidenced here at both high and low input frequencies when compared with the steady-state behavior of the system whose excitation magnitude is independent of frequency. In fact, the limit behavior is opposite for the two systems at both low and high frequency.
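As a quick numerical check of these limits, the sketch below evaluates the rotating-imbalance amplification ratio A(r, ζ) = r²/√((1 − r²)² + (2ζr)²) at a low, a resonant, and a high frequency ratio. The helper function and the chosen ζ and r values are hypothetical and only illustrate the limits stated above.

```python
import numpy as np

def imbalance_amplification(r, zeta):
    """A(r, zeta) = r^2 / sqrt((1 - r^2)^2 + (2*zeta*r)^2) for a rotating imbalance."""
    return r**2 / np.sqrt((1 - r**2)**2 + (2 * zeta * r)**2)

zeta = 0.25
for r in (0.05, 1.0, 20.0):
    print(f"r={r:6.2f}  A={imbalance_amplification(r, zeta):.4f}")
# low r: A -> 0 (nothing passed on); r = 1: A = 1/(2*zeta); high r: A -> 1
```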
Periodic Input Signal from a Base Excitation
When a system is subject to forces that are applied through its internal elements, i.e., springs and dampers, by the motion of an external agent, the imposed forces are still applied by virtue of an external agent. Consider the case of an idealized model of an automobile suspension. Here, the inertial lumped mass represents the mass of a 1/4 model of an automobile comprised of 1/4 of the chassis/frame, a single suspension strut, and one tire. The model stiffness, k, lumps together the stiffness of the suspension strut and the rubber tire while the damper primarily represents the viscous dissipation in the suspension strut.
Figure 6.19: A second order system subject to excitation of its base.
The vertical motion, y(t), is provided by a sinusoidal road profile with wavelength, λ, traversed by a vehicle with speed, v:

$$y(t) = Y_0\cos\omega t \qquad\text{and}\qquad \omega = \frac{2\pi v}{\lambda}.$$
A free body diagram on the inertial block renders the following governing differential equation for the motion of the mass:

$$\sum F = k\left(y(t) - x(t)\right) + b\left(\dot y(t) - \dot x(t)\right) = m\,\frac{d^2x(t)}{dt^2}$$

$$m\,\ddot x(t) + b\,\frac{dx(t)}{dt} + kx(t) = ky(t) + b\,\frac{dy(t)}{dt}$$

where the terms on the right-hand side of the equation are external forces provided by virtue of the tire motion imposed by the road profile and speed of the vehicle. Again, normalizing the governing differential equation by the system stiffness, k:

$$\frac{m}{k}\,\frac{d^2x(t)}{dt^2} + \frac{b}{k}\,\frac{dx(t)}{dt} + x(t) = y(t) + \frac{b}{k}\,\frac{dy(t)}{dt}$$

or, in terms of the system parameters,

$$\frac{1}{\omega_N^2}\,\frac{d^2x(t)}{dt^2} + \frac{2\zeta}{\omega_N}\,\frac{dx(t)}{dt} + x(t) = y(t) + \frac{2\zeta}{\omega_N}\,\frac{dy(t)}{dt}.$$

Applying the Laplace operator to the normalized differential equation:

$$\left(\frac{1}{\omega_N^2}s^2 + \frac{2\zeta}{\omega_N}s + 1\right)X(s) = \left(1 + \frac{2\zeta}{\omega_N}s\right)Y(s)$$

$$G(s) = \frac{X(s)}{Y(s)} = \frac{1 + \dfrac{2\zeta}{\omega_N}s}{\dfrac{1}{\omega_N^2}s^2 + \dfrac{2\zeta}{\omega_N}s + 1}$$

which results in a sinusoidal transfer function

$$G(j\omega) = \frac{X_{OUT}(j\omega)}{Y_{IN}(j\omega)} = \frac{1 + \dfrac{2\zeta\omega}{\omega_N}j}{\left(1 - \dfrac{\omega^2}{\omega_N^2}\right) + \dfrac{2\zeta}{\omega_N}\omega j} = \frac{1 + 2\zeta rj}{(1 - r^2) + 2\zeta rj}$$

and amplification ratio

$$A = \frac{X_{OUT}(j\omega)}{Y_{IN}(j\omega)} = \frac{\sqrt{1 + (2\zeta r)^2}}{\sqrt{(1 - r^2)^2 + (2\zeta r)^2}}$$

where r = ω/ω_N and, again, A = A(r, ζ).
The corresponding phase shift is given by:

$$\varphi = \angle N(j\omega) - \angle D(j\omega) = \tan^{-1}(2\zeta r) - \tan^{-1}\!\left(\frac{2\zeta r}{1 - r^2}\right).$$

So, finally

$$X_{SS}(t) = A\,Y_{IN}\cos(\omega t + \varphi) = \frac{Y_{IN}\sqrt{1 + (2\zeta r)^2}}{\sqrt{(1 - r^2)^2 + (2\zeta r)^2}}\,\cos\!\left(\omega t + \tan^{-1}(2\zeta r) - \tan^{-1}\!\left(\frac{2\zeta r}{1 - r^2}\right)\right).$$
The resultant Bode plots of the amplification ratio and phase shift are shown in Figures 6.20 and 6.21, respectively.

For periodic signal input from a base excitation to a parallel mass-spring-damper system:
(a) at low frequency ratio, all of the signal input is passed on to the system: the amplification is unity and the phase shift is zero.
(b) At near resonant frequencies, the amplification ratio can become much larger than one. For an undamped system, the output response magnitude will grow unbounded at r = 1.
(c) At the peculiar frequency ratio of r = √2, the amplification ratio becomes unity irrespective of damping ratio.
(d) At high frequency ratio, the amplification ratio converges to zero.
(e) The amplification ratio no longer decreases with increasing damping ratio for all frequency ratios! This is true only for frequency ratios less than r = √2. For ratios higher than r = √2, increasing the amount of damping actually increases the amplification ratio. This may seem counterintuitive, but the mathematics, i.e., our "eyes with which we see physics," says it is true, and experiments verify this reality!
(f) At sufficiently high damping ratio, the system behaves like an all pass filter, i.e., the amplification ratio converges to unity for all frequency ratios.
Significant changes are evidenced here: increasing friction enhances amplification for r > √2, eventually allowing all of the external excitation to be seen in the steady state at all frequencies when the friction is sufficiently high.
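The r = √2 crossover and the reversed role of damping above it are easy to verify numerically. The following sketch, with arbitrary example values of ζ and r (none taken from the text), evaluates the base-excitation amplification ratio derived above.

```python
import numpy as np

def base_excitation_amplification(r, zeta):
    """A(r, zeta) = sqrt(1 + (2*zeta*r)^2) / sqrt((1 - r^2)^2 + (2*zeta*r)^2)."""
    num = np.sqrt(1 + (2 * zeta * r)**2)
    den = np.sqrt((1 - r**2)**2 + (2 * zeta * r)**2)
    return num / den

# A = 1 at r = sqrt(2) regardless of damping
for zeta in (0.05, 0.5, 2.0):
    print(f"zeta={zeta:4.2f}  A(sqrt(2)) = {base_excitation_amplification(np.sqrt(2), zeta):.6f}")

# For r > sqrt(2), more damping means MORE amplification
r = 3.0
for zeta in (0.1, 0.5, 1.0, 2.0):
    print(f"r={r}  zeta={zeta:3.1f}  A = {base_excitation_amplification(r, zeta):.4f}")
```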
Figure 6.20: Amplification ratio for base excitation of a 2nd order system.

Figure 6.21: Phase shift for base excitation of a 2nd order system.
6.2.3 UNIVERSAL TRUTHS FOR 2nd ORDER SYSTEMS SUBJECT TO
HARMONIC INPUT
Of Special Note
For all second order systems:
(a) Their steady-state behavior is a function of two dimensionless parameters: the frequency ratio, r = ω/ω_N, and the damping ratio, ζ.
(b) Amplification ratios can exceed a value of unity, particularly near resonant frequencies.
(c) Phase shifts can exceed 90 degrees.
(d) Bode plots contain, at most, two inflection points allowing for peaks at intermediate frequency ratios. (An order 2 equation implies 2 inflection points, which implies that the amplification ratio and phase can increase and then decrease, or vice versa, with dimensionless frequency ratio, r.)
(e) Systems can be low-pass, high-pass, mid-band pass, or all-pass filters.
6.3 REDESIGNING SYSTEMS FOR STEADY-STATE
BEHAVIORS
One thing to note in second order systems is that resonance can be a particularly interesting case as amplification can be quite large. So we might want to design systems that are not capable of meandering into any troublesome regimes. Let's say, for instance, in the case of a constant force magnitude periodic input to a second order mass-spring-damper system, one wished to never see output dynamic position amplitudes greater than half the static deflection. Being interested in this limit, let's say we wish to dictate that the dynamic output be precisely half the static deflection. Recall, the amplification depicted in Figure 6.15 is a function of two parameters, the frequency ratio, r = ω/ω_N, and the damping ratio, ζ. If we limit the amplification to be precisely 1/2, then we have a unique relationship between the frequency and damping ratios shown in Figure 6.22. This figure is a cut parallel to the r–ζ plane elevated to a height of A = 1/2.
Figure 6.22: A curve ℘(r, ζ) for which A = 1/2 for constant force magnitude periodic input to a mass-spring-damper system.

There are now several interesting observations one can make regarding possibilities for obtaining the design condition A = 1/2. In Figure 6.22, all {r, ζ} pairs to the left of the cut have amplification greater than 1/2 while all {r, ζ} pairs to the right have amplification less than 1/2. On the curve separating the two regions, the amplification precisely equals 1/2. If we desire A = 1/2, we can choose any {r, ζ} pair on this curve. The number of possibilities? Yes, infinity. Further, as engineers, we don't swap out parts with natural frequencies and damping ratios. We specify spring stiffnesses, damping coefficients, and inertial masses. Well, let's pick one of the infinity of solutions for a particular {r, ζ} pair, say {r, ζ} = {1, 1}. What specific triple {m, k, b} corresponds to {1, 1}? Well, there are, again, an infinite number of such triples that determine a single dimensionless pair. So, for every one of the infinity of {r, ζ} pairs, there is yet another infinity of {m, k, b} triples! On some scale, there are ∞² possible solutions. Of course, one will need to consider cost, weight, availability, and other factors as constraints to fence in a reasonable solution, but there are still often a large number of potential candidates for a redesigned solution. There are a wealth of solutions at our disposal because the second order system is characterized by two independent dimensionless parameters. In first order systems, only the time constant can be changed to alter the steady-state behavior. But typically this single parameter is a product or ratio of system parameter pairs: {R, C}; {R, L}; {m, b}; {k, m}. There remain an infinite number of solutions for these pairs of system elements that will deliver the requisite time constant for a sufficient redesign of the steady-state amplitude or phase shift.
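One way to make the "infinity of solutions" concrete is to script the design procedure: pick a damping ratio, solve the A = 1/2 contour for the matching frequency ratio, then pick any mass and drive frequency and back out k and b. The sketch below does this for the constant-force amplification A = 1/√((1 − r²)² + (2ζr)²); every numeric value in it (ζ = 1, ω = 10 rad/s, m = 2 kg) is a hypothetical choice used only for illustration.

```python
import numpy as np

def r_for_half_amplification(zeta):
    """Frequency ratio r on the A = 1/2 contour for a constant-force input:
    solve (1 - r^2)^2 + (2*zeta*r)^2 = 4 for the positive root."""
    b = 4 * zeta**2 - 2                 # quadratic in u = r^2: u^2 + b*u - 3 = 0
    u = (-b + np.sqrt(b**2 + 12)) / 2
    return np.sqrt(u)

# one of the infinity of dimensionless pairs ...
zeta = 1.0
r = r_for_half_amplification(zeta)

# ... and one of the infinity of {m, k, b} triples realizing it at a chosen drive frequency
omega = 10.0     # rad/s, hypothetical drive frequency
m = 2.0          # kg, hypothetical chosen mass
omega_n = omega / r
k = m * omega_n**2
b_damp = 2 * zeta * np.sqrt(k * m)
print(f"r = {r:.3f}, omega_n = {omega_n:.2f} rad/s, k = {k:.1f} N/m, b = {b_damp:.1f} N*s/m")
```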
Note also that within any order system, the possibilities for redesign are dictated by the transfer function and are, therefore, dependent upon the details of the system and how it is excited by external agents. Consider that you wanted to limit the amplification ratio to a value of 1/2 for a system exhibiting a rotating imbalance. In this case, taking the appropriate slice through the three-dimensional Bode surface results in the section shown in Figure 6.23.

Figure 6.23: A curve ℘(r, ζ) for which A = 1/2 for a frequency dependent magnitude periodic force input to a second order mass-spring-damper system.

There are still ∞² potential solutions. Note that unlike the case of constant magnitude periodic force, however, now as one increases the damping ratio, the frequency ratio must increase rather than decrease in order to maintain a level amplification ratio of 1/2. The frequency content in the magnitude of the force imbalance alters redesign scenarios in a significant way. If one increased the damping ratio along with the frequency ratio in the case where the periodic force magnitude is constant, one would climb the amplification surface to values in excess of the desired design value of 1/2. One must move, in some sense, in the opposite direction in one case than in the other to achieve the desired results. Therefore, accurately modeling the system and transfer function characteristics is crucial when redesigning such dynamic systems.
6.4 ENERGY STORAGE AND DISSIPATION IN 2nd ORDER
SYSTEMS SUBJECT TO HARMONIC INPUT
EXCITATION
Again, system flow or effort variable solutions are calculated as primary variables. The transmitted forces and stored energies tell a part of the story not addressed by flow variables alone. For this reason, we consider the classic case of the second order mass-spring-damper subject to a constant amplitude periodic force excitation. And we will examine the energy stored by the system when the excitation frequency is low, moderate (near resonance), and high, as depicted in Figure 6.24.
Figure 6.24: Low, resonant, and high frequency constant magnitude periodic input forces to second
order system.
Figure 6.25: Low frequency response for position and energy in an underdamped 2nd order system
subject to periodic force input of constant magnitude.
Figure 6.26: High frequency response for position and energy in an underdamped 2nd order system
subject to periodic force input of constant magnitude.
Figure 6.27: Resonant frequency response for position and energy in an underdamped 2nd order
system subject to periodic force input of constant magnitude.
Because steady-state solutions are sinusoidal functions, speed is proportional to frequency.
At low frequency, this minimizes the kinetic energy, leaving the lion’s share of energy stored as
potential energy (see Figure 6.25). Conversely, for high frequency input, the system amplitude
approaches zero, leaving minimal potential energy storage. High frequency imparts high velocities
and the kinetic energy is the prime storage mechanism at high frequencies (see Figure 6.26). At
near resonant frequencies, both the steady-state amplitude and speed grow to large values. Here
the stored energy cache takes on large values that alternate between potential and kinetic forms
as shown in Figure 6.27.
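The same observations can be made quantitatively by evaluating the peak potential and kinetic energies of the steady-state solution at low, resonant, and high drive frequencies. The sketch below uses the constant-force amplification ratio from Section 6.2 with hypothetical parameter values; it is an illustration of the trend, not a reproduction of the simulations behind Figures 6.25 through 6.27.

```python
import numpy as np

m, k, b_damp, F0 = 1.0, 100.0, 2.0, 10.0          # hypothetical parameters
wn, zeta = np.sqrt(k / m), b_damp / (2 * np.sqrt(k * m))

def peak_energies(w):
    r = w / wn
    A = 1.0 / np.sqrt((1 - r**2)**2 + (2 * zeta * r)**2)
    X = A * F0 / k                                  # steady-state displacement amplitude
    return 0.5 * k * X**2, 0.5 * m * (w * X)**2     # peak PE, peak KE

for label, w in [("low", 0.1 * wn), ("resonant", wn), ("high", 10 * wn)]:
    PE, KE = peak_energies(w)
    print(f"{label:9s} w = {w:7.2f} rad/s   peak PE = {PE:9.5f} J   peak KE = {KE:9.5f} J")
```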
6.5 CHAPTER ACTIVITIES
Problem 1 Consider the circuit pictured below, in which the bulb acts as a resistor. At t = 0, a periodic voltage, V₀, is applied to the circuit by connecting it suddenly across a frequency modulated battery:

The differential equation governing the current response of the circuit is given by:

$$\frac{L}{R}\,\frac{di_L}{dt} + i_L = \frac{V_0(t)}{R}, \qquad i_L(t = 0) = 0\ \text{amps}$$

(a) Derive the amplitude ratio I/(E₀/R).

(b) When R = 1000 Ω and L = 125 mH, is the electrical circuit current response fast or slow given the forcing function? Explain your answer.

(c) Redesign the series LR circuit so that the steady-state circuit current has a magnitude of 40 milliamps when driven by the periodic circuit voltage V₀(t) = 55 cos(0.002t) V.
Problem 2 Consider the mass-less spring-damper mechanical system:

where b = 250 Ns/m; k = 125 N/m; F(t) = F₀ cos(0.25t); F₀ = 100 N.

The differential equation governing the position of the mass-less platform is given by:

$$\frac{b}{k}\,\frac{dx}{dt} + x = \frac{F(t)}{k}, \qquad x(t = 0) = 0\ \text{m}$$

(a) Derive the transfer function X_OUT/(F₀/k).

(b) Redesign the spring-damper system by changing out the spring so that the steady-state output magnitude is 0.40 m when driven by the force F(t) = 100 cos(0.25t) N.
Problem 3 Consider a bell modeled as a cone as shown here:

In large bells, the clapper is motor controlled at the pivot. Consider the case where the motor provides a torque given by:

$$T = 45\cos t\ \text{Nm}$$

and there is a torsional spring with negligible damping at the pivot. Assume m = 25 kg; L = 1 m; κ = 100 Nm/rad; α = 30°.

(a) Assuming the mass of the rod holding the clapper is negligible, determine if the steady-state motion of the clapper will ring the bell. If yes, why? If no, why not? Assume that the shape of the bell is a cone.

(b) For what clapper mass would the steady-state motion of the clapper just barely reach the conical bell to ring it?
Problem 4 Consider the lumped rotational mechanical system consisting of a point mass, m, suspended at the end of a long, thin bar whose mass is lumped entirely with the point mass a distance L away from a frictionless pivot. A translational spring and dashpot are attached to the mass a distance L away from the same pivot as shown:

Use m = 0.1 kg; k = 40 N/m; b = 3 N-s/m; L = 1 m.

(a) Derive the differential equation governing the angular motion of the system as a function of time. Assume "small" angles to linearize the system.

(b) What are the system's natural frequency and damping ratio?

(c) Derive the system transfer function, Θ_OUT/(T₀/κ), if a driving torque of T₀ = 15 cos(5t) Nm is applied at the pivot point, P.

(d) Identify for what non-zero input frequency, ω, the transfer function equals unity (= 1), i.e., the dynamic steady amplitude of angular vibration just equals the static angular deflection, or "you get out precisely what you put in."
Problem 5 Consider the situation of drug absorption into a human being as mentioned in Problem 11 in Chapter 5. The human body is your system and a drug is administered by the outside world at a rate given by f(t). For such a case, the differential equation governing the amount of medicine in the bloodstream, M, is given by:

$$\frac{dM}{dt} + rM = f(t), \qquad t\ \text{in hours}$$

where r = 0.0833 hr⁻¹.

There are two means of drug delivery: (i) by injection or (ii) a periodic dosage of so many pills per day, i.e., a periodic input. For the case of an injection of 7 mg of drugs, we have f(t) = 0 and M(t = 0) = M₀ = 7 mg. The pill dosage can be modeled by a periodic input: f(t) = 8r + 3r cos(π t/4) mg/hr (t in hours), and M(t = 0) = M₀ = 0 mg.

(a) Compute the total solution for the amount of drug in the body over time for the injection and the periodic pill dosage.

(b) Compare the two solutions graphically. What amount of injection may deliver an equivalent amount of drug dosage as the pill prescription over time?

(c) In this system are you supposedly "in control" of the system variables or the outside world?
Problem 6 Consider the windshield wiper mechanism illustrated here. The mass-less blade is rigidly attached to the disk of radius R. Use I ≈ mR² for the disk and wiper blade assembly for all calculations.

(a) Assuming the angular rotation of the disk remains "small," derive the differential equation governing the sweep of the wiper blade.

(b) Based on your differential equation, compute theoretical expressions for the system's natural frequency and damping ratio.

(c) Specify the damping coefficient "b" necessary if you desire the steady-state wiper blade sweep to be Δθ = ±45°.

For all calculations, use: Y_IN = 0.25 inches; ω = 6 rad/s; R = 0.5 inches; k = 1 lb/ft; m = 0.02 slug.

Problem 7 Consider the parallel RLC electrical circuit shown below:
The governing differential equation for the capacitor voltage can be shown to be:

$$LC\,\ddot V_1 + \frac{L}{R}\,\dot V_1 + V_1 = \frac{L}{R}\,\dot V_0.$$

Consider that the system is excited by a frequency modulated input voltage, V₀(t) = E₀ cos ωt.
D
(a) Derive the transfer function for V1.s/=V0.s/ as a function of the relevant system pa-
rameters and the frequency of excitation, !.
(b) Using the transfer function, derive an expression for the amplitude ratio V1=E0.
(c) Describe the behavior of the magnitude of the voltage across the capacitor at low fre-
quency, resonance and high frequency?
(d) At resonance, for what damping ratio will the amplitude ratio, V1=E0, fall below unity?
(e) When the damping ratio (cid:16)
shown here:
D
1=2, a plot of amplification ratio vs. frequency ratio is
For this level of damping, determine for what input frequency ranges the output signal
voltage drops below 20% of the input battery voltage if the system has a natural frequency
of 2000 rad/s. What filtering characteristics would you say this system exhibits? Describe
whether a first order system could exhibit such characteristics. If so, why? If not, why not?
Problem 8 Consider a downhill skier skiing down a series of moguls wherein the angle of inclination of the skier varies harmonically such that

$$\theta_0(t) = 0.35\cos 2\pi t\ \text{radians}.$$

The ODE governing the skier's velocity was given by:

$$m\dot v + bv = mg\sin\theta_0.$$

Assuming the angle of inclination remains small, and invoking the small angle approximation:

$$m\dot v + bv = F(t) = mg\sin\theta_0(t) \approx mg\,\theta_0(t) = 0.35\,mg\cos 2\pi t.$$
(a) Derive the transfer function V(s)/(F(s)/b).

(b) What is the steady-state magnitude of the skier's downhill velocity?

(c) With m = 80 kg and b = 16 Ns/m, classify the mogul gravity loading as low (τω ≪ 1), intermediate (τω ≈ 1), or high frequency (τω ≫ 1).

(d) Show that for a "very heavy" skier, their steady-state velocity magnitude would be:

$$V_{SS} = \frac{0.35\,g}{2\pi}$$

i.e., that the steady-state velocity magnitude of the skier is independent of the skier's mass. Is the skier described in part (c) "heavy" in steady state, dynamically speaking?
Problem 9 Consider the translational series mass-spring-damper mechanical system shown below forced by excitation of the damper y(t) = Y_IN cos(ωt):

(a) Show that the governing differential equation is given by:

$$m\ddot x + b\dot x + kx = b\dot y$$

(b) Using the transfer function, derive an expression for the amplitude ratio X_OUT/Y_IN in terms of the damping ratio and the dimensionless frequency ratio. Describe, in words, the behavior of the amplification ratio at low (r ≪ 1), intermediate (r ≈ 1), and high (r ≫ 1) frequencies.

(c) Is this system analogous to any of the electrical circuits you have experienced thus far? If so, describe the analogous system elements for each.

(d) If the damper is pumped at a frequency √2 times the system's resonant frequency, i.e., y(t) = Y_IN cos(√2 ω_N t) = 4 cos(√2 ω_N t), and the damping ratio for the system is ζ = 1/2, determine the output, steady-state motion of the mass, m, as a function of time.

(e) At resonance, for what damping ratio will the amplitude ratio, X_OUT/Y_IN, fall below unity?
Problem 10 Consider the regenerative braking lumped model illustrated here. Pumping the brake pedal effectively acts as a base excitation on a damper linked to the brake disk of diameter, D, and mass, m = 0.25 slugs.

k = 4 lb/ft; b = 20 lb-s/ft; D = 1 ft; J = (1/2)mR²; y(t) = 0.5 cos(20t) ft.

Show that, when the angular motion of the disk remains small, the governing ODE for the angular motion of the disk is given by:

$$J\ddot\theta + bR^2\dot\theta + 2kR^2\theta = Rb\,\dot y$$

where J = (1/2)mR².

(a) Derive the transfer function for Θ_OUT/(Y_IN/D).

(b) Using k = 4 lb/ft, b = 20 lb-s/ft, D = 1 ft, m = 0.25 slugs, and y(t) = 0.5 cos(20t) ft, what is the steady-state angular motion amplitude, Θ_OUT?

(c) Redesign the dashpot to reduce the steady-state amplitude to 0.25 radians.
CHAPTER 7

The Fluid and Thermal Casts
Finally, we introduce two last casts of characters telling the story of effort and flow in fluid and
thermal systems.
7.1
FLUID SYSTEMS
The flow of fluids fascinates everybody. We watch streams, waterfalls, whirlpools, and we are fascinated by this substance which seems almost alive relative to solids.
Richard P. Feynman
The Feynman Lectures on Physics

The hydraulic analogy compares electric current flowing through circuits to water flowing through pipes. When a pipe is filled with hair, it takes a larger pressure to achieve the same flow of water. Pushing electric current through a large resistance is like pushing water through a pipe clogged with hair: It requires a larger push or voltage drop to drive the same flow or electric current.
Wikipedia
Have you ever wondered why water is stored in high towers or standpipes? By virtue of
their height, towers storing fluid produce hydrostatic pressure sufficient to drive the fluid out into
distribution systems such as pipes for homes and businesses. Fluid flows out of the tank under the
gravitational force of its own weight. The fluid effort across a volume of contained fluid pushes
the fluid which then responds by flowing. Again, your intuition helps in this telling of the story.
7.1.1 FLUID EFFORT AND FLOW VARIABLES
What pushes fluid is a pressure differential across, say, a length of pipe. This drives a volume flow rate of fluid, Q, through the pipe. At their essence, fluid systems are special cases of mechanical systems in general. As with mechanical systems, when the fluid is incompressible, this volume flow rate directly implies a mass flow rate.

$$\dot m = \frac{dm}{dt} = \rho Q.$$

Table 7.1: Effort, flow, and conserved quantities for fluid systems

  Conserved quantity: fluid mass, m (kg)
  Effort variable:    pressure, p (N/m² ; lb/ft²)
  Flow variable:      volume flow rate, Q = dV/dt (m³/s ; ft³/s)
7.1.2
STORAGE ELEMENTS
The fluid cast is capable of storing energy in both potential and kinetic forms. The fluid system is nearly always a circuit of containment vessels that deliver fluid from one location to another.

Potential Energy Storage Character
Potential energy storage in fluid systems takes place when the fluid stores a large effort or pressure differential in a fluid circuit. The fluid cast member who plays the role of Captain Potential Energy is a storage tank. By virtue of a height or head of fluid, a large static pressure differential is built up due to gravitational loading. Let's now imagine: what factors will determine the amount of potential energy that a tank can store? It seems intuitive that the volume of fluid may matter. But fluid volume in, say, a cylindrical tank is a product of its area and height. It is a result from fluid statics that the pressure at the bottom of a column of fluid is determined solely by the height of fluid in the column. Often pressure is measured in pressure head, or the height of liquid of a given density that produces a given pressure. Pressure and height are, in this sense, both equally interchangeable effort variables. Since pressure, and not force, drives the fluid mass, what properties of a storage tank make for a fluid system having the capacity to drive flow of fluid mass?

First, let's follow the mathematical relation for storage of potential energy of water kept in a tank of cross-sectional area, A. As we have already mentioned, the pressure at the bottom of the tank will be related to the height of water in the tank. So pressure and height are interchangeable effort variables. For the moment, let's focus on pressure itself. Since you have not had a course in fluid mechanics yet, let's practice letting the mathematics and our analogy guide us. The mathematical expression of the storage by virtue of effort is
$$\text{FLOW} = C_{FLUID}\,\frac{d(\text{EFFORT})}{dt}.$$

In this way, we have

$$Q = C_{FLUID}\,\frac{dp}{dt} \;\Rightarrow\; C_{FLUID} = \frac{dV/dt}{dp/dt} = \frac{dV}{dp}.$$

Here, the fluid capacitance, C_FLUID, is a rate of change of fluid volume corresponding to a rate of change in applied pressure. Fluid dynamicists refer to this quantity as the fluid compliance.

Figure 7.1: The fluid potential energy storage character is played by the storage tank. It stores energy in potential form in accordance with increased height of mass in the tank and storage of a pressure differential across the height of the tank.

Then the analogy with mechanical systems comes full circle because in mechanical systems, the inverse of a substance's stiffness is its compliance

$$C_{MECH} = k^{-1}.$$

So analogously for fluid systems

$$dp = d\left(\frac{mg}{A_{TANK}}\right) = \frac{g}{A_{TANK}}\,dm = \frac{g}{A_{TANK}}\,d(\rho V) = \frac{\rho g}{A_{TANK}}\,dV$$

$$C_{FLUID} = \frac{dV}{dp} = \frac{A_{TANK}}{\rho g}$$

where the fluid capacitance is measured in

$$[C_{FLUID}] = \mathrm{m^4\,s^2/kg}.$$
Kinetic Energy Storage Character
When considering energy storage via flow, fluid systems are directly analogous with translational mechanical flow. The fluid cast member who plays the role of Captain Kinetic Energy is that device that stores energy by virtue of its volume flow rate. Consider the case of a fluid that is incompressible. The volume flow rate in a cylindrical pipe is determined directly by the fluid velocity along the pipe. Kinetic energy is stored by virtue of fluid velocity that is, in the strictest sense of our analogy, stored by a measure of the fluid inertia. In fluid systems, this is often referred to as the fluid inertance.

Again, without a physical intuition or feel for inertance, let's allow the analogy to guide us mathematically. This may seem abstract, at the moment, but the analogous behavior, in the

Figure 7.2: The fluid kinetic energy storage character is played by the system's inertia. Fluid inertance
is embodied in a fluid system’s mass.
end, will hopefully bolster our physical feel once we undertake a course in fluid mechanics and dynamics. The mathematical expression of the storage by virtue of flow is

$$\text{EFFORT} = p = L\,\frac{d(\text{FLOW})}{dt} = L\,\frac{dQ}{dt}.$$

Understanding that fluids are a special case of mechanical systems,

$$F = m\,\frac{dv}{dt}.$$

Using F = pA, m = ρAℓ, and Q = Av,

$$pA = \rho A\ell\,\frac{dv}{dt} \;\Rightarrow\; p = \frac{\rho\ell}{A}\,\frac{dQ}{dt}$$

or, for fluid, say, flowing in a pipe of length ℓ_PIPE,

$$p = L\,\frac{dQ}{dt} \qquad L^{PIPE}_{FLUID} = \rho_{FLUID}\,\ell_{PIPE}/A_{PIPE}$$

where the fluid inertance is measured in

$$[L_{FLUID}] = \frac{\mathrm{kg}}{\mathrm{m^4}}.$$
Many fluid systems are designed for steady flow purposes, e.g., hoses, faucets, pipelines. Transients occur when such systems are turned on and shut off, but for most of the operating time, the flow is steady and Q̇ (∝ dv/dt) ≈ 0. In these instances, inertia plays a negligible role in the energy storage and the inertance is then neglected.

7.1.3 DISSIPATIVE ELEMENTS
Energy dissipation in fluid systems results from any element in the fluid circuit that impedes fluid flow rate. The role of the Evil Dr. Friction in the fluid flow script is played by the physical presence of friction acting against the flow of fluid. Two salient examples are pipe friction and losses exhibited in flow of fluid through valves or constrictions.

Figure 7.3: The friction force is modeled by the net viscous force that is proportional to a pressure
difference in the fluid circuit.
The governing mathematical expression of the dissipation is algebraic and often bears someone's name! Let's consider an incompressible, viscous fluid undergoing slow, laminar flow in a pipe. For such conditions, the Hagen-Poiseuille law relates the volume flow rate, Q, of the fluid to the pressure difference applied across the section of pipe driving the flow

$$p = RQ \;\Rightarrow\; R = p/Q.$$

And the Hagen-Poiseuille flow law is given by

$$Q = \frac{p\,\pi D^4}{128\,\mu\,\ell}$$

where μ is the viscosity of the fluid, and D and ℓ are the diameter and length of the pipe, respectively. The viscosity is a fluid property that quantifies a fluid's material resistance to flow. It is measured in poise:

$$1\ \text{poise} \equiv 0.1\ \mathrm{N\,s/m^2}.$$
Using our analogy for resistance:

$$\text{EFFORT} = R \times \text{FLOW} \;\Rightarrow\; p = RQ$$

and for Hagen-Poiseuille flow

$$p = \frac{128\,\mu\,\ell}{\pi D^4}\,Q \;\Rightarrow\; R^{PIPE}_{FLUID} = \frac{128\,\mu\,\ell}{\pi D^4}.$$

The resistance to flow will increase linearly with the pipe length and fluid viscosity. The resistance will also decrease as the pipe radius is increased, but this dependence is to the fourth power! The fluid resistance is measured in

$$[R_{FLUID}] = \frac{\mathrm{N\,s}}{\mathrm{m^2}}\cdot\frac{\mathrm{m}}{\mathrm{m^4}} = \frac{\mathrm{kg}}{\mathrm{m^4\,s}}.$$

Flow resistances for higher velocity flows must account for turbulence. These resistances are almost always nonlinear and will not be considered explicitly here.
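A short numeric example makes the strong diameter dependence of the pipe resistance tangible. The values below (a water-like viscosity and a small drain pipe) are hypothetical and are used only to exercise the formula R = 128 μ ℓ / (π D⁴).

```python
import math

def pipe_resistance(mu, length, diameter):
    """Laminar-flow fluid resistance of a circular pipe, R = 128*mu*L / (pi*D^4)."""
    return 128.0 * mu * length / (math.pi * diameter**4)

mu = 1.0e-3        # N*s/m^2 (0.01 poise), roughly water at room temperature
R = pipe_resistance(mu, length=2.0, diameter=0.01)
print(f"R_pipe = {R:.3e} kg/(m^4*s)")
# halving the diameter multiplies the resistance by 2^4 = 16
print(f"ratio for half the diameter: {pipe_resistance(mu, 2.0, 0.005) / R:.0f}")
```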
Table 7.2: Relevant system element relations for fluid systems

  Field: fluid.  Effort variable: pressure, p.  Flow variable: volume flow rate, Q.
  Dissipative (material property law):  Effort = Resistance × Flow;  (p₁ − p₂) = R Q (linear);  laminar pipe flow resistance R = 128 μ ℓ / (π D⁴) (linear).
  Energy storage in effort variable:  Flow = Capacitance × d(Effort)/dt;  Q = (A/ρg) dp/dt;  fluid capacitance (compliance) C_FLUID = A/(ρg).
  Energy storage in flow variable:  Effort = Inductance × d(Flow)/dt;  p = L dQ/dt;  fluid inductance (inertance) L_FLUID = ρ_FLUID ℓ_PIPE / A_PIPE.
Figure 7.4: The fluid system cast of characters.
7.1.4
SINGLE STORAGE ELEMENT SCRIPTS
An idealized case often studied is that of the storage tank draining out of an aperture cut below
the fluid surface or into an exterior pipe. Here, we might be asking how much time it takes to fill
or drain the tank. Or we might be interested in calculating the height of fluid in the tank under
steady flow conditions.
The system is comprised of the standing tank acting as the fluid capacitor, and the draining pipe which is the fluid resistor. You may ask why the tank's resistance is not considered. It is, after all, a sort of "short" pipe with a rather large diameter. But consider the ratio of the tank's effective length to its diameter to the fourth power. When this value is negligible compared to that of the drainpipe, then the resistance of the pipe dominates over that of the tank and it may be reasonable to neglect the flow resistance of the tank.

We perform a force balance on a representative control volume of fluid in the pipe. Father Force, pictured on the ladder in Figure 7.5, provides a supply of water from the outside world. Let's presume he turns on an input tap that provides a fluid volume flow rate of Q_IN.

Figure 7.5: The classic problem of the draining tank.

The pressure difference across the pipe created by the weight of fluid in the tank drives the outgoing flow in the drainpipe. The pressure at the free surface in the tank and the outflow of the pipe is atmospheric. If we use this value as a reference effort value, or alternatively use the so-called gauge pressure, we can set these reference values of pressure to zero. Then the operative pressure difference across
the pipe is illustrated in Figure 7.6.
Figure 7.6: Mass flow rate over a control volume of fluid in the draining pipe.

From the effort flow analogy

$$Q_{IN} - Q_{OUT} = C_{FLUID}\,\frac{dp}{dt}.$$

The input volume flow rate is externally provided by "the outside world," aka Father Force. The output volume flow rate depends on the resistance of the pipe while the capacity to maintain a driving pressure difference is determined by the characteristics of the storage tank. Using the corresponding system element equations corresponding to Dr. Friction and Captain Potential
Energy, respectively,

$$Q_{IN} - p/R^{PIPE}_{FLUID} = C_{FLUID}\,\frac{d(p)}{dt}$$

$$R^{PIPE}_{FLUID}C^{TANK}_{FLUID}\,\frac{dp}{dt} + p = R^{PIPE}_{FLUID}\,Q_{IN}.$$

This is a differential equation for the pressure at the bottom of the tank or entry to the pipe, the system effort variable. This is also linearly related to the height of fluid in the tank, often called the pressure head. Performing a change of variable from pressure to pressure head

$$p = \rho A h g / A = \rho g h$$

$$R^{PIPE}_{FLUID}C^{TANK}_{FLUID}\,\frac{d(\rho g h)}{dt} + \rho g h = R^{PIPE}_{FLUID}\,Q_{IN}$$

$$R^{PIPE}_{FLUID}C^{TANK}_{FLUID}\,\frac{dh}{dt} + h = R^{PIPE}_{FLUID}\,Q_{IN}/\rho g.$$
When there is no source from the outside world, the equation will be homogeneous. The solution of the homogeneous equation is the sole transient and the steady state is an empty tank with zero height of fluid and zero gauge pressure. When there is an external flow source, the steady-state height of fluid in the tank will coincide with the condition that

$$dh/dt = 0 \;\Rightarrow\; h_{SS} = R^{PIPE}_{FLUID}\,Q_{IN}/\rho g.$$

The time constant is given by the classical RC expression using the hydraulic analogy to electrical systems

$$R^{PIPE}_{FLUID}C^{TANK}_{FLUID} = \frac{128\,\mu\,\ell_{PIPE}\,A_{TANK}}{\rho g\,\pi D^4_{PIPE}} = \tau.$$

Note that τ turns out to have units of time, as we expect from the analogy:

$$\tau \equiv R^{PIPE}_{FLUID}C^{TANK}_{FLUID}\;[=]\;\left(\frac{\mathrm{kg}}{\mathrm{m^4\,s}}\right)\mathrm{m^4\,s^2/kg} = \mathrm{s}.$$
Recall that the governing equations for electrical systems typically appear in terms of effort and/or
flow while mechanical systems are most often in terms of flow. Steady incompressible fluid systems
are most often written in terms of effort, either the fluid pressure or pressure head.
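The time constant and steady-state head follow directly from the element equations above. The sketch below assembles them for a hypothetical tank-and-drainpipe combination; all numbers are illustrative, and the laminar-flow resistance formula is assumed to apply.

```python
import math

rho, g, mu = 1000.0, 9.81, 1.0e-3           # water-like properties (hypothetical)
A_tank = 1.0                                # m^2 tank cross-section
L_pipe, D_pipe = 2.0, 0.02                  # m, drain pipe length and diameter
Q_in = 2.0e-4                               # m^3/s supplied by "Father Force"

R_pipe = 128.0 * mu * L_pipe / (math.pi * D_pipe**4)   # fluid resistance of the pipe
C_tank = A_tank / (rho * g)                            # fluid capacitance (compliance)

tau = R_pipe * C_tank                        # first order time constant of the tank/pipe system
h_ss = R_pipe * Q_in / (rho * g)             # steady-state pressure head
print(f"tau  = {tau:8.1f} s  (~{4 * tau:.0f} s to reach steady state)")
print(f"h_ss = {h_ss:8.3f} m")
```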
7.1.5 MULTIPLE STORAGE ELEMENT SCRIPTS
A multiple storage script must involve fluid kinetic energy as well as fluid potential energy. An
illustrative case is that of the U-tube manometer. Fluid in static equilibrium in a vertical U-tube
will contain as much fluid mass or climb as high in the left tube as the right tube as shown in
Figure 7.7. If an external pressure were applied to the free surface in the left tube, a relative fluid
height would develop as the fluid originally in the left tube is displaced into the right tube. If
the pressure were then released, this displaced fluid would then be driven by a net gravitational
loading until it moved back into the left tube. is motion would resemble that of a pendulum
released from a given initial angle.
7.1. FLUID SYSTEMS 169
Figure 7.7: A classical U-tube manometer fluid pendulum.
If the tube friction is not sufficient to prevent it, the fluid will overshoot the original equi-
librium position by virtue of the kinetic energy of flow. en it will climb up the left tube and
“swing” back and forth as a fluid pendulum. Friction between the fluid and the tube walls will pro-
vide damping and the transfer of potential energy to kinetic energy and back will be accompanied
by losses that cause the fluid pendulum swing to eventually cease (see Figure 7.8).
Figure 7.8: The energy catch with losses in a U-tube manometer fluid pendulum.

Writing a momentum balance on a representative fluid control volume, one can show that the differential equation governing the relative height of fluid in the manometer is given by:

$$\rho A L\,\frac{d^2h}{dt^2} + R_{FLUID}A^2\,\frac{dh}{dt} + 2\rho g A h = A\,P(t)$$

where P(t) is an externally imposed gauge pressure at one fluid surface. If we scale the entire equation to normalize the effort variable term of pressure head, one can show that (see Chapter Activities Problem 2):

$$L_{FLUID}C_{FLUID}\,\ddot h + R_{FLUID}C_{FLUID}\,\dot h + h = H_0(t)$$

where the system element equation for the capacitance of a U-tube manometer is

$$C_{FLUID} = A/2\rho g$$

and the pressure head forcing function is

$$H_0(t) = P(t)/2\rho g.$$

You should take note that the coefficient of the head, h, is already unity. As such, we have:

$$[L_{FLUID}C_{FLUID}] = \frac{\mathrm{kg}}{\mathrm{m^4}}\cdot\frac{\mathrm{m^4\,s^2}}{\mathrm{kg}} = \mathrm{s^2}$$

which exhibits units of 1/ω_N², and

$$[R_{FLUID}C_{FLUID}] = \mathrm{s}$$

which exhibits units of 2ζ/ω_N.
In this script, the externally applied pressure drives the fluid mass which is initially opposed by a gravitational spring and tube friction. As the kinetic energy imparted to the mass by the pressure is reduced, an equivalent amount of potential energy is stored in the spring or height in the remaining tube. The fluid system energy is simply transferred from kinetic to potential and back with dissipation provided by the tube walls.

Once again, Captain Potential Energy and Captain Kinetic Energy "have a catch" with a ball of energy while the Evil Dr. Friction takes a bite at each pass.
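Because the normalized coefficients are L_FLUID·C_FLUID = 1/ω_N² and R_FLUID·C_FLUID = 2ζ/ω_N, the natural frequency and damping ratio of the fluid pendulum follow directly from the element values. The sketch below does so for a hypothetical water-filled U-tube; the tube resistance value is assumed, not derived.

```python
import math

rho, g = 1000.0, 9.81          # hypothetical water column
A = 1.0e-4                     # m^2 tube cross-section
L = 0.5                        # m total fluid column length
R_fluid = 5.0e5                # kg/(m^4*s), assumed tube friction

L_fluid = rho * L / A          # inertance of the fluid column
C_fluid = A / (2 * rho * g)    # capacitance of the U-tube "gravitational spring"

omega_n = 1.0 / math.sqrt(L_fluid * C_fluid)     # since L*C = 1/wn^2
zeta = R_fluid * C_fluid * omega_n / 2.0         # since R*C = 2*zeta/wn
print(f"omega_n = {omega_n:.2f} rad/s, zeta = {zeta:.4f}")
```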
Figure 7.9: A dynamic second order system energy exchange with dissipation.
7.2 THERMAL SYSTEMS
There is apparently no thermal element which displays an energy storage mechanism which is complementary to the flow store.
Paul E. Wellstead
Introduction to Physical System Modeling

But as sure as you're born … you're never gonna see no unicorn.
Shel Silverstein
"The Unicorn"

Finally, we introduce our last cast of characters telling the thermal story of effort and flow. The effort-flow analogy holds only in part for thermal systems because Captain Kinetic Energy does not exist! There is no storage nor balance of a momentum-like quantity in any thermal system. Thermal kinetic energy is a unicorn. In other words, you won't find one!
7.2.1 THERMAL EFFORT AND FLOW VARIABLES
In thermal systems, your intuition will again serve you well. You already know that a temperature
difference across an element will cause heat to flow from hot to cold. erefore, temperature plays
the role of effort while heat flow rate is the flow variable.
Table 7.3: Effort, flow, and conserved quantities for thermal systems

  Conserved quantity: heat energy (Joules, J)
  Effort variable:    temperature, T (°C ; °F)
  Flow variable:      heat flow rate, q (Watt = J/s ; BTU/hr)
7.2.2
STORAGE ELEMENTS
The single most interesting characteristic of thermal systems is, arguably, that they can store only one type of energy, namely potential. This is the main, and a crucial, difference between thermal and all other systems. As such, thermal systems can only ever be governed by first order differential equations in time. Let's examine the potential energy character in detail.
Potential Energy Storage Character
Thermal capacitance is defined as the capacity to store an effort differential across an element. Here that translates into a temperature difference. The energy stored per unit temperature difference is a measure of the capacitive strength. The thermal cast member who plays the role of Captain Potential Energy is the mass which provides material-specific heat capacity.

The mathematical expression of the storage by virtue of effort is

$$\text{FLOW} = C_{THERM}\,\frac{d(\text{EFFORT})}{dt}$$

which can then be written for thermal systems

$$q = mc_P\,\frac{d(\Delta T)}{dt} = mc_P\,\frac{d(T - T_{REF})}{dt} = mc_P\,\frac{dT}{dt} \;\Rightarrow\; C_{THERM} \equiv mc_P.$$

Here, our thermal capacitor represents the potential for a thermal system to store thermal heat energy by virtue of a temperature difference contained in the element. A material's ability to store

Figure 7.10: The thermal potential energy storage character is played by the heat capacity carried by
the system mass. It embodies the thermal capacitance of the system.
heat energy for every degree of rise in its temperature is referred to as its heat capacity, c_P.

$$q = C_{THERM}\,\frac{d(\Delta T)}{dt} = mc_P\,\frac{d(T - T_{REF})}{dt} = mc_P\,\frac{dT}{dt}$$

where the quantity mc_P(T − T_REF) is referred to as the internal energy of the system. The thermal capacitance is the total thermal heat capacity

$$C_{THERM} \equiv mc_P.$$

The heat capacity is an extensive quantity and is proportional to the system's thermal mass. It is a measure of how much energy can be stored in a mass before its temperature will increase a single degree:

$$[mc_P] = \mathrm{kg}\cdot\frac{\mathrm{J}}{\mathrm{kg\,{}^\circ C}} = \frac{\mathrm{J}}{{}^\circ\mathrm{C}}.$$
Kinetic Energy Storage Character
In Shel Silverstein’s words, “you’re never gonna see no unicorn.” us, is the thermal kinetic energy
storage character. Kinetic energy elements that store energy by virtue of the flow variable simply do
not exist. Ergo, Paul Wellstead’s notion that “apparently, there are none.” Captian Kinetic Energy
is AWOL! is has tremendous implications for thermal system dynamics. Namely, because both
Captain Potential Energy and Captain Kinetic Energy must be present and accounted for in order
to have a second order system, all thermal systems are necessarily governed by first order equations
in time.
174
7. THE FLUID AND THERMAL CASTS
Figure 7.11: e thermal kinetic energy storage character does not exist!!
7.2.3 DISSIPATIVE ELEMENTS
Dissipation in thermal systems is provided by physical agents that impede heat flow. Heat flow
is impeded differently in solids, fluids, and a vacuum. Heat flows through a solid by means of
conduction, through fluids by means of convection, and through a vacuum by radiation. Radiative
heat flow is highly nonlinear and will not be addressed here.
Conductive Resistance to Heat Flow
Heat flows through solids by conduction, a process in which heat thermally agitates the solid atoms in their lattice. The solid lattice impedes the flow of heat. A temperature difference must be imposed across a solid to drive heat flow through it. By virtue of their lattice structure, solids that are conducting heat provide a thermal resistance to heat flow.

Here, Fourier's law of heat conduction provides a relationship between a temperature difference across a solid of constant thickness and the resulting heat flow rate by virtue of a material property known as the thermal conductivity, k. Fourier said that the heat flux through a solid is proportional to the local temperature gradient through the thermal conductivity

$$q_{COND} = -kA\,\frac{dT}{dx}.$$
Consider once more that in deriving a differential equation for heat flow, we balance heat into and out of a small representative control volume in the system. If the control volume is sufficiently small, any temperature distribution will "look linear" and we can model the temperature gradient as a finite difference

$$q_{COND} = -kA\,\frac{dT}{dx} \approx -kA\,\frac{\Delta T}{\Delta x} = -kA\,\frac{T_2 - T_1}{x_2 - x_1} = kA\,\frac{T_1 - T_2}{L}$$

where L is some representative distance across which conduction is taking place through "a window" of cross-sectional area, A, and at whose ends the temperatures are T₁ and T₂, respectively.

Figure 7.12: Solids provide thermal resistance to heat flow by the energy lost through thermal agitation of strongly bonded solid lattice networks.

We are now able to relate the temperature difference necessary to drive heat flow through a resistive element

$$T_1 - T_2 = \frac{L}{kA}\,q_{COND} = R^{COND}_{THERM}\,q_{COND}$$

where we can now apply the system analogy

$$\text{EFFORT} = R^{COND}_{THERM}\times\text{FLOW} \;\Rightarrow\; R^{COND}_{THERM} \equiv L/kA$$

where the units of thermal resistance are given by

$$[L/kA] = \mathrm{m}\Big/\!\left(\frac{\mathrm{W}}{\mathrm{m\,{}^\circ C}}\cdot\mathrm{m^2}\right) = \frac{{}^\circ\mathrm{C}}{\mathrm{W}}$$

and heat flow rate is measured in watts

$$\mathrm{Watt} \equiv \mathrm{W} \equiv \mathrm{J/s}.$$
Convective Resistance to Heat Flow
Alternatively, heat flow is impeded in a different manner when being transferred through a fluid.
Heat flows through a fluid medium by a process known as convection, and the fluid provides a
thermal resistance as heat convects through the fluid under the influence of an imposed temper-
ature difference.
Figure 7.13: Fluid media provide thermal resistance to heat flow by the energy lost through thermal agitation of loosely bound fluid molecules.

Convective heat flow is governed by Newton's Law of Cooling whereby a solid at temperature, T, surrounded by a large reservoir of fluid at temperature, T∞, will result in heat transferred through the fluid given by

$$q_{CONV} = hA\left(T - T_\infty\right) = hA\,\Delta T$$

where h is referred to as the heat transfer or film coefficient and A is the area through which the heat is flowing. Inverting this relationship, the resulting temperature difference between the solid surface and the fluid becomes

$$\Delta T = \frac{1}{hA}\,q_{CONV} = R^{CONV}_{THERM}\,q_{CONV}$$

and invoking the effort-flow analogy

$$\text{EFFORT} = R^{CONV}_{THERM}\times\text{FLOW} \;\Rightarrow\; R^{CONV}_{THERM} \equiv 1/hA$$

where

$$[R^{CONV}_{THERM}] = [1/hA] = 1\Big/\!\left(\frac{\mathrm{W}}{\mathrm{m^2\,{}^\circ C}}\cdot\mathrm{m^2}\right) = \frac{{}^\circ\mathrm{C}}{\mathrm{W}}.$$
A list of thermal system element equations is given in Table 7.4. A summary of the thermal
cast and the roles they play is given in Figure 7.14.
7.2.4
SINGLE STORAGE ELEMENT SCRIPTS
An idealized case often studied is that of conduction through a solid, insulated wall. The solid is characterized by a capacity to retain heat measured by its temperature. Father Force is now temperature. The heat capacity of the wall allows it to store thermal energy by virtue of its temperature. This is referred to as the solid wall's internal energy. The heat capacity of the wall is

Figure 7.14: The thermal system cast of characters.

Table 7.4: Relevant system element relations for thermal systems

  Field: thermal.  Effort variable: temperature, T.  Flow variable: heat flow rate, q.
  Dissipative (material property law):  Effort = Resistance × Flow;  (T₁ − T₂) = R q;  convective resistance R = 1/(hA);  conductive resistance R = L/(kA).
  Energy storage in effort variable:  Flow = Capacitance × d(Effort)/dt;  q = m c_P dT/dt;  capacitance = thermal heat capacity, C_THERM = m c_P.
  Energy storage in flow variable:  Effort = Inductance × d(Flow)/dt;  not applicable, there is no thermal equivalent or analog for inductance.
likened to a thermal spring being pushed by Father Force as shown in Figure 7.15. This illustrates the capacity of the wall to remain at an elevated temperature and store thermal energy in a form measurable by its effort variable. The solidly bonded molecules of the insulating layer provide resistance to heat flowing through them to the outside, T_OUT < T.

In order to balance heat flow rate through the insulation, we perform a thermal heat energy balance on a representative control volume in the insulation.

$$0 - q_{OUT} = C_{THERM}\,\frac{dT}{dt} = mc_P\,\frac{d(T)}{dt}$$

where T is the temperature of the wall. If the dominant temperature difference is that between the wall and the temperature outside of the insulating layer, T_OUT, then we can represent the heat
flowing out through the insulation as

$$mc_P\,\frac{d(T)}{dt} = -kA\,\frac{\Delta T}{L} = -\left(T - T_{OUT}\right)/R^{INSULATION}_{CONDUCTION}$$

resulting in

$$R^{COND}_{THERM}C_{THERM}\,\frac{dT}{dt} + T = T_{OUT}.$$

Figure 7.15: Heat flow through a control volume across a solid wall.
The excitation from the outside world is provided by the external temperature. The solution of this equation is a temperature changing monotonically from T(0) to T_OUT in roughly four time constants. The time constant is given by the classical RC expression using the analogy to electrical systems

$$R^{COND}_{THERM}C_{THERM} = \frac{mc_P L}{kA} = \tau.$$

Note that the units of the time constant are:

$$\tau \equiv R^{COND}_{THERM}C_{THERM} \equiv \frac{mc_P L}{kA}\;[=]\;\frac{\mathrm{J}}{{}^\circ\mathrm{C}}\cdot\frac{{}^\circ\mathrm{C}}{\mathrm{W}} = \mathrm{s}$$

or units of time. The analogy delivers a parameter known to characterize all first order systems in time as we've described them.
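A quick numeric check of the wall time constant τ = mc_P L/(kA) is sketched below. The material values are hypothetical (a concrete-like thermal mass behind a thin insulating layer) and serve only to show that the RC product indeed comes out in seconds, on the scale of hours for a well-insulated wall.

```python
# Minimal numeric check of tau = m*cP*L/(k*A) with hypothetical wall properties.
m_wall = 500.0        # kg of thermal mass
cP = 900.0            # J/(kg*degC)
L_ins = 0.10          # m insulation thickness
k_ins = 0.05          # W/(m*degC) insulation conductivity
A = 10.0              # m^2 wall area

C_therm = m_wall * cP             # J/degC
R_cond = L_ins / (k_ins * A)      # degC/W
tau = R_cond * C_therm            # seconds
print(f"tau = {tau:.0f} s  (about {tau / 3600:.1f} h; ~4*tau = {4 * tau / 3600:.1f} h to settle)")
```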
Alternatively, a simple illustration of convective heat transfer occurs during quenching:
when a hot, small solid object is transferred to a large cooling bath (Fig. 7.16). In order to balance
Figure 7.16: Heat flow through a control volume contained in a fluid surrounding an object from
which heat is being transferred.
heat flow rate in the fluid surrounding the quenched sphere, we perform a heat energy balance
on a representative control volume in the fluid reservoir.
$$q_{IN} - q_{OUT} = q_{STORED}$$

$$0 - hA\left(T - T_\infty\right) = mc_P\,\frac{dT}{dt} = C_{THERM}\,\frac{dT}{dt}.$$
Invoking the effort-flow analogy

$$0 - \frac{1}{R^{CONV}_{THERM}}\left(T - T_\infty\right) = mc_P\,\frac{dT}{dt} = C_{THERM}\,\frac{dT}{dt}.$$

Rearranging

$$R^{CONV}_{THERM}C_{THERM}\,\frac{dT}{dt} + T = T_\infty.$$
The excitation from the outside world is provided by the quench tank fluid reservoir temperature. The solution of this equation is a temperature changing monotonically from T(0) to T∞ in roughly four time constants. The time constant is given by the classical RC expression using the analogy to electrical systems

$$R^{CONV}_{THERM}C_{THERM} = \frac{mc_P}{hA} = \tau.$$

7.3 CHAPTER ACTIVITIES
Problem 1 A U-tube manometer is a relatively simple device used to measure pressure. When the fluid level is displaced as shown above and released, the following oscillation in relative fluid height is observed:

[Plot: free response displacement history, h(t) vs. time t.]

When a periodic pressure is applied at one end, a force and mass balance on the system gives the following governing differential equation for the fluid height, h(t):

$$\rho A L\,\frac{d^2h}{dt^2} + R\rho A^2\,\frac{dh}{dt} + 2\rho g A h = PA\cos 4t$$

$$h(t = 0) = 5\ \mathrm{cm}, \qquad \frac{dh}{dt}(t = 0) = 0\ \mathrm{cm/s}$$

where ρ is the fluid density, A = 1 cm², and L = 5 cm.
(a) Using representative analogies in Table 7.2, show that the differential equation governing the height, h, can be written as:

$$L_{FLUID}C_{FLUID}\,\ddot h + R_{FLUID}C_{FLUID}\,\dot h + h = H_0(t)$$

$$H_0(t) = P(t)/2g\rho \qquad C_{FLUID} = A/2\rho g.$$

(b) Write an algebraic expression for the fluid inertance, i.e., the fluid inertia.

(c) Calculate the natural frequency and the fluid resistance, R, in this pendulum system if it is critically damped. Assume the acceleration due to gravity is given by g = 10 m/s².

(d) What periodic pressure magnitude, P, needs to be applied to obtain a steady-state output height of 2 cm (an amount that will just cause liquid to spill out of the U-tube)?

(e) What are the characteristic times for the system in part (d)?

(f) If the tube resistance is removed, i.e., R = 0 1/(m-s), compute the total solution for the fluid height, h(t), as a function of time.
Problem 2 The height of fluid in a tank with two outlet pipes, one at the bottom of the tank and one 2 meters directly above it, is given by the following governing differential equation:

$$\frac{A}{g}\,\frac{dh}{dt} + \frac{1}{R_1}h + \frac{1}{R_2}\left(h - H\right) = Q_{IN}$$

A = 20 m²; H = 2 m; R₁ = 2 1/ms; R₂ = 2 1/ms; Q_IN = 30 m³/s; g ≈ 10 m/s².
(a) What are the conserved quantity, and the effort and flow variables?

(b) Sketch the response for the height of fluid in the tank. Assume the initial height is 5 meters.

(c) Assume the system has already come to steady state. From this new initial state, what is the new steady-state height of fluid in the tank if the top outlet pipe is suddenly lowered 1 m, i.e., H = 1 m?

(d) How long will it take to attain this new steady-state height?
Problem 3 The differential equation governing heat transfer in the thermocouple probe quenched suddenly in a fluid bath maintained at T∞ is given by

$$mc_P\,\frac{dT}{dt} = hA\left(T_\infty - T\right)$$

where m is the mass of the thermocouple bead, c_P is its specific heat per unit mass, h is the heat transfer coefficient of still air, and A_S is the surface area of the thermocouple bead. Suppose it is known that the time constant for an experiment is 3 minutes, from which it is determined that the heat transfer coefficient of still air is h = 15 W/m²·°C. If you know the heat transfer coefficient of still ice water is h = 3600 W/m²·°C, roughly how long will it take for the thermocouple bead to reach steady state when the probe is re-immersed quickly into the ice water?
Problem 4 For the thermocouple probe quenched in Problem 3, the temperature, T(t), is governed by the following 1st order ODE when plunged suddenly into boiling water from standing air:

$$\frac{mc_P}{hA_S}\,\frac{dT}{dt} + T = 100, \qquad T(t = 0) = 20\ {}^\circ\mathrm{C}$$

(a) Sketch the dynamic thermal response of this first order system of a room temperature mass suddenly placed in boiling water.

(b) Consider that at the same time, you have a second mass with double the heat capacity of the original mass, c_P, that is at an initial temperature of 150 degrees C when it is placed suddenly in a reservoir of liquid with a heat transfer coefficient 40% of that for water. On the same graph, sketch the response of this second mass.

(c) Write the functional form of the temperature solution for the second mass.
Problem 5 Consider an electrical analogy to a human artery provided by the 4-element Windkessel model. The capacitor represents the elasticity of the arterial wall, i.e., ranges of this value can model hardening of the arteries. The resistance to blood flow is determined by the viscosity of blood, i.e., a dehydrated patient will exhibit more viscous blood and a higher resistance to flow. An inductor is said to simulate inertia of the blood, i.e., it can model the density of blood changing as when its iron content becomes depleted.

P(t) = 25 cos ωt V;  R₁ = 1000 Ω;  R₂ = 1000 Ω;  C = 0.002 f;  L = 40 H
In this analogy, the current represents the blood flow rate, the applied voltage source rep-
resents the effort variable of blood pressure, and the frequency of the input excitation is the
heart rate or pulse (where it is understood that rad/s correlates with beats-per-minute).
The governing differential equation for the system blood flow rate (current in the model) is given by:

    LC \frac{d^2 i_C}{dt^2} + \left(R_1 C + \frac{L}{R_2}\right)\frac{di_C}{dt} + \left(1 + \frac{R_1}{R_2}\right) i_C = \frac{P(t)}{R_2} + C\frac{dP}{dt}

where R_1 = R_2 = R.

Consider that the so-called inertia of the fluid is small, but not zero. Mathematically, this implies LC ≪ RC.

(a) Make a mathematically convincing argument, i.e., back it up with the necessary equations/relationships, to show that as the heart rate increases dramatically, a condition known as tachycardia, the blood flow rate decreases for a given constant blood pressure. Assume any response to initial conditions has decayed away and the system is in steady state.

HINT: Consider the transfer function for I/(P/R) when formulating your answer!

(b) Describe, in words, the behavior of the amplification ratio, I/(P/R), at low (r ≪ 1), intermediate (r ≈ 1), and high (r ≫ 1) normalized frequencies, where r = ω/ω_N.

(c) For the given input magnitude blood pressure of 25 V, if a life-viable cutoff blood flow rate in steady state is 4 milliamperes, at what heart rate, ω, will the patient expire?
Problem 6 Consider an older weightlifter who loves sausage and whose diet has hardened his
arteries. An electrical 4-element Windkessel model identical in structural form to that for
Problem 5 may be used. For such an analogous electrical heart, the governing differential
equation and corresponding transfer function for the weightlifter’s blood flow rate are given
by:
    \left(\frac{1}{R_1} + \frac{1}{R_2}\right) I(t) + \frac{L}{R_2}\dot{I}(t) + LC\,\ddot{I}(t) = \frac{1}{R_2} P(t) + C\dot{P}(t)

    \frac{I_{OUT}}{P_{IN}/(R_1 + R_2)} = \frac{1}{(1 - r^2) + 2\zeta r j}

Assume: L = 40 H; C = 100 μF; R_1 = 9000 Ω; R_2 = 1000 Ω; P(t) = 1000 cos(50t) V.

(a) For these conditions, what is the steady-state amplitude of blood flow rate (current)?

(b) If the inductance is increased 5-fold to 200 H and the capacitance is further reduced 5-fold to 20 μF (i.e., the arteries continue to harden), to what extent will this change the steady-state blood flow rate amplitude?

(c) Find an expression for the amplification ratio, A, as a function of damping ratio, ζ, at resonance (r = 1).
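As a companion to the transfer function given above, the short Python sketch below (an illustrative aid, not part of the original text) evaluates the standard second-order magnitude |1/((1 − r²) + 2ζrj)| = 1/√((1 − r²)² + (2ζr)²) at a few normalized frequencies; the damping ratio used is a placeholder value, not one computed from the problem data.

    import math

    def amplification_ratio(r: float, zeta: float) -> float:
        """Magnitude of 1 / ((1 - r^2) + 2*zeta*r*j) for a standard second-order system."""
        return 1.0 / math.hypot(1.0 - r**2, 2.0 * zeta * r)

    zeta = 0.2                   # placeholder damping ratio, chosen only to illustrate the trend
    for r in (0.1, 1.0, 10.0):   # low, resonant, and high normalized frequency
        print(f"r = {r:4.1f}  ->  A = {amplification_ratio(r, zeta):.3f}")

At resonance (r = 1) the ratio reduces to 1/(2ζ), and at high normalized frequency it rolls off toward zero, which is the kind of behavior the surrounding problems ask you to describe.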
Problem 7 Consider the electrical circuit analog for a quenched solid in an insulating jacket as
shown here:
The resulting governing differential equation for the solid's temperature is given by:

    \rho V c_P \frac{dT}{dt} + \left(\frac{1}{R_1} + \frac{1}{R_2}\right) T = \frac{1}{R_1} T_{BATH}\cos(\omega t)

(a) When forced by a periodic input at low frequency, the amplitude ratio T/T_{BATH} approaches what value? Give your answer as an algebraic expression in terms of R_1 and R_2.

(b) What does the amplitude ratio T/T_{BATH} approach for high-frequency input?
Problem 8 Before insulating materials were readily available, buildings were thermally insulated
by endowing their walls with sufficiently large thermal time constants.
When driven by the daily solar thermal fluctuation, the differential equation governing θ, the fluctuation in wall temperature above and below its average daily value, is given by:

    4\dot{\theta} + \theta = 12\cos\left(\frac{\pi}{12} t\right) °F

where the time, t, is measured in hours.

(a) What is the amplitude of the steady-state thermal fluctuation of the wall (in °F)?

(b) For what value of the thermal time constant will the amplitude of the steady-state thermal wall fluctuation drop to ±2 °F, effectively insulating the building?
(c) How would you re-design the wall so that the steady-state response is reached in ap-
proximately 6 hours? You may state your answer in terms of the characteristic time or
times of the system response.
(d) Draw an analogous electrical circuit whose behavior would be equivalent in some sense
to this thermal problem. Label all the analogous electrical system elements correspond-
ing to each of the thermal elements and describe the relevant input forcing function
to the electrical circuit.
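For a first-order system of the form τ θ̇ + θ = Θ₀ cos(ωt), the steady-state fluctuation amplitude is Θ₀/√(1 + (ωτ)²). The Python sketch below (an illustrative aid, not part of the original text) evaluates that expression for a few trial time constants at the daily forcing frequency used above, which is the kind of calculation parts (a) and (b) call for.

    import math

    def steady_state_amplitude(forcing_amp, omega, tau):
        """Steady-state amplitude of tau*dtheta/dt + theta = forcing_amp*cos(omega*t)."""
        return forcing_amp / math.sqrt(1.0 + (omega * tau) ** 2)

    omega = math.pi / 12.0           # daily forcing frequency [rad/hr], per the problem
    for tau_hr in (1, 4, 16, 64):    # a few trial wall time constants [hr]
        amp = steady_state_amplitude(12.0, omega, tau_hr)
        print(f"tau = {tau_hr:3d} hr  ->  wall fluctuation amplitude ~ {amp:.1f} degF")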
Problem 9 Consider Thomas Jefferson's home at Monticello, built before insulation was available. In the 18th century, buildings were thermally insulated by endowing their walls with sufficiently large thermal time constants.

When driven by the daily solar thermal fluctuation, the differential equation governing θ, the fluctuation in wall temperature above and below its average daily value, is given by:

    m c_p \dot{\theta} + \frac{kA}{L}\theta = \frac{40\,kA}{L}\cos\left(\frac{\pi}{12} t\right) °C

where the time, t, is measured in hours, and:

    kA = 100 J·m/hr·°C,   L = 0.25 m,   m c_p = 1600 J/°C

The dimensionless Biot number, Bi = hL/k = R_COND/R_CONV, quantifies the relative magnitudes of conductive and convective resistances in a thermal system. The wall here is designed such that its Biot number is very large so that convection to the air surrounding the walls can be neglected.

(a) What is the amplitude of the steady-state thermal fluctuation of the wall (in °C)?

(b) For what value of the thermal time constant will the amplitude of the steady-state thermal wall fluctuation drop to ±5 °C, effectively insulating the building?
(c) Re-design the wall by altering its thickness only so that the time constant found in part (b) is obtained.
(d) With the time constant from part (c), how many hours will any transient now last en
route to steady state?
(e) Draw an analogous electrical circuit whose behavior would be equivalent to this ther-
mal problem. NOTE: You must draw the actual circuit and then label all the analogous
electrical system elements corresponding to each of the thermal elements and describe
the relevant input forcing function to the electrical circuit.
CHAPTER 8
Summary
The rules that describe nature seem to be mathematical. It is not a
characteristic necessity of science that it be mathematical. It just turns
out you can state mathematical laws which work to make powerful
predictions. Why nature is mathematical is, again, a mystery.
Richard Feynman
The Meaning of It All
Fortunately, today’s online world, with its advances in video and
animation, offers several underused opportunities for the informal
dissemination of mathematical ideas. Perhaps the most essential message
to get across is that with math you can reach not just the sky or the stars
or the edges of the universe, but timeless constellations of ideas that lie
beyond.
Manil Suri
How to Fall in Love With Math
A la Suri [15], what we’ve sought to offer here is a digestible version of building govern-
ing differential equations from the cartoon building blocks of characters with whom are associ-
ated fundamental relations from the effort-flow analogy. We’ve presented an animated storyline
wherein Captains Potential Energy and Kinetic Energy store system energy while the Evil Dr.
Friction finds ways to steal it. We motivate these characters as roles in a common movie script
about energy transfer in systems dynamics. We’ve then introduced the mechanical, electrical,
fluid, and thermal casts that play these energy roles in the separate system disciplines.
It has been our intention to simply provide a mnemonic device to remember that separate
physical actors always play the same roles in this movie. We also associate with these roles in the
script equations relating effort and flow. Simple conservation balances then hopefully provide a
more straightforward way to remember how to derive a governing differential equation for the
system. We have also provided the story to show how features of our superheroes characterize
the solutions to these equations. More than half of the students taught with these character rep-
resentations of the effort-flow analogy claim these stories made coming to terms with systems
dynamics more fun and the concepts more memorable. Learning can be fun. Even learning math
can be fun!
Figure 8.1: The cast of the movie script for systems dynamics: Father Force, Captains Potential and
Kinetic Energy, and the “not always Evil” Dr. Friction!
Afterword
This book has been written to present multi-disciplinary systems in a common light with an
encompassing story focused on energy storage and dissipation. Based on our experience teaching
the effort-flow analogy with these energy superheroes, we have found that the mnemonic of char-
acters performing a common script played by discipline-specific actors helps students more clearly
identify with the theme common to these dynamic systems. We have chosen a variety of chapter
activities that illustrate this common behavior across engineering disciplines. After reading this
manuscript, if you have comments on the presentation of the storyline or the orchestration of
the chapter activities and examples or wish to suggest additional examples that emphasize system
similitude across disciplines, feel free to contact the authors at [email protected]. Thank you, in advance, for any input you have.
Bibliography
[1] Dym, C. (2004). Principles of Mathematical Modeling. Academic Press. 16
[2] Feynman, R. P. (1998). The Meaning of It All: Thoughts of a Citizen-Scientist. Perseus Books. 1, 16
[3] Feynman, R., Gottlieb, A., and Leighton, R. (2006). Tips on Physics. Pearson Addison Wesley. 1
[4] Feynman, R. (2009). Richard Feynman on Electricity. https://www.youtube.com/watch?v=kS25vitrZ6g. 22
[5] Feynman, R. (2012). What is the relationship between mathematics, science and nature? http://www.researchgate.net/post/What_is_the_relationship_between_Mathematics_Science_and_Nature. xiv, 1
[6] Jensen, B. D. and McLain, T. W. (2012). System Dynamics. http://twmclasses.groups.et.byu.net/lib/exe/fetch.php?media=483:335notes.pdf. xiii
[7] Johnson, A. T. (1998). Biological Process Engineering: An Analogical Approach to Fluid Flow, Heat Transfer, and Mass Transfer Applied to Biological Systems. Wiley-Interscience.
[8] Johnson, A. T. (2001). Teaching by analogy: The use of effort and flow variables. Proceedings of the 2001 American Society of Engineering Education Annual Conference & Exposition, Session 2973:1–3. xiii
[9] Lehrer, J. (2012). IMAGINE: How Creativity Works. Houghton Mifflin. xvii
[10] Ogata, K. (2003). Systems Dynamics. Prentice Hall. 73
[11] Palm, W. (2013). Systems Dynamics. McGraw Hill-Engineering-Math. 73
[12] Public Broadcasting System–NOVA (1993). The Best Mind Since Einstein - Richard Feynman Biography. Television Production. 16
[13] Singer, S. and Smith, K. A. (2013). Discipline-based education research: Understanding and improving learning in undergraduate science and engineering. Journal of Engineering Education, 00:1–4. DOI: 10.1002/sce.21091. xv
[14] Sofia, J. W. (1995). The fundamentals of thermal resistance measurement. Technical report, Analysis Tech. 16
[15] Suri, M. (2013). How to Fall in Love With Math. http://www.nytimes.com/2013/09/16/opinion/how-to-fall-in-love-with-math.html. 189
[16] Susskind, L. and Hrabovsky, G. (2013). The Theoretical Minimum: What You Need to Know to Start Doing Physics. Basic Books. 4
[17] Tippett, K. (2010). Einstein's God. Penguin Books. xiii
[18] Wellstead, P. E. (2000). Introduction to Physical System Modelling. www.control-systems-principles.co.uk. xiii
[19] Woods, R. L. and Lawrence, K. L. (1997). Modeling and Simulation of Dynamic Systems. Prentice Hall. xiii, 73
Authors’ Biographies
VINCENT C. PRANTIL
Vincent C. Prantil earned his B.S., M.S., and Ph.D. in Mechanical Engineering from Cornell
University where he was awarded the Sibley Prize in Mechanical Engineering and held an An-
drew Dickson White Presidential Fellowship. He was a Senior Member of Technical Staff at
Sandia National Laboratories California in the Applied Mechanics and Materials Modeling Di-
rectorates for eleven years. He joined the faculty in the Department of Mechanical Engineering
at the Milwaukee School of Engineering in September 2000 where he presently specializes in
finite element model development, numerical methods, and dynamic systems modeling. Since
joining academia, he has become interested in the use of animation to both engage students and
as a suggestive tool for students to use as a mnemonic device to enhance long-lasting learning. In
addition to working with Tim Decker in Milwaukee, he has teamed up with colleagues at North-
ern Illinois University and Rutgers University in their efforts to showcase the power of video
simulation for teaching undergraduate engineering concepts in dynamic modeling and controls
theory.
TIMOTHY DECKER
Timothy Decker has played an important role in educational engagement over the past several
decades. With extensive experience in game animation, character design and children’s television,
Tim has been an Animation Supervisor for Disney Interactive, lead animator for Knowledge Ad-
venture, and layout artist/animator for the award-winning television series “The Simpsons” as well as Teenage Mutant Ninja Turtles, Alvin and the Chipmunks, and The Critic. He has also appeared on
many episodes of the “Imagination Station” as a guest artist inspiring children in the art of anima-
tion and cartooning. He has extensive experience directing animation in Canada, India, Korea,
and the United States. Throughout his career, Tim has won numerous gaming awards from PC
Magazine, Communication Arts Magazine, Family Magazine and the Academy of Arts and Sci-
ences. Tim has been awarded three regional Emmy awards for his participation with Milwaukee
Public Television. Tim holds a Bachelor’s degree in Character Animation and Film from Califor-
nia Institute of the Arts (CalArts) and an Associate's degree in Illustration from Rocky Mountain
College of Art and Design. Tim is enjoying his second career as a Lecturer at Peck School of the
Arts at the University of Wisconsin–Milwaukee and Milwaukee Area Technical College. Tim
teaches animation, character development, puppetry, claymation, and drawing for animation.
His students are major participants in many national and international film festivals. Tim believes
that immersive virtual environments are advantageous for communicating complex ideas, and that
animation has the ability to support the telling of scientific stories in medical, engineering, and
applied sciences.
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=8355742.pdf&bkn=8355741&pdfType=book
|
Series ISSN: 2327-6738
Series Editor: Robert Beitle, Jr., University of Arkansas
Concise Introduction to Cement
Chemistry and Manufacturing
Tadele Assefa Aragaw, Bahir Dar University, Ethiopia
This book is designed to be used in an introductory sophomore-level undergraduate course in chemical
engineering, civil engineering, industrial engineering, chemistry, and/or industrial chemistry. Senior-level
students in resource development, soil science, and geology might also find this book useful. In addition,
it is our hope that even advanced mathematics-oriented high school seniors might find the material easy
to master as well.
This book emphasizes concepts, definitions, chemical equations, and descriptions with which
some chemical science professionals struggle. It stresses the importance of maintaining uniformly high
standards in pure chemical science and manufacturing technology while still keeping in mind that
procedures that might seem strange also yield results that prove effective.
ABOUT THE AUTHOR
Tadele Assefa Aragaw is a lecturer in Chemistry and Environmental Engineering, a Researcher, and a
Facility Manager in the Chemical and Food Engineering at the Bahir Dar Institute of Technology. Since
2017 he has been involved in a research project in the area of Ethiopian kaolin characterization for different
industrial applications as well as an indigenous microalgae investigation from wastewater for biodiesel
production. In 2012, Tadele received his B.S. in Chemistry from the University of Gondar. In 2014, he
started studying for his master’s degree in Environmental Engineering while also teaching an Analytical
Chemistry and Environmental Engineering course for Chemical Engineering students. He received his
M.Sc. in Environmental Engineering in 2016 from the Bahir Dar Institute of Technology, Bahir Dar
University. Tadele has published articles in the field of his profession, Environmental Engineering.
About SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis Digital
Library of Engineering and Computer Science. Synthesis books provide
concise, original presentations of important research and development topics,
published quickly, in digital and print formats.
store.morganclaypool.com
Concise Introduction to
Cement Chemistry and
Manufacturing
Tadele Assefa Aragaw
Synthesis Lectures on
Engineering
Each book in the series is written by a well known expert in the field. Most titles cover subjects
such as professional development, education, and study skills, as well as basic introductory
undergraduate material and other topics appropriate for a broader and less technical audience.
In addition, the series includes several titles written on very specific topics not covered
elsewhere in the Synthesis Digital Library.
Concise Introduction to Cement Chemistry and Manufacturing
Tadele Assefa Aragaw
2018
Data Mining and Market Intelligence: Implications for Decision Making
Mustapha Akinkunmi
2018
Empowering Professional Teaching in Engineering: Sustaining the Scholarship of
Teaching
John Heywood
2018
The Human Side of Engineering
John Heywood
2017
Geometric Programming for Design Equation Development and Cost/Profit
Optimizaton, Third Edition
Robert C. Creese
2016
Engineering Principles in Everyday Life for Non-Engineers
Saeed Benjamin Niku
2016
A, B, See... in 3D: A Workbook to Improve 3-D Visualization Skills
Dan G. Dimitriu
2015
The Captains of Energy: Systems Dynamics from an Energy Perspective
Vincent C. Prantil and Timothy Decker
2015
Lying by Approximation: The Truth about Finite Element Analysis
Vincent C. Prantil, Christopher Papadopoulos, and Paul D. Gessler
2013
Simplified Models for Assessing Heat and Mass Transfer in Evaporative Towers
Alessandra De Angelis, Onorio Saro, Giulio Lorenzini, Stefano D’Elia, and Marco Medici
2013
The Engineering Design Challenge: A Creative Process
Charles W. Dolan
2013
The Making of Green Engineers: Sustainable Development and the Hybrid Imagination
Andrew Jamison
2013
Crafting Your Research Future: A Guide to Successful Master’s and Ph.D. Degrees in
Science & Engineering
Charles X. Ling and Qiang Yang
2012
Fundamentals of Engineering Economics and Decision Analysis
David L. Whitman and Ronald E. Terry
2012
A Little Book on Teaching: A Beginner’s Guide for Educators of Engineering and
Applied Science
Steven F. Barrett
2012
Engineering Thermodynamics and 21st Century Energy Problems: A Textbook
Companion for Student Engagement
Donna Riley
2011
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo
2011
Systems Engineering: Building Successful Systems
Howard Eisner
2011
Fin Shape Thermal Optimization Using Bejan’s Constructal Theory
Giulio Lorenzini, Simone Moretti, and Alessandra Conti
2011
Geometric Programming for Design and Cost Optimization (with illustrative case study
problems and solutions), Second Edition
Robert C. Creese
2010
Survive and Thrive: A Guide for Untenured Faculty
Wendy C. Crone
2010
Geometric Programming for Design and Cost Optimization (with Illustrative Case Study
Problems and Solutions)
Robert C. Creese
2009
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey and Jeffrey W. Holmes
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Analog Multimedia
Explorations
Lina J. Karam and Naji Mounsef
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Digital Multimedia and
Robotics Explorations
Lina J. Karam and Naji Mounsef
2008
CAD/CAM of Sculptured Surfaces on Multi-Axis NC Machine: The DG/K-Based
Approach
Stephen P. Radzevich
2008
Tensor Properties of Solids, Part Two: Transport Properties of Solids
Richard F. Tinder
2007
Tensor Properties of Solids, Part One: Equilibrium Tensor Properties of Solids
Richard F. Tinder
2007
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
2007
Project Management for Engineering Design
Charles Lessard and Joseph Lessard
2007
Relativistic Flight Mechanics and Space Travel
Richard F. Tinder
2006
Copyright © 2018 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations
in printed reviews, without the prior permission of the publisher.
Concise Introduction to Cement Chemistry and Manufacturing
Tadele Assefa Aragaw
www.morganclaypool.com
ISBN: 9781681733234 paperback
ISBN: 9781681733241 ebook
ISBN: 9781681733258 hardcover
DOI 10.2200/S00839ED1V01Y201803ENG031
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING
Series ISSN
Print 1939-5221 Electronic 1939-523X
Concise Introduction to
Cement Chemistry and
Manufacturing
Tadele Assefa Aragaw
Bahir Dar University, Ethiopia
SYNTHESIS LECTURES ON ENGINEERING #31
ABSTRACT
This book is designed to be used in an introductory sophomore-level undergraduate course
in chemical engineering, civil engineering, industrial engineering, chemistry, and/or industrial
chemistry. Senior-level students in resource development, soil science, and geology might also
find this book useful. In addition, it is our hope that even advanced mathematics-oriented high
school seniors might find the material easy to master as well.
This book emphasizes concepts, definitions, chemical equations, and descriptions with
which some chemical science professionals struggle. It stresses the importance of maintaining
uniformly high standards in pure chemical science and manufacturing technology while still
keeping in mind that procedures that might seem strange also yield results that prove effective.
KEYWORDS
cement chemistry, cement production, clinkerization, dry process, manufacturing,
Portland cement, wet process
Contents

Preface . . . . . . . . xiii
Acknowledgments . . . . . . . . xv
1 Introduction . . . . . . . . 1
  1.1 Classification of Cements . . . . . . . . 1
2 Raw Materials and Their Components for Cement Production . . . . . . . . 3
  2.1 The Raw Material Components . . . . . . . . 3
  2.2 Mode of Formation of Limestones . . . . . . . . 4
  2.3 Carbonate Associations . . . . . . . . 5
3 Additives and Corrective Materials in Cement Production . . . . . . . . 7
4 Exploration of Raw Materials for Cement Manufacturing . . . . . . . . 9
  4.1 Significance of Raw Materials Exploration in Cement Making . . . . . . . . 9
  4.2 Objectives of Exploration . . . . . . . . 9
5 The Composition of Portland Cement and Production Process . . . . . . . . 11
  5.1 Clinkerization Process . . . . . . . . 13
    5.1.1 Clinker Characteristics . . . . . . . . 13
    5.1.2 Clinkerization Phenomenon vis-à-vis Clinker Characteristics . . . . . . . . 14
  5.2 Raw Materials for Cement Making . . . . . . . . 15
  5.3 Chemical Composition of Raw Mixes and Compositional Compatibility . . . . . . . . 16
    5.3.1 Module Values of Raw Mixes . . . . . . . . 16
    5.3.2 Effect of Chemical Composition on the Reactivity and Burnability of Raw Mixes . . . . . . . . 18
  5.4 Particle Size of Ground Materials in Raw Mixes and Physical Properties of Clays . . . . . . . . 19
  5.5 Summary and Conclusions . . . . . . . . 19
6 Burnability and Clinkerization of Cement Raw Mixes . . . . . . . . 23
  6.1 Burnability . . . . . . . . 23
  6.2 Reactivity . . . . . . . . 26
  6.3 Reaction Sequence . . . . . . . . 26
7 Manufacturing Portland Cement . . . . . . . . 29
  7.1 Dry Process . . . . . . . . 29
  7.2 Wet Process . . . . . . . . 29
8 Testing Portland Cement . . . . . . . . 33
  8.1 Samples for Testing . . . . . . . . 33
  8.2 Chemical Composition . . . . . . . . 33
  8.3 Fineness . . . . . . . . 34
  8.4 Consistency of Standard Cement Paste . . . . . . . . 34
  8.5 Soundness . . . . . . . . 35
  8.6 Setting Time . . . . . . . . 36
  8.7 Compressive Strength . . . . . . . . 37
  8.8 Tensile Strength . . . . . . . . 38
9 Hydration of Portland Cement . . . . . . . . 39
10 Different Kinds of Cement . . . . . . . . 41
  10.1 Rapid Hardening or High Early Strength Cement . . . . . . . . 41
  10.2 High Alumina Cement . . . . . . . . 41
  10.3 Quick Setting Cement . . . . . . . . 42
  10.4 Portland Slag Cement . . . . . . . . 42
  10.5 Low Heat Cement . . . . . . . . 42
  10.6 Air Entraining Portland Cement . . . . . . . . 42
  10.7 White Cement . . . . . . . . 43
  10.8 Colored Cement . . . . . . . . 43
  10.9 Portland Pozzolana Cement . . . . . . . . 43
  10.10 Chemically Inert (Acid-resistant) Cements . . . . . . . . 43
11 Storage of Cement . . . . . . . . 45
12 Technical Analysis of Cement . . . . . . . . 47
  12.1 Solution Preparation and Apparatus/Reagents Used . . . . . . . . 50
  12.2 Sample Analysis and Their Report . . . . . . . . 53
    12.2.1 Experiment No. 1 . . . . . . . . 53
    12.2.2 Experiment No. 2 . . . . . . . . 54
    12.2.3 Experiment No. 3 . . . . . . . . 54
    12.2.4 Experiment No. 4 . . . . . . . . 55
    12.2.5 Experiment No. 5 . . . . . . . . 57
    12.2.6 Experiment No. 6 . . . . . . . . 58
    12.2.7 Experiment No. 7 . . . . . . . . 59
  12.3 Conclusion . . . . . . . . 62
References . . . . . . . . 63
Author's Biography . . . . . . . . 65
Preface
This book deals with the chemistry of the principal silicate and aluminate cements used in build-
ing and civil engineering. Emphasis is placed throughout on the underlying science and the manufacturing process rather than on detailed practical applications, which are well covered in other works.
In order to help readers understand the context in which this book has been drafted for the chemical engineering, civil engineering, industrial chemistry, chemistry, soil science, and geology disciplines, the book presents summary information collected from a limited number of sources and written from the author's understanding of the science behind cement chemistry and manufacturing. The information provided in this book is intended to be used as an input to determining the principles of production and chemistry of cement in specific areas.
The rest of this section describes the type of information that is provided in each chapter
of the book.
Chapters 1, 2, and 3 provide general information on cement production in the world,
together with its marketing, classification, and type of cements, chemistry, and raw material,
the formation of limestone, additives, and pozzolan materials in cement processing.
Chapters 4, 5, 6, and 7 describe in more detail the mining of raw materials and their composition; the clinkerization and production processes of cement, including the advantages and disadvantages of the dry and wet production mechanisms from quality and economic standpoints; and burnability.
Chapters 8, 9, 10, and 11 present the testing of the produced cement materials with certain
parameters; hydration effects of Portland cement for the cement strength; different types of
cement and storage mechanisms.
Chapter 12 describes the technical analysis of basic cement quality parameters with a
detailed laboratory procedure.
It is therefore of the utmost importance that the information contained in this book fully takes into account the best available techniques, which change over time. This book will be
reviewed and updated as appropriate.
Tadele Assefa Aragaw
April 2018
Acknowledgments
There are far too many people to thank for their contributions to this book. Whether it was
help with the chemical equations, the writing of the text, preparation of illustrations, or overall
understandability, all contributions are greatly appreciated.
I would also like to thank the reviewers, Dr. Molla Bahiru, University of Gondar and
Dr. Belete Asefa Aragaw, Bahir Dar University, for their many helpful comments and sugges-
tions.
Tadele Assefa Aragaw
April 2018
CHAPTER 1
Introduction
World cement [3] production has registered more than a nine-fold increase over the last three
and a half decades—from 133 million tons in 1950 to 860 million tons in 1979 to 1 billion
tons in 1985. At the same time, there has been tremendous technological progress in the ce-
ment manufacturing process, which is being continuously updated through the introduction of
new technological advances for capacity enhancement as well as by various devices for energy
economy and conservation. Such developments during the past few decades have inevitably im-
posed greater responsibility on the geologists and mining engineers engaged in exploration and
exploitation of raw materials for cement manufacture.
Cement is the name given to mineral powders which when mixed with water form a plastic
body that easily can be shaped and that hardens after some time to yield a strong, stone hard
body. Cement is used for making building and plastering mixes (lime), structural and decorative
articles (Plaster of Paris and magnesia cements), prefabricated concrete and reinforced concrete
structural items, underground and hydraulic structures, etc. As can be understood from the uses,
the production of cement in a country, particularly a developing one such as Ethiopia, has to be
given great importance.
1.1 CLASSIFICATION OF CEMENTS
Depending on their uses and properties, cements are divided into three main groups.
1. Air cements: harden and retain their strength in air. They include: air lime, gypsum, and
magnesium cements. These materials are used for making buildings and plastering (lime)
and structural and decorative articles (Plaster of Paris and magnesia cements).
2. Hydraulic cements: harden and retain their strength in water. They include: hydraulic lime,
Roman cement, Portland cement, and cement with various admixtures (Pozzolan cement,
Portland slag cement), alumina cements, etc. Hydraulic cements are more important than
air cements and are used for making prefabricated concrete and reinforced-concrete struc-
tural items and parts of buildings as well as underground and hydraulic structures.
3. Acid-resistant cements: after hardening, these withstand the action of mineral acids such as H2SO4, HNO3, HCl, etc.
In building practice, cements are used in the form of structural pastes of several types: grouts (i.e., a mixture of cement with water), mortars (mixtures of cement with water and fine aggregate (sand)), and concrete mixes containing cement, water, and fine and coarse aggregate (sand, gravel, crushed stone).
The hardened mix is called concrete, and concrete embedded with steel is known as rein-
forced concrete.
CHAPTER 2
Raw Materials and Their Components for Cement Production
The raw materials for making cement are naturally occurring materials as well as some in-
dustrial waste products. The naturally occurring materials include: gypsum minerals (gypsum
CaSO4.2H2O, anhydrite CaSO4), limestone minerals (limestone, chalk, dolomites), and clay minerals (clays and marls, silica sand, bauxites). The industrial waste minerals used for making cements include: metallurgical slag, the nepheline sludge of the alumina manufacturing industry, and the sodium hydroxide production sludge, which contains CaCO3, pyrite cinder, etc.
The raw-meal feed for cement making basically contains four types of compounds: carbonates, aluminosilicates, iron components, aluminum compounds (oxides), and minor constituents. Out of these, the first three are very important in the formation of cement clinker, while the fourth affects the manufacturing process (mainly burning, stabilization of the kiln, and preheater performance) depending upon the type and quantity of the minor constituents present. The three main components should satisfy among themselves the requirements of compositional compatibility, thermal combinability, and physical amenability (responsiveness) to production processes (crushing, grinding and homogenization, burning and clinker formation).
2.1 THE RAW MATERIAL COMPONENTS
The calcareous component of the cement raw meal is usually any rock containing CaCO3.
Limestone is the most commonly available calcium carbonate rock. Besides CaCO3 present
as calcite or aragonite, these rocks also contain various quantities of impurities like: quartz,
clay, phosphates, opal (SiO2), pyrite (FeS2), siderite (FeCO3), goethite (FeO.OH), dolomite
(CaMg(CO3)2), magnetite, gypsum (CaSO4.2H2O), fluorite (CaF2), bituminous impurities,
etc.
The type of limestone is characteristic of its mode of origin and has definite implication for
cement making as well as for various production processes. The mode of origin also has profound effects on the mineral form, degree of crystallinity, grain size, cementing medium, degree of com-
paction, mode, and mineral form of occurrence of the impurities in the limestone and controls its
physical, technological, dissociation, and combinability properties. Each of these properties has
significant bearing in the process control and optimization in cement manufacture. The mode of
origin controls the association of various rocks, found interbedded, or as intercalations (anything
out of ordinary course), gradations, or impurities in limestone.
Study of this factor is important for regional prospecting and delineation of the rock types.
The environments of deposition control the chemical (major and minor constituents) composi-
tion and their variations account for the direct suitability of the rock. The details of the physico-chemical environment and its variations control the mineralogical composition, degree of crystallinity, grain size, nature, and extent of cementing material in the rock. The first three factors
primarily control the reactivity and thermal combinability of the fine raw meal obtained from
the rock, while the last three control the amenability of the rock to fine grinding for raw-meal
preparation. At the same time, the reactivity and burnability of a raw meal depends upon its
fineness.
Present-day cement manufacture in large plants is based on the advanced technology of
process optimization in which energy conservation is the main constituting factor. Fuel used for
raw-material burning and electrical energy used for crushing, grinding, and homogenization,
also primarily based on fuel as source, can be effectively conserved through a proper under-
standing of the behavior of the raw materials to size reduction and burning processes.
It is therefore apparent that a knowledge about the different modes of formation of lime-
stones, the geological and structural peculiarities, and the lithological association with different
other rock types is an essential prerequisite for regional prospecting and exploration for
selecting an appropriate deposit while detailed study of the mineralogical association, textural,
structural, and granulometric characteristics of the deposit is essential to understand and control
its behavioral pattern in cement manufacture.
2.2 MODE OF FORMATION OF LIMESTONES
Wide compositional, textural, and granulometric variations among limestones and their fre-
quent intimate associations with clays, dolomites, and other rock types reflect their varied mode
of formation. A brief appraisal of different environments and physical-chemical factors control-
ling carbonate formation is helpful in limestone prospecting as well as in their quality evaluation.
Carbonate rocks are mainly the products of deposition in shallower marine environments.
1. Mechanism and Process of Formation
The majority of carbonates is of sedimentary origin and is formed by:
(a) crystallization of calcium carbonate as an initial solid material by both organic and
inorganic precipitation or by a combination of both;
(b) chemical and/or mechanical breakdown of pre-existing rocks, transportation of the
products either as detrital particles or in chemical solution, and the deposition or
precipitation in standing bodies of water in a layered sequence;
(c) lithification of calcium carbonate sediments under low-temperature, low-pressure
conditions which include various steps beginning with the change of grain miner-
alogy, addition of coatings to grains, selective dissolution of matrix and/or grains,
precipitation of mineral cement in pores, recrystallization, etc.; and
(d) replacement of calcium sulphate or quartz by calcium carbonate under the effect of
sulphate reducing bacteria, by ammonification or nitrate reduction. This is a less com-
mon process.
    CaSO4 + 8H + H2O + CO2 → CaCO3 + 4H2O + H2S
    2(NH4)OH + Ca(HCO3)2 → CaCO3 + (NH4)2CO3 + 2H2O
2. Chemical Precipitates
These may be biogenic or inorganically precipitated rocks. The deposits may be well-
bedded, thick, and uniform and may have aphanitic, cryptocrystalline, or fine-grained gran-
ulometry. The rocks are usually dense with low porosity and extensively uniform lamina-
tions.
3. Detrital Carbonates
These consist of gravel, sand, or clay-sized fragments derived from other carbonate rocks.
The detrital fragments constituting the framework are cemented by normal precipitates,
which may be microcrystalline calcite (micrite) or sparry calcite cement (sparite) and other
post-depositional replacement or recrystallization minerals. Such limestones are usually
hard, compact, and show high compressive strength and difficult grindability.
2.3 CARBONATE ASSOCIATIONS
Calcium occupies a unique position in its ionic radius (0.99 Å) which is intermediate between
small and large cations. It can form either rhombohedral (calcite) or orthorhombic (aragonite) carbonates. Other calcite-type rhombohedral carbonates include MgCO3 and FeCO3. Aragonite types include SrCO3, BaCO3, and PbCO3. Iron-containing carbonates dissociate at lower temperatures and are comparatively more reactive than pure or siliceous limestones. Aragonite-type minerals show preferential substitution with larger cations, while under surface conditions calcite, aragonite, and dolomite are the common carbonate minerals.
CHAPTER 3
Additives and Corrective Materials in Cement Production
Additives are naturally occurring rocks or industrial wastes which are added to a raw mix to compensate for its compositional deficiency for cement making or to correct marginal deviations from the desired composition. For a very pure limestone, additives may generally be distinguished as argillaceous components, and corrective materials as ferruginous components. The role of either may be reversible, significant, or minor depending upon the compositional characteristics of the limestone.
Various admixtures are introduced to cements to give them the required properties and also to reduce manufacturing costs: hydraulic admixtures containing alumina, which increase the resistance of cements to the effects of water and aid hardening under water; plasticizing agents, surface-active substances which increase the elasticity and bonding properties of the cement paste; inert aggregates (sand, limestone, dolomite); acid-resistant admixtures (andesite, beschfaunite, granite), etc.
Besides chemical composition, one important aspect in the choice of additives and cor-
rective materials with respect to a particular limestone is the compatibility among their physical,
mineralogical, and thermal combinability characteristics, which control, respectively, effective
grinding and homogenization, dissociation, and clinkerization.
CHAPTER 4
Exploration of Raw Materials for Cement Manufacturing
4.1 SIGNIFICANCE OF RAW MATERIALS EXPLORATION IN CEMENT MAKING
1. Large limestone deposits need to be prospected for the increasingly larger capacities of individual plants.
2. Depletion of large, good-quality, and favorably located deposits through exploitation ne-
cessitated falling back upon inferior-grade and less favorably accessible deposits.
3. More rigid implementation of statutory regulations on noise and dust control excludes use
of many good deposits located near habitations.
4. Higher capital investment on cement plants because of both rising costs and higher ca-
pacity installations intensifies the need for greater reliability in raw material proving to
minimize the entrepreneurial risk.
5. The increasingly rigorous demands on better cement quality impose more rigid require-
ments on raw material quality.
6. The rapid switch-over to the energy-saving dry process of cement manufacture with sus-
pension preheaters calls for:
(a) thorough homogenization of raw meal for better reactivity, where compatibility of
physical and technological properties of the raw meal constituents is a primary pre-
requisite, and
(b) a lesser permissible limit for deleterious minor constituents in raw materials and ce-
ment.
4.2 OBJECTIVES OF EXPLORATION
1. Location of cement plant, taking into consideration the principal factors, i.e., location of
the raw-material deposits, availability of infrastructural facilities, such as power, transport
and communication, and nearness to the marketing region.
2. Optimum size and working life of a plant based on raw material availability; market con-
ditions guided by the scope and extent of capital investment and the risk factor involved.
3. Choice of manufacturing process. Dry, semi-dry, or wet process of manufacture, depending
upon moisture content, minor constituents present in raw material, cost and availability
of solid (coal), liquid (oil products), and gaseous fuels.
4. Quality of the product and scope of manufacture of different types of cement. The quality
of raw materials controls that of clinker, its specific quality characteristics and availability
of other subordinate raw materials, or industrial wastes defines the scope of manufacturing
ordinary Portland cement, white cement, Portland pozzolana cement, etc.
5. System design in the cement manufacturing process. The crushability and grindability
of limestone and the raw mix components decides the types of crushers and mills; the
degree of uniformity in quality of raw material deposits dictates the need or otherwise for pre-blending or homogenization installations; and the type and quantity of harmful minor
constituents decides the design of the preheater and the extent of bypass of kiln exit gases.
6. Mine planning and quarry layout. The geographic and geologic characteristics of the raw-material deposit, such as terrain condition, overburden, mode of occurrence, and structural
features determine the mine layout, the number and height of benches, direction of mining
and scope of selective quarrying for uniform mine output, etc.; the lithologic, i.e., textural,
structural, and fracturing (strength) properties of the rock decides upon the choice of min-
ing method, by drilling and blasting or ripping. The quarry layout, distance from plant,
and topography of the region decide the choice of loading and transport machineries, their
number and capacity, i.e., shovels, excavators, dragline, etc., for transport to the crusher
or plant.
CHAPTER 5
The Composition of Portland Cement and Production Process
Portland cement was first introduced in 1824 by Joseph Aspdin, a brick layer from Leeds, Eng-
land [5]. On setting, the color of cement resembles the color of rocks near Portland, England,
hence the name.
The approximate composition of the raw material used for manufacturing ordinary Portland cement clinker (percent by weight) varies within the limits shown in Table 5.1.
Table 5.1: Chemical compositions of cement

    Calcium Oxide (CaO)     60–65%
    Silica (SiO2)           20–25%
    Aluminum Oxide (Al2O3)  4–8%
    Ferric Oxide (Fe2O3)    2–4%
    Magnesium Oxide (MgO)   1–3%
All the above compounds undergo some chemical combinations during the process of
burning and fusion. Main constituents of cement are: 3CaO.SiO2, 2CaO.SiO2, 3CaO.Al2O3.
Tri-calcium silicate is the best cementing material and the more it is present in cement the better
the cement is. In a properly burnt clinker, 3CaO.SiO2 should be about 40%. An improperly burnt clinker shall have less 3CaO.SiO2 and more free lime.
After the addition of water to cement it sets and hardens due to the hydration and hydrol-
ysis of the above three compounds which act as a glue. The aluminates are the first to set and
harden. Trisilicate is slower, and disilicate is the slowest. As such, the initial setting of cement is
due to Trisilicate. Disilicate takes 14–18 days to add the strength. All three compounds in their
action with water give out heat. Maximum heat giving compound is the aluminates which is
responsible for most of the undesirable properties of concrete. Cement having lesser aluminates
shall have less initial strength but higher ultimate strength. Also, there will be less generation
of heat, more volumetric stability, less cracking, and more resistance to acid attacks. Incomplete
burning of clinker leaves free lime in it. This free lime causes expansion and disruption of con-
crete after use. The silicates form a gel with water. The gel fills the pores of cement there by
making it impervious. The gel later on crystallizes and firmly binds the particles.
According to IS 269-1975, composition of ordinary Portland cement shall satisfy the
following conditions.
1. Ratio of the percentage of lime to that of silica, alumina, and iron oxide, when calculated by the formula

    \frac{CaO - 0.7\,SO_3}{2.8\,SiO_2 + 1.2\,Al_2O_3 + 0.65\,Fe_2O_3}

shall not be less than 0.66 and not more than 1.02.
2. Ratio of percentage of alumina to that of iron oxide shall not be less than 0.66.
3. Weight of insoluble residue shall not be more than 2%.
4. Weight of magnesia shall not be more than 6%.
5. Total sulphur contents calculated as SO3 (sulphuric anhydride) shall not be more than
2.75%.
6. Total loss on ignition shall not be more than 4%.
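Read together, these requirements amount to a set of pass/fail checks on a cement's oxide analysis. The Python sketch below is an illustrative aid (not part of the original text or of the standard itself); the input values are placeholders, and the checks simply restate the six conditions as listed above.

    def meets_is269(cao, sio2, al2o3, fe2o3, so3, mgo, insoluble_residue, loi):
        """Pass/fail checks paraphrasing the IS 269-1975 conditions listed above.
        All inputs are percentages by weight; the example values below are placeholders."""
        lsf = (cao - 0.7 * so3) / (2.8 * sio2 + 1.2 * al2o3 + 0.65 * fe2o3)
        checks = {
            "lime saturation ratio 0.66-1.02": 0.66 <= lsf <= 1.02,
            "alumina/iron ratio >= 0.66": al2o3 / fe2o3 >= 0.66,
            "insoluble residue <= 2%": insoluble_residue <= 2.0,
            "magnesia <= 6%": mgo <= 6.0,
            "SO3 <= 2.75%": so3 <= 2.75,
            "loss on ignition <= 4%": loi <= 4.0,
        }
        return lsf, checks

    lsf, checks = meets_is269(cao=63.0, sio2=21.0, al2o3=5.0, fe2o3=3.0,
                              so3=2.0, mgo=2.5, insoluble_residue=1.0, loi=2.0)
    print(f"lime saturation ratio = {lsf:.2f}")
    for name, ok in checks.items():
        print(f"{name}: {'OK' if ok else 'FAIL'}")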
In commercial practice the charge composition is calculated on the basis of the required
percentage ratio of the basic oxides in the clinker. These ratios are called modules, the silicate
module “n” and the alumina module “p”:
    n = \frac{\%SiO_2}{\%Al_2O_3 + \%Fe_2O_3}, \qquad p = \frac{\%Al_2O_3}{\%Fe_2O_3}

    KS = \frac{(CaO_{overall} - CaO_{free}) - (1.65\,Al_2O_3 + 0.35\,Fe_2O_3 + 0.7\,SO_3)}{2.8\,(SiO_{2\,overall} - SiO_{2\,free})}
The basic characteristic describing the mineral composition of Portland cement clinker is the
coefficient of saturation of the silica with lime, KS, expressing the ratio of the amount of lime re-
maining in the clinker after formation of 2CaO.SiO2, 3CaO.Al2O3, and CaSO4 to the amount
of lime necessary for combining with the silica to form 3CaO.SiO2.
Using given values of modules and KS and also data obtained by chemical analysis of the
raw materials, limestone, and clay, their ratio by weight in the charge is computed. For Portland
cement the coefficient of saturation lies between 0.8 and 0.95. The lower the KS value, the higher
the content of 2CaO.SiO2 in the clinker and lower the activity of the cement.
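To make the module arithmetic concrete, the Python sketch below (an illustrative aid, not part of the original text) evaluates the silicate module n, the alumina module p, and the saturation coefficient KS from an assumed oxide analysis; the percentages used are placeholders, not data from the book.

    def modules_and_ks(cao, sio2, al2o3, fe2o3, so3=0.0, cao_free=0.0, sio2_free=0.0):
        """Silicate module n, alumina module p, and saturation coefficient KS,
        following the expressions given above (all inputs in percent by weight)."""
        n = sio2 / (al2o3 + fe2o3)
        p = al2o3 / fe2o3
        ks = ((cao - cao_free) - (1.65 * al2o3 + 0.35 * fe2o3 + 0.7 * so3)) \
             / (2.8 * (sio2 - sio2_free))
        return n, p, ks

    # Placeholder oxide analysis (illustrative values only).
    n, p, ks = modules_and_ks(cao=65.0, sio2=22.0, al2o3=5.5, fe2o3=3.0, so3=1.0)
    print(f"n = {n:.2f}, p = {p:.2f}, KS = {ks:.2f}")

For these placeholder values the computed KS falls between 0.8 and 0.95, the range the text quotes for Portland cement.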
In the manufacturing of Portland cements the properties of the final products owe their
origin to clinker and gypsum and other additives that are introduced during the process of grind-
ing, and changes that take place during the process of grinding and subsequent storage. Any consid-
eration of the characteristics of raw materials requires a basic understanding of the factors that
control the clinker quality and the clinkerization process.
5.1 CLINKERIZATION PROCESS
5.1.1 CLINKER CHARACTERISTICS
The clinker characteristics that are significant in achieving a quality product can be summarized
as follows:
1. appropriate bulk chemical composition;
2. formation of hydraulically active phases;
3. optimum grain growth;
4. optimum proportion of different phases; and
5. proper microstructural development.
A quality clinker can be produced if the following measures are adopted.
1. Achieving the bulk chemical composition of clinker in the following range:
    CaO (C)    63–67%
    SiO2 (S)   21–24%
    Al2O3 (A)  4–7%
    Fe2O3 (F)  2–4%
    C + S + A + F = 98–93 to 93–91%
    MgO + K2O + SO3 + P2O5 + TiO2 = 2–3 to 7–9%
2. Stabilizing the hydraulically active clinker phases including the high-temperature poly-
morphic forms of alite (Ca3SiO5) and belite (β-Ca2SiO4).
3. Optimizing the proportions of the major phases in a clinker, where the alite content should
be aimed as high as possible, preferably in the range of 55–65%, and aluminates and ferrite
phase in the range 9–11% and 12%, respectively, the balance being made up of belite.
4. Keeping the average grain size of the clinker minerals around 30 μm and raising the maximum crystal size of alite grains even higher to the extent possible but not exceeding the range 70–100 μm.
5. Forming monadoblastic texture, i.e., a microstructure in which there is little clustering of
grains, with alite and belite crystals being well distributed over the entire clinker volume as
independent grains having well-crystallized aluminates and ferrite phase in the interstices.
5.1.2 CLINKERIZATION PHENOMENON VIS-À-VIS CLINKER
CHARACTERISTICS
It is well known that the above measures adopted to achieve clinker quality are realized through
the clinkerization process, which can be represented in a simplified form by the reaction steps
given in Fig. 5.1.
Figure 5.1: Approximate reaction sequence in clinkerization.
It is evident from this summary diagram that the clinkerization phenomenon is strongly
dependent on the reactivity (signifying the achievable rate of different reactions at respective
temperature within practical time limits) and burnability (signifying the overall measure of ease
or difficulty of burning under practical operating conditions) or raw mixes, which, in turn, de-
pend on the intrinsic characteristics of the constituent raw materials. In this context it should
be kept in mind that the burning process (Fig. 5.2) has several interdependent and interrelated
controlling factors, which means that knowledge of the raw material characteristics is necessary
to match the need of systems design and operation.
[Figure 5.1 depicts the stages of dehydration and dehydroxylation, decarbonation, breakdown of aluminosilicates, solid-state reactions, melt formation, liquid-phase sintering, and cooling, spanning roughly 27°C to 1450°C.]
Figure 5.2: Major factors controlling the burning operations. [Figure elements: burning operation and control; burning conditions; cooling schedule; clinker environment; clinker output and quality; burnability of raw mix; raw materials characteristics; liquid phase; volatiles; coal ash; reaction; system design.]
5.2 RAW MATERIALS FOR CEMENT MAKING
The common raw materials for cement making have been classified and categorized in Tables 5.2
and 5.3. It is well known that the dispersed raw meal feed for cement manufacture basically
consists of two components—calcium carbonate and aluminosilicates—that are complementary
in nature.
Table 5.2: Raw materials in combined methods of manufacture of cement
Principal raw material | Source | Products
Blast-furnace slag | Blast furnace of iron and steel industry | Clinker, slag cement, road ballast, slag wool, slag bricks, lightweight aggregate
Calcium sulphate | Natural gypsum, chemical gypsum | Cement, sulphuric acid
Calcium silicate | Nepheline waste | Cement, alumina, fertilizers
Salt brine and limestone | Soda manufacturing industry | Caustic soda, white cement
Dolomite | Natural | Cement, magnesia
Cement raw materials and rhyolitic tuff rocks | Natural | Cement, fertilizers
Table 5.3: Raw materials for Portland cement industry
5.3 CHEMICAL COMPOSITION OF RAW MIXES AND COMPOSITIONAL COMPATIBILITY
5.3.1 MODULE VALUES OF RAW MIXES
The composition of Portland cement clinker is represented by four major oxides. A combination
of their ratios known as module values (Table 5.4) is used for the control of bulk chemical
composition of clinkers. For convenience, these module values are extended to raw mixes as
well. As a broad guideline, the following rational limits have been proposed for the Soviet cement industry for clinkers and corresponding raw mixes with coal ash influence.
Category | Nature | Materials in use
Materials for clinker production:
Principal | Carbonates | Limestone, chalk, marble, sea shell, marl, carbonate sludge of the paper, sugar, and fertilizer industries
Principal | Aluminosilicates | Clay, soil, shale, phyllites, slate, and volcanic rocks; fly ash from thermal power stations
Principal | Lime-silicates | Wollastonitic rocks, metallurgical slags, wastes of the aluminum industry
Supplementary | Corrective materials | Sand and sandstones, bauxite, iron ore, laterite, pyrite cinders from the chemical industry
Special additives | Grinding aids, etc. | Surface active agents like triethanolamine, sulphate lye, sodium polyphosphate, etc.
Special additives | Slurry thinners, etc. | Surface active agents
Special additives | Granulation activators, mineralizers | Chemical reagents like Na2CO3, CaF2, Na2SiF6, Ca3(PO4)2, CaSO4.2H2O, etc.
Materials for converting clinker into cement:
Principal | Set retarder | Natural gypsum, chemical gypsum
Supplementary | Hydraulic blending materials | Materials with lime reactivity, such as natural pozzolanic rocks, burnt clays, blast furnace slag, fly ash
Special additives | Grinding aids, hydrophobic agents, pigments |
Table 5.4: Module values used for clinker and raw mixes
(S = SiO2, A = Al2O3, F = Fe2O3, C = CaO, M = MgO, s = SO3)
Silica modulus: Ms = S / (A + F)
Alumina modulus: MA = A / F
Hydraulic modulus: MH = C / (S + A + F)
Lime saturation factor: LSF = 100C / (2.8S + 1.1A + 0.7F)  (when MA < 0.64)
Lime saturation factor: LSF = 100C / (2.8S + 1.65A + 0.35F)  (when MA > 0.64)
Lime saturation factor: LSFB = (C - 0.7s) / (2.8S + 1.2A + 0.65F)
Lime standard: LS II = 100C / (2.8S + 1.18A + 0.65F)
Lime standard: LS II = 100(C + 0.75M) / (2.8S + 1.18A + 0.65F)  (when MgO < 2%)
Lime saturation: LS III = 100(C + 0.75M) / (2.8S + 1.18A + 0.65F)  (when MgO > 2%)
Lime saturation factor: LSFR = [C - (1.65A + 0.35F + 0.7s)] / 2.8S  (when MA > 0.64)
Δ = 100(2.8S + 1.65A + 0.35F - C) / (2.8S + A + F + C)
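As an illustration of how the module values of Table 5.4 are applied, the short Python sketch below evaluates Ms, MA, MH, and one of the LSF expressions from an oxide analysis. The oxide percentages used are hypothetical and serve only to show the arithmetic; they are not taken from the text.

# Minimal sketch: module values of Table 5.4 from an oxide analysis (wt.%).
# The analysis below is a hypothetical example, not data from this book.

def module_values(S, A, F, C):
    """Return silica modulus, alumina modulus, hydraulic modulus, and LSF."""
    Ms = S / (A + F)                 # silica modulus
    MA = A / F                       # alumina modulus
    MH = C / (S + A + F)             # hydraulic modulus
    if MA < 0.64:
        lsf = 100 * C / (2.8 * S + 1.1 * A + 0.7 * F)
    else:
        lsf = 100 * C / (2.8 * S + 1.65 * A + 0.35 * F)
    return Ms, MA, MH, lsf

# Hypothetical clinker analysis (wt.%): SiO2, Al2O3, Fe2O3, CaO
Ms, MA, MH, lsf = module_values(S=21.5, A=5.5, F=3.5, C=66.0)
print(f"Ms = {Ms:.2f}, MA = {MA:.2f}, MH = {MH:.2f}, LSF = {lsf:.1f}%")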
5.3.2 EFFECT OF CHEMICAL COMPOSITION ON THE REACTIVITY
AND BURNABILITY OF RAW MIXES
There are limitations; nevertheless, some of the effects of compositional variations are high-
lighted as follows.
1. Other conditions remaining the same, with increase in LSF, both reactivity and burnability of raw mixes are decreased at temperatures below the liquid formation, whereas above this temperature (1300°C) only burnability is decreased and reactivity is practically not affected. On the other hand, increase in silica modulus affects reactivity at all temperatures, while increase in alumina modulus gets reflected primarily in harder burning.
2. In actual practice, one unit of LSF is regarded as the equivalent of a 20°C rise in burning temperature.
3. Alumina modulus is particularly critical in liquid phase sintering. In this context the points
in Table 5.5 are of practical significance.
Table 5.5: Alumina modulus is particularly critical in liquid phase sintering
4. For the same silica modulus, the maximum formation of liquid at minimum temperature corresponds to an alumina modulus of 1.38 or 1.63, depending on the MgO saturation, and for the same alumina modulus, the amount of liquid increases with decrease of silica modulus.
5. The general relationship of raw mix burnability with module value is illustrated in Fig. 5.3.
The reactivity of raw mixes is a resultant effect of
• dissociation of the mineral species into reacting oxides/complexes,
• transformation of the decomposed phases into a reactive state, and
• combination reactions.
System | Liquid formation temperature (°C) | A/F ratio for the eutectic composition | Fluxing oxide
C-A-F-S | 1338 | 1.38 | Al2O3 when A/F < 1.38; Fe2O3 when A/F > 1.38
C-A-F-S-M | 1301 | 1.63 | Al2O3 when A/F < 1.63; Fe2O3 when A/F > 1.63
Figure 5.3: Effect of module values on clinkering temperature and burnability.
5.4 PARTICLE SIZE OF GROUND MATERIALS IN RAW MIXES AND PHYSICAL PROPERTIES OF CLAYS
The particle size of the raw mix is important for the very basic reason that the sintering rate is roughly proportional to the inverse of the particle size. In general, the fineness of raw mixes varies in the range 3000–5000 cm2/g, with about 9–22% of particles coarser than 0.07–0.09 mm and 0–5% over 0.2 mm size.
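These typical fineness figures can be framed as a simple residue check on a ground raw meal. The sketch below is only an illustration of that check; the residue values fed to it are hypothetical, and the limits are the ranges just quoted rather than a formal specification.

# Minimal sketch: check raw-meal sieve residues against the ranges quoted above.
# The residue values (wt.% retained) are hypothetical example inputs.

def raw_meal_fineness_ok(residue_90um, residue_200um):
    """About 9-22% may be retained around 0.09 mm; at most 5% on 0.2 mm."""
    return 9.0 <= residue_90um <= 22.0 and residue_200um <= 5.0

print(raw_meal_fineness_ok(residue_90um=15.0, residue_200um=1.5))   # True
print(raw_meal_fineness_ok(residue_90um=26.0, residue_200um=1.5))   # False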
Properties like plasticity, specific surface, water requirement, suspension stability, coagulation of clay particles, swelling, etc. are of considerable importance in raw meal preparation, particularly in the wet process. Clays with a plasticity index of at least 7–15, having about 10% particles above 0.2 mm size and a cumulative 20% of particles above 0.08 mm size, and with a CEC of 11–12 mg/100 g, are reportedly regarded as a cement-making variety.
5.5 SUMMARY AND CONCLUSIONS
1. The characterization and evaluation of raw materials for the manufacture of Portland ce-
ment have to be done necessarily in relation to the requirements of the manufacturing
process and product quality, and their interrelation can be conceived as depicted in Fig. 5.4.
2. Although a wide variety of raw materials is used in the cement industry, future trends would seem to favor the use of industrial wastes and the establishment of industrial complexes based on one set of raw materials or another.
3. A limestone with a minimum of 44–45% CaO and maximum of 3–3.5% MgO, 0.6% K2O, 0.6–0.8% SO3, 0.25% P2O5, 0.5% Mn2O3, 1.3% TiO2, and 0.015–0.02% Cl is regarded as a cement-grade limestone, provided its SiO2, Al2O3, and Fe2O3 contents satisfy the ultimate module values of raw mixes. The compositional ranges of aluminoferrosilicate materials cannot be defined rigidly as they have to match the principal carbonate component. In general, a clay with more than 3% K2O and 1% SO3 may be considered, prima facie,
Figure 5.4: Interrelation of processing steps, characterization features, and basic properties of
raw mixes in cement making.
unsuitable. For most of the minor constituents 0.5% is found to be a safe limit, in excess
of which a special examination is called for.
4. To arrive at a desirable raw-mix composition, clinker module ranges like 0.92–0.95 for LSF, 2.0–2.5 for Ms, and 1.4–1.6 for MA may provide a rational guide (a simple computational check against these ranges is sketched at the end of this summary).
5. The thermal behavior of raw materials primarily depends on the activity state of the min-
eral species present in them. The temperature, rate, and activation energy of limestone
dissociation depend on its mineralogy and microstructure. The rates of clinker formation
reactions are also dependent on the mineral forms of the aluminoferrosilicate components of a kiln feed. The concurrence of carbonate dissociation and thermal decomposition of the aluminoferrosilicate component is considered a basic necessity for proper burning.
6. The amenability of limestones to size reduction process is apparently controlled by the free
and fixed silica content and the grain size variations of calcite and quartz, although a host
of other factors also have a role to play.
[Figure 5.4 connects the manufacturing steps (mining, crushing, grinding, handling, homogenization, drying, burning, grinding with gypsum) with the characterization features (chemical composition, mineral composition, microstructure, size and surface of particulate solids) and the basic properties required of a raw mix (compositional compatibility, thermal combinability, physical amenability to processes such as crushing and grinding).]
7. The particle size distribution in ground raw mixes is critical both for burnability and clinker
granulometry. The mineralogy of the coarser fractions of raw mixes is particularly significant as a measure of their burnability. Limiting particle sizes for the different mineral forms in raw mixes have already been evolved for easy burning.
8. Properties of the clay component are as important in cement making as those of the car-
bonate rocks. Since the clay composition is widely variable and its mineralogy is complex,
the choice of clay is done more on the basis of its Si: (A, F) ratio, fusibility, and physical
characteristics like plasticity, granulometry, cation exchange capacity, etc.
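The rational module ranges quoted in point 4 can be turned into a simple acceptance check, as sketched below. The candidate values fed to the check are hypothetical; only the target ranges come from the text.

# Minimal sketch: check a candidate clinker against the rational module ranges
# of summary point 4 (LSF 0.92-0.95, Ms 2.0-2.5, MA 1.4-1.6). Inputs are hypothetical.

TARGETS = {"LSF": (0.92, 0.95), "Ms": (2.0, 2.5), "MA": (1.4, 1.6)}

def check_modules(values):
    for name, (lo, hi) in TARGETS.items():
        v = values[name]
        verdict = "within range" if lo <= v <= hi else "outside range"
        print(f"{name} = {v:.2f}: {verdict} ({lo}-{hi})")

check_modules({"LSF": 0.94, "Ms": 2.39, "MA": 1.57})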
C H A P T E R 6
Burnability and Clinkerization of Cement Raw Mixes
6.1 BURNABILITY
Burnability of raw mixes has been a matter of great importance in cement technology. The be-
havior of a raw mix during its sintering process is greatly influenced by its chemical, mineralogical, and granulometric compositions; variations in these affect kiln operation, refractory lining, fuel consumption, and clinker quality. Each cement raw mix burns in its own way, resulting in
variation of clinker quality.
The burnability of a cement raw mix conceptually denotes the amount of mass transfer of its constituents, with ease or difficulty, to the clinker phases. By convention, burnability is measured by determining the free lime CaOf after burning the raw mix for a certain time (θ) at a certain temperature (T), i.e., CaOf = F(θ, T); above 1300°C, when melt is formed, burnability decreases as this parameter increases.
Burnability is generally expressed by either of the following two quantities.
1. Measure of CaOf of a pseudo-isochrone (θ = constant) at a given temperature. Increasing values of CaOf correspond to decreasing burnability.
2. Measure of time (θ) of a pseudo-isotherm (T = constant) for CaOf ≤ 2%. Increasing θ corresponds to decreasing burnability.
The following are the important parameters which affect the burnability of a raw mix to
a great extent.
1. Raw mix mineralogical composition:
• Lime components: consisting mainly of CaCO3 and very small quantities of the following, in the order S-M-R:F:S:N:K.
• Clay components: consisting mainly of SiO2 with considerable amounts of the following, in the order R:F:C-M-S-N-K.
• Corrective ingredients: consisting mainly of one or other of the main oxides (C/A/S/F).
• Modifiers: consisting of different inorganic compounds which accelerate the clinkerization reactions.
2. Raw mix-chemical composition:
• Main component oxides are: C, A, S, and F.
• Minor volatiles are: K, N, S, P, F, Cl, and H.
• Minor non-volatiles are: Sf, M, Ti, Mn, Sr, and Cr. Each component of the raw mix has an individual and a combined (Ms, MA, LSF) effect on burnability.
3. Raw mix granulometric composition: fineness and particle size distributions, homogeneity, and compaction.
• The finer the grains and the greater the surface area of a raw mix, the easier it is to sinter and the lower the sintering temperature.
Homogenization of kiln feed is a major operation in cement manufacturing as it affects the quality of clinker, the burning process, and fuel consumption. The fluctuation of the kiln feed, measured as % CaCO3, should not be more than ±0.2 from the holding point. An increase of 1% CaCO3 will increase the C3S by 13% and reduce the C2S by about 11.5%. The ultimate homogeneity depends on the physical-chemical characteristics, fineness and particle size distributions, method of mixing, and efficiency of the blending system.
4. Raw mix thermal treatment:
Firing temperature: the temperature must be high enough for the formation of the alite phase. Burning of the raw mix is generally carried out at 1450–1500°C. An excessively high burning temperature results in greater stress on the kiln and the refractory lining, more fuel consumption, reduction in cement strength, and larger alite crystals. Increasing the burning temperature from 1360 to 1420°C cuts the burning period roughly in half. The maximum firing temperature was determined by a multiple regression analysis of raw meal containing only the four main oxides, as given below (see the computational sketch at the end of this list):
Tmax (°C) = 1300 + 4.51 C3S - 3.74 C3A - 12.64 C4AF.
(a) Holding time: on increasing the holding time, the following changes may be observed.
i. C3A content decreases and C4AF content increases.
ii. C2S decreases and C3S increases.
iii. Higher mechanical strength at later ages and lower at early ages.
iv. Heat of hydration at early ages decreases.
v. Unburnt clinker can produce high-quality cement even in presence of high
CaOf .
(b) Burning rate: rapid burning is always favored for the following reasons.
i. More coarse-grained materials can be charged.
ii. Materials differing by their degree of fineness can be charged.
iii. Fine grains of C2S are formed, which accelerate the interaction of C2S, CaOf, and liquid.
(c) Burning activation: thermal activation may be enhanced by accompanying it with either mechanical (vibratory mill) or chemical (mineralizer) activation. Mechanical activation gives better results than chemical activation.
5. Liquid phase formation: appearance temperature, amount, viscosity, surface tension, ionic
mobility:
A, F, M, minor volatile and non-volatile components generally govern the amount of liquid
formed, its appearance temperature, viscosity, surface tension, and ionic mobility in the
clinkerization process. The range of clinker composition may be fairly wide if the amount
of liquid phase increases slowly. A raw mix forming about 25% liquid phase in the clinker is generally considered ideal from the standpoint of kiln lining, fuel saving, and rapid C3S formation through dissolution in the melt. The amount of liquid phase at 1450°C is usually calculated by (see the computational sketch at the end of this list):
Liquid (%) = 3.0A + 2.28F + M + N + K, when MA > 1.38
Liquid (%) = 8.5A - 5.22F + M + N + K, when MA < 1.38
6. Clinker quality: silicate phase, alumina-ferrite phases.
It has been seen that the burnability becomes worse as the potential C3S content increases at the expense of other clinker constituents, while with increasing C3A and C4AF contents the burnability improves, C4AF being significantly more effective in this respect.
7. Coal ash: amount absorbed, composition, fineness.
When coal is used as the fuel for clinker making, its ash quantity, composition, and fineness affect the burnability. Generally, the composition of coal ash varies within the limits S 35–60%, A 15–35%, F 5–20%, C 0–10%, while M, S, and alkalis are often present in the ash in small amounts. In general, the ash composition shows a very high S/C ratio and a moderately high A/F ratio.
8. Kiln atmosphere: oxidation, reduction.
Reducing conditions during cement clinker burning substantially affect the color of the clinker by producing ferrous oxide, accelerate the setting by enhancing the C3A content at the expense of C4AF, and reduce the strength by breaking down C3S during clinker cooling. Therefore, oxidizing conditions (1–2 vol.% O2 in the exit gas) should be maintained in the kiln for better clinker quality.
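The two empirical relations quoted in this list, the regression for maximum firing temperature in item 4 and the liquid-phase expressions in item 5, can be evaluated directly, as in the sketch below. The phase and oxide contents used are hypothetical, and the coefficients are simply those given in the text.

# Minimal sketch of the two empirical burnability relations quoted above.
# All input values are hypothetical and serve only as an illustration.

def max_firing_temperature(c3s, c3a, c4af):
    """Item 4 regression: Tmax (deg C) = 1300 + 4.51*C3S - 3.74*C3A - 12.64*C4AF."""
    return 1300 + 4.51 * c3s - 3.74 * c3a - 12.64 * c4af

def liquid_phase_1450(A, F, M, N, K):
    """Item 5: percent liquid at 1450 deg C, depending on the alumina modulus A/F."""
    if A / F > 1.38:
        return 3.0 * A + 2.28 * F + M + N + K
    return 8.5 * A - 5.22 * F + M + N + K

# Hypothetical potential phase composition (wt.%) and minor-oxide contents (wt.%)
print(f"Tmax   = {max_firing_temperature(c3s=55, c3a=9, c4af=10):.0f} deg C")
print(f"Liquid = {liquid_phase_1450(A=5.5, F=3.5, M=2.0, N=0.3, K=0.6):.1f} %")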
6.2 REACTIVITY
The reactivity of a raw mix is defined by the overall chemical reactions among the constituents of the raw mix attained on burning it at a certain temperature for a certain time, i.e., Rm = F(T, θ); above 1300°C, when melt is formed, this parameter, however, has no effect on reactivity.
Factors Affecting Reactivity
1. Physical-chemical, mineralogical, and granulometric composition.
2. Chemical process of clinker mineral formation.
6.3 REACTION SEQUENCE
The course of reaction inside a rotary kiln has been of great interest to cement technologists; since the kiln is controlled by computer, a mathematical model to explain the reaction process obviously has to be constructed in order to find a logical relation between the process variables.
Table 6.1: Zone temperature ranges (°C) and reaction profile
The experimental observations revealed the following phenomena.
1. The first aluminate phase “CA” is formed at lower temperatures (550–600°C), which, in turn, combines with free CaO resulting in the formation of an intermediate phase C12A7, and finally it converts into C3A above 900°C.
2. The formation of C2AS as an intermediate phase is likely but dependent on the nature of
raw materials used.
3. The ultimate formation of C4AF at higher temperatures (1300–1440°C) consecutively follows the appearance of the ferrite phases (CF and C2F) at lower temperatures (800–900°C).
Parallel observations confirmed the above items.
Zone | Temperature range (°C) | Reaction profile
I | Up to 200 | Evaporation (slurry drying)
II | 200–800 | Preheating (dehydration, dehydroxylation, and first appearance of new phases)
III | 800–1100 | Decarbonation (calcination)
IV | 1100–1300 | Exothermic reactions
V | 1300–1450–1300 | Sintering
VI | 1300–1000 | Cooling
1. The reaction sequence of raw mixes is almost identical in dry, semi-dry, and wet kiln.
2. The dissociation and decarbonation of raw-mix components start at 550–600°C. The CaO formed during decarbonation reacts with other components simultaneously in such a way that about 2% CaOf at 800°C and about 17% at the complete decarbonation temperature (1000°C) remain unreacted.
3. The first detectable phases, CA + C12A7 + C2S, were noticed at 700°C. The amounts of these phases increase with temperature up to 900–1000°C, when poorly detectable C3S and some C4AF/C2F are traced.
4. In some other investigations, the first phases detected are CF + CA + CS, which are subsequently converted into clinker phases with rise of temperature in accordance with the following scheme:
CA → C12A7 → C3A
CF → C2F → C4AF
CS → C2S → C3S
5. α-Fe, FeO, and Fe2O3, along with α-wollastonite appearing almost concurrently with β-C2S, are detected in a series of charge samples.
6. An extensive study comparing five kiln charge coatings yielded a reaction sequence, shown in Fig. 6.1, which further confirmed the above observations.
Figure 6.1: Reaction sequence in cement rotary kiln.
[Figure 6.1 traces the phases detected along the kiln: quartz, clays, CaCO3, anhydrite, gehlenite, spurrite (2C2S.CaCO3), magnesioferrite (MgO.Fe2O3), free CaO, C12A7, C4A3S, β-C2S, C3A, C4AF, and C3S.]
7. The solid-state reactions are almost complete at a temperature of about 1300°C, when a melt phase appears. The melt phase involves complete melting of C3A + C4AF and partial melting of C2S and CaO, with incorporation of such constituents as R2O and MgO. The formation of C3S is activated through this melt, and the final clinker phases C3A, C4AF, C2S, C3S, MgO, and glass appear after crystallization of the residual liquid.
C H A P T E R 7
Manufacturing Portland Cement
The manufacture of cement [2] is composed of two independent processes: fabrication of the intermediate product, the clinker, which includes preparation of the raw mixture and its firing; and the grinding of the clinker together with the admixtures, followed by storing and packing of the Portland cement.
There exist two methods for preparing raw mixture: a wet method and a dry method. Both
are outlined in this chapter.
7.1 DRY PROCESS
The specific feature of this process is that the raw materials are ground and mixed in the dry state. In this process, limestone and clay are ground separately to fine powders and then mixed together in the desired proportions. Water is then added so as to get a thick paste, from which cakes are made, dried, and burnt in kilns. To the clinker obtained after burning, 3–4% of gypsum is added, and the mixture is ground to a very fine powder. This powder is cement, ready for use. This process is slow and costly, and it is also difficult to maintain the correct proportion of constituents; doing so is a cumbersome operation. The quality of the cement is not as good as that of the one manufactured by the wet process. This method has therefore become obsolete.
7.2 WET PROCESS
The specific feature of this process is that the raw materials are prepared in water. The flow
diagram of the wet process for manufacturing Portland cement is given in Fig. 7.1.
Mixing: The limestone is first broken up in crushers (2), and together with a liquid mass from the clay mixer (1), in the desired proportions, is fed into a ball mill (raw-material mill (3)); the materials are simultaneously ground to a very fine powder while water is added. A ball mill (shown in Fig. 7.2) is a rotating steel cylinder containing hardened steel balls. When the mill rotates, the steel balls pulverize the raw materials, which form a suspension with the water. This liquid mixture is known as slurry. The slurry is then passed into storage tanks known as silos (correcting slurry basin (4)), where it is stirred with agitators or by pneumatic mixing. The slurry is passed to the horizontal basin (5), where the proportioning is finally adjusted to ensure the correct chemical
Figure 7.1: Flow diagram of Portland cement manufacturing by wet process: (1) clay mixer,
(2) hammer crusher, (3) raw material mill, (4) correcting slurry basin, (5) horizontal slurry basin,
(6) rotary drum furnace, (7) grate cooler, (8) storage, (9) cement mill, and (10) cement silos.
(Hand-drawn by the author.)
composition, and to obtain the necessary ratio of compounds (components). The composition of the raw mix in the wet process can be better controlled than in the dry process. The corrected slurry is then fed into the rotary kiln for burning.
Burning: The corrected slurry is fed in at the higher end of the inclined rotary kiln (rotary drum furnace (6)), shown in Fig. 7.2, while from the lower end of the kiln a flame is produced by injecting pulverized coal with a blast of air; the slurry thus moves through the kiln counter-current to the hot combustion gases. The rotary kiln is a steel tube lined inside with firebricks. It is up to 120 m long and 2.5–3.5 m in diameter. The kiln is mounted on rollers at a gradient of 1 in 25 to 1 in 30 and rotates about once every minute. As the slurry travels through the kiln it undergoes the successive processes of water evaporation, mineral dehydration, dissociation of limestone, and chemical reactions between the basic oxide CaO, which is formed, and the
Figure 7.2: Cross section of a ball mill and rotary kiln. (Hand-drawn by the author.)
components of the clay (SiO2, Al2O3, Fe2O3), and finally small lumps or “nodules” are formed. The nodules gradually roll down, passing through zones of rising temperature, until they reach the burning (sintering) zone, where they are finally burnt at 1500–1650°C. At this temperature the nodules change into clinker. The clinker is cooled with cold air in a grate-type cooler (7) to a temperature of 50–60°C. In these coolers, which are located below the kiln, the air is passed up through a bed of clinker particles uniformly distributed on a bar grating.
Grinding: The clinker is transferred from the coolers to the storehouse (8), where it is kept for a certain length of time for quenching (hydration) of free lime. The cured clinker, together with hydraulic or inert admixtures and gypsum, which is added to control the setting time, is ground in tubular cement mills (9). The cement is stored in reinforced concrete silos (10), through the bottom of which air is forced when the cement is being discharged, to loosen it. Cement is delivered to consumers in bulk in road or railway cement tankers or in multilayer paper bags.
Figure 7.3: Flow diagram of cement manufacture (wet process).
[Figure 7.3 shows the flow: argillaceous material is washed with water and stored in a basin; calcareous material is crushed and stored in silos; both then pass through channels to grinding in the ball or tube mills, the correction basin, and the storage basin.]
C H A P T E R 8
Testing Portland Cement
8.1 SAMPLES FOR TESTING
Each sample for testing shall consist of an intimate mixture of approximately equal portions selected from at least 12 different bags or packages when the cement is not loose, or from 12 different positions in the heap or heaps when the cement is loose. Selection of samples shall be done in such a manner as to obtain a fair average sample. The sample taken shall be stored in an airtight container until the time of the test.
8.2 CHEMICAL COMPOSITION
Loss on ignition: Heat 1.00 g of the sample for 15 min in a platinum crucible (or for 1 h in a porcelain crucible) at a temperature of 900–1000°C. Cool and weigh; the loss on ignition should not be more than 4%.
Insoluble residue: Boil for 10 min a well-stirred mixture of 1 g cement, 40 cc of water, and 10 cc of concentrated hydrochloric acid (sp. gr. 1.18). Filter the solution, rinse the container five times, and wash the filter ten times with hot water. Wash the residue on the filter with hot water and boil for 10 min with Na2CO3 solution (2N). Filter the solution again through the same filter paper and wash five times with water. It is then washed with HCl (2N) and finally with water until it is free from chlorides. The filter paper should be dried, ignited, and weighed to give the insoluble residue. The insoluble residue should not be more than 1.5%.
Lime and alumina: The ratio of the percentage of lime to those of silica, alumina, and iron oxide, calculated by the formula
(CaO - 0.7 SO3) / (2.8 SiO2 + 1.2 Al2O3 + 0.65 Fe2O3),
should not be greater than 1.02 nor less than 0.66. The ratio of the percentage of alumina to that of iron oxide shall not be less than 0.66. An excess of free lime will cause unsoundness of cement.
Magnesia: If free magnesia exceeds 5%, it makes the cement unsound.
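The chemical requirements of this section (loss on ignition, insoluble residue, the lime-to-silica/alumina/iron ratio, the alumina/iron ratio, and magnesia) can be gathered into one simple screening routine, as sketched below. The analysis values passed to the routine are hypothetical and only illustrate the checks.

# Minimal sketch: screen a cement analysis against the chemical limits above.
# The analysis values are hypothetical.

def check_cement_chemistry(CaO, SiO2, Al2O3, Fe2O3, SO3, MgO, loi, insoluble):
    lime_ratio = (CaO - 0.7 * SO3) / (2.8 * SiO2 + 1.2 * Al2O3 + 0.65 * Fe2O3)
    checks = {
        "loss on ignition <= 4%":      loi <= 4.0,
        "insoluble residue <= 1.5%":   insoluble <= 1.5,
        "0.66 <= lime ratio <= 1.02":  0.66 <= lime_ratio <= 1.02,
        "Al2O3/Fe2O3 >= 0.66":         Al2O3 / Fe2O3 >= 0.66,
        "MgO <= 5%":                   MgO <= 5.0,
    }
    for name, ok in checks.items():
        print(f"{name}: {'pass' if ok else 'fail'}")

check_cement_chemistry(CaO=63.0, SiO2=21.0, Al2O3=5.5, Fe2O3=3.5,
                       SO3=2.0, MgO=2.5, loi=2.0, insoluble=0.8)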
8.3 FINENESS
Finer cements react quicker with water and develop early strength, although the ultimate
strength is not affected. However, finer cements increase the shrinkage and cracking of con-
crete. The fineness is tested by either one of the following two methods.
1. By sieve analysis: break up by hand any lumps present in 100 g of cement placed on a No. 9 sieve, and sieve it by a gentle motion of the wrist for 15 min continuously. The residue, when weighed, should not exceed 10% by weight of the cement sample.
2. By specific surface: the specific surface should not be less than 2250 cm2/g as found by Wagner's turbidimeter method.
8.4 CONSISTENCY OF STANDARD CEMENT PASTE
The following physical tests should be carried out, whenever possible, within the temperature range of 25–29°C.
This test is performed to find out the correct amount of water to be added to a given quantity of cement so as to get a paste of normal consistency. This test precedes the tests of cement for soundness, setting time, and tensile or compressive strength. The test is done with the help of the Vicat apparatus, which has a frame carrying a movable rod, as shown in Fig. 8.1. The rod is mostly 1 cm in diameter and 5 cm long. At its lower end is attached a detachable needle, 1 mm square or 1.3 mm in diameter and 5 cm long. There is a vertical scale graduated from 0–40 mm in either direction to measure the vertical movement of the rod.
To start with, 25% of clean water is mixed with about 300 g of neat cement in a crucible. The mixing can be done with a standard spatula. After about 30 s it is thoroughly mixed with the hands for at least 1 min. The kneaded paste is tossed about six times from one hand to the other and pressed into the hard rubber mould through its bigger end. Fill the mould completely with paste and remove the extra paste by a single movement of the palm. Place the inverted mould (with the larger end on the glass plate) and slice off the extra paste from the top with a single movement of the trowel. Place the mould, resting on the glass plate, under the rod. Bring the 1 cm diameter end of the rod into contact with the paste, release it without any jerk or force, and note the penetration. The time taken from the adding of water to the cement to the filling of the mould should be between 3–5 min. Repeat the experiment with trial pastes made with varying percentages of water. The paste giving a penetration of 33–35 mm is said to be of normal consistency. The amount of water mixed is expressed as a percentage of the weight of dry cement. This is usually in the neighborhood of 30% for a paste of normal consistency.
Figure 8.1: Vicat apparatus.
8.5 SOUNDNESS
It is essential that cement concrete does not undergo large changes in volume after setting. This
change in volume is known as unsoundness and may cause cracks, distortion, and disintegration
of concrete.
The test is carried out with the help of the Le Chatelier apparatus shown in Fig. 8.2. It consists of a split brass cylinder 30 mm high, 30 mm in internal diameter, and 0.5 mm thick. Two pointers AA, 165 mm in length up to the axis of the cylinder, are attached to the cylinder, one on each side of the split. Cement paste, prepared from 100 g of cement with 0.78 times the water required to prepare a paste of normal consistency, is filled into the mould resting on a glass plate. Another glass plate is placed on the mould and weighted down. The whole is immediately placed in a water bath maintained at a temperature of 27–32°C. After 24 h the distance between the pointers is measured, and the mould is transferred to a beaker of water, heated to the boiling point in 25–30 min, and kept at this temperature for one hour. After cooling, the increase in the distance between the pointers is noted. The increase in this distance should not be more than 5 mm for cement that has been aerated for 7 days in a humidity of 50–80% before the test, or 10 mm if the cement has been kept in airtight containers.
Figure 8.2: (a) Le Chatelier apparatus and (b) briquettes of standard dimensions. (Hand-drawn by the author.)
8.6 SETTING TIME
To enable the concrete to be laid in position properly the initial setting of cement should not
start too quickly. Once the concrete has been laid it should harden rapidly so that the structure
could be put to use early. The initial setting of cement is that stage in the process of hardening
after which any cracks that may appear do not reunite. Final setting is that when it has attained
sufficient strength and hardness.
The Vicat apparatus shown in Fig. 8.1 is used to find the setting time of cement. A paste of 300 g of cement, made with 0.85 times the amount of water required for a paste of normal consistency, is filled into the mould, and the lower end of the rod is fitted with a 1 mm square needle. This needle is brought into contact with the surface of the paste and released. The initial set is said to have taken place when the needle fails to penetrate beyond a point 5 mm above the glass plate. The time taken from the instant water is added to the cement to the moment when the needle fails to penetrate within 5 mm of the glass plate is known as the initial setting time; it should not be less than 30 min for ordinary Portland cement.
For finding out the final setting time, the 1 mm square needle is replaced by another needle. This needle has an annular attachment around the 1 mm square needle, projecting 0.5 mm below it. To find the final setting time, the needle shall be brought into contact with the paste in the mould and released instantly. The final set shall be considered as having taken place when the attachment fails to make any impression on the surface of the paste while the needle makes one. The time from the moment water was added up to this stage is known as the final setting time.
For ordinary Portland cement the final setting time should not be more than 10 h. The
test should be performed in an air-conditioned room with 90% humidity and at a temperature
between 25 and 29°C.
8.7 COMPRESSIVE STRENGTH
The compressive strength of cement is judged by finding the compressive strength of cement-and-sand mortar. For this purpose, one part by weight of cement is mixed dry with three parts by weight of IS standard sand. To this dry mixture of cement and sand, water is added according to the following formula:
P = Pn/4 + 3.5,
where P is the percentage of water by weight of the dry materials and Pn is the percentage of water required for making a cement paste of normal consistency.
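As a quick numerical illustration of this rule, the sketch below computes the gauging water for the 1:3 mortar from an assumed normal-consistency value Pn; both Pn and the batch masses used are hypothetical.

# Minimal sketch of the gauging-water rule P = Pn/4 + 3.5 (percent of the
# combined dry mass of cement and sand). Pn and the batch masses are hypothetical.

def gauging_water_percent(p_normal):
    return p_normal / 4.0 + 3.5

def gauging_water_grams(p_normal, cement_g=200.0, sand_g=600.0):
    return gauging_water_percent(p_normal) * (cement_g + sand_g) / 100.0

pn = 30.0   # assumed normal consistency (%)
print(f"P = {gauging_water_percent(pn):.1f}% -> "
      f"{gauging_water_grams(pn):.0f} g water for 200 g cement + 600 g sand")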
Cement, sand, and water shall be intimately mixed to give a paste of uniform color, but the mixing should not continue for more than 3–4 min. Cubes of 7.06 cm sides are then moulded out of this paste and are kept in an atmosphere of 90% humidity and 25–29°C temperature for 24 h. They are then removed from the moulds and kept submerged in clean water until the time of the test; they should not be allowed to dry.
Three cubes each are tested in a compression testing machine after 3 days and 7 days. The compressive strength of ordinary Portland cement should not be less than the following values:
After 3 days: 115 kg/cm2
After 7 days: 175 kg/cm2
8.8 TENSILE STRENGTH
The tensile strength of cement-sand mortar is tested to judge the tensile strength of cement. For this, briquettes of standard dimensions are prepared. Briquettes have a uniform thickness of 25.1 mm and a minimum sectional area of 645 mm2 at the central section. For preparing briquettes, one part by weight of cement and three parts by weight of sand are mixed with the quantity of water found from the following formula:
P = 0.2 Pn + 2.5.
Cement, sand, and water are mixed intimately so as to get a uniform mortar. A small heap of mortar is placed in a briquette mould and filled as usual. It is then beaten down with the standard spatula until water appears on the surface. The mould is then turned upside down and, as before, a small heap of mortar is again placed and beaten down. The surfaces are smoothed with the blade of a trowel. The briquettes are taken out of the moulds after keeping them in an atmosphere of 90% humidity and a temperature of 25–29°C for 24 h. Six such specimens each are tested in a briquette testing machine after 3 days and 7 days.
Tensile strength for good Portland cement should be as follows:
After 3 days: not less than 20 kg/sq cm
After 7 days: not less than 25 kg/sq cm
C H A P T E R 9
Hydration of Portland Cement
The hydration behavior of Portland cement [4] encompasses that of its constituent minerals,
but care must be taken in translating their functioning to that of practical Portland cement sys-
tems. While a number of studies have tended to approximate the hydration of alite to that of Portland cement, many additional criteria are involved. Not least of these are interactive rela-
tionships between the principal hydrating minerals C3S, C2S, C3A, and C4AF. None of these
phases is pure; each contains a large number of elements in small quantities in solid solution.
Alkalis can and do affect the course of the reaction, whether present as water-soluble sulphate or incorporated in the constituent phases initially, with exsolution occurring during hydration.
The products formed (C-S-H, ettringite (the reaction product of C3A with gypsum), monosulphate, C4AH13, calcium hydroxide, etc.) are also impure. Analytical electron microscopy of cement pastes has shown that C-S-H incorporates significant amounts of aluminum, iron, and sulphur, while the ettringite and monosulphate phases contain significant amounts of silicon, and even the calcium hydroxide contains small quantities of foreign ions, chiefly silicate. There is no pure saturated solution of calcium hydroxide, for instance; a whole host of other cations and anions in different (mostly small) quantities are also present. The permutations and combinations of what can and does occur in practice are, of course, infinite, and it must not be forgotten that
of what can and does occur in practice are, of course, infinite and it must not be forgotten that
hydrating Portland cements are exceedingly complex systems with many interactive possibilities.
The processes for shortening the hardening time can be classified as follows.
1. The use of quicker hardening cements such as high early strength cement.
2. Heat treatment methods.
3. The use of electromagnetically treated mixing water.
4. The use of chemical additive as hardening accelerators.
5. The use of pressure.
For optimum effects, the magnetizing parameters, such as the magnetic field strength, the flow rate of the water, and the period of influence of the field on the water, must be accurately determined.
C H A P T E R 10
Different Kinds of Cement
The following are some of the important kinds of cements manufactured to suit the different
requirements.
10.1 RAPID HARDENING OR HIGH EARLY STRENGTH
CEMENT
This cement gains strength faster than the ordinary Portland cement. Its initial and final setting
times are the same as those of ordinary cement. It contains more of tri-calcium silicate and
is more finely ground. It gives out more heat while setting and is as such unsuitable for mass
concreting. It is used for such structures as are to be subjected to loads early, e.g., repair of bridges
and roads, etc.; it is more costly than the ordinary cement. It is manufactured by burning at
clinkering temperature an intimate mixture of calcareous and argillaceous materials and grinding
the resultant clinker without the addition of gypsum and not more than 1% air entraining agents.
The average compressive strength of at least three mortar cubes (area of face 50 cm2), composed of one part cement and three parts standard sand by mass and gauged with (P*/4 + 3) percent (of the combined mass of cement and sand) of water, shall be as under:
After 24 h: not less than 160 kg/cm2
After 72 h: not less than 275 kg/cm2
P* is the % of water required to prepare a paste of standard consistency.
10.2 HIGH ALUMINA CEMENT
It is manufactured by fusing together a mixture of bauxite and limestone in the correct proportions at high temperatures. The resulting product is ground finely. It develops strength rapidly, is of black color, and resists well the attack of chemicals, especially of sulphates and seawater. Its ultimate strength is much higher than that of ordinary cement. Its initial setting time is more than 2 h, and the final set takes place immediately thereafter. Most of the heat is given out by it in the first 10 h, as a result of which it can be conveniently used in freezing temperatures, but it should be used in thin layers at normal temperatures.
10.3 QUICK SETTING CEMENT
It sets faster than ordinary Portland cement. Its initial setting time is 5 min and its final setting time is 30 min. It is used for making concrete that requires early setting, as for laying under water or in running water. The initial setting time being very short, there is always the danger of the concrete having undergone initial setting during mixing and placing; as such, this cement is used only in exceptional circumstances.
10.4 PORTLAND SLAG CEMENT
It is obtained by mixing Portland cement clinker, gypsum, and granulated slag in proper pro-
portion and grinding it finely. This cement has properties very much similar to those of ordinary
Portland cement with the following improvements.
1. It has less heat of hydration.
2. It has better resistance to soils, sulphates of alkali metals, alumina, and iron.
3. It has better resistance to acidic waters.
This cement can advantageously be used in marine work. Manufacture of Portland slag
cement is aimed primarily at profitably utilizing blast furnace slag—a waste product from blast
furnaces.
10.5 LOW HEAT CEMENT
Heat generated by cement while setting may cause concrete structures to crack. Heat generation is controlled by keeping the percentages of tricalcium aluminate and tricalcium silicate low. Its initial and final setting times are nearly the same as those of ordinary cement, but the rate at which it develops strength is very slow. It is not very suitable for use in ordinary structures, since not only would the use of the structure be delayed, but the shuttering would also have to be kept in place longer and curing would be prolonged.
10.6 AIR ENTRAINING PORTLAND CEMENT
It is ordinary Portland cement mixed with small quantities of air entraining materials; the materials used are resin, vinsol resin, oils, fats, and fatty acids. Vinsol resin and darex are the most commonly used. These materials have the property of entraining air in the form of fine air bubbles in concrete. These bubbles render the concrete more plastic, more workable, and more resistant to freezing. However, because of the air entrained the strength of the concrete reduces, and as such the quantity of air so entrained should not exceed 5%.
10.7 WHITE CEMENT
It is a cement of pure white color having the same properties as ordinary Portland cement. The grayish color of ordinary cement is due to iron oxide; as such, white cement is manufactured from white chalk and clay free from iron oxide. Oil fuel, and not coal, is used for the burning of this cement. It is much more costly than ordinary cement.
10.8 COLORED CEMENT
By mixing suitable pigments, ordinary Portland cement can be given a red or brown color. For other colors, 5–10% of the desired pigments are ground with white cement. Pigments used in cement should be chemically inert and durable, so as not to fade due to the effect of light or weather.
10.9 PORTLAND POZZOLANA CEMENT
Portland pozzolana cement is produced either by grinding together Portland cement clinker and
pozzolana (porous volcanic rock) or by intimately and uniformly blending Portland cement and
fine pozzolana.
This cement has properties similar to those of ordinary Portland cement, and can therefore
be used for all general purposes where the latter is employed, with no change in the proportion
of coarse or fine aggregates and cement. Gypsum can be added in both cases.
Portland pozzolana cement produces less heat of hydration and offers greater resistance
to the attack of aggressive waters or sulphate-bearing soils than ordinary Portland cement. It also
reduces leaching of calcium hydroxide liberated during the setting and hydration of cement.
Consequently, Portland pozzolana cement is well suited to concrete structures exposed to such conditions.
Pozzolana cement takes a little longer than ordinary Portland cement to gain strength. It
is recommended that when pozzolana cement is used in reinforced concrete, the centering be
left in position a little longer than would be the case with ordinary Portland cement. Ultimate
strength of this cement is more than that of ordinary Portland cement but initial and final setting
times are the same.
10.10 CHEMICALLY INERT (ACID-RESISTANT) CEMENTS
They can be divided into acid-resistant cements, concretes, and putties. Acid-resistant cement is made without firing from silicate or soluble glass (an aqueous solution of the silicates of alkali metals with the general formula (K,Na)2O.nSiO2), finely ground acid-resistant aggregates (andesite, diabase, quartz), and sodium fluosilicate, Na2SiF6. Depending on the aggregate used, acid-resistant cements are called quartz cement, andesite cement, etc.
Cement powder consists of a mixture of pulverized aggregate and sodium fluosilicate. When this mixture is combined with liquid glass, the mass formed soon sets and then rapidly hardens. Setting and hardening take place as a result of the reaction between the liquid glass and
sodium fluosilicate, leading to the formation of silicic acid gel (H4SiO4), which possesses bonding properties.
Acid-resistant cements are used for lining chemical equipment and for preparing mortar
and concretes. When a piece of equipment (acid-storage vat, acid absorption tower, reactor, etc.) is lined, polyisobutylene or rubber is glued onto the shell walls and the acid-resistant lining is
applied on top of this film to provide complete tightness of the lining. Acid-resistant putties,
used in assembling chemical apparatus, are also made from acid-resistant cements.
C H A P T E R 11
Storage of Cement
Portland cement is a finely ground material. It therefore readily absorbs moisture even from the
atmosphere. It is therefore essential to protect it from dampness during storage. Lack of proper
care may cause setting of the cement or a reduction in its strength due to partial setting. The following precautions must therefore be taken in storing cement.
1. Walls, roof, and floor of the building in which cement is to be stored should be completely
waterproof.
2. In case the cement store is newly constructed then its interior should have been thoroughly
dried before cement is stored on it.
3. Doors and windows should be properly fitted and should be kept shut.
4. Except in the case of dry concrete floor the cement bags should be stacked on wooden
planks.
5. The bags should be stacked away from walls. A space of 25 cm all around should be left
between the exterior walls and the piles.
6. Bags should be piled close together.
7. Bags should be piled in header-and-stretcher fashion and not more than 15 bags high.
8. While removing cement from the store, do not take out bags from one tier only; step back two or three tiers.
9. Each incoming consignment should be stacked separately and a placard bearing the date
of arrival of the consignment should be pinned to it. This would help in using cement
in the same order as it arrives thereby avoiding dead storage, that is a stack remaining in
position for a long time while other consignments of cement come in and go out.
10. For temporary storage of cement at the site of work, bags should not be stacked on the
ground. A minimum number of bags needed should be piled upon a raised, dry platform
and covered with tarpaulins.
C H A P T E R 12
Technical Analysis of Cement
Cement has to be produced in quantity to meet demand, and it has to have standard quality [1]. Cement analysis is mainly done to control this quality, since the various constituents affect the quality of cement (cement has an ideal composition). The following formula can be used to calculate the amount of one constituent, provided the others are given:
K = a(b - c) / (b - d),
where
K = kg of calcined limestone,
a = kg of clinker to be made,
b = percent of CaO in ignited shale,
c = percent of CaO in clinker,
d = percent of CaO in calcined limestone.
Example: Suppose that a calcined limestone contains 96% CaO, an ignited shale contains 4% CaO, and the desired clinker is 65% CaO. If 100 kg of clinker is to be made, what amounts of calcined limestone and ignited shale are required?
Solution: K = a(b - c) / (b - d), with
a = 100 kg, b = 4% = 0.04, c = 65% = 0.65, d = 96% = 0.96.
Then
K = 100 (0.04 - 0.65) / (0.04 - 0.96) = 66.3 kg of calcined limestone.
Since the clinker is calcined limestone plus ignited shale, the amount of ignited shale required is 100 - 66.3 = 33.7 kg.
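The same proportioning calculation can be written as a small routine; the sketch below simply reproduces the worked example above, so the inputs mirror those figures rather than new data.

# Minimal sketch of the proportioning formula K = a(b - c)/(b - d).
# K: kg of calcined limestone, a: kg of clinker, b: CaO fraction in ignited
# shale, c: CaO fraction in clinker, d: CaO fraction in calcined limestone.

def calcined_limestone_needed(a, b, c, d):
    return a * (b - c) / (b - d)

a, b, c, d = 100.0, 0.04, 0.65, 0.96          # values of the worked example
K = calcined_limestone_needed(a, b, c, d)
print(f"Calcined limestone: {K:.1f} kg, ignited shale: {a - K:.1f} kg")
# -> Calcined limestone: 66.3 kg, ignited shale: 33.7 kg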
Within certain definite limitations in the composition of cement, the mixture behaves satisfactorily in kilns and produces good cement; outside of these limits, it has been shown that trouble in burning may result or the cement may be of inferior quality.
There are possible defects arising from an unbalanced composition. For instance, if the lime content is too high, the extra lime does not come into combination, and this may cause expansion and cracking of the mortar or concrete. Silica, alumina, and ferric oxide are likewise limited. If the lime content is fixed and the silica becomes too high, which may be accompanied by a decrease in alumina and ferric oxide, the temperature of burning will be raised and the special influence of the high lime is lost. If the lime is too low, which means an increase in the alumina and ferric oxide, the cement may become quick-setting and contain a larger amount of alumina compounds, which appear to be of little value for their cementing qualities. The magnesia (MgO) content is limited to not exceed 5%, because higher magnesia may be dangerous to the soundness of the cement, especially at later ages.
The customary method for expressing the relations is by means of ratios of the several ox-
ides. Some of these values are based only on empirical results of experience, some on theoretical
ideal composition in terms of the probable compounds formed. Ordinary or Portland cement,
technically, is a greenish-grey active, impalpable powder made by burning to a high tempera-
ture in a rotary kiln, a pulverized mixture containing definite proportions of oxides of calcium,
silicon, aluminum, and iron and grinding the resultant clinker. 2–6% gypsum (CaSO4. 2H2O)
by weight (based on maximum limit of 2–2.5% of SO3 in the cement) is added during grinding
of clinker to control setting time.
Lime in cement has a maximum limit. This can be expressed in terms of ratios of the oxides. In this instance, the molecular ratio CaO/SiO2 < 3, since tricalcium silicate is the most basic of the silicates in cement, as can be seen from the following reactions of cement formation. The formation of cement takes place, to a great extent, by reactions in the solid state.
Firing reactions:
CaCO3 → CaO + CO2  (>500°C)
Clay → dehydrated clay, i.e., a mixture of Al2O3 and SiO2  (- H2O, >500°C)
And then:
CaO + Al2O3 → CaO.Al2O3  (650°C)
CaO + SiO2 (or dehydrated clay + CaO) → 2CaO.SiO2 + CaO.Al2O3  (>650°C)
Finally:
2CaO + CaO.Al2O3 → 3CaO.Al2O3
CaO + 2CaO.SiO2 → 3CaO.SiO2
(Cement resistant to sulphate-containing water should not contain 3CaO.Al2O3 but only 4CaO.Al2O3.Fe2O3.)
When an excess of lime is present, the compounds formed are 3CaO.SiO2 and 3CaO.Al2O3; thus the upper limit for CaO is expressed by:
(CaO + MgO) / (SiO2 + Al2O3) = 3.
If CaO decreases beyond a certain limit, 2CaO.SiO2 (dicalcium silicate) appears, which disintegrates spontaneously and is non-hydraulic. The lower limit for CaO (lime), below which tricalcium silicate (3CaO.SiO2) will fail to appear, is
(CaO + MgO) / [SiO2 + (Al2O3 - Fe2O3)] = not less than 3.
If this ratio falls below 3, then the undesirable 2CaO.Al2O3.SiO2 will be formed. Experiments suggest that a combination of 3CaO.SiO2, 2CaO.SiO2, and 3CaO.Al2O3 is the best. But Portland
clinker consists of tricalcium silicate (3CaO.SiO2) and beta dicalcium silicate (β-2CaO.SiO2) as principal constituents, together with lesser and variable quantities of tricalcium aluminate (3CaO.Al2O3), tetracalcium aluminoferrite (4CaO.Al2O3.Fe2O3) or some solid solution of the iron phase, periclase (MgO), free lime (CaO), and trace amounts of many other compounds.
Cement has to be hydraulic. This is expressed by a ratio of the weight percentages of the four major constituents (SiO2, Al2O3, Fe2O3, CaO):
The hydraulic modulus (Hm) = CaO / (SiO2 + Al2O3 + Fe2O3).
The hydraulic modulus should be between 1.8 and 2.2. When SiO2 is too high, (Al2O3 + Fe2O3) is decreased and the temperature of burning is raised; the influence of the lime is also lost. An expression for this is:
The silicate modulus (Sm) = SiO2 / (Al2O3 + Fe2O3)  (wt.%).
The value should be in the range of 2.0–2.5. Cement with a high silicate modulus hardens slowly; that with a low silicate modulus sets rapidly.
12.1 SOLUTION PREPARATION AND APPARATUS/REAGENTS USED
150 ml beakers, stirrer, watch glass, white-band filter paper, red-band filter paper, conical flasks, funnel, porcelain evaporating dish, crucible tongs, grinding mortar, glass pestle, drying oven, igniting furnace, analytical balance, cooling desiccator, burette, distilled water, heater (stove), measuring cylinders, and a sand bath are used for the analysis.
Reagent Used
• HCl of d = 1.19 or 1.185
• AgNO3 solution
• 10% Na2CO3 solution; 10% HCl
• 40% HF
• Conc. H2SO4 of d = 1.84; 5% H2SO4
• HNO3 of d = 1.19
• NH4OH of 25%, 10%, 2.5%
• Na2HPO4 of 50%, 5%
• 3% NH4NO3
• Saturated (NH4)2C2O4; 0.1% (NH4)2C2O4
• 0.01% methyl orange solution
• 0.1N KMnO4
Preparation of Reagents
The concentrations of the reagents available in the laboratory and the volumes of reagent to be used are related by the formula:
C1 d1 V1 = C2 d2 V2.
1. HCl of 37%: d = 1.185, N = 12.5 is present in the laboratory; HCl of 10%: d = 1.0475 is required to be prepared.
V1 = (10 × 1.0475 × 50) / (37 × 1.185) = 11.9 ≈ 12 ml
12 ml of 37% HCl is diluted with distilled water to 50 ml to give 10% HCl.
2. H2SO4 of 96%: d = 1.84, N = 36 is present in the laboratory; H2SO4 of 5%: d = 1.0325 is required to be prepared.
V1 = (5 × 1.0325 × 200) / (96 × 1.84) = 5.845 ≈ 5.85 ml
5.85 ml of 96% H2SO4 is diluted with water to 200 ml to give 5% H2SO4.
3. HNO3 of 70.5%: d = 1.4225, N = 16.14 is present in the laboratory; HNO3 of 31.47%: d = 1.19 is required to be prepared.
V1 = (31.47 × 1.19 × 50) / (70.5 × 1.4225) = 18.67 ≈ 19 ml
19 ml of 70.5% HNO3 is diluted with water to 50 ml to give d = 1.19 HNO3.
4. NH4OH of 28%: d = 0.898, N = 14.76 is present in the laboratory.
(a) NH4OH of 25%: d = 0.907 is required to be prepared.
V1 = (25 × 0.907 × 50) / (28 × 0.898) = 45 ml
45 ml of 28% NH4OH is diluted with water to 50 ml to give 25% NH4OH.
(b) NH4OH of 10%: d = 0.957 is required to be prepared.
V1 = (10 × 0.957 × 200) / (28 × 0.898) = 76.12 ≈ 76 ml
76 ml of 28% NH4OH is diluted with water to 200 ml to give 10% NH4OH.
(c) NH4OH of 2.5%: d = 0.987 is required to be prepared.
V1 = (2.5 × 0.987 × 50) / (28 × 0.898) = 4.9 ≈ 5 ml
5 ml of 28% NH4OH is diluted with water to 50 ml to give 2.5% NH4OH.
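The dilution calculations of examples 1–4 all apply the same relation C1 d1 V1 = C2 d2 V2 solved for V1; the sketch below reproduces the 10% HCl case so the result can be checked against example 1.

# Minimal sketch of the dilution rule C1*d1*V1 = C2*d2*V2, solved for V1.
# The inputs reproduce example 1 above (10% HCl from 37% stock acid).

def stock_volume_needed(c_target, d_target, v_target, c_stock, d_stock):
    return c_target * d_target * v_target / (c_stock * d_stock)

v1 = stock_volume_needed(c_target=10, d_target=1.0475, v_target=50,
                         c_stock=37, d_stock=1.185)
print(f"Take {v1:.1f} ml of 37% HCl and dilute to 50 ml")   # about 12 ml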
10% Na2CO3: 10 g is dissolved in distilled water and made up to 100 ml
3% NH4NO3: 3 g is dissolved in distilled water and made up to 100 ml
50% Na2HPO4: 50 g is dissolved in distilled water and made up to 100 ml
5% Na2HPO4: 5 g is dissolved in distilled water and made up to 100 ml
0.1% (NH4)2C2O4: 0.1 g is dissolved in distilled water and made up to 100 ml
0.01% methyl orange: 0.01 g is dissolved in distilled water and made up to 100 ml
• (NH4)2C2O4 is dissolved in water until it is saturated.
• 40% HF solution is prepared.
• 0.1N KMnO4 is prepared by standardizing it with Na2C2O4 as follows:
KMnO4: m = (0.1 × 31.6 × 150) / 1000 = 0.47415 g
• 0.47415 g of KMnO4 is dissolved in water to 150 ml to give 0.1 N.
Na2C2O4: m = (0.1 × 67 × 100) / 1000 = 0.67 g
• 0.67 g of Na2C2O4 is dissolved in water to 100 ml to give 0.1 N.
m
0:1
67
(cid:2)
(cid:2)
D
D
100
D
0:67 g
• 0.67 g Na2O4 is dissolved in water to 100 ml to give 0.1 N
• An aliquot (25 ml) of the Na2C2O4 solution is put into a conical flask, acidified with a small amount of HCl, and titrated with KMnO4 until the pink color of KMnO4 persists with one drop of it.
• Volume of KMnO4 used for titration = 26.9 ml
• Volume of Na2C2O4 taken for titration = 25.0 ml
• Normality of Na2C2O4 = 0.1 N
• Normality of KMnO4 = ?
V1N1 (Na2C2O4) = V2N2 (KMnO4)
N(KMnO4) = (0.1 × 25.0) / 26.9 = 0.0929 N
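The standardization above follows the usual titration relation N1 V1 = N2 V2; the sketch below repeats the calculation with the same figures, simply as a numerical check.

# Minimal sketch: normality of KMnO4 from the titration data above
# (25.0 ml of 0.1 N Na2C2O4 consumed 26.9 ml of KMnO4).

def normality_from_titration(n_standard, v_standard, v_titrant):
    return n_standard * v_standard / v_titrant

n_kmno4 = normality_from_titration(n_standard=0.1, v_standard=25.0, v_titrant=26.9)
print(f"N(KMnO4) = {n_kmno4:.4f} N")   # about 0.0929 N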
12.2 SAMPLE ANALYSIS AND THEIR REPORT
12.2.1 EXPERIMENT NO. 1
Determination of Moisture Content
Procedure:
• 2.00 g of cement sample was weighed on analytical balance and put into reweighed crucible
of known constant weight.
• Then it was dried in a drying oven at a temperature of 110°C for 3 h.
• Next the crucible with content is put into a desiccator and cooled for 20 min and weighed.
• Again, it was dried for 1 h, then cooled and weighed.
Data obtained and calculation
(a) Constant weight of crucible = 7.05800 g.
(b) Wt. of crucible + cement sample = 9.05800 g.
(c) Wt. of cement sample (b - a) = 2.00000 g.
(d) Wt. of crucible + cement sample after drying for 3 h at 110°C = 9.03900 g; after drying for a further 1 h = 9.03865 g.
(e) Wt. of sample after drying (d - a) = 9.03865 - 7.05800 = 1.98065 g.
(f) Wt. of moisture loss (c - e) = 0.01935 g.
(g) Wt. of moisture loss in percentage = (f × 100) / c = (0.01935 × 100) / 2 = 0.9675%.
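The moisture determination above, and the loss-on-ignition and residue determinations that follow, all reduce to the same gravimetric percentage calculation. The sketch below simply repeats it with the figures of this experiment.

# Minimal sketch of the gravimetric calculation used in Experiments 1-3.
# The figures reproduce the moisture determination above.

def loss_percent(crucible_g, crucible_plus_sample_g, crucible_plus_dried_g):
    sample = crucible_plus_sample_g - crucible_g
    dried = crucible_plus_dried_g - crucible_g
    return (sample - dried) / sample * 100.0

moisture = loss_percent(7.05800, 9.05800, 9.03865)
print(f"Moisture content = {moisture:.4f} %")   # about 0.9675 %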
12.2.2 EXPERIMENT NO. 2
Determination of Loss of Substance After Ignition
Procedure:
• 2.00 g of cement sample was put into crucible of known constant weight.
• Next it was dried in a drying oven for 30 min at 110°C and then ignited in the furnace at 1000°C for 1 h.
• Then it was cooled in a desiccator for 20 min and weighed.
Data obtained and calculation
(a) Constant wt. of crucible = 6.90700 g.
(b) Wt. of crucible + cement sample = 8.90700 g.
(c) Wt. of sample (b - a) = 2.00000 g.
(d) Wt. of crucible + sample after drying for 30 min in the oven and igniting for 1 h in the furnace = 8.83700 g.
(e) Wt. of sample after ignition (d - a) = 8.83700 - 6.90700 = 1.93000 g.
(f) Wt. of substance lost (c - e) = 2.00000 - 1.93000 = 0.07000 g.
(g) Wt. of substance lost in percentage = (f × 100) / c = (0.07000 × 100) / 2 = 3.5%.
*The substances lost are CO2 and steam.
12.2.3 EXPERIMENT NO. 3
Determination of Undissolved Residue
Procedure:
1. 1 g of the cement sample is put in a 150 ml beaker, and 25 ml of distilled water and 5 ml of concentrated HCl (d = 1.185) are added. While shaking the contents, 50 ml of distilled water is added. The mixture is then heated for 15 min on a heater, with the beaker covered by a watch glass.
2. The mixture was filtered on a white-band filter paper using a funnel. The precipitate was washed with cold water until the chloride ions were removed (this was tested with AgNO3), and after that the content of the filter paper was transferred to the beaker by washing it with 30 ml of hot 10% Na2CO3 solution.
3. The content of the beaker was covered with a watch glass and heated for the time necessary for maximum moisture removal.
4. Then the mixture was filtered on the filter paper and washed with hot distilled water. Then 10 drops of 10% HCl were added onto the filter paper and it was washed with water until free from chloride ions (this was tested with AgNO3).
5. The filter paper with the residue was put inside a previously weighed crucible, ignited for 1 h at 1000°C in the furnace, then cooled in a desiccator for 20 min and weighed.
Data obtained and calculation
1. Wt. of crucible = 6.90700 g.
2. Wt. of crucible + residue after ignition = 6.92045 g.
3. Wt. of residue (b – a) = 6.92045 − 6.90700 = 0.01345 g.
4. Wt. of undissolved residue in percentage = (c / wt. of sample) × 100 = (0.01345 / 1.00000) × 100 = 1.345%.
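The same calculation expressed as a short sketch (illustrative names, weights as reported above):

# Undissolved-residue arithmetic for Experiment No. 3.
crucible = 6.90700
crucible_plus_residue = 6.92045
sample_weight = 1.00000   # g of cement taken

residue = crucible_plus_residue - crucible   # 0.01345 g
print(f"Undissolved residue: {residue / sample_weight * 100:.3f}%")  # 1.345%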
12.2.4 EXPERIMENT NO. 4
Determination of Silicic Acid as SiF4.(SiO2:SO3)
Procedure:
1. 0.4 g of cement sample was put into a 50 ml porcelain dish, and to this 15 ml of distilled water and 10 ml of HCl (d = 1.185) were added.
2. The mixture was evaporated on a sand bath until all the HCl disappeared.
3. Then the dry residue was treated with 10 ml of HCl (d = 1.185) and evaporated again.
4. The content of the porcelain dish was ground with a glass pestle, collected toward the center of the dish, and moistened with a few drops of HCl.
5. 30 ml of hot distilled water was added to the content and the mixture was heated for 10 min and filtered on a red-band filter paper using a funnel. Grains of precipitate remaining in the dish were transferred to the filter paper with the use of additional filter paper.
* The filtrate was saved for the next experiments.
6. The precipitate was washed with hot distilled water until all the chloride ions were removed (this was checked with AgNO3).
7. The filter paper with the precipitate was placed in a pre-weighed porcelain crucible, dried in an oven at 110°C for 30 min, and ignited in the furnace for 3 h at 1000°C.
8. Then it was cooled in a desiccator for 20 min and weighed.
9. The content was ignited, cooled, and weighed again until a constant weight was obtained.
* The weighed form was white in color.
10. The content of the crucible was moistened with 2 ml of distilled water, 2 ml of 40% HF was added, and the mixture was evaporated under the hood.
11. Then 1 ml of HCl was added and evaporated. Near the end of the evaporation, 2 ml of conc. H2SO4 (d = 1.84) was added and heating was continued until white fumes of SO3 appeared (this indicates that all the SiF4 and HF have evaporated).
12. Heating was continued until the white fumes (SO3) disappeared.
13. Then, after cooling, the crucible with the residue was weighed.
Data obtained and calculation
1. Wt. of crucible = 16.78900 g.
2. Wt. of crucible + precipitate after ignition = 16.88000 g.
3. Wt. of precipitate (b – a) = 16.88000 − 16.78900 = 0.09100 g.
4. Wt. of the precipitate in percentage = (c / wt. of sample) × 100 = (0.09100 / 0.4) × 100 = 22.75%.
5. Wt. of crucible + precipitate after treating with 40% HF and then igniting = 16.87400 g.
6. Wt. of the evaporate (b – e) = 16.88000 − 16.87400 = 0.00600 g.
7. Wt. of evaporate in percentage = (f / wt. of sample) × 100 = (0.00600 / 0.4) × 100 = 1.5% SO3.
8. Percentage of SiO2 (d – f) = 22.75 − 1.5 = 21.25%.
* The purpose of adding H2SO4 is to obtain SO3 by oxidation; its appearance shows the complete evaporation of SiF4 and HF.
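A sketch of the silica arithmetic from this experiment, using the crucible weights reported above (illustrative variable names):

# SiO2 determination arithmetic for Experiment No. 4.
crucible = 16.78900
crucible_plus_ppt = 16.88000      # after the first ignition
crucible_after_hf = 16.87400      # after 40% HF treatment and re-ignition
sample_weight = 0.4               # g of cement taken

precipitate_pct = (crucible_plus_ppt - crucible) / sample_weight * 100   # 22.75%
so3_pct = (crucible_plus_ppt - crucible_after_hf) / sample_weight * 100  # 1.5%
sio2_pct = precipitate_pct - so3_pct                                     # 21.25%
print(f"SiO2: {sio2_pct:.2f}%, SO3: {so3_pct:.2f}%")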
12.2.5 EXPERIMENT NO. 5
Determination of Iron and Alumina Oxides (R2O3)
Procedure:
1. The filtrate saved from the determination of SiO2 was heated to boiling in a beaker of about 200 ml.
2. Then 0.5 ml of conc. HNO3 (d = 1.185) and 4 drops of 0.01% methyl orange solution were added.
3. Finally, 10% NH4OH solution was added drop by drop until the solution became slightly alkaline (orange in color).
4. Then the muddy solution was kept in a hot place until all the precipitate settled and the liquid above the precipitate became transparent.
5. The precipitate was filtered on a red-band filter paper and washed with 35% NH4NO3 solution until all the chloride ions were removed.
* The filtrate was saved for the next experiment.
6. The filter paper with the precipitate was transferred to a porcelain crucible, dried in the drying oven until it started charring, then ignited in a furnace at 1000°C for 1 h, cooled, and weighed.
Data obtained and calculation:
1. Wt. of crucible = 6.90900 g.
2. Wt. of crucible + R2O3 after ignition = 6.94300 g.
3. Wt. of R2O3 (b – a) = 6.94300 − 6.90900 = 0.03400 g.
4. % of R2O3 = (0.03400 / 0.4) × 100 = 8.5%.
The average (%) proportions of R2O3 = 9%. From this, 6.5% is Al2O3 and 2.5% is Fe2O3.
Therefore,
9 —— 6.5
8.5 —— ?
(8.5 × 6.5) / 9 = 6.138% Al2O3
8.5 − 6.138 = 2.362% Fe2O3.
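The split of R2O3 into Al2O3 and Fe2O3 can be reproduced with the short sketch below; the 6.5 : 2.5 split out of 9 is the average proportion quoted in the text, and the names are illustrative.

# R2O3 determination and its split into Al2O3 and Fe2O3 (Experiment No. 5).
crucible = 6.90900
crucible_plus_r2o3 = 6.94300
sample_weight = 0.4

r2o3_pct = (crucible_plus_r2o3 - crucible) / sample_weight * 100  # 8.5%
al2o3_pct = r2o3_pct * 6.5 / 9.0                                  # ~6.14%
fe2o3_pct = r2o3_pct - al2o3_pct                                  # ~2.36%
print(f"R2O3 {r2o3_pct:.1f}%, Al2O3 {al2o3_pct:.3f}%, Fe2O3 {fe2o3_pct:.3f}%")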
12.2.6 EXPERIMENT NO. 6
Determination of Calcium Ion
Procedure:
1. The filtrate which was saved from the determination of R2O3 was acidified with a few drops of HCl (a pink color appears) and boiled.
2. 25 ml of hot saturated (NH4)2C2O4 solution was added, and 10% NH4OH was added drop by drop until the solution became orange. The precipitate (CaC2O4) is formed.
3. The solution was boiled until the precipitate settled, and the mixture stood for 1 h undisturbed.
4. The precipitate was filtered with white-band filter paper and washed at first with hot 0.1% (NH4)2C2O4 solution until all the chloride ions were removed, and then with hot water four times.
* The filtrate was saved for the next experiment.
5. The filter paper with the precipitate was carefully transferred to a 250 ml beaker and flattened against the wall of the beaker; 150 ml of hot 5% H2SO4 solution was poured over it, gradually moving the filter paper outwards using a glass rod.
6. Then the solution was heated to 70°C and titrated with 0.0929 N KMnO4 solution.
NB: No indicator was used for the titration because KMnO4 has a pink color. One drop of KMnO4 solution changes the colorless solution to pink at the equivalence point.
Data obtained and calculation
• CaC2O4 + H2SO4 → CaSO4 + H2C2O4.
• Volume of solution (H2C2O4) = 150 ml.
• Volume of KMnO4 used for titration = 98.5 ml.
• Normality of KMnO4 = 0.0929 N.
• Normality of H2C2O4 = ?
• Normality of H2C2O4 = (V1N1) of KMnO4 / V2 = (98.5 × 0.0929) / 150 = 0.0610 N.
• CaC2O4 → CaO + CO2.
• Gm. eq. of Ca = 20.04 g.
• Wt. of Ca = (0.0610 × 20.04 × 150) / 1000 = 0.183366 g.
• Ca → CaO:
40 g Ca —— 56 g CaO
0.183366 g —— ?
(0.183366 × 56) / 40 = 0.2567124 g CaO.
• 0.4 g of sample —— 0.2567124 g CaO
100 g —— ?
% CaO = (0.2567124 × 100) / 0.4 = 64.1781%.
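The chain of conversions above can be checked with this sketch (values as reported; names illustrative):

# CaO determination arithmetic for Experiment No. 6.
v_kmno4, n_kmno4 = 98.5, 0.0929   # titration volume (ml) and normality of KMnO4
v_solution = 150.0                # ml of H2C2O4 solution titrated
eq_wt_ca = 20.04                  # gram-equivalent weight of Ca
sample_weight = 0.4               # g of cement taken

n_h2c2o4 = v_kmno4 * n_kmno4 / v_solution             # ~0.0610 N
ca_g = n_h2c2o4 * eq_wt_ca * v_solution / 1000.0      # ~0.1834 g Ca
cao_g = ca_g * 56.0 / 40.0                            # ~0.2567 g CaO
print(f"CaO: {cao_g / sample_weight * 100:.2f}%")     # ~64.18%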
12.2.7 EXPERIMENT NO. 7
Determination of Magnesium Ion
Procedure:
1. The filtrate after removal of Ca2+ ions was evaporated almost to dryness in a 250 ml beaker.
2. After cooling the beaker, the residue was moistened with 3 ml of HCl (d = 1.185) and 50 ml of distilled water were added and heated. The solution is pink.
3. The solution was acidified with HCl (d = 1.19) after adding 4 drops of methyl orange. The solution became light yellow in color.
4. 30 ml of 5% Na2HPO4 and 10% NH4OH were added until the solution just turned alkaline (checked with litmus).
5. After cooling, the solution was diluted with 100 ml of distilled water and 15 ml of 25% NH4OH solution was added.
6. The precipitate was left to stand for 3 h and then filtered through a white-band filter paper.
7. The precipitate was washed with 2.5% NH4OH until all the Cl− ions were removed.
8. The filter paper with the precipitate was dried in the drying oven in a previously weighed porcelain crucible and ignited in the furnace at 1000°C until a constant weight was obtained.
Data obtained and calculation
1. Wt. of crucible = 6.90900 g.
2. Wt. of crucible + residue after ignition = 6.91955 g.
3. Wt. of residue (MgO) = 6.91955 − 6.90900 = 0.01055 g.
4. % of MgO = (c / wt. of sample) × 100 = (0.01055 / 0.4) × 100 = 2.6375%.
Other Calculations
1. The upper limit for CaO (lime), which should not be more than 3:
(CaO + MgO) / (SiO2 + Al2O3) = (64.1781 + 2.6375) / (21.25 + 6.138) = 66.8156 / 27.388 = 2.4396.
2. The lower limit for CaO (lime), which should not be less than 3:
(CaO + MgO) / (SiO2 − (Al2O3 + Fe2O3)) = (64.1781 + 2.6375) / (21.25 − 8.5) = 66.8156 / 12.75 = 5.2404.
3. The hydraulic modulus (required range 1.8 to 2.2):
Hm = CaO / (SiO2 + R2O3) = 64.1781 / (21.25 + 8.5) = 64.1781 / 29.75 = 2.1572.
4. The silicate modulus (required range 2.0 to 2.5):
Sm = SiO2 / R2O3 = 21.25 / 8.5 = 2.5.
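The sketch below reproduces the lime-limit and moduli arithmetic with the oxide percentages determined above; the variable names are illustrative and the formulas are as reconstructed from the worked figures.

# Lime limits and moduli from the oxide percentages determined above.
CaO, SiO2, Al2O3, Fe2O3, MgO = 64.1781, 21.25, 6.138, 2.362, 2.6375
R2O3 = Al2O3 + Fe2O3                                    # 8.5

upper_limit = (CaO + MgO) / (SiO2 + Al2O3)              # ~2.44 (not more than 3)
lower_limit = (CaO + MgO) / (SiO2 - (Al2O3 + Fe2O3))    # ~5.24 (not less than 3)
hydraulic_modulus = CaO / (SiO2 + R2O3)                 # ~2.16
silicate_modulus = SiO2 / R2O3                          # 2.5
print(upper_limit, lower_limit, hydraulic_modulus, silicate_modulus)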
How do you grade this cement? Inferior, good, why?
The cement is graded as being of good quality for the following reasons.
1. The main mineral oxides are found within their required proportion ranges, in %:
CaO = 64.178%, SiO2 = 21.25%, SO3 = 1.5%, Al2O3 = 6.138%, Fe2O3 = 2.362%, MgO = 2.6375%.
2. From the value of the upper limit of CaO, we can see that excess lime is not present; therefore, there is no possibility of expansion and cracking of mortar or concrete. From the value of the lower limit of CaO, we can see that there is no possibility of formation of the undesirable 2CaO.Al2CaO.SiO2.
3. From the hydraulic modulus value we can see that the cement has hydraulic properties.
4. When SiO2 is too high and (Al2O3 + Fe2O3) is decreased, the temperature of burning is raised and the influence of high lime is also lost. Cement with a high silicate modulus hardens slowly; that with a low silicate modulus sets rapidly. From the silicate modulus value we can see that the value is in the required range.
5. From the MgO content we can see that there will be no possibility of unsoundness (volume change) of the cement at the early setting stage.
6. From all the chemical composition values we can see that the amount of gypsum added to control the setting time was the required amount.
12.3 CONCLUSION
As the theory indicates, the cement produced in any country must be of good quality in order to carry out satisfactory construction work. As we can see from the sample analysis results, the cement produced in Ethiopia is of good quality.
Since Ethiopia is a developing country, the demand for cement increases continually in every field of construction. To fulfill this demand, great work must be done in cement production; it must be matched with modern technology, and for this, skilled manpower is also very important.
Author’s Biography
TADELE ASSEFA ARAGAW
Tadele Assefa Aragaw is a lecturer in Chemistry and Environmental Engineering, a researcher, and a facility manager in Chemical and Food Engineering at the Bahir Dar Institute of Technology. Since 2017 he has been involved in a research project in the area of Ethiopian kaolin characterization for different industrial applications, as well as the investigation of indigenous microalgae from wastewater for biodiesel production.
In 2012, Tadele received his B.S. in Chemistry from the University of Gondar.
In 2014, he started studying for his master’s degree in Environmental Engineering while
also teaching an Analytical Chemistry and Environmental Engineering course for Chemical
Engineering students. He received his M.Sc. in Environmental Engineering in 2016 from the
Bahir Dar Institute of Technology, Bahir Dar University. Tadele has published articles in the
field of his profession, Environmental Engineering.
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=7833474.pdf&bkn=7833473&pdfType=book
|
Series ISSN 1939-5221
The Human Side of Engineering
John Heywood, Trinity College Dublin-University of Dublin
While in many university courses attention is given to the human side, as opposed to the technical
side of engineering, it is by and large an afterthought. Engineering is, however, a technical, social, and
personal activity. Several studies show that engineering is a community activity of professionals in which
communication is central to the engineering task. Increasingly, technology impacts everyone in society.
Acting as a professional community, engineers have an awesome power to influence society but they can
only act for the common good if they understand the nature of our society.
To achieve such understanding they have to understand themselves. This book is about understanding
ourselves in order to understand others, and understanding others in order to understand ourselves in the
context of engineering and the society it serves. To achieve this understanding this book takes the reader
on 12 intellectual journeys that frame the big questions confronting the engineering professions.
ABOUT SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis Digital Library of Engineering and Computer Science. Synthesis Lectures provide concise original presentations of important research and development topics, published quickly in digital and print formats. For more information, visit our website: http://store.morganclaypool.com
The Human Side
of Engineering
John Heywood
The Human Side of Engineering
Synthesis Lectures on
Engineering
Each book in the series is written by a well known expert in the field. Most titles cover subjects
such as professional development, education, and study skills, as well as basic introductory
undergraduate material and other topics appropriate for a broader and less technical audience.
In addition, the series includes several titles written on very specific topics not covered elsewhere
in the Synthesis Digital Library.
The Human Side of Engineering
John Heywood
2016
Engineering Principles in Everyday Life for Non-Engineers
Robert C. Creese
2016
Engineering Principles in Everyday Life for Non-Engineers
Saeed Benjamin Niku
2016
A, B, See... in 3D: A Workbook to Improve 3-D Visualization Skills
Dan G. Dimitriu
2015
The Captains of Energy: Systems Dynamics from an Energy Perspective
Vincent C. Prantil and Timothy Decker
2015
Lying by Approximation: The Truth about Finite Element Analysis
Vincent C. Prantil, Christopher Papadopoulos, and Paul D. Gessler
2013
Simplified Models for Assessing Heat and Mass Transfer in Evaporative Towers
Alessandra De Angelis, Onorio Saro, Giulio Lorenzini, Stefano D’Elia, and Marco Medici
2013
The Engineering Design Challenge: A Creative Process
Charles W. Dolan
2013
The Making of Green Engineers: Sustainable Development and the Hybrid Imagination
Andrew Jamison
2013
Crafting Your Research Future: A Guide to Successful Master’s and Ph.D. Degrees in
Science & Engineering
Charles X. Ling and Qiang Yang
2012
Fundamentals of Engineering Economics and Decision Analysis
David L. Whitman and Ronald E. Terry
2012
A Little Book on Teaching: A Beginner’s Guide for Educators of Engineering and Applied
Science
Steven F. Barrett
2012
Engineering Thermodynamics and 21st Century Energy Problems: A Textbook
Companion for Student Engagement
Donna Riley
2011
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo
2011
Systems Engineering: Building Successful Systems
Howard Eisner
2011
Fin Shape Thermal Optimization Using Bejan's Constructal Theory
Giulio Lorenzini, Simone Moretti, and Alessandra Conti
2011
Geometric Programming for Design and Cost Optimization (with illustrative case study
problems and solutions), Second Edition
Robert C. Creese
2010
Survive and Thrive: A Guide for Untenured Faculty
Wendy C. Crone
2010
Geometric Programming for Design and Cost Optimization (with Illustrative Case Study
Problems and Solutions)
Robert C. Creese
2009
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey and Jeffrey W. Holmes
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Analog Multimedia
Explorations
Lina J. Karam and Naji Mounsef
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Digital Multimedia and
Robotics Explorations
Lina J. Karam and Naji Mounsef
2008
CAD/CAM of Sculptured Surfaces on Multi-Axis NC Machine: The DG/K-Based
Approach
Stephen P. Radzevich
2008
Tensor Properties of Solids, Part Two: Transport Properties of Solids
Richard F. Tinder
2007
Tensor Properties of Solids, Part One: Equilibrium Tensor Properties of Solids
Richard F. Tinder
2007
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
2007
Project Management for Engineering Design
Charles Lessard and Joseph Lessard
2007
Relativistic Flight Mechanics and Space Travel
Richard F. Tinder
2006
Copyright © 2017 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations
in printed reviews, without the prior permission of the publisher.
The Human Side of Engineering
John Heywood
www.morganclaypool.com
ISBN: 9781627056649 paperback
ISBN: 9781627056656 ebook
DOI 10.2200/S00748ED1V01Y201612ENG028
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING
Series ISSN
Print 1939-5221 Electronic 1939-523X
The Human Side of Engineering
John Heywood
Trinity College Dublin-University of Dublin
Foreword by Mani Mina
Iowa State University
SYNTHESIS LECTURES ON ENGINEERING #28
ABSTRACT
While in many university courses attention is given to the human side, as opposed to the technical
side of engineering, it is by and large an afterthought. Engineering is, however, a technical, social,
and personal activity. Several studies show that engineering is a community activity of profession-
als in which communication is central to the engineering task. Increasingly, technology impacts
everyone in society. Acting as a professional community, engineers have an awesome power to
influence society but they can only act for the common good if they understand the nature of
our society. To achieve such understanding they have to understand themselves. This book is
about understanding ourselves in order to understand others, and understanding others in order
to understand ourselves in the context of engineering and the society it serves. To achieve this
understanding this book takes the reader on 12 intellectual journeys that frame the big questions
confronting the engineering professions.
KEYWORDS
agency, assumptions, change, common good, communication, community, contrac-
tualism, constructivism, consequentialism, curriculum, duty, development, engineer-
ing, engineering education, enterprise, epistemology, ethics, engineering ethics, fear,
higher education, judgement, knowledge, language, learning, learning organization,
life-long learning, management, mobility, morality, open-system, closed system,
organization, the person, perception, philosophy, philosophy of engineering, pro-
fessional, realism, reflection, responsibilities, rights, schema, self-management, so-
cial system, transfer of learning, team-work, technological literacy, truth, university,
virtues, ways of thinking, work, workforce
Contents
Foreword . . . . . . . . . . xi
Preface and Introduction . . . . . . . . . . xvii
Notes . . . . . . . . . . xxi
Acknowledgments . . . . . . . . . . xxv
1 "It all Depends on What You Mean by..." . . . . . . . . . . 1
Notes . . . . . . . . . . 6
2 Thinking about Thinking . . . . . . . . . . 11
Notes . . . . . . . . . . 18
3 Things are not Always What They Seem . . . . . . . . . . 21
Notes . . . . . . . . . . 26
4 Meaning—True or False: Real or Imagined . . . . . . . . . . 31
Notes . . . . . . . . . . 34
5 From Perception to Self-Perception and a Little Management En-route . . . . . . . . . . 37
Appendix . . . . . . . . . . 45
Notes . . . . . . . . . . 47
6 Sharing Problems: Living in Communities . . . . . . . . . . 53
Notes . . . . . . . . . . 56
7 Thinking about Making a Good Engineer Possible . . . . . . . . . . 59
Notes . . . . . . . . . . 66
8 Aspiration in Engineering . . . . . . . . . . 71
Notes . . . . . . . . . . 76
9 Preparing for the Future: Individuals and Organizations . . . . . . . . . . 81
Notes . . . . . . . . . . 88
10 Changing Us: Changing Society . . . . . . . . . . 93
Notes . . . . . . . . . . 99
11 Journey's End: A New Beginning? . . . . . . . . . . 109
Notes . . . . . . . . . . 117
12 Questioning our Assumptions: Adaptability and Change . . . . . . . . . . 119
Notes . . . . . . . . . . 127
Author's Biography . . . . . . . . . . 131
Author Index . . . . . . . . . . 133
Subject Index . . . . . . . . . . 137
Foreword
THE BIG PICTURE
This book is a series of essays that were originally created by John Heywood for a seminar at Iowa State University (Electrical Engineering EE510M) that I created during the Fall of 2013. Faculty and students from different perspectives, disciplines, and departments attended and contributed to the seminar class. This version of "the Journeys," as his explorations in the seminars were called, is a result of interactive participation, discussion, and feedback from the followers of the seminar, and finally the discussions between John and me. The journeys are explorations into the multidimensional and connected world of engineering, technology, and society. They invite the reader to explore the same territories and find their own answers, not those of a textbook.
I thought it would be of value to provide the story of how these journeys started and what the goals, objectives, and hopes were that we had when we embarked on this project. Hopefully, this will help you, the reader, to examine your perspectives and belief structures, and reflect on your fields of interest, as it helped us and our friends, associates, and colleagues examine ours. We hope that readers will be encouraged to participate in more constructive dialogues and reflective activities about engineering and its purposes and, in consequence, engineering education. As John says at the end of the first journey, "We […] are taking a series of journeys so we may better reflect on who and what we are as individuals and engineers within a society that is becoming increasingly complex."
THE SEMINAR CLASS
The idea for the seminar was the result of a very successful session that we did in the Spring of 2013 in the first class of electrical engineering [Electrical Engineering (EE185)] in our department. Because I had for some time used student reflections to monitor their experience and provide them with a method of self-evaluation, I thought their development would be enhanced if they were enabled to engage in dialogue with an outside scholar, so I asked John if he would meet with the class via Skype.
A week before the meetings, I introduced John and his ideas and writings to the class. For that week in EE185 we discussed who John is, and the students read about him through his memories of 50 years of being in IEEE (on the IEEE History website). The students then created a set of questions for him to answer. The questions were sent to John a few days before the meeting. On the day of the Skype meeting, he answered these and, in dialogue with the students, other questions they put to him. The students not only liked the session but also kept asking to have more sessions like that. In particular, they wanted to have more sessions with John. This session was so successful and the students loved it so much that I thought it would be even better if we could engage higher-level students, so we decided to do a seminar class with a new audience and create more in-depth discussions.
THE BEGINNING
Later that year, at ASEE's annual meeting (Atlanta, Georgia, 2013), we reviewed what had happened and considered how we could have a seminar class on the subject of engineering pedagogy and philosophy, promote discussion and debate within it, and record the participants' reflections. John had the vision that we could bring the self, the person as an agent, into the discussions and help the participants realize how to create their own journey in critical thinking, personal philosophy, and pedagogy. Eventually, he decided to create a few essays to trigger discussion that would help a selected group of faculty and senior students to debate, think, discuss, and appreciate the role of the humanities and social sciences in engineering and engineering education as a means of reflective practice.
When I returned from the conference the first thing I did was to visit the undergraduate office of the Iowa State University Department of Electrical and Computer Engineering. I asked the Director, Vicky Thorland-Oster, the following: "I would like to make a class for Fall 2013 on Critical Reflections on Engineering, Engineering Pedagogy, and Philosophy; can you help me with that?" Vicky paused, looked at me, and said that the idea was great but that we could not create a class unless the curriculum committee approved it. This was not possible since the committee did not have meetings during the summer of 2013. But I had to make it happen! This was so special to us that we had to get it done. We just had to….
To make the seminar happen within the constraints of university bureaucracy, I decided to go through the regular academic system and create a special topic class. Since I am a member of the Faculty of Electrical Engineering in the Electromagnetics area, I used a special topic class for the Fall 2013 semester called EE510M. The class was officially (according to the Iowa State University catalog) a special topic class with the following title: "EE510M: Special topics on Electromagnetism." John mentioned that, since he had worked on ionospheric research in industry 50 or more years ago, this might fit! While this was a real stretch, it was the only way that we could have a class in the time that we had. After the class was created I began to invite people to participate. In order to be more descriptive about the seminar class we decided to adapt the title to "Critical Reflections on Engineering and Engineering Pedagogy." The Journeys would focus on how we develop and find our own belief structures as individuals and educators in the context of engineering and technology.
CREATING A PLATFORM FOR THE DISCUSSIONS: THE JOURNEYS
John Heywood started to write his personal reflections in a set of essays called "the Journeys." In this development he led with his ideas and reflections on engineering, pedagogy, and the development of a personal philosophy. John and I have been active members of the American Society for Engineering Education (ASEE) and in particular of its Technological and Engineering Literacy and Philosophy of Engineering (TELPhE) division. As a part of that group we believed that, in order to enrich our efforts in engineering education and pedagogy, we need to question our epistemology of what engineering is, our roles as educators, and our value systems as engineers and engineering educators, and more especially as individuals. As we engage in these activities and discussions we end up developing a philosophy of engineering and engineering education driven by our personal philosophy. John started to write the Journeys for the class to review and discuss with him at weekly sessions.
John Pritchard, graduate student, researcher, and good friend, would record the sessions via Skype, direct them, and put them into the final form that would be posted on the website he created for this seminar series. Copies of the scripts, and the accompanying notes to which John attached great importance, would also be made available, so a participant could choose to view the Skype recording, read the script, or use both. Insofar as was possible, the Skype recording would take place on Mondays and a Skype seminar would follow on Fridays.
THE SEMINARS
The seminars were advertised during the middle to end of August 2013. Then they began in the first week of September 2013. We met every Friday afternoon from 2–3 pm (8–9 pm in Dublin), the essay and the readings having been posted earlier in the week. Those who wanted to attend would be in the class during the live Skype session with John. The sessions would start with questions and comments by the participants. Some of the questions were created and sent to John via email; the others were asked and discussed during the sessions.
Based on the discussions, suggestions, interactions, and feedback from the participants, John modified the Journeys, added items, clarified points, and included some of the participants' points of view in the revision of the text. The Journeys that are published in this book are the modified and finalized ones. We believe the modifications that came about through dialogue have improved and enriched the original Journeys, since the final forms do reflect the interactions and discussion by the participants.
THE PARTICIPANTS AND SOME OBSERVATIONS
The class was divided into three groups:
1. those who attended the live sessions: undergraduate and graduate students (from the U.S. and other countries) in electrical and computer engineering. In addition, we had faculty from engineering, English, rhetoric, and physics attending the seminar class;
2. those who followed the readings and activities within the campus of Iowa State University. This group included some administrative staff from the Department and the office of the Dean of Engineering, including Associate Deans. In addition, there were a number of faculty and graduate students in engineering, the sciences, English, and philosophy who were following our activities via our website; and
3. finally, interested national and international colleagues and friends, including some members of the Technological and Engineering Literacy and Philosophy of Engineering Division of ASEE and others, who also followed some of the activities via our website.
We received reflections, critiques, and ideas from many of our caring and kind colleagues and participants. They patiently helped us think and rethink the activities and discussions. The Journeys reflect the feedback from the attendees and patrons who were kind enough to communicate with us during the progress of the project, and after the completion of the Journeys as a part of ongoing critique and discussion. In particular, we are very thankful to our special colleagues and friends in the ASEE Technological and Engineering Literacy and Philosophy of Engineering Division, including Professors Alan Cheville at Bucknell University and John Krupczak at Hope College.
The class was very successful, and I have received requests to organize more seminar series of this kind. The engagement of the participants, the continuation of support, and the requests for more of this kind of activity showed that our efforts were needed and that they should be continued in many forms. When I reflect back on the class, I realize that the class became, in particular, an effective vehicle for all to reflect and think more deeply about their beliefs and perspectives in their field and their relationship to education. Finally, it helped all participants to advance their efforts to develop their own "philosophy." We began the class with the title of "Critical Reflections on Engineering and Engineering Pedagogy," and somewhere during the first half of the seminar it became "Critical Reflections on Engineering, Engineering Pedagogy, and Philosophy."
One of the more interesting observations was the reaction of the engineering participants and followers of the Journeys. They were fundamentally different from those of the other groups. Here are some of the more interesting questions put to us by the engineering group.
• "These are wonderful words: How do they help me be a better educator?"
• "Knowing all of this is fine: How could it help me do better as an engineer?"
• "If engineering is taking action, doing and designing things, how does philosophy help me do it better?"
• "This is of great value and importance. We do not have anything like that in our curriculum, and it has worked well."
• "Do we really need to change anything in our education system? It seems to work."
• "The engineering curriculum is based on skills, math, physics, and all of the engineering concepts and practice. If we engage in pedagogical and philosophical discussions, reflections, and debates, it could reduce the students' engineering knowledge base. We would then develop weak students."
• "What would industry think? Would they still hire our graduates?"
The following question summarizes the overall engineering participants' questions and concerns: "These are nice words, and great perspectives, but how can I apply them to engineering and engineering education?" In a way, the engineering team was looking for a summary and action items to help them with possible implementation.
To our surprise, the physics, math, and English participants did not have such questions. They tried to absorb, participate, and contribute. One may think physics and engineering are close, but the physics members did not really ask the same types of questions and did not show the same concerns as those reflected in the above list. Generally, the physics members were much more accepting and integrated the ideas and discussions; they were not looking for action items. Why? We need to remember that physics is usually placed in the college of sciences and liberal studies, and that this field of study was historically called natural philosophy. It only changed to physics around the second half of the 19th century. Thus, physics is likely to be closer to a philosophical perspective than engineering; the true essence of this issue and observation needs more exploration, however.
All in our team claimed that they had to read some of the Journeys, or parts of them, more than once to really see the point and the connection to their intentions. We recommend that the reader, having read the Foreword and the Preface, first read the Journeys in order to become comfortable with the style and with the way the notes are used, on the one hand, to support the argument and, on the other hand, to provide a bridge for further exploration. The Journeys are meant to make the reader think, wander, enjoy, question, and argue with the writer, as did the participants in the exercise.
A NEED; A SPECTER OF SOMETHING MORE
Upon review and discussion, John, I, and many of our colleagues believe that the experiences and insights gained by participants point toward a fundamental void in the engineering community and, in particular, the engineering education community. There seems to be a lack of dialogue, creative discussion, and philosophical examination of what engineering is. For example: "Why do we teach what we teach? What is needed? What should all engineers know?" These are questions of the utmost importance for the field and its educators. Currently, there are few forums for such discussions in the arena of engineering education. However, there seems to be a need for national and international venues for creating meaningful and visible dialogue and discussions on engineering, engineering pedagogy, and the philosophy of engineering and engineering education.
Mani Mina
Departments of Industrial Design and Electrical and Computer Engineering
Iowa State University
August 2016
Preface and Introduction
The title of this book, The Human Side of Engineering, is both borrowed from and inspired by one of the outstanding books on management, Douglas McGregor's The Human Side of Enterprise, published in 1960. Its theories X and Y continue to help us understand the behavior of individuals at work and the impact that organizations have on them (see Journey 5). While in many courses some attention is given to the human side of work, it is by and large an afterthought, for engineering is thought to be a technical and personal activity.
Journey 1 uses a model of a three-legged stool to offer an explanation of how engineering produces a technology. Engineering is seen to be a process and technology the product of that process. The base of the stool is where the process begins: it represents the mind of the engineer and the beliefs, attitudes, and values that it generates. From this mind, informed by the values of the society in which it exists, come the product designs intended to solve the problems with which it is presented. In an engineering company that information is conveyed by those concerned with marketing about products that can be changed or new products that seem to be required. The resulting design is fed to manufacturing, or the problem is sent to R & D and subsequently back to manufacturing, resulting in a technology that impacts on the economy and society. All this is done in an organization which links together all the components in order to produce the technology. That organization comprises roles, human and technical, and the task of management is to coordinate and integrate them [1]. Resulting from studies of engineers at work, the Australian engineering educator James Trevelyan concluded that four major competences required by expert engineers are technical coordination, project management, negotiation, and teaching [2], none of which have anything to do with skill in engineering science and engineering design, but everything to do with working with human beings.
Much of what Trevelyan found, Michael Youngman, Bob Oxtoby, Denis Monk, and I found in a study of engineers at work during the nineteen seventies [3]. Whereas we reported only on our research, Trevelyan provides a substantive guide to what students need to know to become expert engineers. Had it been published before these Journeys I would certainly have referenced it on several occasions. But there is a great deal in common between the findings of Trevelyan's research and ours. Primarily, as my diagram in the first journey shows, engineering is far more than a bench at which things are designed, made, and tested. We found that roles, however precisely defined, depend on interpersonal relationships for their effective functioning. This means that engineers have to have high-level interpersonal skills, skills that, so it is assumed, engineers are not known to possess to any great degree. One of the major complaints of industrialists in the UK and U.S. is that universities do not produce graduates who can communicate or work in groups [4]. Their technical abilities are not questioned.
In his search to understand engineering epistemology in the aircraft manufacturing industry, Walter Vincenti reported that engineering is a community activity in "What Engineers Know and How They Know It" [5]. This community activity is largely informal. All the elements that Trevelyan highlights are present in the few short paragraphs that Vincenti devotes to explaining the way knowledge is exchanged, structured, and built upon.
Technology impacts on everyone from the richest to the poorest. Acting as a community, engineers have an awesome power to influence society. But this can only be done if engineers understand the nature of this community. To achieve that understanding we have to understand ourselves. This book is about understanding ourselves in order to understand others and understanding others in order to understand ourselves. This is a problem that each one of us faces, engineer or not. At the same time it faces curriculum designers with a problem because the knowledge required to do this has to be drawn from a wide range of disciplines, as for example sociology, psychology, literature, economics, philosophy, and theology, and that is by no means the end, especially if it is assumed that the way to obtain this knowledge is through study of these disciplines. Yet that is what the present approaches to university education that focus on the study of subject disciplines would require.
However, it is evident that in everyday living we obtain vast quantities of knowledge that we assemble and make judgements about or discard. It is equally evident that some of those judgements are not as informed as they should be. Consider voting behavior. I suggest that the view which has recently emerged, particularly in the UK, that those who are educated are better able to make political decisions than those who have had little education is without foundation. Be that as it may, when we solve problems we generally bring knowledge from a variety of areas to bear on the problem, much of it acquired haphazardly. If we are more systematic we explore many avenues before deciding to pursue a course of action or of learning in depth. That is what children do in their early years. They explore everything. Alfred North Whitehead, the mathematician philosopher, calls this a stage of "romance" in his theory of rhythm in education [6]. Romance is necessarily one of transdisciplinarity [7] because it is a stage of exploration, a stage of discovery. He writes, "The stage of first apprehension (a stage of ferment). Education must essentially be a setting in order of a ferment already stirring in the mind: you cannot educate the mind in vacuo. In our conception of education we tend to confine it to the second stage of the cycle, namely precision. In this stage knowledge is not dominated by systematic procedure. Romantic emotion is essentially the excitement consequent on the transition from bare facts to first realisations of the import of their unexplored relationships."
So too is the final stage of generalization (synthesis) which is, "A return to romanticism with the added advantage of classified ideas and relevant technique."
Between these stages is one of precision (grammar) in which, "width of relationship is subordinated to exactness of formulation. It is the stage of grammar, the grammar of language and the grammar of science. It proceeds by forcing on the students' acceptance a given way of analysing the facts, bit by bit. New facts are added but they are the facts which fit into the analysis."
It is here that the language, which is the "style" of a particular subject, is learnt, and the interest found in the stage of romance turned into a search for expertise.
Whitehead does not expect the stage of romance to be one that is simply a collection of "scraps of information." In a lecture on the aims of education to mathematics teachers he said, "Culture is activity of thought, and receptiveness to beauty and humane feeling. Scraps of information have nothing to do with it. A merely well informed man is the most useless bore on God's earth. What we should aim at producing" are [is] persons [men] who possess both culture and expert knowledge in some special direction. Their expert knowledge will give them ground to start from, and their culture will lead them as deep as philosophy and as high as art [6, p. 1]. Education is then, "the acquisition of the art of utilisation of knowledge" [6, p. 6], and one of the functions of the stage of romance is to help the student find that "special direction." Looked at from the perspective of Whitehead's formal philosophy, engineering and technology are creative activities. The stage of "romance" is not only one of discovery but of creative exploration [8]. It is a view that fits well with what an engineer seeks to do.
The intention of these Journeys is that they should be a stage of romance. They are intended to create a debate as well as to inform. The extensive notes are designed as guides to further study and result from the debates that the Journeys caused when they were delivered. They are a bridge between romance and precision and grammar.
The goals of the stage of "romance" relate to
• the motivation of students;
• how we know and learn, and how our learning styles influence the way we learn;
• the exploration of our personal value systems;
• personal development; and
• practical experience with what is learned.
These Journeys are explorations (Mani, who organized them, would prefer "reflections") of ourselves and organizations that have the purpose of helping you and me establish who and what we are as individuals, engineers, and educators in a society that is becoming increasingly complex. The roads that I took were not always familiar and eventually they led to consideration of the "common good," and to the view that the basis of all professional study is a liberal education, which I explore in the last but one and final journeys. My answer to those who asked Mani "how will it help me to do better as an engineer?" is that good engineering is a community activity that depends on wisdom and skill in practical reasoning, as it is often called. These Journeys are essays in practical reason [9].
Notwithstanding the difficulty of summarizing short essays, I will engage in the task in the hope that it will be helpful. Journey 1 is about meaning and language. Through a brief analysis of the engineering processes involved in the making of a technology product we learn that engineers have to speak many languages. At the end of this journey you are invited to participate in an activity that is a preparation for Journey 2, which is about perception, or about the meaning that reality has for you and me. At the same time, it shows the relevance of a philosophy of engineering that seeks to answer such questions as "how and why do engineers differ from scientists and business people?"
The road widens and broadens our understanding of perception. Both Journeys 2 and 3 show that the boundaries between philosophy and psychology are often blurred. Journey 3 takes us past some of the best known illusions to the importance of personal relationships, and from there to how we handle the mass of information with which we are faced each day, and how the influence of past experience affects the way we solve problems, particularly engineering problems.
Journey 4 brings us to another blurred boundary, that between philosophy and sociology and their respective theories of knowledge. Our understanding of "how we know" and "how we learn" impact on our everyday behavior, and influence our attitudes, opinions, and values. They impact on how we learn, how we teach, how we manage, and how we are managed, and in consequence the way we organize or are organized.
The boundaries between philosophy, psychology, and sociology become almost merged when in Journey 5 we consider what it means to live in a plurality of social systems, and the demands they make on us. The focus of the journey changes to managing ourselves and others, since in the future it is more likely we will have to manage ourselves. The questions self-management presents to us are philosophical in nature, starting with "who am I?"
That question cannot be answered without reference to other persons, and to the different systems that make up the communities we inhabit. Engineering knowledge is typically a community activity that is committed to "doing." Journey 6 explores our interdependence, what it means for rights and responsibilities, and how the ideal organization, be it a university or a firm, is a learning community. Communities that persist have a common ethic.
The "good" in the title of Journey 7 is ambiguous. It could mean engineering a product that is good, a person who does this regularly being a "good" engineer. Or, it could mean being a good person, that is, one whose behavior is driven by moral principles. This journey explores the relationship between the two. When we think about making the good engineer possible, "What are our aspirations?"
Journey 8 finds an answer to the question "What are our aspirations?" in Bowen's aspirational ethic for engineers, which is grounded in Martin Buber's view of the relationship between individuals (I/Thou) and MacIntyre's virtue ethics. All engineers need to take an active role in considering the ethical implications of their work, and these cannot be divorced from their personal lives.
Journey 9 brings us face-to-face with technology and the impact that it is having on the structure of the workforce. Current models of the workforce seem no longer to apply. At the same time, the banking collapse of 2008 has raised questions about existing economic models and the nature of the firm: "What constitutes a company?" and, more profoundly, "what constitutes the common good?" Engineering students need to experience what it is to be in a community. How, within all the constraints imposed on educational institutions, can a collegiate climate be introduced and extended to the firm so as to enable permanent learning (continuous professional development)?
Journeys 10 and 11 seek an answer to this question. Journey 10 begins by doubting whether universities can claim to be learning systems when so few of their faculty know anything about learning or development. Theory X and Y are applied to teaching in engineering education, but the central focus is on the design of the curriculum for development, cognitive and personal, and on engineering curricula that have been designed for that purpose. As the structure of higher education changes and embraces life-long learning, the findings of research on adult learning will have increasing relevance. The final paragraphs argue that teaching in engineering is a professional activity and a discipline that has its own knowledge base.
Journey 12 is both a summary and an argument that engineering education is at a crossroads and that at the present time there are opportunities for major change.
It is three years since these Journeys were given and much has happened since then. In discussions with Mani Mina and Joel Claypool, the publisher, we decided that the integrity of the seminars should be retained, for which reason they have not been altered. Where it was thought new material would be valuable it has been added in a postscript to the journey, or in the notes, or both, and an additional journey has been added at the end.
John Heywood
December 2016
NOTES
[1] Whoever the individual, whatever his or her personality, they will adapt their behavior to the situation in which they find themselves. Thus, just as human organizations can be conceived of as systems, so they may also be conceived of as conglomerates of role players, for in any social system the basic unit is the role. A role is, therefore, a pattern of behavior associated with a particular position. "It carries out activities that, if the system is to achieve its goals, have to be coordinated. One activity of management is, therefore, the coordination and integration of roles. The role does not have to be a human: it could be a machine […]. Problems arise for management because a variety of individuals, each with their own value system and idiosyncracies, occupy roles in the organization. Very often personnel come into conflict with each other simply because of personality differences. Sometimes conflict is created because of the perception that individuals have of their role. Even in a bureaucracy it is not possible to define a role so exactly that there are no differences in perception about how it should be performed. A major problem for employers, indeed ourselves, is the fact that at one and the same time our goals create for us a plurality of social systems. There is not merely one role system that connects the job to other jobs in the organization for work purposes, but the career system, the peer-group system and, not least, the family system. All of these systems make demands on our energies and there is no way of escape. The ways we use to reduce these tensions and sometimes conflicts influence our performance at work for better or for worse […]. Conflict and tensions are normal consequences of living systems […]. Whenever we anticipate a role, we generate expectations of what will be expected of us in that role and very often we will have to adjust those expectations […]. The need to define roles will be evident; ambiguities in roles can cause role conflict and individuals much stress." Extracts from Heywood, J. (1989). Learning, Adaptability and Change: The Challenge for Education and Industry. London, Paul Chapman/Sage, pp. 39–47. In recent organizational research much attention is paid to networks, the structure and management of teams, etc.
[2] Trevelyan, J. (2014). The Making of an Expert Engineer. London, CRC Press/Taylor and Francis.
[3] Youngman, M. B., Oxtoby, R., Monk, J. D., and Heywood, J. (1978). Analysing Jobs. Aldershot, UK, Gower Press.
[4] Heywood, J. (2016). Assessment of Learning in Engineering Education. Hoboken, NJ, IEEE/Wiley.
[5] Vincenti, W. G. (1993). What Engineers Know and How they Know It. Analytical Studies
from Aeronautical History. Baltimore, e Johns Hopkins University. xviii
[6] Whitehead, A. N. (1950). e Aims of Education. 2nd ed. London, Benn. xviii, xix
[7] Transdisciplinary derives from the need to respond to a single complex, concrete problem
that requires the assistance of several disciplines that give a variety of viewpoints to the
solution of the problem, which is not resolvable by a single discipline but requires the synthesis of a number of solutions. This definition has its origins in a 1973 OECD document
which is summarised in (a) Heywood, J. (2005). Engineering Education. A Review of Research and Development in Curriculum and Instruction. Hoboken, NJ, Wiley/IEEE. For
a discussion of various models of interdisciplinarity see (b) Fogarty, R. (1993). Integrating
the Curriculum. Palatine, IL, IRI/Sky Publ. xviii
[8] I have translated Whitehead’s major concept of creativity to fit this argument but I think
he would have agreed. For Whitehead every concrete entity is an individualization of the
universal creative force that is his ultimate. See p. 268 of Lowe, V. (1990). Alfred North
Whitehead. The Man and his Work, Vol. II. Baltimore, The Johns Hopkins University Press.
xix
[9] Kallenberg writes “practical reasoning is the stuff of relationships both at the personal level
as well as city wide (according to Aristotle); one needed to do practical reasoning well in
order to live successfully each day.” Kallenberg argues that “morality is identical to practical
reasoning. Any act that derives from practical reasoning, whether it is telling a joke or constructing a road, is inherently moral.” Kallenberg, B. J. (2013). By Design: Ethics, Theology,
and the Practice of Engineering. Cambridge, UK, James Clarke Publishers. See also Book 6
of Aristotle, The Nicomachean Ethics (1996). Introduction by S. Watt. Wordsworth Classics. Ware, Herts, Wordsworth Editions. xix
Sternberg found among different groups of academics that their implicit theories of wisdom varied but could contribute to our understanding of wisdom. In his work on intelligence he had distinguished between academic and practical intelligence. In his balance
theory of wisdom he considers that wisdom is a special case of practical intelligence that
requires the balancing of multiple and often competing interests. He said, “wisdom is defined as the application of tacit as well as explicit knowledge mediated by values towards
the achievement of a common good through a balance among (a) intrapersonal, (b) interpersonal, and (c) extrapersonal interests, over the (a) short and (b) long terms, to achieve
a balance among (a) adaptation to existing environments, and (c) selection of new environments.” Sternberg, R. J. (2001). Why schools should teach for wisdom. The balance
theory of wisdom in educational settings. Educational Psychologist, 36, pp. 227–245. This
note is based on Bassett, C. L. (2006). Laughing at gilded butterflies: Integrating wisdom, development, and learning, in Hoare, C. (Ed.), Handbook of Adult Development and
Learning. Oxford, Oxford University Press.
Acknowledgments
John and I are very thankful for John Pritchard’s great and gracious technical help in patiently
taping, editing, and creating the sessions. In addition, John Pritchard created and managed the
website. This activity would not have been possible without the participants, to whom we express
our thanks and appreciation for their active participation. Professors John Hauptman (Physics),
Gregory Wilson (Rhetoric and English), Jennifer Lowey (English), and John Basart (Electrical
and Computer Engineering) doubted, questioned, challenged, and made many suggestions. We
are also grateful for the generous participation of doctoral students Robert Bouda, John Pritchard,
David Lastine, Mohamaduo A. Diallo, and Mirzad Mohandespour.
There were a number of our friends, associates, and colleagues who were unable to attend
the live sessions due to time conflicts and scheduling. We would like to thank them for following
our work and encouraging us as we went on. Finally, we would like to thank Vicky Thorland-Oster
for helping us to create this class, Broke Ascher for his kind suggestions, editing, and encouragement, the Engineering and Liberal Arts On-line (ELO) team for their great support, and Anthony
Moore for his encouragement and support in all aspects of this project.
Mani Mina
December 2016
As Dr. Mina has explained, these journeys were undertaken in dialogue with a group of
educators and doctoral students. The journeys were modified in places and the notes considerably
extended as a result of these dialogues. The first of the dialogues began when Dr. Mina and John
Pritchard recorded the journeys. During the course I was able to discuss what I thought was
happening and where I was going with Dr. Alan Cheville of Bucknell University with whom I
was already in conversation about such matters.
I am very grateful to them, to Mani in particular, and the course participants for making
these journeys so meaningful.
John Heywood
December 2016
J O U R N E Y 1
“It all Depends on What You
Mean by...”
During the 1940s and 50s, the “BEEB” (as the British Broadcasting Corporation is affectionately
known in Britain), broadcast a radio show known as “The Brains Trust.” During each of the 84
broadcasts that were made, a panel of four erudite personalities attempted to answer questions
that were put to them. Three of them anchored the program and the fourth place was occupied
by some well-known intellectual. Some of them regularly occupied this space. One of the three
respondents who anchored the program began his response to any question by stating that “it
all depends on what you mean by.......” a particular word or phrase. He was C. E. M. Joad, a
philosopher and psychologist who had written an excellent introduction to philosophy. His phrase
became part of everyday language usage in the British Isles: even teenagers would be heard using
it. With a laugh of course!
Many years later during an interview for a senior academic post I used the same phrase in
response to an interviewer who was asking me to comment on the philosophy of R. S. Peters.
Professor Peters was responsible for making the study of educational philosophy something that
had to be done in university departments responsible for the education of teachers. He had said
that “education was the initiation of worthwhile activity” or words to that effect [1]. Forty-five
or more years on it would be foolish to suggest that I can remember how the question was put,
but I do know that I knew little or nothing of Peters’ work and that my response was to say “it all
depends upon what you mean by education,” etc. Of course I did not get the job!
Nowadays, I appreciate that the phrase merits some discussion. For example, it revolves
around what you mean by “worthwhile activity.”
I suspect that in any group of a dozen or so people, while some would give similar answers,
others would give different answers as to what they perceived worthwhile activities to be. There
would have to be some clarification, and the development of the ability to clarify is something
that the activity of philosophy can encourage. But let us stay with the issue for a minute. Suppose
we find that the focus of some of the answers is on worthwhile activities in the classroom while
other responses refer to a range of activities from such things as gardening to going to a pop-
concert, we might argue, as an observer, that only the former are educational. But who are we
to say that no learning takes place in the latter? So if we change the meaning of education to
learning, and if we take it that more or less everything we have to do is worthwhile, by definition
we arrive at something we know to be universally true, that is, that learning takes place all the
time, contingent though it may be. Take another step and we begin to recognize that the system
of formal education is a social artifact. Finally, we find that someone else is prepared to take all
these arguments apart. That is what philosophers do. They take each other’s arguments/systems
apart, and that is how philosophy moves on but never escapes from the arguments of the past.
Joad or “Professor Joad” as he was known was being a philosopher. He showed us that many
statements require clarification if their meaning is to be understood.
Read the Wikipedia biography of Joad, or any other biography that you can call up, and
you will find it was Joad who popularized philosophy, that he was a socialist, that he liked the
ladies, that he liked mixing with the grandees, that he wrote prolifically, that near his deathbed
he returned to Christianity, but you will find little or nothing about his philosophical beliefs.
However, among the long list of his publications you will find “Critique of Logical Positivism” [2].
This suggests that he allowed for metaphysics in his thinking, which logical positivism does not.
One of those who joined him as the fourth panellist in the Brains Trust was the philosopher
who introduced the British public to “dogmatic logical-positivism,” A. J. “Freddie” Ayer. He did
this through a book called Language, Truth and Logic which was published in 1936 [3]. Ayer was
then professor of philosophy at University College London. Like Joad, he had socialist leanings
although he was not a pacifist. He, too, liked the ladies to the extent that he was married four
times. However, his philosophy was quite clear. Only scientific statements can be proved to be
either true or false and this implies a limitation on science, and therefore philosophy, since science
has to be restricted to observable aspects of nature. Metaphysical and theological statements have
no meaning, and the activities of philosophy become focused on how to replace ordinary language
with more precise and standardized equivalents. This is the reason for mentioning it here although,
as a philosophy, logical positivism is no longer in the vogue that it was. Here it is with the view
that ordinary language is imprecise that we are concerned, and that is something
most reasonable people would say of much that is spoken and written.
Wittgenstein, an engineer by education, is regarded by many as the greatest philosopher of
the 20th century. In his first major study (the “Tractatus”) he took a similar position to Ayer [4].
However, in later years that position was modified [5]. Nevertheless, the logical-positivists drew
much support from Wittgenstein’s thesis that only propositions in the natural sciences are true
and that, moreover, it is impossible to say anything meaningful about ethics, aesthetics, and metaphysics. Thus, the clarification of meaning, which is what Wittgenstein considered to be the role
of philosophy, is confined to natural science.
Although the average member of the public, and for the most part that is you and I, would
not want to engage in the abstract conversations of philosophers on language, some things have
trickled down into the public arena. For example, we have become increasingly aware of the need
to clarify meaning: we know that if the questions we set in a public examination are unclear there
is the possibility that we will be taken to court. More pertinently, we know that if an instruction we
give to a technician is misunderstood and leads to an accident, we are ultimately responsible for
what happened. So we need to check that our instructions are understood and not misunderstood.
Nowhere does the problem of meaning raise its ugly head more than in the interpretation of statistics,
particularly those to be found in newspapers.
Since the year 2000 engineering educators in the U.S. have been required by ABET to
ensure that the programs they teach will achieve certain specified outcomes. Before they were
introduced in the year 2000 engineering educators were able to attend meetings that clarified
the meaning of these outcomes. Two engineering educators, Yokomoto and Bostwick, argued
among other things that “secondary meanings of some words are sometimes used, such as using
the term “criteria” to describe the level of performance that students must achieve and “outcomes”
to describe the learning behaviors students must demonstrate” [6]. A more common definition of
“outcome” is “result” or “consequence,” and anyone attaching that meaning to the word will surely
become confused in any discussion about writing measurable outcomes. Yokomoto and Bostwick
said that the aims listed by ABET (Exhibit 1.1) were considered to be too broad to be assessed
directly, and in the tradition of The Taxonomy of Educational Objectives [7] they recommended that
those aims should be broken down into smaller, more measurable units. The essence of their argument was that accrediting agencies should explain the terms used, and use them consistently, and
to this end they made a distinction between course outcomes and course instructional objectives.
Again such distinctions are debatable.
Exhibit 1.1: The list of program outcomes in Section II of Engineering Criteria 2000. The Accreditation Board for Engineering and Technology (ABET).
It is easy to fall into the trap of making ambiguous statements. For example, recently I wrote
in a chapter of a book a modification of a statement that I had written in a paper in 1986 [8].
[Exhibit 1.1] Engineering programs must demonstrate that their graduates have:
(a) an ability to apply knowledge of mathematics, science, and engineering;
(b) an ability to design and conduct experiments, as well as to analyze and interpret data;
(c) an ability to design a system, component, or process to meet desired needs;
(d) an ability to function in multi-disciplinary teams;
(e) an ability to identify, formulate, and solve engineering problems;
(f) an understanding of professional and ethical responsibility;
(g) an ability to communicate effectively;
(h) the broad education necessary to understand the impact of engineering solutions in a global/social context;
(i) a recognition of the need for and an ability to engage in life-long learning;
(j) a knowledge of contemporary issues; and
(k) an ability to use the techniques, skills, and modern engineering tools necessary for engineering practice.
It was a definition of technology in which I now substituted “engineering” for “technology.” The
revised statement, which related to the model shown in Exhibit 1.2, reads,
“Engineering is the art and science of making things that meet the needs of self and
society. It is both an activity and a system that serves both individuals and society that
creates new problems for both. Therefore, engineering literacy is necessarily interdisciplinary and a liberal study. Engineering literacy is about the process of engineering
whereas technological literacy is about the products of engineering and their impact
on society.” It was the phrase in italics that bothered the reviewer. He or she wrote:
“Meaning engineering creates new problems for both? Or the fact that it serves both
individuals and society creates problems?” [9] The ambiguity is immediately apparent.
I was intending the former but isn’t the second point valid?
In the original statement the word technology was used because those of us who wanted
to include some engineering science in the middle and high school curriculum in England failed
to get our idea established. Governments had begun to replace the industrial arts (woodwork and
metalwork) with an approach that was based on design-and-make projects and a syllabus based
on a black-box systems approach. Since then, as members of the Technological Literacy Division
of the American Society for Engineering Education have pointed out, I have allowed the two
literacies to become interchangeable when there are discernible differences. Some argued that I
was confusing the issue, so it became evident that there was a need for clarification and definition.
At the same time the Division also questioned the meaning of the term technology in its title
in relation to its aims. A group led by John Krupczak gave separate and different definitions of
engineering and technological literacy, which has caused the division to change its name so as to
embrace both engineering and technological literacy. This is why summaries of their definitions
are incorporated in the last sentence of the statement that has been considered [10].
Both statements were written as an introduction to the diagram which shows a model of
the interrelationships between the areas of knowledge and the achievement of a technological
artefact for society and the economy. First, both engineering and technology have to function
within the constraints, legal and otherwise, imposed by society and the environment.
The base of the model represents the person. The mind that supports the whole activity
is the source of our values, beliefs, and technical understanding; it is the source of our attitudes
and opinions in the different social systems in which we find ourselves; it is the driver of our
actions. That is how this dimension of the model has been presented on several occasions, but it
is also the source of our ideas and creativity. Understanding how our beliefs and values (moral
and otherwise) are formed is important to our conduct as engineers and individuals, but it belongs
primarily to the domains of philosophy and theology, which are different languages.
The three legs of the stool represent the technological aspects of engineering: research and
development; data acquisition; information technology; design; manufacturing data and production; marketing data and sales. The first two legs are the domains of engineering science, design,
and manufacturing. The third leg is the knowledge domain of business, legal, and economic
Exhibit 1.2: A model of the engineering processes engaged in the production of a technology (technological product) [8].
understanding. Supporting the legs are the trusses that represent individuals and the way the
organization is structured. These are the domains of organizational behavior and behavior in
organizations. The seat represents the economy and society within which the product is placed.
My purpose in introducing this limited discussion of the model is to show all the different
languages that an engineer or a manager, indeed each participant, has to learn if they are to
understand the “meanings” that each person brings to the activity of engineering [12].
Joad’s rhetoric was not idle. I make no apology for greatly simplifying the philosophical
debate about logical positivism and more generally the analytic tradition in Britain because it
caused the public to understand that much care should be taken to ensure that the “meanings” they
wished to convey are understood in the way they wished them to be understood [12]. Wittgenstein
did not consider there was any such thing as “pure” thought. It is the language we possess that
enables us to think. It is a way of life. All language is shared but that is part of another journey.
This short journey into the meaning of meaning has been taken to illustrate the kind of
problem that philosophers tackle. It was inspired by a popular British philosopher’s persistence in a
series of radio broadcasts in querying the meaning of words and statements. En route brief excursions
were made into the British analytic movement and logical positivism. No attempt was made
to define philosophy. The reader was allowed to determine that philosophers consider profound
questions and sometimes these lead to “isms” like American pragmatism [13]. Indeed, one way to
learn philosophy is to consider its “isms.” Since these “isms” are often associated with particular
philosophers, another way to learn philosophy is to learn about the great philosophers in historical
what we are as individuals and engineers within a society that is becoming increasingly complex.
Our next journey is into the world of reality where we continually cross over the boundary between
philosophy and psychology.
NOTES
[1] Peters, R. S. (1964). Education as Initiation. London, Evans. 1
[2] Joad, C. E. M. (1960). Critique of Logical Positivism. London, Gollancz.
[3] Ayer, A. J. (1936 rep 2001). Language, Truth and Logic. Harmondsworth, Penguin. 2
[4] Wittgenstein, L. (1922). Tractatus Logico-Philosophicus. Trans. C. K. Ogden. London,
Routledge and Kegan Paul. 2
[5] Wittgenstein, L. (1953). Philosophical Investigations. Trans. G. E. M. Anscombe. Oxford,
Blackwell. 2
[6] Yokomoto, C. F. and W. D. Bostwick. (1999). Modeling the process of writing measurable outcomes for Ec 2000. Proceedings Frontiers in Education Conference 2, 11b1, 18–22.
Piscataway, NJ, IEEE. 3
[7] Bloom, B. (Ed.). (1956). The Taxonomy of Educational Objectives. Handbook 1 Cognitive
Domain. New York, David McKay. 3
The Taxonomy is the most widely referenced educational text of all time. Many engineering educators have used it, and it is claimed to have had a significant influence on education
worldwide. It provides a hierarchical framework for categorizing educational objectives for
use in test and curriculum design. The hierarchy is made up of six domains and in the text
these are accompanied by sub-categories. These are expressed in terms of the behaviors
a student would demonstrate if they possessed the skill required. Thus, a student has the
“ability to produce a unique communication” is one of the categories of the domain of synthesis. The six domains are Knowledge, Comprehension, Application, Analysis, Synthesis,
and Evaluation. It was open to much criticism (see Chapter 2 of Heywood, J. (2005). Engineering Education. Research and Development in Curriculum and Instruction. Hoboken, NJ,
Wiley/IEEE).
The authors of the Taxonomy made it clear that their descriptors were of the “outcomes”
of education. The influence of this approach can be seen in the ABET Ec 2000 list of
outcomes, and the outcomes for the different levels of the Bologna agreement. But those
authorities did not use the Taxonomy, and they preferred, as have very many other educational authorities, to use the term “outcome” rather than objective.
The taxonomy was revised in 2001. The Knowledge Domain was sub-divided into four:
Factual knowledge, Conceptual knowledge, Procedural knowledge, and Meta-cognitive
knowledge. Six cognitive process dimensions were included: Remember, Understand, Apply, Analyze, Evaluate, and Create.
Anderson, L. W. et al. (Eds.) (2001). A Taxonomy for Learning, Teaching and Assessing. A
Revision of Bloom’s Taxonomy of Educational Objectives. New York, Addison Wesley Longman.
[8] Heywood, J. (1986). Toward technological literacy in Ireland: An opportunity for an inclusive approach in Heywood, J. and P. Matthews (Eds). Technology, Society and the School
Curriculum: Practice and Theory in Europe. Manchester, Roundthorn. 3, 5
[9] Personal communication, June 2013. 4
[10] Krupczak, J. et al. (2012). Defining engineering and technological literacy. Proceedings Annual Conference of the American Society for Engineering Education. 4, 7
[11] Bucciarelli has made a detailed study of language usage in design teams. He points out that
each person in the team lives in his/her own object world which has its own object world
language. “An engineer’s will be rooted in a particular scientific paradigm which serves as
a basis for conjecture, analysis, testing, and designing within that world” [10]. Among the
examples that Bucciarelli gives are the differences between the languages of structural and
electronics engineers. As the model indicates, a design team will have to cope with other
world languages outside of those in engineering and Bucciarelli notes that at the beginning
of a design the perceptions that each of the members of the team have of the design will
differ, and the final design will be the result of much negotiation. In this respect Bucciarelli
argues that design is a social enterprise “that at its core is a conversation spoken in a language
of its own invention.” Design, like language, is a social process.
Bucciarelli, L. L. (2003). Engineering Philosophy. Delft, Netherlands, Delft University
Press. Chapter 2.
For a detailed examination of Bucciarelli’s ideas in the context of engineering ethics, see
Kallenberg, B. J. (2013). By Design: Ethics, Theology and the Practice of Engineering. Cambridge, UK, James Clarke.
[12] Magee, B. (2001) writes in a section of his book called “Common Sense” that “meanwhile
(in the 1930’s) in Britain a near-contemporary and lifelong friend of [Bertrand] Russell’s
called G. E. Moore had been pursuing the analysis of statements in ordinary language using
neither science nor technical logic as his yardstick but common sense [...] into a mode of
philosophy that was eventually to displace Logical Positivism. It became known as “linguistic philosophy” or “linguistic analysis,” and its criterion was the ordinary use of language.
The Logical Positivists had been mistaken, said the linguistic analysts, in trying to force the
straitjacket of scientific standards on all forms of utterance. Umpteen different sorts of
spontaneous discourse go to make up human life, and each one has its own logic. Philosophical problems arise when a form of utterance appropriate to one mode of discourse is
mistakenly used in the wrong context” [pp. 200–201]. Magee writes of Wittgenstein that in
his later philosophy “linguistic analysis achieved its ultimate degree of refinement” [p. 202].
5, 6
Magee, B. (2001). e Story of Philosophy. London, Dorling Kindersley.
[13] The term “pragmatic” is in common use. In that usage it is the judgment (justification) “of
any assertion solely by its practical bearing on human interests” [Oxford
Dictionary]. Alternatively, as a philosophy, it is the “theory that a proposition is true if
holding it to be so is practically successful or advantageous” [Richard Rorty in the Penguin
Dictionary of Philosophy, 2nd ed., 2005]. 6
The principal American pragmatist philosophers are John Dewey, William James, and
Charles Sanders Peirce. Biographies of Dewey and James will be found in the Penguin
Dictionary of Philosophy (2005) 2nd ed., London, Penguin. A short biography of Peirce,
who was the first of the pragmatists, will be found in Collinson, D. (1987). Fifty Major
Philosophers. A Reference Guide. London, Routledge. He focused on the clarification of
meaning and his best known paper is called “How to Make our Ideas Clear” (Popular Science Monthly, pp. 286–302, January 1878). The relevance of pragmatism, in particular the
thinking of Dewey, to engineering can be found in Omidvar, I. and M. Mina (2012). Imagining an undergraduate curriculum based on the educational philosophy of John Dewey.
Proceedings Frontiers in Education Conference, 256–257. Piscataway, NJ, IEEE. Dewey,
following Peirce, argued that learning was accomplished by “doing” since knowledge is an
activity. He believed that when we met a problem its solution was found by a mental process that we would recognize today in a variety of problem solving paradigms including
the process of design.
AN ACTIVITY BETWEEN JOURNEYS
Before the next journey please look at a tap slowly dripping water into a bowl. Write a description
of what you see as you watch the water coming out of the tap and joining the water in the bowl
below.
A kitchen tap dripping into a washing-up bowl is likely to be suitable, but you may choose
any convenient tap and bowl or basin. e tap should be set to drip roughly once a second.
You will probably want 300 words for your description.
When you have written your description please answer the following questions.
1. I wrote my description
(a) while I was observing the tap.
(b) after watching the tap, starting to write about ........................... minutes/hours after
finishing watching the tap.
(c) from my recollections (without making special observations for the present exercise)
(i) of taps in general.
(ii) with one particular tap in mind.
2. The tap and basin which I described
(a) are specified in the description which I have written.
(b) I did not specify. They were in fact .............................................................. .
This exercise was devised by the late Dr. Ronald Stansfield of City University London.
J O U R N E Y 2
Thinking about Thinking
One of the best kept secrets in education, let alone higher or engineering education, is how we
learn. Very occasionally freshmen are exposed to courses in learning how to learn. By and large,
however, it is assumed that we know how to learn by some kind of in-built intuition. A few tutors
have argued that the more we reflect on our own learning, an act that they call meta-cognition,
the more likely we will be to enhance the skill that is learning [1].
John Pritchard’s response to the dripping tap activity shows how it can cause reflective
thinking (Exhibit 2.1). Ask yourself how easy it would be for you to write a completely different
response to that activity. You will find among any group of individuals who have completed the
exercise responses that range from accounts of the physics involved to essays of imagination and,
perhaps, the occasional piece of poetry. This is not to say that John Pritchard’s view is not imaginative.
It clearly is. Neither does it consider all the physics that is possible. Each view only provides a
limited picture of the dripping tap. If we are to obtain the “grand view” we have to consider each
of the views presented by a mix of participants in relation to one another.
Let us stop contemplating the tap and join John Henry Newman, a 19th century British
sage and Cardinal, now a Beatus. In his renowned lectures on “The Idea of a University” he asks
us to “contemplate man himself as our object of contemplation; then at once we shall find we
can view him in a variety of relations; and according to those relations are the sciences of which
he is the subject-matter, and according to our acquaintance with them is our possession of true
knowledge of him. We may view him in relation to the material elements of his body, or to his
mental constitution, or to his household and family, or to the community in which he lives, or to
the being who made him; and in consequence we treat him, respectively, as physiologists, or as
moral philosophers, or as writers of economics, or of politics, or as theologians. When we think
of him in all these relations together, or as the subject at once of all the sciences I have named,
then we may be said to reach unto and rest in the idea of man as an object or external fact, similar
to that which the eye takes from the outward form [...] And if there be one relation about which
we know nothing at all except that it exists, then is our knowledge of him, confessedly and to our
own consciousness, deficient and partial [...]” [2, p. 47 ff ].
Dwight Culler, an American commentator on Newman, paraphrased his statement thus,
“what is true of man in general would also be true of any portion of reality however minute. If we
wished to know a single material object—for example [Canterbury Cathedral], (one of the great
medieval Gothic cathedrals of England)—to know it thoroughly we should have to make it the
focus of universal science. For the science of architecture would speak only of its artistic form,
Exhibit 2.1: John Pritchard’s experience of the dripping tap exercise.
engineering of its stresses and strains, geology of the nature of its stones, chemistry and physics
of the ultimate constitution of its matter, history of its past, and literature of the meaning which
it had in the culture of the people. What each one of these sciences would say would be perfectly
true in its own idea, but it would not give us a true picture of [Canterbury Cathedral]. For that
all the sciences would have to be recombined [...]” [3, p. 182].
[Exhibit 2.1] Dripping Water

In this exercise, I focused on the sound of dripping water for 5 min and reflected on the experience immediately after (taking notes). From a bird’s eye view of my thought process, I focused specifically on the following thoughts, in chronological order:

1. The literal visualization of the dripping system in sync with the sound I heard.
2. A 2-D transformation of the sound.
3. A 3-D transformation of the sound.

In the first thought, I pictured, in time, the growing mass of water beginning to seep from the spout until its weight overcame the surface tension along the spout’s metallic rim. I then intended to match the imagined droplet’s impact on the water to the sound I heard. However, I skipped imagining the droplet itself in the process of free fall. Maybe the brief moment of silence caused this.

After some time, I began to focus only on the sound. I first pictured the sound as a single trace, similar to the way a heartbeat signal would look. This is what I now call a 2-D transformation of the sound. I think the shape I imagined was a result of the secondary drip heard after the initial impact. I noticed an inconsistency of the sound type of the successive impacts. This confused me since we set up a more or less steady state system (predetermined flow and water level). I then realized that the droplets created ripples in the pool below, causing the droplet to impact the pool’s waves at different surface angles. This would lead to different sounds. I decided that this observation draws on life’s events. Meaning, it seems that no matter how perfect we plan, the unexpected always seems to find its way into our lives.

My last thought attempted to expand the sound into what I now call a 3-D transformation of the sound. Hearing the secondary impact led me to visualize a small hardened ball ricocheting off of two transverse surfaces. Maybe this is similar to a coated Ping-Pong ball bouncing off of a table and hitting a paddle.

After writing this, I observed that no matter where my thoughts about the droplets start, I always seem to transform them into a visualization of sorts and provide technical details about the process of transformation. This may be a result of my college education, or maybe the accumulation of my experiences in all realms of life. Nonetheless, this activity has allowed me to further explore how I think.
Exhibit 2.2: Mani Mina’s response to the dripping tap exercise.
John Henry Newman’s statement came from the third discourse that he gave when he
founded the Catholic University of Ireland in 1851/2. Today it would doubtless be presented in
gender-neutral terms, but it remains the epistemology that underpins liberal education, not, I hasten
to add, general education. We may regard general education as a step toward liberal education
in that, certainly as in the American scene, those who study the liberal arts do so in a variety
[Exhibit 2.2] I did not start the tap, I did not follow the directions of the assignment. I did not do what I was supposed to do, but…. The tap found me. It found me while intruding into the moments of confusion. I woke up, things were quiet and there was nothing to think about, except, all the things that I need to do, all the things that I have not done well, and all the problems that I need to address….. Things are so confusing when you are half-asleep and are sort of dreaming about issues.

And right there, between the dark quiet time-space continuum which was the reality of that moment and was accented with confusing circular thoughts, the tap found me. It was pushing its existence into my continuum; I heard it and had to react. It was something beckoning me to let go of the unending thoughts and I had to address an immediate task.

It could be a rodent, but the sound had a consistent periodic life to it. My searches and the sounds that I made did not change its persistence. It continued regardless of anything around it. It did not care about me, so why do I care about it?

Someone’s action, for not torqueing the screw to stop the water, has made an ongoing dripping tap. A wasteful existence that did not care about anyone, and time was of no value to it. I thought that I need to envy that. The circling thoughts wrapped themselves around and put the tap in the center. Images changed but were much more direct, focused, and as usual wasteful.

I thought about time lost, resources wasted, lessons that were not learned, and lives that kept going, while being ignored….

The drips helped me find memories that were lost, and helped me dream of a different future or possible futures. They, the drips that did not care about anything, made the floods, the rivers, and the seas. Small items, particles, interactions that we take for granted do make a difference and to most of us they are not interesting.

Then I dreamed again, into the night, how do we know what is interesting, what is important, and what will make the most difference to us, to the world, and to others….

But life is meant to be experienced; meanings come out of our interactions, thoughts, learnings, playing, and getting confused. Mistakes are an important part of our learning, and wastefulness is and can be relative.

Here we are humans with our subtle emotional delicacy, mental toughness, and imaginations that are hoping to reach the farthest stars. I wondered how many did spend sleepless nights thinking about these; I know that I am not alone and the tap does not care about me.
of subjects. But it is only liberal when the relations between them are considered in relation to
some overarching concept such as “man” or more appropriately the “person,” or perhaps better
still “man and woman.” That is roughly what Newman meant by universal knowledge. It is the
recombination of that knowledge which is the object of university education. Through it the
student seeks a true and balanced picture of reality and so the mind is enlarged. Liberal education
is distinguished from general education in that it is an activity of synthesis.
Many commentators object that such an education is not possible in the complex society
in which we live. Those who continue to support the idea of a liberal education argue that only
through such an education can an individual learn to cope with the complexity of life. In the con-
text of the diagram of the activities of engineering and technology presented in the last journey
surely we should focus on what the engineer needs to know about man (woman)? Why answers
to this question have significance is illustrated by IBM’s approach to the recruitment for one of
its research programs in 2006 that was reported in IEEE Spectrum (December 2006, p. 6; Exhibit 2.3). The article draws attention to the fact that in the U.S. and Europe 83% of jobs are filled
by barbers, teachers, doctors, lawyers, closet reorganizers, and their like, but their productivity is
below that of those working in agriculture and industry. IBM’s response, along with that of other
management experts, was to devote more research effort to “figuring out how people think, work and think
Exhibit 2.3: Extracts from the article in the regular section Spectral Lines, “IBM’s New Motto: Think…
About how others think.” IEEE Spectrum, December 2006, p. 6.
[Exhibit 2.3] Technology gave its first big boost to productivity on the farm, its next on the factory floor. Now comes the hard part: services, in which it is rarely obvious how to rationalise our work […] How can we help a barber cut more hair?

Barbers, teachers, doctors, lawyers, closet reorganizers and their like fill 83 per cent of all jobs in the United States, and nearly as many in Europe, so it matters that their productivity lags behind that of their brethren in industry and agriculture.

Even engineers, with their calculators and CAD/CAM programs, do not always outwork professional forebears who had slide rules and drawing tables.

Many management experts say we should devote more of our research efforts to figuring out how people think, work and think about their work. The biggest company betting on this approach is IBM, ranked 10th in R&D spending in a list compiled for IEEE Spectrum by Standard and Poor’s.

Senior Editor Harry Goldstein and Ron Hira of the Rochester Institute of Technology, in New York state, took a close look at Big Blue’s effort. They found that it now devotes a quarter of its R&D budget to services, up from zero three years ago. It is hiring so many anthropologists and economists that it has created new, non-engineering titles for them, together with a new academic discipline called “services, science, management and engineering”. The company is trying to convince leading universities to offer courses in it.
about their work.” Harry Goldstein and Ron Hira found that IBM now “devotes a quarter of its
budget to services, up from practically zero three years ago. It is hiring so many anthropologists,
sociologists and economists that it has created new non-engineering titles for them together with
a new academic discipline called ‘services sciences, management and engineering.’ The company
is trying to convince leading universities to offer courses in it.” Clearly, IBM does not think much
of the engineer’s ability to think him/herself into someone else’s shoes. Yet historians have to think
themselves into other people’s shoes in order for them to be able to comprehend a particular set of
circumstances at some time in the past. Given that IBM has decided to make use of persons from
other disciplines, why is it that engineers should understand the ways of thinking that persons
have in other occupations? The most elementary and possibly the most important answer is that
it enables communication, but IBM seems to assume that it also helps design.
But this article presupposes that engineers cannot ask the questions that will enable them
to understand how other people think and work. Is that true? Could/should engineers undertake
the task that the “outsiders” have been asked to do? Is it true that there are ways of thinking
particular to a specific activity of work? Can we put ourselves inside the mind of another? What
differentiates other jobs from engineering? [4]. Answers to these questions have a bearing on the
education engineers receive. For example, the view that engineers should understand that there
are different modes of thinking requires some provision in the curriculum which is unlikely to
be achieved without their participation in activities that require a mix of students from other
disciplines. The dripping tap is an example where the perceptions of engineering students per
se, while yielding the possibility of reflective thought (as happened in the case of the example in
Exhibit 2.1), do not reveal the perceiving, thinking, and communication processes that exist in
a community. That can only be achieved when students in disciplines “distant” from science and
engineering show their thought processes to engineering students. To illustrate this point I have
put in Exhibit 2.4 a description of a course for freshman students of English
that had the intention of training students to cope with unstructured situations.
(For other details of this course see note 5.)
I submit that not only would engineering students have benefited from participation in
this activity but the students of English would also have benefited from the participation of the
engineers. By itself such a project for a particular group of students contributes to their general,
not liberal, education. It becomes a liberal education (as defined by Newman) when a mix of
students from different disciplines are its participants. I acknowledge that there are some engi-
neering courses in which students from the humanities and social sciences participate in activities
(projects) with engineers that may achieve such goals. You may contend that the dripping tap
activity would be sufficient to bring about the questions desired.
In the foundation years of the University of Lancaster students taking a single honors degree
in the arts and the humanities were required in their second year to pursue a course in one of
the sciences. It accounted for one-ninth of the honors degree program. The courses were devised
by lecturers who had a specific interest in the subject. The Vice-Chancellor (President) and those
Exhibit 2.4: Extract from Roller, D. R., Giardina, R., Herman, G. and G. Woditsch (1972). The first
year of the first little college. Journal of Higher Education, 43(5), 337 [5].
who created the idea believed that as between one subject and another there were different ways
of thinking and that students in the arts and the humanities should be exposed to the ways of
thinking in a science. This is no different from what is being proposed here for engineers. The person
in charge of the physics course, Hugh Montagu Pollock, asked me if I would evaluate the course
and for this purpose I became a participant observer. We wanted to see if students came to an
understanding of the differences between their major subject and physics. So in a compulsory
essay we asked the following question:
Distinguish between the terms “mistake,” “discrepancy,” “uncertainty,” “systematic
error” and “random error” as applied to the experimental testing of a hypothesis. Com-
pare the usefulness of the concept of error as used in physics with that of the errors
occurring in the study of your major subject. About 1,000 words plus diagrams (i.e.,
study as used in the sense of knowledge rather than as study for an examination).
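For readers who want a concrete handle on the distinction the question draws, the following short sketch illustrates it in code. It is a minimal illustration, not part of the original course material, and the measurement scenario and numbers are invented: repeated readings of a rod of known length are simulated with a constant calibration offset (a systematic error) and normally distributed scatter (random error), and averaging is shown to reduce only the latter.

import random
import statistics

# Hypothetical illustration: repeated measurement of a rod whose true length
# is 100.0 mm, using an instrument with a constant calibration offset
# (systematic error) and normally distributed scatter (random error).
TRUE_LENGTH_MM = 100.0
SYSTEMATIC_OFFSET_MM = 0.8   # miscalibration: every reading is biased high
RANDOM_SCATTER_SD_MM = 0.5   # standard deviation of the random error

def take_reading() -> float:
    """Return one simulated reading of the rod."""
    return TRUE_LENGTH_MM + SYSTEMATIC_OFFSET_MM + random.gauss(0.0, RANDOM_SCATTER_SD_MM)

readings = [take_reading() for _ in range(50)]
mean = statistics.mean(readings)
spread = statistics.stdev(readings)

# Averaging many readings shrinks the effect of the random error, but the mean
# remains displaced from the true value by the systematic offset.
print(f"mean of 50 readings : {mean:.2f} mm")
print(f"sample std deviation: {spread:.2f} mm (an estimate of the random error)")
print(f"residual bias       : {mean - TRUE_LENGTH_MM:.2f} mm (close to the systematic offset)")

A "mistake" or a "discrepancy" is not modeled here; the point of the sketch is only that no amount of repetition removes a systematic error, which is one of the distinctions the essay question was probing.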
If the course had been in engineering, among the concepts that might have been included
would be “failure” and “risk.” This approach suggests that one way we can find out about the different
approaches to subjects is to see how the participants respond to such concepts as “uncertainty,”
“risk,” and “failure.” What, for example, constitutes evidence in engineering, law, and, say, history?
Exhibit 2.5 is an example of the handling of evidence by lawyers and engineers. It was suggested
by a distinguished American engineering educator, Thomas T. Woodson.
To conclude with another interesting question raised by the IBM development: IBM has
created a new academic discipline for these non-engineering personnel. They call it “services sciences, management and engineering.” Could this be Engineering and Technological Literacy in
another guise? It is to answer questions such as these that a philosophy of engineering is necessary. So how, for example, do engineers differ from scientists on the one hand and business
[Exhibit 2.4] The second course had the aim of focusing on what each student was now thinking, and on what they had learned in the last twenty-odd years about his (her) own perceiving, thinking and communication processes. This was achieved first by exposing the students to a range of experiences, and then analysing their processes of thinking about these experiences in relation to such questions as “can different or personal ways of perception be communicated? How do different cultural backgrounds and different group situations affect people’s perceptions? To what extent does the English language restrict our abilities to think, or even imagine other ways of seeing the world?” The students were then asked to devise exercises. Anything was allowed so long as it involved creative effort. Many students found the initiative and responsibility with which they had to cope more than they could handle. The project also broke up the group experience which, while not a panacea, is consonant with the broadest aims of liberal education in that it enables the student to gain comprehension and control over his present capabilities and some perspective on his present value.
Exhibit 2.5: Example of evidence from engineering and legal viewpoints from Woodson, T. T. (1966)
Introduction to Engineering Design. p. 46. New York, McGraw Hill.
people (managers) on the other? That is our next problem; before that, however, we should
take time to investigate how we learn.
POSTSCRIPT
Since the manuscript was submitted for printing I have tried to understand the forces at work in
the election for a President of the United States. One of my friends referred me to a book that
had just been published with the title Strangers in their Own Land with the sub-title “Anger and
Mourning on the American Right.” The publisher had chosen to add on the dust jacket A Journey
to the Heart of Our Political Divide (New York, The New Press). Journey it is, taking some five years.
It was undertaken by Arlie Russell Hochschild, a social anthropologist from U.C. Berkeley. She
was concerned that the nation was becoming politically divided and that the split was becoming
[Exhibit 2.5]
Nature of the evidence | Legal (evidence that Mr. A contracted to perform X for Mr. B) | Engineering (evidence that a motor design A has a 10,000-hr bearing life under given conditions)
Proof inherent | The original contract itself | Engineer witnesses tests; examines the parts (being familiar with motors)*
Proof available | A photocopy of the contract (which is not immediately available) | Engineer reviews data of tests run by his employee*
Proof circumstantial | X is being and has been regularly performed by Mr. A for Mr. B | Motor design A has been sold elsewhere for similar 10,000-hr duty
Expert testimony | Expert testified that Mr. A has accomplished X | Consultant in motor field states he knows motor design A has passed 10,000-hr tests
Eyewitness testimony | Eyewitness Mr. C testifies seeing Mr. A perform | Non-expert, who observed the tests at a distance, reports the results
Hearsay | Mr. D testifies he heard Mr. A performed X | Man on next project heard that motor design A passed
increasingly hostile. While she thought she had some understanding of the liberal left camp she
wanted to know what was happening on the right and, in particular, “how life feels to people on
the right—that is in the emotion that underlies politics. To understand their emotions, I had to
imagine myself into their shoes.”
If an engineer is to understand how others think, as IBM seems to have wanted them to do,
then it will not be sufficient to understand their cognitive processes, for their “being” depends as
much on their emotions as anything else. Engineers have to understand people, yet as Trevelyan
says, you will be lucky if you can find the study of people as a core part of the engineering curriculum. “Strangers in their Own Land” is as good an entry point as any. There are two points that relate
to our journey. The first is that it is not easy to understand people beyond the micro-culture in
which we live. The second is the concept of the “empathy wall” which we have to overcome if we
are to know people from the “inside.” “An empathy wall is an obstacle to the deep understand-
ing of another person, one that can make us feel indifferent to those who hold hostile beliefs, or
whose childhood is rooted in different circumstances” (p. 5). Hochschild found it quite difficult to
overcome the “empathy wall” between her and the people of Louisiana she wished to understand
from the inside. Few people will not have come across such walls in their own lives.
NOTES
[1] This journey and Journey 3 are based on parts of Ch. 2, Perception and learning, in Heywood, J. (1989). Learning, Adaptability and Change. The Challenge for Education and Industry.
London, Paul Chapman/Sage. In Managing and Leading Schools as Learning Organizations;
Adaptability and Change (2009, Dublin, Original Writing for the National Association of
Principals and Deputies) it was revised and extended to include a model of the process of
perception. This took into account many factors that limit perception, including other-person
characteristics, organization, and personality characteristics. These were also shown to relate to
motivation. The five interdependent processes were considered under the following headings: the acquisition of information: learning, categorization, memory, and the influence of
personality. Searching and sampling: our limited capacity. Expectancy and expectations:
expectancy and first impressions; expectancy and cognitive dissonance. The acquisition of
information: attribution, expectancy, and gossip. The acquisition of information: attention;
receiving; trial and check. Consolidation: the problem of experience. 11
[2] Newman, J. H. (1852, 1923 Impression). The Idea of a University Defined and Illustrated.
London, Longmans, Green. 11
[3] Culler, A. D. (1955). The Imperial Intellect. A Study of Newman’s Educational Ideal.
New Haven, CT, Yale University Press. In the original text Culler used Westminster Abbey,
where the Kings and Queens of England are crowned. I used Canterbury Cathedral as it
is better known to me, being in the city where I went to school. 12
[4] Heywood, J. (2007). “Think...about how others think.” Liberal education and engineering.
IEEE/ASEE Proceedings of the Frontiers in Education Conference, T3C, 2 to 24. Piscataway, NJ. 15
[5] Although written many years ago this example seems to remain pertinent. I cited it in
a section on meeting the goals of perceptual learning in the curriculum in Assessment in
Higher Education (1989, 2nd ed., Chichester, Wiley). The text with which it is associated read (pp. 184–185): “The authors of the Images course at Little College (Roller et al.,
1972) believed, like us, that the capacities of freshmen were unused and that the freshman
curriculum had a stifling effect. After seminars with the students they decided to initiate a course on the processes by which man conceptualized his universe and in turn both
shapes his experiences and is himself shaped by his images or concepts. We decided to call
the course ‘The Making and Manipulation of Images,’ a title inspired by one of Kenneth
Boulding’s books on our list of core readings. Rather than organize our course around a
topical division of materials or disciplines, we developed a generic model of the image-
building process and adopted that as a broad outline for the course. In essence, the model
and outline were a simplified version of the ‘scientific method’ intentionally stated in terms
so as to be universally applicable viz.” 16
Roller, D. R., Giardina, R., Herman, G. and G. Woditsch (1972). The first year of the first
little college. Journal of Higher Education, 43(5), 337.
(1) Encounter. Meeting with new and unexplained phenomena.
(2) Articulation. The formulation of an explanation tentatively held until tested.
(3) Conflict. Discovering the inadequacy of the image to explain the phenomena and/or
discovering in the implications of a newly validated image a conflict with a previously held
image.
(4) Internalization. Acceptance (whether by an individual or a society) of the new image
despite conflict with previously held and validated images.
(5) Rule and reign of images. Guiding thought and conflict to establish and elaborate image
systems, sometimes even when some portion of the image system is in conflict with or fails
to explain newly encountered phenomena.
J O U R N E Y 3
Things are not Always What
They Seem
Sitting in a McDonald’s in Denver many years ago I was reminded of what I was going to do at
the beginning of a management course that I had to give when I arrived home. McDonald’s had
given me a tray that had a sheet of paper on it. I imagine it was a substitute for a tray cloth. On
it were printed a number of optical illusions. They reminded me that on many occasions I had
begun my management course with what Peter Hesseling called a “healthy choc des opinions” [1].
With managers who were very skeptical that academics knew anything about management at
all, I would organize the room so that the tables formed a complete square with no entry to the
middle. I would let the class assemble. Say nothing for a few minutes and wait until they got a
bit restive. Then I would throw some money in the form of notes and coins into the center of the
square, and watch for their reactions. At the same time I would pick on some of them and suggest
that they might be thinking “what’s this chap up to?” or “Oh my God we’ve got an awful lecturer
here” and so on. It provided a nice introduction to the study of worker-manager relationships, as
well as the idea that perhaps this crazy academic had something to offer.
Another opening that I have used with both managers and student teachers is to set a psy-
chological test. At least that is what I told them, and in no uncertain terms. “It is a psychological
test!”
It should be remembered that psychological tests are, among other things, used for the
selection of people as well as counseling in school, college, and work. They are also associated with
the measurement of IQ on this side of the Atlantic. Depending on the audience I would give a
formula which described the reasons why I wanted them to take the test. For example, graduate
student teachers were told that the purposes of the exercise were (1) to show the importance of
standardization when setting tests, (2) to illustrate objective items, and (3) to illustrate a test that
was culture-free. They were also told I would not look at their scores.
They were required to give one answer to each of five questions, each set to test their understanding of a certain aspect of a picture displayed on a screen. They were told there was only
one right answer to each question, which was set in multiple-choice form. They were also told that
the pictures would be presented at speed since speed is related to intelligence. Several repetitions
were made of the rule that there was only one right answer.
One of the pictures used was the Müller-Lyer illusion. It is surprising that so many people
have not seen it and when faced with it on first sight assume that line “b” is longer than line “a”.
Those who have seen it before know that both lines are the same length. Ask them which line
is the longer and give them no choice and they will answer (a) or (b). Even given the option of
“neither” many will be drawn into the trap of answering (a) or (b) (see Exhibit 1.1).
The other pictures came from a book called the Anatomy of Judgement by Jane Abercrombie [2].
It was one of the first books published in the field of Higher Education in the UK and
is a classic, well worth reading. It reports research on the judgments made by medical students.
It contains, probably, the first reported use of the discussion group as a vehicle for obtaining data
for research from students in higher education.
One of the pictures shows three persons in a tunnel. Many people perceive them to be
of different heights but, in fact, they are all the same height. From somewhere else I got the old
lady/young lady illusion. Is she old or young? Is she a blonde or brunette? [3, p. 19, Exhibit 2].
So why go to all this trouble to create an exercise that annoyed everyone when I told them that,
contrary to the information given (that there was only one right answer to each question), there
were no right answers to any of the questions! They felt they had been brainwashed, which to an
extent they had. Once, after I had done this exercise with 100 or more graduate student teachers,
a nun shouted from the back of the class “you are immoral!”
In so far as student teachers were concerned I wanted them to grasp that their students may
not always perceive what the teacher is doing or saying in the same way that the teacher wishes
them to do, and that this may be the cause of misunderstanding. The dripping tap exercise is a
reminder to teachers that among their students there may be a variety of interpretations, and,
therefore, understandings of what the teacher is saying. For example, the way problems are set
can affect the understanding that individuals have of a problem. In the case of management it
is a warning, particularly when faced with an unheralded problem, such as an angry member of
the workforce, that the manager can easily misunderstand the situation. In such circumstances
the manager has to slow things down so that his/her response is considered rather than reacting
quickly and living to regret his/her reaction.
Within this context, when I was moderating engineering science projects at a school in the
north of England, the teacher responsible for the course, Glyn Price, sent me the letter that is
shown in Exhibit 3. This drew attention to a problem that has been noted by other teachers of
engineering, namely, that students can solve an engineering science problem with the correct use
of mathematics and yet not understand the physics. Teachers need to ensure that students have
an understanding of the physics, which is the primary purpose of classroom assessment.
To summarize:
1. “things” in a classroom, staff room or elsewhere in the work situation may not always be
what they seem;
2. communication is a two-way affair; and
3. communicators do not always perceive each other in the same way.
Lest we forget, when we communicate we establish relationships and it is these relationships
that give us our being. The Scottish philosopher John Macmurray provides a philosophical insight
into the significance of relationship [4]. He argues that the “Self ” finds its being through the
relationships it has with others. Macmurray asks us to consider the Self in relation to the world.
“When I act I modify the world. Action is causally effective, even if it fails to produce the particular
effect that is intended. This implies that the Self is part of the world in which it acts and in
dynamic relation with the rest of the world. On the other hand, as subject the Self stands “over
against” the world which is its object. The Self as subject, then, is not part of the world it knows,
but withdrawn from it, and so, in conception, outside it, or other than its object. But to be part
of the world is to exist, while to be excluded from the world is to be not-existent” [4, p. 91]. We
depend on the world for our identity. In these terms we choose to exist or not-exist as a person or
a professional. If we choose the former then there is an obligation to understand the world beyond
that of the technicalities of our chosen profession. Since the “Self is a person” and “persons only
develop in relation to other persons we come to be who we are as personal individuals only in
personal relationships” [4, p. 15].
Given this view of personal relationships it is incumbent on each individual to understand
the others with whom we relate. Such understanding can be obtained by reflecting on our own
behavior and how it affects others. There is no better beginning for such reflection than with
perception. So how do our perceptions shape our learning? The first thing we might have induced
from the dripping tap exercise is a warning: that is, not to be governed by stereotypes, or, things
are not always what they seem.
Many years ago I found an exercise in an American journal that had the intention of illustrating
this point (among other things), which I tried out on the engineering students I was
teaching [5]. I asked them to spend an hour walking in pairs around the center of a large city
(Liverpool). When they got back to the class I asked them to write down what they had seen.
Since they had all been sent along the same route you would have expected all the main land-
marks to be listed in their reports such as the railway terminus, a large hotel, the philharmonic
hall (home of the Royal Liverpool Philharmonic Orchestra), and two cathedrals. Not so: as the
American authors had predicted, a variety of descriptions emerged. Remember they walked
in pairs: there was no instruction that they should not talk to each other. It is highly unlikely
that they did not talk about the issues that were bothering them, and a quick skip through the
reported scenarios suggested that this might have influenced what they had seen.
There is an early 19th-century building in the street opposite to where I had my office in
Trinity College Dublin before I retired. My students would have passed it many times. It has a
“cemented” frieze that goes round its wall, which is twelve inches or so above eye level. At intervals
are a number of carvings at the base of pillars that decorate the building. One is of mice playing
billiards. When I asked my students what they had observed about this building when they
walked past it, very few of them mentioned these carvings. What, then, governs our thinking
when we are walking by ourselves or with friends when we walk around a city center? Why do
we notice some things and not others? In my case I know that when I get off a bus near college I
look up to see what the time is on a clock fixed to a wall: this diverts my attention away from the
book display immediately in front of me in the window of one of the few surviving bookshops in
Dublin. So while these books may catch my eye for a brief second I don’t remember them. I do
remember the time.
The fact of the matter is that there is so much information available to us that our mind has
to have a mechanism for selecting items from that information. For our purpose let us follow Jane
Abercrombie’s explanation. Even though much research has been completed on perception since
the 1960s her general explanation is consistent with the facts as we observe them for ourselves.
First we store information. As long ago as 1932 the British Psychologist F. C. Bartlett had said
we organize our past experience so that we can relate it to the present. Clearly, we have many such
organizations and he called them schema. Philip Vernon, another British psychologist, called them
schemata. He said they were “persistent deep-rooted and well organized classifications of ways
of perceiving, thinking and behaving” [2, p. 28]. Other authorities such as the American G. W.
Allport have called them categories, which are “frames of reference for fresh perceptual samples
of immediate modes of behavior.” In my work I tended to use Allport’s term frames of reference.
Abercrombie summed it up by saying that “schemata can be regarded as tools which help us to
see, evaluate and respond.”
I used the term “frames of reference” because they appeared to be rather large organizations
of built-up frameworks of concepts rather than the individual concepts themselves. They had
“meaning” and they help us to give “meaning” to the objects we see in our world. So, when we
are walking along a street, we relate the objects that we see to these “frames of reference” and, in
consequence, some objects have more meaning for us than others.
These definitions do not propose anything as simple as a “memory bite.” Nevertheless, it is
easy to see how useful this concept can be in information processing models of problem solving
and decision making. The nature of learning is immediately clear. It is that process by which
experience develops new and re-organizes old concepts. The concepts are the means we use to describe
our schema. These schema contain within them first principles. Concepts are classes of stimuli that
have common characteristics. Without them the world is meaningless and communication is im-
possible. That is why we put things in categories and build up frames of reference. How they are
learnt and how they are taught are problems of great significance, but that is another issue [6].
When we walk round a street as a purposeful activity we approach that (and indeed any)
situation with preconceptions to which our expectations are related. There is a tendency for these
preconceptions to organize our perceptions of the task(s) we propose to undertake. This applies
as much to research, as it does to learning, as it does to beginning a new job, or dealing with a
colleague of whom we have been given some knowledge but never met. To this extent we give
meaning to the objects of knowledge, hence, the adage that no two persons see the same things
alike. Thus, what we see, and more significantly what is presented to us (as well as how it is
presented, as for example this seminar) also control that knowledge.
25
This means that, whether we like it or not, we are “prejudiced” or “biased.” A person from
a poor neighborhood will have very different experiences to a person from a wealthy district.
In effect they will speak different languages [7]. The former are likely to have fewer words and
grammatical forms to call on. Much has been written about these differences. But the different
experiences to which individuals are exposed will lead to different prejudices, attitudes, and values.
Apart from family and school there are other influences on our perceptions not least the
people with whom we work. Industrial and commercial organizations create their own culture
and it has been shown that working to and within such cultures can breed successful organiza-
tions. Such cultures can also be self-defeating. Firms develop their own systems of categories and
language associated with those categories. As it passes on to individuals, because of the limita-
tions of the jobs they do, this language and the schema with which it is associated also become
limited. A perception that has been developed within a narrow range of activities is sometimes
called “déformation professionnelle” [7, p. 19]. It is a characteristic of specialism (specialist
knowledge). Furthermore, it causes individuals to over-rely on past experience. I found among a group
of engineers who had been highly inventive in the past that when they were faced with a new
problem they began to solve it by seeing if they had done anything like it in the past. They did
not, contrary to what might have been expected, necessarily hypothesize about the new problem
and bring the “principles” that they had learned in their education to bear on the issue [7, p. 95].
In these circumstances experience becomes an impediment to innovation [8].
I would not wish to argue that adult motivation is tied to the past but the way in which
we try to meet our needs may well be governed to some extent by this factor. It seems to me that
there is an in-built tendency to rely on experience in preference to training. In one inquiry persons
below the age of 40 valued training whereas those over the age of 40 thought their experience was
more important. Moreover, there was some indication that the training might have been valued
less for its educational merit than for its contribution to future promotion. That research was a
long time ago. Consider the present situation. There is some evidence of unemployment among
middle-aged engineers. Moreover, some employers prefer younger employees in the belief they
are more innovative. But they too will be made redundant and possibly face unemployment as
they age [9].
All of this has to have implications for the education system and the role of employers in
that system. Clearly, students not only have to be educated to be adaptable and flexible but to
be prepared for continuing professional and personal development (CPPD). Management will
no longer be able to devolve responsibility for CPPD to the education system. We have to view
ourselves as learning systems that have to comprehend many worlds of experience. Understanding
how we learn is therefore the first skill in acquiring the abilities of adaptability and flexibility.
POSTSCRIPT
After I had completed these seminars I was given a book on the philosophy of engineering [10] in
which there was a chapter on “Roboethics and Telerobotic Weapons Systems” [11] which rein-
forced my views about the importance of perception, more especially as it relates to the intelligent
control of weapon systems, and therefore, the morality of their use [12]. Sullins writes, “the op-
erators of telerobots” (we think of drones) “necessarily see the world a little differently when they
look at it through the sensors and cameras mounted on the machine and this may impact their
ability to make ethical decisions or at least influence the kinds of ethical decisions they choose
while operating the machine. When one is experiencing the world through the sensors on a robot
one is experiencing the world telepistemologically, meaning that the operators are building be-
liefs about the situation that the robot is in even though the operator may be many (thousand)
miles away from the telerobot. This adds a new wrinkle to traditional epistemological questions.
In short, how does looking at the world color one’s beliefs about the world?” More significantly,
how does it color one’s decision making when one has to distinguish between innocent people
and an enemy? And this, as Sullins says, is “a monumental problem” [13]. He argues that, while
telepistemological distancing has been one of the reasons that it is difficult to exercise intelligent
control over machines, they have had the ability to reduce casualties. When he wrote his article he
was not able to say whether the ethically positive outweighed the negative. He pointed out that
if telerobotic warfare fostered hatred and caused the moral agency of an enemy to be disregarded
then ethical conditions for a just war would not be reached [14].
Not only do these weapon systems illustrate the importance of perceptually driven behavior
but they also show that epistemology is not a trivial subject [15].
NOTES
[1] Choc des Opinions. Hesseling (p. 19) points out that “practice effects perceptions by estab-
lishing tentative frames of reference which increasingly articulate and structure our experi-
ences.” It follows that these experiences become “reliable aspects of our experience” and so
we come to value experience because of the perceived reliability of our experience. Hessel-
ing then says that, applied to specialism, autistic tendencies are fostered “because one tends
to define each situation as fitting one’s own schemata” (p. 19). Now, says Hesseling, appoint
a specialist to a general management position and he/she will have to take in schemata that
do not yet fit their own experience that is based on his/her prior specialism. Now he/she
is faced with taking in information from a range of specialisms. Hesseling argues that the
manager has to be shown how his “thinking” is based on his specialism and that he needs
to restructure it to cope with his new cognitive environment. This can be done by giving
the manager a frustrating experience to demonstrate the deficiency of their own schemata
in other fields. “They need to be confronted by specialists in other fields in order to get a
healthy choc des opinions” (p. 19). 21
This shows the value of a liberal education in which a person lives in a mixed community.
Nevertheless, the individual would still have to be shown what was being sought (i.e., aided
to transfer perception).
Related to choc des opinions is déformation professionnelle. The deformation represents “the
usually highly efficient categories when viewed in the context of one’s specialism.” Most
people tend to persist in using these categories outside of their specialism.
Hesseling, P. (1966). A Strategy for Evaluation Research. Assen, Van Gorcum.
[2] Abercrombie, J. (1960, 1989 reprint). The Anatomy of Judgement. London, Free Association
Books. 22, 24
[3] Heywood, J. (1989). Learning Adaptability and Change: The Challenge for Education and
Industry. London, Paul Chapman/Sage. 22
[4] Macmurray, J. (1957). The Self as Agent. London, Faber and Faber. 23
[5] Dinkelspeil, J. R. (1971). A teachable subject. Journal of Higher Education, 42(10), 42. 23
[6] See Ch. 4 – Concepts and Principles in Heywood, J. (2005). Engineering Education. Re-
search and Development in Curriculum and Instruction. Hoboken, NJ, Wiley/IEEE. 24
[7] In 1961 the distinguished British sociologist Basil Bernstein drew attention to two types
of language which he called “public” and “formal.” These broadly related to language use
in different socio-economic groups. He subsequently redefined the terms. A “restricted”
code was used by those in lower socio-economic status groups and an “elaborated” code was
used by those in higher socio-economic status groups. The “restricted” code limits both the
scope of expression and thought. It progressively orients the child to lower level conceptu-
alization. It is through using the language of implicit meaning that it becomes difficult to
make explicit, and to elaborate verbally, subjective matter. e teacher who speaks with an
elaborated code has, according to Bernstein, to make that code available without depriv-
ing the pupils of the dignity of their own restricted codes. Common characteristics of the
restricted code are: (1) short, grammatically simple, often unfinished sentences; (2) simple
and repetitive use of conjunctions; (3) little or no use of subordinate clauses; (4) rigid and
limited use of adjectives and adverbs; and (5) frequent use of statements where the reasons
and conclusion are confounded to produce a categoric statement. 25
As between the subjects of the curriculum it is self-evident that pupils from any social
class may use a “restricted” code because of limitations in their understanding (e.g., second
languages and science; see for example Champagne, A. B., Gunstone, R. F., and L. E.
Klopfer (1983). Naïve knowledge and science learning. Research in Science and Technological
Education 1, 173–184).
A contrasting view to Bernstein’s was put by an American linguist W. Labov (1973) who
studied African-Americans in New York. He argued that the “myth” of verbal deprivation
is particularly dangerous because it diverts attention from real defects in our educational
system to imaginary defects in the child. He distinguished between standard dialects used
by middle classes and non-standard dialects used by the lower classes. He is critical of
Bernstein who, he argues, sees middle-class language as logical in every respect. In
contrast, Labov saw much middle-class language as verbose with no inherent logic. The average
middle-class speaker is enmeshed in verbiage, the victim of socio-linguistic factors beyond
his control. Non-standard dialects are highly structured systems.
The above remarks are taken from a text published by this writer in 1984. But they do
have a relevance in 2016 when, in the UK, those advocating exit from the European Union,
who it seems came mainly from the working classes, were written off as unintelligent. They
were contrasted with graduates who tended to vote remain. The implication was that the
graduates were more intelligent. Is wisdom the province of a particular social class or does
everyone possess wisdom irrespective of dialect?
Bernstein, B. (1966). Elaborated and restricted codes: their social origins and consequences
in A. G. Smith (ed.) Communication and Culture. New York, Holt Rinehart and Winston.
Labov, W. (1973). The logic of non-standard English in N. Keddie (ed.). Tinker, Tailor,
The Myth of Cultural Deprivation. New York, Academic Press.
Heywood, J. (1984). Considering the Curriculum during Student Teaching. London, Kogan
Page.
[8] Youngman, M. B., Oxtoby, R., Monk, J. D. and J. Heywood (1978). Analysing Jobs. Alder-
shot, UK, Gower Press. 25
[9] Heywood, J. (2012). The response of higher and technological education to changing pat-
terns of employment. Proceedings Annual Conference of the American Society for Engineering
Education. Washington, DC. 25
[10] Michelfelder, D. P., McCarthy, M. and D. E. Goldberg (eds.) (2013). Philosophy and En-
gineering: Reflections on Practice, Principles and Process. New York, Springer. 25
[11] Ibid. Chapter 18. Sullins, J. P. Roboethics and Telerobotic Weapon Systems. 25
[12] Sullins writes (p. 229) “A technology is used ethically when it is intelligently controlled
to further a moral good.” The philosopher Carl Mitcham explains that intelligent control
of technology requires: “(1) Knowing what we should do with technology, the end or goal
toward which technological activity ought to be directed; (2) knowing the consequences of
technological actions before the actual performance of such actions; and (3) acting on the
basis of or in accord with both types of knowledge, in other words, translating intelligence
into active volition” (Mitcham, C. (1994). Thinking through Technology: The Path between
Engineering and Philosophy. Chicago, Chicago University Press). 26
[13] Sullins writes (p. 231) “even just getting a robot to autonomously recognize a soda can
in a lab environment is tough. One solution is to have a human agent help the machine
make these determinations telerobotically by having the human operator analyze the data
coming in from the machine to help it determine if an object is a soda can or some other
object. If we move the robot out of the lab and onto a battlefield, and task it to not just
look for innocent soda cans but for enemy agents who are actively trying to deceive the
machine, and then added to all this complexity we also have to distinguish the enemy
and the neutral agents who are also present at the scene, then we must realize that this is
obviously a monumental problem that will tax our telepistemological systems design to the
limit.” 26
[14] Bowen, a distinguished engineer, points out that from the earliest times there have been
attempts to define a “just” war. Following Jones (1998) he lists five conditions for dealing
with the decision to begin a war. These are as follows. 26
“1. There must be a just cause (such as to repel an aggressor). 2. There must be just
intent (such as to restore peace and justice). 3. War must be a last resort, every possibility of
peaceful settlement having been exhausted. 4. The declaration of war must be by a legitimate
authority. 5. There must be a good prospect of success.”
Just how difficult it is to get agreement about these is well illustrated in the UK in the
responses to the Chilcot Report on the Iraq War published in July 2016. In so far as the
conduct of the war is concerned, there are two requirements: “6. The innocent must not be
directly attacked, but only the armed forces of the enemy. 7. The means must be proportionate
to the end in view.” The latter is clearly open to much debate. It will be noticed that
there are no requirements for the “peace.” In the case of the Iraq War, it is clear that the US
or the UK had no sensible plan to deal with the aftermath of the war, and the consequences
of that failure remain with us. Bowen, W. R. (2009). Engineering Ethics: An Aspirational
Approach. London, Springer-Verlag. See also Jones, R. G. (1998). Peace, violence and war
in B. Hoose (ed.). Christian Ethics. London, Continuum, pp. 210–222.
[15] Epistemology. “Theory of knowledge, the branch of philosophy that inquires into the na-
ture and the possibility of knowledge. It deals also with the scope and limits of human
knowledge, and with how it is acquired and possessed. It also investigates related notions,
such as perception, memory, proof, evidence, belief and certainty.” p. 194. The Penguin
Dictionary of Philosophy. London, Penguin Books. (See Journey 4.) 26
JOURNEY 4
Meaning—True or False: Real
or Imagined
Our last Journey took us into the realm of psychology. Clearly, the questions posed by philoso-
phers are sometimes posed by psychologists. The same is true of sociology where sociologists have
developed a very substantial theory of knowledge, traditionally the province of philosophers. This
journey brings us face to face with some of the fundamental questions of philosophy such as
“What is knowledge?” and “What is truth?” It also brings us into the realm of the philosophy
of science (education). A question that has to be resolved in these journeys is whether or not
there is an area of knowledge that is the philosophy of engineering (education). I put education
in brackets because some engineering educators with whom I work believe that you first have to
resolve the issue as to whether or not there is a philosophy of engineering that is separate from
the philosophy of science before you can resolve the issue of whether or not there is the possibility
of a philosophy of engineering education that is separate from a philosophy of science education.
The starting point for this journey is the view expressed in Journey 3 that learning is the
process by which experience develops new and reorganizes old concepts (schema). At issue is the
mechanism that develops new and reorganizes old concepts. Some authors, such as John Mac-
murray, call this process apperception. One answer lies in what I shall call psychological con-
structivism. It arises from the well-known research on children’s learning by Jean Piaget. He said
that concept development arises from the personal, individual, and intellectual construction that
children make as a result of their activity in the world. Knowledge is not passively received from
the environment, but is actively constructed by the child. It is an axiom that directly challenges
the transmission model of learning. But beyond that is a much more controversial axiom which
says that “coming to know is an adaptive process that organizes one’s experiential world: it does
not discover an independent pre-existing world outside the mind of the knower” [1, p. 141]. To
put it in another way, “facts are made by us and our way of experiencing them” [2]. Seen in this
light all knowledge is relative.
Constructivists believe that their theory has implications for teaching, particularly that of
children. Ruth Driver suggested that the constructivist teaching of children takes place in six
steps. These are elicitation (in which the students find out where they are at); restructuring ideas
(in which the students clarify meanings together); constructing new ideas in the light of these
discussions; evaluating these ideas by thinking them through or by experiment; applying these ideas
to different situations; and reviewing them, i.e., reflecting on the outcomes. Ruth Driver and
her colleagues liken this last stage to learning how-to-learn or meta-cognition, as understanding
how we learn is often called [3].
It is only a small jump to see the activity of design mirrored in this process, for constructivists
value non-directive teaching and discovery learning.
Two well-known American engineering educators, Ron Miller and Barbara Olds, developed
a unit operation laboratory in chemical engineering that was based on constructivist methodology.
They described the behavior of the tutors in these terms: “rather than acting as acknowledged
authorities transmitting objective knowledge to passive students, laboratory faculty use coaching
and Socratic questioning techniques to help students understand complex technical phenomena
by constructing mental models which perceive reality as perceived by acknowledged experts while
minimizing models containing significant misconceptions” [4]. They go on to argue that “use
of constructivist pedagogics creates an ideal context for assessing students’ abilities to complete
authentic engineering tasks rather than relying on artificial examinations which emphasize non-
contextual recall of facts and closed ended problem solving.”
But as Michael Matthews, an Australian philosopher of science, has pointed out, the
constructivist approach to teaching is not unique [5]. I certainly did not discuss examinations and
assessment in the way I have done from a constructivist position but rather from that of a moder-
ate realist. Michael Matthews notes that constructivism is a particular development of empiricism
and also points out that the debate in science between empiricism and realism can be traced back
to at least Aristotle.
The fundamental philosophical problem is that the two views represent two different theories
of knowledge and truth. Put simply, constructivist theory holds that our understandings
and misperceptions are phenomena, for we have no direct access to the real world. It leads to a
“notional” view of science. In contrast, the realist view holds that there is an objective world that
is independent of the learner. The world is learner-independent and it is possible to seek truths
about that world, as they are currently understood. There is the possibility of universals. Realists
hold a “correspondence” theory of truth. This says that a statement is true if it corresponds to
what it is that it attempts to describe. A British philosopher, Peter Vardy, gives as an example:
“an atom is the ultimate indivisible element is true if, and only if the ultimate indivisible element is
an atom” [6, p. 12], but it only remains true until it is proved otherwise. The same would be true
of Newton’s principles, which, in spite of relativity, remain true for a great number of situations.
Realists take the view that while we may not necessarily know the truth or falsity of a particular
statement, there is a truth to be found.
The opposite view is a “coherence” view of truth. Statements are true because they cohere
with other statements. To cite Radford “my knowledge of the world hangs together in a coherent
bundle of propositions representing beliefs and understandings. My inclination to believe in the
truth of particular statements rests on the fact that they fit with others” [7, p. 139].
Peter Vardy illustrates the difference between the two as follows: “The realist about music
will maintain that there is some absolute standard of music against which two types of music can
be measured, while the anti-realist will say that within one culture Mozart’s music might be more
highly rated than that of the Beatles, but there is no absolute standard and in another culture
a contrary view might be held” [6, p. 14]. It is this theory that is in the ascendancy in modern
society. Truth is relative; therefore there is not much point in searching for the truth [8].
Of course nothing is as simple as it seems and I have presented a rather stark comparison.
There are several realist positions and there is a group of constructivists (anti-realists) who
reject the “notional” (nominalist) view of science. Herrnstein Smith, in the wonderfully titled book
Scandalous Knowledge. Science, Truth and the Human, gives a full-scale defense of the non-relativist
constructivist position [9].
Whatever view a person takes it influences his/her values, opinions, and attitudes not only
as a person but as an engineer. From an educational point of view there are things to be learned
from constructivism just as there are from realism. Sociological phenomenology is a case in point.
It has its origins in the work of Alfred Schutz which was made accessible when in 1966 two
American sociologists, Peter Berger and Thomas Luckmann, published a book called The Social
Construction of Reality [10]. It is said to be one of the most widely read theory books of its time.
Theoretically, it is a study in phenomenological sociology. It is an attempt to construct a sociology
of everyday life. By way of reinforcement of the previous comments on Piagetian constructivism
the view taken by Berger and Luckmann is that reality is a social construct, and our construction
of that reality depends on prior experience. That would seem to be no different to what happens
to us when we learn. As we have seen, experience dictates to a large extent what we learn, and
that experience is of the family, school, college, work, and our more general social relationships.
Sociological constructivism is not concerned so much with what individuals believe but with how
the social structure (environment) of those individuals determines what they believe, and it is
clear from Berger and Luckmann that social organizations created for the learning of science and
engineering are no exception. One of the reasons why educators sometimes have differing views
to industrialists as to what should happen in engineering education is possibly due to the fact that
they have different constructions of what engineering actually is. Many engineering educators
have come round to the view that there is a need to understand what it is that engineers actually
do and why.
The larger problem for engineering educators is that in theories of this kind knowledge
is not absolute but relative. Consider the challenge to engineering educators as expressed by a
student teacher. She said that the theory “is based on a phenomenological approach to the anal-
ysis of reality. In this view consciousness is subjective: when we perceive something we bestow
meaning on it, which will depend on our subjective consciousness, which has been determined by
our past experience. Thus, knowledge is not something to be brought into the classroom in neat
fixed packages, but is something which is determined in the classroom by the perceptions of the
individuals therein” [11, p. 192]. If this is true then why am I offering a series of packages? I won’t
claim they are neat, far from it: I will claim that they are designed to draw your attention to the
questions we have to ask about ourselves and what it is to be an engineer, and more pertinently
what it is to be human.
If I pursue my student’s axiom then I am faced with the view that I ought to negotiate the
curriculum with each of you so that each individual has his or her needs met. That might be okay
in the liberal arts but is it okay in engineering? My answer lies first in the view that a major goal
of education is to develop a philosophical disposition to learning, a reflective habit which in the
case of the curriculum asks questions like: “what do we know already?” “What do we want to
know and need to find out?” “How will we go about finding out?” “How will we know and show,
that we’ve found out when we’ve finished?” Such questions would seem to be a normal part of
thinking. They certainly have to be asked when we invent or design a product. We are asked to
become reflective practitioners.
Let us take the argument a little further. An Australian educator, Garth Boomer, said that
“there can never be an exact congruence between what a teacher or a textbook means and what a
learner makes of that meaning” and that applies as much to mathematics as it does to any other
subject. That is surely verified by the after-class conversations that students have with each other
about statements in textbooks and what the teacher said. As Boomer goes on to say, “the dance
between teacher and taught represents a continuing negotiation of meaning” [12]. It is a complex
matter of continuous negotiation and, in terms of complexity theory, it is the activity from which
student learning is “emergent” [13]. But in engineering it is a negotiation that takes place within
a correspondence view of truth.
But one step more. Universities are supposed to be different places to schools yet so much
of what is done in schools is replicated in universities, often badly. In universities students should
be at a stage where they are able to accept responsibility for their learning. This places on the uni-
versity an obligation to place them in environments where they can exercise that learning. Given
that this is the case it is clear from the literature and common room discussion, that irrespective of
the changes that universities have experienced over the centuries, the idea that they are, or rather
should be communities of scholars seeking “emergent” meanings remains the key descriptor of
what they should be, even if they aren’t. They are unlikely to be committed to any one form of
learning, that is, directive or non-directive, but should be anchored to learning situations that are
most likely to achieve the particular objectives that have to be achieved. Some of these are likely
to be negotiated as for example the requirement that students should choose their own projects
during certain stages of the program. Engineering like teaching should be a reflective practice.
NOTES
[1] Lerman (1989). Cited by Matthews, M. R. (1994). Science Teaching. The Role of the History
and Philosophy of Science. London, Routledge, p. 141. 31
[2] Ibid. 31
[3] Driver, R. and V. Oldham. (1986). A constructivist approach to curriculum development
in science. Studies in Science Education, 13, pp. 105–122. 32
[4] Miller, R. L. and B. M. Olds. (2001). Performance assessment of EC 2000 outcomes
in the unit operations laboratory. Proceedings Annual Conference of the American Society for
Engineering Education. Paper 3513. 32
[5] Matthews, M. (1994). Science Teaching. The Role of the History and Philosophy of Science.
London, Routledge. 32
[6] Vardy, P. (1999). What is Truth? Sydney, University of New South Wales Press. 32, 33, 35
[7] Radford, M. (2008). Complexity and truth in educational research. In M. Mason (Ed).
Complexity Theory and the Philosophy of Education. Chichester, Wiley/Blackwell. 32
[8] Vardy [6, p. 10] lists five groups that take an anti-realist stance. Mainstream analytic phi-
losophy which has given up the search for firm foundations of knowledge. Those in the
philosophy of religion who hold that “religious truths are essentially internal to a fictitious
story.” In ethics and aesthetics, values are “culturally determined and have no reality independently of
such settings.” Post-modernism rejects any single truth. 33
[9] Herrnstein Smith, B. (2005). Scandalous Knowledge: Science, Truth and the Human. Edinburgh,
Edinburgh University Press. Chapter 1 of this book gives a review of the history of
constructivism that takes in authorities other than Piaget. In this chapter she writes “in The
Social Construction of What? Ian Hacking observes that nominalism is a crucial conceptual
commitment in the constructivist epistemology” [which, as it happens, he calls “social
constructionism”]. Hacking explains nominalism as the “denial,” contra realism’s affirmation,
that Nature is inherently structured in certain ways. Contrary to his implication, however,
constructivists do not characteristically “deny” metaphysically what realists evidently
metaphysically maintain: “namely, first that nature is structured in certain ways inherently
(meaning independent of our perceptions, conceptions and descriptions) and, second that
we properly assume (Hacking says “hope”) that those ways are in accord with our percep-
tions, conceptions and descriptions of them. Rather, constructivists typically decline, in
their historical, sociological, or psychological accounts of science and cognition, to pre-
sume either any particular way the world inherently is or such an accord. This professional
ontological agnosticism is not, as realists may see it, a perverse refusal of common sense
but an effort at due methodological modesty and theoretical economy.” 33
Hacking, I. (1999). The Social Construction of What? Cambridge, MA, Harvard University
Press.
[10] Berger, P. and T. Luckmann. (1966). The Social Construction of Reality. New York, Dou-
bleday. 33
[11] Heywood, J. (1982). Pitfalls and Planning in Student Teaching. London, Kogan Page. 33
[12] Boomer, G. (1992). Negotiating the curriculum. In G. Boomer et al. (Eds.), Negotiating
the Curriculum. Education for the 21st Century. London, Falmer Press. 34
[13] Morrison, K. (2008). Educational philosophy and the challenge of complexity theory.
In M. Mason (Ed.), Complexity Theory and the Philosophy of Education. Chichester, Wi-
ley/Blackwell. “In complexity theory, learning becomes a joint voyage of exploration, not
simply a recycling of given knowledge. For learning to be promoted, rich and positive
feedback between learners and teachers is essential. Cognition is dialogic and high qual-
ity verbal interaction is essential. The teacher is vital, intervening judiciously to scaffold
and create the conditions for learning through self-organization and the child’s (student’s)
emergent knowledge. Cognition is not simply the acquisition of new knowledge: it engages
motivation, personalities, learning styles, dispositions and preferences, the whole person.
Teaching and learning take place at the intersection of the individual and society, and
the outcomes are unpredictable. This is a difficult model to entertain for those managers
who seek certainty, control, predictability and narrow accountability. Learning is an on-going,
emergently choreographed dance between partners and agents (co-evolution through
relationship connections); the partners both create, and are in, their dance. All parties come
together as co-evolving, co-adaptive and fluid communities of practice” [p. 23]. 34
JOURNEY 5
From Perception to
Self-Perception and a Little
Management En-route
Over fifty years ago in the early 1960s Tom Burns, a Scottish sociologist with a particular interest
in innovation in engineering, promoted the concept of a “plurality of social systems” [1]. By this he
meant that during the average day we mix in several different social systems. If we have a family
then before we go to work we live in the family system: once we are at work we are in the working
system. When we go home we are in the traffic system and if, when we return home, we play some
sort of sport we join the football system, or tennis system, or golf (system) club, or whatever. Life
is a continual movement between social systems. Even at work we live in different systems, one
of the most important of which is the career system, that is, if the organization is large enough.
Many engineers work in large organizations where progress up the career ladder is important. An
equally important system is the peer-group system, whether in industrial organizations, university
faculty, or among students who learn as much from each other as they do in class. Each of these
has a pull on our intentions and may impede the performance required of us to achieve the goals
of the formal or informal organization that we currently inhabit.
There is no escape from the demands that these systems make on our everyday lives. Witness
a TV “soap” comedy about office work, or the many that portray life in hospitals. Doubtless
there is a certain amount of truth in them. We find that doctors and surgeons are subject to the
same highs and lows as we are, and maybe there is some cause for alarm. How these different
systems affect our everyday life is a function of our personal goals. Consider the young engineer
whose first job is in a large organization or the young teacher about to begin his/her career: if the
work he or she is given to do is motivating then he/she may well want to work at it beyond what
is actually required, or as we might say “beyond the call of duty.” ey may act like that on many
occasions but when they marry and have children the situation changes. For many people the
family becomes more important than the organization. One hurries home to be with the young-
ster before he/she goes to bed, or to put them to bed. There is a continuing conflict between the
demands of the family and the demands of the organization that different individuals resolve in
different ways. Much stress is generated.
Most of us have to find ways to reduce the tensions and sometimes conflicts that arise from
having to function in a plurality of social systems if we are to maintain a satisfactory performance
at work. It is not surprising that the perceptions that people have of us are a function of the
behaviors we display. To the observer it may appear that we change personalities as we occupy
different roles. As pointed out in Journeys 2 and 3, different observers may see what we see quite
differently. Hesseling, in the language of manufacturing of the nineteen-sixties, gave the example
in Exhibit 5.1.
Exhibit 5.1: Hesseling’s example of differing perceptions at work.

“A student had to make a work sampling study of charge-hands* in a fairly confined factory without a complete introduction. He started on a Monday morning. A new product was assembled and production was under time stress. Because he often spoke to charge-hands and was continually making notes, he appeared to most of the young assembly workers as controller of their bosses. His appearance near the assembly lines was welcomed with some satisfaction: they became bolder towards the charge-hands. They made jokes and nudged each other when he was approaching. To the charge-hands he became a menace: they became uncertain and nervous and they concerned themselves more with the production process itself than with their group of workers. To the departmental manager he became a scapegoat: he blamed several production faults on this work sampling study and he walked about the factory more than usual.”
*charge-hand = a type of shop foreman
As Robert Burns, the Scottish poet, more or less said, “Oh that we had the gift of God to see
ourselves as others see us.” Apart from the illustration that this provides of how different people
perceive the same situation as a function of their roles, it is also a reminder that while we may
be fairly good at observing what is happening in a situation external to us, we are not so good at
summing up our own position in terms of what others may perceive us to be.
It is part of the task of management to reduce ambiguities such as these by ensuring that
everyone understands what the role of the person is. It is also part of management’s task to recog-
nize factors that affect our performance adversely and to take steps to alleviate them if possible.
Typically, in large organizations engineers work in teams. It takes very little to upset the flow
of teamwork. In some cases a manager may have to change a person’s role to make a team more
effective, for the role is the basic unit of any social system.
The problem is that, although the nature of work undoubtedly influences our attitude to
work, the orientations we bring to work will be influenced by the other systems with which we
interact [3]. These orientations to work (as well as the other systems in which we move) are
also influenced by personality. Taken together they contribute to the meaning that work has for us.
Thus, one of the most powerful influences on role behavior is the expectations that we have of
role keepers irrespective of the persons who occupy them. This is as true in the classroom situation
as it is anywhere else. Teachers are managers of learning and their orientations are determined
by the beliefs they have about how students learn and are motivated to learn. In the absence of
any formal training they rely on their previous experience. In this situation, any group of teachers
is likely to divide into what in the latter half of the 20th century were commonly called Theory
X and Theory Y. Douglas McGregor proposed these to describe two different orientations to
management. They equally apply in teaching, as Exhibit 5.2 shows. Column A is adapted from a
description by Schein of Theory X and Theory Y. This is clearly related to the potential for learning. A
teacher who believes Theory X explains student learning is much more likely to be committed to a
monologue form of teaching than a teacher who thinks Theory Y approximates to the truth. The
key question that faculty have to answer, irrespective of their beliefs, is number 5. If the answer is
“no,” what are they going to do about it? In reality human behavior is very complex, as Exhibit 5.3
shows [4].
Exhibit 5.2: Theory X vs. Theory Y.

A. Theory X
1. The student is primarily motivated by academic incentives and will do whatever gets him or her the greatest gain.
2. Since academic incentives are under the control of the institution, the student is essentially a passive agent to be manipulated, motivated, and controlled by the organization.
3. The student’s feelings are essentially irrational and must be prevented from interfering with his or her rational calculation of self-interest.
4. Institutions and their organizational (curriculum) arrangements can and must be designed in such a way as to neutralize and control their feelings and therefore their unpredictable traits.

B. Theory Y
1. The expenditure of physical and mental effort is as natural as play or rest. The ordinary person does not inherently dislike work: according to the conditions, it may be a source of satisfaction or punishment.
2. External control is not the only means for obtaining effort. A person will exercise self-direction and self-control in the service of objectives to which he is committed.
3. The average human being learns, under proper conditions, not only to accept but to seek responsibility.
4. Many more people are able to contribute creatively to the solution of organizational problems than do so.
5. At present, the potentialities of the average person are not fully being used.

Exhibit 5.3: The complex learner (adapted for the academic context from Schein [4]).

1. The learner (worker, manager, or teacher) is complex: the individual is highly variable and at any time has many motives, some of which are more important than others. Since an individual’s motive patterns are complex, the individual’s response to incentives will also change with circumstances.
2. The learner (worker, manager, or teacher) is capable of learning new motives that affect his/her behaviour through his/her curriculum, work and institutional experience. The psychological contract that the individual makes with his or her peers, managers and teachers is the result of a complex interaction between perceived needs and learning (work) and institutional experiences.
3. The learner’s (worker’s, manager’s, or teacher’s) motives in different institutions or different sub-systems of the same institution may be different; the student (worker) who is alienated in the formal structure may find fulfilment of social and self-actualization needs in the student union (societies), trade union, or other parts of the extra-mural system, or outside the system altogether, as for example in a hobby or the family. If the curriculum (work) is complex, in respect of perceived needs or abilities, some parts of the curriculum (work) may engage some motives, while other parts engage other motives.
4. The learner (worker, manager, or teacher) can become productively involved with the curriculum (work) and institution on the basis of many different kinds of motive. The individual’s ultimate satisfaction in the institution depends only in part on the nature of personal motivation. The nature of the task to be performed, the abilities and experience of the learner (worker), and the nature of the teachers, administrators and managers in the institution, all interact to produce a certain pattern of work and feelings.
5. The learner (worker, manager, or teacher) can respond to many different types of learning (work) strategy depending on his/her own motives and abilities and the nature of the task. There is no one correct learning (working) strategy that will work for all learners (workers) at all times.
Given that a role is a pattern of behavior associated with a particular position, it carries
out activities that, if the system is to achieve its goals, have to be co-ordinated. Thus, one of the
tasks of management is the integration and co-ordination of roles. This is not an easy task, for
the way an organization works depends on the psycho-social dispositions of the people in the
organization or team, and people can be very awkward as the list of behaviors in meetings given
in Exhibit 5.4 shows [5]. They are easily recognizable, be they in a meeting of industrialists or of
university faculty, as for example a departmental meeting, or a meeting of a student society.

Exhibit 5.4: List of behaviours in meetings.

The aggressor: A person who increases his or her status at the expense of others. S/he criticizes and is generally hostile.
The blocker: A person who disagrees with everything without good reason.
The comedian: A person who messes about, and may make negative jokes.
The competitor: A person who challenges others.
The devil’s advocate: A person who can be useful but may turn the group to his/her own way of thinking.
The digressor: A person who cannot stick to the point and whose contributions can be long-winded.
The dominator: A person who makes loud and lengthy interventions.
The side talker: A person who continually talks to his/her next-door neighbor.
The under-contributor: A person who contributes little or nothing to the meeting.
The withdrawer: A person who might be expected to contribute to the meeting but doesn’t.
Peter Drucker places the responsibility for effective team work firmly on the members of
the team, who have to commit to the purpose of the team, which is “to make the strengths of
each person effective, and his or her weaknesses irrelevant.” [6]. Do we assess team projects for
these qualities? Do we help students develop these qualities? “If the organization is to perform, it
must be organized as a team” [7]. I am aware that many authorities will claim that they do such
assessments, but the issues are to what level of depth such assessments are made and whether students,
or for that matter most of us, are capable of the real reflective thought that answers to such questions
require. I think we are, but that we need to be put in situations like the face-to-face tutorial where
we are forced to reflect on and come to an argued position on fundamental issues. A major problem
is that the idea of reflective thinking is thrown at us very late in our educational careers when
it ought to be part and parcel of our cognitive and emotional development from kindergarten
onwards: part of a spiral curriculum. We should not be put in the position that Matthews found
himself in when he tried to show his students that doing philosophy was natural. He hit “on
the strategy of showing them that as children many of them had already done philosophy. It
occurred to me that my task as a college philosophy teacher was to reintroduce my students to an
activity that they had once enjoyed and found natural, but that they had later been socialized to
abandon” [8].
I think there are some techniques that can help us develop such skills. For example, in the
Appendix I have shown a questionnaire designed by Bill Humble of the British Steel Corporation,
Jim Freeman and myself for self-assessment by persons working together in groups in search
of a policy. In this case, the intention was to video managers and trade union negotiators in
session, after which they would be asked to watch the video and then complete the questionnaire. I
would argue that the more we know about human behavior, whether through some
understanding of social psychology or extensive reading, as for example literature and biography,
the more we will understand ourselves.
R. M. Belbin, a British management research worker and consultant with an interest in
training, found that effective teams were composed of people who collectively employed the eight
roles shown in Exhibit 5.5 [9]. Even though we may be irritated by some members of the team
we need to recognize that each type is needed and has a valuable role to play. It is the task of each
individual to extract that value firstly from him or herself, and secondly from the other members
of the team.

Exhibit 5.5: R. M. Belbin’s eight team roles.

Chairman: Task to obtain the objectives. May be dominant but will not be assertive, and will try and use all the talents in the group.
The Shaper: A person who wants to bring everything together. A bundle of nervous energy.
The Plant: The person with imagination who will look for new ideas when the team is in difficulty.
The Monitor-Evaluator: Brings dispassionate analysis to the problem.
The Company Worker: Plenty of character that brings a disciplined approach to implementation.
Resource Investigator: Looks outside for ideas and brings them back to the group.
Team Worker: A person who understands the emotional needs of the group.
The Finisher: A person who worries about detail and expects things to be done properly.
I (we) did not specifically look at team behavior during a study we undertook to find out
what engineers actually did. But I was impressed by the way in which, individually, those in en-
gineering functions not only “managed” but also had to “manage” their jobs irrespective of the
level of the job in the hierarchy. They told me how by necessity they had to widen the scope of
the initial brief through communication and cooperation with other people. The role definitions
were often inadequate, but to my surprise, in some cases this seemed to have been an advantage.
Often in order to get a job done, an individual would have to persuade another, over whom they
had no authority, to do a job. For example, a person responsible for a contract that required the
company to maintain a spare parts store at a military air base might have to organize the replace-
ment of a single specialized component to replace one that had been used. The contracts engineer
had to persuade those responsible for the manufacture (small batch and unit) of the company’s
products to slip this job in with their other work. No formal system existed for this work, yet it
had to be done or the contract would be broken. The contracts engineer had to develop his role to
achieve this goal. In so doing he used management skills. Other similar situations came to light:
it seemed that persons were appointed to roles that they had to change, so as to implement an
action. To achieve that goal personal characteristics and skill in communication were essential. It
was in such situations that feelings of responsibility were engendered and motivation enhanced,
and I came to the view that the organization was not rigidly hierarchical but more a system of
persons in relation [10]. In a sense it was communitarian.
“A person is a psycho-social system. Within the boundaries of that system most individuals
wish to be “organic” to use a term first suggested by Burns and Stalker for this context [1]. ey
wish to be able to take actions and decisions as well as mature. ey wish to have some responsi-
bility. e boundaries of these psycho-social systems arise as a function of the needs of the job and
the needs of the person. When these are matched for each person in the organization a hierarchic
system becomes structured by individuals who are organic within their own system. The system
itself becomes organic so that it can respond to the needs of individuals. Both systems have to
be self-adjusting” [11] and that is true of any organizational structure. It seems that the some
executives in the IT industry have established self-adjusting systems, and that such systems aided
the development of Silicon Valley [12]. As long ago as 1960, Barnes of the Harvard Business
School reported a study of two electronics organizations that showed the more communitarian
(my word) they were the more successful they were likely to be [13]. He distinguished between
open and closed systems [14]. The more open the system the more successful it was likely to be.
It is a study that is still worth reading.
Looking back it seems that what I was looking at was a community of some two hun-
dred people called engineers. Perhaps we would get a better understanding of organizations if we
understood them as imperfect communities where not everything goes smoothly. Communities
depend for their success on interdependence and the commitment of the communitarians, for
which reason when a community is failing and its members blame the leadership they are failing
to look at themselves and their agency as a cause of whatever went wrong.
Whichever way the data was analyzed, that is by ability groupings or by functional groups,
some sub-abilities that involved direction and control were listed. Every person directed and controlled
at a level necessary for the performance of their job. The Little Oxford Dictionary (1966) tells us
that management is direction and control. Given that this is the case, then every person in that
community was a manager to a greater or lesser degree, even if it was mainly of himself. Every
day that is true of each and every one of us [15]. Everyone is a manager and from that we derive
a whole lot of personal responsibility for what we do. Moreover, when we are not allowed to take
responsibility some of us get very frustrated. Of course some of us take on too much responsibility
to the frustration of others!
Exactly the same argument can be made about leadership [16]. If that is the case then each
individual is given considerable responsibility in whatever situation they find themselves—the
family, the tennis club, work etc. For the individual is the agent of his/her own actions and in
that sense is both manager and leader of him or herself. Of course it is clear that some of us man-
age ourselves and other people terribly badly. Some of us do not have the personality to engage
with others and prefer to be managed in many different situations. But there is a situation when
allowing someone else to manage is an act of management. For example, democratic approaches to
management in which everyone is allowed an input often fail to create action because either there
is no consensus or there is no one able to lead the assembled out of the morass. Groups require
leaders/managers who take the ultimate responsibility but in a democracy it is an act of leader-
ship/management on the part of its participants to allow the leader/manager to lead/manage. But
this, as Greenleaf wrote long ago, requires a new concept of leadership/management “the moral
principle of which holds that the only authority deserving one’s allegiance is that which is freely
and knowingly granted by the led to the leader in response to, and in proportion to, the clearly
evident servant stature of the leader. Those who choose to follow this principle will not casually
accept the authority of existing institutions. Rather, they will freely respond only to individuals
who are chosen as leaders because they are proven and trusted servants. To the extent that this
principle prevails in the future, the only truly viable institutions will be those that are student
led” [17]. While I have completed the quotation for the sake of its integrity I do not wish to
pursue the idea of the servant leader here, although I have done this elsewhere [18].
The Scottish philosopher John Macmurray asks us to consider the "self" in relation to the
world. “When I act I modify the world.” In my terms, management of the self is taken with a
view to modifying my world. Macmurray goes on, “Action is causally effective, even if it fails of
the particular effect that is unintended. This implies that the "self" is part of the world in which
it acts, and in dynamic relation with the rest of the world [...] to be part of the world is to exist,
while to be excluded from the world is to be non-existent. It follows that the self exists as agent,
but not as subject” [19]. In my submission an act of agency is an act of management but it is also
an act of management to submit to direction and control as happens in two-way relationships.
All relationships are two-way since as Macmurray argues the "self" is a person and persons only
develop as persons in relation to other persons, and a major relationship in that development in
college is the tutor-student relationship. The basic motivation for managing and being managed
is for the sake of “communion.” So a community depends on the purposeful management of
individuals in the pursuit of the community’s aims. In higher education students and tutors have
common purpose. They necessarily carry with them responsibilities for managing themselves. It
is a reminder of the need to match a person’s talents to what an organization (college or industry)
has to offer in order for that person to be fulfilled and better meet the needs of the organization
(see Exhibit 5.6).

Exhibit 5.6: pp. 33 & 34 of note 15.

Students and individuals bring to their work:
1. Knowledge
   a. General
   b. About his/her specialism or subjects of study
   c. About business or college
2. Physical skills
   a. Health
   b. Related to psychomotor skills
3. Cognitive and affective skills
   a. Abilities to recognize and solve problems (creative skills)
   b. Ability to make judgments
   c. Ability to communicate
   d. Ability in the development of satisfactory interpersonal relationships
4. Personality and drive
   a. A certain activity level-norm
   b. A certain level of risk-taking
   c. Aspiration and expectations
   d. Acceptability
5. Values
   a. Interests
   b. A moral disposition

The college or organization helps people be effective by the following:
1. Job or lecture analysis
   a. Providing a definition of the key results the job is required to produce, or the aims and objectives of a curriculum program
   b. A definition of the knowledge and skills required for the performance of the task
   c. Details of information necessary for the completion of the job, or homework, or project or practical class
2. College or organization
   a. Provides a structure in which people can work or learn
   b. A management or teaching style that will motivate
3. Recruitment, education, and training
   a. Matching what an individual brings to the job, the needs of the job, or matching student abilities to the requirements of the program
   b. Background knowledge and experience of similar work or instructional situations likely to be of use
   c. Training or instruction in specific knowledge and skills for a defined function
This brings us to Drucker, America's most famous management guru. He wrote that "more
and more people in the workforce-and most knowledge workers-will have to manage themselves.
They will have to place themselves where they can make the greatest contribution; they will have
to learn to develop themselves. They will have to learn to stay young and mentally alive during a
fifty-year working life. They will have to learn how and when to change what they do, how they
do it and when they do it” [20, p. 163].
“Knowledge workers, therefore, face drastically new demands:
1. They have to ask: Who am I? What are my strengths? How do I work?
2. They have to ask: Where do I belong?
3. They have to ask: What is my contribution?
4. They have to take relationship responsibility.
5. They have to plan for the second half of their lives."
I consider the first question to be the most important and all-embracing question of them
all. Its implications for the curriculum, and not only that of engineering, are profound, for where better
to begin the pursuit of these most philosophical of questions than in a university—if you can find
one that still philosophizes!
APPENDIX
A questionnaire designed for use in negotiating skills development training by W. Humble, J.
Freeman, and J. Heywood and cited in Heywood, J. (1989). Learning, Adaptability and Change,
London, Paul Chapman, pp. 48 and 49.
Exhibit 5.7: Questionnaire.
Questionnaire for use in negotiating-skills development training

SELF-APPRAISAL—BEHAVIOUR IN DISCUSSION

Please study your performance from the video playback. This is a private assessment, so be perfectly frank in answering the questions—otherwise you are only fooling yourself.

In this discussion I tended to (answer Yes or No):
Ask specific questions about the topic under discussion
Try to score debating points
Get irritated with an opponent
Ask for clarification/facts about a point made by the other side
Contribute helpful suggestions
Admit I was misinformed/wrong
Interrupt before a speaker had finished
Opt out of answering an opponent's question on the grounds that I would appear to give way
Criticize when I had not really got a real point to make
Close the door to further argument
Change my mind when my assumptions were shown to be faulty
Keep quiet when I had nothing constructive to say
Overrule the chairman/leader
Prepare my case before the meeting
Not listen to an opponent's argument because I disagreed with his or her case

PLACE THE SHEET INSIDE THE FOLDED SHEET 2 AND FOLD OVER THE RIGHT-HAND EDGE SO THAT THE ANSWERS SHOW IN CUT-OUTS. Correct answers (R), incorrect answers (W), and questions unanswered (U) are tallied, and the score is R – W.

NOW FOLD OVER THE LEFT-HAND EDGE AND ANSWER THE QUESTIONS. Now compare your results and consider whether your contribution to the meeting was potentially helpful or potentially destructive to your case, and because you were (each pair rated on a scale from -3 to +3): bloody-minded or co-operative, benign or quarrelsome, flexible or rigid, tolerant or intolerant, open-minded or closed-minded, silent or talkative, independent or dependent, well-informed or lacking in information/knowledge. Now compare what you have written with your original score. Now attempt to define ways in which training could help you improve your performance in discussion.
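As an aside, the tallying the worksheet asks for is simple enough to automate. The short Python sketch below follows the worksheet's arithmetic (count right, wrong, and unanswered items against a key, then report R – W); the item list and the answer key in it are invented for illustration and are not the questionnaire's actual scoring key.

# Illustrative sketch of the worksheet's scoring arithmetic. The items and the
# answer key below are invented for demonstration; they are not the original key.

ANSWER_KEY = {
    "Ask specific questions about the topic under discussion": True,   # True means "Yes" is the keyed answer
    "Try to score debating points": False,
    "Get irritated with an opponent": False,
    "Contribute helpful suggestions": True,
    "Admit I was misinformed/wrong": True,
    "Interrupt before a speaker had finished": False,
}

def score(answers):
    """Return (R, W, U, R - W): right, wrong, unanswered, and the net score."""
    right = wrong = unanswered = 0
    for item, keyed in ANSWER_KEY.items():
        given = answers.get(item)            # True = Yes, False = No, None = left blank
        if given is None:
            unanswered += 1
        elif given == keyed:
            right += 1
        else:
            wrong += 1
    return right, wrong, unanswered, right - wrong

if __name__ == "__main__":
    my_answers = {
        "Ask specific questions about the topic under discussion": True,
        "Try to score debating points": True,
        "Contribute helpful suggestions": True,
        "Admit I was misinformed/wrong": None,
    }
    print(score(my_answers))                 # -> (2, 1, 3, 1)

The point of the exercise, of course, lies in the reflection rather than the arithmetic.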
NOTES
[1] Burns, T. and G. Stalker. (1961). The Management of Innovation. London, Tavistock. 37,
42
[2] Hesseling, P. (1966). A Strategy for Evaluation Research. Assen, van Gorcum.
[3] Orientation- disposition towards. For example Christian Bey and American sociologists
distinguished between students who had academic, intellectual, and social orientations to-
wards college. Those with a social disposition seek out the social life that the institution has
to offer. They are not particularly concerned with academic performance as those with an
academic orientation would be or the few with an intellectual orientation and concern for
knowledge as its own end (Bey, C. (1961) A social theory of higher education. In N. San-
ford (Ed.), The American College. New York, Wiley). There are clearly a number of students
in high school whose orientation is instrumental. They arrange their work so as to make
their dislike of schooling tolerable. Such attitudes derive from the meaning that life and
school have for these young adolescents. If school performance is related to work success,
and they perceive that they will be unemployed, they will undoubtedly develop an instru-
mental attitude toward schoolwork, and why shouldn't they? The problem for school is to
provide a curriculum that has meaning for them, and this may mean a radical appraisal of
what is taught but also how it is taught, with all the implications this has for small group
work and individualized instruction. It is clearly a benefit to society to ensure that every
young person is both literate and numerate. I would argue that investment in the education
of this group of people should be prioritised above all other sectors of education. 38
Clearly, the type of work we do gives rise to particular orientations. An assembly line job or
work in housekeeping in a hotel, to take but two examples, may well lead to an instrumen-
tal disposition where work is done solely for its remuneration, and some people may seek
highly paid but routine jobs for the purpose of supporting their families. They find their
satisfaction elsewhere. Others may find satisfaction in the formal and informal groups they
find at work. They obtain satisfaction from the group activities they involve themselves in,
as for example a trade union. Other, more lucky people (professionals) seek rewards from
the work they do, as for example the engineer-researcher, designer, and manufacturer.
(based on Heywood, J. (1989). Learning, Adaptability and Change. The Challenge for Edu-
cation and Industry. London, Paul Chapman/Sage.) (See also note 13).
[4] From ch. 4 of Heywood, J. (1989). Learning, Adaptability and Change. The Challenge for Ed-
ucation and Industry. London, Paul Chapman/Sage. The models of the student are adapted
from Edgar Schein's descriptions in Organizational Psychology (1965). Englewood Cliffs,
NJ, Prentice Hall. Original in D. M. McGregor (1960). The Human Side of Enterprise.
New York, McGraw Hill. 39, 40
[5] Hodgson, P. and J. Hodgson. (1992). Effective Meetings cited in Heywood, J. (2009). Man-
aging and Leading Schools as Learning Organizations. Adaptability and Change. Dublin, Na-
tional Association of Principals and Deputies/Original Writing. pp. 101–102. Trevelyan
found that the key work that engineers do is technical coordination which demands a high
level of liaison. Trevelyan, J. (2014). The Making of an Expert Engineer. London, CRC
Press/Taylor and Francis. 39
[6] Drucker, P. (1993). Managing Non-Profit making Organizations. London, Butterworth-
Heinemann. 40
[7] Drucker, P. (2008). Classic Drucker. Essential Wisdom of Peter Drucker from the pages of
the Harvard Business Review. Boston, Harvard Business School Publishing Corporation.
p. 151. 40
[8] Matthews, G. (1980). Philosophy and the Young Child. Cambridge, MA, Harvard Univer-
sity Press. 41
[9] Belbin, R. M. (1981). Management Teams. Why they Succeed or Fail, London, Heinemann.
41
[10] Youngman, M. B., Oxtoby, R., Monk, J. D., and J. Heywood (1978) Analysing Jobs. Alder-
shot, Gower Press, p. 114–115. 42
[11] Ibid., p. 115. 43
[12] Lécuyer, C. (2007). Making Silicon Valley. Innovation and the Growth of High Tech, 1930–
1970. Cambridge. MA, MIT Press. 43
“Innovator-entrepreneurs in Silicon Valley also devised new ways of relating with employ-
ees. They were under the constant threat of unionization and they needed to secure the
cooperation of a skilled workforce in order to build and control complex manufacturing
processes. As a result, Silicon Valley firms developed a corporatist approach to manage-
ment. They gave substantial autonomy to their engineering staffs and often organized en-
gineering work around teams. They sought to involve their professional employees in the
decision making process. In addition, they developed unusual financial incentives for their
work force: profit-sharing programs, stock ownership, and stock option plans.”
“Within the corporatist framework, one can distinguish three different approaches. Eitel-
McCullough, Litton Industries and microwave tube firms adopted a participatory and pa-
ternalistic management style that emphasized profit-sharing and generous employee ben-
efits. Varian Associates had a socialist streak, developing communal organizational and
ownership structure. In contrast Fairchild Semiconductor and most semiconductor firms
pioneered an entrepreneurial form of corporation organized around stock options.” [p. 299]
[13] Barnes, L. B. (1960). Organizational Systems and Engineering Groups. Harvard Graduate
School of Business Administration. 43
[14] An open system is one that is in exchange with its environment, whereas a closed system
is one that has no exchange with its environment. A closed system will eventually die,
whereas the open system maintains itself because it is able to export and import “material”
from the environment. This description of biological and thermodynamic systems may be
applied to organizations. An industrial and commercial organization is in exchange with
its market. If it does not respond to the market it will die. Within the market it also has to
choose how to compete with the same type of “good” companies over a range of products
or only in technical areas where it has product advantage. Similarly, the case for free trade
among nations is based on systems theory. 43
The way in which the senior management of an enterprise views the environment may
also condition their attitudes within the organization, a point that is well illustrated by
Barnes. He described the attitudes of management and a particular section of a particular
company in the electronics industry. This company not only had to operate in a highly
competitive way but also had to meet the goals of its parent organization. The pressures on
the general manager were for low prices and high quality with the effect that engineering
management believed that productivity was much more important than quality. This meant
that developmental work in the department investigated did not have high standing even
though it seemed that development work was required.
Barnes shows how the chief engineer and supervisor were placed in middle-man roles.
Management and business values (practical engineering and productivity) were stressed to
their colleagues, whereas to their seniors, in contrast, they emphasized the value of the
scientific approach, thereby reflecting the views of their staff. On the
one hand the supervisor “stresses scientific principles and deplores production engineering’s
‘knob twisting approach.’ On the other hand he builds up subordinate resistance by asking
them to turn out more ‘quickies’, to get out into the factory, and to be less scientifically
rigorous.” (Shades of the Challenger story some 27 years later—See Davis, M. (1998).
Thinking Like an Engineer, Oxford University Press).
In contrast, Barnes described another company in the same business that was also highly
competitive but making products in which it had a technical advantage. It is not surpris-
ing to find that in this company scientific and technical knowledge was valued, and that
the attitudes throughout the organization were different. The field engineer who was the
equivalent of the chief engineer in the other company did not present one face to his en-
gineers and another to management. There were no pressures on him for productivity and
practicality. The pressure that came through, if it can be called pressure, was manage-
ment’s encouragement of individual development. Officials at the top of the organization
put down company success to the informality that spread across the organization (shades of
Google!). So the field engineer, in responding to this dictate, arranged for his subordinates
to have high autonomy while at the same time requiring interaction between them
and himself, so that a system of mutual influence was created.
As things stood the second company was more efficient than the first by Barnes' measures.
He put this down to the organizational structure of the company. The first company's was
relatively closed, and within the section hierarchically organized with several small sub-
sections. Barnes stresses the term “relatively” and argues that the second organization was
relatively more “open.” e first discouraged individual performance while the second en-
couraged it. In the first the engineers thought they should be doing engineering whereas
the pressure was on them to worry about production. In the second there were no explicit
pressures for productivity and practicality of the knob twisting kind.
Of particular interest is the fact that the organization of the first department seemed to
highlight the different value dispositions between the individuals and the groups.
Those who were oriented toward the values of science (for example, truth and knowledge)
tended towards relatively low non-work activities, low interaction, and low mutual friend-
ships. Those who wanted to attain promotion, acceptance and prestige within the organi-
zation tended towards relatively high interaction and mutual friendship. Barnes called the
first group “the professionals” and the latter group the “organizationals.” e third group
he called the “socials” or those who wanted popularity and acceptance by the high-status
groups. They were characterized by high non-work activities, high interaction but low mu-
tual friendships (see note 3). There was little mixing between the grades.
By contrast, in the second department there was much more mixing between the grades,
and there was a higher level of participation in non-work activities. The two structures
influenced the way in which individuals in the departments behaved and worked, and they
in their turn were influenced and reinforced by the mode of work. Barnes concluded that
the more open system was more effective than the more closed system. (Adapted from
pp. 77–79 of Heywood, J. (1989). Learning, Adaptability and Change. The Challenge for
Education and Industry. London, Paul Chapman/Sage).
[15] Heywood, J. (1989). Learning, Adaptability and Change. The Challenge for Education and
Industry. London, Paul Chapman/Sage. See Chapter 4. 43
[16] Ibid. See Chapter 11. 43
[17] Greenleaf, R. K. (1973) The Servant as Leader. Peterborough, NH, The Windy Row Press.
44
[18] Ibid. Note 16. 44
[19] Macmurray, J. (1957). The Self as Agent. Faber and Faber, p. 91. 44
[20] Drucker, P. (1999). Management Challenges for the 21st Century. New York, Harper. 44
JOURNEY 6
Sharing Problems: Living in
Communities
To continue where I left off at the end of Journey 5: first, with the view put forward by
Macmurray that the "self" is a person and persons only develop as persons in relation to other
persons; and second, with the first question that Drucker suggests knowledge workers have to ask
themselves, which is "Who am I?" But my purpose in asking this question is not utilitarian but
humanitarian.
Macmurray wrote that "a personal being is at once subject and object; but he is both because
he is primarily agent. As subject he is 'I,' as object he is 'YOU,' since the 'YOU' is always 'the
other,' the unity of the personal is then to be sought in the community of the 'YOU' and 'I,'
and since persons are agent, this community is not merely a matter of fact, but also a matter of
intention" [1, p. 27].
Ponder for a moment the last phrase which is to the effect that a community is “a matter
of intention.” Communities are intended for us, we, in our turn are intended for communities.
Consider for a moment the reflective activities in which we are encouraged to engage. ey are
acts of expression but they are pointless if they are not shared with somebody else [2, p. 187]. at
is why those concerned with children and adults who are drug abusers try to create communities
that care [3], or why people who have some problem or another find it helpful to share their
experiences with others.
I have not tried to define community. Rather, I have allowed them to be what we com-
munally observe. We find that they can range from the community based on a church to the
global community that can be created by scientists. Engineering students are made aware that
engineering is a global activity. Presumably some communities are established that go beyond
networks.
Some persons are busily creating virtual communities, the pioneers of which must have
been the Ham radio enthusiasts. David P. Munns, an American historian of science and tech-
nology of the post-Second World War period, including the Cold War, has pointed out that scientists live
in intellectual communities. He refers his readers to a film called The Dish which is about the
Parkes radio-telescope in Australia. It was this telescope that “filmed” Neil Armstrong making
the first human landing on the moon. Munns writes “Radio astronomers played a small part in
that grand spectacle having built a world-wide network of radio telescopes. That such a network
existed by 1969 is a testament to the radio astronomers’ international and interdisciplinary scien-
tific community” [4, p. 172]. In this respect the October 2013 issue of Astronomy and Geophysics
(the house journal of the Royal Astronomical Society) gives a 3-page spread to the establishment
by 12 British astronomers of the UK SETI (Search for Extra-terrestrial Intelligence) network to
promote academic SETI in the UK, having as its patron the Astronomer Royal (Lord Martin
Rees). Munns argues that the social world of the radio astronomers was shaped by their intel-
lectual world. We saw in the last journey that this was in no small measure the case in the two
electronics firms that Barnes compared.
A corresponding story about engineering is to be found in Vincenti’s study of the establish-
ment of the design requirements for aircraft in the period 1918 to 1943 [5]. He wrote, "[...] the
generation of engineering knowledge is characteristically a community activity. While a number
of people play visible roles, no individual or individuals dominate our account; the protagonist
must be seen as the entire flying-quality community. This community consisted, however, of at
least four sub-communities having to do individually with design, engineering research, instru-
ment development, and test flying. ese sub-communities overlapped intimately and the gener-
ation of knowledge took place -indeed, had to take place-simultaneously and interactively in all
of them” [6]. Elsewhere, Vincenti concludes that “engineering knowledge is thus the product of
communities committed to “doing” and having a sense of collective identity fostered by complex
interaction based in part on a shared problem" [7]. The failure to share problems can create other
problems that may be of a more serious nature.
All communities acquire their own mores in order that their members can live together.
It is likely they will contain members whose experience and specialization is expressed in a way
of thinking that differs from that of other members of the group. It is important that any one individual
recognizes this to be the case and is willing to learn that lesson. That seems to be the lesson
of the IBM story that ended Journey 2. The questions that engineers have to ask are—What is
their community and what are its boundaries? Current criticisms of engineering and engineering
education suggest that its boundaries are limited and that it tends to be a closed rather than an
open system.
Michael Davis, in what must rank as one of the seminal works in the philosophy of engi-
neering, incorporated information from the documentation of the Challenger disaster because not
only did it highlight the problems of corporate decision making but “it will help us understand
what engineers do, what can go wrong ethically, and what can be done to prevent ethical wrong
doing” [8, p. 44]. It can help us begin to understand the concept of community within engineer-
ing, and the different ways of thinking that go with different jobs undertaken by different people
in a community. Exhibit 6.1 is Davis’ summary of what happened.
Davis begins his very considerable analysis by asking whether Lund should have thought like an
engineer or a manager. Unfortunately, the term manager confuses things a little because quite
clearly business considerations would have been behind the request to think like a manager. In
this respect Davis’ analysis could have been strengthened.
Exhibit 6.1: From pp. 43–44, Davis, M. (1995). Thinking Like an Engineer. Studies in the Ethics of a
Profession. New York, Oxford University Press.

"On the evening of January 27, 1986, Robert Lund, vice-president for engineering at Morton Thiokol, had a problem. The Space Center was counting down for a shuttle launch the next day. Earlier that day, Lund presided at a meeting of engineers who unanimously recommended against the launch. He concurred and informed his boss, Jerald Mason. Mason informed the Space Center; Lund expected the flight to be postponed. The Space Center had a good safety record. It had achieved it by not allowing a launch unless the technical people approved."

"Lund did not approve because the temperature at the launch site would be close to freezing at lift-off. The Space Center was worried about the ice already forming on the boosters, but Lund was worried about the O-Rings that sealed the boosters' segments. They were a good idea, permitting Thiokol to build the huge rocket in Utah and ship it in pieces to the Space Center two thousand miles away. Building at Utah was so much more efficient than building on-site that Thiokol was able to underbid competition. The shuttle contract had earned Thiokol $150 million in profits. But the O-rings were not perfect. Data from previous flights indicated that the rings tended to erode in flight, with the worst erosion occurring on the coldest temperature preceding lift-off. Experimental evidence was sketchy but ominous. Erosion seemed to increase as the rings lost resiliency and resiliency decreased with temperature. Unfortunately almost no testing had been done below 40°F. The engineer had had to extrapolate. But with the lives of seven astronauts at stake, the decision seemed clear enough, safety first."

"Well, it had seemed clear earlier that day. Now Lund was not so sure. The Space Center was "surprised" and "appalled" by the evidence on which the no-launch recommendation was based. The Space Center's senior managers wanted to launch, but they could not launch without Thiokol's approval. They urged Mason to reconsider. He re-examined the evidence and decided the rings should hold at the expected temperature. Joseph Kilminster, Thiokol's vice-president for shuttle programs, was ready to sign a launch approval, but only if Lund approved. Lund's first response was to repeat his objections. But then Mason said something that made him think again. Mason asked him to think like a manager rather than an engineer. (The exact words seem to have been, "Take off your engineering hat and put on your management hat.") Lund did so and changed his mind. On the next day the Shuttle exploded during lift-off, killing all on board. An O-Ring had failed."
“Managers are trained to handle people. Engineers are trained to handle things” [p. 44].
Davis writes, “Lund was asked to concern himself primarily with how best to handle his boss, the
Space Center, and his own engineers. He was to draw on his knowledge of engineering only as
he might draw on his knowledge of a foreign language, for example to help him understand what
his engineers were saying. He was to act as much as he would if he had never earned a degree in
engineering” [p. 44].
Apart from this being an impressive example of role conflict it illustrates in the first in-
stance two different orientations or ways of thinking created by the different perceptions that
the participants had at first of their roles. The conflict is introduced when the key person in the
“On the evening of January 27, 1986, Robert Lund, vice-president for engineering at Morton Thiokol, had a problem. The Space Center was counting down for a shuttle launch the next day. Earlier that day, Lund presided at a meeting of engineers who unanimously recommended against the launch. He concurred and informed his boss, Jerald Mason. Mason informed the Space Center; Lund expected the flight to be postponed. The Space Center had a good safety record. It had achieved it by not allowing a launch unless the technical people approved”.“Lund did not approve because the temperature at the launch site would be close to freezing at lift-off. The Space Center was worried about the ice already forming on the boosters, but Lund was worried about the O-Rings that sealed the boosters’ segments. They were a good idea, permitting Thiokol to build the huge rocket in Utah and ship it in pieces to the Space Center two thousand miles away. Building at Utah was so much more efficient than building on-site that Thiokol was able to underbid competition. The shuttle contract had earned Thiokol $150 million in profits. But the O-rings were not perfect. Data from previous flights indicated that the rings tended to erode in flight, with the worst erosion occurring on the coldest temperature preceding lift-off. Experimental evidence was sketchy but ominous. Erosion seemed to increase as the rings lost resiliency and resiliency decreased with temperature. Unfortunately almost no testing had been done below 40°F. The engineer had had to extrapolate. But with the lives of seven astronauts at stake, the decision seemed clear enough, safety first”.“Well, it had seemed clear earlier that day. Now Lund was not so sure. The Space Center was “surprised” and “appalled” by the evidence on which the no-launch recommendation was based. The Space Center’s senior managers wanted to launch, but they could not launch without Thiokol’s approval. They urged Mason to reconsider. He re-examined the evidence and decided the rings should hold at the expected temperature. Joseph Kilminster, Thiokol’s vice-president for shuttle programs, was ready to sign a launch approval, but only if Lund approved. Lund’s first response was to repeat his objections. But then Mason said something that made him think again. Mason asked him to think like a manager rather than an engineer. (The exact words seem to have been, “Take off your engineering hat and put on your management hat”) Lund did so and changed his mind. On the next day the Shuttle exploded during lift-off, killing all on board. An O-Ring had failed”.56
decision-making process is asked to change his role from a professional activity with a particular
ethic to one that had a different ethic. However, if a quite different view is taken of what manage-
ment is, that is the view I presented in the previous journey, then by definition every engineer is a
manager. In that view managers have very considerable responsibilities to themselves as well as to
others. There can, therefore, be no difference between the ethical commitment of the company's
executives and those of the professional engineers. But such principles have to be legitimised, and
a community provides such legitimisation. Its ethic, to cite Macmurray, should be that “we need
one another to be ourselves. This complete and unlimited dependence of each of us upon the oth-
ers is the central and crucial fact of personal existence [...] it is only in relation to others that we
exist as persons; we are invested with significance by others who have need of us; and borrow our
reality from those who care for us. We live and move and have our being not in ourselves but in
one another; what rights or powers of freedom we possess are ours by the grace and favor of our
fellows” [1, p. 211]. But rights and powers bring with them responsibilities that extend in every
direction within the community.
As the story is told it does not suggest an extended community that reached from the
senior executives to the engineering executive and his engineers that would invite a pause for
reflection. At the same time it shows two communities, one of which was apparently not at ease
with reflective activity. But as Davis points out disasters of this kind do not just have one cause. If
we venture beyond the two different modes of thinking the question would seem to be why was
there not a common ethic throughout the firm that would have caused everyone to reflect on the
engineers’ concern in the light of the common good. Davis concludes that one of the many lessons
that can be learnt from the Challenger disaster is that “the ethics of engineers is as important to
the success of engineering as good design or testing is" [p. 49]. The story is a tragic reminder of
the need for and value of sharing problems and the need for each person to accept responsibility
for every other person in the community in which they work. In this case, the astronauts were an
integral part of that community.
One final thought: we understand quite easily that universities and colleges should be learn-
ing communities. Whether they are or not is a different matter. It is more difficult to see that
an organization is or should be a learning community. But they should be. There is no better
illustration of this principle than the Challenger disaster. Neither is there any better example of
the need for communities to be driven by a common ethic.
NOTES
[1] Macmurray, J. (1962). Persons in Relation. London, Faber and Faber. 53, 56
[2] Macmurray, J. (1957). The Self as Agent. London, Faber and Faber. 53
[3] Hawkins, J. D., Catalano, R. F., and associates. (1992). Communities that Care. Action for
Drug Abuse Prevention. San Francisco, Jossey Bass. 53
[4] Munns, D. P. D. (2013). A Single Sky. How an International Community Forged the Science
of Radio Astronomy. Cambridge, MA, MIT Press. 54
[5] Vincenti, W. G. (1990). What Engineers Know and How they Know It. Analytical Studies
from Aeronautical History. Baltimore, The Johns Hopkins University Press. 54
[6] p. 52. Vincenti argues that the evidence that he presented supported the “community of
technological practice” described by Edward Constant (1990) in e Origins of the Turbojet
Revolution. Baltimore. 54
[7] Vincenti. p. 239. Here he is citing a William Rifkin. See p. 316. 54
[8] Davis, M. (1998). Thinking Like an Engineer. Studies in the Ethics of a Profession. New York,
Oxford University Press. Challenger was a manned artificial earth satellite that exploded
during lift-off killing all seven of its occupants. 54
JOURNEY 7
Thinking about Making a Good
Engineer Possible
“e ethics of engineers is as important to the success of engineering as good design or testing is,”
so concluded Davis [1]: but so it is in everyday life. Consider some of the things that happened
during the days that I wrote this text: Private Bradley Manning was told that he was not guilty
of helping the enemy but was guilty on other charges; the President of Ireland signed into law
controversial legislation on abortion that clarified the role of doctors in a country where abortion
except in exceptional circumstances is forbidden. In the UK, police in Manchester arrested a 21-
year-old on suspicion of the harassment on Twitter of a Ms. Criado-Perez who had campaigned
successfully to have Charles Darwin replaced on an English bank note by Jane Austen. She had
been threatened with rape among other things. This had caused the San Francisco-based director
of trust and safety at Twitter to seek a meeting with Ms. Criado-Perez to address her concerns.
Yesterday the House of Lords (the second chamber of the UK Parliament) debated a ruling of
the European Court of Human Rights that the denial of the vote to prisoners held in English
jails infringed their human rights, a ruling that the press tell us is vigorously opposed by the
British public. Today the Appeal Court in England has issued a judgment on whether or not it
was permissible for a health professional to travel with a person who wished to travel to a clinic
in Switzerland for assisted dying. In England, where the health service is free, frequent cases are
brought against it when it refuses to sanction a very expensive drug that may extend a patient’s
life for a short time.
Every day there are cases that involve moral principles and require ethical judgments because
we have to choose. For the most part we are not faced with making difficult choices. Probably, for
that reason most of us, it would seem, take very little notice of such dilemmas until we are faced
with one, and for the most part we accept the judgments of the courts.
Of course these are big decisions: yet, every day we have to make ethical decisions which
influence our behavior and that of those around us. In family life how and when to discipline a
child involves us in ethical decisions for one definition of ethics is that it “refers to the customary
way to behave in society” [2]. But the Greek word ethikos from which ethics is derived relates to
character and is used by the philosopher Aristotle in this way in The Nicomachean Ethics. Aristotle
gave us the first study in what is now called “virtue ethics.” Virtues are the qualities needed “to
lead any sort of recognisably human life. For example, she needs to be able to live and work with
others; she needs to be able to confront difficulties and threats; she needs to be able to control
her desires, and so on” [3]. A person who has these qualities is virtuous so those who exhibit the
behaviors that these qualities demonstrate are behaving ethically. e study of ethics is therefore
a very practical matter.
Most of us I would suggest, and I am one, confuse morality with ethics and vice versa.
Unlike ethics morality is of Latin origin (moralis) and is to do with whether an action is right
or wrong. We might be forgiven this confusion because encyclopedias often include the whole
history of ethics when they define or write about the problems of moral philosophy [4]. However,
it is agreed that the central problem is that of right or wrong and it is to this dimension that the
decision to fly the Challenger belonged and codes of ethics are related.
Codes of ethics are guides to virtuous practice. Thus, in engineering education much atten-
tion in ethics courses has been given to codes of conduct and whistle blowing such as that done
by Private Bradley Manning.
Recently, a former President of the National Academy of Engineering, William A. Wulf,
asked us to distinguish between Herkert's concepts of macro-ethics, which are the ethical issues
that are faced by a profession, and micro-ethics, which are the issues faced by individual practition-
ers [5]. Clearly, much attention has been paid to the latter in engineering courses but little to the
former. He contends that it is on the former that our thinking should be focused. The examples
given earlier are a mix of the two. He cites the problem of the allocation of resources in medicine,
and while he thinks that is a matter for the profession he concedes that “perhaps society guided by
the profession” [5] should be involved in the decision. But should it not be the profession guided
by society? And isn’t the problem, in the case of decisions affecting engineering that society does
not know or care about the engineering profession? e answer lies in the axiom that an education
is not truly liberal that does not involve engineering as is the present case. But I do not wish to
travel along this route at this time.
DIFFERENCES IN PHILOSOPHY
Each day on Today, the premier news radio program transmitted by the BEEB (by which the
British Broadcasting Corporation is affectionately known in Britain), there is a three minute slot
when a person, normally of a religious persuasion but by no means always Christian, offers some
reflections on something that is relevant in the news (Thought for Today). This morning the rector
of one of London's most fashionable churches, the Rev. Lucy Winkett, offered a reflection on
confidentiality that was triggered by Edward Snowden’s release of information about the interna-
tional surveillance activities of the U.S. Edward Snowden was holed up in Moscow Airport for a
month or so before being given political asylum by Russia. The Rector made the point that atti-
tudes to the release of confidential information of the kind made public by Manning and Snowden
varied. She thought that some would take an idealistic view that would hold that in a democratic
society everything should be transparent. Others, she said would take a realist position and allow
that for the state to function in an imperfect world some things would necessarily have to remain
confidential. Her problem was that bureaucracies very often reinforce their own legitimacy by
creating excessive levels of confidentiality. This creates a problem for realists. The other problem
is the power that accrues to agencies/institutions that collect information about each and every
one of us. But these are also matters for discussion at another time.
One point to be drawn from this illustration is that our opinions represent a philosophical
disposition. In her case idealism was contrasted with realism; she was trying to determine that
disposition, or rather enabling us to make such a determination. There are many theories of ethics
that try to account for the principles of our beliefs [6].
ENGINEERS AS CONSEQUENTIALISTS OR
CONTRACTUALISTS
Bowen suggests that engineers have approached ethics from a consequentialist, a con-
tractualist, or a duty-based perspective [7]. Consequentialism is a term due to the British Catholic
philosopher Elizabeth Anscombe. It is a form of utilitarianism and is on the one hand a theory of
responsibility, and on the other hand, a theory of right and wrong. From an engineering perspec-
tive it is a reminder to designers of the importance of assessing the consequences of their designs,
for they are not only responsible for the intended outcomes of a design but for its unintended
but foreseen consequences. The O-Rings on the Challenger Rocket are a case in point. Bowen,
a Fellow of the Royal Academy of Engineering, using a definition due to J. Finnis (given in the
box), suggests that it makes consequentialism attractive to engineers and notes the “similarity to
the familiar and more limited exercise of cost-benefit analysis" [8]. They might find Bentham's
hedonic calculus equally attractive [9].
...postulate some good as the human good and then seek to identify the act that will
maximize the good: that act is (by definition) the act of greatest utility and (by ethical
stipulation) the right act. [Cited by Bowen 7, p. 31: 8, p. 81: 7].
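To make the parallel with cost-benefit analysis concrete, the short Python sketch below ranks a handful of design options by expected benefit minus expected cost and treats the option with the greatest net good as the "right" act in the narrow sense just quoted. It is only an illustration: the options and figures are invented and are not taken from Bowen or Finnis.

# Illustrative only: a crude cost-benefit reading of consequentialism.
# The options and their benefit/cost figures are invented for demonstration.

options = {
    # name: (expected benefit, expected cost), in arbitrary units
    "option A": (9.0, 3.0),
    "option B": (6.0, 1.5),
    "option C": (2.0, 0.5),
}

def net_benefit(benefit_cost):
    """Net expected good of an option: benefit minus cost."""
    benefit, cost = benefit_cost
    return benefit - cost

# On this narrow reading, the "right" act is the one that maximizes the net good.
best = max(options, key=lambda name: net_benefit(options[name]))

for name, figures in options.items():
    print(f"{name}: net benefit = {net_benefit(figures):+.1f}")
print("Act of greatest utility:", best)

A fuller consequentialist appraisal would also have to price the unintended but foreseen consequences mentioned above, which is exactly where such simple tallies run out of road.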
Bowen points out that in engineering consequences are important but he asks why so much
of the world’s resources are put into weapons development and the pursuit of war and so little put
into the provision of drinking water and sanitation in the third world. His answer is to hope with
Mill that a change in resource allocation will be “by the contagion of sympathy and the influences
of education” [8, p. 33]. He is interested in what engineers can do about this state of affairs (see
Journey 8).
MACMURRAY’S THREE MODES OF MORALITY
John Macmurray distinguishes between three modes of morality, one of which he describes as
positive and the others as negative. The communal mode is based on a positive apperception. It is
heterocentric because when an agent seeks to act rightly, the agent's center of reference is always
the personal Other. "To act rightly is then to act for the sake of the Other and not oneself. The
Other in this mode, always remains fully personal; consequently its objective must be the main-
taining of positive personal relations between all agents as the bond of community” [10, p. 122].
It is characterized in the Judaeo-Christian tradition by the command to “love your neighbor as
yourself,” and given that communities are inevitably a mix of people the command to “love your
neighbor” is very challenging. Macmurray hesitates to call this Christian morality, even though
that is what it is, because he thought that what was often identified as Christian morality was
misconceived in the negative mode of contemplation, an ideal world that we can imagine. "The
real world is the spiritual world, and the real life the spiritual life.”
Both negative modes are egocentric. Our purpose here is to concentrate on the other nega-
tive mode which, to confuse matters, he calls “pragmatic.” It is the opposite of the communal for
it describes those who have as their intention that which they are determined to realize. Clearly,
in these circumstances there is a need to keep the peace whether it is between individuals or na-
tion states. The mechanism, or as Macmurray calls it "technique," for maintaining harmony in any
society is the law. "The pragmatic mode of morality will then be conceived as obedience to the
law [...]. It will be a morality of self-control, power over the self, limiting its own freedom for
the sake of the community. It will be expressed in terms of will, obligation and duty, as a set of
rules or principles, which are the same for all, and which limit for each the use of his own power
to do what he chooses” [10, p. 123]. Macmurray considers the greatest exponent of this moral
philosophy, which has its origin in stoicism, to be Immanuel Kant.
THE CONCEPT OF "DUTY" AND ENGINEERING ETHICS
Duty is a concept that is well understood by the English. It is exemplified by the Monarch Queen
Elizabeth II. She carries out her obligations as a matter of duty whether or not she likes them, and
this characteristic is to be found among English people. But there are those who act in accordance
with duty because it satisfies them so to do and Kant holds that their motivation is no different
to those who commit immoral actions. Paul Hurley illustrates this in a way that is relevant to
life today. “e person who gives to the poor because it gives him or her pleasure is motivated in
exactly the same way as is the person who spends everything on him or herself. Each is motivated
by the desire for pleasure. But just as the person who spends everything on him- or herself because
it gives him or her pleasure deserves no moral credit, so the person who spends much of his or
her money on others because it gives him or her pleasure deserves no moral credit. Each is simply
doing whatever he or she happens to be naturally inclined to do” [11, p. 305]. In Kant’s view it
is those who do the right thing because it is the right thing to do who are acting from duty and
deserve moral praise. This seems to imply that there is no satisfaction to be gained from acting
from duty. Yet there seems to be no reason why actions that stem from a respect for the moral
law should not bring pleasure. All that is required by Kant is that reason should be the cause of
our motivation although as Bowen points out Kant recognized this weakness and wrote about the
need for “a feeling of pleasure or of delight” if the obligation of duty is to lead to action (cited in
[6, p. 37]). Bowen’s criticism of Kant is that the categorical imperative does not allow a motivation
based on personal compassion [12]. As Bowen points out, the concept of duty (obligation) has
played a significant role in the development of engineering ethics.
CONTRACTUALISM AND CODES OF CONDUCT
When we make a contract we agree to abide by the conditions (rules) of the contract. Sometimes
our elected representatives make contracts on our behalf. International treaties and conventions
are examples. Sometimes they are nearer to home such as the “social-contracts” that have been
entered into by some governments in Ireland and the UK in which the principal partners are
the Unions and the Government of the Day [13]. Such agreements are very difficult to keep even
though they are supposed to be binding. An election in a democracy is the affirmation of a contract
(manifesto) with a particular political party that is binding for a limited number of years. In this
contract individuals give up part of their natural liberty in order that civil society may accrue some
advantages.
According to Rawls, who is one of the major 20th-century exponents of social contract
theory, a social contract is based on two principles. These are the principle of liberty and the
principle of difference. The former requires that we can do what we wish so long as we do not harm
others either directly or indirectly. The latter arises from the fact that people differ in many ways
among themselves, so any contract has to allow for these differences [1, pp. 138–141]. Acceptance
of these principles will inevitably lead to some inequality and Rawls would require the pattern of
distribution to favor the less well off. One of the major problems that Rawls's theory seeks to offset
is that of self-centeredness, and, as we can all testify, escaping from personal opinion is always very
difficult [14].
Codes of conduct are also contracts. Those for engineers differ in that they are voluntary but
as we have said we can feel obliged from a sense of duty to obey a code of conduct. When we join
an organization like the IEEE we agree to abide by its code of conduct (Exhibit 7.1). Engineering
educators whose courses are accredited by ABET agree to ABET’s code of conduct. In England
the awarding authority for the status of Chartered Engineer, the Engineering Council requires
each institution that it licences to develop a code of professional conduct (Exhibit 7.2) and the
Council provides guidelines for the constitution of these codes. So when a person accepts the
designation “Chartered Engineer” or becomes a member of the IEEE they enter into a contract
to abide by the relevant code of conduct.
The principle that governs contractualism is that rules of conduct derive their validity from
actual agreements between the parties concerned. Note that the IEEE code begins with the
statement—“We the members of the IEEE” [...]. Most people would assent to this when they
join a society. If they seriously transgress the code then they will be asked to leave the society.
This is not always the case, especially where religion is concerned. Many Catholics continue to practice even
when they break the rules, as for example in the case of the practice of contraception.
One of the problems with these codes is that they are voluntary. You do not have to be
a chartered engineer to obtain work as an engineer in industry. For them to be enforced there
would have to be an industry-wide agreement. Another point that has been made is that most
engineers are employees [17]. They have to do what their employers require, hence the problem
of whistle-blowing. While whistle blowing is encouraged by the authorities in the case of wrong
doing it is by no means clear that what might be called a responsible whistle blower is protected.
But what is responsible whistle blowing? The pertinence of this question is highlighted by the
activities of Private Manning and Edward Snowden.
In this respect Bowen draws our attention to the failure of international treaties and con-
ventions to have effect when they are needed [7, pp. 17–20].
Bowen, taking into account the work of John Rawls, who believed that his account of justice
was superior to that of utilitarianism, proposed that "contractualism provides more of a basis for
securing the present arrangements, even of justifying ethical mediocrity, rather than promoting
an opportunity for promoting an ethos of high ethical aspirations” [7, p. 9]. Elsewhere he writes
“[...] reading something like these guidelines is as close as many engineers come to encouragement
to make the most effective use of their skills” [7, p. 36].
Exhibit 7.1: The IEEE Code of Ethics [15].

We, the members of the IEEE, in recognition of the importance of our technologies in affecting the quality of life throughout the world, and in accepting a personal obligation to our profession, its members and the communities we serve, do hereby commit ourselves to the highest ethical and professional conduct and agree:
1. To accept responsibility in making decisions consistent with the safety, health, and welfare of the public, and to disclose promptly factors that might endanger the public or the environment;
2. To avoid real or perceived conflicts of interest whenever possible, and to disclose them to affected parties when they do exist;
3. To be honest and realistic in stating claims or estimates based on available data;
4. To reject bribery in all its forms;
5. To improve the understanding of technology; its appropriate application, and potential consequences;
6. To maintain and improve our technical competence and to undertake technological tasks for others only if qualified by training or experience, or after full disclosure of pertinent limitations;
7. To seek, accept, and offer honest criticism of technical work, to acknowledge and correct errors, and to credit properly the contributions of others;
8. To treat fairly all persons regardless of such factors as race, religion, gender, disability, age, or national origin;
9. To avoid injuring others, their property, reputation, or employment by false or malicious action;
10. To assist colleagues and co-workers in their professional development and to support them in following this code of ethics.
Exhibit 7.2: From the Engineering Council's Guidelines for Codes of Conduct [16]. Cited by
Bowen [7, pp. 35–36].

1. Prevent avoidable danger to health or safety.
2. Prevent avoidable adverse impact on the environment.
3a. Maintain their competence.
3b. Undertake only professional tasks for which they are competent.
3c. Disclose relevant limitations of competence.
4a. Accept appropriate responsibility for work carried out under their supervision.
4b. Treat all persons fairly, without bias, and with respect.
4c. Encourage others to advance their learning and competence.
5a. Avoid where possible real or perceived conflict of interest.
5b. Advise affected parties when such conflict arises.
6. Observe the proper duties of confidentiality owed to appropriate parties.
7. Reject bribery.
8. Assess relevant risks and liability, and if appropriate hold professional indemnity insurance.
9. Notify the Institution if convicted of a criminal offence or upon becoming bankrupt or disqualified as a company director.
10. Notify the Institution of any significant violation of the Institution's Code of Conduct by another member.
“We do not” writes R. W. Lovin, “begin reflecting on the moral life by opening the textbook
at page 1 and proceeding in order through the lessons. It is in the nature of ethics that we are
always already living the subject when we start to think about it” [18, p. 124]. When we start to
think about it we make the good engineer possible, but how do we escape the legalistic approach
and what should our aspirations be?
POSTSCRIPT
I was not aware of Brad J. Kallenberg's treatise on ethics, theology, and the practice of engineering
when I wrote this journey, for the very good reason that it was also published in 2013 [19]. Had
I been, it would probably have altered the structure of these journeys. That said, he does offer
a solution as to how we might escape the legalistic approach by changing our aspirations, or
rather how we think about ethical codes. The title of his chapter on codes of conduct makes one
stop and ponder: at least it did me. It reads, "Reading Professional Codes of Ethics through
Design." Reading…. We have to read and then we have to interpret. We can, argues Kallenberg,
read codes as a stipulation warrant or something else. Stipulations are clear-cut rules. "Definitions
are warrants that use descriptive moral vocabulary such as good, stipulations often use modal moral
vocabulary, words like ought and should” [19, p. 66]. Kallenberg suggests that we should read codes
of conduct heuristically. In chasing after a code of ethics for teachers of engineering, Alan Cheville
and I have shown that most codes of conduct are limited, imperfect if you prefer [20]. Kallenberg
argues that even imperfect codes may be useful. The problem is that there is a tendency to read
codes as stipulation warrants whereas they may be read as emblems, expert consensus, covenant,
conversation-starters, and prescriptions. Kallenberg argues that most codes of conduct "fall short
of their hoped for power" because they are read in the wrong way. As stipulations they are little
more than decision theory. They are "most helpful when read (1) as a kind of badge of honor (or
emblem), (2) as thumbnail sketch of expert practice, (3) as a covenant formed among friends,
(4) as a series of ice breakers that open up vast vistas of deep and significant design conversation,
and (5) as a kind of ‘athletic’ training regimen” [19, p. 97].
The "athletic" metaphor relates to his view that we should think of codes as prescriptive
rather than proscriptive. Certainly in the UK the population is prescribed regular physical exercise
in order to reduce the potential for obesity. "Prescription makes for a training regimen
that is self-transforming," and that is how we should view codes of conduct. He gives the example
of the ASME Code, which is prescriptive in outlook: "Engineers shall continue their professional
development throughout their career and shall provide opportunities for the professional development
of those engineers under their supervision" [fundamental canon #3 of the ASME code; 19,
p. 96].
All of which raises the question "What is a professional?" [21]. But that is for another day.
In the next journey Bowen's "aspirational" ethic, which might equally be called a "transforming"
ethic, is discussed.
NOTES
[1] Davis, M. (1998). Thinking like an Engineer. Studies in the Ethics of a Profession. New York,
Oxford University Press. 59, 63
[2] Vardy, P. and P. Grosch. (1999). The Puzzle of Ethics. 2nd ed. London, Harper-Collins
(Fount). p. 4. 59, 67, 68
[3] Watt, S. (1996). Introduction to: Aristotle. The Nicomachean Ethics. Ware, Herts.
Wordsworth Classics. p. xiv. 60
[4] See for example Mautner, T. (Ed.) (2005). Dictionary of Philosophy. London, Penguin.
p. 405, or Honderich, T. (Ed.) (2005). The Oxford Companion to Philosophy. Oxford Uni-
versity Press, pp. 627–630. 60
[5] Wulf, W. W. (2004). Engineering ethics and society. Technology and Society. 24, pp. 385–
390. 60
See Herkert, J. R. (2004). Microethics, Macroethics, and Professional Engineering Soci-
eties. Emerging Technologies and Ethical Issues in Engineering. Washington DC, National
Academies Press. Cited by Donna Riley (2008). Engineering and Social Justice. San Rafael,
CA, Morgan & Claypool Publishers, p. 110.
[6] For example, McInerney, R. (1990). A First Glance at St. Thomas Aquinas. Notre Dame,
IN, Notre Dame Press. 61, 62
[7] Bowen, W. R. (2009). Engineering Ethics. Outline of an Aspirational Approach. London,
Springer Verlag. 61, 64, 65
[8] Finnis, J. (1983). Fundamentals of Ethics. Oxford, Clarendon Press. 61
[9] The principle of utility is that an action is approved if that action has an overall tendency
to promote the greater amount of happiness. This leads to the idea that the amounts of
pleasure and pain can be measured according to intensity; duration; certainty; extent; re-
moteness; richness; and purity. Vardy and Grosch [2] give a lengthy example as to how it
might work. They point out that there are problems with measuring "pleasure" and deter-
mining what pleasure is. Politicians in Europe have become interested in recent attempts
to measure "happiness," a fact that indicates the utilitarian nature of present day politics.
61
[10] Macmurray, J. (1961). Persons in Relation. London, Faber and Faber. 62
[11] Hurley, P. (1993) Kant’s moral philosophy in Scott-Kakures, D. et al. (Eds.), History of
Philosophy. Harper Collins College Outline, New York, Harper Collins. See also Copleston
(p. 316) who cites Kant’s own example. “Kant makes a distinction between actions which
are in accordance with duty and acts which are done for the sake of duty. His own example
serves to make clear the nature of this distinction. Let us suppose that a tradesman is
always careful not to overcharge his customers. His behaviour is certainly in accordance
with duty; but it does not necessarily follow that he behaves in this way for the sake of
duty, that is, because it is his duty so to behave. For he may refrain from overcharging his
customers simply from motives of prudence; for example, on the ground that honesty is
the best policy. Thus the class of actions performed in accordance with duty is much wider
than the class of actions performed for the sake of duty.” Copleston, F. (1994). A History
of Philosophy. Vol. VI. New York, Image Books (Doubleday). 62
[12] Kant distinguishes between hypothetical and categorical imperatives. A hypothetical im-
perative is of the form “If I stop smoking I will not get cancer” or “If I jog daily I will
lengthen my life." They suggest we do things that are a means to some end, and are the
result of what Kant calls pure reason. They do not necessarily result in action. In contrast,
categorical imperatives result in action. They are ends in themselves and not means to other
ends. Moral duties are imperatives. They are arrived at by practical reason and undertaken
for the sake of duty and no other reason. (See 1, or alternatively 9). A code of conduct
should, therefore, be a list of categorical imperatives. Vardy and Grosch note that Kant
gives different formulations of these terms and they cite a translation by H. J. Paton [14]
that gives the three that Kant provides in his summary of The Groundwork of the Metaphysics
of Morals. These are: (1) The Formula of the Law of Nature: "Act as if the maxim of your action
was to become through your will a universal law of nature." (2) The Formula of the End
in Itself: "Act in such a way that you always treat humanity, whether in your own person,
or in the person of any other, never simply as a means, but always at the same time as an
end." (3) The Formula of the Kingdom of Ends: "So act as if you were through your maxims a
law-making member of the kingdom of ends" [slightly adapted from 1, pp. 57–60]. 63
The second principle might be interpreted as "do unto others as you would do unto yourself."
It is the principle of equity. The highest aspiration of a human being is, according to
Kant, good will.
Kant’s theory is called “deontological” because it is based on duty.
[13] Social contract theory has a long history. In the 20th century a major exponent of this the-
ory has been the American philosopher John Rawls, in A Theory of Justice (1971, revised 1999;
Boston, Belknap Press, Harvard University). Vardy and Grosch [2] give the example of a
shipwreck that leaves people on a desert island who have to learn to live together. In order
to develop a community they have to agree to certain rules—a social contract that is binding
on everyone. Rawls assumes that people in this situation are self-interested, have an equal
ability and freedom to make suggestions about the contract, are rational in their thinking,
and have access to knowledge of human nature. He makes a fifth assumption that is known
as "the veil of ignorance." That is, if they know nothing about their particular futures (role
and status) in the society they are building then Rawls believes that compassion is ensured
because it prevents self-centredness. Curran points out that the Kantian categorical im-
perative can have the same effect. (Curran, C. (1999). The Catholic Moral Tradition Today.
Washington, DC, Georgetown University Press. p. 189). 63
[14] Paton, H. J. (1948). The Categorical Imperative. A Study in Kant's Moral Philosophy. London,
Hutchinson. 63, 68
[15] IEEE Code of Ethics. August 3, 2013. http://WWW.IEEE.Org. 64
[16] Bowen cites Engineering Council UK (2007). Guidelines for Institutions Codes of Conduct.
London, Engineering Council UK. http://www.engc.org.uk/ecuk%20documents/interne%20Aug.%203rd%202013. 65
[17] One of the earliest reports on this aspect was by Moon, J. (1968). The ethical issues of chartered
mechanical engineers and their relationship to education. M. Litt. thesis, University
of Lancaster UK. See also Davis, M. (1998). Thinking like an Engineer. Studies in the Ethics
of a Profession. Oxford UP. Davis [p. 170] takes the view that “To be a ‘true professional’
is to act as the employer orders insofar as the orders are consistent with the profession’s
standards.” He argues that it is in an employer’s interests for engineers to be professional.
64
[18] Lovin, R. W. (2000). Christian Ethics. An Essential Guide. Nashville, TN, Abingdon Press.
65
[19] Kallenberg, B. J. (2013). By Design. Ethics, Theology and the Practice of Engineering. Cambridge,
James Clarke & Co. 65, 66
[20] Cheville, A. and J. Heywood (2015). Drafting a code of ethics for engineering education.
Proceedings, Frontiers in Education Conference (IEEE), October. 66
[21] Heywood, J. and A. Cheville (2015). Is engineering education a professional activity? Proceedings,
Annual Conference of the American Society for Engineering Education, Paper 12907.
66
JOURNEY 8
Aspiration in Engineering
Whenever I ride on a bus or train, or pass through an airport or shop in a mall, I can't help but
see numerous notices that tell you what not to do. As far as I can see, the only notices that tell you
what to do are those that show you where to go in the event of a fire. I find it all rather depressing
because most of the things I am told not to do I would not do anyway. To put it another way,
civilized people would not do such things. The fact that we are told that we should not do them
suggests that those who authorised the instructions do not think we are very civilized. Perhaps we
are not, and if that is the case, given what we believe to be our present sophistication compared
with previous generations, how sad. Such notices bring out the negative in me and I think codes of
conduct have the same effect. They do not inspire and, as Professor Bowen says, they do not make
us want to aspire. Fortunately, rather than being put off by the codes for engineering, Professor
Bowen set out to establish an aspirational approach to engineering ethics [1].
To understand Bowen's starting point, think back a few years to the oil spill in the Gulf of
Mexico, one of the largest man-made disasters. Tony Hayward, the Chief Executive of BP, the firm
with overall responsibility for the rig, came out pretty quickly to the Gulf to solve the problem. He
saw that as his task and he made it pretty clear that that was what he had come to do. The result was
that he gave BP and engineering a pretty bad image. Rightly or wrongly, he was perceived to have
put technique before people (public relations). Long before that I had interviewed employers to
find out what they expected of graduates from Colleges of Advanced Technology in England, and
I had come away with the strong impression that they wanted them to work at the lab bench and
not seek positions in management [2]. They wanted them because of their skill with technique. I
thought that really what they wanted were technicians. They did not mind what they were called
so long as they did the bench work. To a certain extent it seemed that those engineers were only
too willing to oblige. People who become engineers, I use the term in the broadest sense, do
so because they like playing with and designing gadgets. Professor Alan Cheville of Bucknell
University takes a different, and some would say more hopeful view. He believes that students
want to become engineers to improve the world. But do they? Last year Dr. Mina gave 30 or so
of his students a simple questionnaire about what the purposes of engineering were. The answers
given did not go one way or the other. Some clearly showed they wanted to help the human race;
others indicated a preference for playing with gadgets.
Bowen, looking at matters from the perspective of the UK curriculum, wrote that "at its best
engineering changes the world for the benefit of humanity. However, there are at present signif-
icant imbalances in the application of engineering knowledge” [1, p. 6]. By this he meant there
is a tendency in engineering education, “as presently taught and practised, to prioritize technical
ingenuity over helping people” [1, p. 6]. He gives two examples of the failure to use technology
to better the circumstances of humankind. The first is water treatment. Water shortages affect
2 billion people in 40 countries, and 25,000 people die every day from water-related hunger. His
second example is the engineering used to develop inappropriate technologies such as cluster mu-
nitions. Among the things that Princess Diana is remembered for by the British is her patronage
of organizations that tried to clear the war fields of Africa of live munitions which killed people,
especially children, long (years) after fighting had ceased. Bowen believed that “engineers need to
think creatively and become smarter in using engineering to avert war in non-military ways. An
alternative to the use of engineering in preparation for military deterrence and pre-emptive war
is to use the same basic skill resources in preparation for genuine peace” [1, p. 7].
John Forge goes so far as to argue that “engineers have a duty not to provide the means
to harm, whatever other duties they may have" [3, p. 35]. However, he does not believe that
there should be a total ban on weapons research and accepts that some could be justified. He dismisses
the idea that there can be purely defensive weapons. Given the exceptionally large number of
engineers that are employed in military research, every engineering student needs to be faced with
these issues within the curriculum they have to study. It is not the task of academics to attempt
to dictate such views but to enable students to clarify their values and opinions [4]. They have
to decide when a job creates conflict with their codes of ethics, and this may not be as easy as it
seems. Even codes of conduct can be ambiguous and that, as things stand, is a major reason why
engineering students should have to study them; but such study should take them into the realm
of social justice as Bowen’s thesis implies [5]. He would, however, argue that traditional ethical
views should be left to their own limitations, and that change will only be brought about if there
is an aspirational engineering ethic.
Bowen believes that “an aspirational ethos for engineering requires an antidote to the ten-
dency of individuals to adopt lower ethical aspirations in professional contexts than in private
contexts" [1, p. 12]. He argues that to change attitudes, individuals have to perceive life to be a
"whole" in which work ethics are not separated from a person's private ethic. He uses the term
"narrative unity" to describe this state of affairs. The ethical aspirations of our work life then
become the same as those for our personal life. I would hold that the personal should be the driver,
which is why I advocate that engineering students should have some acquaintance with the great
problems of philosophy and theology, apart from some practice in the philosophical method.
Further, if change is to be brought about it has to be recognized that the emotions are involved
because ethics is about relationships whether at a macro or micro-level.
As we have seen in an earlier journey, the Scottish philosopher John Macmurray points out
that we only develop as persons in relation to other persons [7]. We come to be who we are as
personal individuals only in personal relationships. Macmurray distinguished between positive
and negative personal relations to avoid using the word love which in his view had become dis-
torted: he variously described positive relations such as friendship, fellowship, communion, and
community. If this is so, and reflection suggests it is, then it gives a quite different understanding
of the firm as a community which is consistent with what I have argued in an earlier journey.
Macmurray argues that "the building of positive relations requires that there be essentially one
ethic for all human interaction, not one for private life (altruistic care for the Other), and a totally
contradictory one for public life (cut-throat competition). Public relationships are subject to the
same ethical imperatives as the ethics of the individual or family relationships: they must aim for
full community in freedom and equality, and be open to participation in that community by others."
This was not (nor is it) "the dominant view of human beings at work in western society during
the modern era" [8, p. 327]. Given that this is the case, the development of an aspirational ethic
depends as much on the organization (firm) as it does on the individual.
Bowen does not draw on Macmurray but on the Jewish philosopher Martin Buber, one of
Macmurray’s admirers. It is evident that their philosophies had many similarities [9]. For exam-
ple, Buber argues that reality arises between agents as they transform each other through their
dialogue. Reality for Buber is dialogical. As between people Buber distinguishes two primary re-
lationships which he labels I-ou and I-It. e former is personal, the latter impersonal. I-ou
corresponds with Macmurray’s personal relationship. It is the relationship that engenders care.
e I-It relation is the world where a person, “works, negotiates, bears influence, undertakes,
concurs, organizes, conducts business, officiates” [10, p. 39, cited by Bowen]. Bowen argues that
while engineers are familiar with this world they are not familiar with the I-ou. It is, he says,
an analysis that is lacking in most modern engineering. Both Bowen and Macmurray see the
I-It dimension as being at the root of individualism and the materializm with which it is as-
sociated. Bowen argues that “Buber’s insight has the unique advantage of encompassing both
person/person and person/natural world (environmental) relations and of recognizing the impor-
tance of technological knowledge. It also vitally balances the priority presently given to rule and
outcome approaches in engineering ethics. However, the nature of engineering requires an exten-
sion of the I-ou interactions in terms of I-You interactions based on care but lacking personal
proximity” [1, p. 11]. e point here is that individuals make demands on other individuals who
in turn make demands on them. is is another way of understanding how even mechanistic or-
ganisations inevitably have an organistic dimension to them, to use the terminology of Burns and
Stalker [11].
Bowen found that the European writer E. Levinas, whose works have been translated
by the Duquesne University Press, makes "an even stronger statement of the priority of the demands
that others make on us, which he designated by the strikingly visual notion of the face." He
describes an ethical act as "a response to the being who in a face speaks to the subject and tolerates
only a personal response" [12, p. 219; cited in [1, p. 11]]. Bowen writes, "in simple terms, and
changing the metaphor, we need to hear the voice of others saying, ‘It’s me here, please help me!’
” [1, p. 11].
The issue of personal proximity that Bowen points to is of present interest because Manning
and Snowden, who did not have personal proximity with the world community, surely acted
with a moral intention based on their previous experience and their judgment of things as they
perceived them at the time. Whether or not they were prudent is another issue. "Moral discourse
has the moral commonplaces of Natural Law at one end and singular judgements in fleeting and
unrepeatable circumstances at the other, and in between are general judgements which can guide
us for the most part, but it is up to us to judge when and how and where and how much. There
are no rules for applying rules. For that we need prudence” [13, p. 169]. Prudence is one of the
intellectual virtues listed by Aristotle in his Nicomachean Ethics [14].
It is to a present day exponent of “virtue theory” that Bowen turns for other key con-
cepts in the development of his aspirational ethics for engineering. He is the Scottish/American
philosopher Alasdair MacIntyre who came to public notice in 1981 with the publication of Af-
ter Virtue [15]. Nicolas Dent considers that because MacIntyre's approach to ethics [16], which
relates changes in moral ideas to the many influences that form the individual and the society in
which he lives (e.g., historical, cultural, religious), is so broadly based, it is accessible to
the "non-professional" reader [17]. It is evident that the thesis presented in his various works is
attractive to Bowen because MacIntyre objects to the view (philosophy) that the individual is the
sovereign chooser of the values they wish to live by. This philosophy, MacIntyre suggests, has led
to dislocations in society, a breakdown of social ties, and given rise to activities that take away the
dignity of human living. MacIntyre argues that without virtues, communities of whatever size,
collapse. In many respects this view is similar to that of David Selbourne, a political philoso-
pher who proposed the principle of duty as a guiding principle of the civic order. He wrote, “that
the language of civic morality should seem to have become the language of a lost age is both
the consequence and further cause of disaggregation and of the extensive dissolution of citizen
feeling” [18].
MacIntyre argues that when we participate in the small communities that give life to the
area in which we live, such as a school's parents' association, or a handicapped group, or within
societies like the local golf and tennis clubs, we have the opportunity to develop and sustain the
virtues. According to MacIntyre the learning and sustenance of the virtues occurs because we live
our lives through a narrative structure. We practice the virtues with the support of institutions. The
virtues in their turn support the institutions. “A rugby (football) club is sustained by players who
see the worth of the game, and the worth of the game is both ensured and enhanced by traditions
established by the club” to cite Vardy and Grosch [19, p. 105]. is example is of some interest
because in the British Isles disputes over sporting activities, that is, those that are considered
to be sportsmanlike as opposed to unsportsmanlike, often revolve around what commentators
and the public consider to be virtuous. Vardy and Grosch quote MacIntyre thus, "The virtues
therefore are to be understood as those dispositions which will not only sustain practices and
enable us to achieve the goods internal to practices, but also sustain us in the relevant kind of
quest for the good, by enabling us to overcome the harms, dangers, temptations and distractions
we encounter” [15, p. 219].
Bowen believes that MacIntyre’s theory of virtues can provide a starting point for a recog-
nisable description of engineering that is acceptable to professionals. He suggests a number of
virtues that are appropriate to engineering and draws attention to the principles of—accuracy
and rigor; honesty and integrity; respect for life, law, and the public good; responsible leader-
ship, listening and informing, which are in the Royal Academy of Engineering’s statement of
ethics [20]. He goes on to argue that MacIntyre's terminology also provides a very appropriate
description of the outcomes of engineering activity. "These are internal goods, including standards
of technical excellence and the satisfaction arising from personal accomplishment, and external
goods, including engineered artefacts and wealth." The description of engineered artefacts as goods
allows such physical technological accomplishments to be distinguished from the end or goal of
engineering activity, which may be described as the promotion of human flourishing through con-
tribution to material wellbeing. “Such an analysis leads to an important conclusion that the present
imbalanced prioritization in engineering of technical ingenuity over helping people may be considered
as arising from mistaking the external goods of the practice for the real end of the practice” [1, p. 12].
I want to suggest that the public make the same mistake and see technology as a means of solving
problems without considering the broader ends. For that reason both the public (politicians) and
employers tend to view engineers as a “commodity,” to quote Alan Cheville.
The practical outcomes of Bowen's aspirational ethic would be:
"The personal ethical responsibility of every engineer. All engineers need to take a more active
role in considering the ethical implications of their work. Our aspiration should be summarized:
“Here I am, how can I help you?” [1, p. 13].
“In education [...] give greater emphasis to the goal of benefits in terms of the quality of life”
and to the need to take “personal responsibility for professional activities” [1, p. 13].
“Industry and work practices [...] promote “the widespread adoption of aspirational codes of
conduct” and provide “career development plans that bring employees into closer proximity with
end-users at least for part of their working life" [1, p. 13].
“Engineering Institutions [...] progressively incorporate degrees of compassion and generos-
ity” into their codes of practice and support “the development of an international engineering
initiative to promote aspirational practice” [1, p. 13].
These have been ordered differently from the order given by Bowen. Everything follows from
the first principle. Second, if these goals are to be achieved there have to be radical changes in at-
titudes within education and industry and links have to be made between them that show clearly
the importance of aspiration. I have omitted his outcome for positioning engineering in the public
and intellectual mainstreams because I regard that as a sine qua non. Similarly, he sees an aspira-
tional role for engineering in international initiatives such as the United Nations Declaration and
Programme of Action on a Culture of Peace. In this he is not alone among engineering professors as
Aarne Vesilind of Bucknell University and his collaborators have shown [4]. It is likely that he will
have his critics for he has not mentioned social justice. Yet thinking in this area, as illustrated by
the work of Donna Riley, not only supports and overlaps with his thinking but is complementary
to it as well [21].
POSTSCRIPT
While these journeys were being given we asked Donna Riley to write about how her work on
social justice related to philosophy for a series of handbooks that were being prepared by the Tech-
nological Literacy Division of the American Society for Engineering Education. The conclusion
of her paper [22] provides a fitting and challenging end to this journey.
“By and large, one could say that engineering has reflected the values of mainstream society,
of neoliberalism, of military and corporate interests. This is due in part to, and continually justified
by engineers’ commitment to considering themselves as value-neutral or objective. But because
there is no such thing as value neutrality, engineering has reflected some unjust biases embedded
in our social structure to the point where they have become so mainstream as to be rendered
invisible. This default set of values has been inculcated in engineering through the engineering
education process. In all these areas of historical and traditional injustice, voices are emerging,
asking, for whom and by whom is engineering done? How is engineering done, and who wins
and who loses from engineering activity? These are fundamental questions that need to be asked
of programs of engineering and technological literacy, and we have an opportunity to transform
the profession for the better.”
NOTES
[1] Bowen, W. R. (2009). Engineering Ethics. Outline of an Aspirational Approach. London,
Springer-Verlag. 71, 72, 73, 75
[2] Heywood, J. (1969). An Evaluation of Certain Post-War Developments in Higher Tech-
nological Education. Thesis. University of Lancaster, UK. (two volumes) 71
[3] Forge, J. (2005). The morality of weapons research. In P. Aarne Vesilind (Ed.), Peace
Engineering. When Personal Values and Engineering Careers Converge. Woodsville, NH,
Lakeshore Press. 72
[4] Vesilind, P. A. (2005). University life and Peace Engineering. In P. Aarne Vesilind (Ed.),
Peace Engineering. When Personal Values and Engineering Careers Converge. Woodsville,
NH, Lakeshore Press. 72, 75
Describes how clear thinking is the defence against indoctrination in the following [p. 139].
“Consider a session I recently taught in the professional ethics course. In this course we try
hard to take issues apart and to discover what values drive decisions and how a difference in
values can lead to significant disagreements. The best examples come from unpredictable
sources. For example, we recently received a campus wide notice to come to a rally in
support of our soldiers who had been fighting overseas, and some of the students wanted to
talk about it. We decided that when one goes to such a rally, perhaps carrying an American
flag, one is sending signals. What exactly is being supported?"
"The students decided that there are three different recipients of such a show of support:
1. America as a nation, as an idea. 2. The soldiers who are placed in harm's way, and 3. The
political leaders who place the soldiers in harm's way."
“None of us had any problems with supporting the first two, but we could not figure out
how to show support for the first two without also unwittingly showing support for the
last one. I did not tell them to avoid the rally, and I would never offer my own reason for
not going, but the discussion allowed them to think through the problem.”
I used to share this view but now I will give my views if asked, and will argue the point if
invited.
[5] For example, Professor Emeritus Aarne Vesilind writes, “Engineering, as a profession,
states its purpose and objectives in a Code of Ethics, and at least in the United States,
the code of ethics of almost every engineering discipline begins with the statement”: 72
The engineer in his professional practice shall hold paramount the health, safety, and welfare of
the public.
"The two key words are "shall" and "paramount." There is no equivocating about this as the
primary commitment of engineering, and the vast majority of engineers agree with this
statement and practice their profession accordingly."
"There is, however, a problem with this statement when it comes to engineers working for
the military establishment and it centers on the word “public.” What exactly is the “public?”
Suppose an engineer works for a company that designs and produces landmines. Is “public”
the people who pay his or her salary? Has the “public” decided through a democratic process
that the manufacture of landmines is necessary? Or is the “public” of record those people
who will eventually have to walk over the ground in which these landmines have been
planted and be killed and maimed by the explosions?" [6, pp. 9–10].
[6] Vesilind, P. A. (2005). The evolution of Peace Engineering. In P. Aarne Vesilind (Ed.),
Peace Engineering. When Personal Values and Engineering Careers Converge. Woodsville,
NH, Lakeshore Press. 77
[7] Macmurray, J. (1961). Persons in Relation. London, Faber and Faber. 72
[8] Costello, J. E. (2002). John Macmurray: A Biography. Edinburgh, Floris Books. [p. 321] I
have given this quotation because Costello includes a very good summary of the Gifford
lectures that produced Macmurray’s two great books—pages 324 to 330. 73, 78
[9] Both thought that the conceptualization of the form of the personal was the philosophical
project given to the twentieth century. "Martin Buber considered himself to be the poet
of this project, and saw Macmurray as its metaphysician, and told him so” [8, p. 15 and p.
322]. 73
[10] Buber, M. (2004). I and Thou. London, Continuum. (Originally published in German in
1923. Buber lived between 1878 and 1965). 73
[11] See Journey 5. Burns, T. and G. Stalker. (1961). The Management of Innovation. London,
Tavistock. 73
[12] Levinas, E. (1969). Totality and Infinity. Pittsburgh, Duquesne University Press. Franciscan
monks of my acquaintance ask their congregations to see the face of Christ in others—
particularly in those who suffer. 73
[13] On the meaning of prudence see McInerney, R. (1990). A First Glance at St. Thomas
Aquinas. Notre Dame, IN, University of Notre Dame Press, p. 169. See also Gilson, E.
(1956). The Christian Philosophy of Thomas Aquinas, Notre Dame, IN, University of Notre
Dame Press, p. 287 ff. 74
Prudence or practical wisdom (phronesis): the virtue that helps us balance our interests
with those of others. Vardy and Grosch argue that without this virtue the other intellectual
virtues revert to being skills [19, p. 28]. For a detailed discussion of prudence in relation
to engineering design see Kallenberg, B. J. (2013). By Design: Ethics, Theology and the Practice
of Engineering, Cambridge, UK, James Clarke. "Practical wisdom is the art of doing
practical reasoning well" [p. 249].
[14] Aristotle. The Nicomachean Ethics. Introduction by S. Watt. Ware, UK, Wordsworth Classics
of World Literature, 1996. Aristotle, 384–322 BC. His moral theory is based on the
virtues and is sometimes called virtue theory. Aquinas (1225–1274) developed virtue theory
within the context of natural law and belief in a personal God (see 13, Gilson, p. 259 ff.),
also Vardy, P. and P. Grosch. (1999). The Puzzle of Ethics. London, Fount/Harper Collins,
which has separate chapters on Aristotle, Aquinas, and virtue theory and which in particular
summarises the views of two British philosophers, Elizabeth Anscombe and Philippa Foot.
74
Aristotle distinguishes between moral and intellectual virtues. The moral virtues are
courage, temperance, liberality, magnificence, magnanimity, proper ambition, patience,
truthfulness, wittiness, friendliness, modesty, and righteous indignation. The intellectual
virtues are art or technical skill (techne), scientific knowledge (episteme), prudence or practical
wisdom (phronesis) as distinct from wisdom, and intelligence or intuition (nous).
[15] MacIntyre, A. (1981). After Virtue. A Study in Moral Theory. Revised in 1984. London,
Duckworth. 74
[16] MacIntyre, A. (1966). A Short History of Ethics. London, Macmillan. 74
[17] Dent, N. J. H. (2005). MacIntyre, Alasdair, in Honderich, T. (Ed.), The Oxford Companion
to Philosophy. Oxford University Press, p. 549. 74
[18] Selbourne, D. (1994). The Principle of Duty. An Essay on the Foundations of Civic Order.
London, Sinclair-Stevenson, p. 4. 74
[19] Vardy, P. and P. Grosch. (1999). The Puzzle of Ethics. London, Fount/Harper Collins. 74,
78
[20] Royal Academy of Engineering (2007). Statement of Ethical Principles. London, Royal
Academy of Engineering. 75
[21] Riley, D. (2008). Engineering and Social Justice. San Rafael, CA, Morgan & Claypool Pub-
lishers. 76
[22] Riley, D. (2014). Social Justice framings for conversations on engineering and philosophy,
in J. Heywood and A. Cheville (Eds.), Philosophical Perspectives on Engineering and
Technological Literacy. Washington, DC, Technological Literacy Division of the American
Society for Engineering Education. 76
JOURNEY 9
Preparing for the Future:
Individuals and Organizations
Many of us hoped that the financial crisis of 2007–2008 would bring some changes to the capitalist
system, which had proved, like most other systems, that there was a point at which it stopped
functioning. We hoped that politicians and those responsible for financial systems would stop
chasing money for the sake of money and use it to invest wisely in the future. One hoped there
would be an end to short-termism and that good CEOs would be encouraged to stay with and
grow their organizations. We hoped that the gap between rich and poor would narrow. We hoped
that the minimum wage would become a fair wage. Five years down the line, not much seems to
have happened. Few have been held responsible for their irresponsibility and the gap between
rich and poor seems to have widened, and economists talk to themselves.
At the same time during those five years there has been massive social change brought about
primarily by technology. The good and the bad consequences of social networks have been exposed
but there has been relatively little response to the structural change that is evidently underway.
An article in The Times (of London) newspaper with the sub-title "with their jobs vanishing and
incomes squeezed, the middle classes may never see things getting better" drew no comment in
the letter columns of that newspaper. Commenting on the rich-poor gap, the author Jenni Russell
had much the same thing to say as the distinguished American economist and politician Robert
Reich. She wrote that “the root cause of the crash, as the IMF (International Monetary Fund)
conceded two years ago, was that across the developed world the rich were taking too great a share
of the world’s growth leaving workers to maintain their lifestyles by taking on mass unmanageable
debt. No economy can run like that for long” [1, p. 17]. No one will lend anyone anything in the
British Isles and small businesses in particular are being starved of cash. Worse, the labor model
that said new jobs will be created with each new technological innovation appears to no longer
hold.
The failure of workforce models to take into account the effects of technological change on
jobs has been highlighted by Erik Brynjolfsson and Andrew McAfee of MIT. The sub-title of
their book reads "How the Digital Revolution is Accelerating Innovation, Driving Productivity,
and Irreversibly Transforming Employment and the Economy” [2]. Jobs are being destroyed and
new skills are required in the workforce. It is striking that their recommendations for providing
these new skills focus on the school curriculum, not the university. Like Jim Clifton, whose report
will be discussed later, they see a pressing need for entrepreneurs who can successfully place some
innovations in the market.
But what is the relevance of all this to engineering? Engineers, many of whom belong to the
middle class, are making life difficult for that same middle class, for the technologies they develop
not only put other people out of work but themselves too. In 2011 I suggested there was sufficient
evidence available to support the view that there was not a shortage of highly qualified manpower
and that a con trick had been worked on politicians [3]. Since then there has been a flow of papers
in support of the view that there was a shortage of students for STEM courses [4, 5]. But this
view has been seriously challenged in the U.S. where three academics have pointed out the irony
of employing large numbers of guest workers in IT industries while at the same time U.S. colleges
graduate 200,000 more scientists and engineers than find employment in these fields [8]. They
point out that the IT firms, by employing guest workers to fill two-thirds of the positions available,
have been able to keep wages at the same level for the last ten years. Is this, they ask, a question
of market failure?
One telling statistic came from the US Bureau of Labor Statistics, which recorded that, for the
decade ending 2010, techno-scientific employment in Silicon Valley fell by 19% and that
average wages fell by 14% [9]. But the most worrying information related to data that suggested
there is unemployment among middle-aged and older engineers [10]. Other commentators suggested
that the immediate (today's) demand is for technicians. Salzman, Lowell, and Kuehn
pointed out that the guest workers who enter IT are being employed in what they call "commodity-like"
production jobs such as programmers and systems analysts. An unusually long article in the
September 2013 issue of IEEE Spectrum reviewed the STEM debate and came down gently on
the side of those who think that there is not a STEM shortage [10]. Charette cites Teitelbaum
who said that “the problem with proclaiming a STEM shortage when one does not exist is that
such claims can actually create a shortage down the road. When previous STEM cycles hit their
“bust” phase, up-and-coming students took note and steered clear of those fields as happened in
computer science after the dot-com bubble burst in 2001" [p. 52]. The problem that the engi-
neering profession has is the possibility that the products of universities will come to be treated
as commodities for those jobs where the only interest is in engineering techniques.
There was an all-too-brief conversation in ASEE Prism where it was reported that middle-aged
and older engineers were being asked to take a cut in salary or, in the software industry, be
replaced by cheaper graduate-entry personnel in the belief that they had more up-to-date skills [11].
Professor Plotkin in a comment questioned whether or not anyone would want to work in an
industry that treats its workers in the ways described by Wadhwa. Nevertheless, it seemed there
was a serious unemployment problem among middle-aged and older engineers in some sectors
of the U.S. Wadhwa's response was to cite the metaphor of a roller coaster and suggest that the
universities need to prepare students for that ride so that when the need arises they are able and
interested to change jobs. It does seem as though the futurists will at last be proved correct. The
majority of the working population will be faced on at least one occasion, and possibly more, in
their working lives with having to find a job outside the perceived range of their abilities. Hence,
the need to take the concept of life-long learning more seriously and to design courses of contin-
uing professional development that support engineers on that roller-coaster. Such programs are
likely to be as much about personal development as they are about specific topics in engineering.
One thing that is certain from the available data is that changing technology will change
society. It is equally clear that the 2008 crash has not created an international debate about the
future of society, yet the future of the international finance system is intimately linked to that
future because of its links with technology. The other thing that is certain is that there will be a
jobs war. This is Jim Clifton's view, and, being CEO of Gallup, he has a great deal of worldwide
data [12]. Although he is concerned with the problem as it affects America, the general principles
apply to any country. He writes that [a country] "goes broke when its GDP falls and jobs can't be
found. A country goes broke one company at a time and then one citizen at a time. It grinds
down. And it’s happening now because the U.S. is going broke. All this is happening because
jobs and GDP live together, and are the cause and effect of one another. They are the chicken
and egg. So without significant sudden GDP growth, America will not experience significant job
growth. America will not experience meaningful job growth” [12, p. 18–19]. He thought that
cities should encourage entrepreneurs to bring innovation to the market, a view, as we have seen,
that is held by Brynjolfsson and McAfee. That was written in 2011. Among Clifton's points that
resonated on this side of the Atlantic, certainly in Britain and Ireland, is the fact that while in
2011 Gallup put American unemployment at 10%, it placed underemployment at around 20%.
Against this, Gallup found that across the world what people wanted was a "good job." Hence
the title of Clifton's book, The Coming Jobs War.
"The biggest problem facing the world is an inadequate supply of good jobs. [...] The great
global dream is now focused on having a good job [...] Job creation is the new currency of all
world leaders. The most important social value in the world is my job" [12, p. 186]. The trouble is
that no one seems to know how to create them.
A problem for engineering education is that too little is known about what engineers ac-
tually do and who they are in the small and medium-sized firms where most jobs are created [12,
p. 29:13]. Many of them do not pursue a career in engineering. What do they do and what is the
value of engineering education to them? These are pressing issues.
Irrespective of Clifton's solutions, it is important to note that he is a firm supporter of free
enterprise and capitalism. That is my view too, but I think that in its extremes capitalism can
function against the "common good." I also believe that the utilitarian model on which modern
capitalism is based has not in the long run been a servant of the "common good." Money chasing
money so that bankers make more money does not contribute to wealth creation, and therefore not
to the "common good." Some would consider this to be amoral. Some might consider that short-term
investment, where investors want quick returns, comes within this category since it causes
CEOs to change the organization so that it is easily saleable. The stories of many mergers cast
doubt on the view that they always create stronger organizations. It is difficult to believe that
CEOs who come to an organization with a view to remaining for two or three years can create
an effective organization.
The argument to be offered here is that all organizations, irrespective of whether they are
part of the free enterprise system or the State system, are obliged to serve the common good by
the production of useful goods and services. By common good is meant "The sum total of social
conditions which allow people, either as groups or individuals, to reach their fulfilment more fully
and more easily" [14]. Apart from the fact that the state should intend to achieve the common
good, it follows from this principle, which is welded into the Judaeo-Christian tradition [15],
that all organizations have a social function by creating opportunities for meeting, cooperating,
and the enhancement of the abilities of the people involved. It is argued here that businesses have
neglected their social function, particularly in regard to education, training, and career development,
and that in the future this will have to be reversed. Another way of looking at the problem is to
follow Loughlin Hickey's advice and conduct a public (open) examination of the relationship
between purpose, performance, and profits [16]. This view has implications for the structure of
organizations as will be evident in the discussion that follows.
If it is assumed that, as technology changes, many persons will have to change their jobs
at regular and relatively short intervals, then individuals may be called on to change jobs
as many as six times in a 40–50-year work-span. The U.S. differs from Europe in that there are
no limits placed upon the age at which a person can retire. Until recently, retirement was
fixed at 65 in the UK and the state pension has been paid from that age. In the future the age
will be set at 67. It is likely that, as the average life span increases, that age will be further
increased. Very many people make provision for private pensions, more often than not through
their employer. During the first decade of this century most final salary schemes have come to an
end (except in the public service) because pension funds in the private sector have had large black
holes. Notwithstanding the problems of the sector it is the pension funds that have by far the
largest money resource available for investment. The pension funds have been criticized for not
taking a proactive role in the management of companies, particularly in respect of salaries paid
to Boards of Directors. Not only is there a problem of getting funds to be proactive but there is
a question of how to find members who can exercise some influence over the fund even though
they have no votes in the fund. Similar problems exist in Ireland.
Apart from family-owned businesses and the occasional employee-owned business, the majority
of companies are limited-liability (incorporated) companies. In law they are owned by their
shareholders. The public have the picture that shareholders are pretty ineffective at holding their CEOs and
directors to account. John Macmurray’s philosophy would produce a different picture of the com-
pany because it asks the question “What constitutes the company?” His answer, I suggest, would
be the relationships within its boundaries. Those boundaries extend from the shareholders to
the consumers via the workers. Everyone has an investment in the organization. The shareholders
depend on there being a market (consumers) and workers who can create and produce for
that market. Without workers and consumers, shareholding is pointless; perhaps we should say
valueless. Within such an organization there are proximate and close relationships, some more
interdependent than others, but all interdependent in determining the success of the organization.
Seen as a community, they have a social function within society. The point to be made here is that
all persons in the community are equally important. Legally that is not the case for a firm and it
is for this reason that in the UK some interest has focused on the John Lewis Partnership. It is
the third largest private company in the UK, and comprises a number of department stores like
Macy’s, and a supermarket chain called Waitrose. It is what is sometimes called a “mutual’ because
it is owned and managed by its employees. e annual profits are divided between the employees.
Last year the turnover was £9.4 billion and each employee was rewarded with the equivalent of
two additional months' salary.
It would be too much to suppose that there would be trends to create organizations of this
kind, but there is no reason not to develop this idea of a firm, or for that matter any organization,
structured on the basis of a community of persons in relation, where all take responsibility for the
development of the organization. Clearly, some structures are more favorable to the development
of a community. But the success of a community depends more on the attitudes and beliefs of the
members than it does on structure. Within a community, work has to be regarded as a good for
everyone; it is not for the personal interests of any particular individual. It does demand moral
behavior, and we should judge its actions not so much as right or wrong but as good or bad
in so far as they serve the agreed goals of the community. Since, as Macmurray
argues, the moral rightness of an action has its ground in the relation of persons, each individual in
a community contributes to the personal development of the Other. The principle of community
requires that those responsible for the community (management) have to ensure that each person
in the community is able to achieve the limits of their own excellence.
“Minimally, the principle of community rules out selfish or exploitative goals, even when
they require a high level of individual excellence. A person does not live a good life by developing
skills of manipulation and persuasion that allow one to prosper at the expense of others, and a
community does not encourage excellence by arranging matters so that some people develop their
skills as a result of keeping others in subordinate positions with limited possibilities. Put more
positively, the principle of community suggests that we should choose those goals that enrich the
lives of other people and enable them to live good lives of their own. This requires thoughtful
attention to the needs of others, but it also requires a careful assessment of our own needs so
that we develop the skills and capacities that will contribute most fully to the good of others.
People who have the stamina, coordination, and intellect to be surgeons, or the patience and
communicative skills to be teachers have possibilities to help many other people live good lives,
but they will also have to seek support from many other people in order to achieve excellence in
those endeavours” [17, p. 31].
A community will be made up of diverse capabilities and personalities, some of which will
be difficult to manage, but the advantages of diversity are considerable. If you find the concept of
community difficult, be reminded that there is such a thing as the engineering community. One
reason we do not discuss the engineering community is, as the Chinese social scientist Lee Bocong
reminds us, because most attention has been given to the study of the scientific community [18].
It is a community that creates goods for the material world and, unlike science, which has as its
goal the understanding of nature, engineering's goal should be the common good.
But communities are not closed systems. Because technological developments create in-
stability in the labor force, organizations have an obligation to society to ensure that persons no
longer suitable for the tasks required in that organization are prepared to undertake tasks in an-
other organization. These tasks may be at a distance from the tasks they have been doing (that is,
the abilities they have been using). They may be in a different part of the labour arena [19]. This
is how an organization partly meets the objectives of the “common good.” Organizations cannot
escape responsibility for professional development for the future. It is not a question of handing
this responsibility to the state but of sharing it with the education system. To a large extent this is
what happens in the German dual system of education and training. Industry takes joint responsibility
with the education system for training. Britain, despite legislation in 1964, has never been
able to obtain such commitment. Germany also runs a tripartite system of school education, with
the gymnasium as the place for academic study [20]. But unlike Britain of the 1970s, there seems to
be no concern about status in the demand for the different types of school. The Germans also
give status to company workers through representation on the management boards responsible
for the direction the firm takes. Given that Germany has probably been the most continuously
successful of the industrialized nations since the end of the second-world war there is much to be
said for taking notice of its approaches to industrial development.
The changes that are taking place in society as a result of the impact of technology will undoubtedly have a profound effect on the system of higher education. Clearly, it has to be a "real" preparation for life. It will have to be a base for continuing professional development (CPD), or "permanent education," as lifelong learning was once known. But what should those who are responsible for engineering be considering in the immediate term? It is clear that engineers require a much wider range of understanding than that provided by the application of science. In the U.S. a review of what passes for liberal education would seem to be required. Let me make one submission.
All education systems are bound by severe structural constraints. One of these is the way the timetable is constructed to enable students to meet specified credits. Combined with the timetable, the credit system allows for little innovation. It would, for example, be extremely difficult to introduce short intensive courses. The other is the way subjects have become so large that teaching is done in subject enclaves. I won't go so far as to call them ghettoes, but you get the gist. Engineering students need to experience what it is to be in a true community. They need to be able to share their learning and learn with a diverse community. In this respect it is worth looking at the studies done by Alexander Astin [21]. Now 20 or more years old, but in my submission still relevant even though they were undertaken with liberal-arts students, they lead to the view that the best social support that students could receive is a collegiate climate. This recommendation was made in respect of both student well-being and learning. How, within all the constraints educational institutions have to face, can a collegiate climate be introduced and extended to the firm so as to enable permanent learning? I will examine this question and the problem of change in the final journeys.
POSTSCRIPT
Recent research reinforces the views put forward in this chapter in regard to the workforce and its structure. First, Teitelbaum (2014) has now clarified his thinking in a substantial treatise [22]. He summarizes his findings as follows.
• "First that the alarms about widespread shortages or shortfalls in the number of US scientists and engineers are quite inconsistent with the available evidence.
• Second that the similar claims of the past were politically successful but resulted in a series of booms and busts that did harm to US science and engineering and made careers in these fields increasingly unattractive; and
• Third that the clear signs of malaise in the US science and engineering workforce are structural in origin and cannot be cured by simply providing additional funding. To the contrary, recent efforts of this kind have proved to be destabilizing, and advocates should be careful what they wish for" [22, p. 3].
It seems fairly clear that worldwide changing technology, in particular AI, will have an impact on the structure of the workforce. Globalization may kill jobs. As this journey suggests, the models that are currently in use are open to challenge, and it is not surprising that there should be a number of different views about the future. The optimistic view, that the current model is correct and that changes in technology will continue to bring about other avenues of employment, is the position of Brynjolfsson and McAfee. A middle view is put forward by two English scholars, father and son, Richard and Daniel Susskind, who argue that "capable machines will transform the work of professionals giving new ways of sharing practical expertise in society," the effect of which will in the short run be some unemployment, but in the long run new jobs will emerge [23]. The Susskinds' study is of professional work. With many illustrations they argue that "the traditional professions will be dismantled, leaving most (but not all) professionals to be replaced by less expert people and high performing systems." Technicians rather than technologists. They expect new roles to arise, "but we are unsure how long they will last, because these too, in due course, may be taken on by machines" [23, p. 303]. They say that they are not determinist, because how technology is used is very much in the hands of the professions. "We can shape our own future; more than this, we believe that we ought to, from a moral point of view" [23, p. 304]. Given that engineers play a major role in developing these technologies, they should also be presenting to the public their views on how they would shape the future. They cannot do this without a personal philosophy.
The pessimistic view, put forward by myself, is that there will be a permanent and increasing loss of jobs.
Whichever view is taken, it has implications for the structure and content of higher education. It is also clear that, whatever position is taken, a basic element of the curriculum throughout schooling and higher education has to be technological literacy, and that has to extend beyond the art and science of making things to IT/AI, and more expressly to the moral dimensions of technology, one of which is who will own and control practical expertise [23, pp. 304–307].
NOTES
[1] Russell, J. (2013). Recovery will only widen the rich-poor divide. The Times, August 2. 81
Reich, R. B. (2015). Saving Capitalism: For the Many not the Few. New York, Alfred A. Knopf. "For three decades after World War II the average hourly compensation of American workers rose in lockstep with productivity gains […] beginning in the late 1970s the virtuous circle came to a halt. While productivity gains continued much as before and the economy continued to grow, wages began to flatten. Starting in the early 1980s, the median household's income stopped growing altogether, when adjusted for inflation." The standard explanation is that American workers priced themselves out of the market. "If they want jobs, they have to settle for lower wages and less security. If they want better jobs, they need better skills. So hath the market decreed." A familiar story not only in the U.S. but in the UK as well, but Reich, while agreeing that this explanation is relevant, argues that it is far from the whole story. He argues that the "underlying problem, then, is not that average working Americans are worth "less" in the market than they have been, or that they are living beyond their means. The problem is that they have steadily lost the bargaining power needed to receive as large a portion of the economy's gains as they commanded in the first three decades after World War II, and their means have not kept up with what the economy could otherwise provide them." To attribute this to impersonal workings of the "free market" is to ignore how the market has been reorganized since the 1980s and by whom […] "it is to overlook the marked decline of countervailing power in our political economic system." (Extracts from chapter 13.)
[2] Brynjolfsson, E. and A. McAfee (2011). Race Against the Machine. Lexington, MA, Digital Frontier Press. 81
[3] Heywood, J. (2011). The Response of Higher and Technological Education to Changing Patterns of Employment. Proceedings Annual Conference of the American Society for Engineering Education. 82
"To a large extent policy has been governed by the regularly reported predictions that there is and will be a shortage of engineers and scientists, and that the pool of students available to pursue these occupations is too small and declining in quality. In both the UK and the
U.S. this perception is taken to be correct and it is held that this will be detrimental to future economic prospects. Much attention has been paid to remedying this shortage, particularly by focusing on the supply side of the equation. Michael S. Teitelbaum, a Program Director at the Alfred P. Sloan Foundation, said at a conference on the U.S. Scientific and Technical Workforce: "the supposed causes are weaknesses in elementary, secondary, or higher education, inadequate financing of the fields, declining interests in science and engineering among American students, or some combination of these. Thus it is said that the United States must import students, scientists, and engineers from abroad to fill universities and work in the private sector, though even this talent pool may dry up eventually as more foreign nationals find attractive opportunities elsewhere" [4](a). But Teitelbaum went on to argue that such data as were available were weak and often misinterpreted [4](b). There was no evidence for a shortage of qualified personnel, and in a submission to a sub-committee of the House of Representatives he said that, "despite lawmakers being told by corporate lobbyists that R & D is being globalized in part due to shortages of scientists in the US no one who has studied the matter with an open mind has been able to find any objective data of such general shortages." He concluded with the controversial view that "Federal policy encourages an overproduction of science professionals" [5]. It has created its own system of vested interests. If the continuing attention to the shortage of students for STEM education is anything to go by, this system is alive and well [6]. Of course it may not be true of other countries [7].
[4] (a) Teitelbaum, M. S. (2003). Do we need more scientists? The Public Interest, No. 153. Washington, DC, National Affairs Inc. He presented his paper at a 2007 conference on The U.S. Scientific and Technical Workforce: Improving Data for Decision Making, organized by Rand Science and Technology. The proceedings were edited by Kelly, T. K., Butz, W. P., Carroll, S., Adamson, N. M., and G. Bloom, pp. 11–31. It is interesting to note that forty-three years earlier John Jewkes in the UK asked a similar question, How much science?, in his Presidential address to the economics section of the British Association at the 1960 meeting of the Association (Advancement of Science, 67, 1960). 82, 89
(b) Lowell, B. L. and H. Salzman. (2007). Into the Eye of the Storm: Assessing the Evi-
dence on Science and Engineering Education, Quality, and Workforce Demand. Urban Insti-
tute. 48 pages. It also considers that there is no shortage of scientists and engineers and
examines in detail the perceptions that have led to the opposite view.
[5] Cited in First Bell, Today's Engineering and Technology News, under the heading, Labor researchers tell Congress U.S. not lacking in scientists, engineers. Washington, DC, ASEE. See also (a) First Bell 07:06:2011, Some experts say STEM crisis is overblown, and contrast with 21:10:2011, Demand for STEM skills increasing, study finds. (b) Patel, P. (updated 2010). Where the engineering jobs are... the news is good but not great for engineers looking for work in 2010. IEEE Spectrum, downloaded 03:01:2012. 82, 89
[6] For example, (a) First Bell reports on 28:10:2009, High-achievers defect from STEM fields, study finds; 23:05:2011, experts voice concern over high STEM dropout rate; 16:06:2011, training programs offer pointers on incorporating STEM into lessons; 03:02:, Technology, engineering overlooked when STEM education discussed, teacher writes (in London, The Times, 01:03:2012, in an article on the importance of science to Britain's recovery, no mention is made of engineering); 08:02:2012, Obama to request $80 Million for education funding for training math, science teachers; 13:02:2012, Labor Department official discusses importance of STEM at the University of Dayton. 89
(b) Ellis, R. A. (2007). Effects of recent revisions in Federal Standard Occupational Classifi-
cation (SOC) Categories of the Employment of STEM Professionals. New York, Commission
on Professionals in Science and Technology.
(c) Future of STEM Curricula and Instructional Design. A Blue Sky Workshop. December
1–3, 2009. Center for the Study of the Mathematics Curriculum.
[7] Blau, J. (updated 19:08:2011). Germany faces shortage of engineers. IEEE Prism, downloaded 03:01:2012. Also, Schneiderman (2010). Economy and shortages affect European job outlook. The bigger high-tech companies in Europe are recruiting EEs. Talent is in short supply, especially to smaller firms looking for very specific skills. IEEE Spectrum, March. 89
[8] Widely publicized. Salzman, B., Lowell, L., and Kuehn, D. (2013) Guest workers in the
high skill U.S. Labor market: An analysis of supply, employment and wage trends. 82
[9] Cited by Zachary, G. P. (2011). Jobless Innovation? IEEE Spectrum, April, p. 8. 82
[10] Charette, R. N. (2013). The STEM Crisis is a Myth. IEEE Spectrum, September, pp. 40–43, 50–52. Charette interviewed Teitelbaum, who said that anxiety about manpower in the US dates from World War II (the same is true of the UK; my comment). Ever since then it has tended to run in cycles defined by "alarm, boom and bust. The cycle usually starts when someone or some group sounds the alarm that there is a critical crisis of insufficient numbers of scientists, engineers, and mathematicians" and as a result the country "is in jeopardy of either a national security risk or of falling behind economically" [...] The government responds either with money (for research), or more recently with visas to increase the number of STEM workers. This continues for a number of years until the claims of a shortage turn out not to be true and a bust ensues. "Students who graduate during the bust are shocked to discover they can't find jobs, or they find jobs but not stable ones." No one mentions the point that it is in the interests of engineering educators to have a shortage of engineers! 82
[11] Wadhwa, V. (2011). Leading edge: over the hill at 40. ASEE Prism, p. 32. 82
[12] Clifton, J. (2011). The Coming Jobs War. New York, Gallup Press. 83
[13] A step in this direction has been taken by the authors' recently published Engineering Practice in a Global Context: Understanding the Technical and the Social (2013), edited by B. Williams, J. Figueiredo, and J. Trevelyan. London, CRC Press (Taylor and Francis). Since this journey was written, J. Trevelyan has published a major work, The Making of an Expert Engineer (CRC Press, Taylor and Francis, 2014), which is based on studies of engineers at work and the skills (competencies) they use.
[14] Gaudium et Spes, 26, AAS 58 (1966). Second Vatican Ecumenical Council. Cited in Compendium of the Social Doctrine of the Church (2005). Dublin, Veritas, p. 79. 84
[15] For a Jewish perspective on the common good, see Sacks, J. (2007). The Home We Build Together: Recreating Society. London, Continuum. 84
[16] Hickey, L. (2013). Change from within. The Tablet, September 25. 84
[17] Lovin, R. (2000). Christian Ethics. An Essential Guide. Nashville, TN, Abingdon Press. 85
[18] Bocong, L. (2010). The rise of philosophy of engineering in the East and West. In I. Van de Poel and D. E. Goldberg (Eds.), Philosophy and Engineering. Dordrecht, Springer. 86
[19] See Youngman, M. B., Oxtoby, R., Monk, J. D. and J. Heywood (1978). Analysing Jobs. Aldershot, Gower Publishing, p. 106: "The development of a more flexible approach to employment, and thus to training, requires substantial changes in attitude on the part of employers, managers and workers' organisations. It is bound to affect the structure and content of apprenticeships as well as the institutions of tertiary education and work. The current high levels of unemployment reinforce the view since they relate to whole regions of industry. In other words the chances of an individual being able to find exactly similar work are small. The role that our technique of job analysis can play in such circumstances is best illustrated by Thomas and Madigan's study of redundancy. They found that the response to redundancy among a group of workers was best understood 'in terms' of the groupings formed on the basis of the technological/organization system operating in a particular firm and, more generally, in a particular industry. These groupings act as a useful means of indicating the structures of perceptions held by our sample, in terms of which they evaluated the redundancy and planned their action to achieve their ends." Thomas and Madigan suggested that a theory of labour arenas which reflects the 'political' nature of job choice might provide a more adequate basis for the analysis of job search and job change.
86
Our concept of a labour arena, that is a group of skills which is already possessed or which
may be readily acquired, crosses the divide of job perceptions derived from job titles. It
is relatively easy for employer and employee to check whether or not they can cope with
the operations in jobs as described by the components of nucleus operations necessary to
performance which are derived from analyses, as opposed to relying on assumptions about
the roles implicit in a job (p. 106).
Thomas, B. and C. Madigan. (1974). Strategy and job choice after redundancy: a case study in the aircraft industry. Sociological Review, 22, 83–102.
[20] Vocational Training in the Dual System in the Federal Republic of Germany. Bonn, Federal Ministry for Education and Science. "In the dual system the larger part of learning takes place not in the school, but in the production facilities or service enterprises in industry and commerce. The student is a trainee in a company or practice in one of the liberal professions, or in the Civil Service. He or she is released for the purposes of attending training school, i.e., is also a student at a vocational school at the same time. In the dual system, training is divided between the two establishments responsible for providing training: the company and the vocational school. In the Federal Republic of Germany, these are subject to different authorities. Federal law applies to the training received in a company. The school element is the responsibility of the Länder" [p. 6]. 86
[21] Astin, A. (1994). What Matters in College? Four Critical Years Revisited. San Francisco, Jossey-Bass. 86
See also Chambliss, D. F. and C. G. Takacs (2014). How College Works. Cambridge, MA,
Harvard University Press.
They find "[r]elationships to be central to a successful college experience." They are the "necessary precondition, the daily motivator, and the most valuable outcome" [p. 155]. "[S]pecific human beings matter. A student must have friends, needs good teachers, and benefits from mentors. A student must have friends, or she will drop out physically or withdraw mentally. When good teachers are encountered early, they legitimize academic involvement, while poor teachers destroy the reputation of departments and even institutions…relationships are important because they raise or suppress the motivation to learn; a good college fosters the relationships that lead to motivation."
[22] Teitelbaum, M. S. (2014). Falling Behind? Boom, Bust, and the Global Race for Scientific Talent. Princeton, NJ, Princeton University Press. 87
[23] Susskind, R. and D. Susskind (2015). The Future of the Professions: How Technology will Transform the Work of Human Experts. Oxford, Oxford University Press. 87, 88
JOURNEY 10
Changing Us: Changing Society
While we may understand that universities and colleges should be learning communities, it is complacent to suggest that they are, at least purposefully so. There is one quite simple test of this proposition, and that is to ask a large sample of university teachers, teachers of engineering in particular, if they have any understanding of what learning is, and in particular, adult learning. It is doubtful if many teachers would claim to have an understanding of what learning, or for that matter development, is: that is, in terms of how they are presented by writers who specialize in learning in higher education, except perhaps those in the education and psychology communities. Some will have heard of Piaget but may or may not have understood his theory. This is not surprising, for very many teachers in higher education have received little or no training in instruction and curriculum. But like everyone else they have been to school and college, and like everyone else they think they know more about teaching than the teachers in whose care they leave their children.
As parents they want their children to experience an environment that will enable those children to develop to their potential. But there are often contradictions in their position. They may place their child in an environment that will ensure that child passes the entrance test for universities like Harvard and Yale, or Oxford and Cambridge, which are not necessarily environments that ensure the child will develop to its full potential (whatever that might imply). Decisions made by politicians often lead to management and curriculum operations in schools that function against this goal. For example, it is quite clear that while we want students to be "rounded," to cite a well-worn but almost meaningless term, we pay far less attention (if any) to the affective domain of development than we do to the cognitive, and ignore the fact that the affective is important to development in the cognitive domain [1]. We do not attend to the "rounded" development of the person at all. Similarly, we worry about the status of subjects. "Hard" subjects like mathematics have higher status than "soft" subjects like "social studies," with the effect that teachers of the so-called "soft" subjects try to turn them into science subjects. Students cotton on to these attitudes, and in engineering many of them do liberal studies because they have to, and not for personal development. Often they feel their engineering teachers have little regard for liberal studies. Happily you will find engineers who valued such studies when they were students. In sum, the fact is that educational decisions by tutors and students often have little to do with learning outcomes and personal development, either at the institutional or the political level.
One reason for this is the models we have of teaching, learning and development. I venture
to suggest that teachers divide along a similar continuum to that of managers. At one end of
the spectrum teachers will have a theory X view of learning, while at the other end will be found those who advocate a theory Y approach. The model that we have grown up with tends to be theory X. It is that of a nineteenth-century assembly line, highly controlled, in which the irrational feelings of students are overcome by a drill approach to education. It has a long history going back to those Greek philosophers who thought the mind is a "slate" on which things are written. It is assumed that the slate interprets what is written in the same way that the lecturer or writer intended. Those who do not get the right interpretation fail. It encourages a passive approach to learning. This is an extreme view of what John Eggleston called a "received" curriculum [2]. Such a curriculum is based on the view that there is a fixed body of knowledge that has to be handed down (transmitted) from generation to generation. It is structured by disciplines, often called subjects, that have to be "covered" willy-nilly. The engineering curriculum belongs in this category. There is a received body of knowledge that has to be handed down. But the "received" curriculum does change with time. In response to changes in technology new subjects are drafted in, and there are modifications to some of the traditional subjects, but as Dr. Mina reminds me, "electricity and magnetism," to use the title of my youth, remains largely unchanged except for the units in which it is taught. I remember the change from cgs to mks units in the 1950s.
In engineering, new technologies spawn new courses, but the tendency is to allow them to add to programs that are already pretty full. While there have been papers written that criticize the overloaded curriculum in engineering, they seem to have had very little impact [3]. There seems to be little or no attempt to ask what is essential in terms of the key concepts that should be grasped. So we complain that while we want our students to be creative, our lecture and test regimes have to be completed at the expense of creativity; and the technical system in which we operate (50-minute sessions, credit hours, regular tests, and so forth) supports the assembly-line approach to instruction. Little or no account is taken of the way students learn and develop.
In the "received" model of the curriculum engineering students are oppressed and teachers are the oppressors, to use Alan Cheville's extrapolation of Paulo Freire's philosophy of the oppressed (personal communication). This is counter to what a university should be about, but it is often supported by the system of testing and examining (and the system of accreditation) that is used. It is a matter of fact that the tests we set reinforce the teaching we give. Most complaints about tests, whether of their validity or reliability, tend to assume that nothing can be done about their design. This prevents a debate about how tests can be designed to have a positive effect on learning and cognitive development. Any system of testing that benefits learning and achieves the goals we wish to achieve will inevitably be multiple-strategy in its approach. A good example of that approach is the "Advanced" level examination in Engineering Science set by the Joint Matriculation Board in the 1970s and 1980s in England [4].
A multiple-strategy approach to assessment inevitably necessitates a multiple-strategy approach to instruction, learning and development. There is no doubt that the ways in which students are assessed and taught contribute to what John Henry Newman called the genius loci or spirit of the place.
When Newman founded the Catholic University of Ireland he was much concerned with the genius loci of the institution, and in one of his discourses on The Idea of a University he contrasted a residential university with one that relied mainly on examinations (at the time, the University of London). He said, "when a multitude of young men, keen, open-hearted, sympathetic and observant, as young men are, come together and freely mix with each other, they are sure to learn one from another, even if there be no one to teach them; the conversation of all is a series of lectures to each, and they gain for themselves new ideas and views, fresh matter of thought and distinct principles for judging and acting, day by day. An infant has to learn the meaning of information which its senses convey to it, and this seems to be its employment. It fancies all that the eye presents to it to be close to it, till it actually learns the contrary, and thus by practice does it ascertain the relations and uses of those first elements of knowledge which are necessary for its animal existence. A parallel teaching is necessary for our social being, and it is secured by a large school or college; and this effect may fairly be called in its own department an enlargement of mind. It is seeing the world on a small field with little trouble; for the pupils or students come from very different places, and with widely different notions, and there is much to generalize, much to adjust, much to eliminate, there are interrelations to be defined, and conventional rules to be established, by which the whole assemblage is moulded together, and gains one tone and one character" [5]. Newman believed that a university's generation of the genius loci depended as much on the students as it did on anyone else. The importance of learning from peers should not be underestimated, hence the significance he attached to small halls of residence.
We in academia tend not to appreciate the role of peer-group learning in helping students to understand our subjects. It is through such learning that students begin to understand themselves as agents and how they are interdependent. It is here they learn the value of community and the problems and practice of living in a community. One thing is clear: the tutor can no longer be the oppressor. Tutors have to be agents that guide, mentors that co-learn, for this is what a learning community is.
And what does research tell us in the second decade of the 21st century? It says, "[for] intellectual development (including critical thinking), the breadth of student involvement in the intellectual and social experiences of college, rather than any particular type of involvement, matters most" [...] "The student-peer contacts that matter most appear to be those that expose the student to diverse racial, cultural, social, value and intellectual perspectives. That is, students derive the greatest developmental benefits from engagement in peer networks that expose them to individuals different from themselves...interactions with diverse peers have modest but consistently positive impacts on knowledge acquisition, dimensions of cognitive development such as critical thinking and complexity of thought, principled moral reasoning, and self-rated job skills after college." So wrote Pascarella and Terenzini in volume 2 of their masterful study of the effects that college has on American students [6, p. 615].
This same research tells us that college education does have an impact on students. Thankfully students do develop in higher education and beyond; at the same time it is difficult to deny that universities take very little notice of student development in planning courses. This is not to say that it cannot be done. Back in the early 1980s the Colorado School of Mines (CSM) designed an engineering program to meet the requirements of a theory of student development put forward by William Perry, based on his studies of students whom he counseled at Harvard. It was taken up by Dick Culver of CSM who, with J. T. Hackos, showed in Engineering Education [7] how its application led to a complete rethinking of the engineering curriculum. Instead of discrete units that were not necessarily brought together in the student's mind, it required a curriculum that might be best described as being more holistic (my term). It was a tree with branches rather than a set of discrete courses.
Up to that time much thinking about pupil and student development had been conditioned by Piaget's idea of developmental stages. Put simplistically, part of Piaget's argument was that children move through orderly stages of development [8]. The first stage is from birth to about one-and-a-half years. This is the development of sensorimotor intelligence. The second major stage is called the period of representative intelligence and concrete operations, which takes the child up to 11 or 12 years. Finally, the child moves into a stage of formal operations. It is this stage that is of interest to those working in higher education, for the child is now able to undertake abstract thinking, to hypothesize, deduce, experiment and theorize. These are, of course, the skills necessary for study in higher education. Thus, there was some concern when some studies reported that many students in higher education were not at the level of formal operations.
Be that as it may, and notwithstanding substantial criticisms of Piaget's theories, the idea of development is grounded in the literature, and there are a number of equally interesting theories of development such as those of Jerome Bruner [9]. The point to be made here is that the stage of formal operations seems to imply that it is the end of development. Perry's post-Piagetian theory of intellectual development challenges that view.
William Perry argued that the organization of the curriculum and teaching discourages students from developing the higher-level thinking skills that are expected of them [10]. In engineering we might express the goal as being able to deal with fuzzy problems rather than problems with single solutions. One reason for this is that we collude in training students to solve problems that have single right answers, and while we may be able to set fuzzy or wicked problems, we find them difficult to score because there may not be a right answer. "Challenger" set management and engineers a fuzzy problem.
Perry presented a post-Piagetian theory of development of nine stages (Exhibit 10.1). He held that the attitudes we hold, and the concepts and values with which they are associated, depend on the stage of cognitive and ethical development we are at. They relate to curriculum and instruction in so far as together they either reinforce the stage we are at or help us move forward to another stage. Perry argued that much teaching tends to reinforce the earlier stages and inhibit such development. Perry argues that in these first stages students come to university with the expectation that they will be told the truth; that is, what is right and what is wrong. Subject-based knowledge is right or wrong, or true or false. Thus, in stage 1 all problems are seen to have right
answers and authority must be followed. At this stage those who are rated the best teachers will be those who administer a received curriculum. By stage 3, however, it is apparent that the authorities are "seeking the right answers" and only in the future will we know the right answer. Perry calls these first three stages "dualism." From "dualism" the student moves into the phase of "scepticism," for now it is clear that not only does authority not have the right answers but everyone, including the student, has the right to hold his or her own opinions, and some of these can be supported by evidence. Thus, by stage 5 some answers are to be found that are better than others, and knowledge has to be considered in context. It is a stage of relativism. Among those who have tried to design engineering courses to meet the requirements of this theory are Marra and Palmer. They suggested that the transition from stage 4 to 5 is the most significant transition because the students "now accept knowledge is for the most part transient and contextual [...]. Students now accept themselves as one among many legitimate sources of knowledge and often forego their former view of instructors as absolute authorities" [11]. The student begins to see that good choices are possible and that commitments have to be entered into.

Exhibit 10.1: The Perry positions or stages, after Culver, Woods, and Fitch (1990) [12].
Positions 1 and 2 (Dualism): All knowledge is known, and it is a collection of information. Right and wrong answers exist for everything. Teachers are responsible for giving information; students are responsible for reproducing it.
Position 3 (Early multiplicity): Knowledge includes methods for solving problems. There may be more than one right answer. Teachers help students learn how to learn. Students are responsible for understanding and applying knowledge.
Position 4 (Late multiplicity): Uncertainty with respect to knowledge and diversity of opinion become legitimate. The teacher requires evidence to support opinions and design choices. Students learn how to think and analyze.
Position 5 (Relativism): All knowledge must be viewed in context. Teachers are consultants; students can synthesize and evaluate perspectives from different contexts.
Positions 6–9 (Commitment within relativism): For life to have meaning, commitments must be made, taking into account that the world is a changing, relativistic place.
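The structure of the scheme can be made concrete with a minimal Python sketch. The sketch below is purely illustrative: neither Perry nor Culver, Woods, and Fitch propose any such encoding, and the names PERRY_PHASES and phase_of are invented here. It simply records the positions summarized in Exhibit 10.1 and returns the broad phase label for a given position number.

# Illustrative only: an encoding of the Perry positions as summarized in Exhibit 10.1.
# The dictionary and helper names below are hypothetical, not part of any published scheme.

PERRY_PHASES = {
    range(1, 3): "Dualism",                        # positions 1 and 2
    range(3, 4): "Early multiplicity",             # position 3
    range(4, 5): "Late multiplicity",              # position 4
    range(5, 6): "Relativism",                     # position 5
    range(6, 10): "Commitment within relativism",  # positions 6 to 9
}

def phase_of(position: int) -> str:
    """Return the broad phase label for a Perry position (1 to 9)."""
    for positions, label in PERRY_PHASES.items():
        if position in positions:
            return label
    raise ValueError("Perry positions run from 1 to 9")

# Example: phase_of(4) returns "Late multiplicity".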
There are valid criticisms of this theory, one of which is that those who use it as a basis for their teaching are as dogmatic as those who teach by drill and encourage students to remain in the initial stages of development [13]. That criticism would not hold for those courses, like cooperative courses, which require a student to have some industrial experience before they
undertake academic study. It was commonly said after the Second World War that students returning from the armed forces were more disciplined and motivated to study. In England many students are encouraged to obtain some kind of work or voluntary experience before they embark on third-level study. They call it a "gap" year.
There are other developmental theories. Personally, I think that King and Kitchener's theory of reflective judgement is to be preferred to Perry's model [14]. But what seems to me to be inescapable is that we continue to develop while we are in college, and research in adult learning would suggest that we continue to develop with age. By 1990, Alexander and Langer had published Higher Stages of Human Development: Perspectives on Adult Growth [15], and in 2006 Carole Hoare was able to produce a substantial Handbook of Adult Development and Learning [16]. The chapters in both books support the view that development is a life-long process. Clearly, there is a need to bring the research on learning in higher education into this frame if we are to understand how to redesign the system of higher education to support the life-long learning that socio-technical changes in our society are demanding, and, particularly, if we are to understand how interrelated work and study can contribute to the evolution of professional competence.
Garrett McAuliffe, who has reviewed research in this area, writes of Torbert's strategist frame of professional development: "for those who would be highly competent experts and leaders, learning should lead them toward strategist thinking with its emphasis on dialogue, experience and self-reflection. Instead, however, current educational and in-service training programs teach to the technician worldview, with its ideological tunnel vision and disinterest in stepping outside of professional standards. Thus, such professionals remain embedded in the usual practices of their own fields and are less attuned to the situational contextual dynamics that professionals must account for in good portion" [17]. He argues that expertise depends as much on the ability of "how to know" as it does on "what to know." So a key question for educators who want their students to develop expertise is to what extent they help their students acquire the skill of "learning-how-to-learn."
When I was completing my 2005 book, Don Evans of Arizona State University, who pioneered the use of Concept Inventories in engineering education, asked me why there was so much resistance in engineering education to the use of educational strategies that were affirmed by considerable research. Dr. Mani Mina assures me ten years later that the situation has not changed. We have, for example, known about the relationship between learning-how-to-learn (reflective capability) and expertise for years, but has much been done about it? At the time the one answer that I did not give Dr. Evans was "fear of the unknown." But this is surely the heart of the matter! Very little training is given to engineering educators or, for that matter, educators in higher education more generally, neither is engineering education seen as a matter for professional development. In these circumstances no one is prepared for the "unknown."
Perhaps we should grasp, as Trevelyan has pointed out, that many of the professional skills can be learned through developing skill in teaching. He argues that students should be given the opportunity to teach, which suggests that engineering educators ought also to learn how to
teach [18]. To be fair, there are many teaching and learning centers for higher education throughout the world [19], especially in the U.S., and Utschig, Schaefer, and Visco recently proposed a competency-based program for teaching and learning [20].
When she was at Harvard, K. Patricia Cross argued that the way forward was to encourage educators to realize that their classrooms were laboratories for research [21]. They could do in the classroom what they did when engaged in their engineering research. Others, like myself, who had responsibility for the training of teachers, argued that the way to professionalize teaching was to train teachers to be researchers of their own instruction. Such evaluation renders the professional accountable; such evaluation adds another dimension to the motivation to teach. At the level of professional development I used the same idea to promote the idea of instructional leadership [22]. I have described what is meant by this in engineering education in detail in the extended introduction to my 2005 book. Briefly, departments (schools) would provide an instructional leader from among their number who would be available to offer advice, and to update and encourage faculty to take a live interest in educational developments. Overall, there is no substitute for the development of a mind that accepts that education is a professional activity just as much as engineering is, so that steps are taken to acquire the knowledge that identifies that profession. That knowledge has to encompass the curriculum: how and what should be taught, in particular the role of ethics and other liberal arts subjects in that curriculum; how it should be assessed and credentialed; how it should be learnt; and the role of instruction.
NOTES
[1] For example, the importance of the affective domain in the learning and practice of mathematics by engineers has been demonstrated by Goold, E. and F. Devitt. (2013). Mathematics in engineering practice: tacit trumps the tangible. In Williams, B., Figueiredo, J. and J. Trevelyan (Eds.), Engineering Practice in a Global Context: Understanding the Technical and the Social. London, CRC Press/Taylor and Francis. 93
[2] Eggleston, J. (1977). The Sociology of the School Curriculum. London, Routledge. Suggests three perspectives on the curriculum: the received, the reflexive and the restructuring. The received perspective arises from the belief that there is a fixed body of knowledge which has to be handed down from generation to generation. It is structured by disciplines which we call subjects. In certain societies some subjects acquire more prestige than others. In Britain and Ireland, profoundly influenced by the liberal education movement of the nineteenth century, the "pure" is preferred to the "applied," "theoretical" to "practical" and "university" to "technical college." Support for a disciplines approach to education is to be found in the work of Paul Hirst (1975). Knowledge and the Curriculum. London, Routledge. The reflexive perspective is in contrast to the received perspective and finds its base in the sociology of knowledge presented by such theorists as P. Berger and T. Luckman (1966). The Social Construction of Reality. London, Allen Lane. Knowledge is socially constructed and depends on our experience and environment. In this situation teachers and students should define a curriculum which is real to them in their social context. In this sense the curriculum should be negotiated and worked out to meet the individual needs of students. Eggleston suggested a restructuring perspective that brings together these two paradigms as two related modes of understanding both the realities of knowledge in the curriculum and the possibilities of change therein. His model shows how, in the received curriculum, teacher perceptions dominate, and how these perceptions can be influenced by the students so as to restructure the curriculum. It brings together the components of knowledge acquisition and knowledge making. 94
[3] See Ch. 7, Curriculum change and changing the curriculum, in Heywood, J. (2005). Engineering Education: Research and Development in Curriculum and Instruction. Hoboken, NJ, IEEE/Wiley. 94
[4] Heywood, J., Carter, G., and D. T. Kelly. (2007). Engineering Science A Level in the UK: a case study in the balanced assessment of student learning. Educational policies and educational scholarship. Proc. Frontiers in Education Conference, S4F, pp. 9–12 (ASEE/IEEE). (See Exhibit 10.2.) 94, 101
[5] Newman, J. H. (1852/1923). The Idea of a University: Defined and Illustrated. London, Longmans, p. 164. 95
[6] Pascarella, E. T. and P. T. Terenzini. (2005). How College Affects Students, Vol. 2: A Third Decade of Research. Hoboken, NJ, John Wiley. 95
[7] Culver, R. S. and J. T. Hackos (1982). Perry's model of intellectual development. Engineering Education, 73(2), pp. 221–226. 96
[8] Piaget's theory argues that children move through orderly stages of development. The first stage is from birth to about one-and-a-half years. This is the development of sensorimotor intelligence. Within this stage there are six sub-stages, each of which is a problem-solving activity involving its own logic. Thus, after about 18 months the child is able to solve a detour problem by going round a barrier even if this means departing from the original goal for a short time. The child can infer causes from the observation of effects and begins to predict effects from observing causes; the child also begins to invent applications of something previously learned. 96
The second major stage of development is called the period of representative intelligence and concrete operations. This takes the child up to 11 or 12 years. The first part of this period is between two and seven and is called the preoperational stage. The second phase is that of concrete operations. It is in this stage that the child learns conservation. For example, the size-weight illusion is resolved. Children resolve this problem in relation to matter, weight, and volume in turn. Piaget claims that the order of such learning is invariable.
Exhibit 10.2: Table illustrating the multiple-strategy approach to assessment adopted for the assessment of engineering science [4].
Written Paper I: Knowledge and short-chain problem solving (1 hour, 40 objective items). 13.5% of the total score on which the reported grade is based.
Written Paper IIA: Comprehension exercise; candidates read an article in a journal and answer questions on it (1 hour). 13.5%.
Written Paper IIB: Project planning and design exercise (1 hour). 13.5%.
Written Paper IIIA: Applications of engineering science (application and analysis) (1½ hours; 6 out of 9 questions). 20.0%.
Written Paper IIIB: Applications of engineering science (1½ hours; 3 out of 6 questions). 20.0%.
Coursework 1: Two experimental investigations (written report).*
Coursework 2: Individual project (50 hours laboratory time; written report). 20.0%.*
* Coursework was assessed by the student's teacher and moderated by the examiners; 20% of the final score was given for all coursework combined.
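The arithmetic implied by these weightings can be illustrated with a minimal Python sketch. The code below is not part of the Joint Matriculation Board's scheme; the component names and the candidate's marks are invented for the example. It treats the two coursework elements as a single component worth 20% and computes a weighted mean of the component marks, normalized by the total weight.

# Illustrative only: combining component marks using the weights in Exhibit 10.2.
# The dictionary keys and the sample marks are hypothetical.

WEIGHTS = {
    "Paper I": 13.5,     # knowledge and short-chain problem solving
    "Paper IIA": 13.5,   # comprehension exercise
    "Paper IIB": 13.5,   # project planning and design exercise
    "Paper IIIA": 20.0,  # applications of engineering science
    "Paper IIIB": 20.0,  # applications of engineering science
    "Coursework": 20.0,  # both coursework elements, combined
}

def final_score(marks: dict) -> float:
    """Weighted mean of component marks (each out of 100), normalized by the total weight."""
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[name] * marks[name] for name in WEIGHTS) / total_weight

candidate = {"Paper I": 62, "Paper IIA": 70, "Paper IIB": 55,
             "Paper IIIA": 48, "Paper IIIB": 66, "Coursework": 74}
print(round(final_score(candidate), 1))  # prints the candidate's overall percentage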
Learning by doing is the essence of concrete operations. In this period children learn to seriate, classify, and establish correspondence.
The final period, when the child moves from middle childhood to adolescence, is that of formal operations. Now the child is able to undertake abstract thinking, to hypothesize, deduce, experiment and theorize. It is the stage of in-built maturity.
This summary does less than justice to Piaget's work. I have not related it to the more general theory of development or how the change from one stage to another is made (see below). The theory has been criticized, and of importance to this text are the criticisms of
those who argue that young children can deal with the fundamental problems of philosophy in their own language (e.g., Matthews, G. B. (1980). Philosophy and the Young Child. Cambridge, MA, Harvard University Press).
A feature of Piaget's theory is the attempt to relate it to the epistemological processes which go on in the child's mind as he or she learns by solving problems. Piaget's often quoted example is of the way a child uses clay. From experimenting with clay rolled into a sausage shape, the child learns the following.
1. There is less clay in a thin sausage and more in a long sausage.
2. A sausage can be long and thin.
3. If a sausage can become longer, it can become shorter.
4. Length and thickness can compensate for each other.
The transition between 2 (configuration) and 3 (transformation) is an example of equilibration; the children learn by their actions on the environment. Level 4 is called conservation by Piaget because a transformation does not change the quantity of the matter.
[9] In Bruner's theory of cognitive development three modes of representation follow in sequence. The first is called enactive. Bruner notes that conditioning and stimulus-response learning are appropriate to this mode of learning: it is learning through action without words. The second stage, which is one of mental representation, is called iconic. In this stage the child uses concrete visual imagery. The final stage of representation is symbolic. Because children are able to translate experience into language and think with language, they are able to develop abstract images. It is arguable that mature learners go through such stages when solving problems. 96
Bruner holds the belief that children can be helped to learn at a level beyond the kind of thinking in which they currently engage. That is, a teacher can help a child to undertake more sophisticated kinds of thought process. Thus, Bruner would argue that we should teach readiness and not have to wait for it, as in Piaget's theory. It is this theory that leads some to say that a child can be taught anything in a language appropriate to him or her at the time, and encourages books about "relativity" for children.
[10] Perry, W. G. (1970). Forms of Intellectual and Ethical Development in the College Years. New York, Holt, Rinehart and Winston. 96
[11] Marra, R. and B. Palmer (1999). Encouraging intellectual growth: senior engineering profiles. Proc. Frontiers in Education Conference (IEEE), 2, 12C1, pp. 1–6. 97
[12] Culver, R. S., Woods, D. and P. Fitch (1990). Gaining professional expertise through design activities. Engineering Education, 80(5), pp. 533–536. 97
[13] loc. cit. note 3, Ch. 6, p. 154. 97
[14] King, P. M. and K. S. Kitchener. (1994). Developing Reflective Judgement. San Francisco, Jossey-Bass. 98
The differences between King and Kitchener's model and Perry's model are seen immediately by comparing the stages of each model as shown in Exhibits 10.1 and 10.3. Clearly, the earlier stages owe much to Perry; some critics think that the first three stages are the same as Perry's. The idea of "reflective judgement" was influenced by Dewey. "We now think of reflective judgements as beginning with an awareness of uncertainty. Such judgements involve integrating and evaluating data, relating those data to theory and well-informed opinions, and ultimately creating a solution to the problem that can be defended as reasonable and plausible."
Exhibit 10.3: Stages of the King and Kitchener reflective judgment model (adapted).
Stage 1: Knowing is limited to single concrete observations. What a person observes is true.
Stage 2: There are two categories for knowing: right answers and wrong answers. Good authorities have knowledge; bad authorities lack knowledge.
Stage 3: In some areas knowledge is certain and authorities have that knowledge. In other areas knowledge is temporarily uncertain; only personal beliefs can be known.
Stage 4: The concept that knowledge is unknown in several specific cases leads to the abstract generalization that knowledge is uncertain.
Stage 5: Knowledge is uncertain and must be understood within a context; thus justification is context specific.
Stage 6: Knowledge is uncertain but constructed by comparing evidence and opinion on different sides of an issue or across contexts.
Stage 7: Knowledge is the outcome of a process of reasonable inquiry. This view is equivalent to a general principle that is consistent across domains.
King and Kitchener noted that their investigations were related to what other researchers were doing in the areas of critical thinking and intelligence, and they found that while some aspects of the definitions overlapped, other aspects were quite distinct.
Like Perry's, the model assumes that as individuals develop they become more able to evaluate claims of knowledge and to advocate and support their points of view about controversial issues. "The ability to make reflective judgements is the ultimate outcome of this progression." To arrive at this destination the learner passes through seven stages, each of which has its own assumptions and logic. The stages develop from the relatively simple to the relatively complex, each with a different strategy for solving ill-structured
problems. Thus each stage has its own view of knowledge and its own concept of justification. Reflective thinking takes place in stages 6 and 7. "True reflective thinking pre-supposes that individuals hold epistemic assumptions that allow them to understand and accept real uncertainty." It is only when they engage with ill-structured or novel problems that they engage in reflective thinking as defined by King and Kitchener.
They found that their model complemented another model due to Fischer. He argued that individuals will only operate at their optimal levels when they practice skills in familiar domains and receive environmental support for high-level performance. There will be lots of "Eureka" moments en route. Unlike stage theory, which holds that all children pass through the same stages of development, Fischer's skill theory argues that the steps which individuals take to attain a skill vary considerably between one individual and the next, as a function of the environment and the individual. Because of these variations it will be difficult to find any two children who spontaneously follow the same steps in any domain. At the same time the theory states that, irrespective of the path taken, all skills pass through the same developmental levels. All skill acquisitions involve the same group of transformation rules. The position taken by Fischer and his colleagues is similar to that taken by information-processing theorists, namely that the "same fundamental acquisition processes occur in development, learning and problem solving at all ages." Instruction and assessment should, therefore, be designed to take account of these different needs. This theory has considerable implications for the design of modular (credit-unit) curriculum systems and the pacing of assessment and learning within them.
In the Reflective Judgment model a spurt marks the emergence of a new stage. The skill levels in the Fischer model correspond directly to the stages of the Reflective Judgment model. King and Kitchener argued that the decisions students make when they are in relativistic frames of reference should reflect a level of cognitive development beyond relativism. In the Perry model, the student remains within the relativistic frame and has to make an act of faith in reaching a commitment. The purpose of the Reflective Judgment model is to deal with the form and nature of judgements made in the relativistic framework. Individuals, it is held, hold epistemological positions beyond relativism. Whatever else one may say, such a position would seem to be more satisfying than Perry's.
King and Kitchener had much to say about teaching in higher education, and they take a broad view of who may be a teacher and what teaching is. According to the Reflective Judgment Interview, first-year students in the United States lie in the range of stage 3 to stage 4. Seniors were found to be around stage 5. They argue that many seniors are at a loss when they are asked to defend their answers to ill-structured problems. Therefore, if reflective thinking is to be developed, teachers should do the following.
• Show respect for students regardless of the developmental levels they may exhibit.
• Understand that students differ in the assumptions they make about knowledge.
• Familiarize students with ill-structured problems within the teacher’s area of exper-
tise.
• Create multiple opportunities for students to examine different points of view.
• Informally assess (i.e., from student journals, assignments etc.) assumptions about
knowledge and how beliefs may be justified.
• Acknowledge that students work within a developmental range of stages and set ex-
pectations accordingly; challenge students to engage in new ways of thinking while
providing them with support; and recognize that students differ both in their per-
ceptions of ill-structured problems and their responses to particular learning environ-
ments.
• Share with one another what they do and what they expect to achieve.
King and Kitchener do not, however, believe there is one best way of teaching reflective
thinking.
The differences between stage 3 and stage 6 from a teaching perspective are shown in
Exhibit 10.4. It will be appreciated that since these descriptions could apply at any level
of education they would have to be developed to describe the requirements of a particular
level (e.g., year on course, course level). It is clear that if students in schools are to develop
critical thinking they will have to tackle ill-structured problems, and this has implications
for assessment.
King and Kitchener designed an instrument called the Reflective Judgment Interview (RJI)
to detect the stage that a student has reached. The interview is structured with standard probe
questions, each with a specific purpose. Thus, two questions that will clearly elicit a level of
development and that are of direct relevance to today's media-governed society are: (1) How is
it possible that experts in the field have such different views about the subject? and (2) How
is it possible that experts in the field should disagree about the subject?
While it is not the intention to examine the psychometric properties of this instrument, it
is of some interest since the questions may help with the design of assessment. There is also
one important comment from one of the analysts to the effect that differences between the
samples were more pronounced at lower levels of educational attainment than at the higher
levels. Wood, who undertook this analysis, thought that this was consistent with the view
that performance on the RJI is dependent on verbal ability (which is a necessary, but not
sufficient condition for high scores). Once again the need for a high level of verbal ability
to think critically or reflectively is highlighted.
In an earlier paper Kitchener had pointed out that no single instructional or curricular
experience over a limited period is likely to have the impact on development that a carefully
constructed set of cumulative experiences over a long period of time is likely to have. The
implication for teachers is that in planning the curriculum they have to work as a team and
share with one another what they do and what they expect to achieve. There is unlikely to
be one best way of teaching reflective thinking. But there is a more profound implication
for the system. If reflective thinking is to be developed and pupils are to be prepared for life
and work, in which higher education is included, then the cumulative experiences should
extend from primary through post-primary to third level. What better than a program of
the type developed by Lipman (the Philosophy for Young Children program)?
Exhibit 10.4: Promoting reflective thinking in the King and Kitchener model, stages 3 and 6: reasoning. (Adapted from King and Kitchener, 1994. In their description (pp. 250–254) they also give for each stage a list of difficult tasks from the perspective of the particular stage, a sample of developmental assignments, and suggestions for developmental support for instructional goals.)

Stage 3
Characteristic assumptions of stage 3 (reasoning):
• Knowledge is absolutely certain in some areas and temporarily uncertain in other areas.
• Beliefs are justified according to the word of an authority in areas of certainty and according to what "feels right" in areas of uncertainty.
• Evidence can neither be evaluated nor used to reason to conclusions.
• Opinions and beliefs cannot be distinguished from factual evidence.
Instructional goals for students:
• Learn to use evidence in reasoning to a point of view.
• Learn to view their own experience as one potential source of information but not as the only valid source of information.

Stage 6
Characteristic assumptions of stage 6 (reasoning):
• Knowledge is uncertain and must be understood in relationship to context and evidence.
• Some points of view may be tentatively judged as better than others.
• Evidence on different points of view can be compared and evaluated as a basis for justification.
Instructional goals for students:
• Learn to construct one's own point of view and to see that point of view as open to re-evaluation and revision in the light of new evidence.
• Learn that though knowledge must be constructed, strong conclusions are epistemologically justified.
This statement is copied from Heywood, J. (2009). Instructional and Curriculum Leadership.
Towards Inquiry Oriented Schools. Dublin, National Association of Principals and Deputies/
Original Writing, pp. 358–352.
[15] Alexander, C. N. and E. J. Langer (Eds.) (1991). Higher Stages of Human Development.
New York, Oxford University Press. 98
[16] Hoare, C. (2006). (Ed.) Handbook of Adult Development and Learning. New York, Oxford
University Press. 98
[17] McAuliffe, G. (2006). The evolution of professional competence. Ch. 21 in C. Hoare,
(Ed.), Handbook of Adult Development and Learning. New York, Oxford University Press.
McAuliffe summarises research on the linkage between professional competence and adult
development, including summaries of work by Donald Schön (1993, The Reflective Practitioner.
San Francisco, Jossey Bass) and its origins in Argyris, C. and D. S. Schön (1978).
Organizational Learning. Reading, MA, Addison Wesley; Kegan's Fourth and Fifth orders
of consciousness and professional competence (Kegan, R. (1994). In Over Our Heads: The
Mental Demands of Modern Life. Cambridge, MA, Harvard University Press); and Torbert's
frames of professional development (Torbert, W. R. (1994). Cultivating post-formal adult
development: Higher stages and contrasting interventions. In M. Miller and S. Cook-Greuter
(Eds.), Transcendence and Mature Thought in Adulthood: The Further Reaches of Adult
Development (pp. 181–203). Lanham, MD, Rowman and Littlefield). 98
The reference in the main text is to Torbert's taxonomy of developmental positions for professionals,
especially managers, which comprises six frames. There is a similarity with the
Perry model in that they move from stages of concreteness and conformity to capability in
abstraction and a willingness to tolerate ambiguity. Torbert's first frame is called "opportunistic."
To simplify, for the opportunist the current way of knowing is the only way to
view the world. To McAuliffe (p. 487) "They experience others without empathy, as objects
to be manipulated" and they "tend to use force and deception to reach short-term ends." One
positive aspect is that their self-interest can force them to become entrepreneurs (my
interpretation). The second frame is called "Diplomatic." To simplify, professionals operating
in this frame fit into Belbin's roles (see Journey 5) quite nicely. They are "company men"
who have loyalty to the rules of the organization but they find it difficult to make difficult
decisions. The third frame is called "technician" and is the mode referred to in the text. To
cite McAuliffe, "Technician professionals are narrowly focused on efficient methods and
the internal logic of objective standards. In the process technicians fail to see the larger
systems of which they are part, for they are enamoured of the consequence of their own
doctrines. To them, there is no room for alternate explanations. Their logic is the only logic.
Fisher and Torbert propose that the technician's embrace of "standards" can be inspiring
for co-workers. This position is especially important in explaining professional behaviour,
as it is the largest single group of professionals" (p. 489).
The other frames are the "Achiever," which is held to be a wider frame than that of the "technician."
Achievers are guided by the goals of the field beyond their own career expectations and
can provide leadership. McAuliffe does not mention McClelland's achievement motivation
in his discussion of this frame but it clearly has a bearing. He points to the negative
dimension that achievers are likely to pursue their agenda to the exclusion of other goals
and alternatives, although they are open to feedback. They can be open to learning. The next
frame is the "strategist." McAuliffe uses a comment by a manager in one of Torbert's papers
to describe this frame as moving from "having very explicit goals and timetables
[and] a structured organization to...the collaborative process [which] focuses on inquiry,
constructing shared meanings from experience and building consensus through responsible
interaction" (p. 401).
[18] Trevelyan, J. (2010). Engineering students need to learn to teach. Proc. Frontiers in Edu-
cation Conference, F3H-1 to 6. 99
[19] For example with reference to Israel see Muller, O. and V. Dangur. (2012). Integrating a
college of engineering teaching and learning center into a leading position in the institu-
tion. Proc. Frontiers in Education Conference, pp. 429–430. 99
[20] Utschig, T. T. and D. Schaefer. (2012). A proposed teaching and learning curriculum for
COMPLEETE based on current national trends. Proc. Frontiers in Education Conference,
pp. 423–428, (ASEE/IEEE). 99
[21] Cross, K. P. (1986). A proposal to improve teaching. American Association for Higher
Education (AAHE) Bulletin, September, pp. 9–15. 99
[22] Heywood, J. (2009). Instructional and Curriculum Leadership. Towards Inquiry Oriented
Schools. Dublin, National Association of Principals and Deputies/Original Writing. 99
JOURNEY 11
Journey's End: A New Beginning?
RECOLLECTION
When Dr. Mina invited me to undertake these journeys he asked me to bring you on a series of
journeys that would help you and me reflect on who, and what, we are as engineers and educators
within a society that is becoming increasingly complex. We were inviting you to philosophize with us
and this we did as is evident from how the discussions influenced the final text and structure of the
journeys. These discussions were undertaken within the framework of a model of the engineering
processes engaged in the production of a technology (or technological product). Presented in the
first journey in the form of a three-legged stool, it showed quite clearly that engineering decisions
are based on value decisions and as such have a philosophical dimension. The outcomes of these
decisions, as illustrated by the seat of the stool, produce artifacts and ideas that can have a profound
influence on society, whether deliberate or unforeseen. It is incumbent, therefore, that engineers
have some idea of how society and technology interact and are able to make predictions about
the possible impact of the artefacts and ideas they design. Commonly, we call this the study of
"society and technology." Those who engage in it often do so in interdisciplinary teams. Although
we did not dwell in any great detail on the nature of inter-disciplinarity within the engineering
curriculum, it is ever present in these discussions because engineering problems are often very
complex and rely on the contributions of more than one discipline. The process of producing
a product is complex and as we saw in Journey 5 the way engineers are organized in groups
contributes to the success or failure of the organization. Competence in understanding people
and organizations, and how they interact with each other, is a key skill that engineers should
possess. Our early journeys explored the characteristics of that skill.
It was argued that the study of ethics had to be much more than the study of codes of
practice for we all have moral purpose and this purpose is a source of motivation. Bowen’s as-
pirational ethics for engineers was taken as an example of how the subject might be developed.
But it was also argued that such developments should take place in a more general framework
where individuals (students) reflect on their own philosophies and the impact that they have on
their philosophy of engineering. Finally, the present problems of society were examined and it
was argued that the nature of the firm required it to be re-examined in the light of the common
good. Views about stake-holding need to be changed; at the same time, organizations have to be
allowed to achieve. The common good has to come before profit, which is to be used in the service
of the common good. These thoughts were provoked by substantial changes in the workforce, not
least the redundancy being experienced by many middle-aged engineers, and by the recognition
that it is now a fact, rather than a platitude, that many people in the workforce may have as many
as three or four different careers in the much longer lifetimes they will live. This has profound consequences for the
system of education and training, not least of which is the provision of a culture in which persons
can move across labour arenas. It is with these consequences that this final journey is concerned.
HIGHER EDUCATION—THE NEXT “BUBBLE?”
The time is opportune for such discussion because there is strong support for the thesis that
higher education may become the next bubble. In any case it would seem to be in crisis! I take up
the argument where I left it in Journey 10. That Journey was devoted to a discussion of the importance
of how the structure of the peer group might influence learning and personal development.
It was illustrated by Newman's famous assertion that he would prefer the type of education that
simply brought people together in a hall (college) to that in which a student simply sat an examination.
It is a view that to a certain extent I share. Where I differ, and I imagine I am not alone,
is with the view that what emerges from such discussion is necessarily rational and reasonable.
It is quite clear that it would be possible for some groups to create a terrorist cell. After all, it is
only 50 or so years since we discovered that a Soviet spy ring had been created at the University
of Cambridge. Five undergraduates and one tutor, all of Trinity College, were recruited by the
Soviet Spy Agency and worked at the same time for the British security agency MI5 [1]. In the
U.S., Alexander Astin's longitudinal studies of students in general education programs
showed the significance of the peer group and that students were likely to take on the views of
the dominant members of the group [2]. That said, it is clear that Newman felt that the university
had a significant role to play which was to inform student thinking and that students would bring
the range of that thinking to their private discussions, a position that is entirely consistent with
his epistemology. Otherwise, why did he bother with the rest of his discourses? A university’s
task is to throw out a series of challenges to students in order to help them think and this means
that it must promote a view of knowledge that is universal. But that is precisely what the modern
university does not do. I shall argue that the promotion of specialization at the expense of a truly
liberal education is a disservice to students and more generally to society at large, for many
students, although not all, may find their career mobility restricted.
HIGHER EDUCATION, THE ECONOMY, AND MOBILITY
In the British Isles Newman's view of the university curriculum, and that of the Oxford dons,
was rejected in favor of a utilitarian specialist education, the idea of which had been promoted
in Scotland. Students come to university to study a specific discipline, be it English literature,
history, physics, or engineering. As it has developed, governments, in collusion with the universities,
have taken the view that the purpose of higher education is utilitarian, that is, economic growth.
Consequently, support has been much greater for scientific and technological subjects than it
has for subjects in the arts (humanities). In parallel with the UK government’s belief in the eco-
nomic value of higher education it has considerably expanded the number of full-time students
in university education and accompanied it by a large increase in the number of institutions given
university status. There is a corresponding search each year by new graduates for jobs thought to
be worthy of a university education. Newspaper reports suggest that many are unlucky.
At the same time the UK government has found that the costs of higher education are
not inconsiderable, for which reason it has followed the American practice of charging fees for
attendance at universities in England, Northern Ireland, and Wales, while at the same time it
encourages students to take out loans. British newspaper reports have suggested that a student
will end his/her university study owing £45,000 as compared with an American student who will
owe about £15,000. Given that earnings among the middle classes have flat-lined in recent years
this can place an enormous burden on students and their families. Could it be that the next bubble
will be higher education? And what effect would that have on social mobility?
If a bubble occurs in higher education it is likely that students and parents will begin to
question the value of higher education. If they do, and numbers begin to fall, where will that
leave some institutions of higher education that do not have the reserves to maintain themselves?
Second, in the UK a report published (October 17, 2013) of a study sponsored by the government
and undertaken by a former minister, Alan Milburn (Labour), indicated a total lack of social
mobility. The newspapers made much of the fact that many children being born to the middle
classes would not be as well-off as their parents. This fits in well with American data, especially
that of Clifton, which suggests that the most important crisis facing nations will be a shortage of good
jobs [3]. Clearly, higher education can only prosper if there is growth and with that growth an
increase in the wealth of the middle classes.
Every culture differs in how the social effects of educational policy are played out. For
example, there is a paradox in Britain which suggests that on the one hand there is a problem of
up-skilling the working classes but on the other hand if this is done it can restrict social mobility.
More generally, it may be argued that specialization operates in that direction. It creates
rigidities rather than flexibilities.
T. H. Marshall, a distinguished British sociologist, argued many years ago that the foundation
of many occupational groups that claim professional status is the specialized technique, and "it
is the multiplication of these techniques that makes possible the spread of these organizations."
He wrote: “It is important to notice the effects of these changes on social mobility; an organized
profession admits recruits by means of an impartial test of their knowledge and ability. In theory
they are selected on merit, but it is merit of a particular kind [...] A narrow road leads into the
profession through certain educational institutions. How far this favors social mobility depends
on whether these institutions are open to the masses, so that merit can win recognition in all
classes.” Presumably thinking of the British system of education he continued: “But the chance
to move comes early, during school days. Once it has been missed and a career has been started
at a non-professional level the whole system of formal qualifications makes movement at a later
stage well-nigh impossible." And later [...] "But many of these new semi-professions are really
subordinate grades placed in the middle of the hierarchy of the modern business organization.
The educational ladder leads into them but there is no ladder leading out" [4].
In these circumstances it would seem that a general education through high school and into
university is the best preparation for the career paths that individuals are likely to have to pursue
in the future. Specialization should be postponed for as long as possible. But then industry, it
would seem, is looking for the person with a specialized technique which in a few years will
become redundant. Where does that person go then? Translate this to other non-middle-class
groups who have difficulty with general education and are encouraged to study for a trade: for
some, especially the more able, the educational sub-system could not be more perverse [5]. The
complexity of the issues raised here should not be underestimated, nor the difficulties of finding a
solution under-emphasized. But this search needs to be tempered with the understanding that a
higher education undertaken for an economic purpose alone does not serve the common good.
Its overall aim has to be the preparation of individuals for life, and that is a multi-dimensional
task that begins with an effective liberal education, which seems to contradict the view that some
people are better suited to an academic education and others to a vocational education.
GENERAL VS. LIBERAL VS. VOCATIONAL EDUCATION
The intention of schooling is to provide a good general education. By this is meant that students
should study a range of subjects that include disciplines from the sciences to the humanities and
languages. Technical subjects have not for the most part been considered key to this education in
spite of the contribution they can make to certain mental and psycho-motor abilities [6]. While
students acquire the basic "grammar" (to use Whitehead's term) of subjects in high school (say
from twelve years onwards), their study does not constitute a liberal education, nor should it [7].
A general education is a condition for liberal education.
The purpose of a liberal education differs from that of general education in that it brings to
our attention how the different disciplines help us to understand who we are as persons and our
place in the universe (nature). For this we have to have the breadth of knowledge that a general
education gives, that is, we have to be literate in the sciences, technologies, arts and languages.
But our object is to be able to see the relationships between subjects, one with another, in order
to recombine that knowledge in a new synthesis. As we saw in Journey 2, Newman held that it is
this recombination of knowledge that is the object of university education. If we are to understand
the person it requires a breadth of knowledge that includes philosophy and theology. But it also
includes technology which is an inherently vocational study. You cannot have a liberal education
without the inclusion of the vocational. In the second journey I gave Newman’s illustration of
how the different subjects contribute to this search for understanding. Now in Exhibit 11.1 I
give MacIntyre's modern illustration of the task and ask this question: what do we gain from
an understanding of engineering, and what do we gain from an understanding of medicine?

Exhibit 11.1: MacIntyre, A. (2009). God, Philosophy, Universities. A History of the Catholic Tradition. London, Continuum, p. 175.
"From the standpoint of physics human beings are composed of fundamental particles interacting in accordance with the probabilistic generalizations of quantum mechanics. From that of chemistry we are the sites of chemical interactions, assemblages of elements and compounds. From that of biology we are multicellular organisms belonging to species each of which has its own evolutionary past. From that of historians we are intelligible only as emerging from long histories of social and economic transformations. From that of economists we are rational profit-maximizing makers of decisions. From that of psychology and sociology we shape and are shaped by our perceptions and emotions, our social roles and institutions. And from that of students of the literature and the arts it is in the exercise of our various imaginative powers that we exhibit much that is distinctive about human beings. But how do all these relate to each other? In what does the unity of the human being consist? And how should the findings of each of these disciplines contribute to our understanding of ourselves and our place in nature?"
Newman also gave a description of what the product of a liberal education would be able
to do. It is shown in Exhibit 11.2. When I examined his famous description of the attributes
derived from a university education I came to the conclusion that these were exactly what industry
sought. The complaints made by industry did not relate to the knowledge obtained in courses but to the
ability to work with others in a constrained situation in pursuit of the objectives of the firm. They
were what in the nineteen-nineties in the UK were called personal transferable skills. They were
commonly grouped into the four categories of Management and Organizing, Communication,
Teamwork and Problem Solving (creativity) [8]. The UK Employment Department sponsored
five-year projects in most universities that had as their objectives the development of these skills,
called “the skills of enterprise learning” within each department (school) of the participating uni-
versity. No subject was singled out for special treatment; it was thought that all potential graduates
would benefit from such training. A working group of the department drew up a statement of four
broad areas of learning that they thought would equip students for their working lives. Inspection
of these areas (Exhibit 11.3) suggests that the persons described by Newman would be similarly
equipped, which suggests that there is no real conflict between the aims of a liberal education and
those purported to emanate from employers. I wondered if employers would be able to cope with
employees educated in this way! I also wondered how many engineering departments could say
they achieved the goals implicit in this list with validity and justification. Exhibit 11.4 might be
judged by some to be the least liberal of the three statements. It is a group of recommendations
made in 1989 that MIT should adopt. It is clear that it seeks a broadening of engineering ed-
ucation and that the attitudes and skills that accompany a liberal education will be required if
the graduates are to obtain senior positions in management. There is an emphasis on practical
problem solving that is not to be found in the REAL statement (Exhibit 11.3), and there is more
emphasis on the "technical" as opposed to the "whole" person, whereas the direction of this text
is that it is the attitudes and beliefs of the "whole" person that inform his or her "other" persons. Since
then, during the last four or five years, Louis Bucciarelli, a Professor of Engineering and Technology
Studies at MIT, has promoted, in considerable detail, the idea of a Bachelor of Arts in
Engineering.
A quite remarkable coincidence with the generic categories established by the Sheffield
study is to be found in the skills that experts in intelligence and lay people consider
to be the parameters of intelligent behavior (Exhibit 11.5). The similarities with Newman's
statement will be evident. Of them all, Newman's statement best conveys the role of the emotions
in a person's behavior.
Sternberg, who derived these parameters, defined intelligence "as a mental activity directed
toward purposive adaptation to, and selection and shaping of, real world environments relevant to
one's life." If it is correct that individuals will in the future have to make career choices that take
them out of their career comfort frame, then a major goal of university education should be the
development of intelligent behavior. It is in that context that the objective in the MIT statement
that students should be able to think outside their own discipline makes sense, but I argue that
the epistemology of liberal education as outlined by Newman makes better sense in the context
of life-long education, to which the MIT statement also draws attention.
Exhibit 11.2: Cardinal Newman's 1852 statement of the Aims of University Education. The Idea of a University. Defined and Illustrated, pp. 177–178. 1923 impression. London, Longmans.
"But a university is the great ordinary means to a great but ordinary end: it aims at raising the intellectual tone of society, at cultivating the public mind, at purifying the national taste, at supplying true principles to popular enthusiasm and fixed aims to popular aspiration, at giving enlargement and sobriety to the ideas of the age, at facilitating the exercise of political power, and refining the intercourse of private life. It is the education which gives a man a clear conscious view of his own opinions and judgements, a truth in developing them. It teaches him to see things as they are, to go right to the point, to disentangle a skein of thought, to detect what is sophistical, and to discard what is irrelevant. It prepares him to fill any post with credit and to master any subject with facility. It shows him how to accommodate himself to others, how to throw himself into their state of mind, how to bring before them his own, how to influence them, how to come to an understanding with them, how to bear with them. He is at home in any society, he has common ground with every class, he knows when to speak and when to be silent: he is able to converse, he is able to listen: he can ask a question pertinently and gain a lesson seasonably, when he has nothing to impart himself; he is ever ready, yet never in the way: he is a pleasant companion and a comrade you can depend on; he knows when to be serious with effect. He has the repose of mind which lives in itself, which lives in the world and which has resources for its happiness at home when it cannot go abroad. He has a gift which serves him in public and supports him in retirement, without which good fortune is but vulgar and with which failure and disappointment have a charm. The art which tends to make a man all this is in the object which it pursues, as useful as the art of wealth or the art of health, though it is less susceptible of method and less tangible, less certain, less complete in its result."
Exhibit 11.3: The four broad areas of learning, together with the elements they comprise, that are important for equipping students for their working lives, as defined by the REAL working group of the UK Employment Department, 1991 [4].
Cognitive knowledge and skills. 1. Knowledge: key concepts of enterprise learning (accounting, economics, organizational behaviour, inter- and intra-personal behaviour). 2. Skills: the ability to handle information, evaluate evidence, think critically, think systematically (in terms of systems), solve problems, argue rationally, and think creatively.
Social skills: as, for example, the ability to communicate, and to work with others in a variety of roles both as leader and team leader.
Managing one's self: as, for example, to be able to take initiative, to act independently, to take reasoned risks, to want to achieve, to be willing to change, to be able to adapt, to know one's self and one's values, and to be able to assess one's actions.
Learning to learn: to understand how one learns and solves problems in different contexts and to be able to apply the styles learnt appropriately to the solution of problems.

Exhibit 11.4: Recommendations for MIT in Ch. 12, How universities should change, in Made in America: Regaining the Productive Edge by M. Dertouzos, R. K. Lester, and R. M. Solow and the MIT Commission on Industrial Productivity (1989). Cambridge, MA, MIT Press.
MIT should broaden its educational approach in the sciences, in technology and in the humanities, and should educate students to be more sensitive to productivity, to practical problems, to teamwork, and to the cultures, institutions and business practices of other countries.
Create a new cadre of students and faculty characterised by (1) interest in, and knowledge of, real problems and their societal, economic and political context; (2) an ability to function effectively as members of a team creating new products, processes and systems; (3) an ability to operate effectively beyond the confines of a single discipline; and (4) an integration of a deep understanding of science and technology with practical knowledge, a hands-on orientation and experimental skills and insight.
Where possible, revise subjects to include team projects, practical problems, and exposure to international cultures.
Encourage student teaching to instil a stronger appreciation of life-long learning and the teaching of others. Reinstitute a foreign-language requirement in the undergraduate admissions process.
Exhibit 11.5: Abilities which contribute to intelligence. Obtained from questions about the nature of intelligence, academic intelligence, and unintelligence put to experts in research on intelligence and to lay persons by R. J. Sternberg and his colleagues. Among the findings was the fact that research workers considered motivation to be an important component of intelligence whereas lay persons stressed interpersonal competence in a social context. In R. J. Sternberg (1985). Beyond IQ: A Triarchic Theory of Human Intelligence. New York, Cambridge University Press.
1. Practical problem-solving ability: reasons logically and well, identifies connections among ideas, sees all aspects of a problem, keeps an open mind, responds to others' ideas, sizes up situations well, gets to the heart of the problem, interprets information accurately, makes good decisions, goes to original sources of basic information, poses problems in an optimal way, is a good source of ideas, perceives implied assumptions and conclusions, listens to all sides of an argument, and deals with problems resourcefully.
2. Verbal ability: speaks clearly and articulately, is verbally fluent, converses well, is knowledgeable about a particular field, studies hard, reads with high comprehension, reads widely, deals effectively with people, writes without difficulty, sets time aside for reading, displays a good vocabulary, accepts norms, and tries new things.
3. Social competence: accepts others for what they are, admits mistakes, displays interest in the world at large, is on time for appointments, has a social conscience, thinks before speaking and doing, displays curiosity, does not make snap judgements, assesses well the relevance of information to a problem at hand, is sensitive to other people's needs and desires, is frank and honest with self and others, and displays interest in the immediate environment.
DISCUSSION
I have argued that university education has to be conceived as something more than a preparation
for economic activity. It has to be about life and living. It is about more than a career in the workforce.
For this to be achieved it is necessary to return to the concepts of a liberal education. I have argued
that a liberal education necessarily embraces the vocational. Elsewhere I have also argued that my
representation of the engineering (technological) process requires that the engineer (technologist)
sees the relationships between many subjects. Taught in this way, engineering is a mini-liberal
curriculum and as such a preparation for the broader curriculum. Ultimately, the aim has to be
the common good, which embraces the person qua person on the one hand and society on the other.
As I have tried to demonstrate, a close inspection of the views of industrialists about
university education suggests that they too want graduates who have received a liberal education,
1. Practical problem solving ability: Reasons logically and well, identifies connections among ideas, sees all aspects of a problem, keeps an open mind, responds to other’s ideas, sizes up situations well, gets to the heart of the problem, interprets information accurately, makes good decisions, goes to original sources of basic information, poses problems in an optimal way, is a good source of ideas, perceives implied assumptions and conclusions, listens to all sides of an argument, and deals with problems resourcefully.2. Verbal ability: Speaks clearly and articulately, is verbally fluent, converses well, is knowledge-able about a particular field, studies hard, reads with high comprehension, reads widely, deals effectively with people, writes without difficulty, sets times aside for reading, displays a good vocabulary, accepts norms, and tries new things.3. Social competence: Accepts others for what they are, admits mistakes, displays interest in the world at large, is on time for appointments, has social conscience, thinks before speaking and doing, displays curiosity, does not make snap judgements, assesses well the relevance of information to a problem at hand, is sensitive to other people’s needs and desires, is frank and honest with self and others, and displays interest in the immediate environment.117
but I wonder if they would know how to handle the products of such an education. If they deny
that that is their view then they have to give an alternative explanation as to what is wrong. In any
case perhaps you can only be oriented to the workplace in the workplace. If that is the case, and
I think it is, then employers have some responsibility for an individual's development. Whatever is
the case, properly conceived a liberal education should provide for the development of skills that
help an individual to be adaptable and flexible in order to cope with the exigencies of tomorrow's
world.
I have not tried to establish what might be called a “continuous” curriculum although it
is clear that the aims of higher education cannot be declared without attention to the school
curriculum on the one hand and the post-university curriculum on the other. I share Charette's
view that instead of "spending our scarce resources on ending a mythical STEM shortage, we
should figure out how to make all children literate in the sciences, technology, and the arts to give
them the best foundation to pursue a career and then transition into new ones," although I find
it rather too damning of the schools system. It has to be a joint enterprise, and of course engineers
are helping schools to teach engineering. The issue is whether such courses produce engineering-literate
individuals, which is a problem that bothers the Technological and Engineering Literacy
Division of the American Society for Engineering Education.
Very many students may not need a lengthy initial course in higher education institutions
but need to build on the knowledge received in them as their work life progresses, and I believe
that this has to be done in partnership with their employers. It is not merely an orientation when
they arrive in a new employment that is required. The common good demands that employers
contribute to the worker's development. This may be in how a worker is employed, or it may be
in encouraging the worker to do a course in philosophy, as happened to me when I worked in
industry!
Finally, I do believe that higher education, and in consequence engineering education, is at
a crossroads. I believe it is an opportunity for major change and as I have tried to show elsewhere
philosophers like Alfred North Whitehead provide us with frameworks to engineer that change.
NOTES
[1] The tutor was Sir Anthony Blunt, an art historian subsequently stripped of his knighthood. 110
[2] Astin, A. (1997). What Matters in College. Four Critical Years Revisited. San Francisco,
Jossey Bass. 110
"the student's peer group is the single most potent source of influence on growth and
development in the undergraduate years, and in so far as the affective development of students
is concerned, students' values and aspirations tend to change in the direction of
the dominant values, beliefs and aspirations of the peer group." Astin concluded that it is
not the institutional structure, the institution as such, that matters; rather, it is the kinds of
peer groups and faculty environments that tend to emerge under these different structures.
He found examples of large institutions in the U.S. that were trying to develop communities.
It seems self-evident that cooperative learning groups can be nascent communities. But this
draws attention to another major contradiction in the life of engineering educators, for it
is very clear from research in the U.S. that, while cooperative learning groups lead in many
circumstances to better achievement of learning outcomes than the traditional methods
associated with the lecture, there remains much resistance to their use.
[3] Clifton, J. (2011). The Coming Jobs War. New York, Gallup Press. 111
[4] Marshall, T. H. (1963). Professionalism in relation to social structure and policy. Reprinted
in Marshall, T. H., Sociology at the Crossroads and Other Essays. London, Heinemann. 112,
115
[5] The Times (October 15, 2013) reports that a new network of technical colleges is being
established for entry at fourteen years of age. Students will be taught to become chefs, health
technicians and carers but will continue to study math, English, and science. Given that
the education system fails very many students by the age of 14 this cannot be a bad thing.
However, given the British class structure, those who are very bright may not be able to
change career easily once they become established in the jobs for which they are trained.
Thus the acquisition of personal transferable skills appropriate to their capabilities is as
important for this group as it is for university students. 112
[6] In pre-neuropsychological-science terms, practical subjects such as woodwork and metalwork
may enhance the development of spatial ability. As long ago as 1964 MacFarlane
Smith argued that one of the reasons there was a shortage of engineers in Britain was the
failure of the academic curriculum to incorporate subjects that developed spatial ability.
(MacFarlane Smith, I. (1964) Spatial Ability. London, University of London Press.) 112
[7] Whitehead, A. N. (1932). The Aims of Education and Other Essays. London, Benn. 112
[8] The model was developed by the Personal Skills Unit based at the University of Sheffield.
A full description of the model and a summary of its implications for teaching are given
in Heywood, J. (2005). Engineering Education: Research and Development in Curriculum
and Instruction. Hoboken, NJ, IEEE/John Wiley, pp. 39–45. The model was derived by
Suzan Green, who analysed 10,000 job advertisements in the quality newspapers published
in Britain during a fixed period. 59% of the advertisements for graduates explicitly
contained reference to required personal characteristics. Of the remainder, a further 15%
could be inferred to require such characteristics. Of the 32 significant characteristics that
were isolated, 20 were considered to be genuine transferable skills, and these were collated into
the 4 generic categories. 113
JOURNEY 12
Questioning our Assumptions: Adaptability and Change
When Michael Youngman, Bob Oxtoby, Denis Monk, and I were analyzing the jobs that engineers
did in an organization in the aircraft industry [1], I was led to a brief study of the history of
the firm which had been published by John Gledhill [2]. This led me to believe that the firm could be
understood as a learning system, in the sense that innovations and organizations go through the
same phases as we do when we are problem solving and decision making. There was no difference
in the process. Both were goal-seeking endeavours. If learning is the process by which experience
develops new responses and reorganizes old ones, the process of bringing a product into regular
manufacture may be regarded as a process by which the organization proceeds from a relatively
disorganized state of knowledge to a relatively organized one. So in a report to our sponsors (the
Department of Employment) I sketched the diagram that subsequently appeared in our book
(Exhibit 12.1). The way in which individuals worked together in the organization would safely
be called a learning community today [3].
It was evident to me that these curves related to the structure of the workforce, a view which was
reinforced by Jack Blears, who showed that different kinds of personnel were required for the
different activities [4]. It is self-evident today that research personnel are different from production
personnel. Blears was concerned with the process from innovation to product, and pointed out that
once the product was up and running it required personnel whose essential task was care and
maintenance. They required quite different skills from the innovator. Today we have also recognized
that products require entrepreneurs if they are to be sold; they too require a different skill set.
Richard Foster of McKinsey showed how such curves could be used in business forecasting [5].
The focus here, however, is on learning.
At the time George Carter, Deryk Kelly, and I could also see the same process at
work among the students who were pursuing projects in engineering science [6]. Moreover, we
could relate the skills used to cybernetic models of problem solving that were appearing in the
educational literature of the time. The problem had to be formulated, data gathered, solutions
proposed and evaluated, one solution developed and produced, and the results evaluated and fed back into the
system, leading to the further development of the product. Of course problem solving and decision
making are not linear activities, but the models highlight certain key skills necessary for the
best solution (however best is defined). They also enable diagnostic and summative rubrics of
assessment.
Exhibit 12.1: (a) The pattern of innovation in the firm, showing progress from problem identification
to problem solved, feedback, and the next curve in the process of development, and so on;
(b) the rate of change in demand for specific types of workforce as a function of the learning/innovation curve.
[Figure: panel (a) plots levels of knowledge of products or sub-systems against the years 1940–1968, annotated with events such as the firm sponsoring a design study, the award of a design study contract, the application of new techniques to previous technology, and the effect of new materials; panel (b) plots demand against time, showing the effects of new technologies and the market on the demand for manpower.]
We found that some students had difficulty in formulating problems, sometimes they had
difficulty in generating alternative solutions, and in evaluation, or as it was called "critical review,"
they were sometimes less than critical. They were helped if they could see examples of what was
wanted. A particular skill that these bright students sometimes found difficult was to recognize
the assumptions they were making and the influence those assumptions had on both their practical work and
evaluations. At the heart of changing people are the attitudes and assumptions that people make.
Just consider how important the assumptions we make in everyday life are to political dialogue.
Today is July 18, 2016. The following are a few of the things we have had to think about
in the last week in America and Europe.
First, during the last week Britain got a new Prime Minister, its second woman in that position.
A Conservative, she upset some industrialists by promising to legislate for worker representatives
on companies' governing boards while demanding a cap on the pay of executives. On Thursday,
in France, a large lorry (truck) drove amok for a mile along the Promenade des Anglais among
thousands of people celebrating France's Bastille Day (the equivalent of American Independence
Day), killing nearly 100 men, women and children at the last count. On Friday came
news of a military coup in Turkey, only for it to fail on Saturday. Last night we heard that three police
officers had been shot in Baton Rouge, and today the British broadcasters are telling us about the
Republican Convention in Cleveland.
The media interview so-called experts on why these things happened; we jump to our
own conclusions. They are "assumptions" and sometimes they turn out to be completely untrue.
At the same time, as Journeys 2 and 3 show, they may arise from deeply held prejudices. As for
politics and voting, most of us, if we are really forced to sit back and reflect, will agree that often
our views are not well supported by facts, let alone argument. I venture to suggest that that is
somewhat of an understatement.
UNDERSTANDING THE ASSUMPTIONS WE MAKE
The importance of understanding the significance of assumptions is to be found in the recent
debate that the British have had about whether or not to leave the European Union. I should
state that I was one of those who thought we should leave, in spite of the fact that I am living
in another country in the European Union and may well be faced with difficulties. One of my
views was that it might restore achievement motivation, which seemed to me to be needed. My
American friends were agog when the vote went in favor of exit, and it surprised Europe, big
time.
The No's are called Brexiteers and those who wanted to remain, "remainers." It was a
bitterly contested fight which went on for a number of months and was characterized by fear. The
Brexiteers were accused of creating fear about the number of immigrants being admitted
into the country through false claims about the numbers likely to come in the future. On the
other side the remainers argued that the economy would collapse. They cited numerous experts
and President Obama was persuaded to come and tell the British that if they left Europe they
would go to the end of the queue of those seeking to negotiate trade deals with the U.S.
It was very difficult to determine what was true and what was false. Certainly the public
were not educated in asking questions about the economic forecasts that were bandied about.
Toward the end of the campaign I came across a little-advertised critique of forecasts made
by the Treasury (Ministry/Department of Finance) by Professor David Blake of the Cass Business
School of City University London [7], which put an entirely different gloss on matters.
It was a technical report which would have delighted engineers.
The Treasury uses a gravity model. Blake pictures it as a solar system in which the EU is the
sun and the different European countries are planets orbiting the sun. The countries with the largest
GDPs and populations become the biggest planets. Planet size can compensate for distance. Those
countries closest to the EU gain the most economic benefits. The Eurozone (the countries with
the common euro currency) is nearest to the sun. The rest of the world is farthest away. The
model shows that if the UK is moved further away from the sun it will be worse off. It would be
better off, as would all the countries in the world, if they joined the Eurozone. Professor Blake
points out that had the gravity model been used in 2000–2002, when the Eurozone was created,
it would have predicted that the UK would be better off joining, a proposition that has been shown to be false.
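For readers who want to see the shape of the reasoning, the following is a minimal sketch of the standard gravity equation of trade on which such models are built; it illustrates the general form only, not the Treasury's actual specification, and the symbols and coefficients are chosen here purely for the example.
\[
T_{ij} = G\,\frac{Y_i\,Y_j}{D_{ij}}
\qquad\text{or, as usually estimated,}\qquad
\ln T_{ij} = \ln G + \alpha \ln Y_i + \beta \ln Y_j - \gamma \ln D_{ij},
\]
where \(T_{ij}\) is the trade flow between countries \(i\) and \(j\), \(Y_i\) and \(Y_j\) are their GDPs, \(D_{ij}\) is the (geographic or economic) distance between them, and \(G\), \(\alpha\), \(\beta\), \(\gamma\) are fitted constants. Blake's planetary analogy is then direct: GDP plays the role of mass and distance the role of orbital radius, so an assumption that leaving the EU increases \(D_{ij}\) mechanically lowers predicted trade, whatever policy response follows.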
There are well-known problems with long-range forecasting. The Treasury model predicts
that by 2030 GDP per household will be lower by £4,300 (say $5,000). But the Treasury
assumes that the UK will not be able to negotiate a more favorable deal than currently exists
with the EU or the rest of the world. The Brexiteers argued that it is highly unlikely that the 5th
largest economy in the world would not be able to negotiate a better deal. They support this
argument with the fact that between the UK and Europe there is a net trade imbalance that favors
Europe. In extremis, were high tariff levels to be imposed, German automobile manufacturers
and French farmers would be seriously affected.
Professor Blake notes that a similar model predicts that if Scotland were to leave the UK
its trade with the rest of the UK would fall by 80%. Face validity suggests that this could not be
the case.
Blake's critique will be tested pretty quickly because the Treasury uses another model to
predict short-term effects. This model suggests that the "leave" vote will cause an economic shock
equivalent to 50% of the 2007–2008 global financial crisis, and that this shock will last for two
years. According to Blake it is assumed that there will be no policy response to the shock, whereas
the response to the 2008 crisis was to inject £375 billion into the economy. The evidence so far is
that the Government and the Bank of England, especially the Bank of England, are prepared to
take the necessary steps.
One other factor that was not seriously debated is the potential instability of the EU. All
sorts of shocks may hit it in the next few years. The main campaign assumes that it is a highly
stable system. We shall soon know how accurate the short-term predictions are.
The assumption of both camps is that the UK will be better off. Primarily, that means
"more wealthy." But better off means other things. Remainers say that continuing peace in Europe
is more likely if the UK remains in the EU, which is a large assumption. An equally large
assumption of the leavers is that the restoration of full sovereignty will provide better fortunes.
In all of this it is not clear that the leave vote was not made up in part of voters who were grumbling
about politicians in general. It does seem that in many parts of the world there is discontent and
a corresponding disconnect between the voters and the political classes, and that this is due to
increasing inequalities in wealth.
DIALOGUE
When it comes to educational change, M. M. Cohn and R. B. Kottkamp, who carried out a
major study of teachers in Florida in the 1980s, argued that if learning is to be made more meaningful
then the assumptions and structure of the prevailing educational system will have to be
changed [8]. That view is not different from the view of those who believe that engineering education
ought also to be changed. They cite Schaefer, who in 1967 wrote: "we can no longer afford
to conceive of schools simply as distribution centers for dispensing cultural orientations, information
and knowledge developed by other social units. The complexities of teaching in formal
classrooms have become so formidable and the intellectual demands on the system so enormous
that the school must be much more than a place of instruction. It must be a center of inquiry, a
producer as well as a transmitter of knowledge. One basic fact is our ignorance of teaching. We
simply do not know how to master the abstract knowledge and analytical skills modern society
demands. It seems necessary to transform at least some schools into centers for the production of
knowledge about how to carry out the job." That was written a long time ago but it still applies,
especially to universities. There is a great deal of knowledge in the system but it is not conveyed
to the average teacher, and the dialogue between researchers and practitioners is not great,
which is why Cohn and Kottkamp recommend assumptional dialogues. They are "opportunities to
raise awareness and examine largely unrecognized assumptions that currently underlie educational
structures and practices (in a school, university, institution or system) to generate alternatives to
them" [9]. They are something that we are not terribly good at doing, for fear of the unknown. K.
Patricia Cross, also a long time ago, argued that things will not improve until teachers see their
classrooms as laboratories for research into instruction [10], a proposition that she explained in
great detail with Tom Angelo and Mimi Steadman [11].
THE TRANSFER OF LEARNING
There was also evidence that some students had difficulty with within-subject transfer of skill
(knowledge). This is a key skill that is essential for independent learning, and therefore for continuous
professional development. Of even more importance is the ability of horizontal transfer,
which Kallenberg calls cross-domain transfer [12]. Sometimes its exercise is the result of
what Bernard Lonergan calls "insight" [13]. Lonergan repeats the well-known story thus: "of
Archimedes rushing naked from the baths of Syracuse with the cryptic cry, Eureka!" King Hiero,
it seems, had had a votive crown fashioned by a smith of rare skill and doubtful honesty.
He wished to know whether or not baser metals had been added to the gold. Archimedes was
set the problem and in the bath had hit upon the solution. Weigh the crown in water. Implicit
in this directive were the principles of displacement and of specific gravity [12, p. 3].
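As a minimal worked sketch of the principle Lonergan alludes to (the metal densities are standard values; the crown's mass and composition are assumptions invented purely for the illustration):
\[
\rho = \frac{m}{V}, \qquad V = \frac{m}{\rho}.
\]
A crown of mass \(1.0\ \text{kg}\) in pure gold (\(\rho_{\text{Au}} \approx 19{,}300\ \text{kg/m}^3\)) should displace about \(1.0/19{,}300 \approx 52\ \text{cm}^3\) of water, whereas a crown of the same mass made of, say, 70% gold and 30% silver by mass (\(\rho_{\text{Ag}} \approx 10{,}500\ \text{kg/m}^3\)) displaces roughly \(0.7/19{,}300 + 0.3/10{,}500 \approx 65\ \text{cm}^3\). The measurable difference in displaced volume is what makes the hidden variable, density, observable; that is the content of the insight.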
Kallenberg describes a problem that Cambridge students were trying to solve by geometry, which seemed,
and would seem to many of us, to be the correct approach. They had to make a perfect square
quilt by sewing together 10 squares, each with its own unique size. It was only when the students
dropped that approach and looked at it from the perspective of electric circuits (Kirchhoff's laws)
that they obtained a solution. Kallenberg gives several other examples in order to make the point
that practical reasoning may be helped by cross-domain transfer, whether it is in design or ethics.
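For the curious, the classical correspondence that makes the circuit view work (due to Brooks, Smith, Stone, and Tutte, themselves Cambridge undergraduates) can be sketched as follows; this is the general idea rather than Kallenberg's own worked example. Treat each maximal horizontal segment of the dissection as a node and each square as an edge of unit resistance joining the nodes of its top and bottom edges, with the side length \(s_k\) of the square taken as the current through that edge. Because the squares resting on an internal segment have the same total width as those hanging beneath it,
\[
\sum_{\text{squares above the segment}} s_k \;=\; \sum_{\text{squares below the segment}} s_k,
\]
Kirchhoff's current law holds automatically, and with unit resistance the potential drop across each edge also equals \(s_k\). Solving the resulting linear network equations then yields the sizes of the squares, with the width of the rectangle appearing as the total current and its height as the total potential difference.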
The point to be made here is that cross-domain transfer is at the heart of the stage of romance. It
necessarily happens but the skill is lost. If this approach to the "Human Side of Engineering" is to
be successful then, irrespective of insight, individuals have to be prepared to dive into other areas
of knowledge. We cannot afford to be afraid, yet fear so often governs our behavior, particularly,
it seems, in elections.
FEAR
The British are getting used to referenda. The recent one (June 23rd) was the second in a couple
of years. The earlier one gave the Scots the opportunity to say whether or not they wanted to leave the
UK. The commentariat seem to believe that fear played a large part in causing voters to say they
would remain in the UK, although the margin in their favor was not great. It seemed that fear was
behind the two campaigns in the recent referendum. Fear of increasing immigration contrasted
with fear of economic collapse, often expressed as collapse of the markets, coupled with fear of
the unknown.
During the referendum the remainers continually made the point that the markets would
fall and fall and do the economy irreparable damage. If voters are going to make a reasoned judgement
they are going to have to obtain a working knowledge of how markets work. Fortunately,
the 2007–2008 crisis has forced numerous analyses of what went wrong and spawned a number
of readable texts on both sides of the Atlantic. Economists seem to agree with the French
economist Thomas Piketty's view that financialization ensures that the rich get richer and the poor
get poorer. Foroohar, an American financial journalist who cited Piketty, argues that one of the
reasons for the slower growth exhibited by the American economy is financialization, and
moreover that the regulators have helped to bring this about [14, pp. 15–20].
Reich, a distinguished American civil servant and academic, asks us to understand that
there is no such thing as a “free market.” He writes, “Few ideas have more profoundly poisoned
the minds of more people than the notion of a “free market” existing somewhere in the universe,
into which government “intrudes.” In this view, whatever inequality or insecurity the market
generates is assumed to be the natural and inevitable consequence of impersonal “market forces.”
What you’re paid is simply a measure of what you’re worth in the market. If you aren’t paid enough
to live on, so be it. If others rake in millions, they must be worth it. If millions are unemployed
or their pay checks are shrinking or they have to work two or three jobs and have no idea what
they’ll be earning next month or even next week, that’s unfortunate but it’s the outcome of “market
forces.” [15, p. 3]. That the market can do no wrong is the belief of many of those on the very right
wing of the political spectrum. Interventions which many on the left believe will reduce inequality
will, so the right wing believes, distort the market.
Reich argues that the left/right views of government intervention vs. non-intervention mis-
understand the problem, for without government there can be no market. The market is inher-
ently linked to the concept of civilization, and civilization does not allow ruthless Darwinian
competition. Civilization “is defined by rules: rules create markets and governments generate the
rules” [15, p. 4]. Reich is of the view that the rules that govern the functioning of the free market
are of more importance than the size of government. He, and writers like Foroohar, believe that
the rules (or the lack of rules, that is, deregulation) have changed in favor of “Wall Street.” They have
aided the financialization that has turned a situation in which the prime purpose of financial
institutions was to invest in the real economy into one in which the system of finance has
become an end in itself. Its growth is supported by increasing debt. The problem is that “the more
debt is likely to be created in excessive quantities. And it means that the more debt there is in an
economy, beyond some level, the less stable that economy will inevitably be” [16].
From an industrial perspective, financialization led to short-termism because of the need to
keep increasing the returns to shareholders, who in the Friedman doctrine were held to be paramount.
In order to satisfy their investors, large firms have begun to behave like shadow banks. Foroohar
suggests that one of the useful things that could be done is to revisit the whole notion of the company
and who companies are for, a discussion that is occasionally held in the UK. Reich wants to
find some way of restoring the countervailing power that was lost during the last two or three
decades [17]. His most radical suggestion is that everyone should be given a minimum income
“that enables them to be economically independent and self sufficient.” He cites support from
the conservative economist F. A. Hayek. It would eliminate the need for government welfare
payments and other transfers to the poor, reduce people’s dependence on private employers,
and so restore some countervailing power. There are, of course, many other suggestions to be
found in the literature that is now emerging. But the Queen’s question remains.
THE QUEEN’S QUESTION
In 2009, on a fact-finding visit to the economics department of the renowned London School of
Economics, Her Majesty the Queen asked the question, “Why did no one see it
coming?” She had to wait for a letter to get an answer. Six years later we learn, in one of several
analyses of the crash, that macroeconomics held that the monetary workings of
the economy could be understood without reference to the banking system [16, pp. 31, 170, 245].
This had the effect that the models could not show that the financial system could be a cause of
instability. The kinds of mistakes that can be made in engineering were apparent here, as, for example,
in two of the assumptions that were made. First, it was assumed that the behavior of people is rational and
can, therefore, be predicted. Unfortunately, they did not have the benefit of Kahneman’s research,
which did not make the headlines until 2013 [18]. Second, it was assumed that financial markets
are efficient in the textbook sense. A positivist view prevailed in which mathematical precision
was taken to provide the correct answers.
Professor Buiter of Oxford University, cited by Adair Turner, said that “Complete markets
macroeconomics theories not only did not allow the key questions about insolvency and illiquidity
to be answered. They did not allow such questions to be asked” [16, p. 241]. There is some simi-
larity here with Robert Lund, who in effect was asked to ask and answer questions like a manager
rather than an engineer when agreeing to the launch of the Challenger.
Turner argues that “underlying these specific failings was also a methodological and philo-
sophical bias—a preference for mathematical precision and elegance at the expense of realism
[…]” [16, p. 243]. Is this not similar to the complaints that industrialists make about the engi-
neering curriculum? Both are accused of theoretical reasoning at the expense of practical reason-
ing.
Tomas Sedlacek [19], a historian of economics, writes, “The more important elements of a
culture or field of inquiry such as economics are found in fundamental assumptions that adherents
of all the various systems within the epoch unconsciously presuppose.” Such assumptions appear
so obvious that people do not know what they are assuming, because no other way of putting
things has ever occurred to them, as the philosopher Alfred Whitehead notes in Adventures of
Ideas [20].
This is true of any subject and engineering is no exception. This is no better illustrated
than by the questions Sedlacek asks of economics, “What are we doing? And why? Can we do
(ethically) all that we can do (technically)? And what is the point of economics? What is all the
effort for?” (Questions that I have had to try and answer in respect of teacher education). “And
what do we really believe and where do our (often unknown) beliefs come from?” If science is “a
system of beliefs to which we are committed, what beliefs are they?”
It is fundamental questions such as these that criticisms of ABET invoke, as do the criticisms
from industry that colleges do not prepare their students adequately for work. But they
have to be asked in the context of higher education more generally and the purposes which it
serves. Are we commodities or persons?
POSTSCRIPT
While the page proofs of this book were being prepared Donald Trump was elected to be the next
President of the United States, and yesterday the Italian Prime Minister lost a referendum on the
constitution. Simultaneously, a vast number of tracts were published that tried to account for these
developments, which were seen as a challenge to the neo-liberal consensus. In the literature I have read
there have been very few references to the future and our rapidly changing society, which is rather
surprising given the circumstances. The Wall Street Journal on October 13, 2016 suggested that
the dashed employment promises of the 1990s fuelled Donald Trump’s political rise, but the fact that
technology is on track to further reduce jobs, a point made in a letter in the UK’s Guardian on
November 17, was not considered in any of the debates. Whether you believe that all will turn
out well, or that unemployment will overtake the middle classes as it has the working
classes, there is little doubt that technology is changing the culture in which we live. For this we
have to thank engineers and engineering. This raises questions about the responsibilities engineers
have for the impact of their designs. These questions are primarily philosophical but relate to
the fundamental issue of who is controlling whose mind. Do they escape these responsibilities
because, for the most part, they are employees, or do they have a moral obligation to lead debates
in what is currently called “Technological Literacy?” It was a positive answer to the latter that led
to these journeys.
NOTES
[1] Youngman, M. B., Oxtoby, R., Monk, J. D., and Heywood, J. (1978). Analysing Jobs.
Aldershot, UK, Gower. 119, 128
[2] Gledhill, J. (1966). Recent developments in electric power generating equipment for air-
craft. The English Electric Journal, 21(6), p. 35. 119
[3] It was clear that the effectiveness of the organization was dependent on the interdepen-
dence of its workforce. Because roles were not defined with precision we found that even at
the lower levels individuals needed to widen the scope of their initial brief through skills of
communication and liaison in order to take some action. It appeared that communication
was a complex skill, the nature of which varied with the activities undertaken. It seemed
that persons were appointed to roles which they had to change in order to communicate.
The organization was more a system of persons-in-relation than a strict hierarchy. 119
It is in such structures that feelings of responsibility are acquired. We often allow ourselves
to confuse status and responsibility: I am as guilty of that as anyone else. To put it in another
way, we often have to seek status in order to be responsible and that may be the reason why
many persons seek to take on managerial roles. The feeling of responsibility accompanies
or generates a feeling that the person is doing something worthwhile. In this organization
almost everyone was directing and controlling, to a greater or lesser degree, and for some
it was mainly a function of themselves. Job satisfaction is, to some extent, a measure of the
degree to which an individual’s needs for direction and control are satisfied. In our study
we showed that this was as much a function of personality as it was of history, ability and
interest. What is an acceptable goal to one person will not be to another: some wanted to
be stretched, others wanted a strict routine. No two persons in a section will be exactly alike.
It may contain both aggressive people and timid people who can work together in a way
that enhances or inhibits learning. Some who are taken outside their sphere of controlling
may have to be supported.
“A person is a psycho-social system. Within the boundaries of that system most individuals
wish to be ‘organic,’ to modify a term used by Burns and Stalker (1961). They wish to be
able to take actions and decisions as well as mature. The boundaries of these psycho-social
systems arise as a function of the needs of the job and the needs of the person. When these
are matched for each person in the organization a hierarchic system becomes structured
by individuals who are organic within their own system, and grow in it in such a way that
the organization’s goals are achieved when it also becomes organic. Both systems have to
be self-adjusting and when they are doing that the organization is learning.”
(These paragraphs are based on pages 114 and 115 of Youngman et al. [1]. The third para-
graph is verbatim.)
Burns, T. and G. Stalker (1961). The Management of Innovation. London, Tavistock.
[4] Jack Blears, Director of the Division of Industrial Studies, Faculty of Engineering Science,
the University of Liverpool. 119
Youngman et al. [1] write, “when knowledge is relatively organized and the products are
in manufacture and subject to minor improvement then there will be little demand for
manpower. However, during the innovatory stage or period when knowledge is relatively
disorganized, there will be a demand for the kind of manpower which can structure knowl-
edge in new ways. Further analysis of the innovatory process suggests that when develop-
ments in applied technologies are applied to existing products, they may lead to a decrease
in demand for manpower” […] (p. 103).
[5] See Chapter 3 of Heywood, J. (2016). The Assessment of Learning in Engineering Education.
Policy and Practice. Hoboken, NJ, IEEE/Wiley. 119
[6] Foster, R. (1986). Innovation: The Attacker’s Advantage. London, Macmillan. 119
The S curve is a learning curve that describes the effort put into improving a product or a
process, and the results the company obtains from that investment. In my diagram (Ex-
hibit 12.1) each new adaptation represents an additional cost.
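The S-curve relationship described above can be illustrated numerically. The short Python sketch below is not from Foster; it simply uses a logistic function, with arbitrary parameter values, as a stand-in for the effort-versus-results curve: early effort yields little, the middle of the curve yields rapid gains, and returns flatten as the product or process approaches its ceiling.

import math

def s_curve(effort, ceiling=100.0, midpoint=5.0, steepness=1.0):
    """Illustrative logistic S curve: results obtained for a given cumulative effort.

    The ceiling, midpoint, and steepness are arbitrary assumptions chosen only
    to show the shape of the curve.
    """
    return ceiling / (1.0 + math.exp(-steepness * (effort - midpoint)))

for effort in range(0, 11, 2):
    print(f"effort {effort:2d} -> result {s_curve(effort):5.1f}")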
[7] Blake, D. (2016). Measurement without Theory: On the extraordinary abuse of economic
models in the EU Referendum debate. London, Cass Business School, City University London.
http://www.pensions-institute.org/BlakeReviewsTreasuryModels.pdf. 122
[8] Cohn, M. M. and R. B. Kottkamp (1993). Teachers: The Missing Voice in Education. Albany,
NY, State University of New York Press. 123
[9] Ibid. p. 267. 123
[10] Cross, K. P. (1986). A proposal to improve teaching or what taking teaching seriously
should mean. AAHE Bulletin, 9, p. 14. 123
[11] Angelo, T. and K. P. Cross (1993). Classroom Assessment Techniques. San Francisco, Jossey
Bass. 123
Cross, K. P. and M. Steadman (1996). Classroom Research: Implementing the Scholarship of
Teaching. San Francisco, Jossey Bass.
[12] Kallenberg, B. J. (2013). By Design: Ethics, Theology, and the Practice of Engineering. Cam-
bridge, UK, James Clarke. 123, 124
[13] Lonergan, B. J. F. (1957). Insight. A Study of Human Understanding. London, Darton,
Longman and Todd. 124
[14] Foroohar, R. (2016). Makers and Takers: The Rise of Finance and the Fall of American Busi-
ness. New York, Crown (Random House/Penguin). 124
[15] Reich, R. B. (2015). Saving Capitalism: For the Many, Not the Few. New York, Alfred
Knopf/Random House Penguin. 125
[16] Turner, A. (2016). Between Debt and the Devil: Money, Credit, and Fixing Global Finance.
Princeton, NJ, Princeton University Press. 125, 126
One of the drivers of the 2007–2008 collapse was increasing inequality. “Richer people
tend to spend a lower proportion of their income than do middle income and poorer people.
Increasing inequality will therefore depress demand and economic growth unless the in-
creased savings of the rich are off-set by increased borrowing among middle or low income
earners. In an increasingly unequal society, rising credit and leverage become necessary to
maintain economic growth but lead inevitably to eventual crisis.”
One of the many contradictions in Conservative party policy in the UK is that, following
its belief that people need to make their own choices, it continues to allow increases in
university fees, which, in a period of flat-lining incomes, simply causes people to borrow
more or opt out, which does not help social mobility at all.
[17] As for example, common to the UK and the U.S. is the loss of trade union power. Reich
writes “What is the appropriate balance between stimulating new inventions and invest-
ments that could possibly improve the quality of life for millions of people and not concen-
trating too much wealth in the hands of the few, thereby impoverishing almost everyone
else? There is no correct answer. But with adequate countervailing power we could have
more confidence in the ability of our political economic system to decide. We could better
trust that the resulting distribution of income and wealth represents a trade-off society is
willing to make” [p. 212]. 125
[18] Kahneman, D. (2013). Thinking, Fast and Slow. New York, Farrar, Straus and Giroux.
126
[19] Sedlacek, T. (2011). Economics of Good and Evil: The Quest for Economic Meaning from
Gilgamesh to Wall Street. Oxford, Oxford University Press. 126
[20] Whitehead, A. N. (1985 ed.). Adventures of Ideas. New York, Free Press. 126
Author’s Biography
JOHN HEYWOOD
John Heywood is a Professorial Fellow Emeritus of Trinity College Dublin-University of Dublin.
He was given the best research publication award of the Division for the Professions of the Amer-
ican Educational Research Association for Engineering Education: Research and Development in
the Curriculum and Instruction in 2006. Recently, he published The Assessment of Learning in Engi-
neering Education: Practice and Policy. Previous studies among his 150 publications have included
Learning, Adaptability and Change: The Challenge for Education and Industry, and the co-authored
Analysing Jobs, a study of engineers at work. He is a Fellow of the American Society for En-
gineering Education, a Fellow of the Institute of Electrical and Electronics Engineers, and an
Honorary Fellow of the Institute of Engineers of Ireland. In 2016 he received the Pro Ecclesia et
Pontifice award from the Pope for his services to education.
Author Index
Abercrombie, M. L. J., 22
Adamson, N. M., 89
Alexander, C. N., 98, 102
Allport, G. W., 24
Anderson, L. W., 7
Angelo, T., 123, 129
Anscombe, E., 61
Aquinas, T., 78
Argyris, C., 107
Aristotle, xxiii, 32, 59, 72, 78
Armstrong, N., 53
Astin, A., 86, 9, 117
Austen, J., 59
Ayer, A. J., 2
Barnes, L. B., 43, 47, 48, 49, 52
Bartlett, F. C., 24
Bassett, C. L., xxiii
Belbin, R. M., 41, 48, 107
Berger, P., 33, 35, 99
Bernstein, B., 27
Bey, C., 47
Blake, D., 122, 128
Blau, J., 90
Blears, J., 119, 128
Bloom, B., 6
Bloom, G., 89
Blunt, A., 117
Bocong, Lee, 86, 91
Boomer, G., 34, 36
Bostwick, W. D., 3
Bowen, W. R., 14, 61, 62, 63, 71, 72, 73, 74,
75, 76, 109
Bruner, J., 96, 102
Brynjolfsson, E., 81, 87, 88
Buber, M., 73, 78
Bucciarelli, L. L., 6, 7, 8, 114
Buiter, 123
Burns, Robert, 38
Burns, T., 37, 42, 47, 73, 78, 128
Buts, W. P., 89
Carroll, S., 89
Carter, G., 100, 119
Catalano, R. F., 56
Chambless, D. F., 92
Champagne, A. B., 27
Charette, R. N., 82, 90, 117
Cheville, A., 66, 69, 71, 75, 79, 94
Clifton, J., 81, 83, 89, 111, 118
Cohn, M. M., 123, 128
Collinson, D., 8
Cook-Greuter, S., 107
Copleston, F., 67
Costello, J. E., 78
Criado-Perez, Ms, 59
Cross, K. P., 108, 123, 129
Culler, A. D., 11, 18
Culver, R. S., 96, 97, 100, 102
Dangur, V., 108
Darwin, C., 59
Davis, M., 49, 53, 55, 56, 57, 59, 61, 62
Dent, N., 74
Dent, N. J. H., 79
Dertouzos, M., 117
Devitt, F., 99
Dewey, J., 8
Diana, Princess, 72
Dinklespiel, J. R., 27
Driver, R., 31, 34
Drucker, P., 39, 44, 48, 49
Eggleston, J., 94, 99
Ellis, R. A., 90
Evans, D., 98
Figueiredo, J., 91, 99
Finnis, J., 65
Fischer, K. W., 104
Fitch, P., 957, 102
Forge, J., 72, 76
Foroohar, R., 125, 129
Foster, R., 117, 128
Freeman, J., 41, 45
Freire, P., 94
Giardina, R., 16, 18
Gilson, E., 78
Gledhill, J., 127
Goldberg, D. E., 28
Goldstein, H., 14, 15
Goold, E., 99
Gosch, P., 74, 78, 79
Greenleaf, R. K., 43, 50
Grosch, P., 66, 68
Gunstone, R. F., 27
Hackos, J. T., 96, 100
Hawkins, J. D., 56
Hayek. F. A., 125
Hayward, T., 71
Her Majesty the Queen, 125
Herkert, J. R., 60, 66
Herman, G., 16, 18
Herrnstein Smith, B., 33, 35
Hesseling, P., 2, 38, 47
Heywood, J., xii, 7, 17, 18, 26, 28, 35, 47, 48,
50, 69, 76, 79, 88, 100, 106, 108,
118, 127, 128
Hickey, L., 84, 91
Hira, R., 14, 15
Hirst, P., 99
Hoare, C., 107
Hodgson, J., 48
Hodgson, P., 48
Honderich, T., 66
Hoose, B., 29
Humble, B., 41, 45
Hurley, P., 62, 65
James. W., 8
Jewkes, J., 89
Joad, C. E. M., 2, 5
Jones, R. G., 29
Kallenberg, B. J., xxiii, 65, 66, 59, 78, 123,
124, 129
Kant, I., 62, 67
Keegan, R., 107
Kelly, D. T., 89, 100, 119
Kahneman, D., 126, 130
Kilminster, J., 55
King, P. M., 102, 103, 104, 105
Kirchhoff, 124
Kitchener, K. S., 102, 103, 104, 105
Klopfer, L. E., 27
Kottkamp, R. B., 123, 128
Krupczak, J., 4, 7
Kuhen, D., 82, 90
Labov, W., 27
Langer, E. J., 98, 107
Lécuyer, C., 48
Lerman, 34
Lester, R. K., 117
Levinas, E., 73, 78
Lindsay, B., 82, 89
Lipmann, 106
Lonergan, B., 124, 129
Lovin, R., 91
Lovin, R. W., 65, 69
Lowell, B., 82, 89
Luckmann, T., 33, 35, 99
Lund, R., 54, 55, 126
MacAfee, A., 81, 87, 88
Macfarlane Smith, I., 118
MacIntyre, A., 74, 75, 78, 79, 113
Macmurray, J., 23, 26, 44, 49, 53, 56, 61, 62,
72, 73, 77, 84, 85
Madigan, C., 91, 92
Magee, B., 7, 8
Manning, B., 59, 64
Marra, R., 102
Marshall, T. H., 111, 118
Mason, J., 55
Mason, M., 36
Matthews, G., 40, 48
Matthews, G. B., 101, 102
Matthews, M., 32, 35
Maunter, T., 66
McAuliffe, G., 98, 107
McCarthy, N., 28
McGregor, D., 39, 46
McInerney, R., 67, 78
Michelfelder, D. P., 28
Milburn, A., 111
Mill, J. S., 61
Miller, M., 107
Miller, R., 32, 35
Mina, M., 8, 13, 71, 94, 98, 109
Mitcham, C., 28
Monk, J. D., 21, 48, 91, 119, 127
Moon, J., 68
Moore, G., 7
Morrison, K., 36
Mozart, 33
Muller, O., 108
Munns, D. P., 53, 54, 57
Newman, J. H., 11, 13, 18, 94, 95, 99, 110,
112, 113, 114
Obama, President, 121
Oldham, V., 35
Olds, B., 32, 35
Omidvar, I., 8
Oxtoby, R., 28, 48, 91, 119, 127
Palmer, B., 107
Pascarella, E. T., 95, 100
Paton, H. J., 68
Perry, W., 96, 97, 98, 102, 103, 104
Peters, R. S., 1
Piaget, J., 31, 96, 100, 101
Pierce, C. S., 8
Piketty, T., 124
Pollock, H. Montagu, 16
Pritchard, J., 11, 12
Queen Elizabeth II, 62
Rawls, J., 63, 64, 68
Rees, M. (Lord), 52
Reich, R., 88, 124, 125, 129
Riley, D., 76, 79
Roller, D. R., 16, 18
Rorty, R., 8
Russell, B., 7
Russell, J., 81, 88
Salzman, H., 82, 89
Sanford, N., 47
Schaefe, Dr., 99, 108
Schein, E., 47
Schön, D., 107
Sedlacek, T., 126, 130
Selbourne, D., 74
Snowden, E., 60, 64
Solow, R. M., 117
Stalker, G., 42, 47, 73, 78, 128
Stansfield, R., 9
Steadman, M., 123, 129
Sternberg, R. H., xxiii, 114, 116
Sullins, J. P., 26, 28, 29
Susskind, D., 87, 92
Susskind, R., 87, 92
Takacs, C. G., 92
Teitelbaum, M. S., 82, 87, 89, 92
Terenzini, P. T., 95, 100
Thomas, B., 91, 92
Torbert, W. R., 107
Trevelyan, J., xxii, 46, 91, 98, 99, 107
Trump, D., 126
Turner, A., 126, 129
Utschig, T. T., 98, 107
Vardy, P., 32, 35, 66, 68, 74, 78, 79
Vernon, P., 24
Vesilind, A., 75, 76, 77
Vincenti, W. G., xxii, 54, 57
Wadwha, V., 82, 90
Whitehead, A. N., xxii, 111, 117, 118, 126,
130
Williams, B., 91, 99
Winkett, L., 60
Wittgenstein, L., 2, 6, 7
Woditsch, G., 16, 18
Wood, P. K., 104
Woods, D., 97, 102
Woodson, T. T., 16
Wulf, W. A., 60, 66
Yokomoto, C., 3
Youngman, M. B., 28, 48, 91, 119, 127, 128
Zachary, G. P., 90
Subject Index
ABET, 3, 7
Abortion, 59
Adult learning, 93
Affective domain, 93, 99
Aircraft design, 54
Ambiguity, 38
Analytic philosophy, 35
Apperception, 31
ASME, 66
Aspirational ethic, 71ff
Assumptional dialogues, 123
Assumptions, 121, 122, 123ff
Bologna Agreement, 7
Bowen’s aspirational ethic, 75
British Broadcasting Corporation, 60
Bruner’s theory of cognitive development,
102
Bureaucracy, 60
Canterbury cathedral, 18
Capitalist system, 81
Challenger, 54, 55, 56
Chilcot report, 29
Choc des opinions, 21, 26
Codes of conduct, 60ff, 66, 72
College (impact of ), 92
Colleges of Advanced Technology, 71
Colorado School of Mines, 96
Common good, 84, 110
Communication, 15, 22, 23
Community (ies), 15, 26, 43, 53, 54, 56, 85,
86
Company (concept of ), 84, 85
Complex learner, 40
Complexity theory, 36
Concepts, 16, 24
Confidentiality, 60
Consequentialism, 61
Constructivism, 31, 32, 33, 35
Continuing Professional and Personal Development (CPPD), 25
Contractualism, 61, 63, 66
Creativity, xxii, 66
Culture (organizational), 25
Curriculum (models of ), 94, 99, 100
Curriculum (negotiated), 34
Déformation professionnelle, 27
Dependence, 56
Design (social process), 7
Development (student), 93, 96ff
Dialect, 27, 28
Dripping water exercise, 11, 13
Duty, 62, 67
Education system(s), 86
Empathy Wall, 18
Engineering, 4, 71, 72, 80, 126
Engineering, 126
Engineering Council, 65
Engineering curriculum, 94
Engineering education, 83
Engineering literacy, 4, 16, 117
Engineering science exam, 94, 101
Engineers, 83
Enterprise learning (skills of ), 113, 115
Episteme (scientific knowledge), 79
Epistemology, 13, 29
Ethics, 2, 8, 29, 55, 59, 62, 64, 71, 109, 124
Ethics (of engineering), 56
Ethics-aspiration, 64
Expectancy, 18
Experts (expertise), 121
Fear, 124
Financialization, 124, 125
Frames of reference, 24
General education, 112, 113, 114
Genius loci, 94, 95
German dual system, 92
Goods, 75
Higher education, 86, 93, 110, 111
IBM, 13, 14, 15, 16, 1, 54
IEEE, 64, 68
Illusion(s), 21, 22
Images course, 16, 19
Imperatives, 67
Inequality (ies), 123, 125
Insight, 124
Intelligence, 113
Intelligence (Academic), xxiii
Intelligence (Nous), 79
Intelligence (practical), xxiii
I-Thou interactions, 73
Just war, 29
King and Kitchener’s theory of development,
98, 103, 104, 105
Knowledge, 128
Labour arena, 91
Language (public and formal), 27
Language (s), 1, 2, 5, 7
Leadership, 43, 44
Learning, 24, 31
Learning organizations, 56, 119, 208, 125
Liberal education, 26, 60, 112, 113, 114
Little College, 16
Logical positivism, 2, 5, 8
McDonald’s, 21
Man (views of ), 11
Manager/Management, 43, 47, 48
Manager, 55
Market, 48, 124, 125, 126
Meaning, 1, 2, 3
Meetings, 41
Misperception(s), 22
MIT, 113, 115
Mobility, 111, 112
Moral principles, 59
Morality, xxiii, 61, 62
National Academy of Engineering, 60
Natural law, 74
Organization (types of ), 48
Organizational structure & performance, 43
Orientations (to learning and work), 47, 50
Outcome(s), 3
Peace engineering, 77
Peer group, 8, 117, 118
Perceiving, 15
Perception, 17, 18, 22, 23, 24, 25
Perception (Exercise in), 9
Perry’s theory of development, 97, 103
Person (personal), 53, 128
Personal development, 86
Personal relations, 72, 73
Personal transferable skills, 113, 118
Philosophy, 1, 40
Philosophy (for young children), 41, 100, 104
Philosophy of engineering, 31
Piaget’s theory, 100, 101, 102
Practical problem solving ability, 116
Pragmatism (Pragmatic), 8
Preconceptions, 24
Prejudice (bias), 25
Problem formulation, 121
Problem(s), 22
Professional, 56, 66, 69, 77
Professional development, 86, 98, 99
Professional development (Torbert’s frames),
96, 105, 106
Prudence (Practical Wisdom), 74, 78
-Psychological, 31, 32, 33
Psycho-social system, 128
Public relations, 73
Qualities, 39, 40
Radio Telescopes, 53
Realism, 32, 33
Reality (perception of ), 33
Reason (practical), 67
Reason (pure), 67
Reasoning (practical), xxiii
Reflection (reflective thinking), 23, 40, 98, 103ff
Reflective Judgment Interview, 105
Relationships, 44
Relationships (personal), 23
Research (engineering education), 98
Roboethics, 25, 26, 27, 28
Role(s), xxi, xxii, 39, 42, 49, 85, 88
Royal Academy of Engineering, 75
Royal Astronomical Society, 54
Schema (Schemata), 24, 26
Self, 23, 31, 44
Shareholders, 84
Short-termism, 125
Social change, 81, 83, 86
Social competence, 116
Social justice, 76
Social System, xxii, 37
Society, 109
Society and technology, 109
-sociological, 33, 35
Socratic questioning, 32
Spatial ability, 118
STEM, 80, 86, 87, 88, 117
Systems, 43, 47, 48, 49
Tacit knowledge, xxiii
Taxonomy of educational objectives, 6
Teams (team behaviour), 41, 42
Technical coordination, 47
Technicians, 71
Technological literacy, 4, 16
Technology, 4, 81, 86, 109
Telepistemology, 26
Telerobotic warfare, 26, 28, 29
The Queen’s question, 125, 126
Theory X, 39
Theory X, 94
Thinking, 11, 13, 15, 18
Thinking, 76, 77
Torbert’s frames of professional development,
107, 108
Trade Union power, 129
Transdisciplinary course, xxii
Transfer of learning, 131, 126
Truth, 32, 122
Truth (true), 122
United Nations, 75
Universities, 34
University (Idea of ), 95
University education, 116, 117
University technical colleges (U.K.), 118
Utility, 67
Values, 4, 25, 49, 50, 76
Verbal ability, 116
Verbal deprivation, 27
Virtue(s), 59, 74, 75, 78
Vocational education, 112, 113, 114
Weapons research, 72
Whistleblowing, 64
Wisdom, xxiii, 79
Wisdom (practical), 79
Work, 38
Workforce (jobs), 81ff, 84ff, 119
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=6813709.pdf&bkn=6813708&pdfType=book
Series ISSN: 1939-5221
SYNTHESIS LECTURES ON ENGINEERING
A Little Book on Teaching
A Beginner’s Guide for Educators of Engineering and Applied Science
Steven F. Barrett, University of Wyoming
illustrated by J. Barrett, Closer to the Sun International, Inc.
It is often a challenging and overwhelming transition to go from being a student to being a teacher. Many new faculty members of engineering and science have to make this dramatic transition in a very short time. In the same closing months of your Ph.D. program you are trying to complete your research, finish and defend your dissertation, find a job, move to a new location, and start a new job as a faculty member. If you are lucky, you’ve had the opportunity to serve as a teaching assistant and possibly have taught a university-level course. If you have served as a research assistant, your teaching opportunities may have been limited. Somehow, in this quick transition from student to teacher, one is supposed to become a good teacher and be ready for the first day of school.
This book is intended as a basic primer on college-level teaching and learning for a new faculty member of engineering and applied science. New faculty members in other disciplines will find much of the information applicable to their area of expertise as well. First and foremost, this book is about learning and teaching. However, it also provides helpful information on related topics such as mentorship, student challenges, graduate students, tenure, and promotion and accreditation. This book is also intended as a reference for seasoned professionals. It is a good reference for those mentoring the next generation of college educators.
ISBN: 978-1-60845-868-4
A Little Book on Teaching
A Beginner’s Guide for Educators of Engineering
and Applied Science
Synthesis Lectures on
Engineering
A Little Book on Teaching: A Beginner’s Guide for Educators of Engineering and Applied
Science
Steven F. Barrett
2012
Engineering Thermodynamics and 21st Century Energy Problems: A Textbook Companion
for Student Engagement
Donna Riley
2011
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo
2011
Systems Engineering: Building Successful Systems
Howard Eisner
2011
Fin Shape Thermal Optimization Using Bejan’s Constructal Theory
Giulio Lorenzini, Simone Moretti, and Alessandra Conti
2011
Geometric Programming for Design and Cost Optimization (with illustrative case study
problems and solutions), Second Edition
Robert C. Creese
2010
Survive and Thrive: A Guide for Untenured Faculty
Wendy C. Crone
2010
Geometric Programming for Design and Cost Optimization (with Illustrative Case Study
Problems and Solutions)
Robert C. Creese
2009
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey and Jeffrey W. Holmes
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Analog Multimedia
Explorations
Lina J. Karam and Naji Mounsef
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Digital Multimedia and
Robotics Explorations
Lina J. Karam and Naji Mounsef
2008
CAD/CAM of Sculptured Surfaces on Multi-Axis NC Machine: The DG/K-Based
Approach
Stephen P. Radzevich
2008
Tensor Properties of Solids, Part Two: Transport Properties of Solids
Richard F. Tinder
2007
Tensor Properties of Solids, Part One: Equilibrium Tensor Properties of Solids
Richard F. Tinder
2007
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
2007
Project Management for Engineering Design
Charles Lessard and Joseph Lessard
2007
Relativistic Flight Mechanics and Space Travel
Richard F. Tinder
2006
Copyright © 2012 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations in
printed reviews, without the prior permission of the publisher.
A Little Book on Teaching: A Beginner’s Guide for Educators of Engineering and Applied Science
Steven F. Barrett
www.morganclaypool.com
ISBN: 9781608458684 paperback
ISBN: 9781608458691 ebook
DOI 10.2200/S00406ED1V01Y201203ENG017
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING
Lecture #17
Series ISSN
Synthesis Lectures on Engineering
Print 1939-5221 Electronic 1939-523X
A Little Book on Teaching
A Beginner’s Guide for Educators of Engineering
and Applied Science
Steven F. Barrett
University of Wyoming
illustrated by J. Barrett
Closer to the Sun International, Inc.
SYNTHESIS LECTURES ON ENGINEERING #17
Morgan & Claypool Publishers
ABSTRACT
It is often a challenging and overwhelming transition to go from being a student to being a teacher.
Many new faculty members of engineering and science have to make this dramatic transition in a
very short time. In the same closing months of your Ph.D. program you are trying to complete your
research, finish and defend your dissertation, find a job, move to a new location, and start a new
job as a faculty member. If you are lucky, you’ve had the opportunity to serve as a teaching assistant
and possibly have taught a university-level course. If you have served as a research assistant, your
teaching opportunities may have been limited. Somehow, in this quick transition from student to
teacher, one is supposed to become a good teacher and be ready for the first day of school.
This book is intended as a basic primer on college-level teaching and learning for a new
faculty member of engineering and applied science. New faculty members in other disciplines will
find much of the information applicable to their area of expertise as well. First and foremost, this
book is about learning and teaching. However, it also provides helpful information on related topics
such as mentorship, student challenges, graduate students, tenure, and promotion and accreditation.
This book is also intended as a reference for seasoned professionals. It is a good reference for those
mentoring the next generation of college educators.
KEYWORDS
teaching, engineering education, learning, new faculty, college-level teaching, instruc-
tion, mentorship, tenure and promotion
Contents
Preface . . . . xi
Acknowledgments . . . . xv
List of Figures . . . . xvii
1 What makes a Great Teacher? . . . . 1
1.1 Overview . . . . 1
1.2 Welcome . . . . 1
1.3 What makes a great teacher? . . . . 1
1.3.1 The Atlantic, 2010 . . . . 1
1.3.2 What Great Teachers Do Differently . . . . 2
1.3.3 U.S. Professors of the Year . . . . 5
1.3.4 What Makes a Great Teacher? . . . . 5
1.3.5 Top Five Character Traits of Superior Teachers . . . . 6
1.3.6 What makes a great teacher—take two! . . . . 6
1.4 Pulling it all together: a synthesized model . . . . 7
1.5 Great teachers as role models . . . . 9
1.6 Summary . . . . 13
References and Further Reading . . . . 13
1.7 Chapter Activities . . . . 15
2 A little learning theory . . . . 17
2.1 Overview . . . . 17
2.2 The physiological basis of learning . . . . 17
2.3 Levels of learning — Bloom’s Taxonomy . . . . 18
2.4 Personality Types . . . . 20
2.5 Jung, Myers and Briggs . . . . 21
2.6 Felder and Silverman: Bridging the gap between learning and teaching styles . . . . 21
2.7 Summary . . . . 24
References and Further Reading . . . . 25
2.8 Chapter Activities . . . . 26
3 Preparation for the first day of classes . . . . 27
3.1 Overview . . . . 27
3.2 The student as a customer . . . . 27
3.3 What did you want from a teacher when you were a student? . . . . 27
3.4 Course development . . . . 29
3.4.1 Accreditation . . . . 29
3.4.2 Syllabus . . . . 32
3.4.3 Textbook selection . . . . 33
3.4.4 Lesson plans . . . . 34
3.5 Other items to consider . . . . 36
3.6 Establishing good student relationships . . . . 37
3.7 Conducting the lecture . . . . 38
3.8 Challenges . . . . 38
3.9 Available resources . . . . 40
3.10 Summary . . . . 41
References and Further Reading . . . . 41
3.11 Chapter Activities . . . . 42
4 Assessment . . . . 43
4.1 Overview . . . . 43
4.2 Assessment of your students . . . . 43
4.3 Assessment of you . . . . 45
4.4 Self assessment . . . . 47
4.5 Assessment of your course . . . . 48
4.6 Summary . . . . 50
References and Further Reading . . . . 50
4.7 Chapter Activities . . . . 51
5 Beyond the first day . . . . 53
5.1 Mentoring . . . . 53
5.1.1 Traits of a good mentor . . . . 53
5.1.2 Finding a good mentor . . . . 53
5.1.3 Being a good mentor . . . . 54
5.2 Teaching Rewards . . . . 55
5.3 Finding Balance . . . . 55
5.4 Where to go from here? . . . . 55
5.5 Summary . . . . 57
References and Further Reading . . . . 57
5.6 Chapter Activities . . . . 58
A Sample syllabus . . . . 59
B Personal Worksheet . . . . 69
Author’s Biography . . . . 85
Index . . . . 87
Preface
It is often a challenging and overwhelming transition to go from being a student to being a teacher.
Many new faculty members of engineering and science have to make this dramatic transition in a
very short time. In the same closing months of your Ph.D. program you are trying to complete your
research, finish and defend your dissertation, find a job, move to a new location, and start a new
job as a faculty member. If you are lucky, you’ve had the opportunity to serve as a teaching assistant
and possibly have taught a university-level course. If you have served as a research assistant, your
teaching opportunities may have been limited. Somehow, in this quick transition from student to
teacher, one is supposed to become a good teacher and be ready for the first day of school.
What is this book about? This book is intended as a basic primer on college-level teaching
and learning for a new faculty member of engineering or applied science. New faculty members in
other disciplines will find much of the information applicable to their area of expertise as well. First
and foremost, this book is about learning and teaching. However, it also provides helpful information
on related topics such as mentorship, student challenges, graduate students, tenure, and promotion
and accreditation. This book is also intended as a reference for seasoned professionals. It is a good
reference for those mentoring the next generation of college educators.
Chapter 1 investigates teaching, characteristics of great teachers, and reviews some of the
great teachers of the past and present. The chapter also provides some self-exploration exercises to
answer such questions as what characteristics of teachers in your past made them memorable and
effective and what kind of teacher do you want to be?
Chapter 2 reviews some of the key theories of teaching and learning from the literature. As
one begins their teaching career, it is important to be aware of the theoretical underpinnings of the
teaching profession. By necessity, only a few theories are discussed. The theories are used to develop
a series of practical techniques that can be used in the classroom to enhance student learning.
Chapter 3 provides practical pointers for preparing for the first day of class including syllabus
preparation, selection of textbooks, preparing lesson plans and teaching materials, and establishing
a good classroom dynamic.
Chapter 4 discusses the critical areas of assessment of students and students’ assessment of
teachers. It also provides suggestions on how to assess your course and evaluate how effective it is in
supporting student outcomes.
Chapter 5 looks beyond the first day of class and delves into the areas of effective mentoring,
the rewards of teaching and some practical guidelines of balancing all the demands placed upon the
new educator. The chapter concludes with suggestions on how to continue to be a good and effective
educator.
A little bit about the author. I do not pretend to be an expert educator. I have taught for
many years and in various venues, however, I consider myself a lifetime student and practitioner of
the teaching profession.
My teaching career began rather inauspiciously. While still an undergraduate student at the
University of Nebraska at Omaha, my home church was having a difficult time finding a teacher
for a large class of active sixth graders. I volunteered to teach this class and quickly discovered I was
in over my head! I wanted to teach the students about spiritual matters by carefully studying lesson
materials. This was not a good approach for active-minded, spirited sixth graders. After several
frustrating weeks of feeling like I was making little progress, I changed my approach. I planned a
lot of varied activities to engage the students in lively and applied discussions of spiritual topics. I
challenged them to determine methods to apply these techniques to their daily lives. The students
and I became a close knit group and we covered a lot of spiritual ground that year. I enjoyed the
experience so much I continued to teach the class for several more years. That was over 30 years ago.
I have continued to teach challenging classes ever since.
After completing my undergraduate studies in 1979, I was commissioned in the United States
Air Force. I was initially assigned to a missile base in northern Montana. I had a knack for describing
complex missile tasks to my fellow crewmembers. You see, teaching rambunctious sixth graders is
not much different than teaching rambunctious young Air Force (AF) officers. Both groups demand
a high level of energy and creative teaching techniques. After being on missile crew for about a year,
I was assigned to the missile instructor shop where I was to write the monthly training package for
the crew force. After doing this for about a year, I was promoted to the Senior Instructor Crew.
In this role, my crew partner and I were responsible for the monthly training requirements of all
instructors and all crew members—approximately 150 talented, young officers.
Following this assignment, I served at the 4315 Combat Crew Training Squadron in Cali-
fornia. In this position, I taught new AF officers the intricacies of missile operations and also the
awesome responsibility with which they were entrusted. After serving there for two years the AF
transferred my family back to Omaha, Nebraska in a non-teaching assignment. I could not bear
to be away from the classroom, so I volunteered to teach a Confirmation class for 6-8th graders at
my home church. I also completed my Master’s degree which allowed me to serve as an adjunct
professor at my alma mater, the University of Nebraska at Omaha.
My teaching dreams came true in 1988 when I was selected to teach at the United States
Air Force Academy in Colorado Springs, Colorado. This undergraduate institution is charged with
transforming high school graduates into dedicated, disciplined Air Force officers. I served at the
Academy from 1988 until my retirement from active duty Air Force service in 1999. While at the
Academy I served in a number of positions of increasing responsibility and academic rank in the
Department of Electrical and Computer Engineering. I also taught part time at night at a local
university primarily intended for adult students. I retired from the Air Force and the Academy in
1999 as a full professor and the deputy department head.
I was very excited about the prospect of starting a second academic career as an assistant
professor. I was thrilled to be offered a tenure-track position at the University of Wyoming in 1999.
Since arriving at UW, I have taught at all levels: from middle school and high school recruiting
courses; a freshman orientation course; a sophomore circuits course; and a wide variety of senior and
graduate-level design courses. I was promoted to associate professor and received tenure in 2005
and was promoted to full professor in 2011. I now serve as Associate Dean for Academic Programs
in the College of Engineering and Applied Science. However, by choice, I maintain a full teaching
load (and then some). I provide this background to establish credibility as a seasoned (but not an
expert) educator. As I mentioned before, I am a lifetime student of good teaching practices.
My approach to teaching has not changed much in 30+ years. My goal is to keep students
actively engaged and committed to their own education. I believe that students learn best when they
are actively engaged in exciting activities.
This book contains information on effective teaching practices, from the literature along with
lessons I have learned along the way. I’ve also been blessed to have outstanding teachers throughout
my education and have also worked with a number of gifted educators. I have tried to capture what
I learned from them in these pages as well.
What this book is not. In the book I have purposely avoided involved discussions of learning
and teaching theory. I consider this body of work to be of the utmost importance and hold it in the
highest regard. Key theoretical concepts are discussed in Chapter 2. This brief chapter does not do
justice to the many decades of outstanding research in learning and teaching theory. However, this
book is about providing the fundamental tenets to help an aspiring educator quickly and successfully
come up to speed on basic teaching concepts. No disrespect is intended toward the theoretical
underpinnings that provide the foundation on which all teaching concepts are grounded.
Workshop. If you are interested in the author conducting a workshop for beginning instructors
at your institution, please contact him at [email protected]. If you are interested in
conducting your own workshop, workshop materials are available from the author. Feel free to visit
the book website at www.alittlebookonteaching.com. Also, if there are topics and concepts
that should be included in future book editions, please contact the author through the website.
Steven F. Barrett
Laramie, WY
March 2012
Acknowledgments
I dedicate this book to the outstanding teachers and mentors I’ve had throughout my life. I also
thank Joel Claypool of Morgan and Claypool Publishers who encouraged me to pursue this project.
I also dedicate this book to my family who is my constant source of inspiration. I am the product of a
family of gifted educators. My father, although not a formally trained educator, taught and influenced
many young men and women by his example of a well-lived life of service to others. My mother was
a registered nurse and served many years educating the next generation of nurses. She also served
for many years as a teacher of challenged children. My wife serves as an aide for elementary school
students who need extra help. My daughter is a gifted and dedicated elementary school teacher in
Colorado. I also offer a special thank you to my oldest son Jonathan Barrett for providing book
illustrations and web development. For additional informational please contact him at Closer to
the Sun International, Inc. at www.CloserToTheSunInternational.com. Also, I offer a special
thank you to Graham Barrett my youngest son for his careful edits of the final manuscript and his
thoughtful suggestions on how to improve the book. He too has served as an educator as a graduate
teaching assistant and also working with summer high school enrichment programs.
My goal is to one day be a great teacher. I hope to continue teaching for another 30 years
(really!). I have learned a great deal about teaching while writing this book. I’ve put a lot of the
material to practice already in the classroom.
For the students!
Steven F. Barrett
March 2012
List of Figures
1.1 What makes a great teacher? . . . . 3
1.2 Tenets of a great teacher: a synthesized model [2, 4, 6]. . . . . 8
1.3 Measuring the volume of a radar sphere atop a tower. . . . . 11
1.4 “How do I get started? [J. Barrett, Closer to the Sun International, Inc.]” . . . . 14
2.1 Model of memory storage [1]. . . . . 18
2.2 Bloom’s taxonomy of cognitive learning [6]. . . . . 19
2.3 Myers and Briggs personality types [10]. . . . . 22
2.4 “How do I reach them? [J. Barrett, Closer to the Sun International, Inc.]” . . . . 24
3.1 How does your course support program accreditation [2]? . . . . 31
3.2 “This is going to be a challenging course. The syllabus has a table of contents! [J. Barrett, Closer to the Sun International, Inc.]” . . . . 34
3.3 Textbook selection matrix. . . . . 35
4.1 “A 57% average. What went wrong? [J. Barrett, Closer to the Sun International, Inc.]” . . . . 46
4.2 Continuous improvement. . . . . 49
5.1 Serving as an educator is a lifelong profession based on continual improvement and growth [J. Barrett, Closer to the Sun International, Inc.] . . . . 56
C H A P T E R 1
What makes a Great Teacher?
1.1 OVERVIEW
This chapter provides an introduction to the challenging and rewarding career of university-level
teaching. The chapter begins with a review of what the literature has to say about what makes a
great teacher. We review a variety of sources and find amazing consistency in the tenets of good
teaching. A synthesized list of tenets of great teachers is then developed. A series of case studies of
good teachers, including an award-winning middle school science teacher and a world-renowned
high school teacher, follows. You will then be asked to take a trip down memory lane and remember
the great teachers you’ve had and list the tenets of what made them special and such an effective
teacher. You then complete an exercise to determine what kind of teacher you want to be. Our goal
for the chapter is for you to discover the tenets, activities, and attitudes of great teachers and include
them in your own professional repertoire.
1.2 WELCOME
Welcome to the noble profession of university-level teaching. You will find this vocation to be
challenging, rewarding, exciting, and doable. As a new faculty member you probably feel a bit over-
whelmed with all that you have to do in a short amount of time. This book provides practical
information and techniques to become an effective university-level educator. We also discuss tech-
niques to balance the demands of research, service and teaching. We begin by investigating what the
literature has to say about the tenets of effective teaching.
1.3 WHAT MAKES A GREAT TEACHER?
This section reviews the tenets of effective teaching from a wide variety of sources from the literature.
In each case we briefly review the main tenets of the article. It is highly recommended that you add
each of these sources to your professional reading list. Full citations for each source are provided at
the end of the chapter. We summarize the traits of effective teachers in Figure 1.1. At the end of
this section we pull together a synthesized list of traits discussed in the articles.
1.3.1 THE ATLANTIC, 2010
An article in the January/February 2010 issue of The Atlantic magazine posed the question: “What
makes a great teacher?” The author, Amanda Ripley, investigated how teachers in similar grade
school classroom environments can have dramatically different results in student progress. Using
data gathered from the “Teach for America” program, similar traits of great teaching emerged. Here
is what they found. Great teachers [1]:
• set big goals for their students.
• always look for ways to be more effective.
• involve family members in the educational process.
• have students work with their peers to help with understanding.
• matter. Effective teaching has a greater impact on student success than other factors such as a
specific school or how well the school is funded.
• frequently check to make sure students understand material using fun, non-threatening feed-
back techniques during classroom activities.
• are well-prepared. They work back from intended outcomes and objectives to develop thorough,
well-developed educational programs and lesson plans. They then stay on track and focused
on lesson delivery.
• care about the success and well-being of their students. As an example, the article shadowed
Mr. William Taylor, a fifth grade teacher at Kimball Elementary School in Washington D.C.
During the school year, Mr. Taylor moved his class from 40% performing at math grade level
to over 90% at or above grade level by the end of the year. Mr. Taylor used a variety of effective
teaching skills, including a deep commitment to his students. As an example, Mr. Taylor
cooks his students a hot breakfast on the days when they take standardized tests.
The Atlantic article further reported that “Teach for America” leadership has spent con-
siderable time poring over data in an attempt to predict future teaching success. Interestingly, those
who have demonstrated perseverance, or grit, in dealing with life challenges tend to become good
classroom teachers. Also, success in the last several years of college correlated with good classroom
teaching performance [1].
1.3.2 WHAT GREAT TEACHERS DO DIFFERENTLY
Todd Whitaker is a seasoned, expert educator. He has served as a middle school and high school
educator, a middle school and high school principal, and as a middle school coordinator. He now
is a Professor at Indiana State University in the College of Education. Professor Whitaker has
written a series of books about being an effective teacher and principal. In “What Great Teachers
Do Differently — 14 Things That Matter Most,” Professor Whitaker provides 14 traits of effective
teachers. In his book, Professor Whitaker devotes a chapter to illustrate each of these 14 traits of
great teachers. These traits of effective teachers are briefly summarized below [2].
Figure 1.1: What makes a great teacher?
• People skills are extremely important for effective teaching and determining the quality of a
school.
• Clear expectations must be set early and then followed throughout the academic year. Setting
clear expectations sets a consistent tone for students for the entire year.
• An effective teacher appropriately responds to misbehavior to prevent it from happening again.
They employ a variety of techniques to manage the situation effectively and in a professional
manner. Throughout the encounter, the teacher treats the student and parents with respect.
Their focus is changing the student’s response and behavior in the future. In contrast, an
ineffective teacher seeks revenge against misbehaving students.
• High expectations are extremely important. Most teachers set high expectations for their
students. Great teachers set high expectations for themselves and hold themselves accountable.
They focus on their own performance and how it relates to their student’s success.
• Effective teachers realize they are the most important variable in the classroom over which they
have control. They constantly hold themselves accountable, take responsibility for classroom
success, and consistently try to improve their performance.
• Great teachers create a positive atmosphere in their classroom based on respect, dignity and care
for each and every student. These teachers effectively use genuine compliments and praise to
positively influence their students and also their colleagues. They model appropriate behavior.
• Effective teachers set the tone for all interactions with positive professionalism. Students will
respond in kind. To set a positive tone, teachers filter out negative influences such as complaints,
demonstrate a positive attitude and enthusiasm toward their job, and do not let their private
lives and concerns invade the classroom.
• Effective teachers place great importance on maintaining a positive relationship with students,
parents and colleagues. They strive to treat everyone with respect and dignity. They also work
to repair damaged relationships.
• Great teachers are aware of what goes on within their classroom. However, they carefully
choose when to correct an offending student. In other words, they exercise great self control
and wisely choose when to correct offenses. They rationally respond to inappropriate behavior
without escalating the situation. Furthermore, they do not ignore the high achievers but provide
them needed recognition to allow them to continue moving forward.
• Effective teachers construct plans for learning activities and reflect on the success of their
efforts. If things do not go according to plan they proactively adjust their approach to achieve
intended goals.
• Great teachers carefully consider the impact on others before making changes. In particular,
they make decisions to ensure they meet their intended purpose and consider the thoughts of
their best students.
• Great teachers carefully consider the feelings of others regarding decisions that have been
made. They ensure the good students are comfortable with the change while those that are
uncomfortable will change in a positive direction.
• Good teachers keep standardized testing in perspective. They realize that good test scores are
important but also value other measures of student achievement.
• Great teachers care about their students by establishing a positive approach, treating everyone
with dignity and respect and modeling to their students how to treat others.
It must be emphasized that this list of outstanding teacher traits is based on the expertise in
teaching provided by Todd Whitaker [2]. This book is a must read for the dedicated teacher. We
next examine traits of excellent teaching provided by the U.S. Professors of the Year award program.
1.3.3 U.S. PROFESSORS OF THE YEAR
The U.S. Professors of the Year awards program annually recognizes outstanding undergraduate
teaching at the state and national levels. The awards program is sponsored by the Council for Ad-
vancement and Support of Education (CASE) and the Carnegie Foundation for the Advancement
of Teaching. A review of CASE award criteria provides further insight into tenets of great teaching.
The CASE criteria include [3]:
• excellence in the impact on and involvement with undergraduate students;
• a demonstrated scholarly approach to teaching and learning;
• contributions in undergraduate education to the nominee’s institution, community and pro-
fession; and
• the support of colleagues and former undergraduate students.
In an effort to gain a wider perspective on what constitutes great teaching, several websites
devoted to sharing characteristics of useful techniques for outstanding classroom instruction were
visited. A brief summary of each is provided below.
1.3.4 WHAT MAKES A GREAT TEACHER?
GreatSchools™ (www.greatschools.org) provides a sharing forum for users to obtain informa-
tion on school performance. The purpose of the site is to “help parents to be more effectively involved
in their children’s education [4].” The senior management and board of directors for GreatSchools
are experts in the educational world. The GreatSchools staff compiled the characteristics of great
teachers. They indicate that great teachers [4]:
• Set high expectations for all their students and do not give up on underachievers.
• Have clear, written objectives, lesson plans and learning goals for each assignment. Further-
more, assignments are graded consistently and in a timely manner.
• Are prepared and organized and ready to teach. They present lesson material in a clear, orderly
and structured manner.
• Engage students and have them look at issues in a variety of ways. They effectively engage all
students in the class by asking questions to make sure students are following the lesson and
vary their delivery approach.
• Care about their students, form strong relationships with them and are engaged in student
and school activities.
• Are enthusiastic and thoroughly know their subject matter and work to stay current.
• Communicate on a regular basis with parents about student progress.
Later in this section we develop a synthesized model of the tenets of effective college teachers.
We shall see that many of the tenets listed here are also applicable in the college classroom while
others are not. As an example, the Family Educational Rights and Privacy Act (FERPA) provides
strict guidance on what information may be shared with the parents of college students.
1.3.5 TOP FIVE CHARACTER TRAITS OF SUPERIOR TEACHERS
“So You Want to Teach” is a website forum that allows practicing educators to share techniques
on effective teaching. A poll of the top five character traits of superior teachers was provided. The
top five traits were inspirational, compassionate, demanding, sense of humor, and subject matter
knowledge [5].
A clear trend is starting to develop on the tenets, traits, and practices of good teachers. We
are not quite ready to construct a synthesized model. We first visit one more site to pick up a few
more tenets that have not been mentioned yet.
1.3.6 WHAT MAKES A GREAT TEACHER —TAKE TWO!
“Practical Theory” is another website forum that allows educators to share techniques on effective
teaching. An article entitled “What makes a great teacher?” provided a list from a seasoned educator
on what sets great teachers apart. Some of the tenets provided will now be quite familiar to you,
others are new. The article indicates a great teacher [6]:
• Loves their students.
• Has a passion for teaching.
• Loves their subject material.
• Is constantly trying to improve.
• Is organized and has structure in their class.
• Is willing to change based on interaction with students.
• Is humble and realizes it is about and for the students.
• Has a strong ego to survive the days when things do not go so well.
• Is willing to work collaboratively within the school community to make it better.
• Is willing to reflect on what worked and what did not and make changes accordingly.
• Has a strong work ethic. Teaching takes considerable time and commitment outside the
classroom.
• Understands the bigger picture of their role in students’ lives. Great teachers know that some
of the best teaching moments occur outside the classroom.
1.4 PULLING IT ALL TOGETHER: A SYNTHESIZED MODEL
As you read over the last several sections you probably noticed many similarities between the views
of what constitutes a great teacher. In Figure 1.2, we have synthesized the different views into a
single model. Note how the tenets conveniently fit into three categories: attitude, preparation, and
classroom. Two of these categories, attitude and preparation, are completely within your control, and
many aspects of the classroom category are within your control as well.
We have also removed two pieces from the model: communicating frequently with parents
and standardized testing. The Family Educational Rights and Privacy Act (FERPA) provides very
strict guidelines concerning a student’s right to privacy. In a nutshell, student records belong to the
student. The protected information includes grades, finances, and discipline records. Parents are not
allowed access to student records or information on progress without the written permission of the
student [7].
Concerning standardized testing, many engineering schools require their students to complete
the Fundamentals of Engineering (FE) examination as a graduation requirement. It is one of the
steps to becoming a licensed professional engineer. FE examination results also provide valuable
program assessment data helpful for ongoing continuous improvement and accreditation efforts [8].
Although the results of the FE examination are quite important, the exam results do not drive
curricular content.
Let’s take a closer look at the categories of teaching tenets within the synthesized model.
Figure 1.2: Tenets of a great teacher: a synthesized model [2, 4, 6].
Attitude. I was blessed with an outstanding mother and father. Both worked a variety of challenging,
difficult jobs in service to others. One of my father’s favorite maxims is “Attitude is everything!” He
believes and demonstrates that any job or task approached with the proper attitude and gusto will be
successful. So it is with teaching. Much of our success in teaching depends on having a positive and
resilient attitude. As shown in Figure 1.2, many of the tenets of a great teacher pertain to attitude.
These tenets include: good relationship repair; being inspirational, humble, compassionate, and
considerate; carefully considering decisions; a demonstrated strong work ethic; and a demonstrated
passion for teaching.
Preparation. I am a product of the military. My father served 26 years in the Air Force and
my mother was a Naval Flight Nurse. I served in the Air Force for 20 years. Throughout my military
career I frequently heard the adage “Proper prior planning prevents poor performance.” (There are
other similar, more colorful versions.) Often referred to as the “6 Ps,” this adage packs considerable
wisdom. Simply stated, careful preparation goes a long way toward success. As an educator, much of
your success depends on preparation for the classroom. The tenets of a great teacher in the
area of preparation include establishing clear, written objectives; being well-prepared and organized;
and establishing high expectations for your students and yourself.
In the classroom. The two remaining tenets of great teachers pertain to classroom manage-
ment. A great teacher proactively deals with behavioral challenges, ignores trivial disturbances and
works to redirect student action such that misbehavior is not repeated.
In the next section we study several outstanding teachers to see how these tenets are put into
action.
1.5 GREAT TEACHERS AS ROLE MODELS
In this section we showcase two outstanding teachers: a middle school teacher and a high school
teacher.
Paul Crips. Paul Crips is a seventh grade science teacher serving at Carey Junior High School
in Cheyenne, Wyoming. For 34 years, he has taught industrial technology and science to 7th–12th
graders. He is also certified to teach mathematics. I have worked with Paul on a number of projects
and was honored to interview him for this book.
Although Paul is the son of an elementary school educator, his motivation for entering the
teaching profession came from his very caring and dedicated high school welding and wood fabri-
cation teacher, Mr. Don Freshette. Paul was really bothered by the treatment he had received from
several of his junior high mathematics teachers. He did not enjoy school. As he put it, “nobody
lit my fire.” In fact, he remembered that the teachers were quite negative and talked down to the
students. Some teachers were openly cruel to students.
This all changed when he took a class from Mr. Freshette in high school. Mr. Freshette
openly demonstrated care and concern for his students. He listened to them and allowed them to
work within the shop on various projects; however, he had high performance expectations for those
in his classes. Paul felt a connection to Mr. Freshette and knew that he cared. In retrospect, Paul is
amazed at how much influence, either positive or negative, that a single teacher can have on your
life.
Following a two-and-a-half-year stint in the Navy, Paul enrolled at the University of Wyoming
and completed a Bachelor of Science degree in Industrial Arts from the College of Education in
1978. Paul accepted his first job in Cheyenne at an alternative high school for at-risk students. He
taught vocational education courses there for three years before being hired away to Carey Junior
High School. He taught all areas of industrial arts for 16 years before becoming a science teacher in
1996. Since then, he has taught Physics, Chemistry, Biology, and Earth and Space Science.
Paul describes a great teacher as one who cares deeply for the well-being and
success of their students. To really care you need to establish a trusting rapport with your students
by showing, via actions, that you really care and will not give up on them regardless of how they
perform or behave. He noted that it is easy to become angry at the misbehavior of a student, but
he quickly added that you must separate the action from the student. Regardless of the challenge, you
need to overcome and work with each student toward success. He acknowledged that this is difficult
to do with a large classroom of many students with a wide variety of background, preparation, and
capability.
To provide opportunities to work individually with each student, Paul has enlisted the aid of
students in higher grades to serve as mentors for the younger grades. He further added that you cannot
ignore the gifted students. They need to be challenged so that they too can realize their needs
and dreams.
Paul indicated another tenet of a good teacher is to engage their students. To engage students
from the first day of school, Paul greets each one as they enter the classroom. He works hard to be
personal, humble, and human. His goal is to demonstrate that he is approachable and can be trusted.
He also uses his sense of humor to keep students engaged.
Paul’s efforts to establish rapport with his students do not mean he ignores or condones
misbehavior. He is adamant that punishment does not accomplish anything. Instead, it reinforces
poor behavior. When a student misbehaves Paul takes them aside and talks to them about their
actions. His goal is to proactively engage and redirect their energy in a positive manner and move
on. A related goal is not to allow students to leave his classroom angry. He also does not tolerate
cheating. When a cheating incident occurs, he uses the situation to have the student identify their
incorrect behavior and helps them understand that the consequences of cheating later in life will be
much more severe.
To further engage students Paul indicated it is important to interview students to find out
where their interest lies and tie curricular content to their interests. Paul is a self-proclaimed “gear
head.” He spends considerable time rebuilding cars. He has found his students also have a mutual
interest in this area which has provided a bridge to curricular content. To engage students in a
variety of topics, Paul has mentored a number of after-school programs in robotics, short wave radio,
and astronomy. Currently, he mentors an after-school program where students learn the basics of
programming using small robots controlled by a Texas Instruments TI-84 calculator.
In the classroom, Paul keeps students engaged with curricular content with carefully chosen,
problem-based activities. For example, to teach a variety of math concepts, Paul takes students on a
short hike to a nearby radar sphere mounted atop a high tower. He then wonders out loud, “How
might we measure the volume of the radar sphere?” He allows students to brainstorm on possible
solutions. He then reminds them of helpful curricular concepts previously discussed in class such as
the equation for the volume of a sphere and right triangle relationships (sine, cosine, and tangents).
He then allows the students to brainstorm on how they might use these concepts to measure the
volume of the radar sphere atop the tower. Students typically realize that if there was some method
to measure the height of the top and bottom of the sphere, they would be able to calculate the sphere
volume. Paul then produces a tape measure and an inclinometer and asks the students how these
tools might be used to gather the required data. See Figure 1.3.
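For readers who want to check the arithmetic behind this exercise, here is a minimal sketch of one way to work it out, assuming the students tape off a horizontal distance to the tower and sight the bottom and top of the sphere with the inclinometer. The distance, angles, and function name are hypothetical illustrations, not values from Paul's classroom.

```python
import math

def sphere_volume_from_sightings(distance, alpha_deg, beta_deg):
    """Estimate the volume of a sphere mounted atop a tower.

    distance  -- horizontal distance from the observer to the tower (from the tape measure)
    alpha_deg -- inclinometer angle to the bottom of the sphere, in degrees
    beta_deg  -- inclinometer angle to the top of the sphere, in degrees
    """
    height_bottom = distance * math.tan(math.radians(alpha_deg))
    height_top = distance * math.tan(math.radians(beta_deg))
    radius = (height_top - height_bottom) / 2.0   # the height difference is the sphere's diameter
    return (4.0 / 3.0) * math.pi * radius ** 3    # V = 4/3 * pi * r^3

# Hypothetical measurements: 50 m from the tower, sighting angles of 30 and 35 degrees
print(round(sphere_volume_from_sightings(50.0, 30.0, 35.0), 1))  # volume in cubic meters
```

Because both angles are sighted from the same eye height, that height cancels out of the difference between the two computed heights, so only the taped distance and the two angles matter.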
Figure 1.3: Measuring the volume of a radar sphere atop a tower.
Paul’s teaching is guided by a number of principles. We have already discussed establishing
student rapport and keeping them engaged. Paul notes that students are not all wired the same.
Many need pictures and diagrams to understand concepts. He indicated that the illustrations take
away much of the anxiety that many have toward mathematics and allows them to visualize solutions
to posed problems.
To a new educator Paul offers the following advice. Be kind to yourself and be patient. You
have a lot to learn and things will not always go how you have planned. Most importantly, never
give up. If you won’t give up, neither will your students.
In closing, he indicated in the teaching profession you are free to become who you want to
be as an educator. It is very important to reflect on the good and not so good teachers you’ve had in
your past. Model and become the best of those from your past.
It is not surprising that Paul has earned a number of teaching awards for his dedication to
students. In 1994, he was the Wyoming recipient of the Christa McAuliffe Fellowship. It is also no
surprise that he used the fellowship money to purchase telescopes for his students to use. In 1996, he
was named Wyoming’s U.S. West Teacher of the Year. This was followed in 1999, when he was one
of 39 teachers nationwide named as a recipient of the Walt Disney Corporation American Teachers
Award. Also that year he was Wyoming’s Milken Foundation Teacher of the Year. In 2004, he was
one of four teachers chosen statewide for the Arch Coal Teacher of the Year Award. Paul takes all
of the awards in stride and is not comfortable talking about the recognition and accolades he has
received. Instead, he simply reminds all of us “it’s all about the students.”
Jaime Escalante. You may already be familiar with the work of Jaime Escalante. His dedicated,
lifetime work as an educator of students was showcased in the Warner Brothers movie “Stand and
Deliver.” It would have been an honor to interview him for this book; however, Mr. Escalante died
in 2010. Jay Mathews wrote an excellent book on Mr. Escalante, “Escalante – The Best Teacher
in America.” This book is a must read for the dedicated teacher. Mr. Mathews did an excellent
job catching the infectious spirit of Mr. Escalante’s commitment to teaching excellence and most
importantly his students. The information for the following vignette was obtained from Mr. Mathews’
book [9].
Mr. Escalante taught mathematics at James A. Garfield High School in East Los Angeles. In
December 1982 the Los Angeles Times reported that 14 of 18 Garfield High School students taking
the Calculus Advanced Placement (AP) examination had been accused of cheating. The students
were eventually cleared of any wrongdoing. Mr. Mathews’ book does an outstanding job describing the
incident and its resolution [9].
However, the real story is how Mr. Escalante and other dedicated faculty and staff at Garfield
High School prepared students, against difficult challenges and odds, for this examination.
I pored over Mathews’ book to unlock Mr. Escalante’s secrets of good teaching. Mathews indicated
his motivation for writing about Mr. Escalante was “to describe in detail how Escalante taught
and how Garfield had come so far, other teachers and schools with similar challenges might see
something they could use. If I could honestly portray the setbacks, misunderstandings, and personal
tensions that accompanied Garfield’s achievement, perhaps others would not become disheartened
when they found the path to learning particularly rough [9].” We thank Mr. Mathews for his
carefully researched and documented biography of Mr. Escalante.
Mr. Escalante was born and raised in La Paz, Bolivia into a family of teachers. The family
placed a great emphasis on education. He attended a demanding Jesuit High School. He taught at
several schools in Bolivia before immigrating to the United States in 1963 when he was 33 years old.
Unfortunately, his Bolivian teaching credentials were not accepted in the U.S. Therefore, he worked
a variety of jobs and received a National Science Foundation Scholarship to achieve his teaching
degree in order to obtain a teaching license [9].
Mr. Escalante passionately worked daily to be an outstanding teacher. Early on he set high
standards and expectations for his students and appealed to their pride to meet them. He also had his
students assume responsibility for their own actions. However, it was clear that he cared deeply for
his students. In fact, “Escalante and his students became part of the same team, fighting a common
foe, rather than adversaries in a war in which the teacher always had the upper hand and the students
often contemplated revolt or desertion [9].”
Mr. Escalante employed a variety of techniques to always keep his students interested and
engaged. He kept careful notes that contained his lesson plans, math short cuts, and insights honed
over many years studying and teaching mathematics. He worked to bridge complex mathematical
concepts to real world things that students knew and understood. He was convinced that students
learned by doing and kept them engaged with in-class demonstrations and problem drills. He was
particularly skillful at linking math concepts to examples in sports and small business. Furthermore,
he would not miss the chance to illustrate a concept with a fun, enjoyable illustration. For example, to
illustrate the concept of fractions, he would don a chef’s hat (from a previous job) and slice apples to
illustrate fundamental concepts. As follow up, students could count on multiple homework problems
to hone their understanding of new concepts. The common theme throughout Mr. Escalante’s
approach was that he cared deeply for his students’ progress and well-being and worked daily to make
difficult mathematical concepts accessible [9].
1.6 SUMMARY
In this chapter we identified the tenets of great teaching from a variety of sources. These sources
were then combined into a synthesized model that includes attitude, preparation, and classroom
skills. It is important to realize most of these tenets of great teaching are under your direct control.
In an upcoming chapter we develop concrete techniques to apply these tenets in the classroom. The
tenets of great teaching were then explored through a series of vignettes. Our goal throughout the
chapter is for you to develop a personal teaching philosophy that incorporates these tenets.
REFERENCES AND FURTHER READING
[1] Ripley, Amanda “What makes a great teacher?” The Atlantic. Online. Internet. Jan-
uary/February 2010. www.theatlantic.com Cited on page(s) 2
[2] Whitaker,Todd. What Great Teachers Do Differently — 14 Things That Matter Most. Larchmont:
Eye On Education, Inc, 2004. Cited on page(s) xvii, 2, 5, 8
[3] “U.S. Professors of the Year Award Program.” Online. Internet.
www.uspprofessorsoftheyear.org Cited on page(s) 5
[4] “What Makes A Great Teacher?” 3 pp. Online. Internet. www.greatschools.org Cited on
page(s) xvii, 5, 8
[5] “Top 5 (Plus 14) Character Traits of Superior Teachers.” 6 pp. Online. Internet. www.
soyouwanttobeateacher.com Cited on page(s) 6
[6] “What Makes A Great Teacher?” 2 pp. Online. Internet. www.practicaltheory.org Cited
on page(s) xvii, 6, 8
[7] “Family Educational Rights and Privacy Act (FERPA).” Online. Internet. www2.ed.gov Cited
on page(s) 7
[8] “NCEES—Advancing Licensure for Engineers and Surveyors.” Online. Internet. www.ncees.
org Cited on page(s) 7
[9] Mathews, Jay. Escalante—The Best Teacher in America. New York: Henry Holt and Company,
1988. Cited on page(s) 12, 13
Figure 1.4: “How do I get started? [ J. Barrett, Closer to the Sun International, Inc.]”
1.7 CHAPTER ACTIVITIES
1. Develop your personal list of tenets of great teaching that you will follow.
2. Spend some time reflecting on teachers both good and bad from your past. Develop a list of
both good and bad tenets from your personal reflections.
3. Develop a personal teaching mission statement based on the tenets of great teaching from the
area of attitude.
4. Select a course you are currently teaching or will teach in the near future. Develop objectives
for that course. How do the course objectives support student outcomes?
5. For the course discussed in the previous question, develop lesson-by-lesson objectives for the
course.
6. Develop a list of concrete methods to use in the classroom to apply the tenets of great teaching
summarized in the synthesized model.
7. In the Paul Crips vignette, what tenets of great teaching were exhibited?
8. In the Jaime Escalante vignette, what tenets of great teaching were exhibited?
9. Identify a teacher you greatly admire; interview them, identify the tenets of great teaching
they exhibit, and write a teaching vignette about them.
10. Write your own personal teaching vignette.
C H A P T E R 2
A little learning theory
2.1 OVERVIEW
This chapter is devoted to the concept of learning and related teaching theories. To be a good
teacher one needs to be familiar with some of the theoretical underpinnings behind sound instruc-
tion techniques and how they are related to learning. These theoretical concepts will help you better
understand the learning process and provide concrete methods on how to best reach your students.
Kupfermann succinctly links the important concepts of this chapter: learning, knowledge, and mem-
ory. He describes learning as “the acquisition of knowledge about the world” and “memory is the
retention or storage about that knowledge [1].”
This chapter begins with a brief review of the physiological basis of learning with an emphasis
on the difference between short term and long term memory and the conversion or consolidation
of short term memory to long term memory. We then investigate the different levels of cognitive
learning, as described by Benjamin Bloom, followed with a brief introduction to the work of Myers
and Briggs in identifying 16 different personality types based on the work of C.G. Jung. A teacher
needs to be aware of their own personality type and those of their students so they can best develop
teaching strategies to reach them. The work of Felder and Silverman, who bridged a variety of
learning styles to teaching styles, is then reviewed. It should be no surprise that a teacher maximizes
their effectiveness by employing a variety of teaching styles to meet the needs of a variety of students
with different learning styles.
2.2 THE PHYSIOLOGICAL BASIS OF LEARNING
To understand the process of learning we need to briefly review some physiological fundamentals
related to memory. Memory consists of two distinct types: short-term or primary memory and
long-term or secondary memory. Short-term memory is the ability to retain specific bits or pieces of
information for a brief amount of time. These tidbits of information may be retrieved instantaneously.
Long-term memory, on the other hand, may be retained for a much longer period of time; however,
recall of this information may take longer [2, 3, 5].
Figure 2.1 provides a model of the relationship between short-term and long-term memory.
The goal of learning is to take knowledge of the world and retain it in long-term memory. The
process begins as an input to short-term memory. Short-term memory is converted to long-term
memory through a process called memory consolidation [1].
Figure 2.1: Model of memory storage [1]. (Input → short-term memory → consolidation → long-term memory → search and read out → output.)
Memory consolidation converts short- to long-term memory through a variety of anatomical
and physiological changes that occur at the cellular level within the brain. It is important to note
that this process takes time [1, 2, 3]. There are several techniques to accelerate the process.
• Rehearsal or repeating the information accelerates and potentiates (enhances) the consolida-
tion (conversion) of short term to long term memory [3]. This is called habituation [4].
• Very strong, repeated, and strongly pleasant (or unpleasant) input has an excellent chance of
being converted from short term to long term memory [2]. This is called sensitization [4].
• Information that is codified (categorized) has a good chance of being converted from short
term to long term memory. That is, if new information is compared to similar existing long
term memory items, it has a better chance of becoming long term memory [3].
With a fundamental understanding of the physiological process of learning in place, let’s
investigate the different levels of cognitive learning as described by Benjamin Bloom.
2.3 LEVELS OF LEARNING — BLOOM’S TAXONOMY
Learning may be divided into three different areas: cognitive, affective, and psychomotor. Cognitive
learning involves the development of intellectual skills. Affective learning involves the development
of emotions including feelings, values, and attitudes. Psychomotor learning involves the development
of physical movement, coordination and the development of motor skills. As educators we are
primarily concerned with the development of cognitive learning. Benjamin Bloom developed a
taxonomy or hierarchy of cognitive learning skills to allow “educators to evaluate learning of students
systematically [6].”
Bloom’s taxonomy of cognitive learning is illustrated in Figure 2.2. The taxonomy consists
of advancing cognitive skills from the knowledge level up through the evaluation level. To develop
higher-level cognitive skills one must first develop a base in the lower levels [7].
Figure 2.2: Bloom’s taxonomy of cognitive learning [6].
Bloom divided the taxonomy into six different levels of learning, as illustrated in Figure 2.2.
Provided with each level are action verbs associated with the specific level.1 In ascending order, from
the lowest to the highest cognitive level, the levels are [6]:
• Knowledge: recalling or repeating facts verbatim. Action verbs include list or state [6].
• Comprehension: demonstrating understanding of terms and concepts. Action verbs include explain (in your own words) and interpret [6].
• Application: applying learned information to solve a problem. Action verbs include calculate and solve [6].
• Analysis: breaking concepts into their primary elements, forming theoretical, logical, or mathematical models to explain observed phenomena. Action verbs include derive and explain [6].
• Synthesis: creating something, combining elements in a new way. Action verbs include formulate or design [6].
1Goel and Sharda [8] compiled an extended list of action verbs from the literature associated with Bloom’s Taxonomy.
• Evaluation: making and justifying value judgments or selecting from a number of alternatives.
Action verbs include determine, select or critique [6].
As educators of engineering and applied science, it is important to understand the fundamental
concepts of Bloom’s Taxonomy. These concepts may be employed in a number of different areas.
• When initially developing a course, you should carefully consider at what level cognitive skills
will be developed. This should be kept in mind when developing specific course objectives.
We discuss the development of objectives in the next chapter.
• If your course will develop higher-level cognitive skills, you need to ascertain where the lower-
level skills will be developed. For example, will they be developed in a pre-requisite course?
Another alternative would be to develop the lower-level skills early in the course followed by
higher-level skills later in the course.
• In an upcoming chapter we discuss student assessment. When assessing students via quizzes
and examinations, they should be assessed at the same cognitive level at which they have been
taught.
• If our goal is to develop some of the highest level cognitive skills such as design (synthesis),
then students should be provided practice exercises at this cognitive level during the course.
Goel and Sharda report that the top levels of Bloom’s Taxonomy “represent higher-level cognitive activities
that require and develop mental faculties of creativity, critical thinking and innovative problem
solving [8].”
Aside from having a firm understanding of the different levels of cognitive learning, the
effective teacher must be aware of their own personality type. Furthermore, the effective teacher
must be aware of the wide variety of different personality types. To reach students, we must tailor
our teaching approach to a learning style compatible with their personality type. In the next section
we discuss the work of Myers and Briggs in identifying different personality types first elucidated by
Carl Jung.
2.4 PERSONALITY TYPES
In the next several sections we briefly review Carl Jung’s seminal, fascinating work in personality
theories. We also examine the work of Katherine Briggs and Isabel Briggs Myers in codifying Jung’s
personality traits into a test of approximately 125 questions to determine one’s personality type. This
test is commonly known as the Myers-Briggs Type Indicator® or MBTI® and places an individual in one of 16 different personality types. Certain personality traits are more in tune with specific learning styles
than others. Furthermore, due to our own personality type, we as educators are more comfortable with
specific types of teaching styles. To effectively reach our students we need to bridge their learning
style with our teaching style. We close with the ground-breaking work of Felder and Silverman in
bridging this gap with concrete teaching techniques to address all learning styles.
2.5 JUNG, MYERS AND BRIGGS
Carl Jung developed a theory of personality traits. It is based on the fundamental difference between
introversion and extroversion. Introverts are more comfortable with their internal thoughts and
feelings while extroverts prefer things, people and related activities. As we deal with the world
around us, as introverts or extroverts, there are four basic functions we employ: sensing the world by
looking or listening, thinking by evaluating information, intuiting by integrating a large amount of
information, and feeling by evaluating information using our emotional response. The proportion
of each of these functions places us in a specific personality type [9].
Myers and Briggs developed a test tool, designated the Myers-Briggs Type Indicator® or MBTI®,
to identify an examinee’s personality type via a series of 125 questions. The questions illuminate an
examinee’s preferences in dealing with the world in four different preference areas as described by
Jung [9, 10]:
• What is your favorite world, extroversion (E) or introversion (I) [10]?
• How do you process information, sense basic information (S) or interpret and add meaning (N) [10]?
• How do you make decisions, do you apply logic and thinking (T) or do you assess people and related circumstances (F) [10]?
• In dealing with the world, do you prefer to decide things (judging (J)) or do you keep your options open (perceiving (P)) [10]?
The four preferences and the resulting 16 personality types are illustrated in Figure 2.3. A personality type is identified by a four-letter designator such as ISTP or ENTJ. A brief explanation of each personality type is provided in [10]. Also, if you are interested in determining your own personality type via the MBTI instrument, please see [10].
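Since each of the four preference pairs is a binary choice, the 16 types follow directly from combining them. The short Python sketch below is purely illustrative (it is not taken from the source) and simply enumerates the four-letter designators:

from itertools import product

# The four preference pairs described by Jung and by Myers and Briggs
preference_pairs = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]

# Combine one letter from each pair to form the 16 type designators
types = ["".join(combo) for combo in product(*preference_pairs)]
print(len(types))   # 16
print(types[:4])    # ['ESTJ', 'ESTP', 'ESFJ', 'ESFP']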
2.6 FELDER AND SILVERMAN: BRIDGING THE GAP BETWEEN LEARNING AND TEACHING STYLES
As previously mentioned, based on one’s personality type, we have a preferred method of learning
new material. Also, as educators we have a preferred teaching style linked to our personality type.
Richard Felder and Linda Silverman published a paper in 1988, “Learning and Teaching Styles in
Engineering Education” to assist engineering educators in bridging the gap between the diverse
learning styles of their students and their own teaching styles [11].
They proposed a five-axis preferred student learning style based on perception, input, organization, processing, and understanding. Correspondingly, they proposed a five-axis teaching style based
on content, presentation, organization, student participation, and perspective. Their hypothesis was
that engineering educators who adapt their teaching style to include the extremes of each axis of
Figure 2.3: Myers and Briggs personality types [10].
student learning styles are apt to provide “an optimal environment for most (if not all) students in
the class [11].”
It might appear to be an insurmountable task to link a diverse group of student preferred
learning styles with an educator’s teaching style; however, Felder and Silverman indicated usual
methods of engineering education adequately address many categories. Furthermore, they indicated
the addition of a small number of additional teaching techniques accommodates the learning style
of every student in the class [11].
As you may have already gathered, the article by Felder and Silverman is a “must read” in its
entirety. Felder and Silverman concluded the article with a list of teaching techniques to address all
learning styles. I highly encourage you to obtain a copy of this article and keep the list of teaching
techniques available for ready and regular reference. Here is an abbreviated version of their list [11]:
• Relate new material to what has already been presented and to students’ prior experience. For
example, when teaching new material, relate it to concepts presented in pre-requisite courses.
I also find it helpful to present the course framework during the first session of the course. I
refer to it frequently throughout the course to show how new material relates to the overall
class [11].
• Provide balance between concrete and abstract information. This may be accomplished by
supplementing theoretical concepts with practical, real-world examples. I believe students grasp new concepts more quickly if they can see how they might use the material on the job or how to solve
a specific engineering challenge. Felder and Silverman suggest using the scientific method to
link theoretical material with concrete examples [11].
• Use a wide variety of visual material and computer-assisted instruction to enhance learn-
ing [11]. When I served on the faculty at the United States Air Force Academy, I had the
opportunity to audit a microcontrollers course taught by Dr. Pamela (Pam) Neal. She effec-
tively used a series of visual worksheets and executing computer code projected on the classroom
screen to illustrate the link between assembly language programming and its effect on computer
registers.
• Felder and Silverman strongly recommended not filling classroom time with only lecturing and
writing on the board. Instead, they recommend short, periodic breaks for student reflection.
They also recommend active, small-group brainstorming activities during classroom sessions
to involve students in active learning [11].
• Felder and Silverman also recommended assigning a reasonable number of homework exercises
to practice and apply the material taught in class. The exercises should align with the intended
Bloom’s Taxonomy level of the course objectives. Furthermore, they recommended allowing
students to work together on homework assignments [11].
Figure 2.4: “How do I reach them?” [J. Barrett, Closer to the Sun International, Inc.]
2.7 SUMMARY
This chapter was devoted to the concept of learning and related teaching theories. To be a good
teacher we need to be familiar with some of the theoretical underpinnings behind sound instruction
techniques and how they are related to learning. These theoretical concepts will help us better
understand the learning process and provide concrete methods on how to best reach our students.
The chapter began with a brief review of the physiological basis of learning with an emphasis on the
difference between short-term and long-term memory and the conversion or consolidation of short-
term memory to long-term memory. We then investigated the different levels of cognitive learning
as described by Benjamin Bloom, followed by a brief introduction to the work of Myers and Briggs
in identifying 16 different personality types based on the work of C.G. Jung. A teacher needs to be
aware of their own personality type and those of their students so they can best develop teaching
strategies to reach them. The work of Felder and Silverman who bridged a variety of learning styles
to teaching styles and provided concrete advice on how to reach all of the students in our classroom
was then reviewed.
REFERENCES AND FURTHER READING
[1] Kupfermann, Irving. “Learning.” Principles of Neuroscience. Ed. Eric Kandel and James
Schwartz. 2nd edition. New York: Elsevier, 1985. Cited on page(s) xvii, 17, 18
[2] Martini, Frederic and Edwin Bartholomew. Essentials of Anatomy and Physiology. 2nd edition.
Upper Saddle River: Prentice Hall, 2000. Cited on page(s) 17, 18
[3] Guyton, Arthur. Textbook of Medical Physiology. 7th edition. Philadelphia: W.B. Saunders, 1986.
Cited on page(s) 17, 18
[4] Ganong, William. Review of Medical Physiology — 1989. 14th edition. Norwalk: Appleton and
Lange, 1989. Cited on page(s) 18
[5] Kandel, Eric and James Schwartz. Principles of Neuroscience. 2nd edition. New York: Elsevier,
1985. Cited on page(s) 17
[6] Bloom, Benjamin. Taxonomy of Educational Objectives, The Classification of Educational Goals,
Handbook 1 Cognitive Domain. New York: David McKay Company, Inc, 1956. Cited on page(s)
xvii, 18, 19, 20
[7] Eisner, Elliott. “Profiles of Famous Educators, Benjamin Bloom, 1913-99,” Prospects 30(3)
(September 2000): 387-395. Cited on page(s) 18
[8] Goel, Sanjay and Nalin Sharda “What do engineers want? Examining engineering education
through Bloom’s taxonomy.” 15th Annual Conference for the Australian Association for Engi-
neering Education, AaeE 2004. September 27-29, 2004, Toowoomba, Queensland, Australia,
2004. Cited on page(s) 19, 20
[9] Boeree, C. George. “Personality Theories—Carl Jung, 1875-1961.” 13 pp. Online. Internet.
Webspace.ship.edu/cgboer/jung.html Cited on page(s) 21
[10] “The Myers and Briggs Foundation.” Online. Internet. www.myersbriggs.org Cited on
page(s) xvii, 21, 22
[11] Felder, Richard and Linda Silverman “Learning and Teaching Styles in Engineering Educa-
tion,” Engineering Education 78(7) (1988): 674-681. Cited on page(s) 21, 23
2.8 CHAPTER ACTIVITIES
1. In your own words, describe the physiological basis of learning.
2. Describe processes to accelerate memory consolidation.
3. Provide a sketch of Bloom’s Taxonomy. Provide a list of action verbs associated with each
cognitive level.
4. How does the work of Myers and Briggs relate to the personality theories of Carl Jung?
5. Determine your personality type using the MBTI® test instrument.
6. What is the difference between learning and teaching styles?
7. How did the work of Felder and Silverman bridge the gap between learning and teaching
styles?
8. Based on the work of Felder and Silverman, develop your own personal list of techniques to
bridge your teaching style to the learning styles of your students.
CHAPTER 3
Preparation for the first day of classes
3.1 OVERVIEW
This chapter provides practical suggestions on getting ready for the first day of classes. The chapter
begins by reiterating the theme that we have used throughout the book: our students are our cus-
tomers. As faculty members we owe our students our very best preparation. A trip is then taken down
memory lane to reflect on what we expected from teachers when we were students. The developed list of desired attributes becomes what we work toward as faculty members. A brief introduction to ABET accreditation requirements is then provided, including a review of basic terms and, most importantly, how our individual courses contribute to program accreditation. The development of a course syllabus, textbook selection, and the development of course material are then covered, followed by a discussion of proactive methods to establish a good relationship with our students. The chapter concludes with a forthright discussion of challenges facing faculty members.
3.2 THE STUDENT AS A CUSTOMER
I believe all of us became faculty members because we enjoy working with students. If this is not
true, perhaps we should consider a different line of work. At the most fundamental level, our jobs
as faculty members would not exist if it were not for the students. A common thread throughout
this book, and also my personal inspiration as a faculty member, is a student-first attitude. That is,
what we do on a day-to-day basis is guided by what is best for the students, collectively and individually. This principle should not be misinterpreted as being academically “easy”
on students. It is quite the contrary. We as dedicated educators set high expectations for our students
and then work with them to help them achieve the goals we have set.
3.3 WHAT DID YOU WANT FROM A TEACHER WHEN YOU
WERE A STUDENT?
In this section we take a trip down memory lane to remember what we wanted from teachers when
we were students. This is only a partial list; you are encouraged to add to the list.
As a student, this is what I wanted from my teachers:
• A well-defined course syllabus. As a student, it was important to know what the course was
about, a detailed schedule of what would be covered during each lesson, when examinations
would occur, and a detailed list of homework assignments. During my undergraduate years, I
was carrying a very full academic load and was also working 20-30 hours per week. I was also
newly married. My time was precious and it was important to know when key events in each
course were scheduled. I considered a vague, general syllabus virtually useless. It communicated
to me (and maybe unfairly so) that the instructor did not know where the course was going.
• Big picture of the course. I found it very helpful if the instructor provided a detailed overview,
a framework, of the course during the first meeting. The overview helped me to connect the
course with prior coursework I had completed. Also, an overall framework provided a scaffold
where I could connect new course concepts. I also found it very helpful when the instructor
would review the big picture periodically throughout the course. This was an effective tool to
keep on track and also provide structure for all course concepts and material. As we discovered
in Chapter 1, providing this structure helps to provide the same structure within our own
memory and aids in the recall and application of course material.
• Objectives. As a student, clear course objectives were important to me. I wanted to know
what I would be learning in the course and to what level I would be held accountable for the
material.
• Expectations. As a student, I, like many students, was motivated to excel in my coursework.
It was important to me to know exactly what was expected of me to achieve a specific grade
in each course. Furthermore, I was challenged by high instructor expectations.
• Well-prepared, understandable lectures. As a student I thought that a faculty member’s primary responsibility was to translate complex course material into well-prepared, comprehensible lectures. Now that I am a faculty member, I still feel the same way. We owe it to our
students to provide nothing less than our best efforts in classroom preparation and delivery.
• Real world examples. As a student I always wanted to know how I would use the concepts and
information presented in class to solve real world problems. If I could make the connection
to a real world application, I found the material easier to understand, comprehend, learn and
apply. Material we learned in Chapter 2 backs this up as a sound teaching style [1].
• Knowledgeable, available, approachable, and helpful. Like many, many students, I worked
very hard to do well in school. When I got stuck on a homework assignment, I appreciated
helpful suggestions and insight from my teachers. It was important to know that I could find
my teachers during scheduled office hours. I found it frustrating to seek out an instructor for
help and either have a difficult time finding them or, if they were available, find that they often seemed disgruntled to provide help. We owe it to our students to provide regular, advertised office
hours and be available and willing to help.
• Laboratory exercises that worked. As a student of many engineering and science courses,
I completed many, many laboratory assignments. The laboratory is an essential component
of many courses. I found it very frustrating as a student to work on poorly constructed or
unworkable laboratory exercises. Fortunately, this did not occur very often.
• Fair examinations. As a student, examinations brought closure. I enjoyed studying hard to
prepare for a well-written, fair examination. I defined a fair examination as one that thoroughly
covered the presented material at the same depth and level at which the material was taught.
• Fair, timely, and transparent grading. Students work very hard to excel in class and earn their
grades. It is important that faculty provide fair, consistent, timely and transparent grading
on course assignments and examinations. For example, a written rubric is useful to grade
written assignments. In like manner, grading examinations against a prepared solution with
established partial credit clearly delineated is very helpful. As a student I felt treated fairly if a
faculty member could clearly describe why I missed points on assignments and examinations.
Also, timely feedback on examinations and homework is important. I found it very difficult as
a student to apply myself and concentrate on new material if I did not know how I performed
on previous work.
With this list of what students want (and deserve) from faculty members, the remainder of
this chapter provides practical, useful advice on how to meet these expectations.
3.4 COURSE DEVELOPMENT
In this section, we discuss techniques to develop a course. We begin with a brief discussion of
ABET, Incorporated followed by syllabus development and textbook selection. We then discuss the
development of teaching materials including lesson plans and other courseware such as laboratory
exercises.
3.4.1 ACCREDITATION
ABET, Incorporated provides accreditation services for programs in engineering, engineering tech-
nology, and computer science in the United States and in several other countries. Other disciplines have similar accreditation agencies.
As a faculty member, it is essential that you are familiar with some of the basic ABET concepts
or those of your discipline’s accrediting body. In this section we provide a basic overview of ABET
accreditation, review key terms, and most importantly, describe how your course(s) support your
program’s accreditation process.
It is important to realize that ABET does not accredit universities, colleges, or departments.
They accredit programs within a department. For example, a department of electrical and com-
puter engineering may have several accredited programs in electrical engineering and computer
engineering.
The key concept of a strong, viable, current program is continuous improvement. The purpose
of continuous improvement is to regularly assess the health of a program via a number of measurable
attributes. If issues or challenges are found within a program, they are proactively corrected before
becoming major stumbling blocks for the program. This is achieved by performing a regular, defined
assessment process.
Associated with this continuous improvement process is a series of related concepts. These
are quoted verbatim from a key ABET source document: “Criteria for Accrediting Engineering
Programs [2].”
• Program Educational Objectives: “Program educational objectives are broad statements that
describe what graduates are expected to attain within a few years of graduation. Program
educational objectives are based on the needs of the program’s constituencies [2].” Constituents
are those whom your program serves.
• Student Outcomes: “Student outcomes describe what students are expected to know and
be able to do by the time of graduation. These relate to the skills, knowledge, and behaviors
that students acquire as they progress through the program [2].” “The ABET Criterion 3 (a)
through (k) student outcomes for engineering programs are [2]:
(a) an ability to apply knowledge of mathematics, science, and engineering
(b) an ability to design and conduct experiments, as well as to analyze and interpret data
(c) an ability to design a system, component, or process to meet desired needs within realistic
constraints such as economic, environmental, social, political, ethical, health and safety,
manufacturability, and sustainability
(d) an ability to function on multidisciplinary teams
(e) an ability to identify, formulate, and solve engineering problems
(f ) an understanding of professional and ethical responsibility
(g) an ability to communicate effectively
(h) the broad education necessary to understand the impact of engineering solutions in a
global, economic, environmental, and societal context
(i) a recognition of the need for, and an ability to engage in life-long learning
(j) a knowledge of contemporary issues
(k) an ability to use the techniques, skills, and modern engineering tools necessary for engi-
neering practice [2].”
So how does your course support ABET accreditation efforts for your department programs?
Figure 3.1 demonstrates the accountability trail from the university mission, to the college and
department missions, to different department programs, to program objectives, to student outcomes,
and to your specific course objectives. It is important to know which student outcomes your course
Figure 3.1: How does your course support program accreditation [2]?
supports. As you develop your course, it is essential that your course objectives support these student
outcomes. Course objectives help develop the content of the course. They describe what the student
will learn by the completion of the course. They also help the student understand the content of the
course and the level to which they will be held accountable. A course objective describes:
• What each student will be able to do by the end of the course.
• The required cognitive level of learning for the objective. The required level of learning is specified using one of Bloom’s Taxonomy action verbs discussed in Chapter 2.
• The ABET student outcome (3(a)–3(k)) the objective supports.
Example. Here are some example course objectives from a senior level, design intensive course
in Verilog Hardware Description Language.
Students shall:
1. (ABET: 3(c), 3(e), 3(k)) Design a Verilog Hardware Description Language module to imple-
ment a State Machine diagram.
2. (ABET: 3(b), 3(k)) Create test benches to validate correct operation of an HDL-implemented
design.
3. (ABET: 3(b), 3(c), 3(e), 3(k)) Design a Verilog HDL based system to meet established re-
quirements. Verify system design using Verilog test benches.
4. (ABET: 3(f)) Relate the concepts of ethical practice to the proper testing of a new design.
5. (ABET: 3(g)) Construct a written and oral report on your Verilog HDL based system em-
ploying provided guidelines. Present a 15 minute oral presentation on your design.
In the next section, we discuss development of a course syllabus. The objectives for your course
are included in the syllabus.
3.4.2 SYLLABUS
For several years a close friend and I would take our adult sons and their mutual friends on a fishing
trip to Lac Laronge in Northern Saskatchewan, Canada. We met in Cheyenne, Wyoming and then
caravaned through eastern Wyoming and Montana into the beautiful province of Saskatchewan. It
is about a 2,000 mile round trip. Could you imagine making this trip without prior planning and
without a map? Teaching a course without a detailed syllabus would be a similar challenge.
Some universities (including mine) specify the minimum essential contents for a syllabus.
These include (quoted from UW Regulation 6-809 [3]):
• A description of the course, including its purpose, content, and goals,
• Meeting times and/or schedule of the course,
• The general requirements and expectations for the course,
• The instructor’s contact information and office hours,
• Academic dishonesty policies, with a statement or a reference to the appropriate university regulation,
• Grading and attendance policies,
• A list of required materials, including texts, etc.,
• A statement or a reference to the University Disability Support Services website.
• If a University Studies Program (General Studies) course, include what requirement(s) it fulfills.
In addition to these minimum specified requirements, I would like to add the following:
• A detailed lesson-by-lesson schedule of topics, reading assignments and homework assignments.
• A listing of course objectives and the ABET student outcomes supported by the course.
• A description of where course material can be found. For example, if the course has an associated website, provide the location in the syllabus.
• A list of class expectations (attendance, cell phones, participation, etc.).
A sample syllabus is provided in the Appendix.
3.4.3 TEXTBOOK SELECTION
It is important to select a good textbook that supports your course objectives. Potential textbooks in a
specific topic area may be obtained from textbook publisher representatives or may be requested from
publisher websites. It is also very helpful to review textbooks at engineering educators’ conventions
such as the annual American Society for Engineering Education (ASEE) Annual Conference and
Exposition [6].
I try to choose textbooks from both the students’ and educators’ point of view. From the
students’ point of view, I try to find textbooks that are readable and have multiple worked examples and
ample illustrations. From the educators’ point of view, I look for the same thing as from the students’
point of view but also look for included teaching materials such as lecture slides, supplemental
materials, and sample code.
To choose the best textbook for a course, it is helpful to construct a textbook selection matrix
as shown in Figure 3.3. Course concepts are listed in the first column. Potential textbooks are listed
along the top row. If desired, you may assign a weight to the importance of each concept. Each
textbook is then scored in each of the course concept areas. The best textbook for the course is
readily identified via this process.
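If it helps to see the arithmetic behind the matrix, the following Python sketch mirrors the approach of Figure 3.3; the concept names, weights, and 0-5 scores are hypothetical placeholders rather than values from the text:

# Importance weight assigned to each course concept
weights = {"Concept 1": 3, "Concept 2": 1, "Concept 3": 2}

# How well each candidate textbook covers each concept (0 = not at all, 5 = very well)
scores = {
    "Book A": {"Concept 1": 4, "Concept 2": 5, "Concept 3": 2},
    "Book B": {"Concept 1": 5, "Concept 2": 3, "Concept 3": 4},
}

# Weighted total for each book; the highest total suggests the best fit
totals = {book: sum(weights[c] * s[c] for c in weights) for book, s in scores.items()}
best = max(totals, key=totals.get)
print(totals)   # {'Book A': 21, 'Book B': 26}
print(best)     # Book B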
Figure 3.2: “This is going to be a challenging course. The syllabus has a table of contents!” [J. Barrett, Closer to the Sun International, Inc.]
3.4.4 LESSON PLANS
Much like a syllabus guides the flow and conduct of a course, a lesson plan guides the flow and
conduct for each lesson. A lesson plan contains:
• Lesson objectives. While attending Academic Instructor School (AIS) many years ago at
Maxwell Air Force Base, Alabama; I was taught the format for lesson objectives with the
acronym “tootlifest.” This acronym stands for “The objective of this lesson is for each student
to (insert level of learning)(insert concept) [4].” The construction of lesson objectives helps you
to focus on the specific content of the lesson. The level of learning will be one of the Bloom’s
Taxonomy action verbs discussed in Chapter 2. An objective is required for each major concept
to be taught during the lesson.
• A listing of assigned reading material.
• A listing of assigned homework exercises.
Figure 3.3: Textbook selection matrix.
• An outline of the material to be discussed. This section is the heart of the lesson plan. Some
faculty members will provide great detail in this area while others only provide a listing of
topics. Personally, I write very detailed lesson plans and insure I understand the details of the
material I will teach. I have found that concepts that are vague to me while preparing lesson
plans will be a complete blank when lecturing in front of a room of sharp students. Detailed
lesson plans prevent this from happening.
• Examples to be used during the course of the lecture.
• A record copy of student handouts used to support the lecture.
• Related support materials such as PowerPoint slides.
• A record copy of student in-class exercises.
• As you prepare the lesson plan, remember from Chapter 2 that you are serving a wide variety of learning styles.
• A core theme when developing lessons is to remember we are preparing students for a profession. It is important to weave in practical illustrations of real world examples involving integrity, ethics and doing the right thing. The public good depends on engineers and scientists performing their duties with the utmost integrity. For example, in a design-intensive course, it is important to emphasize that a design is only as good as the test plan that supports it. You can emphasize the ethical considerations of properly and exhaustively testing a new design or product before it goes into production.
The first time I teach a course, I spend considerable time developing good lesson plans. I will
then reuse the lesson plans and update them each time the course is taught. Also, I readily share my
lesson plans with other faculty members.
It takes quite a bit of time to develop good lesson plans. However, once they are complete,
they are a real treasure. I would highly encourage you to seek out others who have taught courses
you have been assigned to teach. Rather than starting lesson plans from scratch, you can use their
lesson plans and most importantly their wisdom as a starting point for developing your own plans.
3.5 OTHER ITEMS TO CONSIDER
There are a number of important concepts to cover in the professional development of an engineer
or scientist. Many of these concepts are required by accreditation bodies. Often these concepts are
not covered by a specific course but instead are spread throughout the curriculum. These concepts
include but are not limited to [2]:
• Design.
• Economic concepts.
• Environmental considerations.
• Societal impacts.
• Political aspects.
• Ethical considerations.
• Health and safety.
• Manufacturability.
• Sustainability.
• Multidisciplinary teamwork.
• Global considerations.
• Contemporary issues.
• Leadership.
• Management.
• Professional licensure.
3.6 ESTABLISHING GOOD STUDENT RELATIONSHIPS
It is essential to establish good student relationships. As a student it was important to me for my
teachers to be available, friendly, and approachable. Here are a few pointers to help get this started.
• Be warm, friendly, approachable, and respectful. Park your ego. There is no room for arrogance,
pride, disrespect, or a condescending attitude when trying to establish a positive and productive
relationship with your students.
• Be available. Provide publicized office hours and keep them. When students come by for help
be warm, friendly, and approachable.
• Keep an open-door policy. During non-office hours be available to help students that come
by for assistance. This is easily accomplished by keeping your door open any time you are in
your office.
• Learn students’ names. There are several methods to do this. As you hand back graded quizzes and examinations, work to associate students’ names with their faces. Some of my colleagues take pictures of their students early in the semester to help them learn students’ names. This only requires a few minutes of class time but goes a long way toward making students feel welcome.
• Tell students that you care about their success and then show it.
• Encourage students to ask questions when they arise. It is important they understand concepts
occurring earlier in the lecture to avoid confusion on more complex, related concepts that may
come up later in the lecture.
• Always, always, always treat students with respect.
3.7 CONDUCTING THE LECTURE
As a student I thought that a faculty member’s primary responsibility was to translate complex course
material into well-prepared, comprehensible lectures. I still believe this. So, with a good lesson
plan in hand, how do we deliver a good lesson? Here are a few basic guidelines.
• Start the lesson on time.
• Start with an overview of lesson plan contents and related course announcements.
• Briefly review key concepts from the previous lesson.
• Briefly describe how new lesson concepts link to those previously covered.
• Present the lesson content detailed in your lesson plan. During lesson plan development, carefully consider how the material is best presented and how available board space will be used and then follow your plan.
• During the lesson presentation, speak clearly and with suitable volume.
• Encourage questions from students to clarify course concepts.
• Observe students during the lecture for feedback cues. Are they following the lecture? Are they bored? I always look at students’ eyes for cues on how the lesson is progressing.
• Encourage active student participation with in-class demonstrations and student exercises.
• Conclude the lecture with a brief summary of lesson concepts.
• Conclude the lecture on time.
3.8 CHALLENGES
As faculty members we sometimes must deal with complicated and challenging situations. In this
section we briefly discuss professional conduct, academic dishonesty, and challenging parents.
Professional conduct. As faculty members it is important that we conduct ourselves in a
competent, professional manner–always. We must model professional conduct for our students. Our
actions, conduct and speech must be beyond reproach. In a similar manner, we should be very
careful not to place ourselves into difficult situations. I always keep my door open when meeting
with students. If a sensitive matter must be discussed behind closed doors, I will always ask another
faculty or staff member to sit in on the discussion as an impartial observer.
We must also insure that all members of our classroom feel welcome and comfortable. In-
appropriate remarks concerning a student’s sex, race, national origin, or sexual orientation will not
be tolerated. In all contacts with students, we should be professional, warm, friendly, approachable,
and respectful. There is no room for arrogance, pride, disrespect, or a condescending attitude when
trying to establish a positive and productive relationship with your students.
Academic Dishonesty. One of the greatest challenges facing faculty members is establishing
a charge of academic dishonesty against a student. It is a very stressful situation for all involved;
however, an academic dishonesty situation must be dealt with appropriately. Your institution has
published regulations and guidelines concerning academic dishonesty. You should familiarize yourself
with their contents to insure you comply with directives when a situation occurs. As mentioned
above, it is highly recommended that another faculty or staff member be present when you discuss
the situation with the student. Furthermore, you should carefully document all that transpires in
relation to the academic dishonesty situation.
Well-meaning parents and FERPA. Occasionally you may be contacted by concerned par-
ents. As a parent of three college graduates, I was frequently concerned about how they were doing
in their coursework, their college life, and their overall well-being. Sometimes you may be contacted
by parents who demand that you provide them information on their child. Under no circumstances
may you share information with the parent without the student’s written permission. The Family
Educational Rights and Privacy Act (FERPA) provides very strict guidelines concerning a student’s
right to privacy. In a nutshell, student records belong to the student. The protected information
includes grades, finances and discipline records. Parents are not allowed access to student records or
information on progress without the written permission of the student [5].
In addition to not sharing student information with parents without written student permis-
sion, there are other FERPA rules which must be adhered to [5].
• Grades may not be posted with student names. A unique code known by the student and the teacher may be used to link a specific student and their grade for posting. Even if encoded, a grade list may not be posted in alphabetical order. In like manner, graded material may not be left unattended for students to pick up. The basic premise is that a student must not be in a position to view the grades or graded work of another student. A brief illustrative sketch follows this list.
• Class rosters and grade sheets should always be protected. Social security numbers should not
be disclosed. Also, under no circumstances will a list of students be provided to any third party
for commercial purposes or otherwise.
• In like manner, a student schedule may not be shared with anyone without specific student
permission.
• Potential or current employers do not have the right to access student educational infor-
mation. This includes letters of recommendation. You may not include information on grades,
grade point average, or class standing without permission.
• Access to online student records is on a strict need-to-know basis. You must have a legitimate
educational need to access a specific student’s record.
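To make the grade-posting guidance above concrete, here is a small Python sketch; the private codes and scores are invented for illustration and are not from the source:

import random

# Private code known only to the student and the teacher -> exam score
roster = {"A7Q2": 88, "K9Z4": 93, "M3X8": 76}

posting = list(roster.items())
random.shuffle(posting)   # never post in roster (alphabetical) order

for code, score in posting:
    print(code, score)    # no names, IDs, or ordering clues appear in the posting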
3.9 AVAILABLE RESOURCES
There are a number of instructional resources available to the faculty member from online sources,
your university library and professional societies. We discuss each in turn.
• Online resources. There are a number of online resources available to the instructor. For
example, AcademicPub allows an instructor to develop a custom textbook from a variety of
sources. This is especially helpful if your course contains a variety of concepts not contained
within a single textbook. The company handles copyright permissions and releases. The final
assembled textbook is then available in print or digital copy www.academicpub.com.
• Library resources. Your best source of information may be your university library. You will find
a visit with your professional discipline’s librarian to be time well spent. I recently met with
librarians at my university to discuss education and discipline specific resources available to
the new instructor. Here is a sample of what they shared [7]:
– A wide variety of educational resources are available from ERIC, the Education Resources
Information Center. ERIC is a U.S. Department of Education sponsored online digital
library of education research and information. ERIC provides access to education based
literature to support educational research, improve practice in learning and teaching, and
educational decision-making www.eric.ed.gov.
– Professional disciplines have national level databases to support the discipline. For ex-
ample, engineering programs are supported by Engineering Village. Engineering Village
describes itself as an “information discovery platform of choice for the engineering com-
munity www.ei.org.”
– The library stacks are a rich resource on how students learn. Resources available on higher
education may be found in the “LB” section of the stacks.
– Subscriptions. Your library subscribes to a large number of journal and possibly textbook
resources. For example, Morgan & Claypool Publishers (the publisher of this book) pro-
vide a series of Synthesis digital libraries. If your library subscribes to this service, faculty,
staff, and students at your university may download digital books from the series www.
morganclaypool.com.
• Professional societies. Your discipline’s professional society (e.g., ASCE, IEEE, ASME, etc.) provides considerable teaching resources on discipline concepts of interest. Also, the national
engineering and computer science professional practice societies (e.g., NSPE, ACM, etc.) have
information on ethical practice and a code of ethics for use in lesson development. Also, the
NCEES has considerable information available on professional licensure.
3.10 SUMMARY
This chapter provided practical suggestions on getting ready for the first day of classes. It began by
reiterating the mindset that we have used throughout the book—our students are our customers.
A trip down memory lane was then taken to reflect on what we expected from teachers when we
were students. A brief introduction to ABET accreditation requirements to see how our individual
courses contribute to our program accreditation was then provided followed by a discussion on the
development of a course syllabus, textbook selection and the development of course material. We
then discussed proactive methods to establish a good classroom dynamic with our students. The
chapter concluded with a forthright discussion of challenges facing faculty members.
REFERENCES AND FURTHER READING
[1] Felder, Richard and Linda Silverman “Learning and Teaching Styles in Engineering Educa-
tion,” Engineering Education 78(7) (1988): 674-681. Cited on page(s) 28
[2] “ABET—Assuring Quality in Technical Education.” Online. Internet. www.abet.org Cited
on page(s) xvii, 30, 31, 36, 48
[3] “Course Syllabus Requirement.” UW Regulation 6-809. September 12, 2008. Cited on page(s)
32
[4] “Academic Instructor School” Online. Internet. www.au.af.mil Cited on page(s) 34
[5] “Family Educational Rights and Privacy Act (FERPA).” Online. Internet. www2.ed.gov Cited
on page(s) 39
[6] “American Society for Engineering Education (ASEE).” Online. Internet. www.asee.org
Cited on page(s) 33
[7] Melissa Bowles-Terry and Lawrence Schmidt. Personal interview. January 30, 2012. Cited on
page(s) 40
3.11 CHAPTER ACTIVITIES
1. Develop a list of attributes that you expected from a teacher when you were a student. Which
ones will you follow as a teacher?
2. What is your department mission?
3. What is the difference between program objectives and student outcomes?
4. What are your program’s objectives?
5. What are your program’s student outcomes?
6. What student outcomes does your course(s) support?
7. Develop objectives for your course(s).
8. Develop a syllabus for your course(s).
9. Develop a textbook selection matrix for your course(s).
10. What are the differences between course objectives and lesson objectives?
11. Develop lesson objectives for each lesson within your course.
12. What are the key elements of a lesson plan?
13. Deliver a lesson plan and capture the delivery for later review. Review the captured lesson.
What did you learn about your delivery? In what areas can you improve?
14. What techniques will you employ to establish good student relationships?
15. Locate and read in detail the academic dishonesty policies for your university.
16. What is FERPA? What are your responsibilities under FERPA?
CHAPTER 4
Assessment
4.1 OVERVIEW
As a faculty member it is imperative that we regularly assess the academic health of our students,
ourselves, and our courses. This chapter is divided into three separate sections covering each of these
topics. The chapter begins by discussing techniques to regularly assess student performance. This is
followed by a discussion on the assessment of our own performance as teachers and on
assessing our course(s). In this section we see how our course-level assessment and the related activity
of evaluation is an integral part of our department’s continuous improvement program. The term
assessment is used throughout this chapter to include the gathering, interpretation, and evaluation
of data to render improvements in our students, our performance as instructors and in our course
content.
4.2 ASSESSMENT OF YOUR STUDENTS
As educators we want our students to succeed in our courses. The student also
has a shared responsibility to keep up with and complete course requirements. In this section we
provide proactive methods to assess students’ progress in our lectures and during the course of our
class. Our goal is the early detection of student issues so we can proactively make corrections for
student success.
Assessing ongoing lectures. When giving a lecture it is important to periodically assess
students’ understanding during the course of the lecture. There is no reason to move on to more
advanced concepts if the majority of students do not understand the concepts already covered.
There are several methods to quickly assess student understanding and progress during a
lecture. As mentioned earlier, watching student eyes for attentiveness and understanding during
the course of the lecture is a good barometer of lecture progress. If students appear to be attentive
and following the lecture, it might be safe to proceed to other lecture concepts. On the other hand,
if students appear confused, corrective action may be required. When this occurs, it is helpful to
remind students of how the concepts being covered are related to previous covered concepts. Also,
it is helpful to “hit the rewind button.” That is, briefly highlight the main points of the concepts
just covered. Most importantly, ask the students if they have any questions on the material before
moving on.
Another technique to engage students and assess their progress during a lecture is to conduct a
“secret poll.” In this technique, ask the students to close their eyes for a few moments. They are then
asked to point their thumb up if they are comfortable with the current concept being
covered, point their thumb sideways if they grasp the concept but would like additional information
or examples, or point their thumb down if they are lost or confused. The students seem to enjoy this
exercise and it readily provides their view on how the lecture is going. It provides you the chance
to quickly assess how the lecture is going and to make adjustments to your lecture as necessary
for student comprehension. I originally learned this technique from William “Bill” Parker, who served with and taught for many years at the National Weather Service. I frequently use this
technique to assess the health of ongoing lectures.
Quizzes. A quiz is an effective tool to periodically take the pulse of your course. It also
provides students an opportunity to gauge how they are progressing in your course and gives them a glimpse of how you write examination questions.
To carefully craft a quiz question, determine the concept to be covered and the Bloom’s
Taxonomy level at which it will be assessed. The quiz question may now be written. If a one-hour
examination typically consists of five questions, allow students ten minutes in class to complete a
quiz. The goal is to help students prepare for the rigor of an examination by providing quiz questions
of similar rigor under similar time constraints.
If possible, quizzes should be graded and handed back at the beginning of the next lecture. It is
difficult for students to concentrate on new lecture material if they (and you) do not know how they
performed on previous concepts. A technique a colleague of mine, Dr. David “Dave” Whitman, has
used with great success is for the students to grade their own quiz. After students have completed the
quiz, Dave reviews the quiz and has students correct their own quiz during the review. The students
are instructed to make corrections in a different color of ink than that used while taking the quiz.
This is an effective method of emphasizing quiz concepts and providing timely feedback on the quiz.
Dave then collects the quizzes and assigns appropriate grades later and returns them to students at
the beginning of the next lecture.
Examinations. An examination is prepared in much the same way as a quiz. As a starting
point, draw up a list of concepts to be covered by the examination. For each concept, determine the
Bloom’s Taxonomy level at which it will be assessed. It is also helpful to determine if the examination
questions are testing established course objectives. Each examination question may now be written.
It is also helpful to include a cover sheet for the examination which provides a location for the
student’s printed name and signature. The cover sheet also provides a summary of points assessed
for each examination problem and a point total and grade.
When you have finished writing the examination, it is important to “test drive” the exami-
nation. Ideally, the examination should be completed by a colleague familiar with the examination
concepts. The colleague can provide feedback on examination concepts and time allotted for the ex-
amination. If this is not possible, you should work the examination to ensure questions are complete
and accurate. An answer key with partial credit defined for each portion of the examination should
be completed before the examination is administered.
It is essential to provide fair, timely, and transparent grading on the examination. As mentioned
with quizzes, every attempt should be made to return graded examinations the next time the course
meets. Often, this will require considerable work and dedication on your part. To
ensure fair assignment of partial credit, grade all examinations one question at a time against the
answer key. Provide written comments on how to correctly solve the problems students missed.
While grading examinations, it is important to maintain student anonymity; your
grading and assignment of partial credit is based only on the prepared key.
When examinations are returned at the beginning of the next class, hand them back individ-
ually. This helps associate names with faces and also preserves students’ privacy. The examination is
then briefly reviewed. I inform the students that I will not quibble over partial credit since it was
established before the examination was given. However, if a mistake in examination grading has
occurred, students are encouraged to make a note of it on the examination cover sheet and return
the examination for further review.
It is also helpful to pass around a grade sheet showing student progress after a major event
(such as an examination) in a course. Students naturally want to know how they are performing
in class. Ensure the grade sheet does not provide any identifying information on the student in
accordance with FERPA rules discussed elsewhere in the book.
4.3 ASSESSMENT OF YOU
It is essential for you to regularly assess your progress as an instructor and implement required
changes to improve your performance and that of your students. This will require a great deal of
honesty and self assessment on your part. There are a number of sources of available data to evaluate
your progress as an instructor. We discuss several.
Student performance on quizzes and examinations. Quiz and test results may be used as a
barometer of your performance as an instructor. If, on average, students are performing on par in
course assignments, quizzes, and examinations, no adjustments may be required. On the other hand,
if you are disappointed with students' performance on average, adjustments on your part may be
required. When a quiz or examination average is not where you would like it to be, ask yourself
the following questions, and consider tabulating per-question results as sketched below.
(cid:129) Is there a particular concept that students are struggling with? If so, how can I reinforce this
concept in class?
(cid:129) Have I examined students on a concept at a higher level of Bloom's Taxonomy than where I
taught them? If so, is this fair?
(cid:129) Was the quiz or examination too long for the allotted time?
(cid:129) Were examination questions and instructions written clearly?
(cid:129) Did the examination require information or concepts that were not covered in the course?
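To tabulate per-question results, a minimal sketch such as the one below can flag the concepts students are struggling with (this is a hypothetical example; the gradebook layout, point values, and the 70% threshold are assumptions, not part of the text):

# Hypothetical sketch: flag weak concepts from per-question quiz or examination scores.
# The gradebook maps each (anonymous) student to the points earned on each question.
max_points = [10, 10, 10, 10, 10]          # assumed point value of each question
gradebook = {
    "student_1": [9, 7, 6, 10, 8],
    "student_2": [10, 5, 4, 8, 9],
    "student_3": [8, 6, 5, 9, 10],
}

for q, max_pts in enumerate(max_points):
    earned = [scores[q] for scores in gradebook.values()]
    avg_pct = 100.0 * sum(earned) / (len(earned) * max_pts)
    flag = "  <-- reinforce this concept in class" if avg_pct < 70.0 else ""
    print(f"Question {q + 1}: class average {avg_pct:.1f}%{flag}")

Mapping each question back to the concept and Bloom's Taxonomy level it assessed then answers the first two questions above directly.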
Midterm course critiques. Another method of obtaining data to evaluate how your course
is progressing is to use midterm course critiques. Your institution may mandate a midterm course
assessment. If so, midterm critiques provide valuable insight into how the course is progressing.
Study critique results carefully to determine if midcourse corrections are required. If so, make them.
You owe this to your students.
Figure 4.1: "A 57% average. What went wrong?" [ J. Barrett, Closer to the Sun International, Inc.]
Many institutions do not require midterm critiques. Instead, you may employ an informal
method of obtaining student feedback. A lesson or two after handing back the first examination,
pass out a 3 by 5 inch card to each student. Ask them to anonymously give you a grade from "A"
to "F," to tell you what they like about the course, and to suggest how you might immediately
modify it to improve their performance. Typically, students are more than willing
to provide honest feedback on the course and instructor performance. Leave the room while the
students are completing the cards and ask them to place the cards face down on a desk at the front of
the room when they are finished. Review the cards in your office to see where improvements can
be made, decide which ones you will make, and then shred the
students' cards. Most importantly, report back to the students during the next lecture on what they
said and the improvements you will make in response to their comments. Then implement the changes
required.
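A quick tally like the minimal sketch below can summarize the cards before you decide what to change (a hypothetical example; the letter grades shown are invented data):

from collections import Counter

# Hypothetical sketch: tally anonymous letter grades from midterm feedback cards.
card_grades = ["A", "B", "B", "C", "A", "B", "D", "B", "C", "A"]   # assumed card responses
tally = Counter(card_grades)
total = len(card_grades)
for grade in "ABCDF":
    count = tally.get(grade, 0)
    print(f"{grade}: {count:2d} card(s), {100.0 * count / total:.0f}% of class")

The written comments, of course, carry the suggestions you will actually act on; the tally simply shows the overall tone at a glance.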
End-of-course critiques. Most institutions mandate some form of formal end-of-course
critiques. Encourage students to complete them, and emphasize that the results are used to
improve the course and your teaching. When course critiques are received, it is important
to carefully review the results to determine possible corrective action for your course or your
delivery of course content. You may receive critiques that appear to be unfair or unfounded. One
suggestion is to read over the critiques and then let them lie for a week. Revisit them after
you've had time to get past your emotional response, and then review them dispassionately
to determine if corrections in your teaching methods are required.
4.4 SELF ASSESSMENT
In previous chapters we developed a list of tenets of good instructors and also what you expected
from instructors when you were a student. These tenets establish a benchmark against which to assess
your performance. Your personal assessment will require complete honesty and deep soul searching
on your part.
To perform a self assessment, list the tenets of good instructing that you have committed to
live by professionally. Based on your end-of-course critiques, assign yourself a grade from "A" to "F"
for each of the tenets. Where there is room for improvement, provide concrete changes that you
will make to improve your performance.
In the next chapter we discuss the role of a mentor. It is a good idea to share the results of your
self assessment with a trusted mentor and advisor. Your mentor will be able to provide additional
feedback on how best to improve your performance.
4.5 ASSESSMENT OF YOUR COURSE
In Chapter 3 we discussed the connection between your course and meeting ABET program ob-
jectives and student outcomes [1]. ABET also provides definitions associated with continuous im-
provement. Some of these definitions are quoted verbatim from a key ABET source document:
“Criteria for Accrediting Engineering Programs [2].” Computer Science and Technology programs
have similar criteria documents.
(cid:129) Assessment: “Assessment is one or more processes that identify, collect, and prepare data to
evaluate the attainment of student outcomes and program educational objectives. Effective
assessment uses relevant direct, indirect, quantitative and qualitative measures as appropriate
to the objective or outcome being measured. Appropriate sampling methods may be used as
part of an assessment process [2].”
(cid:129) Evaluation: “Evaluation is one or more processes for interpreting the data and evidence
accumulated through assessment processes. Evaluation determines the extent to which student
outcomes and program educational objectives are being attained. Evaluation results in decisions
and actions regarding program improvement [2].”
(cid:129) Continuous Improvement: Continuous improvement uses the results of evaluation processes
for the program educational objectives and the student outcomes and any other available
information as input to make program improvements [2].
Before continuing with this chapter, it would be helpful if you familiarized yourself with your
program’s educational objectives and student outcomes.
Provided in Figure 4.2 is a sample continuous improvement model. This diagram picks up from
Figure 3.1. Recall from Figure 3.1 that course objectives directly support ABET student outcomes
3(a) through 3(k). We also discussed deriving lesson objectives and content from the course objectives.
Periodically we need to assess and evaluate how well our course is achieving specific course and lesson
objectives. This is accomplished using the two-step assessment and evaluation process.
In the assessment step we gather data on the course. This would include student course
performance, data gathered from course critiques, and also other sources of information. In the
evaluation step the data is interpreted to determine if course and lesson objectives have been achieved.
An important part of the evaluation process is to determine what improvements will be made.
The short-term improvement cycle may occur at the completion of the course. In your eval-
uation you may decide to modify course objectives, lesson objectives, or lesson content to meet
established goals. Your program has established procedures in place to complete the long-term (12–
18 month) improvement cycle. The long-term cycle provides feedback on how well and to what level
program objectives and student outcomes have been achieved. The bottom line is your course is not
stagnant. It is alive and constantly evolving. It is imperative that we strive to continually improve
the content and delivery of our courses. Our students deserve nothing less.
Figure 4.2: Continuous improvement.
It is often difficult to obtain external, unbiased data on your course. One source of data is the
Fundamentals of Engineering (FE) examination. The FE examination is administered twice per year,
typically in late April and late October, by the National Council of Examiners for Engineering and
Surveying (NCEES). The examination consists of a common four-hour morning portion of 120 ques-
tions covering the following topics: engineering economics, electricity and magnetism, chemistry,
ethics, engineering statistics, fluid mechanics, strength of materials, thermodynamics, mathematics,
statics and dynamics, computers, and material properties. The afternoon session is also four hours long
and consists of 60 discipline-specific questions. Examinees may select from one of the following
disciplines: civil, electrical, industrial, mechanical, environmental, or general engineering [2, 3].
If students from your institution take the FE examination, NCEES provides examination
pass rates and topic-specific assessment data. For example, if you teach a course in
thermodynamics you can obtain data on how well the FE examinees from your institution performed
in this area relative to all examinees nationwide. This is valuable, unbiased data for use in assessing
your course [2, 3].
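As an illustration, institutional topic performance might be set against the national averages as in the minimal sketch below (hypothetical; the topic names, percentages, and report layout are invented, not actual NCEES data):

# Hypothetical sketch: compare institutional FE topic performance to the national average.
# NCEES subject reports provide data of this general kind; the values below are invented.
fe_topic_results = {
    # topic: (institution average % correct, national average % correct)
    "Thermodynamics": (62.0, 58.0),
    "Engineering economics": (51.0, 55.0),
    "Statics and dynamics": (66.0, 64.0),
}

for topic, (inst, natl) in fe_topic_results.items():
    delta = inst - natl
    status = "above" if delta >= 0 else "below"
    print(f"{topic}: institution {inst:.0f}% vs. national {natl:.0f}% ({abs(delta):.0f} points {status})")

A topic that sits well below the national average for several examination cycles is a strong, external signal that the corresponding course needs attention.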
4.6 SUMMARY
As faculty members it is imperative that we regularly assess the academic health of our students,
ourselves, and our courses. This chapter was divided into three separate sections covering each of
these topics. The chapter began by discussing techniques to regularly assess student performance.
This was followed by a discussion on the assessment of our performance as teachers. We concluded
with a discussion on assessing our course(s). In this section, we saw how our course-level assessment
and the related activity of evaluation is an integral part of our department’s continuous improvement
program.
REFERENCES AND FURTHER READING
[1] “ABET—Assuring Quality in Technical Education.” Online. Internet. www.abet.org Cited
on page(s) 48
[2] “National Council of Examiners for Engineering and Surveying (NCEES).” Online. Internet.
www.ncees.org Cited on page(s) 50
[3] S.F. Barrett, W. LeFevre, J.W. Steadman, J.S. Tietjen, K.R. White and D.L. Whitman, “Us-
ing the Fundamentals of Engineering (FE) Examination as an Outcomes Assessment Tool.”
Online. Internet. www.ncees.org Cited on page(s) 50
4.7 CHAPTER ACTIVITIES
1. Why are assessment and evaluation important?
2. What is the difference between assessment and evaluation?
3. Describe methods of obtaining feedback on lecture progress during the course of the lecture.
4. Describe basic guidelines in writing a quiz.
5. Describe basic guidelines in assembling and writing an examination.
6. Describe basic guidelines in conducting a self assessment.
7. Conduct a self assessment of your teaching progress.
8. Conduct an assessment of your course. Are you achieving course objectives? What proactive
changes can you make to improve the course?
9. How may the Fundamentals of Engineering Examination be employed in your course assess-
ment and evaluation?
C H A P T E R 5
Beyond the first day
This chapter looks beyond the first day of class and delves into the areas of effective mentoring, the
rewards of teaching, and some practical guidelines for balancing all the demands placed upon the
new educator. The chapter concludes with suggestions on how to continue to be a good and effective
educator.
5.1 MENTORING
Everyone needs someone to talk to, consult with, bounce ideas off of, obtain professional advice
from, or commiserate with when things are not going well. That is where a good mentor can help.
In this chapter we discuss the traits of a good mentor, how to find one, and how to be one. This
chapter is based on a column on mentorship that was originally published in the IEEE magazine
Computing in Science and Engineering. This chapter is dedicated to the great mentors I've had in my
professional development.
5.1.1 TRAITS OF A GOOD MENTOR
Please take a moment to answer the following four questions [1]:
(cid:129) Why did you become an engineer or scientist [1]?
(cid:129) At what point in your life did you decide to pursue a career in science or engineering [1]?
(cid:129) Were there people in your life who helped you see the excitement to be found in a technical
career [1]?
(cid:129) Were there people in your life who encouraged you when things were not going so well [1]?
The answers to these questions illuminate the importance of mentors in career develop-
ment [1]. I recently participated in a university committee on mentoring. One of our tasks was
to identify traits of a good faculty mentor. We identified many traits including: good mentors are
committed members of the community of scholars; enthusiastic; always learning, interested in new
ideas; selfless; friendly; caring; warm; engaged; hard-working; and inspiring.
5.1.2 FINDING A GOOD MENTOR
As a new faculty member it is important to find a good mentor. Your job responsibilities will typically
fall into three overlapping categories of teaching, research, and service. Ideally, it would be helpful
to find a mentor who will be helpful in all three areas but that is not always possible. A different
mentor for each category of your job is perfectly acceptable.
I did not have to seek out a mentor. They came to me. The first day on the job at the University
of Wyoming, Dr. Raymond (Ray) Jacquot, Ph.D., P.E. and Dr. Jerry Cupal, Ph.D. came by my office
and welcomed me to Wyoming and the University. They made it clear they were available to help
in any way possible. I never forgot their hospitality and helpful spirit. I took them up on their offer
for advice and assistance many times.
To find a mentor, I would recommend finding a kindred spirit, someone you feel com-
fortable talking to and sharing your greatest concerns with. For teaching assistance, I recommend seeking
out the help of those faculty members who teach similar courses in your area of expertise. They may
be a source of lesson material and advice on how to best present the material.
For research mentoring, seek out the assistance of those who work in a similar or related area.
They may be a source of collaboration. It is also helpful to have a sounding board when pursuing
new research ideas.
5.1.3 BEING A GOOD MENTOR
I was blessed with an outstanding graduate advisor, Dr. Ashley J. (A.J.) Welch of the University of
Texas at Austin (UT). Dr. Welch was always readily available, approachable, and held me to a high
level of accountability.
Upon my arrival at UT, Dr. Welch told me what my graduate research project would be. He
then gave me the time and latitude to develop a research plan to reach the research goal. We reviewed
the plan in detail and iteratively developed a strong plan with key milestones well defined. I knew
exactly what I was supposed to do and had an achievable, workable plan to get there. We met weekly
over the next three years. At each meeting, Dr. Welch and I would go over what I had worked on the
previous week and then discuss goals for the coming week. It should be mentioned that Dr. Welch
used this approach with all of the many master's and Ph.D. students he was advising.
When I step back from my specific situation and try to determine what made Dr. Welch
such a successful graduate advisor, I see a common pattern appear. Dr. Welch worked very hard
to be accessible to his students and develop a strong relationship based on trust. He also set high
expectations for each of his students and then helped us to achieve them. This was accomplished
through setting a clear research goal early in each student’s program and then meeting with them
on a regular basis to review progress and provide assistance as needed. Along the way, Dr. Welch’s
students became strong, independent researchers and I suspect good research mentors.
Your time will be very precious as you begin your career. If you elect to serve as a mentor to
develop the next generation of engineers and scientists, here are some suggestions on how to get
involved: visit local junior and senior highs to describe science and engineering careers; get involved
with freshmen orientation and retention programs; volunteer to give information sessions to prospec-
tive students; and help teach summer outreach programs to junior and senior high students [1].
5.2 TEACHING REWARDS
Teaching is a very fulfilling profession. You will find it challenging and exciting with no two days the
same. There will be good days when you feel that you have made a difference in students’ lives. On
the other hand, you will have days when you will wonder if you were really meant to be a teacher.
Fortunately, these days are few and far between. Often, the challenging days are where we learn the
most toward becoming better educators.
There are obvious tangible benefits and rewards of serving as an educator. These tangible
benefits include promotion and tenure. As a new educator it is essential that you know exactly
what is expected of you to achieve these milestones. Your job description will provide performance
expectations in the areas of teaching, research and service. Sometimes these expectations are provided
as a percentage of your workload. For example, a new faculty member may have a job distribution of
50% teaching, 40% research and 10% service. It is important that you know what the expectations are
for your performance and how it will be assessed. A seasoned mentor can help you successfully navigate
the tenure and promotion process. Also, a regular meeting to discuss your performance with your
department head is highly recommended.
There are many intangible benefits of serving as an educator. You will not find a more fulfilling
profession. You will go home from work every night (typically exhausted) knowing that you have
made a positive difference in a number of students’ lives. You will celebrate with them when they
graduate but you will also be there to encourage them when they have failed an examination or a
class. The high points of your career will come when a student unexpectedly drops by some years
after their graduation and tells you what an impact you had on their professional development and
their success. There is no greater reward.
5.3 FINDING BALANCE
The education profession is not a sprint, it is a marathon. You will need to find balance between
your professional and personal life to remain healthy and invigorated for the long haul. In the early
years of your appointment it will be important to pursue with vigor the requirements of tenure and
promotion. However, your personal life is equally (or more) important. Constantly strive to find
the balance between the two. Also, you will be asked to participate in a number of committees and
projects. When asked to participate in these worthwhile activities, carefully consider the impact on
your time and career progression. Learn to graciously say “no.”
5.4 WHERE TO GO FROM HERE?
This book was intended as a basic primer on college-level teaching and learning for a new faculty
member of engineering or applied science. First and foremost, this book was about learning and
teaching. However, it also provided helpful information on related topics such as mentorship, student
challenges, graduate students, tenure and promotion, and accreditation. I hope you have found this
book useful in preparing for service as a faculty member. However, it is only a beginning. Serving
Figure 5.1: Serving as an educator is a lifelong profession based on continual improvement and growth
[ J. Barrett, Closer to the Sun International, Inc.]
as an educator is a lifelong profession based on continual improvement and growth. You need to
constantly work toward self improvement as a teacher and in your course delivery and content.
So where to from here? I would recommend the following.
(cid:129) Track down and read each of the references in this book. You will find them to be a wealth of
information and inspiration and a springboard to related material.
(cid:129) Your institution most likely has a center for teaching excellence. Become a regular attendee
(and contributor) to their professional development seminars.
(cid:129) Find a mentor and meet regularly with them.
(cid:129) Commit to becoming the best educator possible. It does not take a lot of time. You owe your
students nothing less.
(cid:129) Read the book “Survive and Thrive: A Guide for Untenured Faculty,” authored by Wendy
Crone [2].
(cid:129) Become a member of the American Society for Engineering Education (ASEE)
(www.asee.org). They host an annual conference which highlights best practices in engi-
neering education through the presentation of papers, workshops and vendor displays. Also,
a number of resources for the teacher are available from their website including access to
published ASEE papers.
For the students!
5.5 SUMMARY
This chapter looked beyond the first day of class and delved into the areas of effective mentoring, the
rewards of teaching, and some practical guidelines for balancing all the demands placed upon the new
educator. The chapter concluded with suggestions on how to continue to be a good and effective
educator.
REFERENCES AND FURTHER READING
[1] S. F. Barrett. “Mentoring and Making a Difference: What Can One Person Do?” Computing
in Science and Engineering, 13(1): 70-73, 2011. Cited on page(s) 53, 54
[2] W. C. Crone. “Survive and Thrive: A Guide for Untenured Faculty,” Morgan & Claypool
Publishers, 2010. Cited on page(s) 56
5.6 CHAPTER ACTIVITIES
1. Who were the mentors in your life? Were there character traits they shared?
2. What are the traits of a good mentor?
3. Find mentors for the teaching, research, and service aspects of your job.
4. What are the expectations for your job performance? Develop a plan on how you will meet
these expectations.
5. Develop a plan on how you will continue to grow as an educator.
6. Develop a personal mission statement to be the best educator possible.
A P P E N D I X A
Sample syllabus
EE4490 Hardware Descriptive Language
(HDL) Digital Design, Fall 2011
Course Information and Policies
Instructor: Steve Barrett, Ph.D., P.E., EN2076, Phone: 766-6181, e-mail: [email protected]
Class time: M, W, F, 1:10-2:00 PM, CR103
Office hours: M, W 2:00-5:00 PM, EN2076
Texts:
(cid:129) “Verilog HDL: A Guide to Digital Design and Synthesis,” Samir Palnitkar (SP), Sun Mi-
crosystems Press – A Prentice Hall Title, 2003, second edition, ISBN: 0-13-044911-3
(cid:129) “Logic and Computer Design Fundamentals,” Mano and Kime (M&K), Pearson- Prentice
Hall, 2008, 4th edition, ISBN: 0-13-600158-0. This textbook is required.
(cid:129) “EE4490 HDL Digital Design” course notes – available from bookstore
Grading: Grades will be awarded at the standard 90%, 80%, 70%, and 60% break points.
Prerequisites: It is expected that the student has had a class in digital circuit design (EE2390 or
equivalent).
Course description: Hardware Descriptive Language (HDL) Digital Design. 3. Hardware De-
scriptive Language design of digital systems. Industrial CAD tools are used to produce a functional
description of hardware that is both simulated and then synthesized into hardware. Methods to
describe both combinational logic and synchronous devices are given. Devices such as CPLDs and
FPGAs are targeted in this design process. Emphasizes design techniques. Prerequisite: EE2390.
Course objectives:
Students shall:
1. (ABET: 3(c), 3(e), 3(k)) Design a Verilog Hardware Description Language module to imple-
ment a State Machine diagram.
2. (ABET: 3(b), 3(k)) Create test benches to validate correct operation of HDL implemented
design.
3. (ABET: 3(b), 3(c), 3(e), 3(k)) Design a Verilog HDL based system to meet established re-
quirements. Verify system design using Verilog test benches.
4. (ABET: 3(f )) Relate the concepts of ethical practice to the proper testing of a new design.
5. (ABET: 3(g)) Construct a written and oral report on your Verilog HDL based system em-
ploying provided guidelines. Present a 15 minute oral presentation on your design.
Topics covered:
(cid:129) Economic/time to market incentives behind an HDL
(cid:129) The design flow process with an HDL
(cid:129) Target hardware from an HDL
(cid:129) Fundamentals of the Verilog HDL
(cid:129) Application of Verilog to combinational logic, synchronous logic, and finite state machines
(cid:129) Use of behavioral and structure state machine diagrams for Verilog HDL development and
documentation
(cid:129) Importance of testing the designs for correctness and reliability
(cid:129) Appropriate use of the Xilinx Verilog HDL simulation and synthesis tools
(cid:129) Design, implementation, and documentation techniques
(cid:129) Real world design issues
Requirements: The course consists of 3 one-hour lectures per week for a total of 15 weeks. A
heavy emphasis is placed on practical, regular homework assignments. All students are expected to
satisfactorily complete the assignments. If code is used from another source, you must reference the
source in your program. Two exams will be given throughout the semester and a comprehensive final
examination. You will also be required to complete a team-based final design project to demonstrate
your capability to solve a challenging digital design project using Verilog HDL as the target solution.
Homework: Homework sets will be periodically given with prescribed due dates. Assignments must
be handed in at the beginning of class time on the specified due date. No credit will be given for
late assignments. Assignments must be worked neatly, properly documented, and tested. You will
work on a two-person team for each homework assignment. The student team is responsible for
developing a test bench to thoroughly document the operation of their homework solution. A single
assignment solution will be turned in for each two-person student team. However, each student will
be held individually accountable for all material covered in the homework via quizzes, examinations,
and final examinations.
Attendance: Attendance at every scheduled class session is highly encouraged. Students who are
habitually absent will be at a disadvantage. Students are responsible for all material presented in
class. Attendance is required for scheduled examination times. Students who miss an examination
must obtain an excuse in accordance with the UW bulletin. For absences not covered by these rules,
students must contact the instructor immediately to avoid a grade of zero for missed examinations.
Suggestions: This course covers considerable material. Some recommendations for success include:
(cid:129) Attend every class – new material is covered each lecture.
(cid:129) Read assigned material in advance as detailed in the syllabus.
(cid:129) Start homework assignments early. Seek instructor help as needed early.
(cid:129) Do not ignore homework. It comprises 20% of your grade. It is the best preparation for
examinations.
(cid:129) Ask questions in class, during discussions, and during office hours.
Class project: The purpose of the final class project is to demonstrate your ability to design, test,
and document a challenging digital design project using Verilog HDL design techniques.
(cid:129) Two-person teams
(cid:129) Potential projects
– Data encryption/decryption
– Asynchronous Communications
– Synchronous Communications
– Priority Encoder
– Rate Multiplier
– Johnson Counter
– Cyclic Redundancy Check Generator
– Pulse Width Modulator Signal Generator
– Stepper Motor Controller – 4 channel
– Linear Feedback Shift Registers
– Autonomous Robot Controller
– Simple Pipeline Processor
(cid:129) Project must be approved in advance by instructor via proposal
(cid:129) Single page proposal due Wednesday, Oct 5, 2011
(cid:129) Proposal consists of title, abstract, keywords, and requirements
(cid:129) Deliverables:
– 25 minute presentation
– 5 page written report
– Design solution with test bench demonstrating proper project operation
– Written Report - I will grade your written report for:
∗ Organization - following the prescribed format (10%)
∗ Title
∗ Abstract
∗ Keywords
∗ Background
∗ Requirements
∗ Design
∗ Testing
∗ Results
∗ Conclusions
∗ Appendices
· Verilog Project
· Test bench
– Grammar, technical completeness, and readability (10%)
– Sufficient background information – depth as well as breadth of material (20%)
– Literature search - you must use at minimum three separate references (10%)
– Your design – did it meet requirements? Did you prove that it worked via testing proce-
dures (50%)
(cid:129) Oral Report: Your fellow students will anonymously score your oral presentation from 0 to 100.
The students’ scores will be averaged and serve as your score for the oral presentation. You will
also be graded on the number of student presentations you attend. On the final examination
you will be responsible for information presented during the student presentations.
Disability assistance: If you have a physical, learning, or psychological disability and require accom-
modations, please let the instructor know as soon as possible. You must register with, and provide
documentation of your disability to University Disability Support Services (UDSS) in SEO, room
330 Knight Hall. Appropriate protocols will be developed after that time.
Academic Honesty: The University of Wyoming is built upon a strong foundation of integrity,
respect, and trust. All members of the university community have a responsibility to be honest and
the right to expect honesty from others. Any form of academic dishonesty is unacceptable to our
community and will not be tolerated. Teachers and students should report suspected violations of
standards of academic honesty to the instructor, department head, or dean [University Regulation
6-802].
Course schedule: [Session-by-session schedule table with columns for Session, Date, Topics, Reading Assignment, and Homework. The 43 sessions run from Aug 22 through the final examination (Mon, Dec 5, 1:15-3:15 PM, CR103), progressing through Verilog HDL topics from Palnitkar (Chps 1-7: hierarchical modeling concepts, basic concepts, modules and ports, gate-level modeling, dataflow modeling, behavioral modeling, and finite state machine/datapath controller design) and computer design topics from Mano & Kime (Chps 7-11: registers and register transfers, memory basics, computer design basics, instruction set architecture, RISC and CISC central processing units). The schedule also lists homework sets HWS 1-12, two in-semester examinations, the project proposal and project planning milestones, the Labor Day holiday, Thanksgiving break, instructor travel days, and project presentations in the final week.]
A P P E N D I X B
Personal Worksheet
(cid:129) Develop your personal list of tenets of great teaching that you will follow.
(cid:129) Spend some time reflecting on teachers, both good and bad, from your past. Develop a list of
both good and bad tenets from your personal reflections.
(cid:129) Develop a personal teaching mission statement based on the tenets of great teaching from the
area of attitude.
(cid:129) Select a course you are currently teaching or will teach in the near future. Develop objectives
for that course. How do the course objectives support student outcomes?
(cid:129) Develop a list of concrete methods to use in the classroom to apply the tenets of great teaching
summarized in the synthesized model.
(cid:129) In the Paul Crips vignette, what tenets of great teaching were exhibited?
(cid:129) In the Jaime Escalante vignette, what tenets of great teaching were exhibited?
(cid:129) Identify a teacher you greatly admire; interview them, identify the tenets of great teaching
they exhibit, and write a teaching vignette about them.
(cid:129) Write your own personal teaching vignette.
(cid:129) Based on the work of Felder and Silverman, develop your own personal list of techniques to
bridge your teaching style to the learning styles of your students.
(cid:129) Develop a list of attributes that you expected from a teacher when you were a student. Which
ones will you follow as a teacher?
(cid:129) What techniques will you employ to establish good student relationships?
(cid:129) Conduct a self assessment of your teaching progress.
(cid:129) Conduct an assessment of your course. Are you achieving course objectives? What proactive
changes can you make to improve the course?
(cid:129) Who were the mentors in your life? Do they share any character traits?
(cid:129) What are the traits of a good mentor?
(cid:129) What are the expectations for your job performance? Develop a plan on how you will meet
these expectations.
(cid:129) Develop a plan on how you will continue to grow as an educator.
83
84 B. PERSONAL WORKSHEET
(cid:129) Develop a personal mission statement to be the best educator possible.
Author’s Biography
STEVEN F. BARRETT
Steven F. Barrett, Ph.D., P.E., received a BS in Electronic Engineering Technology from the
University of Nebraska at Omaha in 1979, an M.E.E.E. from the University of Idaho at Moscow
in 1986, and a Ph.D. from The University of Texas at Austin in 1993. He was formerly an active
duty faculty member at the United States Air Force Academy, Colorado and is now the Associate
Dean of Academic Programs at the University of Wyoming. He is a member of IEEE (senior)
and Tau Beta Pi (chief faculty advisor). His research interests include digital and analog image
processing, computer-assisted laser surgery, and embedded controller systems. He is a registered
Professional Engineer in Wyoming and Colorado. He, along with co-author Dr. Daniel Pack, wrote
six textbooks on microcontrollers and embedded systems. In 2004, Barrett was named “Wyoming
Professor of the Year” by the Carnegie Foundation for the Advancement of Teaching and in 2008
was the recipient of the National Society of Professional Engineers (NSPE) Professional Engineers
in Higher Education, Engineering Education Excellence Award.
Index
“Stand and Deliver”, 11
6 Ps, 8
ABET accreditation, 29
academic dishonesty, 39
assessment, 43, 48
attitude, 7
Bloom’s Taxonomy, 18
Bloom, Benjamin, 18
Carnegie Foundation, 5
CASE, 5
challenges, 38
codified, 18
consolidation, 18
continuous improvement, 48
conversion, 18
course assessment, 48
course critiques, 47
course objectives, 32
Crips, Paul, 9
Escalante, Jaime, 11
evaluation, 48
examinations, 44
FE examination, 7, 50
Felder and Silverman, 21
FERPA, 7, 39
Garfield High School, 12
GreatSchools, 5
Jung, Carl, 20
Kupfermann, 17
lecture, 38
lecture assessment, 43
lesson objectives, 34
lesson plans, 34
long term improvement, 48
Mathews, Jay, 12
MBTI®, 20
memory consolidation, 17
memory model, 17
mentoring, 53
midterm critiques, 45
Myers-Briggs, 20
NCEES, 7
parents, well-meaning, 39
personal assessment, 45
preparation, 8
professional conduct, 38
program educational objectives, 30
quizzes, 44
rewards, 55
self assessment, 47
sensitization, 18
short term improvement, 48
student outcomes, 30
student relationships, 37
syllabus, 32
synthesized model, 7
textbook selection, 33
tootlifest, 34
U.S. Professors of the Year, 5
Whitaker, Todd, 2
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=6813309.pdf&bkn=6813308&pdfType=book
|
SYNTHESIS LECTURES ON ENGINEERING
Series ISSN: 1939-5221
Engineering Thermodynamics and
21st Century Energy Problems
A Textbook Companion for Student Engagement
Donna Riley, Smith College
Energy is a basic human need; technologies for energy conversion and use are fundamental to human
survival. As energy technology evolves to meet demands for development and ecological sustainability
in the 21st century, engineers need to have up-to-date skills and knowledge to meet the creative challenges
posed by current and future energy problems. Further, engineers need to cultivate a commitment to and
passion for lifelong learning which will enable us to actively engage new developments in the field. This
undergraduate textbook companion seeks to develop these capacities in tomorrow's engineers in order
to provide for future energy needs around the world.
This book is designed to complement traditional texts in engineering thermodynamics, and thus is
organized to accompany explorations of the First and Second Laws, fundamental property relations, and
various applications across engineering disciplines. It contains twenty modules targeted toward meeting
five often-neglected ABET outcomes: ethics, communication, lifelong learning, social context, and
contemporary issues. The modules are based on pedagogies of liberation, used for decades in the humanities
and social sciences for instilling critical thinking and reflective action in students by bringing attention
to power relations in the classroom and in the world.
This book is intended to produce a conversation and creative exploration around how to teach and
learn thermodynamics differently. Because liberative pedagogies are at their heart relational, it is important
to maintain spaces for discussing classroom practices with these modules, and for sharing ideas for
implementing critical pedagogies in engineering contexts. The reader is therefore encouraged to visit the
book's blog at http://smiththermo.wordpress.com.
About SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis
Digital Library of Engineering and Computer Science. Synthesis Lectures
provide concise, original presentations of important research and development
topics, published quickly, in digital and print formats. For more information
visit www.morganclaypool.com
Morgan & Claypool Publishers
ISBN: 978-1-60845-363-4
Morgan & Claypool Publishers
Engineering Thermodynamics and
21st Century Energy Problems
A Textbook Companion for Student Engagement
Donna Riley
SYNTHESIS LECTURES ON ENGINEERING
Engineering Thermodynamics
and 21st Century Energy Problems
A textbook companion for student engagement
Synthesis Lectures on
Engineering
Engineering Thermodynamics and 21st Century Energy Problems: A textbook companion
for student engagement
Donna Riley
2011
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo
2011
Systems Engineering: Building Successful Systems
Howard Eisner
2011
Fin Shape Thermal Optimization Using Bejan’s Constructal Theory
Giulio Lorenzini, Simone Moretti, Alessandra Conti
2011
Geometric Programming for Design and Cost Optimization (with illustrative case study
problems and solutions), Second Edition
Robert C. Creese
2010
Survive and Thrive: A Guide for Untenured Faculty
Wendy C. Crone
2010
Geometric Programming for Design and Cost Optimization (with Illustrative Case Study
Problems and Solutions)
Robert C. Creese
2009
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey, Jeffrey W. Holmes
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Analog Multimedia
Explorations
Lina J. Karam, Naji Mounsef
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Digital Multimedia and
Robotics Explorations
Lina J. Karam, Naji Mounsef
2008
CAD/CAM of Sculptured Surfaces on Multi-Axis NC Machine: The DG/K-Based
Approach
Stephen P. Radzevich
2008
Tensor Properties of Solids, Part Two: Transport Properties of Solids
Richard F. Tinder
2007
Tensor Properties of Solids, Part One: Equilibrium Tensor Properties of Solids
Richard F. Tinder
2007
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
2007
Project Management for Engineering Design
Charles Lessard, Joseph Lessard
2007
Relativistic Flight Mechanics and Space Travel
Richard F. Tinder
2006
Copyright © 2012 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations in
printed reviews, without the prior permission of the publisher.
Engineering Thermodynamics and 21st Century Energy Problems: A textbook companion for student
engagement
Donna Riley
www.morganclaypool.com
ISBN: 9781608453634
paperback
ISBN: 9781608453641
ebook
DOI 10.2200/S00387ED1V01Y201110ENG016
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING
Lecture #16
Series ISSN
Synthesis Lectures on Engineering
Print 1939-5221 Electronic 1939-523X
Engineering Thermodynamics
and 21st Century Energy Problems
A textbook companion for student engagement
Donna Riley
Smith College
SYNTHESIS LECTURES ON ENGINEERING #16
Morgan & Claypool Publishers
ABSTRACT
Energy is a basic human need; technologies for energy conversion and use are fundamental to
human survival. As energy technology evolves to meet demands for development and ecological
sustainability in the 21st century, engineers need to have up-to-date skills and knowledge to meet
the creative challenges posed by current and future energy problems. Further, engineers need to
cultivate a commitment to and passion for lifelong learning which will enable us to actively engage
new developments in the field. This undergraduate textbook companion seeks to develop these
capacities in tomorrow’s engineers in order to provide for future energy needs around the world.
This book is designed to complement traditional texts in engineering thermodynamics, and
thus is organized to accompany explorations of the First and Second Laws, fundamental property
relations, and various applications across engineering disciplines. It contains twenty modules targeted
toward meeting five often-neglected ABET outcomes: ethics, communication, lifelong learning,
social context, and contemporary issues. The modules are based on pedagogies of liberation, used
for decades in the humanities and social sciences for instilling critical thinking and reflective action
in students by bringing attention to power relations in the classroom and in the world.
This book is intended to produce a conversation and creative exploration around how to teach
and learn thermodynamics differently. Because liberative pedagogies are at their heart relational,
it is important to maintain spaces for discussing classroom practices with these modules, and for
sharing ideas for implementing critical pedagogies in engineering contexts. The reader is therefore
encouraged to visit the book’s blog at http://smiththermo.wordpress.com.
KEYWORDS
energy, thermodynamics, entropy, liberative pedagogies, critical pedagogy, feminist ped-
agogy, engineering education, climate change, engineering ethics, communication, life-
long learning, social context, contemporary issues, development, service learning
Contents
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Why College? Why Thermodynamics? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Why this Book? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
A Textbook Companion: A Book of Ideas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
An Open Discussion for Students and Teachers: Learning Objectives . . . . . . . . . . 4
Learning Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Evaluating Student Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1 What and Why? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.1 Module 1.1. Thermodynamics is About Energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.1.1 Exploration: What is Energy? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2 Module 1.2. Pedagogy: How to Learn Using this Book . . . . . . . . . . . . . . . . . . . . . . 12
1.2.1 Exploration 1: Principles of Critical Pedagogies . . . . . . . . . . . . . . . . . . . . . . 14
1.2.2 Exploration 2: Models of Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.3 Module 1.3. US and World Energy Needs and Uses . . . . . . . . . . . . . . . . . . . . . . . . 20
1.3.1 Exploration 1: Energy Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.3.2 Exploration 2: Women, Poverty, and Energy . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.3.3 Exploration 3: 1 kW per capita? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.4 Module 1.4. US and World Energy Policies: What are the Issues? . . . . . . . . . . . . 24
1.4.1 Exploration 1: Copenhagen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.4.2 Exploration 2: The Cost of Energy [20] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.5 Module 1.5. Getting Education Right for a Sustainable Energy Future . . . . . . . . 28
1.5.1 Exploration 1: Power/Knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.5.2 Exploration 2: What do Current Engineering Students Need to Learn
to be Able to Work on Energy Issues? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2 The First Law: Making Theory Relevant . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.1 Module 2.1. Learning from History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.1.1 Exploration 1: First Law in Western Europe . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.1.2 Exploration 2: De-Centering Western Thermo . . . . . . . . . . . . . . . . . . . . . . 37
2.2 Module 2.2. Energy Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.2.1 Exploration 1: “Foreign” Oil Independence . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.2.2 Exploration 2: Energy Independence Reconceived . . . . . . . . . . . . . . . . . . . . 39
2.3 Module 2.3. Evaporative Coolers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.4 Module 2.4. Hunger, Poverty, and Obesity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.5 Module 2.5. Thermo to Life . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3 The Second Law and Property Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.1 Module 3.1. The Limits of Efficiency: Heat Engines vs. Other Energy
Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.2 Module 3.2. Perpetual Motion Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.3 Module 3.3. Entropy as a Social Construct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.3.1 Exploration 1: Origins of Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.3.2 Exploration 2: Entropy’s Philosophical Implications . . . . . . . . . . . . . . . . . . 56
3.4 Module 3.4. Evaluating Entropy Analogies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.5 Module 3.5. Making Math Relevant: Thermodynamic Relations in Context . . . . 59
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4 Thinking Big Picture about Energy and Sustainability . . . . . . . . . . . . . . . . . . . . . 63
4.1 Module 4.1. Climate Action . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.2 Module 4.2. Selection Criteria for Energy Technologies . . . . . . . . . . . . . . . . . . . . . 66
4.2.1 Exploration 1: Developing Selection Criteria . . . . . . . . . . . . . . . . . . . . . . . . 66
4.2.2 Exploration 2: Evaluating and Selecting Power Generation
Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.2.3 Exploration 3: Evaluating and Selecting Transportation Technologies . . . 69
4.3 Module 4.3. Is it Green? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.3.1 Exploration 1: Nuclear Power as a Green Alternative? . . . . . . . . . . . . . . . . . 71
4.3.2 Exploration 2: Ethanol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.3.3 Exploration 3: Coal Train [19] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.4 Module 4.4. Home Energy Uses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.4.1 Exploration 1: Solar Cooker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.4.2 Exploration 2: Refrigeration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.4.3 Exploration 3: Dean Kamen’s Stirling Engine . . . . . . . . . . . . . . . . . . . . . . . . 78
4.5 Module 4.5. Ethics of Energy Disasters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Author’s Biography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Acknowledgments
Many of the innovations in this book came from my own students in thermodynamics, and from
friends and colleagues kind enough to help me think through the course and my pedagogy. In the
early years it was Stefan Brueck and Sylvia Thorson-Smith who introduced me to bell hooks’s work
and the critical pedagogy tradition. Colleagues at Smith including Lisa Armstrong, Alex Keller,
Ginetta Candelario, Jennifer Guglielmo, and Marguerite Harrison have helped me think further
about my teaching. In more formal settings colleagues participating in the Kahn Institute on Disorder
and the Sherred Center Teaching Circles on Diversity and on the Gulf Spill have further helped me
develop ideas in this book. I thank Kamyar Haghighi and the Purdue Engineering Education faculty
for hosting me on sabbatical while I worked on this book, and particularly thank Alice Pawley and
Julia Thomson for discussing the book with me at some length.
I thank the Engineering, Social Justice, and Peace community, particularly George Catalano
and Caroline Baillie, for the opportunities they have extended to me to develop ideas for this book,
especially George’s grant from Campus Compact that resulted in the development of the module
on hunger, poverty, and obesity.
Some of the material in this book is based upon work supported by the National Science
Foundation under grants 0448240 and 1037655. Any opinions, findings, and conclusions or recom-
mendations expressed in this material are those of the author and do not necessarily reflect the views
of the NSF.
All along my students in the thermodynamics course have been great sports – whether they
came along enthusiastically or reluctantly, I thank them for all they did to improve learning in my
course over the years. Student researchers Lindsay Holle and Ally Gorin worked on this project with
Smith College funding, and students from the Liberative Pedagogies Project and later the e-book
Dissemination Project also contributed to this book. I particularly thank Nora Paul Schultz and Ida
Ngambeki for their contributions to the development of modules in thermo, and Haley Dell’Orso
and Amanda Nadeau for reviewing drafts of this book.
I thank Lionel Claris and Eleanor Jaffee, my colleagues on the Liberative Pedagogies Project,
for their immeasurable contributions to this project. In the early years of the project Lionel in
particular helped shape many of the curricular innovations found in these pages. It was a joy and a
delight to engage in this creative work with them both.
Thanks to the faculty collaborators in the NSF E-book dissemination project who reviewed
and tested out many of the modules in this book, and continue to help improve them in many ways.
I could not have done this without the support of friends and family. Susannah Howe and
Borjana Mikic, friends and colleagues in the Picker Engineering Program, were sounding boards for
ideas in their infancy. Running partners Lisa, Daphne, Marybeth, Kim and Pam (and Susannah and
Borjana) provided support and a much needed stress release. The Tribe (you know who you are) was
there to listen and support me, and provide me with hilarity, excellent food and friendship along the
way. My familiars Willow and Raven were constant companions through the writing of the book. I
thank my family for offering me support and encouragement.
Finally, to Phil, for being a fellow scientist who “gets it,” for cheering me on, for believing
in me as I dream the impossible, for being at the center of the fullness of my life outside work, my
deepest love and gratitude.
Donna Riley
October 2011
Introduction
Energy is a basic human need; technologies for energy conversion and use are fundamental
to human survival. As energy technology evolves to meet demands for development and ecological
sustainability in the 21st century, engineers need to have up-to-date skills and knowledge to meet the
creative challenges posed by current and future energy problems. Further, engineers need to cultivate
a commitment to and passion for lifelong learning which will enable us to actively engage new
developments in the field. This undergraduate textbook companion seeks to develop these capacities
in tomorrow’s engineers in order to provide for future energy needs around the world.
WHY COLLEGE? WHY THERMODYNAMICS?
I usually start my thermodynamics class off by asking students why they are in college. Typically,
my students are taken aback by the question; most haven’t thought about it, at least not recently.
They describe college as a “logical next step,” as something expected of them, by parents who went
to college as well as by those who did not. Some describe college as necessary to be credentialed for
particular kinds of jobs that they view as desirable. I work with them to challenge their assumptions,
to help them see college as a choice they have made, to take ownership over that choice. Only once
every few years does a student draw on liberal education ideals in her/his/hir answer: she/he/ze is in
college to learn, to develop her/his/hir intellect and abilities in independent and critical thought.
After we discuss why they are in college, I also ask my students why they are taking my course
in thermodynamics. The vast majority respond that they are there because it is required for the
engineering major. I want students, and I want you, the reader, to have other reasons for engaging
with this book: intellectual curiosity, and a commitment to engage with energy issues as future
professionals and/or as citizens of the planet. I don’t want you to read this because it is assigned, but
because energy matters.
Energy availability, production, and use have enormous political and economic implications.
The First and Second Laws are central organizing principles for science and technology, for industry
and commerce. Using theoretical principles like these as well as mathematics to describe physical
phenomena and to model or design useful products and processes goes to the very heart of what
engineering is all about. With applications in such a breadth of areas – transportation, electric power,
refrigeration, heating, ventilation, and air conditioning (HVAC), nutrition and exercise, manufac-
turing of pharmaceuticals, distillation of liquor and gasoline, analyzing behavior of contaminants in
environmental media, and the list goes on – how can an engineering student not find something
relevant to their lives and livelihoods, something of interest personally or professionally?
WHY THIS BOOK?
Current engineering thermodynamics textbooks seem to adhere to an unspoken canon, grounded in
19th century developments of the steam engine in Europe, and subsequent fossil fuel technologies.
While several texts have added updates, sidebars, and problems on more recent technologies, they
do not frame their texts around what engineers need to know to innovate and lead society into
a sustainable energy future. Alfred Carlson, professor of chemical engineering at Rose-Hulman
Institute of Technology, commented in ASEE Prism, “Most thermo books either have no new info
or outdated or useless material.”[1]
This book takes a fresh look at the engineering knowledge and skills required for current
and emerging technologies, and organizes learning around acquiring them. It incorporates innova-
tive engineering pedagogies that foster intentional and independent learning, preparing students to
face new problems and approach new energy technologies with a spirit of inquiry and confidence
throughout their careers. Because our energy future is at least as much about political will as it is
about technological know-how, it includes the fundamentals of energy policy analysis and assists en-
gineering programs in meeting accreditation criteria related to the social implications of technology,
communications skills, and professional ethics.
The book’s distinguishing features include the following:
• Liberative pedagogies and intentional learning – This book embodies the principles of
critical, feminist, anti-racist and post-colonial pedagogies, which seek to empower students
as independent learners and thinkers. Liberative pedagogies engage students where they are,
starting from what students already know from their life experience, and connecting with the
things they find relevant. Recognizing student authority builds confidence and de-mystifies es-
oteric material. With an inductive approach that fosters critical thinking and the development
and pursuit of important questions, readers are encouraged to collectively and independently
explore topics of individual or collective interest. Critical engagement with the world in the
form of reflective action is the ultimate goal of liberative pedagogies, and to this end, exercises
in the book encourage reflection on one’s own learning, and on our collective energy future.
Readers are further challenged to take action as involved citizens and professionals on energy
issues locally, regionally, nationally, and globally.
• Sustainability – The ability to properly assess the ecological impacts of different energy
technologies is increasingly important as sustainability becomes a basic design criterion for
energy systems, in response to deepening concerns about environmental quality and global
climate change. This book takes a critical approach to sustainability and seeks to examine
definitions of sustainability in broader economic, political, and social context.
• Global Perspective – As industrially developing nations plan to meet energy needs for eco-
nomic growth, they are poised to make crucial and far-reaching decisions for developing new
energy infrastructures. This is an exciting time for engineers, offering a teaching moment for
students to consider the impacts of technology and the importance of forward-thinking design.
It is equally important that engineers learn to understand issues of power in the economic and
political contexts of globalization, and this book encourages students to explore global issues
in ways that take these dynamics into account.
• Policy considerations – Politics drives our energy priorities, and our choices of energy tech-
nologies can drive our foreign and domestic political priorities. Engineers must have a working
knowledge of policy and politics as it relates to energy technology.
• Ethics and social responsibility – Energy poses ethical questions that must be confronted
at multiple levels of analysis. Engineers face both individual and collective decisions as pro-
fessionals with important ethical dimensions. Local, national, and international communities
face ethical choices about energy systems and uses. Engineers need a set of analytical skills to
understand the factors that influence their ethical decision making in all of these settings and
roles. Presenting ethics and social responsibility provides realistic and significant professional
and social context for technical material in thermodynamics.
• A multidisciplinary approach – Today’s complex energy systems require a multidisciplinary
approach that spans all engineering disciplines. Engineering Thermodynamics is typically
taken as a required course (or multiple courses) by engineering undergraduates in specific
disciplines. This book covers a range of applications across engineering disciplines; while
applications are rarely specific to a particular discipline, the intent is to broaden the perspectives
of engineers in every discipline.
• History – The historical development of thermodynamics is important for engineers to un-
derstand. Historical presentations of information can provide insight and make the material
approachable for some readers using the drama of discovery. Historical material is made rele-
vant to students’ understanding of key concepts and to current issues in energy.
A TEXTBOOK COMPANION: A BOOK OF IDEAS
There are many thermodynamics textbooks available for engineering classes today. This book does
not replicate this effort, but is designed as a companion to these. It does not cover fundamentals or
provide the typical practice problems found in traditional texts. It is designed for the Morgan and
Claypool Synthesis series, such that students at subscriber institutions might be able to use the book
in electronic form at no additional cost.
Each module in this book is an idea that can be implemented in courses and classrooms any
number of ways. It is up to professors and students to adapt these ideas as appropriate for different
circumstances and learning settings. I have intentionally avoided being prescriptive; instead, modules
represent suggestions that can no doubt be improved upon implementation.
AN OPEN DISCUSSION FOR STUDENTS AND TEACHERS:
LEARNING OBJECTIVES
This book is written with both students and instructors in mind. It is a principle of critical pedagogies
to blur these roles intentionally. And so I hope that students as well as instructors will read this section
on accreditation criteria as they relate to course design and learning objectives.
As ABET’s educational outcomes criteria (Figure 1) have been implemented within engineer-
ing programs over the past decade, the need for textbooks that cover social context and professional
ethics has grown. This book provides content to meet ABET criteria while co-existing with tried-
and-true textbooks. The book is designed to help students develop knowledge and abilities primarily
related to outcomes (f-j): ethics, communication, context, lifelong learning, and contemporary issues.
ABET OUTCOMES CRITERIA
(a) an ability to apply knowledge of mathematics, science, and engineering
(b) an ability to design and conduct experiments, as well as to analyze and interpret data
(c) an ability to design a system, component, or process to meet desired needs within realistic constraints such as economic, environmental, social, political, ethical, health and safety, manufacturability, and sustainability
(d) an ability to function on multidisciplinary teams
(e) an ability to identify, formulate, and solve engineering problems
(f) an understanding of professional and ethical responsibility
(g) an ability to communicate effectively
(h) the broad education necessary to understand the impact of engineering solutions in a global, economic, environmental, and societal context
(i) a recognition of the need for, and an ability to engage in life-long learning
(j) a knowledge of contemporary issues
(k) an ability to use the techniques, skills, and modern engineering tools necessary for engineering practice.
Figure 1: ABET outcomes [2].
While thermodynamics instructors will note that this book does not emphasize ABET out-
comes (b) (experiments and data), (c) (design), (e) (problem solving), or (k) (modern tools), most
traditional thermo courses already heavily emphasize (e), also (b) and (k) when taught with a lab-
oratory, and (c) if a design project is incorporated into the course. The teamwork outcome (d) can
be addressed if the modules are implemented in teams. It is up to the instructor and students to
implement modules in ways that fulfill this outcome if desired. Throughout the book modules that
address particular ABET outcomes are identified with the icons shown in Figure 2.
Figure 2: ABET outcomes icons: (a) knowledge, (c) design, (e) problems, (f) ethics, (g) communication, (h) context, (i) lifelong learning, (j) contemporary issues.
LEARNING PROCESS
This book uses a modular format and employs a particular set of pedagogies to accomplish its
learning objectives. All modules are laid out using a four-step process that draws on critical pedagogies
(Figure 3). First, students engage a topic, usually through reading given material and/or searching for
material on their own. Next, students analyze a process or situation related to the topic. Sometimes
analysis is technical, sometimes social. Sometimes analysis is quantitative, sometimes qualitative.
Then students reflect on a particular question or what they have learned from the analysis. Finally,
students are challenged to initiate some change, either to their way of thinking or in the world at
large as a result of what they have learned. This process is normally iterative, where the change may
initiate another question to engage, and so on.
This book will ask students to “do something,” to engage in learning and teaching in ways that
might be unfamiliar and demand more responsibility than is familiar (more on this in Module 1.2).
Some work may occur in the classroom as traditional assignments. Some of it, I hope, readers
will choose to do independently out of a particular interest. The actions aren’t the same as most
typical “hands on” work done in the lab or design shop. It might not be what most have thought
of as “engineering” before. But this is part of the work we need to do if we want engineering and
engineers to have something meaningful to say about the nation’s and world’s energy issues.
Instructors and students should each understand the additional workloads that are required
when exploring relevant material. There may be limits on what is realistically achievable, given
available resources or institutional policies. Expanding these possibilities is part of the struggle for
education. We need to acknowledge the power dynamics at work in our institutions, and become
creative as we dismantle or work around constraints. This is a process that takes time. This book is
the result of 10 years of creative experiment and iteration. I recommend introducing modules from
this book incrementally rather than all at once, and evaluating each effort, adapting the materials
for particular students in particular settings.
Figure 3: Learning process for modules.
Some will argue that thermodynamics instructors already have too great a challenge before
them in helping students understand the technical material. There is no time for these “extras,”
which would be nice, but are not necessary. I believe skills in context, ethics, communication, and
contemporary issues are absolutely essential, and underemphasized in engineering education today.
So on one level, this may simply be a question of priorities and values. However, even if technical
skills are considered of the utmost value, it has been my experience that teaching material in this
book actually helps students with their technical understanding, by providing motivation and new
perspectives on the material. Thinking in terms of synergy rather than zero-sum games reveals what
these modules have to offer. It helps to remember that concerns about coverage can be counterpro-
ductive if insistence on coverage impedes learning overall. The question should be the following:
what is really important for students to learn now? Is it possible to prioritize in order to incorporate
some of these important topics in the thermodynamics classroom? We need to work toward learning
those things that are going to serve students and society best in the long run.
With additional work for students, and with innovative pedagogies introduced into a classroom
that is very reliant on traditional modes of learning, students should expect to feel challenged and
uncomfortable at times, and instructors should expect resistance from students. To the extent that
introducing these topics bucks institutional trends (department, institution, discipline…), expect
resistance there as well. The best thing for both students and instructors to do is be transparent and
intentional, always providing the motivation behind our decisions and actions. This does not remove
the dynamic of resistance, but it moves the conversation forward.
It is worth the extra effort if this book can contribute to the development of its readers as
thinkers, leaders, ethical decision-makers and agents of social change. Ultimately, I hope each reader,
instructor and student alike, will come away from this book having learned something new – about
energy, or about learning itself. I hope the book leads us all to ask different kinds of questions, hard
questions that change the world and change our own ways of thinking about the world.
EVALUATING STUDENT WORK
Faculty assigning some of the modules in this book, and students undertaking these assignments, will
find themselves outside their comfort zones – these are not problem sets. Even the modules that
require quantitative analysis are open-ended and do not have a single right answer. Several modules
are well suited to in-class discussions or other kinds of interactive activities. Some are suited to
community-based learning projects.
For instructors new to some of these teaching methods, it may be helpful to consult resources
on techniques such as leading discussions. An excellent resource on this topic is the Canadian Society
for Teaching and Learning in Higher Education (CSTLHE) Green Guide [3]. When it comes to
assigning written work, I have found that the development and sharing of evaluation rubrics as part
of the assignment prompt can help students perform better on these assignments, and remove some
of the anxiety around what is perceived as more subjective evaluation of student work. Generally, I
have used the ABET criteria on communication, context, contemporary issues, ethics, and lifelong
learning as performance criteria on the rubric, specifying for each item what constitutes excellent,
average, and poor work. Students sometimes need to iterate to learn the kind of depth of reflection and
critical or original thinking required of lifelong learning, and here I have found that early feedback is
particularly helpful. Many students need practice in developing and supporting effective arguments
in written work, and writing in ethics often requires reminding students of the importance of using
multiple, different ethical frameworks to build their arguments and explore the ethics of an issue in
depth.
REFERENCES
[1] Sharp, J.E.M. (2005). High Tech Text Books. Prism 15(3) (November 2005). Accessed June
6, 2011 from http://www.prism-magazine.org/nov05/tt_01.cfm. Cited on page(s) 2
[2] ABET (2011). Criteria for Accrediting Engineering Programs. Accessed May 31, 2011
from http://www.abet.org/Linked%20Documents-UPDATE/Criteria%20and%20PP/
E001%2010-11%20EAC%20Criteria%201-27-10.pdf. Cited on page(s) 4
[3] Kustra, E.D.H. and Potter, M.K. (2008). Leading Effective Discussions. Green Guide, No.
9. London, Ontario: Society for Teaching and Learning in Higher Education. Ordering in-
formation accessed September 19, 2011 from http://www.stlhe.ca/resources/green-
guides/. Cited on page(s) 7
CHAPTER 1
What and Why?
This book is organized as a textbook companion. It is meant to supplement and complement other
more technically focused thermodynamics textbooks. It is organized into stand-alone modules that
parallel the general development of most thermodynamics texts, so that students and instructors can
engage this book as little or as much as time permits.
This chapter provides an introduction to the book, to the study of thermodynamics, and to
energy problems on a local, national, and global scale. It asks readers to think about what students
need to learn as engineers and as citizens of the planet, to build a sustainable energy future.
The first module asks readers to develop their own definitions of energy and thermodynamics.
The second module provides a hands-on, learn-by-doing introduction to the pedagogies used in this
book. The third and fourth modules tackle big-picture questions: How much energy do we need?
For what do we need energy? How do our energy needs relate to global problems such as climate
change and war? The last module challenges readers to think about what’s in a thermodynamics
textbook or syllabus, and whether that constitutes what engineers need to know about energy in the
Twenty-First Century. Who decided what students should be learning, and what influenced that
decision? What do you think students need to know? How will you pursue this knowledge?
Module 1.1: Thermodynamics is About Energy.
Module 1.2: Pedagogy: How to Learn Using this Book.
Module 1.3: US and World Energy Needs and Uses.
Module 1.4: US and World Energy Policy: What are the Issues?
Module 1.5: Getting Education Right for a Sustainable Energy Future.
1.1 MODULE 1.1. THERMODYNAMICS IS ABOUT ENERGY
Thermodynamics is, and ought to be, the study of energy. For some reason, the word thermodynamics
is daunting, off-putting, and esoteric. Thermodynamics is something your professor knows about,
or other kinds of experts with many degrees in physics or chemistry or engineering. You hope they
will tell you about it in class, or you will read about it in your expensive textbook, and you will write
it down and practice solving problems and hopefully absorb some of what they know.
[ABET outcome icons: (g) communication, (i) lifelong learning]
This book is built on the premise that traditional learning, as the saying goes, “from professor’s
notes to students’ notes and through the minds of neither” is the wrong way to go about learning
thermodynamics. In fact, thermodynamics is nothing more than – and nothing less than – the study
of energy. Most of us have been studying energy our whole lives; we know a lot about it from hands-
on experience. The trouble is most thermodynamics textbooks only focus on a small, outdated slice
of what energy is and how it is used in socio-technical systems.
This book starts with the recognition that you already know about energy, and that you can
speak with some authority about it. This book is also realistic in acknowledging that there is an
existing curriculum in thermodynamics that will continue to exist for some time, perpetuated by
textbooks, accreditation criteria, industry demand, and other forces. The book is therefore organized
to parallel the organization of many typical thermodynamics textbooks, using the concept mapping
of Figure 1.1.
Figure 1.1: A schematic of concepts in an Introductory Thermodynamics course.
Typically, courses start by covering four building blocks in thermodynamics. Clearly, the First
and Second Laws are fundamental ideas that take up the bulk of time in a first thermodynamics
course. The idea that energy is conserved and can be converted from one form to another is a central
organizing principle for science and industry. Understanding how the Second Law limits achievable
efficiencies in energy systems is crucial for realistic design. To put these two laws of thermodynamics
to use, it is necessary to understand the properties of working fluids and other substances on a
conceptual level (what are entropy, enthalpy, and internal energy? What is the difference between
heat and work?). It is also necessary to be able to look up, calculate, and/or estimate property data
using equations of state and either computerized or printed data tables. While property relations are
not emphasized in every first course on thermodynamics depending on mathematical preparation and
other considerations, they play an essential role in developing an understanding of thermodynamic
relationships and in making it possible to quantify properties that are difficult to measure from
those that are easier to measure. These four areas can be thought to form a basis that is common
among different “flavors” of thermodynamics in physics, chemistry, mechanical engineering, chemical
engineering, etc. The pyramids one might build on top of this foundation are many and varied.
The most common sets of applications might be engine cycles common in mechanical engineering
(including automobile and jet engines, electric power generation, and refrigeration cycles) or solution
theory and phase and chemical reaction equilibria in chemical engineering.
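To give these building blocks a concrete, if compressed, form, the relationships below are a minimal sketch in standard textbook notation; they are shown only for orientation, and a traditional text develops each of them carefully, with sign conventions, units, and conditions of validity.

\begin{align*}
  \Delta U &= Q - W && \text{First Law for a closed system: internal energy changes by heat added minus work done by the system} \\
  \eta_{\max} &= 1 - \frac{T_C}{T_H} && \text{Second Law (Carnot) limit on heat-engine efficiency, reservoir temperatures in kelvin} \\
  P\,v &= R\,T && \text{a simple equation of state (ideal gas, molar basis) for estimating property data}
\end{align*}

For instance, an idealized engine operating between reservoirs at 600 K and 300 K can convert at most 1 - 300/600 = 50% of the heat it takes in to work, no matter how cleverly it is designed.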
All of this may have begun to seem esoteric, full of new technical vocabulary and presented in
the abstract. The following exploration returns us to concrete and familiar considerations, developing
a clear definition of energy.
1.1.1 EXPLORATION: WHAT IS ENERGY?
[Exploration diagram: engage (write down what you know or believe about energy, and why it is necessary); analyze (look up several definitions of energy, checking a few textbooks and Internet sources); reflect (how do the different definitions fit together, do they address why energy is so necessary, and what questions do you have?); change (can you develop your own definition of energy, and of thermodynamics?).]
Figure 1.2: What is energy? Old Faithful, a geyser in Yellowstone National Park, Wyoming, USA, is
a dramatic example of geothermal energy. Photo by Jon Sullivan, Public Domain.
http://pdphoto.org/PictureDetail.php?mat=pdef&pg=5274.
At this point, a traditional textbook would typically supply you with a definition of energy.
Instead, this book asks you: what is energy? I think you already know. That doesn’t mean I think you
can repeat a formal definition that will be the same as one an expert would write. It means I think
in your life experience you have come to know what energy is.
1. Engage. Write down what you know about what energy is. Some of it may be what you
remember from other classes you’ve had. Some of it may be what you’ve learned by experiencing
the world. It’s ok if what you write is not 100% correct in expert terms – this is what learning
is all about.
2. Analyze. Use your information literacy skills to gather a few different definitions of energy.
You might want to start with your course textbook, or some reliable sources in the library or
on the Internet. Write these down, and keep track of the source and page numbers, or the
permanent URL and date accessed for Web resources.
3. Reflect. Evaluate these definitions. Don’t try to select a single most valid definition, though
you may want to consider the reliability of different sources you have selected. Think about
what value you draw from each definition, and how they might be related. What questions
do you have about how different definitions you’ve read fit together? Do the definitions make
clear why energy is such an important part of our world today? If not, how could they?
4. Change. Can you develop a definition of energy that brings together what you know, expert
knowledge, and energy’s importance in life? Now move through steps 1-3 again to develop
your own working definition of thermodynamics. Why do you think most engineering courses
and textbooks are called engineering thermodynamics and not energy engineering?
The previous module challenged you to develop your own definitions of energy and thermo-
dynamics. It utilized a four-step process that included engagement, analysis, reflection, and some
action for change. This process is somewhat different from the typical engineering design process, or
engineering problem solving processes that would be commonly used in a thermodynamics course.
What is the basis for this learning process, and why is it being employed here? The next module
provides an opportunity to explore the development of the process and the educational theory behind
it, contrasted with more familiar approaches to teaching and learning. Some of the learning activities
in the next module involve theatre techniques that build an embodied knowledge of what it means
to learn using different pedagogies. While it may be outside the experience of many students, and
instructors may be apprehensive about incorporating acting into a course, creating opportunities to do
the unexpected, even if it means leaving one’s comfort zone, can lead to breakthrough insights not
achievable in routine and familiar settings.
1.2 MODULE 1.2. PEDAGOGY: HOW TO LEARN USING THIS
BOOK
[ABET outcome icon: (i) lifelong learning]
You may have noticed that this book is written for students, yet these modules read a bit like
something that might be considered a lesson plan. This transparency is intentional. The book is
based on a set of learning techniques that have come to be known by labels such as critical pedagogy,
feminist pedagogy, or liberative pedagogy. [1] It is based on the following principles:
The point is not only to understand the world, but also to change it. [2] The study of energy should
begin with real-world problems, addressing what matters now. Theory is explored as it relates to
these real needs of people. Then those ideas are put to work in communities, and the experiences
of communities contribute to new theories, and so on in a continuing conversation.
“No education is politically neutral.” [3] In engineering in particular we tend to think we are
just learning the facts about science and technology, and we don’t often notice the ways in
which what we learn has a political bent. We therefore need to ask “Who benefits? Who
loses? Who isn’t even at the table?” We not only ask this of the syllabus, of the text, of energy
research agendas and energy company portfolios, but also we need to ask this of ourselves in the
classroom. For example, thermodynamics textbooks focus centrally on the contributions of 19th
century European males. A broader examination of history reveals important contributions to
thermodynamics from every continent East to West, by men as well as women, of all races
and ethnicities. In diverse classrooms, one can no longer assume a common base of knowledge
acquired through a common cultural or social background, and one can no longer take a
“one size fits all” approach to education. Instead, offering diverse learning opportunities and
multiple points of access can build a strong foundation for everyone to share their strengths
and learn from each other.
Power relations are everywhere and the classroom is a perfect place to learn how power relations
work and how to resist unjust power relations. For example, we want to challenge the idea that
the professor knows everything, the students know nothing, and the professor makes deposits
in students’ brains, which are hopefully retained and regurgitated later [4]. We want to explore
what the opposite of this might be. Disrupting classroom hierarchies is an aspiration; although
many forces work to resist this, working toward this goal is itself meaningful.
Student responsibility for learning. If we take this project seriously, and students have more
power in the classroom, it means more responsibility, and more work for you the student. But
the work is different from endless grueling late-night problem sets. It is, or should be, work
that matters – to you, and to your community, however that is defined in a given situation.
A lot of things will be more open-ended than you are used to. There will not always be a
single right answer, or a single right approach to a problem. You will wonder what it is you are
expected to do. The way to approach such things is to try a little, see what happens, reflect on
that in order to learn from it, and maybe try something else, and so on.
One of the hardest things to put into a book form is the centrality of relationships to this kind
of learning. This approach challenges the primacy of individualism – so you want to learn not
only independently, but also interdependently in a community of scholars. In my classes, this
means I am accessible to students, and they work with each other a lot. It is up to you to ensure
that this element is present in your learning. It is absolutely at the heart of these pedagogies.
The phrase “pedagogies of liberation” has caused some critics to ask what students are supposed
to be liberated from (or to), and to challenge masculinist assumptions in the language of
liberation [5]. However, bell hooks [3] has suggested that liberatory language has resonance
in particular for some women of color, and takes on the phrase “education as the practice of
freedom” as a central goal of her pedagogy.
My students are sometimes anxious about this shift in the classroom in terms of both content
and pedagogy. Year after year I am convinced by the results – students with deep understanding and
confidence in their knowledge, and abilities that endure, as seniors return from their Fundamentals
of Engineering exam and report that “I rocked the thermo part.”
The following two explorations employ techniques of an approach closely related to liberative
pedagogies known as Theatre of the Oppressed [6, 7]. Developed by Brazilian theatre practitioner
and educator Augusto Boal in the 1960s, this set of practices explores relevant topics in an embodied
way with “spect-actors” who participate in generating the performance, providing opportunities for
individuals and groups to create, visualize, and live out scenarios for personal or social transformation.
Using these methods, you will act out your understandings of traditional and critical pedagogies and
explore what they might mean in engineering education.
1.2.1 EXPLORATION 1: PRINCIPLES OF CRITICAL PEDAGOGIES
1. Engage. Design a classroom experience in which learning is minimized. What does it look
like? Act it out in a skit that illustrates your vision. 5 min. to brainstorm, 15 min. to plan,
10 min. for a couple of performances.
2. Analyze. Discuss the performances. What did you learn about effective pedagogies from
viewing and/or acting out their opposite?
3. Reflect. What does it mean to learn? What is the goal of education?
4. Change. What would need to change about engineering education to fully apply the principles
presented and that you’ve developed here? What are the primary obstacles to achieving these
goals, and how might they be overcome?
Figure 1.3: Learning process (engage → analyze → reflect → change).
1.2.2 EXPLORATION 2: MODELS OF LEARNING
1. Engage. Read the tableaus on the following pages. Assign one to each of three groups to act
out for the others.
Each group will create a frieze, or collection of actors “frozen” in the midst of a particular
action or relationship, that illustrates each model of learning. Choose one person from each
group who can narrate the scene and explain it to the others.
2. Analyze. What is similar about each of the three friezes? What are the differences among
them? What does it mean to learn in each of these models?
3. Reflect. What questions arise for you in thinking about these different learning models? How
does comparing the models change your own thinking about what learning means to you, or
what learning might mean to society?
4. Change. What would you change about each scene to align it better with the principles of
critical pedagogy as you understand them? What does this imply about what needs to change
in engineering education?
The modules in this book utilize the liberative learning processes explored here in order to
help students explore some of the big questions about energy that have relevance to all of our lives.
The rest of this chapter will explore world energy needs, uses, and policies, and then return to the
question of what engineers need to know to work effectively in these contexts both now and in the
future.
Tableau 1: Education as Usual. Excerpt from Paulo Freire, Pedagogy of the Oppressed [4], describing the "banking" concept of education, in which the teacher narrates and makes deposits of knowledge that students patiently receive, memorize, and repeat.
Tableau 2: Learner-Centered Education. Excerpt from National Research Council, How People Learn [8:167], a classroom episode in which the teacher builds on students' own strategies for a basic multiplication problem.
1.2. MODULE 1.2. PEDAGOGY: HOW TO LEARN USING THIS BOOK 19
Tableau 3: Liberative Learning
[In January 2002 Mary Cowhey read Click, Clack, Moo: Cows That Type by Doreen Cronin to her first grade class. She describes the results of teaching with liberative pedagogies in this excerpt from her book, Black Ants and Buddhists: Thinking Critically and Teaching Differently in the Primary Grades [9] (Portland, ME: Stenhouse Publishers, 2006), pp. 82, 84.]
In the story, some cows find an old manual typewriter in their barn and teach themselves how to type. They type a letter to the farmer, asking for electric blankets because the barn is cold. The farmer refuses, so they write another note telling the farmer they are on strike until they get their electric blankets. When the farmer refuses, the chickens join the strike, refusing to lay eggs until they and the cows get electric blankets.
My first graders had a lively discussion about demands, strikes, allies, negotiations, and solidarity. As the children made the transition to snack time, I told my new student teacher, "You have to be careful when you read a book like this in class." As I often do, I sat down at a table of students to have my snack. They were excited, talking among themselves about the idea of going on strike to demand more recess at school. I detected a note of nervousness among them, about whether they should keep this plot secret from me. Perhaps figuring their cover was already blown, they decided to ask my advice. David, an outspoken boy, asked me loudly, "Would that work, Ms. Cowhey? Can kids strike?" I said that was a good question, and thought about it. I told them about 15,000 South African students in Soweto who went on strike in 1976, how they refused to attend classes and demonstrated to protest having to learn Afrikaans, the language of the White minority that ran their country. The South African police fired without warning, killing and wounding many children.
Curtis looked over at a poster of our pen pals at a rural school in South Africa. He said, "I think our pen pals would think we were crazy. They have to pay money to go to a school with hardly any books and no toilet, and we get to go to school for free. I'd be embarrassed to tell them we did it." John added, "I think kids in Afghanistan really want to go to school too, and they don't have any." Another student agreed, and David reconsidered, saying that maybe getting more play time wasn't such a great reason to strike.
A shy, thoughtful boy named Allan had been sitting quietly at the end of the table throughout this animated exchange. Very quietly, with his cracker near his mouth, he said, "Maybe we could do a little to stop the war." David yelled, "What do you mean, stop the war?" Still looking intently at his cracker, Allan said softly but clearly, "Maybe kids could go on strike to stop the war in Afghanistan." That took my breath away. In this brief dialog, these first graders moved from a perspective oriented toward their own desire for play and pleasure to a consideration of real political reasons that people, including children, might strike.
1.3 MODULE 1.3. US AND WORLD ENERGY NEEDS AND
USES
Most engineering thermodynamics books do not include much information about the overall US
and World energy landscape.
Outcomes: a (SEM knowledge), e (problems), g (communication), h (context), j (contemporary issues).
There are entire books and courses on this topic [10], but undergraduate engineers rarely encounter
this material. Knowing that information changes rapidly, this module presents some basics about
where things stand now and, most important, where you might go for updated information.
We might begin by asking how much energy people need. Basic uses might include cooking,
heating and space conditioning, lighting, and food storage. Providing clean water and sanitation
systems consumes energy. Transportation of goods and people requires energy. Energy powers in-
dustrial and agricultural processes, including the production of building materials for shelter. Energy
is further required for communication and commerce. The energy used for these activities can come
from a number of different sources, and the amount of energy consumed varies widely. Energy analyst
Amulya Reddy argues that rather than focusing on the sources and quantifying energy supply, we
should think in terms of characterizing the demand for particular services that energy provides [11],
and on meeting consumer requirements that energy be accessible, affordable, reliable, safe, of high
quality, and ecological.
The three explorations that follow address the issue of energy needs from different angles.
First, we consider current energy use in different nations around the world, and consider why the
United States is a disproportionate consumer, even among highly industrialized nations. Next, we
take up the relationships among energy, poverty and gender inequality, critically questioning the
conventional wisdom about the role of energy in development. Finally, we turn to the question of
how much energy people need with a personal challenge to students in industrialized nations to live
on one kilowatt per capita.
1.3.1 EXPLORATION 1: ENERGY USE
1. Engage. Find reliable sources of information on energy use in the United States and
Worldwide. Places to start include the International Energy Agency [12], (http://
www.iea.org/textbase/nppdf/free/2009/key_stats_2009.pdf),
the World Re-
sources Institute [13], (http://earthtrends.wri.org/searchable_db/index.php?
theme=6), and the BP Statistical Review of World Energy [14] (http://www.bp.com/
statisticalreview).
1.3. MODULE 1.3. US AND WORLD ENERGY NEEDS AND USES 21
Engage: Find reliable sources of information on energy use in the US and worldwide. Analyze: Find or create useful representation of energy use. Reflect: What story do your data tell? Why is US energy use so much higher even than other industrialized nations? Change: What are three best ways for the US to reduce energy use? Quantify your recommendations. Try to reduce to levels in Europe or Japan.
2. Analyze. Find or create useful representations of energy use (for example, pie charts on uses
in different countries; bar chart of energy use per capita in different countries). Think about
the units presented in the reports and reconcile them for your best presentation; do different
sources of data suggest very different usages? Think about why that might be, and dig for more
information about their assumptions and exactly what they are measuring or estimating.
3. Reflect. What is the story that your data tell? Have you formatted your presentation to tell
the story best? Compare US energy use to that of other nations, including industrialized
and developing nations. Why is US energy use so much higher, even when compared with
industrialized nations like Japan or Germany? Is this justifiable? Why or why not? What
responsibilities or duties fall to engineers to address this imbalance?
4. Change. What are the best opportunities the US has for reducing energy consumption? Find
research that makes recommendations on this topic, and synthesize the findings of several
authors to arrive at your own recommendations. To start you off, try Lester Lave’s article [15]
here: http://www.nae.edu/File.aspx?id=14867.
1.3.2 EXPLORATION 2: WOMEN, POVERTY, AND ENERGY
1. Engage. Can we critically examine the connections between women, poverty, and energy?
Read Reddy’s [11] chapter on women, poverty and energy: http://manowar.ma.ohost.
de/UNWEa/chapter2.pdf. How does he connect poverty and energy in developing nations?
How does gender play a role? How is the story the same or different in industrialized countries?
Reddy argues that energy needs to be given serious consideration in development plans. What
roles can energy play in development, according to Reddy?
Engage: What relationship does Reddy lay out among gender, energy, and poverty in the developing world and in industrialized nations? Analyze: Critique the argument that energy development plays a critical role in both poverty eradication and achieving gender equality. Reflect: Consider the energy-poverty nexus in the Chicago Heat Wave of 1995. What do engineers need to know about poverty for ethical practice? Change: How will you gain knowledge about poverty and incorporate it into your professional practice?
Figure 1.4: Woman with improved cookstove. Accessed June 2 from http://commons.
wikimedia.org/wiki/File:Cameroon_2005_-_cooking_woman.jpg. Creative commons license
2.0 TreesForTheFuture, originally posted to Flickr as Cameroon2005.
2. Analyze. Engineers may jump to the conclusion that energy development will end poverty
and benefit women – a win-win and a moral imperative that calls for immediate involvement.
But the realities of the situation are far more complex. First, can energy development actually
end poverty or improve the status of women, as Reddy argues? It may be helpful to com-
pare Reddy's argument with writing on gender and development such as Naila Kabeer's paper
on gender and poverty eradication: http://www.unescap.org/esid/gad/Publication/
DiscussionPapers/13/Paper13.pdf. [16] What are the problems with reducing complex
issues such as poverty or gender inequality to a single technical issue such as energy? What is
the significance of Reddy’s discussion of productivity and women’s involvement in producing
energy, as compared with a more traditional development model in which energy is provided
for consumption by a community from outside? What do you make of the image presented
here of a woman with a cookstove, from this perspective? Is she/he/ze producing or consuming
energy? What opportunities does this use present for development or for poverty eradication?
3. Reflect. Consider the energy-poverty nexus in the case study of the Chicago Heat Wave
of 1995: http://www.slate.com/id/2125572/. [17] What is the role of engineers in
preventing such disasters? The author makes a connection to Hurricane Katrina, in which
engineers also played a significant role. What do engineers need to know about poverty? How
should they consider poverty in responsible professional practice?
4. Change. What will you do to acquire knowledge about poverty and other critical social is-
sues that surround your areas of expertise as an engineer? How will you incorporate these
considerations in your professional practice?
1.3.3 EXPLORATION 3: 1 KW PER CAPITA?
In the 1980s, a group of development energy experts proposed that the world could meet basic needs
and in fact reach the standard of living of 1970s Europe on 1 kW per capita – provided optimal
use of available efficient technologies [18] (http://www.jstor.org/pss/4313148). Rather than
focusing on developing nations’ energy development goals, these experts present a daunting challenge
to residents of developed countries and particularly energy-intensive nations like the United States,
which uses on the order of 10 kW per capita [12, 13]. The thought experiment that follows is
aspirational and hopefully will also prove inspirational. What would it take for you to live on 1 kW?
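Before starting, it helps to be explicit that 1 kW is a rate of energy use, not an amount of energy. A minimal sketch of the unit conversions is below; the 10 kW US figure is the one quoted above, and the space-heater example is an illustrative assumption, not data from the references.

# Sketch: what a continuous 1 kW per-capita budget means in everyday units.
# The appliance example is illustrative only.

HOURS_PER_DAY = 24
DAYS_PER_YEAR = 365

budget_kw = 1.0                 # proposed per-capita power budget (a rate)
us_average_kw = 10.0            # approximate US per-capita figure cited above

budget_kwh_per_day = budget_kw * HOURS_PER_DAY             # 24 kWh each day
budget_kwh_per_year = budget_kwh_per_day * DAYS_PER_YEAR   # about 8760 kWh/year

print(f"1 kW sustained = {budget_kwh_per_day:.0f} kWh/day "
      f"= {budget_kwh_per_year:.0f} kWh/year")
print(f"US average is roughly {us_average_kw / budget_kw:.0f}x the 1 kW target")

# Example: a 1.5 kW space heater running 8 hours uses 12 kWh,
# i.e., half of the entire day's allowance.
heater_kwh = 1.5 * 8
print(f"1.5 kW heater for 8 h: {heater_kwh:.0f} kWh "
      f"({heater_kwh / budget_kwh_per_day:.0%} of the daily budget)")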
Engage: Track your energy consumption for a week and try to live on 1 kW. Analyze: Tracking your energy requires understanding where your energy comes from and efficiencies in its production. Reflect: What would you have to change about your lifestyle to live on 1 kW? What things do you consider basic needs? Change: How can you reduce your energy use to come closer to 1 kW? How does this relate to your goals as an engineer?
Figure 1.5: José Goldemberg, Brazilian physicist and educator, put forward the 1 kW per capita con-
cept in 1985. Accessed June 2 from http://www.sect.am.gov.br/arquivos/imagens/noticias/
20110117150231josegoldemberg.jpg.
1. Engage. Keep a journal or blog for a week or more. Can you live on 1 kW? (Hint: begin
by thinking through the units here and be sure you understand where time comes into this
picture.)
2. Analyze. Structure your analysis – think about energy services including transportation, light-
ing, refrigeration, computing, cooking, heating, and food manufacturing, preparation and con-
sumption. How do one-time large energy expenditures, such as jet travel, impact your analysis?
How do you estimate energy inputs for items you consume such as books, food packaging,
clothing, etc.?
3. Reflect. What would you have to change about your lifestyle or about the infrastructure of
society to live on 1 kW? What things do you consider basic needs? How would you evaluate
the 1 kW proposal in terms of ethics or justice?
4. Change. Develop a plan to reduce your energy use. How close can you realistically come to
1 kW now? What structural changes can you work for in the future to help you get even closer?
How does this relate to your goals as an engineer?
Having explored energy needs and uses and their relationship to development, we next take
up questions of national and international policy. Given these contexts of energy needs, and energy
use and over-use, how do governments and multilateral institutions make decisions about energy?
1.4 MODULE 1.4. US AND WORLD ENERGY POLICIES:
WHAT ARE THE ISSUES?
To comprehensively study a nation’s energy policy, or international energy policies, would require
another volume, and another course.
Outcomes: f (ethics), h (context), j (contemporary issues).
But engineers need to understand the national and global contexts in which we work in order to
design technology in an informed way. Specific global and US energy policy questions are visited
throughout this book in an attempt to connect critical policy issues with the existing curriculum
in engineering thermodynamics. Here we explore the big picture briefly to motivate continued
discussion of energy policy issues in a thermodynamics course.
Governments must decide how best to build, maintain, and retool energy infrastructures for
economic development (in both developing and developed nations). Energy cannot be considered
in isolation because of its relationship to basic human needs, the economy, as well as to peace
and security, generation of pollution, and global climate change. Traditionally, engineers respond
to national priorities set by others. What role could engineers play nationally and internationally
to inform the setting of these priorities? Is this desirable? Would engineers’ involvement in such
questions represent a conflict of interest? Would engineers tend to support the status quo in order
to maintain job security?
As industrially developing nations are currently planning to meet energy needs for economic
growth, they are poised to make critical and far-reaching decisions for developing new energy
infrastructures. This is an exciting time for engineers, offering a teaching moment for students to
consider the impacts of technology, global economic inequality, and the importance of forward-
thinking design.
While some of the modules found later in this book tackle questions of energy technology
selection and development of renewable energy sources, government decision making is often not as
straightforward as choosing and pursuing particular technologies. The two case studies that follow
illustrate the complex contexts in which governments approach energy issues. In the first case,
countries in the global South look to wealthier nations for commitments on carbon reduction and
renewable energy development, which are largely resisted in the North. In the second case, the
United States secures energy resources through costly military conflict.
1.4.1 EXPLORATION 1: COPENHAGEN
Engage: Read about the 2009 Copenhagen summit here and in other sources you can find. Analyze: Were the actions taken by the nations at the Copenhagen summit ethical? Examine this from a number of ethical and stakeholder perspectives. Reflect: Why was the United States particularly reluctant to agree to greenhouse gas emission reductions? Change: What can you do to achieve greenhouse gas emission reductions on your campus, in your community, and at the state and federal levels?
Figure 1.6: Global Day of Action for Climate, mass demonstration and march at the Copenhagen
COP15 Climate Summit, December 12, 2009. Photo from Greenpeace Finland/Lauri Myllyvirta. Used
under Creative Commons 2.0 license. Accessed June 2, 2011 from http://commons.wikimedia.org/
wiki/File:Global_day_of_action.jpg.
1. Engage. Read the following information about the Copenhagen Climate Summit [19], and
seek out other sources on this meeting and subsequent and prior international climate meetings.
Climate scientists have created models that predict numerous changes in climate caused by
increased atmospheric concentrations of greenhouse gases, including carbon dioxide. While
these changes are expected to be widespread across the planet, some changes will be more
damaging than others, and some places will be hit harder than others. Many scenarios predict
significant damage in the global South, where many people and their governments lack the
resources needed to adapt to climate change. In 2009, at the Copenhagen climate summit,
Lumumba Di-Aping, chair of the G77 group of developing nations, declared a 2 degree rise
in global temperature a “suicide pact” for Africa.
At Copenhagen as at both previous and subsequent international meetings on climate change,
the South sought leadership from the global North in curbing emissions and in offering
economic development funds for building renewable energy infrastructure. Wealthy nations
committed to less than half the emissions cuts needed, and declined to offer development
funds for renewable energy infrastructure. Europe offered to cut 20% by 2020, and the US 4%
by 2020. 60% reductions are required in order to avoid a 2 degree temperature rise.
Economics is a primary rationale for lack of action on climate change. The conventional
perspective in the US and global North is that cheap energy derived from fossil fuels forms
the basis for economic activity worldwide and must be maintained in order to compete with
emerging economies such as China and India. Many governments in the global South view
energy as essential to development, and want to have the chance northern countries did to
use cheap energy to develop their economies. They argue this is necessary to lift people out of
poverty, meet basic human needs, and to overcome the long history of colonialism that led to
today’s global economic inequalities.
2. Analyze the ethics of the Copenhagen agreement (or lack thereof ) from a variety of philo-
sophical standpoints and stakeholder perspectives. What duties or responsibilities do nations
have to one another and to the planet, or the global North to the global South and vice versa?
What would the principle of justice require of nations at this summit? What rights apply to
nations in this context, and how are these balanced against responsibilities to the international
community? As India, China and other nations develop energy infrastructure, they are draw-
ing on their strengths, using resources available in country where possible – ought they do
otherwise? Who should decide?
3. Reflect. Why is the United States particularly reluctant to cooperate with climate agreements,
more so than other wealthy nations? Why are the US reductions so small compared to Europe,
especially when Europe’s energy consumption and climate emissions are already so much lower
on a per capita basis?
4. Change. What level of greenhouse gas reductions would you like to see your country achieve?
Has your campus complied with this level of reduction? Has your local community? Where
might a city begin to take action that would lead to a significant reduction in greenhouse gas
emissions? How does this scale up to larger groups of people? Take some action to expand the
scope of reductions on a state, national, or international level.
1.4.2 EXPLORATION 2: THE COST OF ENERGY [20]
1. Engage. At the time of writing, the National Priorities Project estimates the cost of war in Iraq and Afghanistan since 2001 at $1.26 trillion and total Defense and
Engage: Gather data on the cost of war to secure oil resources, the volume and cost of oil imports, and gasoline use in the United States. Analyze: If monetary costs of war were paid for through a gas tax at the pump, how much more would we pay per gallon? Reflect: What about the non-monetary costs of war? How else have we paid for oil that isn't reflected in dollars per gallon? Change: Propose some ways to reduce the price we pay for energy. Estimate their impact.
Figure 1.7: Rumaylah Oil Fields, Iraq (April 2, 2003) – A US Army soldier stands guard duty near a
burning oil well in the Rumaylah Oil Fields. US Navy photo by Photographer’s Mate 1st Class Arlo
K. Abrahamson. Public Domain. http://upload.wikimedia.org/wikipedia/commons/9/99/
US_Navy_030402-N-5362A-010_A_US_Army_soldier_stands_guard_duty_near_a_burning_
oil_well_in_the_Rumaylah_Oil_Fields.jpg.
Homeland Security spending at $7.6 trillion [21]. According to the US Energy Information Administration, the US imported about 9 million barrels of crude oil per day in 2010 [22] (http://www.eia.gov/pub/oil_gas/petroleum/data_publications/company_level_imports/current/import.html), at an average price of about $78/barrel [23] (http://www.eia.gov/dnav/pet/hist/LeafHandler.ashx?n=PET&s=WTOTWORLD&f=W). The trade group NACS estimates that the US consumes about 9 million barrels/day of gasoline and diesel fuel for highway use [24]. (A little more than half of a barrel of crude oil is processed into gasoline.) (http://www.nacsonline.com/NACS/Resources/campaigns/GasPrices_2011/Documents/GasPriceKit2011.pdf) Can you confirm and/or update this information?
2. Analyze. If the war costs were paid for by a gasoline tax instead of income tax, how much
more would we pay per gallon of gas for “securing” oil in the Middle East? If our total Defense
and Homeland Security budget were paid for with a gasoline tax, how much more would we pay per gallon? One might rightly observe that not all of the Homeland Security and Defense budget is related to Middle East conflict. Using resource [21] and other sources, estimate what percentage is related, and calculate the cost per gallon of gas. (A rough numerical setup for this estimate and the one in item 4 is sketched after item 4 below.)
3. Reflect. The monetary costs of war do not begin to account for the total costs. What are the
costs of war beyond dollars and cents? How else have American taxpayers paid for this access to
oil in the region? What have been the non-monetary costs of US Homeland Security efforts?
4. Change. If each of the 200 million cars and light trucks in the US were traded in for replacement
vehicles that saved 10 mpg, by what fraction would this reduce US oil imports? Each vehicle
drives 12000 miles per year, on average. What other proposals can you make that would reduce
US dependence on oil and military involvement to secure these resources in other parts of the
world?
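To make the arithmetic in items 2 and 4 concrete, here is a minimal sketch of both estimates in Python. The war-cost, import, and highway-fuel figures are the ones quoted in item 1; the ten-year span, the baseline fleet fuel economy, and the fractions of security spending attributed to oil are assumptions you should revisit and defend with your own sources.

# Rough sketch of the estimates in items 2 and 4. Inputs marked "assumed"
# are placeholders to revisit; the rest are the figures quoted in item 1.

GALLONS_PER_BARREL = 42

# Item 2: recovering war costs through a fuel tax.
war_cost_dollars = 1.26e12              # Iraq/Afghanistan cost since 2001
years = 10                              # assumed span, roughly 2001-2011
highway_fuel_barrels_per_day = 9e6      # gasoline + diesel, NACS estimate

gallons_per_year = highway_fuel_barrels_per_day * GALLONS_PER_BARREL * 365
war_tax_per_gallon = (war_cost_dollars / years) / gallons_per_year
print(f"War costs alone: about ${war_tax_per_gallon:.2f} extra per gallon")

security_cost_per_year = 7.6e12 / years      # total Defense + Homeland Security
for fraction in (0.1, 0.25, 0.5):            # assumed share related to securing oil
    extra = fraction * security_cost_per_year / gallons_per_year
    print(f"{fraction:.0%} of security spending: about ${extra:.2f} per gallon")

# Item 4: every car/light truck traded for one that saves 10 mpg.
vehicles = 200e6
miles_per_year = 12_000
baseline_mpg = 20.0                          # assumed fleet average; vary this
gallons_saved = vehicles * miles_per_year * (1 / baseline_mpg - 1 / (baseline_mpg + 10))

gasoline_gal_per_barrel = 0.5 * GALLONS_PER_BARREL   # "a little more than half"
crude_barrels_saved = gallons_saved / gasoline_gal_per_barrel
import_barrels_per_year = 9e6 * 365                  # 2010 crude imports
print(f"Fuel saved: {gallons_saved:.2e} gal/yr, "
      f"roughly {crude_barrels_saved / import_barrels_per_year:.0%} of 2010 crude imports")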
The brief exploration of energy needs and energy policy in this chapter may feel unsettled or
unsettling. How do we decide what we believe the relationships are between national security and
energy? How do we know how much energy would meet basic human needs, or how best to achieve
that? Many engineers retreat from these kinds of uncertainties into seemingly politically neutral
equations and facts. Some feel that technology is a refuge where at least you can calculate things and
know them for certain. Some even think that once you understand the technology, it will point you
toward a correct policy solution.
However, these approaches leave out – or worse, dismiss as unimportant – political and social
realities that influence and are influenced by technology in countless ways. We will see throughout
this book as we explore the history of thermodynamics and its contemporary applications that what
we believe about energy, and what we believe is important to know or ask about energy, is influenced
by social factors in much the same way as larger global questions about energy.
The following module explores the question of what is important for engineers to know about
energy and how answers to this question are themselves shaped by forces of power in the profession
and in society.
1.5 MODULE 1.5. GETTING EDUCATION RIGHT FOR A
SUSTAINABLE ENERGY FUTURE
Twentieth century social theorist Michel Foucault wrote about how, even in science, there is a dual
relationship between power and knowledge.
Outcomes: h (context), i (lifelong learning).
What comes to be considered valid knowledge is laden with the decidedly political process of who
gets to decide what is true or untrue. In turn, knowledge is used in the interest of power and powerful
institutions. This is to be distinguished from the conventional Baconian belief that “knowledge is
power,” i.e., that coming into knowledge makes one powerful. Instead, Foucault posits that yes,
knowledge is power, but power is also knowledge. He explains the ways in which institutions – science, universities, government – play a role in validating knowledge.
Figure 1.8: Michel Foucault. http://www.msa.ac.uk/mac/Assets/Embedded%20Websites/Panopticon/Images/Michel_Foucault_Par23100007_130145833_std.jpg.
The following exploration is
intended to stimulate your critical thinking about your own learning, course and curriculum content,
and engineering in society. It lays the groundwork for connections made later to thermodynamic
theory, especially the Second Law.
1.5.1 EXPLORATION 1: POWER/KNOWLEDGE
1. Engage. First, read this excerpt from Foucault on Truth and Power in science [25]: http://www.scribd.com/doc/10262971/Foucault-Truth-and-Power-in-Power-Knowledge (pp. 131–133). It's important to acknowledge for those of you who
have not encountered Foucault before that his work may seem abstract at first, but is in fact
highly relevant when grounded in context. His writing is not linear or direct, and this is some-
what intentional. Derrida, Foucault’s contemporary, wrote about how language can impose
constraints upon what we are able to say, reflecting and perpetuating a certain kind of power
relations. Foucault consciously sought to challenge the power embedded in language. A wise
colleague in film studies once told me that reading Foucault is a bit like surfing – you ride the
wave for a while when you are in tune with his thoughts, but you do not have the same mind,
so you necessarily slip off the board – and that is the point. But it was a great ride, and you
can always get back on the board and go again [26].
Engage: Read Foucault's excerpt from "Truth and Power." Analyze: What is Foucault's regime of truth? How does science wield power in the construction of knowledge? Reflect: How have you seen power/knowledge dynamics operating in the world? Change: How can you determine what you believe, given the institutional power structures that influence what is presented as truth?
2. Analyze. Answer the following four questions:
a. What does Foucault mean by “a regime of truth” and how does this fit with his definition
of truth on p. 133?
b. Foucault is often characterized as saying that truth is relative, but he is saying rather that
truth is political. How are these two concepts different?
c. Foucault focuses on science as an important institution appointed as the arbiter of truth in
present day society. How does the institution of science wield power in the construction
of truth?
d. Foucault’s conception of power, which he writes on extensively elsewhere, is that power
is NOT one-way or top-down, but rather one of power relations in which power neces-
sarily produces resistance. How does this conception of power play out in the notion of
power/knowledge?
3. Reflect. Think of a concrete example from your experience that illustrates the dual relationship
between power and knowledge that Foucault discusses. Make sure the example shows not
just knowledge supporting power, but also ways in which power constructs knowledge. How
might power/knowledge manifest itself in a thermodynamics course, or in the engineering
curriculum? For example, who controls what you learn in thermodynamics, or what courses
you take in order to receive an engineering degree?
4. Change. How can you determine what you believe, given the institutional power structures
that influence what is presented as truth?
Given these power relations in engineering where accreditation and academic processes resist
changes to the accepted body of knowledge in the field, this book explicitly seeks to challenge the
engineering canon as it relates to thermodynamics and energy. It also seeks to encourage you as students to engage your power of resistance by taking responsibility for your own learning in a system that usually seems to remove a lot of student choice in order to adhere to a standard set of knowledge. In the
next exploration, you will identify what you think engineering students need to learn to be able to
work on energy issues, and compare that curriculum to your current education. It may be tempting
to resist the new ideas here and reinscribe traditional ideas about what belongs in a thermodynamics
class; try to keep an open mind as you deliberate on these questions.
1.5.2 EXPLORATION 2: WHAT DO CURRENT ENGINEERING STUDENTS
NEED TO LEARN TO BE ABLE TO WORK ON ENERGY ISSUES?
1. Engage. Make a list of what you think engineering students need to learn as undergraduates
in order to prepare to work on energy problems. Think about what kinds of technical skills,
professional skills, values or ethics, and ways of thinking will be essential in this work.
2. Analyze. Compare your list with what’s emphasized in your textbook, your course syllabus, your
engineering curriculum overall, and ABET’s accreditation criteria (see Introduction, Figure 1).
Refine your list if you come across items you’d like to add or take away.
Engage: Make a list of what you think engineering students need to learn today to work on tomorrow's energy issues. Analyze: How does this list match or not match with ABET criteria, your textbook's contents, and your engineering curriculum? Reflect: How does the content of your thermo text (curriculum, ABET criteria) reinforce certain energy choices? Change: What else do you need to learn, and where will you find it? Develop a strategy to learn what you need to work on today's energy problems.
3. Reflect. How does the content of your thermodynamics textbook (or course syllabus, or ac-
creditation criteria, etc.) reinforce certain energy choices? What material is not found in
thermodynamics? Is it found in other engineering courses, or courses outside of engineering
but required for the major, or is it not part of your curriculum at all?
4. Change. What else do you need to learn, and where will you find it? Develop a strategy to
learn what you need to know to work effectively on energy problems today and in the future.
What can you learn on your own, and where do you need to seek assistance and guidance from
a mentor?
REFERENCES
[1] Darder, A., Baltodano, M. P., and Torres, R.D. (2008). Critical Pedagogy Reader, 3rd ed. New
York: Routledge. Cited on page(s) 13
[2] Marx, K. [1845] (1976) Theses on Feuerbach. In K. Marx and F. Engels (Eds.), Collected Works
of Karl Marx and Friedrich Engels, 1845–47, Vol. 5: Theses on Feuerbach, The German Ideology
and Related Manuscripts. New York: International Publishers, p. 8. Cited on page(s) 13
[3] Hooks, B. (1994) Teaching to Transgress. New York: Routledge, p. 37. Cited on page(s) 13, 14
[4] Freire, P. (1970) Pedagogy of the Oppressed. Translated by Myra Bergman Ramos. New York:
Seabury Press. Cited on page(s) 13
[5] Luke, C. and Gore, J. (1992). Feminisms and Critical Pedagogy. New York: Routledge. Cited
on page(s) 14
[6] Boal, A. (1985). Theatre of the Oppressed. Translated by Charles A. and Maria-Odilia Leal
McBride. New York: Theatre Communications Group. Cited on page(s) 14
[7] Boal, A. (1992). Games for Actors and Non-Actors. Translated by Adrian Jackson. New York:
Routledge. Cited on page(s) 14
[8] National Research Council (2000). How People Learn: Brain, Mind, Experience and School.
Washington, DC: National Academy Press. Accessed June 10, 2011 from http://www.nap.
edu/openbook.php?record_id=9853. Cited on page(s)
[9] Cowhey, M. (2006). Black Ants and Buddhists: Thinking Critically and Teaching Differently in
the Primary Grades. Portland, ME: Stenhouse Publishers. Cited on page(s)
[10] See, e.g., Shepherd, W. and Shepherd, D.W. (2003). Energy Studies, 2nd ed. London: Imperial
College Press. Cited on page(s) 20
[11] Reddy, A.K.N. (2000). Energy and Social Issues. In World Energy Assessment: Energy and the
Challenge of Sustainability. New York: United Nations Development Program. Accessed June
10, 2011 from http://manowar.ma.ohost.de/UNWEa/chapter2.pdf. Cited on page(s)
20, 21
[12] International Energy Agency (2009). Key World Energy Statistics. Paris: IEA. Accessed June
10, 2011 from http://www.iea.org/textbase/nppdf/free/2009/key_stats_2009.
pdf. Cited on page(s) 20, 23
[13] World Resources Institute (2011). EarthTrends Energy and Resources Database. Washington,
DC: WRI. Accessed June 10, 2011 from http://earthtrends.wri.org/searchable_
db/index.php?theme=6. Cited on page(s) 20, 23
[14] British Petroleum (2011). Statistical Review of World Energy. London: British Petroleum.
Accessed June 10, 2011 from http://www.bp.com/statisticalreview. Cited on page(s)
20
[15] Lave, L. (2009). The Potential of Energy Efficiency: An Overview. The Bridge, 39(2): 5–
14. Accessed June 10, 2011 from http://www.nae.edu/File.aspx?id=14867. Cited on
page(s) 21
[16] Kabeer, N. (2003). Gender Equality, Poverty Eradication and the Millennium Development
Goals: Promoting Women’s Capabilities and Participation. Gender & Development Discussion
Paper Series No. 13, United Nations Economic and Social Commission for Asia and the Pacific.
Accessed September 17, 2011 from http://www.unescap.org/esid/gad/Publication/
DiscussionPapers/13/Paper13.pdf. Cited on page(s) 22
[17] Klinenberg, E. (2005). When Chicago Baked: Unheeded lessons from another great urban
catastrophe. Slate, September 2, 2005. Accessed September 17, 2011 from http://www.
slate.com/id/2125572/. Cited on page(s) 22
[18] Goldemberg, J., Johansson, T.B., Reddy, A.K.N., and Williams, R.H. (1985). Basic Needs and
Much More with One Kilowatt per Capita. Ambio, 14(4/5): 190–200. Accessed June 10, 2011
from http://www.jstor.org/pss/4313148. Cited on page(s) 23
[19] Livingstone, K. Copenhagen talks show south-north divide is alive, well, and ever-more
polluting. Progressive London, Dec. 16, 2009. Accessed June 6, 2011 from http://
www.progressivelondon.org.uk/blog/copenhagen-talks-show-north-south-
divide-is-alive-well-and-ever-more-polluting.html. Cited on page(s) 25
[20] Adapted from an assignment Frank von Hippel gave to Princeton University students in his
course on Science, Technology, and Policy in the 1990s. Cited
on page(s) vii, 26
[21] National Priorities Project. (2011). US Security Spending since 9/11. May 26, 2011. Ac-
cessed June 7, 2011 from http://nationalpriorities.org/en/publications/2011/
us-security-spending-since-911/. Cited on page(s) 27, 28
[22] US Energy Information Administration (2011). Crude Oil and Total Petroleum Imports.
March 2011 Import Highlights. Accessed June 10, 2011 from http://www.eia.gov/
pub/oil_gas/petroleum/data_publications/company_level_imports/current/
import.html. Cited on page(s) 27
[23] US Energy Information Administration (2011). Petroleum and Other Liquids. Accessed June
10, 2011 from http://www.eia.gov/dnav/pet/hist/LeafHandler.ashx?n=PET&s=
WTOTWORLD&f=W. Cited on page(s) 27
[24] NACS (2011). Fueling America: Key Facts and Figures. Accessed June 10, 2011
from http://www.nacsonline.com/NACS/Resources/campaigns/GasPrices_2011/
Documents/GasPriceKit2011.pdf Cited on page(s) 27
[25] Foucault, M. (1980) Truth and Power. In: C. Gordon (Ed.) Power/Knowledge: Selected Inter-
views and Other Writings 1972–1977. New York: Pantheon, 131–133. Cited on page(s) 29
[26] Keller, A. (2005) Comments at Liberative Pedagogies workshop, used with permission. Cited
on page(s) 29
CHAPTER 2
The First Law: Making Theory
Relevant
The First Law of Thermodynamics, the idea that energy cannot be created nor destroyed, but is
converted from one form to another, is familiar to many of us from our experience of the world.
While the formal study of energy can quickly become abstract, the modules in this chapter are
designed to keep our explorations rooted in topics that resonate with our lives.
The first module explores the First Law in its historical context. This can make the material
more accessible because its presentation follows the arc of scientific discovery. The histories raise
many questions about the practice of science: Who gets to do science? Why are the Western European
discoveries the ones that became enshrined in our textbooks? What are some alternatives? What
does it mean for us doing science today that individuals who “had the science wrong” in that they
subscribed to now-debunked ideas like the caloric theory or the animal theory of heat nevertheless
made bold contributions to science?
In the following three modules we consider applications driven by particular needs to provide
real-world context for exploring the First Law in an open-ended way. Notably, each of these questions
or needs is occurring (more or less) outside of a for-profit context and outside of military applications,
which are the more common focuses in engineering. These modules raise the question of what is
considered to be within or outside the bounds of the engineering discipline, and why.
A fifth and final application allows you to choose your own setting for application of the First
Law, perhaps something that is interesting and relevant to your life, or something you are curious
about.
Module 2.1: The First Law in Historical Context.
Module 2.2: Technology Selection for Energy Independence.
Module 2.3: Evaporative Cooling.
Module 2.4: Hunger, Poverty, and Obesity.
Module 2.5: Thermo to Life.
2.1 MODULE 2.1. LEARNING FROM HISTORY
Outcomes: g (communication), h (context), i (lifelong learning).
While your engineering thermodynamics textbook may present thermodynamic theories as abstract
principles and laws of nature, the particular ways in which these laws were discovered and articulated
in history are rich and fascinating stories that can deepen our understanding and appreciation for
thermodynamics. Different expressions of the laws of thermodynamics are grounded in particular
historical times and places. Specifically, thermodynamics texts tend to rely on discoveries in Germany,
England, and France in the 18th and 19th centuries. These stories (and the fact that these are the
stories) tell us much about the process of science and the development of scientific knowledge, but
the histories of thermodynamics from other times and places also deserve our attention. Therefore,
the first exploration below considers the development of the First Law in Europe, while the second
poses an opportunity to uncover other histories of thermodynamic discovery in other times and
places.
2.1.1 EXPLORATION 1: FIRST LAW IN WESTERN EUROPE
Engage: Uncover histories of the development of the First Law in Western Europe. Analyze: Write a biography that places the contributions of those who developed the First and Second Laws in the context of their lives and times. Reflect: What does this analysis tell us about the political process of the production of scientific knowledge? Change: Where do we identify social privilege in the practice of science today, and how can we work for change?
Figure 2.1: Sadi Carnot. Retrieved June 3, 2011 from http://upload.wikimedia.org/wikipedia/
commons/8/80/Sadi_Carnot.jpeg, Public domain.
1. Engage. Locate histories of the development of the First Law of thermodynamics in Western
Europe. A good source is Hans Christian Von Baeyer’s book, Warmth Disperses and Time Passes:
A history of heat. [1] He states that there were 12 different individuals who contributed to the
discovery of the First Law, and discusses in detail the lives and work of Julius Mayer, Count
Rumford, and James Joule, among others.
2. Analyze. Choose one or more individuals to profile. Read their publications if available.
For example, you can find Joule’s “On the Mechanical Equivalent of Heat”[2] at http://
www.chemteam.info/Chem-History/Joule-Heat-1845.html and Rumford’s “Heat is
a Form of Motion”[3] at http://www.chemteam.info/Chem-History/Rumford-1798.
html. Learn what you can about their lives. Write a short biography that provides context for
their discoveries and explains their contributions. What enabled them to do the work they
did? What did they know? What gaps in knowledge were they able to close, and what gaps
remained?
3. Reflect. What does it mean that the laws of thermodynamics are not attributable to a single
person, as Newton’s or Maxwell’s laws are? What was accepted as “truth” by the scientific
establishment, and why were so many of the ideas surrounding the First and Second Laws
initially rejected? What does it mean to be able to hold some ideas that have since been proven
wrong, but still make a valid contribution to science? How did social privilege (gender, class,
race) influence who was able to do science, and/or whose contributions are recognized today?
What did people need to do science then? What is needed now?
4. Change. What do you take away from this for your own life doing science? How does social
privilege persist in what you need today to engage in science? What can you do to address this
problem?
2.1.2 EXPLORATION 2: DE-CENTERING WESTERN THERMO
1. Engage. Locate histories of the First Law (or thermodynamics more generally) outside of
Western Europe. Think broadly as you select keywords for your search; histories of technology
may be especially fruitful [4]–[8]. How did technologies make use of energy conservation and
conversion, and how were these principles understood and discussed?
2. Analyze. Write a narrative about one or more contributions and explain the context of its
development. Who are the main actors? What enabled them to do the work they did? What
did they know? What gaps in knowledge were they able to close, and what gaps remained?
3. Reflect. How did you identify non-Western contributions to thermo? What does this tell us
about what counts (and what ought to count) as science, or as scientific theory?
4. Change. How can non-Western contributions be made more visible in thermodynamics?
Where else can you identify similar biases in your education, or your life? What can you do
about them?
While the histories of thermodynamic discovery are indeed dynamic and revealing, contem-
porary conversations may also capture readers’ imaginations. The next module therefore takes up
contemporary conversations around energy independence in order to explore further the usefulness
and relevance of the First Law.
Engage: Uncover non-Western conceptions of/contributions to thermodynamics. Analyze: Write a narrative about one contribution that explains its context and development. Reflect: How did you identify non-Western thermo? What does this tell us about what counts (and what should count) as science or scientific theory? Change: How can non-Western contributions be made more visible in thermo? In other areas of your education or your life?
Figure 2.2: Maria the Jewess, considered to be the first alchemist and inventor of the still, lived in
Alexandria in the first or second century CE. From Michael Maier, Symbola aurea mensae duodecim
nationum, 1617. Accessed June 3, 2011 from http://www.alchemywebsite.com/images/amcl111.
jpg.
2.2 MODULE 2.2. ENERGY INDEPENDENCE
Contemporary conversations in the United States around energy independence have strong political
resonance.
Outcomes: h (context), j (contemporary issues).
This module critically examines claims made in the public arena about the goal of energy inde-
pendence, and then entertains a reinterpretation of the concept of energy independence in local
communities.
2.2.1 EXPLORATION 1: “FOREIGN” OIL INDEPENDENCE
1. Engage. US leaders have made much of the concept of energy independence on a national
level. This phrase typically is used to mean independence from foreign oil sources. Watch a
segment of the Rachel Maddow Show [9] (http://www.msnbc.msn.com/id/26315908/#
37769319) that argues that “energy independence” as conventionally conceived in US politics
is a myth.
2. Analyze. Why does Maddow argue that energy independence is a myth? Would it be possible
to achieve independence from foreign oil? If so, how? Try to think of multiple ways to achieve
this goal.
3. Reflect. How dependent are you on oil? How would your life change if that oil were entirely
produced in your country? Think both in terms of the practical aspects of your life and your
experience of national or international politics.
4. Change. Maddow’s vision is for the US to become oil independent altogether. How can
engineers help make this happen? What would need to change structurally to make this a
possibility?
Engage: Watch Rachel Maddow's analysis of Energy Independence in US politics. Analyze: Would it be possible to achieve independence from foreign oil? If so, how? Try to think of multiple ways to achieve this goal. Reflect: How dependent are you on oil? How would your life change if that oil were entirely domestically produced? Change: Maddow's vision is for the US to become oil independent altogether. How can engineers help make this happen?
2.2.2 EXPLORATION 2: ENERGY INDEPENDENCE RECONCEIVED
A different kind of energy independence is occurring in local cities in the US and elsewhere: in-
dependent, public ownership of utilities. With this model, local communities can create low-cost,
sustainable energy alternatives. For example, in the state of Massachusetts, there are 41 cities that
own their own municipal utilities (these mostly date back to the early 20th century). A recent state
report found that these utilities offered significantly cheaper rates than industrially owned facilities,
from 14% cheaper in 2004 to 30% cheaper in 2006. At the same time, municipal utilities were as
reliable or more reliable than their industrial counterparts, with more local control to respond rapidly
to outages [10]. Many towns support changes in state law that would lift barriers to municipalization,
so that new municipally owned facilities can be added [11].
1. Engage. Select a site near you that could be electrified using a new local energy resource.
Engage: Pick a site near you that could be sourced by a new local energy resource. Analyze: Use first law analysis to compare solar, geothermal, wind, hydro, and local biomass as potential sources. Calculate efficiencies for each. Reflect: How do the forms of the energy equation change, and what does efficiency mean for each? Can you use this analysis to select one best? Change: What is needed beyond these constructs of "first law" and "efficiency" to help us select an appropriate energy technology?
Figure 2.3: Holyoke Dam, a municipally owned power generation facility in Holyoke, MA. Photo from
US Fish and Wildlife Service, Public domain. Accessed June 3, 2011 from http://www.fws.gov/
r5crc/images/Fish/holyokedam.jpg.
2. Analyze. Use a First Law analysis (energy balance) to compare solar, geothermal, wind, hydro,
and local biomass options for your plant. Calculate efficiencies for each. (One way to organize this First Law bookkeeping is sketched after item 4 below.)
3. Reflect. How do the forms of the energy equation change for each technology, and what does
efficiency mean in each case? Can you use this analysis to select a single best technology?
4. Change. What is needed beyond the constructs of efficiency and energy balances to determine
the best energy technology for municipal application? Where in your education can you learn
these other pieces? What will you do to learn them?
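If it helps to see how the First Law bookkeeping in item 2 can be organized, here is a minimal sketch for two candidate sources, where efficiency is the useful electrical output divided by the rate of energy input from the resource. The site numbers (flow, head, irradiance, panel area) and the conversion efficiencies are assumed, illustrative values; replace them with data for the site you actually chose.

# Sketch: First Law bookkeeping for two candidate sources at an assumed site.
# Efficiency = useful electrical output / energy input rate from the resource.

RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

# Small hydro (assumed site values).
flow_m3_s = 5.0                # volumetric flow through the turbine
head_m = 12.0                  # elevation drop
turbine_generator_eff = 0.85   # assumed combined turbine/generator efficiency
hydro_input_w = RHO_WATER * G * flow_m3_s * head_m   # rate of potential energy release
hydro_output_w = turbine_generator_eff * hydro_input_w

# Solar photovoltaic (assumed site values).
irradiance_w_m2 = 600.0        # average incident solar flux on the panels
panel_area_m2 = 2000.0
panel_eff = 0.18               # assumed module efficiency
solar_input_w = irradiance_w_m2 * panel_area_m2
solar_output_w = panel_eff * solar_input_w

for name, e_in, e_out in [("hydro", hydro_input_w, hydro_output_w),
                          ("solar PV", solar_input_w, solar_output_w)]:
    print(f"{name}: input {e_in/1000:.0f} kW, output {e_out/1000:.0f} kW, "
          f"efficiency {e_out/e_in:.0%}")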
If municipalization represents a form of energy independence in the United States, what
might energy independence look like in the global South? To explore one angle of this, consider
that in development contexts, the notion of technology transfer can be controversial when wealthy
nations in the North simply export their technologies to settings in the South without consideration
for geographic or cultural differences, and often with economic strings attached. This traditional
model of technology transfer can create dependence in a variety of forms, from reliance on imported
parts and materials to foreign technical knowledge required to maintain continued operation. In the
next module we consider examples of technologies emerging from the global South, utilizing the
principle of evaporative cooling for refrigeration and space conditioning.
2.3 MODULE 2.3. EVAPORATIVE COOLERS
Evaporative cooling makes use of the First Law of thermodynamics. The process of water evaporation
requires heat as water changes phase from liquid to gas.
Outcomes: c (design), e (problems), h (context), i (lifelong learning).
Several technologies make use of this principle by drawing heat from an area one is trying to cool
– a warm room, for example, or vegetables one is trying to preserve – cooling the area of interest
while evaporating water from the system. This module explores applications of this principle in
technologies originating in countries in the global South.
Engage: Read about one of several evaporative cooler designs presented. How do they work? Analyze: Use first law analysis to estimate the cooling process for a typical scenario. How much water is used? How much cooling is produced? Reflect: What did you learn about these technologies? About the process of estimation and open-ended problem solving? Change: How could these technologies be used in your life to replace reliance on refrigeration or air conditioning?
Figure 2.4: Zeer pot. Accessed June 3, 2011 from http://practicalaction.org/images/zeer6-
fresh.jpg.
1. Engage. Consider one of the many designs of evaporative cooling used in locales where
electricity or electric refrigerators are unavailable. Although numerous clay pot designs have
utilized evaporative cooling for centuries in many locations around the world, Nigerian Mo-
hammad Bah Abba patented his design of a pot-in-pot refrigerator, generating international
interest [12, 13]. There are also a number of designs for evaporative room coolers; for example,
Myra Wong offers two versions [14] and Eric Rusten several more [15].
2. Analyze. Use a First Law analysis to estimate the cooling process for a typical scenario. For
the pot-in-pot case, assume typical outdoor temperatures on location, a typical pot size, and
estimate how much water would be used to cool what mass of vegetables. For the room coolers, assume typical room sizes, and outdoor temperatures/humidity on location, and the specifications described to determine how much water would cool how much room air. Your thermodynamics textbook will have data that can help you with these calculations. Can you make this connection? (A minimal energy-balance sketch for the pot-in-pot case follows item 4 below.)
3. Reflect. What did you learn about these technologies? About the process of estimation and
open-ended problem-solving? About the First Law and how to apply it in engineering design?
4. Change. What improvements if any would you suggest for the design you reviewed? How
could these kinds of technologies be used in your own life context? Under what conditions if
any could they, for example, replace refrigeration or air conditioning?
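As one illustration of the estimate item 2 asks for, the sketch below applies a simple energy balance to the pot-in-pot case: the heat removed from the produce is supplied by evaporating water, so m_water * h_fg is roughly m_food * c_p * dT when heat leaking in from the surroundings is ignored. Every number in it is an assumed, typical value; take the properties from your own textbook's tables and the temperatures from the locale you chose.

# Sketch: ideal pot-in-pot energy balance (heat gain from surroundings ignored).
# All numbers are assumed, illustrative values.

m_food_kg = 10.0     # mass of vegetables being cooled
c_p_food = 3.9e3     # J/(kg*K), roughly water-rich produce
dT = 15.0            # K, e.g., cooling from about 35 C ambient to about 20 C
h_fg = 2.4e6         # J/kg, latent heat of vaporization of water near ambient T

heat_removed_j = m_food_kg * c_p_food * dT
m_water_kg = heat_removed_j / h_fg

print(f"Heat to remove: {heat_removed_j / 1e3:.0f} kJ")
print(f"Water evaporated (ideal case): {m_water_kg:.2f} kg, about {m_water_kg:.2f} liters")

In practice considerably more water evaporates, because the pot keeps absorbing heat from the hot surroundings after the produce has cooled; that steady-state heat gain is a good next term to add to the balance.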
We’ve seen how the First Law can be used in engineering design applications. The next
module illustrates its usefulness in a very different applied context: hunger and nutrition.
2.4 MODULE 2.4. HUNGER, POVERTY, AND OBESITY
Betty Ann, of Nacogdoches, TX, posted the following comment to an online CNN article on food
stamps:
Outcomes: a (SEM knowledge), f (ethics), i (lifelong learning), j (contemporary issues).
Granted, $3.00 a day is not very much for food and of course those who are hungry should receive
more, however; in a country where over-weight and obese thrive, lets make sure these people are
really needing the food… I can not tell you how many times I have been in line at the grocery behind
an obese person who used food stamps to pay. Should people have to “weigh in” to receive the food
assistance?.... There needs to be more control over the food stamps. The ones who are truly starving
should be the recipients [16].
Betty Ann has hit upon a common misperception about hunger in the United States. Being
hungry means not being able to supply sufficient food for one’s self and family. Sufficient food is not
defined in terms of a person’s size, and being a certain weight does not give us any information about
that person’s access to adequate nutrition. In the United States, where processed foods abound, even
in so-called “food deserts” where fresh produce is scarce, often the cheapest foods that are readily
available are also the most Calorie-dense. This creates a situation where getting a large number of
Calories (note that Calories in nutritional contexts is spelled with a capital C and denotes kilocalories
in thermodynamic terms, so 1 Calorie = 1000 calories) does not necessarily result in sufficient mass
or volume of food to fill one’s belly, or in sufficient nutrition to support good health. Can we use
the First Law of thermodynamics to link hunger, poverty and obesity in order to challenge popular
misconceptions about what hunger looks like?
What is Obesity? Is it a problem? Public health studies tell us that obesity rates in the United
States have been rising. In 1995, less than 20 percent of the population in each of the 50 states was
obese. By 2005, only four states had populations in which less than 20 percent were obese, and three
states had more than 30% obese people (Louisiana, Mississippi, and West Virginia) [17]. It is not
a coincidence that in 2005 those states ranking 1st, 2nd, and 3rd in obesity ranked 50th, 49th, and
47th in personal income per capita [18]. The highest rates of obesity in the United States are found
among those with the lowest incomes [19]. At first this may seem counterintuitive; worldwide, as
countries develop economically, the population gains weight [20]. Why is it that in the United States,
the highest rates of obesity are in low income groups? This is an important question, but it cannot
be answered simply, as the relationship between the two is complex, and correlations do not imply
causation.
The link between obesity and health is also complicated and subject to a lot of misinterpreta-
tion. The Centers for Disease Control defines obesity as “having a very high amount of body fat in
relation to lean body mass,” or a Body Mass Index (BMI) of 30 or higher. The BMI is a measure of
weight that accounts for different heights of individuals; to calculate the BMI, an adult’s weight (kg)
is divided by the square of his/her height. A BMI ranging from 18.5-25 is considered healthy, while
a BMI of 30 or more is considered obese [21]. Though a lot of evidence finds a correlation between
high BMI and negative health outcomes including diabetes and cardiovascular disease, other public
health researchers point out that direct measures of physical fitness are better predictors of health
outcomes, and that physically fit obese people have better health outcomes than inactive people who
are not overweight [22]. Feminists and others are quick to point out the dangers of a size-focused
anti-obesity movement that heaps more stigma on a group already targeted for bullying and ridicule.
[23] We have already seen in Betty Ann’s comment the way that fat stigma and classism combine
in a vicious size-based judgment of a person’s worthiness to receive food stamps.
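For the BMI formula itself, a quick check with made-up example values looks like this:

# Sketch: BMI = mass (kg) / height (m) squared, with invented example values.
def bmi(mass_kg: float, height_m: float) -> float:
    return mass_kg / height_m ** 2

print(f"{bmi(70, 1.75):.1f}")   # about 22.9, inside the 18.5-25 "healthy" range
print(f"{bmi(95, 1.75):.1f}")   # about 31.0, above the BMI-30 obesity threshold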
So how do we make sense of the links among health, poverty, and obesity? Slate’s Daniel
Engber [24] explores this question, concluding that health, poverty and obesity “are spun together
in a dense web of reciprocal causality.” That is, being obese can make one poor as sure as being
poor can make one obese, and both increase the likelihood of getting sick, and so on. Engber notes,
importantly, that being poor is a stronger predictor of negative health outcomes than being obese.
What is Hunger? In 2006, the US government stopped using the word “hunger” to describe
the condition of not knowing where one’s next meal is coming from [25]. The current term is food
insecurity. In 2005, the USDA reported that 12.6 million households (about 35 million people, or
12% of Americans) were food insecure, meaning that at some point during the year they were unable
to afford sufficient food for their family [26]. While the average US household spends about $40
per person per week on food, a typical food insecure household spends about $30 [25].
Adam Drewnowski and associates study the energy density and energy cost of food [19, 27].
Energy density is defined as the ratio of energy provided by food (kcal) to its mass (g). Energy
cost of food is the ratio of the amount paid ($) to the energy provided (kcal). Drewnowski and
Darmon [27] considered the relationship between energy density and energy cost, and found that
the cost per Calorie of “healthy” foods such as fresh produce was several thousand percent higher than that of “unhealthy” foods such as fats and sweets. Furthermore, they showed using linear programming
models that when food expenditures are restricted, diets become more energy dense, with fewer
vegetables and more fats.
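Both quantities are straightforward to compute from a nutrition label and a shelf price. A minimal sketch, using hypothetical label values rather than Drewnowski's data:

```python
# Energy density (kcal/g) and energy cost ($/kcal) as defined above.
# Label and price values are hypothetical placeholders.
serving_mass_g = 85.0            # mass of one serving (g)
energy_per_serving_kcal = 420.0  # Calories per serving (kcal)
price_per_serving_usd = 0.50     # cost of one serving ($)

energy_density_kcal_per_g = energy_per_serving_kcal / serving_mass_g
energy_cost_usd_per_kcal = price_per_serving_usd / energy_per_serving_kcal

# Converted to the units of Figure 2.5 (MJ/kg and $/MJ), using 1 kcal = 4.184 kJ.
energy_density_MJ_per_kg = energy_density_kcal_per_g * 4.184   # kJ/g is numerically MJ/kg
energy_cost_usd_per_MJ = price_per_serving_usd / (energy_per_serving_kcal * 4.184 / 1000.0)

print(f"{energy_density_kcal_per_g:.2f} kcal/g = {energy_density_MJ_per_kg:.1f} MJ/kg")
print(f"${energy_cost_usd_per_kcal:.4f}/kcal = ${energy_cost_usd_per_MJ:.2f}/MJ")
```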
Results from a USDA study corroborate this finding. When asked what foods they would
buy if they had more money, low-income respondents indicated they would buy more meats, eggs,
cereals, and bakery products. People increased the amount spent on fruits, vegetables, and dairy products only when income rose more than 30 percent above the poverty level [27].
Anecdotal experiences of hungry Americans also support this idea. Consider the following
account from the wife of a Marine:
My husband knew he was going to be in the field for three weeks. He also knew that I would be here
by myself with very little money and no dishes or pots and pans. So he went down to McDonald’s on
Sunday when hamburgers were thirty-nine cents and bought twenty-one of them. I’ve been eating
one hamburger every day for the last twenty days [28].
[Learning-cycle sidebar – Engage: explore a local grocery store; gather data on cost and energy content of foods. Analyze: plan a day's menu for yourself using three alternative budgets, while still meeting basic nutritional guidelines. Change: how would you critique thermo textbooks' discussion of “biological systems,” based on what you have learned in this exercise? Reflect: compare Pollan's and Drewnowski's takes on causes of (and ways to address) hunger. What do you think should be done? Why?]
Figure 2.5: Smith Engineering students in the Fall 2010 Thermodynamics class created a graph similar
to Drewnowski’s [16] based on data they collected from three food outlets in Springfield, MA. The graph
relates the energy density of selected foods (MJ/kg) with energy costs ($/MJ). As with Drewnowski’s
findings, the energy cost difference between processed foods high in sugar and fat compared with fresh
vegetables is striking (note the log scale).
1. Engage. Explore a local grocery store.
a. Search for the cheapest item in each of the pyramid groups (grains, fruits, vegetables,
milk/cheese/yogurt, meats/beans) you can find and write down each one’s nutritional
data from the USRDA label and cost. What is the energy cost ($/100kcal)? What is the
energy density (kcal/kg)?
b. Now find the most nutritious item in each category in the store and write down their nutritional values and costs. What are their energy costs ($/100kcal) and
energy densities (kcal/kg)? (Nutritious is a highly subjective term; use http://www.
mypyramid.gov for some guidelines).
c. Tip: make sure you write down information that can help you estimate the mass of food
per serving. Don’t just copy what’s on the label, but think about what the real mass or
volume of a serving will be, and whether the recommended serving on the label is realistic.
2. Analyze. Plan a day’s menu for yourself using each of three alternative budgets:
a. $5 (maximum individual daily allotment for a food stamp recipient).
b. $10 (low budget/student).
c. Maximize nutrition regardless of cost.
For each menu you must meet the national nutrition guidelines for a 21-year-old female exercising less than 30 minutes per day, or 2000 Calories (kcal) [29], which include the following:
6 oz. grains, half of which are whole grains
2.5 c. vegetables, varied among dark green, orange, pea/bean, starchy, and others
2 c. fruits or fruit juices
3 c. milk, yogurt, cheese, or other calcium-rich food
5.5 oz. meat and beans
Visit USDA’s website http://www.mypyramid.gov for more information.
One question that will arise is whether one can buy bulk items, or any items with multiple
servings. Can one assume that certain staples have already been purchased? You will need to make
a reasonable judgment here. It is not realistic to assume that all costs are borne up front for a single serving, nor that costs can be infinitely prorated – people on a tight budget don't have the luxury of buying the “family size” version of everything in order to save money in the long run. (A minimal budget-check sketch appears after step 4 below.)
3. Reflect. Read Michael Pollan’s New York Times article on the Farm Bill (http://www.
michaelpollan.com/article.php?id=88) [30]. Why, in his view, are carrots more ex-
pensive than twinkies? Now consider Drewnowski and Darmon’s [27] supposition “that the
rising obesity rates reflect an increasingly unequal distribution of incomes and wealth.” How
might each analysis lead to different approaches to addressing hunger, poverty and/or obesity?
What do you think should be done? Why? Provide at least two substantively different ethical
arguments for your position. What specific action will you take as a result?
4. Change. Many thermodynamics textbooks engage students with a discussion of “thermody-
namic aspects of biological systems” and a series of related homework problems [31]. How
would you critique this part of the textbook, based on what you have learned in this exercise?
Can you think of a way to move the conversation forward about health effects of hunger,
poverty and nutrition in the US without adding to social stigma around size or weight?
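As a follow-up to step 2 (Analyze), here is a minimal sketch of the kind of budget-and-Calorie check you might run on a candidate menu. All items, prices, and Calorie counts are hypothetical placeholders; a full check would also tally the food-group servings listed above.

```python
# Hypothetical day's menu: (item, cost in $, energy in kcal).
menu = [
    ("oatmeal (2 servings)",   0.40, 300),
    ("bananas (2)",            0.50, 210),
    ("rice and beans",         1.20, 650),
    ("frozen vegetables",      1.00, 120),
    ("milk (3 cups)",          0.90, 360),
    ("peanut butter sandwich", 0.80, 380),
]

total_cost = sum(cost for _, cost, _ in menu)
total_kcal = sum(kcal for _, _, kcal in menu)

budget_usd = 5.00    # option (a): maximum daily food stamp allotment
target_kcal = 2000   # daily energy guideline cited above

print(f"cost: ${total_cost:.2f} of ${budget_usd:.2f}; energy: {total_kcal} of {target_kcal} kcal")
print("within budget" if total_cost <= budget_usd else "over budget")
```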
Having explored the First Law in the contexts of developing strategies for national and local
energy independence, designing evaporative cooling technologies, and understanding links among
hunger, poverty, and obesity in the US, we now turn to a “free choice” module where you can explore
the First Law in any application that strikes your interest and curiosity.
2.5 MODULE 2.5. THERMO TO LIFE
[Outcome icons: (a) SEM knowledge, (e) problems, (h) context, (i) lifelong learning.]
[Learning-cycle sidebar – Engage: explore and describe the thermodynamic aspects of an everyday life interest; pose a question to explore further. Analyze: define a system and conduct an energy balance or other analysis employing thermodynamic principles that might address the question. Reflect: what does this analysis tell us about the phenomenon described? What did we learn? What new questions emerged as a result? Change: what might we change to improve the system, or our understanding?]
Figure 2.6: Inveneo’s Bicycle Powered Generator, 2005. Photo by Ho John Lee, used under the
Creative Commons Attribution 2.0 Generic license. Accessed June 8, 2011 from http://commons.
wikimedia.org/wiki/File:Inveneo_bicycle_powered_generator.jpg.
This module employs the idea of praxis, in which theory and practice are interdependent
and inform one another, grounded in community and directed toward social change [32]. You will
explore a question that arises from a social need related to energy, conduct an engineering analysis
of that phenomenon, and take socially transformative action in response to what you have learned.
The goal is to pick a topic that you find relevant and interesting, and explore how the theory and
analytical tools you are learning apply to your topic.
1. Engage. Choose a question that you have heard emerge from the community (you define com-
munity here – it could be the campus community, your home community, a local community,
or any other group that has posed a relevant question). Example questions might include the
following:
What would it take for my community to be compliant with the Kyoto Protocol, or other
proposed climate change policies? Is it feasible to be carbon neutral? What would that
entail?
What is the potential for using more human-powered machines in my community? What
would be involved in, say, developing a human-powered television?
How do igloos work, and what would it take to construct a working igloo in my com-
munity?
What is the energy and nutritional content of local elementary school lunches?
Some energy usage might be considered a basic human need, such as home heating in
cold climates. How much energy goes to basic human needs locally, and how could we
address the impact of rising energy costs, or the impact of policies such as a carbon tax,
on the poor?
What does it take to retrofit a car for biodiesel, or to refine biodiesel fuel in my community?
Conduct an energy audit on a local building to identify opportunities for energy and cost
savings.
You want to be sure your question will also meet the other requirements of the assignment (can it
be subjected to analysis, and ultimately result in transformative action?). Present a background
description and a qualitative write-up that explains the thermodynamics in layperson’s terms
and illustrates the potential transformative value of the work you will do.
2. Analyze. Perform some quantitative analysis on your selection – this will most likely be an
energy balance, but you could also perform other calculations that illustrate how it works
thermodynamically – for example, an engine cycle analysis, or chemical reaction equilibrium
analysis, depending on your chosen system. Some of the topics may not have been covered yet
in your course, but you should feel free to explore topics as they are relevant and learn what
you can about them, driven by your interest. Thoroughly explain the thermodynamics behind
how it works. Make reasonable assumptions where necessary. Be as realistic as possible, but make simplifications if needed. (A minimal example energy balance appears after step 4 below.)
3. Reflect. Think (do not write or type, just think) for 15 minutes about what you have learned from your engagement and analysis so far. What questions emerge for you? Write a short
reflection on what you learned, and what you would like to explore further. Identify possible
avenues of change or further exploration.
4. Change. Take some action that changes the situation. If your topic is policy-relevant, it may
mean contacting your representatives and communicating with them about what you have
learned. If your topic is local, it may mean communicating with the local community through public media or through private contacts. Perhaps you have an opportunity to suggest a design
improvement and show either through calculations or some course of experimental action how
your idea improves the artifact or situation. Be creative. Reflect on the potential or realized
impact of your action. What else might you do in the future?
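To illustrate the kind of energy balance step 2 (Analyze) asks for, here is a minimal sketch built around the bicycle-powered generator shown in Figure 2.6. Every number is an assumption chosen for illustration, not a specification of Inveneo's device.

```python
# First Law bookkeeping for a pedal-powered generator (illustrative assumptions only).
pedal_power_W = 75.0      # assumed sustained mechanical power from the rider
system_efficiency = 0.70  # assumed combined drivetrain + generator efficiency
duration_h = 1.0          # one hour of pedaling

electrical_energy_Wh = pedal_power_W * system_efficiency * duration_h
dissipated_Wh = pedal_power_W * duration_h - electrical_energy_Wh  # friction and electrical losses

print(f"electrical output ~ {electrical_energy_Wh:.0f} Wh per hour of pedaling")
print(f"energy dissipated as heat ~ {dissipated_Wh:.0f} Wh per hour")
```

Roughly 50 Wh per hour of pedaling under these assumptions puts a human-power question into perspective; your own analysis should replace these placeholders with values you can defend.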
REFERENCES
[1] Von Baeyer, H.C. (1999). Warmth Disperses and Time Passes: a history of heat. New York: Modern
Library. Cited on page(s) 36
[2] Joule, J.P. (1845). On the Existence of an Equivalent Relation between Heat and the ordi-
nary Forms of Mechanical Power. Philosophical Magazine. Series 3, Vol. xxvii, p. 205. Ac-
cessed June 12, 2011 from http://www.chemteam.info/Chem-History/Joule-Heat-
1845.html. Cited on page(s) 37
[3] Thompson, B. (Count Rumford) (1798). Heat is a Form of Motion: An experiment in bor-
ing cannon. Philosophical Transactions (vol. 88). Accessed June 12, 2011 from http://www.
chemteam.info/Chem-History/Rumford-1798.html. Cited on page(s) 37
[4] Al-Hassan, A.Y. and Hill, D.R. (1986). Islamic Technology: an illustrated history. Cambridge:
Cambridge University Press. Cited on page(s) 37
[5] James, P. and Thorpe, N. (1994). Ancient Inventions. New York: Ballantine Books. Cited on
page(s)
[6] Macdonald, A. (1992). Feminine Ingenuity: Women and Invention in America. New York: Bal-
lantine Books. Cited on page(s)
[7] Stanley, A. (1993). Mothers and Daughters of Invention: Notes for a Revised History of Technology.
New Brunswick, NJ: Rutgers University Press. Cited on page(s)
[8] Andah, B.W. (1992). Nigeria’s Indigenous Technology. Ibadan: Ibadan University Press. Cited
on page(s) 37
[9] Maddow, R. (2010). Oil Independence is a Myth. In B. Wolff (Producer), The Rachel Maddow
Show, New York: MSNBC. June 17, 2010. Accessed June 12, 2011 from http://www.msnbc.
msn.com/id/26315908/#37769319. Cited on page(s) 38
[10] Massachusetts Department of Energy Resources. (2010). Municipal Utility Study. Technical
Report. January 28, 2010. Accessed June 7, 2011 from http://www.mass.gov/Eoeea/docs/
doer/publications/doer-municipal-utility-rpt.pdf Cited on page(s) 39
[11] Massachusetts Alliance for Municipal Electric Choice. (2011). Website. Accessed June 7, 2011
from http://www.massmunichoice.org/. Cited on page(s) 39
[12] Practical Action (2011). How a Zeer Pot Fridge Makes Food Last Longer. Accessed June 7,
2011 from http://practicalaction.org/?id=zeerpots. Cited on page(s) 41
[13] Elkheir, M. (2004).The Zeer Pot: A Nigerian invention keeps food fresh without electricity. Sci-
ence in Africa, September 2004. Accessed June 7, 2011 from http://www.scienceinafrica.
co.za/2004/september/refrigeration.htm. Cited on page(s) 41
[14] Wong, M. (2003). An Evaporative Cooler. In Field Guide to Appropriate Technology, B. Hazel-
tine and C. Bull eds. New York: Elsevier Science. pp. 257–258. Accessed July 7, 2011 from
http://books.google.com/books?id=kEAOTpIYFBcC&pg=PA257&lpg=PA257&dq=
%22myra+wong%22+%22evaporative+cooler%22&source=bl&ots=Pe6C0Ic9jM&sig=
TBe12l8tYxtUZogMvv6Nn69ySb4&hl=en&ei=riYhTO_jNMH98AaE-c2ZAQ&sa=X&oi=
book_result&ct=result&resnum=1&ved=0CB4Q6AEwAA#v=onepage&q=%22myra
%20wong%22%20%22evaporative%20cooler%22&f=false. Cited on page(s) 41
[15] Rusten, E. (1985). Understanding Evaporative Cooling. VITA Technical Paper #35. Volunteers
in Technical Assistance. Accessed June 7, 2011 from http://www.cd3wd.com/cd3wd_40/
vita/evapcool/en/evapcool.htm Cited on page(s) 41
[16] Anderson Cooper 360 Blog. Accessed June 15, 2007 from: http://www.cnn.com/
CNN/Programs/anderson.cooper.360/blog/2007/04/oregon-governor-tries-
living-on-food.html. Cited on page(s) 42, 44
[17] Centers for Disease Control Obesity Trends. Accessed June 15, 2007: http://www.cdc.gov/
nccdphp/dnpa/obesity/trend/maps/ . Cited on page(s) 43
[18] US Department of Commerce, Bureau of Economic Analysis, Survey of Current Business. Web:
www.bea.doc.gov/bea/regional/spi/. Cited on page(s) 43
[19] Drewnowski, A. and Specter, S.E. (2004). Poverty and obesity: the role of energy density and
energy costs. American Journal of Clinical Nutrition, 79:6–16. Cited on page(s) 43
[20] Kumanyika, S., Jeffery, R.W., Morabia, A., Ritenbaugh, C. and Antipatis, V.J. (2002). Obesity
prevention: the case for action, International Journal of Obesity, 26 (3):425–436. Cited on
page(s) 43
[21] Centers for Disease Control Obesity Trends. Accessed June 15, 2007: http://www.cdc.gov/
nccdphp/dnpa/obesity/trend/maps/ Cited on page(s) 43
[22] Blair, S.N. and Church,T.S. (2004).The fitness, obesity, and health equation: is physical activity
the common denominator? JAMA 292 (10):1232–1234. Cited on page(s) 43
[23] Harding, K. and Kirby, M. (2009). Lessons from the Fat-o-Sphere: Stop dieting and declare a truce
with your body. New York: Perigree Trade. Cited on page(s) 43
[24] Engber, D. (2009). Give me your poor, your tired, your big fat asses: Does poverty make people
obese, or is it the other way around? Slate, Sept. 28, 2009. Accessed June 7, 2011 from http://
www.slate.com/id/2229523/. Cited on page(s) 43
[25] Williamson, E. (2006). Some Americans Lack Food, but USDA Won’t Call Them Hun-
gry, Washington Post November 16, 2006. Accessed June 7, 2011 from http://www.
washingtonpost.com/wp-dyn/content/article/2006/11/15/AR2006111501621.
html. Cited on page(s) 43
[26] Nord, M., Andrews, M., and Carlson, S. (2006). Food Security in the United States, 2005,
Economic Research Report No. (ERR-29) 68 pp, United States Department of Agriculture,
November 2006. Cited on page(s) 43
[27] Drewnowski, A. and Darmon, N. (2005). The economics of obesity: dietary energy density and
energy cost. American Journal of Clinical Nutrition, 82(suppl): 265S-273S. Cited on page(s) 44,
45
[28] Blisard, N. and Stewart, H. (2006). How Low-Income Households Allocate Their Food Budget
Relative to the Cost of the Thrifty Food Plan Economic Research Report No. (ERR-20),
United States Department of Agriculture, August 2006. Cited on page(s) 44
[29] USDA. MyPyramid Plan. Accessed August 23, 2007 from http://www.mypyramid.gov/
mypyramid/index.aspx. Cited on page(s) 45
[30] Pollan, M. (2007). You Are What You Grow. New York Times Magazine, April 22, 2007.
Accessed August 23, 2007 from http://www.michaelpollan.com/article.php?id=88.
Cited on page(s) 45
[31] Çengel and Boles. (2008). Thermodynamics: An engineering approach. 6th ed. New York:
McGraw-Hill, pp. 193–200, 210–211. (Other texts have similar sections or sidebars.) Cited
on page(s) 46
[32] Marx, K. [1845] (1976). Theses on Feuerbach. In K. Marx and F. Engels (Eds.), Collected Works
of Karl Marx and Friedrich Engels, 1845–1847, Vol. 5: Theses on Feuerbach, The German Ideology
and Related Manuscripts. New York: International Publishers, p. 8. Cited on page(s) 46
CHAPTER 3
The Second Law and Property
Relations
This chapter explores the Second Law of thermodynamics and the related concept of entropy in
practical, historical, and philosophical terms, and grounds the fundamental property relations of
thermodynamics in relevant contexts.
“The best you can do is break even.” “Heat flows naturally from hot to cold.”
The Second Law and the related concept of entropy are often challenging for students to grasp
initially; students often see multiple statements of the Second Law that have been developed his-
torically, as well as colloquial statements intended to assist student understanding; this variety often leads to confusion as students struggle to reconcile disparate statements and long for a single, concise,
and correct one.
“No process is possible whose sole result is the transfer of heat from a body of lower temperature to a body of higher temperature.”
“No process is possible in which the sole result is the absorption of heat from a reservoir and its complete conversion into work.”
This chapter is designed to help you with these new ideas by demonstrating their relevance in
personal, professional, and philosophical terms. Using historical and social analysis to view the Sec-
ond Law from multiple perspectives, you will gain insight into the concepts and their development,
as well as into the scientific enterprise.
“The entropy of an isolated system (or the entropy of a system plus its surroundings) always increases (except for reversible processes, where it remains constant).”
The first module explores how we define efficiencies and what efficiency has to do with the
Second Law. What do the limits of achievable efficiency mean in real terms for heat engines compared
with other energy technologies? The second module considers the history of pursuit of perpetual
motion in the United States and asks why so many are seduced by the idea even in contradiction
of reason. The third module provides historical background on the development of entropy as a
thermodynamic property and explores its philosophical implications. The fourth module tests the
accuracy and helpfulness of analogies used to help students grasp the concept of entropy.
The fifth module demonstrates the relevance of the mathematical “guts” of thermo, the fundamental
property relations, by challenging you to apply them in a context of your choosing.
Module 3.1: The Limits of Efficiency – Heat Engines vs. Other Technologies.
Module 3.2: Perpetual Motion.
Module 3.3: Entropy: Origins and Implications.
Module 3.4: Evaluating Entropy Analogies.
Module 3.5: Making Math Relevant: Thermodynamic Relations in Context.
3.1 MODULE 3.1. THE LIMITS OF EFFICIENCY: HEAT
ENGINES VS. OTHER ENERGY TECHNOLOGIES
Efficiency is a central principle in thermodynamics; you may have been calculating the efficiencies
of different systems as part of your problem solving in your thermo course.
[Outcome icons: (a) SEM knowledge, (h) context.]
You may also have noticed popular discussions of energy efficiency as part of energy conservation
strategies. What does each of these discussions of efficiency have to do with the Second Law? This
module guides your work to answer this question by comparing definitions of efficiency, as well as
comparing the limits of efficiency, for different types of systems.
1. Engage. What does efficiency mean? What is the difference between thermal efficiency and
mechanical efficiency? Which kinds of efficiency apply to which energy technologies? Seek
out some definitions of efficiency from textbooks and other sources. What is your definition?
Pay attention to qualifying terms such as thermal or mechanical efficiencies. What is being
measured, relative to what?
2. Analyze. Find or develop specific definitions of efficiency for solar, geothermal, wind, hydro,
and coal fired power plants. What is similar among them, and how do they differ? What is
the difference in consequence (economic, environmental, social) of low (or high) efficiencies
in each case? Can you conclude one technology is better than another based on efficiency
figures? Why or why not? What is the maximum possible efficiency of each type? When does the Carnot efficiency come into play, and when is it irrelevant? Does the Second Law still apply to systems that are not heat engines? If so, how? (A short Carnot-efficiency sketch follows step 4 below.)

[Learning-cycle sidebar – Engage: write some definitions of efficiency as you understand it and as it is described in your textbook or other sources. Analyze: refine your definitions for specific energy systems; characterize the impacts of low or high efficiency in each case; can you compare systems? Reflect: how is efficiency properly used in engineering design? In public conversations about energy? Change: what are some other ways of presenting essential performance information that help us think about sustainable energy?]

Figure 3.1: Hoover Dam; windmills in Lubbock, TX. What does efficiency mean? http://www.windmill.com/images/Cluster_at_Sunset.jpg http://www.visitingdc.com/images/hoover-dam-directions.jpg.
3. Reflect. Based on this exploration, how do you think efficiency is properly used in the context
of engineering design? How is it properly used in public conversations or political debates
about energy?
4. Change. What are some other ways of presenting information about an energy system’s per-
formance, particularly with regard to sustainability? Can any of these new methods help us
compare different kinds of technologies better?
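For the Carnot efficiency mentioned in step 2, here is a minimal sketch of the standard bound, eta_max = 1 − T_cold/T_hot, with temperatures in kelvin. The values below are illustrative assumptions, roughly in the range of a steam power plant.

```python
# Carnot (maximum) efficiency of a heat engine operating between two reservoirs.
T_hot_K = 800.0    # assumed heat-source temperature (K)
T_cold_K = 300.0   # assumed heat-sink temperature (K)

eta_carnot = 1.0 - T_cold_K / T_hot_K
print(f"Carnot efficiency = {eta_carnot:.2f}")   # 0.62 here; an upper bound no real heat engine reaches
```

Note that this bound applies to heat engines; wind, hydro, and photovoltaic systems are not heat engines, which is part of why comparing "efficiencies" across technologies takes care.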
Having explored achievable efficiencies, the limits of what’s possible, we now turn to the
seductive pursuit of the impossible: perpetual motion machines.
3.2 MODULE 3.2. PERPETUAL MOTION MACHINES
For centuries, people have pursued machines that produce infinite energy.
[Outcome icons: (g) communication, (h) context, (i) lifelong learning.]
Why have such pursuits garnered so much attention, even well after science’s widespread acceptance
of the Second Law?
[Learning-cycle sidebar – Engage: learn about and retell an incident of perpetual motion machines in history; choose from the examples below or find your own. Analyze: how did the technology violate either the first or second law? Why were people fooled for a time? Reflect: why are perpetual motion machines so seductive? Why do people readily distrust science in this and other areas? Change: how can science help?]
Figure 3.2: An ironic t-shirt referencing the debate over teaching evolution in public schools urges us
to “teach the controversy” of perpetual motion. http://controversy.wearscience.com/img190/
perpetual.gif. Used with permission.
1. Engage. Learn about an incident in history where perpetual motion or “free energy” was
pursued. Retell the story in your own words. Choose from the following examples detailed by
Bob Park [1], in which the United States Congress gave time and attention to these ideas,
despite their lack of scientific merit:
The 1989 Cold Fusion experiments of Fleischmann and Pons.
The Newman Energy Machine.
The Giragossian Energy Machine.
2. Analyze. How did the technology violate either the First or Second Law of thermodynam-
ics? Why were people fooled for a time? How was the idea debunked? Both Newman and
Fleischmann/Pons continue to have defenders to this day. Why do you think that is?
3. Reflect. Why are perpetual motion machines so seductive? What would be the social and
economic consequences if we could operate perpetual motion machines? Think about other
present-day cases where science is distrusted or discounted – for example, in approaching
evolution or climate change. What social and economic consequences might be at stake,
driving a discounting or distrust of science?
4. Change. The notion of critical thinking or skepticism is held up by both sides in these debates.
Supporters of Newman and others claim that the scientific establishment is closed to new ideas, and opponents of teaching evolution argue for “teaching the controversy” – allowing religious accounts to be taught alongside scientific accounts in biology classrooms. Why are such positions ultimately
uncritical?
Here we’ve applied some social analysis to understand why and how some well-proven scien-
tific concepts go unaccepted and not understood by people at large. In the next module we take up
the more challenging task of applying social analysis to the development and acceptance of concepts
by the scientific establishment itself, examining how entropy came to be, and its implications for
other fields of knowledge.
3.3 MODULE 3.3. ENTROPY AS A SOCIAL CONSTRUCT
Having encountered in the last module the persistence of challenges to the Second Law, it may be
particularly provocative to title this section “entropy as a social construct.”
[Outcome icons: (g) communication, (h) context, (i) lifelong learning.]
Let me be clear that I do not doubt entropy any more than I doubt gravity, but both concepts take
on certain forms of expression that are shaped by the social and historical contexts in which they
were developed, and are subsequently interpreted in new times and places. In this module we first
consider the social forces influencing the historical development of the concept of entropy, then
explore the implications of entropy as interpreted in contexts far afield of heat engines.
3.3.1 EXPLORATION 1: ORIGINS OF ENTROPY
1. Engage. Read historical accounts of the conceptual development of entropy by Rudolf Clau-
sius; Von Baeyer’s is particularly readable [2]. Clausius’s key papers can be found at http://
www.humanthermodynamics.com/Clausius.html. Von Baeyer argues that the history of
entropy illustrates some central points about the “thematic content of science”[3] – that science
has certain preferences in expressing theoretical content for both universal theory and for par-
simony (mathematical simplicity that can be expressed on a t-shirt, like Maxwell's equations or Einstein's E = mc²). These preferences and biases drove Rudolf Clausius's attempts to express the Second Law of thermodynamics in ways that were both elegant and parallel, as well as his attempts to create entropy as a mathematical quantity to put into an equation – which ends up an inequality that doesn't “balance.”
2. Analyze. Reviewing your textbook and other sources, gather as many expressions of entropy
and the Second Law as you can. Write a short essay reviewing how the thematic content of
science plays out in these forms of expression. What does it mean that there are so many ways
of expressing the same concept? What does it mean to acknowledge that entropy is socially and historically constructed?

[Learning-cycle sidebar – Engage: how did the concept of entropy come to be? How did the preferences of scientific institutions shape this process? Analyze: how does the thematic content of science play out in multiple expressions of the Second Law? Reflect: what have you struggled with most in coming to understand the second law or the concept of entropy? Change: what can you do to understand entropy better? Choose one thing and try it out.]

Figure 3.3: Parsimony: reducing entropy to a symbol on a button. http://rlv.zcache.com/entropy_button-p145655327883250137t5sj_400.jpg.
3. Reflect. Why do you think students find entropy difficult to grasp on a first encounter? How
do the difficulties relate to the thematic content of science and our expectations to learn
engineering concepts in certain forms? What have you struggled with most in coming to
understand the Second Law, or entropy?
4. Change. What can you do to understand entropy better? Choose one action you can take and
try it out.
While entropy may be challenging to grasp at first, it is rewarding in its profundity. Consider
for example, entropy’s ability to answer why it is that we experience time as moving ever forward,
never backward.
3.3.2 EXPLORATION 2: ENTROPY’S PHILOSOPHICAL IMPLICATIONS
[Learning-cycle sidebar – Engage: how does one get from the expressions of the Second Law in thermo to concepts like the arrow of time? Analyze: explain how the Second Law shows that time moves ever forward… or at least almost ever. Reflect: how did the practical considerations of Carnot, Kelvin, Planck, and others lead to profound philosophical insights? Change: what can you do to find or make deeper meanings in your work?]

“For those of us who believe in physics, this separation between past, present, and future is only an illusion, albeit a stubborn one.” –Albert Einstein [4]

“It is just an illusion we have here on Earth that one moment follows another one, like beads on a string, and that once a moment is gone, it is gone forever…. So it goes.” –Kurt Vonnegut [5]

Figure 3.4: Albert Einstein (top) and Kurt Vonnegut (bottom). http://www.glsc.org/einstein/images/einstein_3.jpg http://adreampuppet.files.wordpress.com/2007/04/vonnegut.jpg.

1. Engage. Read some descriptions of the arrow of time [6]. In the quotes above, both Einstein and Vonnegut reference Minkowski's concept of space-time, in which forward and backward in time would be matters of convention… in theory. But the Second Law gives time a direction – how? A good place to start is Von Baeyer's summary [2] of the work of several physicists and mathematicians instrumental in developing a probabilistic and microscopic approach to the Second Law, including Maxwell and Boltzmann.
2. Analyze. How did Maxwell and Boltzmann come to characterize the Second Law in terms of
probability, and at the molecular level? How did entropy come to be characterized as disorder?
How does the work of Ehrenfest, Ruelle, and Boltzmann come together to show that the proof
of the Second Law is not absolute but statistical in nature (albeit with an astronomically high
probability of holding true)? How do these findings give time a direction?
3. Reflect. How did we get from Carnot’s, Kelvin’s, and Planck’s very practical investigation of
steam engines to philosophical conclusions about the direction of time? What follows directly,
and what is indirect, unrelated, or even metaphorical?
4. Change. Engineers aren’t known for producing deep philosophical insights, and yet these
grand ideas can certainly be traced back to engineers. Can we cultivate an appreciation for the
insights, and the questions around these deeper meanings, in engineering? What can you do
to find or make deeper meanings in your work as an engineering student?
Entropy is indeed an abstract concept, with implications that wax philosophical. It is not surprising, then, that engineers in particular would seek to return the concept to concrete and practical
considerations. We will see in the next module how some instructors and authors of thermo textbooks
seek to make entropy more relevant through various analogies to common life experiences.
3.4 MODULE 3.4. EVALUATING ENTROPY ANALOGIES
1. Engage. Thermodynamics textbooks and other sources seeking to make thermodynamics
relevant to everyday life will often use analogies to illustrate entropy. Some of these analogies
are metaphorical, not literally true, while others have been developed as “entropy” in their
own right, applied in other fields. For example, Çengel and Boles [7] discuss four different
analogies: entropy in learning, in libraries, in rooms, and in armies. The concept of entropy is used in military science [8], and in information theory the Shannon entropy represents missing information [9]. Neither of these is literally the same as thermodynamic entropy; each is an analogous concept, though there are strong theoretical connections between thermodynamic and information entropy that continue to be pursued [10]. (A minimal Shannon-entropy sketch appears after step 4 below.)
2. Analyze. Provide a critical discussion (as in critical thinking; you may defend or refute any part)
of an entropy analogy from your textbook or another source. Include both thermodynamic
and social considerations, discussing the following: Where do the analogies for entropy hold
(thermodynamically and socially), and where do they fail? What are the (thermodynamic and
social) implications of applying entropy in these ways, and of using these examples? What is the bias of the source; do they seem to have a position on entropy as good or bad, desirable or undesirable? Can you challenge their assumptions?

[Outcome icons: (a) SEM knowledge, (g) communication, (i) lifelong learning.]
[Learning-cycle sidebar – Engage: find entropy analogies used in your textbook or another source. Analyze: provide a critical discussion of the analogy. Where does it hold? Where does it fail, in both thermodynamic and social terms? Reflect: are these analogies helpful for your learning? Why or why not? Change: how else might you make entropy concrete for students?]

Figure 3.5: Messy rooms are a misleading metaphor for students learning about entropy. http://www.roommatesusa.com/wp-content/wpuploads/2010/12/messy-room.jpg.
3. Reflect. How do these analogies help you learn? How might they get in the way of learning?
4. Change. How else could you make entropy more concrete for students learning thermody-
namics? Supply either a new analogy that you think holds better, or devise a new way to help
students relate entropy to everyday life.
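For the information-theoretic analogy mentioned in step 1, here is a minimal sketch of the Shannon entropy, H = −Σ p_i log₂ p_i, which quantifies missing information in bits. It is analogous to, not identical with, thermodynamic entropy.

```python
import math

# Shannon entropy (in bits) of a discrete probability distribution.
def shannon_entropy(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: a fair coin flip is maximally uncertain
print(shannon_entropy([0.9, 0.1]))   # ~0.47 bits: a biased coin carries less missing information
```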
Entropy is not the only abstract entity thermodynamics students come across. Indeed, making
entropy useful in many engineering applications requires an understanding of the fundamental
property relations, mathematical relationships that help us derive expressions for thermodynamic
properties of interest in terms of quantities that are known or readily measured. Thus, the next
module revisits the “Thermo to Life” approach used in Module 2.5 finding relevant applications of
the thermodynamic relations.
3.5 MODULE 3.5. MAKING MATH RELEVANT:
THERMODYNAMIC RELATIONS IN CONTEXT
This module seeks to demonstrate the usefulness of the thermodynamic relations. The goal is to pick
a topic that you find relevant and interesting, where the thermodynamic relations can be applied.
This is somewhat more difficult than in the Thermo to Life exercise in Module 2.5 because of the
narrower applicability of the material considered here.
[Outcome icons: (a) SEM knowledge, (e) problems, (i) lifelong learning, (j) contemporary issues.]
1. Engage. Find and describe an application of the thermodynamic relations in everyday life.
You may find your own or choose one of these (a minimal sketch of option (a) appears after step 4 below):
a. Calculating entropy in terms of measurable quantities using the Maxwell relations.
b. Predicting partitioning behavior of pollutants in soil, air, and water using Gibbs Energy
and Fugacity [11].
c. How the Gibbs Energy is used to characterize Fuel Cell function [12].
d. Using the Gibbs Energy to describe fuel distillation or other separations processes (vapor-
liquid equilibrium).
e. Using Gibbs energy and chemical reaction equilibrium to understand fuel combustion.
f. Using Gibbs energy and chemical reaction equilibrium to understand environmental
controls on power plants.
g. Using Helmholtz energy to predict behavior of volcanic eruptions or other explosions.
[Learning-cycle sidebar – Engage: find and describe an application of the thermodynamic relations in everyday life; choose from the list of options or find your own. Analyze: show how thermodynamic property relations are used to analyze this system. Reflect: what does this analysis tell us about the system? What did we learn? What new questions emerged as a result? Change: what might we change to improve the system, or our understanding?]
Figure 3.6: Biodiesel processing equipment. http://www.extremebiodiesel.com/photos/articles/full-Processor.jpg.
2. Analyze. How is thermodynamic theory used to characterize your system? Which thermo-
dynamic properties are relevant? On which measurable properties do these depend? Find or
create an illustrative example that demonstrates the usefulness of the thermodynamic prop-
erties in characterizing your system. Thoroughly explain the thermodynamics behind how it
works. Make reasonable assumptions where necessary. Be as realistic as possible, but make
simplifications if needed.
3. Reflect. Think (do not write or type, just think) for 15 minutes about what you have learned from your engagement and analysis so far. What questions emerge for you? Write a short
reflection on what you learned, and what you would like to explore further. Identify possible
avenues of change or further exploration.
4. Change. Did your example provide a satisfactory case study explaining how thermodynamic
theory can be useful in everyday life? If not, what would you explore further to establish a
better link?
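As a minimal sketch of option (a) from step 1, consider using the Maxwell relation (∂S/∂P)_T = −(∂V/∂T)_P to express an entropy change in measurable quantities. For an ideal gas, V = RT/P per mole, so an isothermal pressure change gives ΔS = −R ln(P₂/P₁). The states below are illustrative assumptions.

```python
import math

R = 8.314    # universal gas constant, J/(mol K)
P1 = 1.0e5   # initial pressure, Pa (1 bar), assumed
P2 = 5.0e5   # final pressure, Pa (5 bar), assumed

# Per mole of ideal gas at constant temperature; negative because compression lowers entropy.
delta_S = -R * math.log(P2 / P1)
print(f"delta S = {delta_S:.1f} J/(mol K)")
```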
REFERENCES
[1] Park, R. (2000). Voodoo Science: The Road from Foolishness to Fraud. New York: Oxford University
Press. Cited on page(s) 54
[2] Von Baeyer, H.C. (1999). Warmth Disperses and Time Passes: The History of Heat. New York:
Modern Library. Cited on page(s) 55, 56, 61
[3] The idea of the “thematic content of science” is attributed to Holton, G. (1973). Thematic
Origins of Scientific Thought. Cambridge, MA: Harvard University Press, p. 47. Cited in [2,
p. 56]. Cited on page(s) 55
[4] Einstein, A. (1972). Letter to Michele Angelo Besso’s son after his father’s death, 1955. In P.
Speziali, ed., Albert Einstein–Michele Besso Correspondence. Paris: Hermann, p. 538–9. Cited on
page(s)
[5] Vonnegut, K. [1969] (1991). Slaughterhouse-five, or, The children’s crusade, a duty-dance with
death. New York: Dell, p. 27. Cited on page(s)
[6] Eddington, A. (1929). The Nature of the Physical World. New York: MacMillan, p. 68ff. Cited
on page(s) 56
[7] Çengel and Boles. (2008). Thermodynamics: An engineering approach. 6th ed. New York:
McGraw-Hill. Cited on page(s) 58
[8] Herman, M. (1998–9) Entropy-Based Warfare: Modeling the Revolution in Military Affairs.
Joint Force Quarterly (JFQ) No. 20: 85–90. Accessed June 8, 2011 from http://www.au.af.
mil/au/awc/awcgate/jfq/1620.pdf. Cited on page(s) 58
[9] Shannon, C.E. (1948). A Mathematical Theory of Communication. Bell System Technical Jour-
nal, 27:379–423, 623–656. DOI: 10.1145/584091.584093 Cited on page(s) 58
[10] Von Baeyer [2] covers these connections. See also Maroney, O. (2009). Information Process-
ing and Thermodynamic Entropy. Stanford Encyclopedia of Philosophy, E.N. Zalta, ed. Stan-
ford, CA: Metaphysics Research Lab, Center for the Study of Language and Information,
Stanford University. Accessed June 8, 2011 from http://plato.stanford.edu/entries/
information-entropy/. Cited on page(s) 58
[11] Mackay, D. (1979). Finding Fugacity Feasible. Environmental Science and Technology, 13(10): 1218–1223. DOI: 10.1021/es60158a003. See also Mackay, D. (2004). Finding Fugacity Feasible, Fruitful, and Fun. Environmental Toxicology and Chemistry, 23(10): 2282–2289. DOI: 10.1897/03-465. Cited on page(s) 59
[12] Smith, J.M., Van Ness, H.C., and Abbott, M.M. (2001) Introduction to Chemical Engineering
Thermodynamics. 6th ed. New York: McGraw Hill. Cited on page(s) 59
CHAPTER 4
Thinking Big Picture about
Energy and Sustainability
The goal of the modules in this chapter is to create opportunities to think about complex, real-
world issues in energy and sustainability. While the list of topics explored here is by no means
comprehensive, each module is designed to help you learn how to consider technical and social
contexts, engineering ethics, community needs, and public policy simultaneously.
What should the United States do to curb its greenhouse gas emissions in order to mitigate
climate change? Module 4.1 challenges you to develop and test out a concrete plan to achieve
meaningful reductions. How does one choose a technology for a particular community or application
in power generation or transportation? In Module 4.2 you will first define and refine selection criteria,
then apply these to cases in the power generation and transportation sectors. Module 4.3 examines
sustainability criteria, asking you to evaluate the “green-ness” of three scenarios: nuclear power
generation, corn-based ethanol as a transportation fuel for the United States, and the transportation
of western, low-sulfur coal to eastern power plants in the US. Module 4.4 takes up how consumers
use energy in their homes, for cooking, refrigeration, and water purification. Finally, Module 4.5
asks how we understand large-scale disasters that have come to be common occurrences in our quest
for energy, and how we can work toward their prevention. All of these questions require keeping the big picture in
mind, even as detailed analyses are brought to bear on these topics.
Module 4.1: Climate Action.
Module 4.2: Selection Criteria for Energy Technologies.
Module 4.3: Is it Green?
Module 4.4: Home Energy Uses.
Module 4.5: Ethics of Energy Disasters.
4.1 MODULE 4.1. CLIMATE ACTION
This module challenges you to move between a “big picture” contextual perspective and the focused,
sometimes narrow world of engineering thought.
[Outcome icons: (e) problems, (f) ethics, (j) contemporary issues.]
[Learning-cycle sidebar – Engage: identify a set of significant actions that can be taken to reduce US greenhouse gas emissions by 1000 Tg CO2-eq/year. Analyze: justify your reductions strategy in quantitative, qualitative, and ethical/moral terms. Reflect: what are the limits of individual behavior strategies such as green consumerism on GHG reductions? Change: take some action toward making these reductions happen; document and reflect on the impact of your action.]
Figure 4.1: From Derrick Jensen and Stephanie MacMillan’s graphic novel As the World Burns: 50 Simple
Things You Can Do to Stay in Denial [1]. Used with permission.
Learning to move between these frames is essential in forming sound engineering judgment. This
assignment also challenges you to move between theory and action, between your life as a student
and your life as a citizen of the planet. Integrating theory and action is the essence of engineering;
engagement reminds us that the distinction we sometimes make between “College” and “The Real World,” between an academic subject like “thermo” and what we more generally refer to as our “life,” is a false one.
1. Engage. Identify a set of significant actions that can be taken to reduce US greenhouse gas
emissions [2]. Significant in this case means it must have the potential to reduce greenhouse
gas emissions to 1990 levels, when the atmospheric global carbon dioxide concentration was
354 ppm. This is a significant reduction, but is also far from sufficient when one considers
that global increases in CO2 emissions from fossil fuel combustion between 1990 and 2008
have been much higher, around 40%, compared to the US’s 15% [3]. Despite these emissions
increases abroad, the US remains a grossly disproportionate emitter of CO2, putting out 19%
of global CO2 emissions from human activity (excluding deforestation) while comprising only
4.6% of the world population [3]. On this basis one could argue that US reductions need to
be much deeper in order to be equitable and to allow developing economies to grow.
2. Analyze. Justify your choice by explaining what impact you expect your actions to have and
put them in perspective. You must do this quantitatively, qualitatively, and in terms of ethical
or moral argument.
Quantitatively, estimate the total CO2 equivalent reductions your actions would bring
about on an annual basis (see [2] for a definition of CO2 equivalents). The goal is to
eliminate enough Tg of CO2 equivalent emissions per year to return the US to 1990
emissions levels. Keep track of uncertainty in your assumptions and present your estimated reductions with a sensitivity analysis (carry through +/- ranges derived from a critical assessment of the assumptions used in your estimates). Your sensitivity analysis should capture the range of emissions reductions that can reasonably be expected, given the uncertainty in your assumptions. (A minimal sketch of such an estimate appears after step 5 below.)
Qualitatively, you need to describe why your proposed action is feasible in the time
allotted, and why you expect it to be effective in the long-term toward bringing about
the reductions targeted.
Make an ethics-based argument for why your proposed action is necessary or justified,
referencing multiple ethical frameworks (e.g., utilitarian, deontological, social justice,
morally deep world, etc. – see [4]–[7] for more on ethics frameworks and how to apply
them).
3. Reflect. How much can individual personal actions, such as using energy-efficient light bulbs, impact climate change? How likely are individuals to comply with behavioral strategies? What
adjustments would you make to your calculations to make sure they are realistic? What kinds
of collective actions that target structural and infrastructural issues might be more likely to
bring about significant change? What are the barriers to individuals acting collectively for
change?
4. Change. Take some action that demonstrates the effectiveness of your proposed reduction
strategy or that works toward actually making these reductions happen. For example, you
might implement one strategy on a small scale, which, if implemented widely, would result
in the reductions claimed. Or you may work to bring about larger structural change through
collective action – for example, working to pass national legislation. Document your actions
and their short-term effects on yourself, your local community, and larger society.
5. Reflect. What are your accomplishments so far? Describe results quantitatively, qualitatively
and in ethical or moral terms. What impact have your actions had globally, locally, and within
yourself? What feedback have you received, and what new knowledge have you acquired as
a result of your actions? How will you adjust your actions going forward, as a result of what
you learned? What opportunities for transformation lie ahead? What do you wish you had
done differently? What future actions do you recommend or commit to do next? How did this
exercise change you? What have you learned? How did this project connect to your learning
thermodynamics? How will you use what you’ve learned in your future as a student? As a
professional? As a citizen of the world?
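As a minimal sketch of the quantitative estimate and sensitivity analysis asked for in step 2, consider a single hypothetical measure: replacing incandescent bulbs with more efficient lamps nationwide. Every number below is an assumption to be replaced with values you can defend.

```python
# Hypothetical lighting-efficiency measure with low/high assumptions carried through.
bulbs_replaced = 1.0e9       # assumed number of bulbs swapped out
hours_per_year = 1000.0      # assumed annual hours of use per bulb
watts_saved = (30.0, 50.0)   # assumed power saved per bulb, low/high (W)
kgCO2_per_kWh = (0.4, 0.7)   # assumed grid emission factor, low/high (kg CO2 per kWh)

def reduction_Tg(w_saved, emission_factor):
    kWh_saved = bulbs_replaced * w_saved * hours_per_year / 1000.0
    return kWh_saved * emission_factor / 1.0e9   # kg CO2 -> Tg (1 Tg = 1e9 kg)

low = reduction_Tg(watts_saved[0], kgCO2_per_kWh[0])
high = reduction_Tg(watts_saved[1], kgCO2_per_kWh[1])
print(f"estimated reduction: {low:.0f} to {high:.0f} Tg CO2-eq per year")
```

Under these assumptions the measure yields a few tens of Tg CO2-eq per year, far short of the roughly 1000 Tg per year target named in the module sidebar – which is part of the point of the Reflect step about the limits of individual behavioral strategies.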
All climate strategies must take up the question of which energy technologies ought to be
implemented to best support greenhouse gas reduction plans. The answers will be different depending
on intended applications in specific communities. In the next module you will generate a set of
criteria for energy technology selection, considering not only greenhouse gas emissions, but also
other environmental, economic, social, and political considerations, and apply the criteria to uses in
transportation and power generation.
4.2 MODULE 4.2. SELECTION CRITERIA FOR ENERGY
TECHNOLOGIES
How does one determine the appropriate energy technology for a given application? Here you will
generate a set of criteria to use in approaching this problem, and apply the criteria in transportation
and power generation.
4.2.1 EXPLORATION 1: DEVELOPING SELECTION CRITERIA
1. Engage. Brainstorm a set of criteria you think society should use in making choices about
an energy technology. Some possible criteria are presented in Table 4.1, and details on how
different energy sources address some of these criteria can be found in energy studies texts,
e.g. [8]. Are there any criteria you would add or take away? How is each criterion defined?
Conduct background research to develop a working knowledge and critical understanding of
what each criterion means in an energy context. Are terms different for different technologies,
such that you can’t compare them directly? For example, efficiency means different things
for different energy sources (see Module 3.1). Should it be included, and if so, how can it
be used as a basis of comparison? A category like sustainability might have multiple criteria
within it – air toxics, water pollution, greenhouse gas emissions – that cannot be compared
directly. How sustainability is defined may itself be contested. What human impacts deserve
consideration? Job loss or gain, displacement of people, quality of life, environmental racism or
classism in the siting of energy resource extraction, production, or waste facilities? How do energy systems require certain types of control and certain social structures to maintain them? See Langdon Winner's classic article “Do Artifacts Have Politics?” [9] http://zaphod.mindlab.umd.edu/docSeminar/pdfs/Winner.pdf.
2. Analyze. Devise a strategy for determining whether a given criterion has been met. Is the goal
to meet a set standard (if so, what is the standard?), or to maximize or minimize that quality? Is
anything a go/no-go criterion where something must be met at a given level, or the technology should be rejected? Which criteria, if any, can be traded off against one another? Develop a process for deciding about a technology – be careful of tools such as weighted objectives trees [10] or cost-benefit analysis [11] that might not capture all considerations. Do a test run comparing several energy technologies – what comes out on top, and why? (A minimal weighted-scoring sketch appears after step 4 below.)

[Outcome icons: (e) problems, (f) ethics, (h) context, (j) contemporary issues.]
[Learning-cycle sidebar – Engage: brainstorm the considerations involved in choosing an energy technology. Analyze: how is each criterion defined and met? Devise a system for decision-making and test it with a few example energy technologies. Reflect: what difference does it make how the criteria are framed and evaluated against each other? What are the ethical considerations? Change: based on what you learned in your reflection, revise your decision-making plan to address ethical considerations.]

Figure 4.2: Generators at Hoover Dam. Jon Sullivan, PDphoto.org. Public domain. http://commons.wikimedia.org/wiki/File:Hoover_Dam%27s_generators2.jpg.
3. Reflect. How do different methods of decision-making, or different definitions of criteria,
affect the choice that ends up on top? What ethical considerations come into play in setting
up the rules of decisions? How do we currently make decisions about energy technologies?
How should we? Who should devise criteria or decision-making plans, and who should apply
them?
4. Change. Based on what you learned in your reflection, revise your decision-making plan to
be more responsive to the ethical considerations you discussed.
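Here is a minimal sketch of the weighted-objectives style of comparison mentioned in step 2 (see [10]). The criteria, weights, and scores are hypothetical placeholders, and the cautions above about what such tools leave out still apply.

```python
# Weighted-criteria scoring of two example technologies (all values illustrative).
weights = {"cost": 0.3, "efficiency": 0.2, "sustainability": 0.3, "reliability": 0.2}

scores = {   # 0-10 scores against each criterion, hypothetical
    "wind":       {"cost": 7, "efficiency": 5, "sustainability": 9, "reliability": 5},
    "coal-fired": {"cost": 6, "efficiency": 6, "sustainability": 2, "reliability": 8},
}

for tech, s in scores.items():
    total = sum(weights[c] * s[c] for c in weights)
    print(f"{tech}: weighted score = {total:.1f} / 10")
```

Notice how easily the ranking flips if the weights change – one reason the reflection questions ask who gets to set the weights.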
4.2.2 EXPLORATION 2: EVALUATING AND SELECTING POWER
GENERATION TECHNOLOGIES
1. Engage. Select an energy technology to evaluate (this could be a real technology in use or an
ideal cycle you are studying – see Table 4.2 for ideas). You may want to have each person in
the class choose a different one and compare results. For some technologies, such as coal-fired power plants, there are many different designs with different characteristics, so you will need to be specific about the type of plant and, in some cases, the type of fuel as well. If you are doing the assignment individually, you might choose more than one technology or plant design so you can compare them. Research how the technology works and other information relevant for your evaluation, and write a brief description.

Table 4.1: Some Suggested Criteria for Energy Choices.
Cost
Efficiency
Scale
Sustainability
Reliability
Safety
Rate
Human impacts
Social structures required

[Learning-cycle sidebar – Engage: select an energy technology to evaluate and gather basic information about it. Analyze: evaluate the technology according to specified criteria. Reflect: what was difficult about the evaluations? What was surprising? What did you learn? Change: what would you change about your course or textbook, given what you learned?]

Figure 4.3: The Brazos Wind Farm, also known as the Green Mountain Energy Wind Farm, near Fluvanna, Texas. Public domain. http://upload.wikimedia.org/wikipedia/commons/8/8b/GreenMountainWindFarm_Fluvanna_2004.jpg.
2. Analyze. Evaluate each technology using the criteria you developed. Try to take uncertainty
into account by working with reasonable ranges of values where appropriate.
3. Reflect. What was most difficult about conducting the evaluations? What was most surpris-
ing? How do the social and technical merge, interrelate and overlap in these considerations,
becoming the socio-technical? What did you learn?
4. Change. What would you change about your course or textbook to incorporate this material?
Where does it fit? How would you teach it?
Table 4.2: Possible Technologies to Consider for Evaluation and Selection.
Carnot Cycle
Ideal Rankine Cycle
Rankine Cycle with Superheat
Rankine Cycle with Reheat
Rankine Cycle with Regeneration
Rankine Cycle with Cogeneration
Brayton Cycle
Integrated Gasification/Combined Cycle
Diesel Cycle
Stirling Cycle
Fuel Cells (hydrogen sourced from methanol, gasoline, water…)
Wind Energy
Solar Energy
Hydro (micro and macro)
Conservation (including decreased demand, retrofits to increase efficiency, etc.)
Fuel choices: natural gas, nuclear (uranium, plutonium), coal (varying compositions), geothermal, biofuels (wood, algae, ethanol), etc.
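To make the range idea in step 2 concrete, here is a minimal, purely illustrative scoring sketch in Python. The criteria, weights, and low/high scores are hypothetical placeholders, not values from this module; substitute the criteria you developed yourself.

# Illustrative weighted scoring of energy technologies with low/high ranges.
# All weights and scores below are made-up placeholders, not data from this book.

criteria = {            # criterion: weight (weights sum to 1.0)
    "cost": 0.3,
    "emissions": 0.3,
    "reliability": 0.2,
    "social acceptability": 0.2,
}

# Scores on a 1-10 scale, given as (low, high) pairs to capture uncertainty.
alternatives = {
    "Rankine cycle with coal": {"cost": (6, 8), "emissions": (2, 3),
                                "reliability": (8, 9), "social acceptability": (3, 5)},
    "Wind energy":             {"cost": (4, 7), "emissions": (8, 9),
                                "reliability": (4, 6), "social acceptability": (6, 8)},
}

for name, scores in alternatives.items():
    low  = sum(criteria[c] * scores[c][0] for c in criteria)
    high = sum(criteria[c] * scores[c][1] for c in criteria)
    print(f"{name}: weighted score {low:.1f} to {high:.1f}")

When the resulting score ranges of two alternatives overlap heavily, the numbers alone cannot settle the choice, and the socio-technical questions raised in step 3 carry the decision.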
4.2.3 EXPLORATION 3: EVALUATING AND SELECTING
TRANSPORTATION TECHNOLOGIES
1. Engage. Select a transportation technology to evaluate. Use Table 4.3; though not exhaustive
by any means, it provides a place to start. Begin with choosing an intended application (pas-
senger or freight, start and end points?). Then mix and match choices of vehicle types, power
source, and thermodynamic cycle as appropriate. There are many choices and configurations;
be specific, and be careful, as some choices aren’t appropriate for all combinations. Research
how the technology works and other information relevant for your evaluation and write a brief
description.
2. Analyze. Evaluate each technology using the criteria you developed. Try to take uncertainty
into account by working with reasonable ranges of values where appropriate.
[Module overview. Engage: Select a transportation technology to evaluate and gather basic information about it. Analyze: Evaluate the technology according to specified criteria. Reflect: What was difficult about the evaluations? What was surprising? What did you learn? Change: What would you change about transportation infrastructure or standards?]
Figure 4.4: Traffic congestion, Brasilia, Brazil. Photo by Mario Roberto Duran Ortiz (Mariordo). Used
with GNU Free documentation license http://upload.wikimedia.org/wikipedia/commons/e/
ec/Traffic_Congestion_Brasilia.jpg.
3. Reflect. What was most difficult about conducting the evaluations? What was most surprising?
What was different about these considerations compared with power generation? How do you
feel being put in the position of decision-maker here? Who should decide? Government?
Technocrats? Consumers? Citizens?
4. Change. What would you change about transportation infrastructure or standards (e.g., CAFE
standards or limitations on shipping emissions) based on what you learned? Engage in public
advocacy of your position by connecting with a group that represents your interests, writing a
public official or media outlet.
The explorations in this module have made clear some of the complexities involved in evalu-
ating and selecting particular technologies for specific settings. The next module considers in greater
depth how one evaluates the environmental performance, or “green-ness” of three different energy
technologies.
Table 4.3: A Wide Array of Options for Transportation Technologies. Choose a Mode of
Travel and Vehicle Type, Fuel, and Cycle.
Mode and vehicle type:
Road: Cars, Tractor-trailers, Buses, Bicycles
Water: Rowboats, Motorboats, Cargo ships, Sailboats, Cruise ships
Air: Passenger planes, Cargo planes
Rail: Passenger trains, Freight trains
Fuel/Power type: Diesel (incl. 5-100% biodiesel); Gasoline (incl. reformulations, oxygenates, up to 100% ethanol); Jet Fuel (conventional, biofuel); Electric (multiple sources); Electric Hybrid; Steam (multiple sources); Hydrogen Fuel Cell (multiple sources); Wind; Solar; Human Power; Natural Gas (fossil fuel or biodigested?); Biomass (wood, dung, grass, etc.)
Cycles: Carnot, Otto, Diesel, Brayton, Stirling, Rankine; 2- or 4-stroke?
4.3 MODULE 4.3. IS IT GREEN?
[Margin icons: (e) problems, (g) communication, (h) context, (i) lifelong learning, (j) contemporary issues]
This module explores three instances where energy activities are labeled green, but upon closer
examination, their claim to sustainability is perhaps more limited than previously assumed. Is nuclear
power a green alternative to carbon-based power generation? Under what circumstances can ethanol
be considered a sustainable fuel choice? When eastern power plants seek out low-sulfur coal from the
western United States to control air pollution and acid rain, what environmental costs are introduced
in transportation? In each case you are challenged to think more deeply about what sustainability
might mean.
4.3.1 EXPLORATION 1: NUCLEAR POWER AS A GREEN ALTERNATIVE?
1. Engage. Patrick Moore [12] made waves when he published a “green” argument for nuclear
power in 2006: http://www.washingtonpost.com/wp-dyn/content/article/2006/
04/14/AR2006041401209.html.
[Module overview. Engage: Read widely on the debate over nuclear power as a green alternative. What are the arguments? Analyze: Evaluate the best arguments on both sides and make an evaluative judgment whether nuclear power is green. Reflect: How did you make your decision? What criteria did you use in evaluating sources and their content? Change: Advocate for your position in the public sphere. Join an action group, attend a rally, or write a public official.]
Figure 4.5: Have a nice day. Is nuclear as green as it looks? http://www.ecosprinter.eu/wp-
content/uploads/2010/10/nuclear.jpg.
Greenpeace [13] disputed Moore’s claims of affiliation with their organization and pointed
out his ties to the nuclear industry: http://www.greenpeace.org/usa/en/campaigns/
nuclear/patric-moore-background-inform/. While some environmentalists see nu-
clear power as an important part of addressing climate change, other environmentalists [14]
take issue with the substance of Moor’s argument, pointing out the ways in which nuclear
power is not green at all: http://www.counterpunch.org/montague11032008.html.
Read these articles and search for additional material on the debate over whether nuclear
energy is green.
2. Analyze. Evaluate the best arguments on both sides and make an evaluative judgment about
whether nuclear power is green. In what ways is it green and in what ways is it not green?
Be sure to include a range of conceptions of sustainability. Think holistically about the entire
process from mining to waste disposal, and consider the environmental impacts of nuclear
accidents. How do other “green” technologies such as wind, solar, and conservation/efficiency
improvements compare?
3. Reflect. How did you make your decision? What criteria did you use in evaluating sources and
their content? Are there considerations beyond the “green” aspect of nuclear power that affect
its desirability? What are they? For example, does the fact that nuclear fuel presents a potential
security risk raise issues of sustainability? Does the scale of a technology, or the amount of
social control required to implement it, affect sustainability? Does its impact on marginalized
communities, such as uranium mining on indigenous lands [15], affect sustainability?
4. Change. Advocate for your position in the public sphere. Join an action group, attend a rally
or other event, or write a public official to express your views.
4.3.2 EXPLORATION 2: ETHANOL
[Module overview. Engage: Read debates on ethanol in Science. What are the key issues? Analyze: Write a dialogue among the paper authors to illustrate the key issues and authors' main points. Reflect: What did you learn about reading the scientific literature to understand current topics in energy? Change: How will you take your independent learning strategies forward?]
Figure 4.6: Corn is the primary source of ethanol in the US. But is it a green choice? http://blogs.
princeton.edu/chm333/f2006/biomass/ethanol%20corn%20gas.jpg.
Is ethanol a green fuel? This is a surprisingly complex question to answer. It depends on what
biosource is used to produce it, how it is produced, and perhaps most important, what is taken into
account in the analysis, reflecting different definitions of what “green” means. In some cases, ethanol
can provide a net benefit, and in other cases, a net loss for the environment.
How can you, as a student, or as a technically educated person who may not be an expert in
this particular area, sort through literally hundreds of studies on this topic and come to an informed
conclusion? This exercise guides you through one approach.
1. Engage. One useful place to go to learn about a topic with this level of policy relevance is the
journal Science, which includes sections with readable introductions to technical issues in their
public policy context. While the development of these debates over many years is itself quite
interesting, here we will consider only a few of the most recent discussions of this issue in the
journal.
Read the following three articles:
Scharlemann and Laurance, How Green are Biofuels? [16] DOI: 10.1126/science.1153103.
Robertson et al., Sustainable Biofuels Redux [17] DOI: 10.1126/science.1161525.
Tilman et al., Beneficial Biofuels: The Food, Energy, and Environment Trilemma [18]
DOI: 10.1126/science.1177970.
(Note: Some DOI links may be publicly available on the Internet, or you may need to authen-
ticate through your campus libraries to access Science online.)
2. Analyze. Write a script that creates a dialogue (tri-alogue?) among the three articles to summa-
rize their key positions and reflect the issues in the ethanol debate. Under what circumstances,
if any, can ethanol be considered a sustainable fuel choice, and why?
3. Reflect. What did you learn from this exercise about using peer-reviewed articles to help you
understand a current topic in energy?
4. Change. How can you take this independent learning strategy forward in other areas that
spark your curiosity?
4.3.3 EXPLORATION 3: COAL TRAIN [19]
1. Engage. Read the essay “Coal Train” in John McPhee’s book Uncommon Carriers [20]. Why
did the Clean Air Act of 1970 reinvigorate the railroading industry? Why don’t all eastern
power plants use higher-Btu coal from nearby West Virginia and Pennsylvania as opposed to
Wyoming?
2. Analyze. Perform a back-of-the-envelope calculation based on information in McPhee's chapter
(a worked sketch appears after step 4 below). If a coal train weighs 3,000 tons empty and 19,000
tons when loaded with low-sulfur coal from the Powder River Basin, and travels 1800 miles from
Wyoming to Georgia:
a. How much energy does it take to haul the coal this distance? Make simplifying assump-
tions for an initial estimate.
b. How much energy is in the coal being hauled?
3. Reflect. Under what conditions is it the “right” thing to do to move coal across the country?
What criteria are you using to determine whether it is “right” or not? What other criteria that
you haven’t considered might change the outcome of your evaluation?
4. Change. How does what you’ve learned here change your views on energy, if at all? How (if
at all) does it change your consumption of energy?
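As noted in step 2, here is one possible back-of-the-envelope sketch in Python. The train masses and distance are taken from the exploration; the rolling-resistance coefficient, locomotive efficiency, and coal heating value are assumed round numbers for illustration, not figures from McPhee's essay.

# Rough energy estimate for hauling Powder River Basin coal about 1800 miles.
# Assumed values (not from the book): rolling resistance, efficiency, heating value.

TON = 907.0                           # kg per short ton
loaded_mass = 19_000 * TON            # loaded train mass, kg (from the exploration)
coal_mass   = (19_000 - 3_000) * TON  # coal payload, kg
distance    = 1800 * 1609.0           # miles -> m

c_rr = 0.002     # assumed rolling-resistance coefficient, steel wheel on steel rail
eta  = 0.35      # assumed diesel-electric locomotive efficiency
hv   = 20e6      # assumed heating value of PRB coal, J/kg (~8600 Btu/lb)

work_at_wheels = c_rr * loaded_mass * 9.81 * distance   # J, flat-ground estimate
fuel_energy    = work_at_wheels / eta                    # J of diesel fuel burned
coal_energy    = coal_mass * hv                          # J of chemical energy hauled

print(f"Work at the wheels : {work_at_wheels/1e12:.1f} TJ")
print(f"Diesel fuel energy : {fuel_energy/1e12:.1f} TJ")
print(f"Energy in the coal : {coal_energy/1e12:.0f} TJ")
print(f"Hauling uses roughly {100*fuel_energy/coal_energy:.0f}% of the coal's energy")

Under these assumptions the haul consumes on the order of one percent of the chemical energy in the coal; grades, stops, and the empty return trip would push the figure somewhat higher, which is worth folding into the reflection in step 3.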
[Module overview. Engage: Read John McPhee's chapter on the Coal Train in Uncommon Carriers [20]. Analyze: Estimate the energy required to move a trainful of coal from Wyoming to Georgia, and how much energy that coal provides. Reflect: Under what conditions is it "right" to move coal across the country? What criteria influence this evaluation? Change: How does this change your views on energy, if at all? How does it change your consumption?]
Figure 4.7: Union Pacific coal train with two locomotives (at the end) in Converse County close to
Douglas, Wyoming USA. July 20, 2010. Photo by Wusel007. Used with permission under GNU Free
Documentation License version 1.2 from http://commons.wikimedia.org/wiki/File:Union_
Pacific_Coal_Train_Douglas_WY.JPG.
4.4 MODULE 4.4. HOME ENERGY USES
So far in this chapter we have primarily considered industrial and commercial uses of energy in
electric power generation and transportation.
[Margin icons: (c) design, (h) context]
This module turns our focus to applications in the home. It is interesting to note that engineering has
traditionally focused on large-scale industrial and commercial applications, preferring these settings
over the home environment. Historically, areas traditionally considered “women’s sphere” such as
the home, or “women’s work” such as cooking or cleaning, were excluded from the engineering field
altogether, categorized instead as “home economics”[21]. Alice Pawley [22] has pointed out that
the field of engineering therefore looks something like Swiss cheese – certain areas that ought to
be considered engineering are excluded, leaving holes. This module takes up three explorations
related to home energy uses: solar energy in cooking, three alternatives for refrigeration, and a
Stirling-powered electrical generator combined with water filtration. All three explorations have
some potential and some limitations for sustainability, as well as applications in developed and
developing nation settings.
4.4.1 EXPLORATION 1: SOLAR COOKER
[Module overview. Engage: Learn about designs of solar cookers. How do they work? What are the different types of designs available? Analyze: Design and build a solar cooker from scavenged objects. Reflect: Demonstrate your design. What worked well? What do you wish worked better? How does it fit or not fit with your culture's cuisine and lifestyle? Change: What would you change to improve your solar cooker for next time?]
Figure 4.8: Solar cooker or solar barbecue Alsol 1.4 made in Spain: more information on http://
www.solarcookingatlas.com. Public domain. http://upload.wikimedia.org/wikipedia/
commons/e/ed/ALSOL.jpg.
1. Engage. Learn about designs of solar cookers. How do they work? What are the different
types of designs available? What principles of heat transfer apply to their functioning?
2. Analyze. Design and build a solar cooker from scavenged or borrowed objects and tools
that can bake a cookie (recommended temperature: 350 °F, 160 °C; minimum temperature:
100 °C). Make sure the cooker is of adequate size, easy to use and convenient, with low startup
and cooking times. It should be durable and stable during operation. It should be aesthetically
pleasing and achieve the highest quality construction possible, subject to the design constraints.
Consider the ability of the cooker to collect sunlight at different times of day. You will need a
method or device to help you determine whether the oven is aimed directly at the sun (do not
look directly at the sun!). A rough estimate of how hot such a cooker can get appears after step 4 below.
3. Reflect. Demonstrate your design. What worked well? What do you wish worked better?
How does it fit or not fit with your culture’s cuisine? With your physical setting? With your
lifestyle? (Or you could choose an application setting different from your own for this same
evaluation.) Compare solar cookers to biomass stoves, natural gas stoves, or electric stoves,
considering criteria from Module 4.2.
4. Change. What would you change to improve the design of your cooker in terms of its technical
performance and/or its suitability for use in your culture?
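The estimate promised in step 2 is sketched below as a simple steady-state energy balance in Python. Every number in it (irradiance, optical efficiency, loss coefficient, ambient temperature) is an assumed, typical order-of-magnitude value, not a specification from this book.

# Rough stagnation-temperature estimate for a box-type solar cooker.
# At equilibrium, absorbed solar power = heat lost to the surroundings:
#   eta0 * G * A = U_L * A * (T_stag - T_amb)   (the aperture area A cancels)
# All parameter values below are illustrative assumptions.

G     = 900.0   # solar irradiance on the aperture, W/m^2 (clear midday, assumed)
eta0  = 0.7     # optical efficiency: glazing transmittance x plate absorptance (assumed)
U_L   = 5.0     # overall loss coefficient per unit aperture area, W/(m^2*K) (assumed)
T_amb = 25.0    # ambient temperature, deg C (assumed)

T_stag = T_amb + eta0 * G / U_L
print(f"Estimated stagnation temperature: {T_stag:.0f} deg C "
      f"({T_stag*9/5 + 32:.0f} deg F)")

Under these assumptions a tight, well-aimed box cooker can plausibly reach the 100-160 °C range targeted in step 2, while a leaky one with a higher loss coefficient will fall short; that is where the design effort matters.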
4.4.2 EXPLORATION 2: REFRIGERATION
[Module overview. Engage: Learn about evaporative coolers, vapor-compression refrigeration, and absorption refrigeration. Analyze: Estimate design requirements for a single family using each refrigeration technology. Reflect: Should the analysis you did be considered engineering? How did your systems compare? Why does the US have mostly vapor-compression? Change: What else do you need to know to complete a refrigeration design for a particular application?]
Figure 4.9: Old Refrigerator, Restaurant, Mandeville, LA. Photo by Infrogmation of New Orleans.
(Multi-license with GFDL and Creative Commons CC-BY 3.0) http://commons.wikimedia.org/
wiki/File:Mandeville_Maxens_refrigerator.JPG.
1. Engage. Learn about evaporative coolers (see Module 2.3), vapor-compression refrigerators,
and absorption refrigerators. How does each work? What are they used for, and why are these
uses important? What materials and energy are required for each? What are their typical
operating temperatures?
2. Analyze. Select an application for each type of refrigeration in different home settings. Provide
a back-of-the-envelope estimate of the design requirements for a single-family application in
each setting. Give the dimensions of the refrigerated space, the amount of food that can be cooled,
the approximate temperature of the food, and the energy requirements. Make reasonable and
justified assumptions. (An illustrative sketch appears after step 4 below.)
3. Reflect. Should the analysis you just did be considered engineering? Why or why not? How
did the energy requirements compare for your different systems? Why do you think we have
mostly vapor-compression systems in the United States and not other technologies? (See Ruth
Schwartz Cowa’s history [23] at http://epl.scu.edu/∼stsvalues/readings/cowan2.
pdf for an answer.) What recommendations would you make for home refrigeration in different
settings?
4. Change. What (else) would you need to know in order to complete a full design for a particular
application?
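For step 2, a minimal sketch of the kind of estimate intended is given below in Python. The food load, cabinet area, insulation value, and COP are assumed placeholder values for a generic single-family vapor-compression refrigerator, not figures from this module.

# Back-of-the-envelope daily energy estimate for a household refrigerator.
# Every input below is an assumed, order-of-magnitude placeholder.

cp_food = 3.9e3       # J/(kg*K), typical specific heat of mixed food (assumed)
m_food  = 20.0        # kg of warm food added per day (assumed)
dT_food = 25.0 - 4.0  # K, cool food from room temperature to 4 deg C

U       = 0.3         # W/(m^2*K), insulated cabinet wall (assumed)
A       = 3.5         # m^2 of cabinet surface area (assumed)
dT_cab  = 22.0 - 4.0  # K, kitchen minus cabinet temperature
seconds = 24 * 3600

COP     = 2.5         # assumed coefficient of performance, vapor compression

Q_food  = m_food * cp_food * dT_food   # J/day to cool the food
Q_leak  = U * A * dT_cab * seconds     # J/day of heat leaking through the walls
Q_total = Q_food + Q_leak              # thermal load per day
W_elec  = Q_total / COP                # electrical input per day

print(f"Thermal load : {Q_total/1e6:.1f} MJ/day")
print(f"Electricity  : {W_elec/3.6e6:.2f} kWh/day (ideal lower bound)")

Real refrigerators draw several times this ideal lower bound once door openings, defrost cycles, and compressor losses are included; the same kind of estimate can be repeated for an evaporative cooler or an absorption unit to compare the three technologies.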
4.4.3 EXPLORATION 3: DEAN KAMEN’S STIRLING ENGINE
[Module overview. Engage: Learn about Dean Kamen's Stirling Engine that simultaneously creates household electricity and purifies water. Analyze: Evaluate the inventor's claim about the performance of the engine. Reflect: Is Kamen's technical claim reasonable? What other factors are important for implementation in the developing world? Change: How would you change the project? How has the project changed since the article was written?]
Figure 4.10: Dean Kamen’s Stirling Engine produces both electricity and heat to purify water in the
Slingshot water filter. http://www.geekologie.com/2008/04/22/water-cleaner.jpg.
1. Engage. Consider the following excerpt from a recent article in the Manchester, NH Union
Leader [24] http://forums.segwaychat.com/archive/index.php/t-286.html.
NH inventor Kamen eyes Stirling Engine.
By KATHARINE McQUAID, Union Leader Staff.
Inventor Dean Kamen is inching closer to the creation of a
Stirling Cycle Engine that can create enough electricity to
run a few household appliances, while at the same time making
contaminated water drinkable....
Kamen said he imagines the device will give people in remote
villages of India and other third world countries a constant
source of clean, safe, drinking water, as well as a central
source of electricity.
‘‘It could be used to make a central place where people go
to charge batteries for computers or cell phones, where people
could get access to electricity so that they could have light
at night and, all the while, it could be turning 10 gallons of
just about anything into potable water,’’ Kamen told Rather.
The version unveiled by Kamen on last night’s program creates
about 300 continuous watts of electrical power, according to
Rather.
2. Analyze. Evaluate this claim by considering an ideal Stirling engine with helium as the working
fluid (Kamen's patent states that helium could be used). Suppose it operates at one cycle per
second, between temperature limits of 300 K and 1500 K, and pressure limits of 150 kPa and
1.5 MPa. Assuming the mass of the helium used in the cycle is 0.1 kg, determine the thermal
efficiency of the cycle, the amount of heat transfer in the regenerator, and the work output per
cycle. Assume that water purification is achieved by boiling off all the water once, and that no
heat is recovered through condensation. (A worked sketch of the ideal-cycle numbers appears after
step 4 below.)
3. Reflect. Is Kamen’s claim reasonable? Why or why not? What other considerations are nec-
essary to determine whether this technology is suited to the proposed application? What else
would you need to know?
4. Change. How would you change the project, either the device or implementation plan? How
has the project changed since this article was written?
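The ideal-cycle numbers requested in step 2 can be sketched as follows in Python. The temperatures, pressures, helium mass, and cycle rate are the ones given in the exploration; the helium gas properties are standard values, and the analysis assumes a fully ideal Stirling cycle with perfect regeneration, whose efficiency equals the Carnot value.

import math

# Ideal Stirling cycle with helium, using the limits stated in the exploration.
R_he, cv_he = 2.0769e3, 3.1156e3   # J/(kg*K), gas constant and cv of helium
m = 0.1                            # kg of helium in the cycle
T_h, T_l = 1500.0, 300.0           # K, temperature limits
P_max, P_min = 1.5e6, 150e3        # Pa, pressure limits

V_min = m * R_he * T_h / P_max     # smallest volume (hot end, maximum pressure)
V_max = m * R_he * T_l / P_min     # largest volume (cold end, minimum pressure)
r = V_max / V_min                  # volume ratio of the isothermal processes

eta     = 1.0 - T_l / T_h                       # ideal Stirling = Carnot efficiency
W_net   = m * R_he * (T_h - T_l) * math.log(r)  # net work per cycle, J
Q_regen = m * cv_he * (T_h - T_l)               # heat exchanged in the regenerator, J

print(f"Volume ratio       : {r:.2f}")
print(f"Thermal efficiency : {eta:.0%}")
print(f"Regenerator heat   : {Q_regen/1e3:.0f} kJ per cycle")
print(f"Net work           : {W_net/1e3:.0f} kJ per cycle")
print(f"Power at 1 cycle/s : {W_net/1e3:.0f} kW (ideal)")

Under these idealized assumptions the cycle would deliver on the order of a hundred kilowatts, vastly more than the roughly 300 W reported in the article, so the technical claim is conservative; the interesting question for step 3 is what real-world losses, material limits, and costs account for the gap.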
Having explored the engineering involved in home uses of energy, the next module connects
consumer energy choices with large scale energy disasters such as oil spills and nuclear accidents.
4.5 MODULE 4.5. ETHICS OF ENERGY DISASTERS
This module examines connections between the ethics of two energy disasters: The BP Oil Spill
following the explosion of the Deepwater Horizon rig in April 2010, and the meltdowns at the
Fukushima Daiichi nuclear power plant following the earthquake and tsunami in Japan in March
2011.
[Margin icons: (f) ethics, (j) contemporary issues]
Each disaster has parallels with past disasters that preclude it from being considered an "isolated
incident." As with mining accidents for energy resources such as coal and uranium, these disasters
form patterns that repeat, in this case seemingly every few decades. Social scientists who have studied
engineering ethics cases such as the Ford Pinto case [25] or the space shuttle Challenger disaster [26]
have pointed out the ways in which the cases don’t boil down to individual decisions of professional
engineers, but have at their heart institutional and organizational norms (and beyond these, political
and economic forces acting on organizations) that produce these outcomes as a matter of course.
[Module overview. Engage: Learn about the BP oil spill and Fukushima meltdowns. In what ways could they have been predicted 30 years ago? Analyze: What are the structural forces that shaped the engineering of each facility? Were events preventable or inevitable? Reflect: Given that we can anticipate that things we can't anticipate will occur, what kinds of preventive design strategies should engineers employ? Change: Develop an effective concrete strategy to prevent future energy disasters, focusing on the collective nature of the disasters.]
Figure 4.11: Deepwater Horizon Offshore Drilling Unit on Fire, April 21, 2010. US Coast Guard photo,
Public domain. http://cgvi.uscg.mil/media/main.php?g2_view=core.DownloadItem&g2_
itemId=836364&g2_serialNumber=5.
1. Engage. Watch the Rachel Maddow Show segment "That was then, this is then" [27] (http://
www.msnbc.msn.com/id/26315908/) from May 26, 2010. What are the similarities be-
tween the BP Spill and the Ixtoc 1 Spill in 1979? Read the March 15, 2011 New York Times ar-
ticle [28] on the Mark I containment system used at the Fukushima Daiichi reactors: http://
www.nytimes.com/2011/03/16/world/asia/16contain.html. What were the prob-
lems identified in the early 1970s with the Mark I containment system?
2. Analyze. Watch a January 11, 2011 Maddow segment [29] in which a government report
labels the BP disaster foreseeable and preventable with regulatory oversight. http://www.
msnbc.msn.com/id/26315908/#41026520. Then watch this March 25, 2011 segment [30]
that exposes regulators issuing new deep water drilling permits with the same blowout preven-
ter device found to have a flawed design: http://www.msnbc.msn.com/id/26315908/#
42278768. In what sense are these events preventable? In what sense are they predictable, or
even inevitable?
Watch “A is for Atom” a BBC documentary on nuclear power’s history [31]: http://www.
bbc.co.uk/blogs/adamcurtis/2011/03/a_is_for_atom.html. While the entire doc-
umentary is of interest, key segments are at minutes 20-29 and 36-42. What structural forces
shaped the scale and safety system designs of nuclear plants in the United States?
3. Reflect. Sarah Pfatteiche’s book on the ethics of engineering and the 9/11 collapse of the World
Trade Center asks a number of questions in the wake of that tragedy that can be translated
for these and other energy disasters [32]. To what extent are energy disasters “business as
usual?” Should they be prevented? Can they be prevented? Who is responsible to prevent
them? It can be argued that in both cases considered here, energy companies believed they
were taking sufficient care and protecting people and the environment “to the extent possible.”
What makes such measures possible or impossible? Are economics and a company’s desire
for profit-making legitimate constraints on the health, safety, and welfare of the public? Why
or why not? Do engineers have a duty to design for the unanticipatable? That is, if we know
things will go wrong that we can’t predict specifically (and thus design for), can we design
out some of the problems – for example, by working at a smaller scale in the case of nuclear
power, or not as deep in the case of ocean oil drilling? Can better regulation prevent disasters,
or are other changes required? What are the responsibilities of individual engineers? Of their
management? Of energy companies? Of government? Of consumers/citizens?
4. Change. Develop a strategy for change – among energy consumers, corporate culture at an
energy company, or governmental regulations and oversight – that would best prevent future
energy disasters.
REFERENCES
[1] Jensen, D. and MacMillan, S. (2007). As the World Burns: 50 Simple Things You Can Do to Stay
in Denial. New York: Seven Stories Press. Cited on page(s) 64
[2] Environmental Protection Agency. (2011). US Greenhouse Gas Inventory Report. Accessed June 8,
2011 from http://www.epa.gov/climatechange/emissions/usinventoryreport.html.
Cited on page(s) 64, 65
[3] International Energy Agency. (2010). CO2 Emissions from Fuel Combustion High-
lights. 2010 Edition. Accessed June 8, 2011 from http://www.iea.org/co2highlights/
CO2highlights.pdf. Cited on page(s) 64, 65
[4] Catalano, G.D. (2006). Engineering Ethics: Peace, Justice and the Earth. San Rafael, CA: Morgan
and Claypool. Cited on page(s) 65
[5] Harris, C.E., Pritchard, M.S. and Rabins, M.J. (2005). Engineering Ethics: Concepts and Cases.
3rd ed. Stamford, CT: Thomson Wadsworth. Cited on page(s)
[6] Martin, M.W. and Schinzinger, R. (1996). Ethics in Engineering. 3rd ed. New York: McGraw-
Hill. Cited on page(s)
[7] Whitbeck, C. (1998). Ethics in Engineering Practice and Research. New York: Cambridge Uni-
versity Press. Cited on page(s) 65
[8] Shepherd, W. and Shepherd, D.W. (2003). Energy Studies. 2nd ed. London: Imperial College
Press. Cited on page(s) 66
[9] Winner, L. (1986). Do Artifacts have Politics? In The Whale and the Reactor: A Search for
limits in an age of high technology. Chicago: University of Chicago Press, 19–39. Accessed June
9, 2011 from http://zaphod.mindlab.umd.edu/docSeminar/pdfs/Winner.pdf Cited
on page(s) 66
[10] Dym, C.L., Little, P., and Orwin, E.J. (2009). Engineering Design: A Project-Based Introduction.
New York: Wiley. Cited on page(s) 67
[11] Fischhoff, B. (1977). Cost Benefit Analysis and the Art of Motorcycle Maintenance. Policy
Sciences, 8: 177–202. Accessed June 9, 2011 from http://sds.hss.cmu.edu/media/pdfs/
fischhoff/CostBenefitAnalysisMotorcyc.pdf. DOI: 10.1007/BF01712294 Cited on
page(s) 67
[12] Moore, P. (2006). Going Nuclear: A Green Makes the Case. Washington Post, April 16,
2006. Accessed June 9, 2011 from http://www.washingtonpost.com/wp-dyn/content/
article/2006/04/14/AR2006041401209.html Cited on page(s) 71
[13] Greenpeace USA. Patrick Moore Background Information. Accessed June 9, 2011
from http://www.greenpeace.org/usa/en/campaigns/nuclear/patric-moore-
background-inform/ Cited on page(s) 72
[14] Montague, P. (2008). Is Nuclear Power Green? Counterpunch, November 3, 2008. Accessed
June 9, 2011 from http://www.counterpunch.org/montague11032008.html Cited on
page(s) 72
[15] Brugge, D., deLemos, J.L., Bui, C. (2007). The Sequoyah Corporation Fuels Re-
lease and the Church Rock Spill: Unpublicized Nuclear Releases in American In-
dian Communities. American Journal of Public Health, 97(9): 1595–1600. Accessed
June 12, 2011 from http://www.sric.org/Churchrock/SFCChurchRockAJPH2007.pdf
DOI: 10.2105/AJPH.2006.103044 Cited on page(s) 72
[16] Scharlemann, J.P.W. and Laurance, W.F. (2008). How Green are Biofuels? Science 319(5859):
43–44. DOI: 10.1126/science.1153103 Cited on page(s) 74
[17] Robertson, G.P., Dale, V.H., Doering, O.C., et al. (2008). Sustainable Biofuels Redux. Science,
322 (5898): 49–50. DOI: 10.1126/science.1161525 Cited on page(s) 74
[18] Tilman, D., Socolow, R., Foley, J.A., Hill, J., Larson, E., Lynd, L., Pacala, S., Reilly, J.,
Searchinger, T., Somerville, C., and Williams, R. Beneficial Biofuels—The Food, Energy,
and Environment Trilemma. Science, 325 (5938): 270–271. DOI: 10.1126/science.1177970
Cited on page(s) 74
[19] Michael Greenfield, Chemical Engineering, University of Rhode Island had the idea for this
exploration. Cited on page(s) viii, 74
[20] McPhee, J. (2006). Uncommon Carriers. New York: Farrar, Strauss, and Giroux, pp. 185–236.
Originally published in two parts (Coal Train I: Disassembling the Planet for Powder River
Coal and Coal Train II: Going into Thunder) in The New Yorker, October 3, 2005, p. 72, and
October 10, 2005, p. 62. Cited on page(s) 74
[21] Bix, A. (2002). Equipped for life: Gendered technical training and consumerism in home
economics, 1920–1980. Technology and Culture, 43: 728–54. DOI: 10.1353/tech.2002.0152
Cited on page(s) 75
[22] Pawley, A. (2012). What Counts as “Engineering:” Towards a Redefinition. In Engineering
and Social Justice: In the University and Beyond. C. Baillie, A. Pawley, and D. Riley, eds. West
Lafayette, IN: Purdue University Press. Cited on page(s) 75
[23] Cowan, R. S. (1985). How the Refrigerator Got Its Hum. In D. MacKenzie & J. Wajcman,
(Eds.),The Social Shaping Of Technology. Philadelphia: Open University Press, pp. 202–218. Ac-
cessed May 18, 2011 from http://epl.scu.edu/˜stsvalues/readings/cowan2.pdf.
Cited on page(s) 77
[24] McQuaid, K. (2002). NH inventor Kamen eyes Stirling Engine. Union Leader (Manchester,
NH) November 14, 2002, P. A1. Available June 8, 2011 at http://forums.segwaychat.
com/archive/index.php/t-286.html Cited on page(s) 78
[25] Lee, M.T. and Ermann, M.D. (1999). Pinto “Madness” as a Flawed Landmark Narrative: An
Organizational and Network Analysis. Social Problems 46(1): 30–47. Accessed March 30, 2011
from http://www.jstor.org/pss/3097160 DOI: 10.1525/sp.1999.46.1.03x0240f Cited
on page(s) 79
[26] Vaughan, D. (1996). The Challenger Launch Decision: Risky Technology, Culture, and Deviance at
NASA, Chicago: University of Chicago Press. Cited on page(s) 79
[27] Maddow, R. (2010). That was then, this is then. In B. Wolff (Producer), The Rachel Maddow
Show, New York: MSNBC. May 26, 2010. Accessed June 9, 2011 from http://www.msnbc.
msn.com/id/26315908/ Cited on page(s) 80
[28] Zeller, Jr., T. (2011). Experts had long criticized potential weakness in design of stricken
reactor.New York Times, March 15, 2011. Accessed June 9, 2011 from http://www.nytimes.
com/2011/03/16/world/asia/16contain.html. Cited on page(s) 80
[29] Maddow, R. (2011). Gulf Spill Report Shows String of Failures. In B. Wolff (Producer), The
Rachel Maddow Show, New York: MSNBC. January 11, 2011. Accessed June 9, 2011 from
http://www.msnbc.msn.com/id/26315908/#41026520. Cited on page(s) 80
[30] Maddow, R. (2011). Dubious Assurances on Deepwater Drilling Safety. In B.Wolff (Producer),
The Rachel Maddow Show, New York: MSNBC. March 25, 2011. Accessed June 9, 2011 from
http://www.msnbc.msn.com/id/26315908/#42278768. Cited on page(s) 80
[31] Curtis, A. (Writer and Producer) (1992). A is for Atom [documentary] London: BBC. Ac-
cessed June 9, 2011 from http://www.bbc.co.uk/blogs/adamcurtis/2011/03/a_is_
for_atom.html Cited on page(s) 80
[32] Pfatteicher, S.K.A. (2010). Lessons Amid the Rubble: An Introduction to Post-Disaster Engineer-
ing. Baltimore, MD: Johns Hopkins University Press. Cited on page(s) 81
Author’s Biography
DONNA RILEY
Donna Riley is a founding faculty member and Associate Profes-
sor in the Picker Engineering Program at Smith College, where
she has been teaching thermodynamics for over 10 years. She
received her B.S.E. in Chemical Engineering from Princeton
University and a Ph.D. in Engineering and Public Policy from
Carnegie Mellon University. Her technical research combines
methods in engineering and the social sciences to characterize
and communicate chemical risk. She seeks to integrate quantita-
tive modeling of chemical risks (from sources to exposure end-
points) with an understanding of the ways in which human beliefs
and behavior influence risk. Past projects have involved charac-
terizing the risks of mercury use as part of religious and folk traditions in Latino and Caribbean
communities, and developing improved consumer-product warnings. She is currently collaborating
with chemists at Smith and the University of Massachusetts on developing a community-oriented
air quality research lab.
In 2005 Riley received a CAREER award from the National Science Foundation for imple-
menting pedagogies of liberation, based on the work of Paulo Freire, bell hooks, and others, into
engineering education. Aspects of critical pedagogies that are operationalized in Riley’s classrooms
include connecting course material to student experience, emphasizing students as authorities in the
classroom, integrating ethics and policy considerations in the context of social justice, problematiz-
ing science as objectivity, and incorporating contributions from women, people of color, and people
living in the global South.
This is Riley’s second book with Morgan and Claypool, having published Engineering and
Social Justice in 2008.
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=7224644.pdf&bkn=7224643&pdfType=book
|
Series ISSN: 1932-3166
A, B, See…
in 3D
A Workbook to Improve
3-D Visualization Skills
Dan G. Dimitriu, PhD, P.E., Professor, San Antonio College
The workbook provides over 100 3D visualization exercises challenging the student to
create three dimensions from two. It is a powerful and effective way to help engi-
neering and architecture educators teach spatial visualization. Most of the 3-D visual-
ization exercises currently being used by students in Design and Graphics classes pres-
ent the objects in isometric views already in 3-D, asking the viewer to create multiple
views, fold patterns, manipulate, reflect, or rotate them. The exercises presenting the
objects in incomplete multiview projections asking the students to add missing lines
use mostly real 3D objects that are more easily recognizable to help the student cor-
relate 2D with 3D.
This workbook uses a different approach. Each view of the solid represents a letter of
the alphabet. The letters are by definition 2D representations and when they are com-
bined to create a 3D object, visualizing it becomes quite a challenge.
This workbook is intended for Engineering, Architecture, and Art students and faculty
that want to increase their 3-D visualization skills
www.morganclaypool.com
ISBN: 978-1-62705-818-6
A,B,See… in 3D
A Workbook to Improve 3-D Visualization Skills
Dan G. Dimitriu, PhD, P.E.
San Antonio College
Copyright © 2015 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations in
printed reviews, without the prior permission of the publisher.
A,B, See.... in 3D: A Workbook to Improve 3-D Visualization Skills
Dan G. Dimitriu
www.morganclaypool.com
ISBN-13: 9781627058186 paperback
ISBN-13: 9781627058193 ebook
First Edition
10 9 8 7 6 5 4 3 2 1
Printed in the United States of America
Contents
Table of Content .............................................................................................................. 1
Abstract ........................................................................................................................... 2
Introduction .................................................................................................................... 3
Methodology .....................................................................................................................
List of Problems by Difficulty ......................................................................................... 10
Alphabetical Listing by Letter Combinations ................................................................. 11
Chapter 1 - Combination of Cubes (C) ............................................................................ 12
Level of difficulty: Easy
Chapter 2 - First Level (F) .............................................................................................. 25
Level of difficulty: Low
Chapter 3 - Second Level (S) .......................................................................................... 52
Level of difficulty: Medium
Chapter 4 - Third Level (T) ............................................................................................. 84
Level of difficulty: Difficult
List of Problem Solutions by Chapter .............................................................................106
Chapter 1 (C) - Solutions ............................................................................................. 107
Chapter 2 (F) - Solutions ............................................................................................. 111
Chapter 3 (S) - Solutions ............................................................................................. 124
Chapter 4 (T) - Solutions ............................................................................................. 137
Author Info ..................................................................................................................144
ABSTrACT
The workbook provides over 100 3D visualization exercises challenging the student to create three dimen-
sions from two. It is a powerful and effective way to help engineering and architecture educators teach
spatial visualization. Most of the 3-D visualization exercises currently being used by students in Design and
Graphics classes present the objects in isometric views already in 3-D, asking the viewer to create multiple
views, fold patterns, manipulate, reflect, or rotate them. The exercises presenting the objects in incomplete
multiview projections asking the students to add missing lines use mostly real 3D objects that are more eas-
ily recognizable to help the student correlate 2D with 3D.
This workbook uses a different approach. Each view of the solid represents a letter of the alphabet. The
letters are by definition 2D representations and when they are combined to create a 3D object, visualizing
it becomes quite a challenge.
This workbook is intended for Engineering, Architecture, and Art students and faculty that want to
increase their 3-D visualization skills.
Introduction
There is ample evidence that instruction in spatial visualization skills is effective in improving outcomes
for engineering students. Research conducted since the early 1990s has shown that spatial visualization
practice and training lead to better grades in engineering graphics and engineering coursework, and to
improved retention of underrepresented groups in the field.
In 1993 Dr. Sheryl Sorby (1998, 2006, 2009) of Michigan Technological University began work under
an NSF grant to figure out why first-year women engineering students were more than three times as likely
as men to fail the Purdue Spatial Visualization Test: Rotations (PSVT:R), and what could be done about
it. Sorby’s analysis of the results of the test and a background questionnaire she administered to test-takers
showed that previous experience in design-related courses such as drafting, mechanical drawing, and art,
as well as play as children with construction toys such as Legos, Lincoln Logs, and Erector Sets, predicted
good performance on the PSVT:R. She and her colleagues then developed a three-credit-hour spatial-
visualization course and administered it to students who had failed the PSVT:R. The course covered topics
such as cross sections of solids, sketching multiview drawings of simple objects, and paper folding to illus-
trate 2-D to 3-D transformations. In the lab, students used solid-modeling computer-aided design (CAD)
software. Students' test scores on the PSVT:R improved from an average of 52% to an average of 82%.
Work by Hsi, et al (1997) supported the effect of spatial strategies instruction on erasing gender differences
and improving grades for engineering students.
Additional spatial visualization training was also discovered to positively affect retention in engineer-
ing for women. Sorby found that among the women who initially failed the PSVT:R and took the spatial-
visualization course between 1993 and 1998, 77% were retained in Engineering Design, compared to 48%
of the women who didn’t take the course (Female n=251). In additional studies Sorby also found that middle
school girls who took a spatial-visualization course took more advanced-level math and science courses in
high school than did girls who did not take the course, and that the materials were shown to be effective in
improving spatial skills for undergraduate students outside of engineering, and for students in high school.
After offering the three-hour spatial visualization course for six years (yielding improvements of 20 to
32 percentage points on the PSVT:R), Sorby condensed the course to a one-hour course and tested it be-
tween 2000 and 2002, seeing an average improvement on the PSVT:R of 26%. In 2003, Sorby, Beverly
Baartmans, and Anne Wysocki published Introduction to 3D Spatial Visualization, a multimedia
software-workbook package containing content similar to the course; it is now used for engineering
graphics education throughout the nation.
At Penn State Erie, Dr. Kathy Holliday-Darr and Dr. Dawn Blasko (2009) conducted a one-credit-
hour intervention with mechanical engineering technology and plastics engineering technology students
who performed below criteria on the PSVT, the Mental Rotation Test (MRT), and paper-folding and water-
level tasks. They used the Sorby and Wysocki multimedia package and found significant improvement
compared to an untreated control group. The improvement correlated with grades in other courses, and
scores on spatial tasks correlated with overall GPA and with key courses taken in the following semester and year.
At Virginia State University, a Historically Black College or University (HBCU), retention of minori-
ties in STEM-related majors tended to be lower than their non-minority peers, and students enrolled in
introductory engineering graphics courses had significantly lower-than-average test scores on the PSVT.
Dr. Nancy Study piloted changes to engineering graphics courses, including the use of sketching, blocks
and multimedia, that resulted in improvement of students’ visualization abilities. Significantly higher GPAs
were earned by students taking the enhanced pilot engineering graphics course, compared to a control group
that did not take the enhanced course, and a higher percentage of students in the test group were retained
both in an engineering or technology major and at the university even if they did change their major.
Uttal, Meadow, and Newcombe (2010) conducted a meta-analysis of 200 studies on improvement of
spatial skills and found that the average effect size of improvement for students who receive extensive prac-
tice on spatially relevant skills, such as mentally rotating figures or disembedding, was .53 (equivalent to an
intervention improving SAT scores by more than 50 points or IQ scores by more than 7.5 points). They also
found that the effects of training endured over time, after practice interventions were completed.
Although the materials currently being used nationally are now assisting the new generations of en-
gineering students to succeed, they do not challenge the student to create three dimensions from two.
On today’s market there are some valuable tools with which engineering and architecture educators teach
spatial visualization. Most of the 3-D visualization exercises currently being used for students in Design
and Graphics classes present objects in isometric views already in 3-D, asking the viewer to create multiple
views, fold patterns, manipulate, reflect, or rotate them. The materials presented in this workbook take a
universally accepted 2-D flat pattern (a letter) and ask the viewer to mold it as part of a 3-D solid, in com-
bination with two other flat-pattern letters from adjacent views.
This workbook is intended for Engineering, Architecture, and Art students and faculty that want to
increase their 3-D visualization skills.
Methodology
The exercises use alphabet letters represented in standard multiview projections (front, top, right, left, or
bottom side views). The 3-D object made up of the three letters, one in each view, has to be mentally as-
sembled in 3-D, with no assistance from an isometric representation of the solution figure. The problems
ask the solver to break out of the 2-D image of the letter and visualize the third dimension, the depth, or
“Z axis,” and combine it with the other two letters from the other views of the 3-D object. The exercises are
presented with increasing degrees of difficulty to help students improve their 3-D visualization skills. No
other universally-recognized flat patterns are currently being used to enhance students’ ability to spatially
visualize 3-D objects.
“A, B, See...” presents over 100 three-letter combinations, many of them with multiple solutions, and a
brief instructional text on how to solve these exercises. The problems will build on the body of knowledge
already developed in early stages of graphics core concepts such as:
• Alphabet of lines (visible, hidden, and center lines)
• Multiview Orthographic Projections
• Surface and Edge Classifications (Normal, Inclined, Oblique, and Curved)
The graphical problems are presented in order of increasing difficulty and they are designed to gradu-
ally break students out of their 2-D preconceptions about 3-D space. The student must learn how to
represent a variety of surfaces (normal, inclined, oblique, and cylindrical) in multiple positions, visible and
invisible, in edge view, true size and shape, or foreshortened, in order to complete these assignments. No
other universally-recognized flat patterns are currently being used to enhance students’ ability to spatially
visualize 3-D objects.
Figure 1. (Solution: 9 cubes)
Figure 2. (Solution: 13 cubes)
Figure 3. How many cubes make the solid shown here in three views? _____ cubes
Figure 4.
Figure 5.
See solutions for Figures 4 and 5 on page 9.
Because letters are universally known, they are images that can be kept in mind by the student
as they are mentally manipulated, rather than forcing the student to compare shapes on a page. Their
universality also allows students to draw them from memory and manipulate them mentally. They
can work on the assignments everywhere, at home, at lunch, or when they go for a walk in the park.
For this reason, the solutions for these exercises are also easy to compare with other students’
solutions and argue about. In addition, within the alphabet soup is a progression of easy-to-difficult
that gives students a sense of accomplishment as they advance through spatial skills levels. All
letters of the alphabet are present in this workbook and while just as challenging as isometric
workbook exercises, the letter-based problems appear more like puzzles, and therefore more like fun.
Procedure
The table of contents has two configurations: the exercises are organized by letter combination and by
level of difficulty. There are five letter-combination groups and four categories of difficulty, listed under
four chapters. The first group of problems shows straight letters that can be made completely out of cubes
and, with one exception, have only normal edges and surfaces. The challenge is to determine the exact
number of cubes needed to make the object, despite the fact that a cube might appear in more than one
view. The last three problems have more than one solution, and the instructions ask for the minimum
number of cubes required.
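As an illustrative aid (not part of the workbook), the short Python sketch below builds the largest block of unit cubes consistent with three given silhouettes and counts them; the three L-shaped views are made-up examples, not a problem from Chapter 1. The minimum-cube variants mentioned above would then remove cubes while keeping all three projections unchanged.

# Count the cubes in the maximal solid consistent with three orthographic views.
# Views are given as sets of filled unit squares; the example views are made up
# for illustration and are not a problem from this workbook.

front = {(x, z) for x in range(3) for z in range(3) if x == 0 or z == 0}  # L shape
top   = {(x, y) for x in range(3) for y in range(3) if x == 0 or y == 0}  # L shape
side  = {(y, z) for y in range(3) for z in range(3) if y == 0 or z == 0}  # L shape

solid = {(x, y, z)
         for x in range(3) for y in range(3) for z in range(3)
         if (x, z) in front and (x, y) in top and (y, z) in side}

# Verify that the solid actually reproduces all three views.
assert front == {(x, z) for x, y, z in solid}
assert top   == {(x, y) for x, y, z in solid}
assert side  == {(y, z) for x, y, z in solid}

print(f"Maximal solid uses {len(solid)} cubes")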
The second chapter, “First Level,” takes the cube problems and, by eliminating the cubes,
presents them as solids with straight faces, without the cube partitions. Ten more combinations of
letters define new solids in this chapter. Beginning with this chapter, the problems ask the
solver to place the missing lines, standing for visible and hidden edges and surfaces, within the
confines of the contours to complete the views. It is suggested that each exercise should be limited
initially to the following cross-sectional shapes:

The following chapters, “Second Level” and “Third Level,” are increasingly challenging, as the
solvers have to visualize letter parts that are not where they appear to be in the 2D image.
Almost all the problems have multiple solutions, as the letter parts can be visualized as rectangular
prisms, triangular prisms, or even cylindrical shapes, as indicated above. At these advanced stages
students can experiment with other shapes as well, such as a hexagon or a rhombus. The challenge is
to have the correct representation, with visible lines, hidden lines, and centerlines, in each view. The advantage
of all these exercises is that the solution can easily be verified for correctness by building a 3D model
in any 3D modeling package: the standard front, top, and side view projections of the model should
reveal all the necessary lines for verification.
Many times students propose new solutions or start to look for new combinations of
letters, which is a challenge in itself, because not all combinations of three letters can form a solid.
That is another way to improve visualization abilities.
Imagination is the only limit!
Solutions for Figures 4 and 5:
[Solution drawings: Solution 1, Solution 2, Solution 3; Solution 1]
List of Problems by Difficulty
Chapter 1 - Combination of Cubes. Level of difficulty: Easy. Page 12
Chapter 2 - First Level. Level of difficulty: Low. Page 25
Chapter 3 - Second Level. Level of difficulty: Medium. Page 52
Chapter 4 - Third Level. Level of difficulty: Difficult. Page 84
Alphabetical listing by Letter Combinations
Letters Made Out of Cubes
EFT
C 09
HHH C 10
C 08
HLE
C 05
HLL
C11
HTT
C 03
IIH
C 02
IIL
C 01
IIT
C 04
LLL
C 12
LUF
C 06
TTT
UHL C 07
Three Identical Letters
DDD F 21
S 22
EEE
FFF
S23
HHH F 11
F 06
LLL
OOO F19
F 10
TTT
T 21
XXX
Two Identical + One Different
EEN S 13
F 04
IIA
F 03
IIH
IIL
F 02
IIM F 05
F 01
IIT
F 09
LLA
LLD
F 08
LLQ S 03
TTH F 15
One Different + Two Identical
ATT
F 14
HLL
F 07
MOO T 10
OHH F 16
XAA T 15
XTT
F 13
XVV T 16
ZEE
T 07
ZHH F 17
ZOO T 09
All Three Different Letters
AUL
DEL
EDU
EFD
EFT
ELM
FEZ
FJT
GOP
HEB
HLE
HMT
HUT
HUV 1
HUV 2
HZT
KLE
LCX
LED
LOT
LUF
MHE
MHF
MLU
MOE
MUD
MUE
MUG
MUL
MUZ
NHB
NUE
NUL
OLE
OUA
PET
PFT
PLE
POT
RET
SET
TAP
UAL
UHL
VLT
WMX
WTF
YTF
ZEF
ZXN
F 26
F 20
S 12
S 14
F 12
S 15
T 01
T 02
T 03
T 18
F 18
S 01
S 16
S 17a
S 17b
S 24
S 05
T 04
S 02
S 04
F 24
T 05
T 06
S 18
S 19
S 20
S 21
S 22
S 23
S 30
T 17
S 26
S 25
S 27
T 11
S 06
S 08
S 09
S 10
S 07
S 11
T 19
S 28
F 25
S 29
T 20
T 12
T 13
T 08
T 14
Chapter 1 Problems - Combination of Cubes (C)
Level of difficulty: Easy
Problem  Letters  Page #
C 01  IIT  13
C 02  IIL  14
C 03  IIH  15
C 04  LLL  16
C 05  HLL  17
C 06  TTT  18
C 07  UHL  19
C 08  HLE  20
C 09  EFT  21
C 10  HHH  22
C 11  HTT  23
C 12  LUF  24
Chapter 2 - First Level (F)
Level of difficulty: Low
Problem  Letters  Page #
F 01  IIT  26
F 02  IIL  27
F 03  IIH  28
F 04  IIA  29
F 05  IIM  30
F 06  LLL  31
F 07  HLL  32
F 08  LLD  33
F 09  LLA  34
F 10  TTT  35
F 11  HHH  36
F 12  EFT  37
F 13  XTT  38
F 14  ATT  39
F 15  TTH  40
F 16  OHH  41
F 17  ZHH  42
F 18  HLE  43
F 19  OOO  44
F 20  DEL  45
F 21  DDD  46
F 22  EEE  47
F 23  FFF  48
F 24  LUF  49
F 25  UHL  50
F 26  AUL  51
Chapter 3 - Second Level (S)
Level of difficulty: Medium
Problem  Letters  Page #
S 01  HMT  53
S 02  LED  54
S 03  LLQ  55
S 04  LOT  56
S 05  KLE  57
S 06  PET  58
S 07  RET  59
S 08  PFT  60
S 09  PLE  61
S 10  POT  62
S 11  SET  63
S 12  EDU  64
S 13  EEN  65
S 14  EFD  66
S 15  ELM  67
S 16  HUT  68
S 17a  HUV 1  69
S 17b  HUV 2  70
S 18  MLU  71
S 19  MOE  72
S 20  MUD  73
S 21  MUE  74
S 22  MUG  75
S 23  MUL  76
S 24  HZT  77
S 25  NUL  78
S 26  NUE  79
S 27  OLE  80
S 28  UAL  81
S 29  VLT  82
S 30  MUZ  83
Chapter 4 - Third Level (T)
Level of difficulty: Difficult
Problem  Letters  Page #
T 01  FEZ  85
T 02  FJT  86
T 03  GOP  87
T 04  LCX  88
T 05  MHE  89
T 06  MHF  90
T 07  ZEE  91
T 08  ZEF  92
T 09  ZOO  93
T 10  MOO  94
T 11  OUA  95
T 12  WTF  96
T 13  YTF  97
T 14  ZXN  98
T 15  XAA  99
T 16  XVV  100
T 17  NHB  101
T 18  HEB  102
T 19  TAP  103
T 20  WMX  104
T 21  XXX  105
List of Problem Solutions by Chapter
Chapter  Page #
Chapter 1 (C) - Solutions  107
Chapter 2 (F) - Solutions  111
Chapter 3 (S) - Solutions  124
Chapter 4 (T) - Solutions  137
Chapter 1 (C) - Solutions
Problem  Letters  Solution page #
C 01  IIT  108
C 02  IIL  108
C 03  IIH  108
C 04  LLL  108
C 05  HLL  109
C 06  TTT  109
C 07  UHL  109
C 08  HLE  109
C 09  EFT  110
C 10  HHH  110
C 11  HTT  110
C 12  LUF  110
Chapter 2 (F) - Solutions
#
Letters
Page #
#
Letters
Page #
F 01 IIT
F 01 IIT
F 01 IIT
F 01 IIT
F 02 IIL
F 02 IIL
F 02 IIL
F 03 IIH
F 03 IIH
F 03 IIH
F 04 IIA
F 04 IIA
F 04 IIA
F 05 IIM
F 05 IIM
F 06 LLL
F 06 LLL
F 07 HLL
F 07 HLL
F 08 LLD
F 08 LLD
F 09 LLA
F 09 LLA
F 10 TTT
Solution 1
Solution 2
Solution 3
Solution 4
Solution 1
Solution 2
Solution 3
Solution 1
Solution 2
Solution 3
Solution 1
Solution 2
Solution 3
Solution 1
Solution 2
Solution 1
Solution 2
Solution 1
Solution 2
Solution 1
Solution 2
Solution 1
Solution 2
Solution 1
112
112
112
112
113
113
113
113
114
114
114
114
115
115
115
115
116
116
116
116
117
117
117
117
F 10 TTT
F 11 HHH
F 11 HHH
F 12 EFT
F 12 EFT
F 13 XTT
F 13 XTT
F 14 ATT
F 14 ATT
F 14 ATT
F 15 TTH
F 15 TTH
F 16 OHH
F 17 ZHH
F 18 HLE
F 19 OOO
F 20 DEL
F 21 DDD
F 21 DDD
F 22 EEE
F 23 FFF
F 24 LUF
F 25 UHL
F 26 AUL
Solution 2
Solution 1
Solution 2
Solution 1
Solution 2
Solution 1
Solution 2
Solution 1
Solution 2
Solution 3
Solution 1
Solution 2
Solution
Solution
Solution
Solution
Solution
Solution 1
Solution 2
Solution
Solution
Solution
Solution
Solution
118
118
118
118
119
119
119
119
120
120
120
120
121
121
121
121
122
122
122
122
123
123
123
123
111
112
113
114
115
116
117
118
119
120
121
122
123
Chapter 3 (S) - Solutions
S 01 HMT - Solution; S 02 LED - Solution; S 03 LLQ - Solution; S 04 LOT - Solutions 1-2; S 05 KLE - Solutions 1-2; S 06 PET - Solutions 1-2; S 07 RET - Solutions 1-2; S 08 PFT - Solutions 1-2; S 09 PLE - Solution; S 10 POT - Solution; S 11 SET - Solution; S 12 EDU - Solutions 1-2; S 13 EEN - Solutions 1-2; S 14 EFD - Solution; S 15 ELM - Solutions 1-2; S 16 HUT - Solutions 1-2; S 17a HUV - Solution 1; S 17b HUV - Solution 2; S 18 MLU - Solutions 1-2; S 19 MOE - Solutions 1-2; S 20 MUD - Solutions 1-2; S 21 MUE - Solutions 1-2; S 22 MUG - Solutions 1-2; S 23 MUL - Solutions 1-2; S 24 HZT - Solutions 1-2; S 25 NUL - Solutions 1-2; S 26 NUE - Solution; S 27 OLE - Solution; S 28 UAL - Solution; S 29 VLT - Solution; S 30 MUZ - Solution
Chapter 4 (T) - Solutions
T 01 FEZ - Solutions 1-2; T 02 FJT - Solution; T 03 GOP - Solution; T 04 LCX - Solution; T 05 MHE - Solution; T 06 MHF - Solution; T 07 ZEE - Solution; T 08 ZEF - Solution; T 09 ZOO - Solution; T 10 MOO - Solution; T 11 OUA - Solution; T 12 WTF - Solution; T 13 YTF - Solution; T 14 ZXN - Solution; T 15 XAA - Solutions 1-2; T 16 XVV - Solutions 1-2; T 17 NHB - Solution; T 18 HEB - Solution; T 19 TAP - Solution; T 20 WMX - Solution; T 21 XXX - Solution
Author Info
Dan G. Dimitriu, PhD, P.E., is a tenured professor in SAC's Physics, Engineering, and Architecture Department and has been the Engineering Program Coordinator at San Antonio College since 2001. He has 21 years of teaching experience in post-secondary education, five years in academic research, and 23 years in private practice as a professional engineer. He has worked on research projects at North Dakota State University and also holds an MBA in International Economic Relations. He has managed several NSF and Department of Education MSEIP grants for SAC, and has been a co-PI for a NASA CIPAIR grant with the University of Texas at San Antonio and for an NSF CCLI grant with Wright University.
He was elected Vice Chair of the Two Year College Division of ASEE in 2005 and was the recipient of the 2006 NISOD Excellence in Teaching Award. He was also named "San Antonio's Top Professor" by Scene in SA Monthly in 2006. In 2005 he was the only community college committee member and presenter for the "Enhancing Community College Pathways into Engineering Careers," a collaborative effort by the National Academy of Engineering's Committee on Engineering Education and the National Research Council Board on Higher Education and Workforce. Their final report described the evolving roles of community colleges in engineering education, identified exemplary programs at community colleges and model partnerships between two- and four-year engineering schools, and made recommendations for future research in this area. He has also made numerous presentations at the American Society for Engineering Education Annual Conferences. Dr. Dimitriu is also coordinator for the Alamo Community College District's participation in NASA's Aerospace Scholars program and concurrently serves as the director for SAC's MESA Center.
This workbook is the result of his leadership and expertise in developing curricula, coordinating engineering educational programs, years of teaching, and years of professional practice.
Series ISSN: 1932-3166
A, B, See…
in 3D
A Workbook to Improve
3-D Visualization Skills
Dan G. Dimitriu, PhD, P.E., Professor, San Antonio College
The workbook provides over 100 3D visualization exercises challenging the student to create three dimensions from two. It is a powerful and effective way to help engineering and architecture educators teach spatial visualization. Most of the 3-D visualization exercises currently being used by students in Design and Graphics classes present the objects in isometric views already in 3-D, asking the viewer to create multiple views, fold patterns, manipulate, reflect, or rotate them. The exercises that present the objects in incomplete multiview projections and ask the students to add missing lines mostly use real 3D objects that are more easily recognizable, to help the student correlate 2D with 3D.
This workbook uses a different approach. Each view of the solid represents a letter of the alphabet. The letters are by definition 2D representations, and when they are combined to create a 3D object, visualizing it becomes quite a challenge.
This workbook is intended for Engineering, Architecture, and Art students and faculty who want to increase their 3-D visualization skills.
www.morganclaypool.com
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=8940927.pdf&bkn=8940926&pdfType=book
|
Series ISSN 1939-5221
Relativistic Classical Mechanics and Electrodynamics
Martin Land, Hadassah College, Jerusalem
Lawrence P. Horwitz, Tel Aviv University, Bar Ilan University, and Ariel University
This book presents classical relativistic mechanics and electrodynamics in the Feynman-Stueckelberg
event-oriented framework formalized by Horwitz and Piron. The full apparatus of classical analytical
mechanics is generalized to relativistic form by replacing Galilean covariance with manifest Lorentz
covariance and introducing a coordinate-independent parameter τ to play the role of Newton’s
universal and monotonically advancing time. Fundamental physics is described by the τ-evolution of
a system point through an unconstrained 8D phase space, with mass a dynamical quantity conserved
under particular interactions. Classical gauge invariance leads to an electrodynamics derived from five
τ-dependent potentials described by 5D pre-Maxwell field equations. Events trace out worldlines as τ
advances monotonically, inducing pre-Maxwell fields by their motions, and moving under the influence
of these fields. The dynamics are governed canonically by a scalar Hamiltonian that generates evolution
of a 4D block universe defined at τ to an infinitesimally close 4D block universe defined at τ+dτ. This
electrodynamics, and its extension to curved space and non-Abelian gauge symmetry, is well-posed and
integrable, providing a clear resolution to grandfather paradoxes. Examples include classical Coulomb
scattering, electrostatics, plane waves, radiation from a simple antenna, classical pair production, classical
CPT, and dynamical solutions in weak field gravitation. This classical framework will be of interest to
workers in quantum theory and general relativity, as well as those interested in the classical foundations
of gauge theory.
ABOUT SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis Digital Library of Engineering and Computer Science. Synthesis Lectures provide concise original presentations of important research and development topics, published quickly in digital and print formats. For more information, visit our website: http://store.morganclaypool.com
Relativistic Classical
Mechanics and Electrodynamics
Synthesis Lectures on
Engineering, Science, and
Technology
Relativistic Classical Mechanics and Electrodynamics
Martin Land and Lawrence P. Horwitz
2019
Copyright © 2020 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations
in printed reviews, without the prior permission of the publisher.
Relativistic Classical Mechanics and Electrodynamics
Martin Land and Lawrence P. Horwitz
www.morganclaypool.com
ISBN: 9781681737065 paperback
ISBN: 9781681737072 ebook
ISBN: 9781681737089 hardcover
DOI 10.2200/S00970ED1V01Y201912EST001
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING, SCIENCE, AND TECHNOLOGY
Lecture #1
Series ISSN: pending.
Relativistic Classical
Mechanics and Electrodynamics
Martin Land
Hadassah College, Jerusalem
Lawrence P. Horwitz
Tel Aviv University, Bar Ilan University, and Ariel University
SYNTHESIS LECTURES ON ENGINEERING, SCIENCE, AND
TECHNOLOGY #1
ABSTRACT
This book presents classical relativistic mechanics and electrodynamics in the Feynman-Stueckelberg event-oriented framework formalized by Horwitz and Piron. The full apparatus of classical analytical mechanics is generalized to relativistic form by replacing Galilean covariance with manifest Lorentz covariance and introducing a coordinate-independent parameter τ to play the role of Newton's universal and monotonically advancing time. Fundamental physics is described by the τ-evolution of a system point through an unconstrained 8D phase space, with mass a dynamical quantity conserved under particular interactions. Classical gauge invariance leads to an electrodynamics derived from five τ-dependent potentials described by 5D pre-Maxwell field equations. Events trace out worldlines as τ advances monotonically, inducing pre-Maxwell fields by their motions, and moving under the influence of these fields. The dynamics are governed canonically by a scalar Hamiltonian that generates evolution of a 4D block universe defined at τ to an infinitesimally close 4D block universe defined at τ + dτ. This electrodynamics, and its extension to curved space and non-Abelian gauge symmetry, is well-posed and integrable, providing a clear resolution to grandfather paradoxes. Examples include classical Coulomb scattering, electrostatics, plane waves, radiation from a simple antenna, classical pair production, classical CPT, and dynamical solutions in weak field gravitation. This classical framework will be of interest to workers in quantum theory and general relativity, as well as those interested in the classical foundations of gauge theory.
KEYWORDS
spacetime, relativistic mechanics, classical electrodynamics, electrostatics, quantum
field theory
Contents

Preface
Symbols

PART I  Background
1  Conceptual Approaches to Spacetime
   1.1  Point Mechanics in 4D Spacetime
   1.2  The Two Aspects of Time
   1.3  The "Proper Time" Formalism in QED
   1.4  The Stueckelberg–Horwitz–Piron (SHP) Framework
   1.5  Bibliography

PART II  Theory
2  Canonical Relativistic Mechanics
   2.1  Lagrangian and Hamiltonian Mechanics
   2.2  The Free Relativistic Particle
   2.3  The Relativistic Particle in a Scalar Potential
   2.4  Two-Body Problem with Scalar Potential
   2.5  Many-Body Problem and Statistical Mechanics
   2.6  Bibliography
3  Classical Electrodynamics
   3.1  Classical Gauge Transformations
   3.2  Lorentz Force
   3.3  Field Dynamics
   3.4  Ensemble of Event Currents
   3.5  The 5D Wave Equation and its Green's Functions
   3.6  The Mass-Energy-Momentum Tensor
   3.7  Worldline Concatenation
   3.8  PCT in Classical SHP Theory
   3.9  Bibliography

PART III  Applications
4  Problems in Electrostatics and Electrodynamics
   4.1  The Coulomb Problem
        4.1.1  Contribution to Potential from GMaxwell
        4.1.2  Contribution to Potential from GCorrelation
   4.2  Liénard–Wiechert Potential and Field Strength
   4.3  Electrostatics
   4.4  Plane Waves
   4.5  Radiation from a Line Antenna
   4.6  Classical Pair Production
   4.7  Particle Mass Stabilization
        4.7.1  Self-Interaction
        4.7.2  Statistical Mechanics
   4.8  Speeds of Light and the Maxwell Limit
   4.9  Bibliography
5  Advanced Topics
   5.1  Electrodynamics from Commutation Relations
   5.2  Classical Non-Abelian Gauge Theory
   5.3  Evolution of the Local Metric in Curved Spacetime
   5.4  Zeeman and Stark Effects
   5.5  Classical Mechanics and Quantum Field Theory
   5.6  Bibliography

Authors' Biographies
Preface
This book presents classical relativistic mechanics and describes the classical electrodynamics of
relativistic particles following the approach of Stueckelberg, Horwitz, and Piron (SHP). This
framework, pioneered by E. C. G. Stueckelberg in 1941 and employed by Schwinger and Feyn-
man in the development of QED, generalizes classical analytical mechanics to relativistic form
by replacing Galilean covariance with Lorentz covariance, and introducing a new coordinate-independent evolution parameter τ to play the role of Newton's postulated universal and monotonically advancing time. Fundamental physics is described by the τ-evolution of a system point
through an unconstrained phase space, in which each event is represented by its covariant space-
time coordinates and velocities or momenta. The full apparatus of analytical mechanics is thus
made available in a manifestly covariant form, from Lagrangian and symplectic Hamiltonian
methods to Noether’s theorem. This approach to relativistic classical mechanics makes SHP a
convenient framework for analyzing the “paradoxes” of special relativity, and in particular pro-
vides a clear resolution to the grandfather paradox.
Making the free particle Lagrangian invariant under classical gauge transformations of
the first and second kind leads to an electrodynamics derived from five τ-dependent potentials,
described by 5D pre-Maxwell field equations. Individual events trace out worldline trajectories
as τ advances monotonically, inducing pre-Maxwell fields by their motions, and moving under
the influence of these fields. The resulting theory is thus integrable and well posed, governed
canonically by a scalar Hamiltonian that generates evolution of a 4D block universe defined at τ to an infinitesimally close 4D block universe defined at τ + dτ. This electrodynamics, and its
extension to curved space and non-Abelian gauge symmetry, is the most general interaction pos-
sible in an unconstrained 8D phase space. We present examples that include classical Coulomb
scattering, electrostatics, plane wave solutions, and radiation from a simple antenna. Standard
Maxwell theory emerges from SHP as an equilibrium limit, reached by slowing the τ-evolution to zero, or equivalently, by summing the contributions over τ at each spacetime point.
A feature of SHP not present in standard Maxwell theory is that under certain condi-
tions, particles and fields may exchange mass dynamically, under conservation of total mass,
energy, and momentum. As a result, pair processes such as electron-positron creation and anni-
hilation are permitted in classical electrodynamics, implementing Stueckelberg’s original goal.
Two processes that tend to restore a particle’s mass to its standard value are described, one a
self-interaction along the event trajectory and the other a general result in statistical mechanics.
Mass restoration of this type has been found in mathematical simulations of event trajectories.
Beyond its usefulness as an approach to electrodynamics, the theory presented in this book
provides the basis for a systematic, step-by-step progression from relativistic classical mechanics
to relativistic quantum mechanics, many-body theory, and quantum field theory. As an example,
we discuss the correspondence of the fifth classical gauge potential to the Lorentz scalar potential
used in quantum mechanical two-body problems to obtain manifestly covariant solutions for the
bound state, scattering experiments, and relativistic entanglement in time. Similarly, we discuss
the implications of the classical relativistic mechanics for quantum field theory.
This classical framework will thus be of interest to workers in quantum theory, as well as
those interested in its foundations.
Martin Land and Lawrence P. Horwitz
December 2019
Symbols
μ, ν, λ, ρ = 0, 1, 2, 3 - 4D spacetime indices
α, β, γ, δ = 0, 1, 2, 3, 5 - 5D formal indices (skipping 4)
η_μν = diag(−1, 1, 1, 1) - 4D flat Minkowski metric
η_αβ = diag(−1, 1, 1, 1, η_55) - formal 5D flat Minkowski metric
c - speed of light associated with x⁰ = ct
c₅ - speed associated with x⁵ = c₅τ
{F, G} = (∂F/∂x^μ)(∂G/∂p_μ) − (∂F/∂p_μ)(∂G/∂x^μ) - Poisson bracket
[F, G] = FG − GF - commutator bracket
Dẋ^μ/Dτ = dẋ^μ/dτ + Γ^μ_νρ ẋ^ν ẋ^ρ - absolute derivative
∇_α X^β = ∂X^β/∂x^α + X^γ Γ^β_γα - covariant derivative
Γ^σ_μλ = ½ g^σν (∂_μ g_νλ + ∂_λ g_νμ − ∂_ν g_μλ) - Christoffel symbol
Φ(τ) = δ(τ) − (ξλ)² δ″(τ) - interaction kernel for the electromagnetic field
λ - parameter with dimensions of time
ξ = ½(1 + c₅²/c²) - numerical factor
φ(τ) = λ Φ⁻¹(τ) - inverse function for the kernel

PART I
Background
CHAPTER 1
Conceptual Approaches to Spacetime
1.1 POINT MECHANICS IN 4D SPACETIME
By one measure of success, Newtonian analytical mechanics continues to outshine the mod-
ern physics that has replaced it: the impact of its underlying physical picture on conventional
notions of “reality” in the wider culture. Beyond science per se, this picture was absorbed into
the foundations of Enlightenment philosophy, expanding into the modern humanities and so-
cial sciences, lending it an appearance of self-evident ordinariness. Thus, in his influential text-
book Classical Mechanics, Herbert Goldstein introduces the physical framework—space, time,
simultaneity, and mass—by writing [1, p. 1] that “these concepts will not be analyzed criti-
cally here; rather, they will be assumed as undefined terms whose meanings are familiar to the
reader." This familiarity is understood to flow from everyday experience with Newtonian objects defined as positions in an abstract Cartesian space { q^i_n(t) | i = 1, ..., 3; n = 1, ..., N } of infinite extent, whose configuration develops through their functional dependence on the universal time t flowing forward uniformly. Indeed, the Newtonian picture is so central to conventional understandings of the "everyday" that more than one hundred years after Einstein's annus mirabilis, it is the relativistic character of the Global Positioning System (GPS) found in billions of smartphones that feels distinctly unfamiliar, and "weirdness" still seems an apt term for quantum phenomena. Moreover, it is easy to forget that much of the Newtonian worldview seemed similarly "weird" to many in Newton's day, especially the uniform linearity of time, a notion seemingly at odds with certain varieties of human experience outside the laboratory, more readily described in the language of nonuniform and cyclical flows of time.
As early as 1908, Minkowski [2, p. 34] declared that: “space and time as such must fade
away into shadow, and only a kind of union of the two will maintain its reality.” Although ini-
tially resistant to Minkowski’s tensor formulation, Einstein’s 1912 exposition of special relativ-
ity [2, p. 128] elaborates the advantages of taking the event in 4D spacetime as the fundamental
object. Then, in formulating general relativity, the deconstruction of the Newtonian view of
space was a crucial step, as emphasized by Einstein in his 1921 lecture at Princeton [3, pp. 2–3].
Arguing that direct experience must be the basis for physical concepts, he declared that, “the
earth’s crust plays such a dominant role in our daily life in judging the relative position of bod-
ies that it has led to an abstract conception of space which certainly cannot be defended.” That
contemporary textbooks on relativity must still repeat Einstein’s identification of the spacetime
event as actual experience—superseding antique notions of infinite Euclidean space he deemed
illusory—indicates not only the conceptual complexity of relativity, but also a continuing cultural
disparity between modern physics and other realms of human knowledge.
In 1937, Fock [4] generalized the Newtonian picture to relativistic form by writing events in 4D Minkowski spacetime as

{ x^μ_n(τ) | μ = 0, ..., 3; n = 1, ..., N },

where x⁰_n(τ) = c t_n(τ) represents the time registered for the event on the laboratory clock. These events describe a configuration that evolves as the scalar parameter τ, identified by Fock with the proper time, advances monotonically. Writing

ẋ^μ(τ) = dx^μ/dτ

he showed that by minimizing the action

S = ∫ dτ [ ½ m ẋ² + (e/c) ẋ^μ A_μ ]    (1.1)

for a point event in an electromagnetic potential A_μ(x), one obtains the classical relativistic equations of motion. Here and in the rest of the book we take the flat metric in 4D spacetime to be

η_μν = diag(−1, 1, 1, 1).

Fock observed that the elimination of τ in favor of t in these equations is generally difficult, but is easily accomplished for the free event satisfying ẍ^μ = 0 as

ẋ(τ) = (ẋ⁰(τ), ẋ(τ)) = (u⁰, u)  ⟹  dx/dt = (dx/dτ)/(dt/dτ) = u/(u⁰/c)

for constant (u⁰, u). Still, Fock's generalization was not yet complete.

In the Newtonian picture, a point particle whose position is described by the 3-vector trajectory x(t) may follow any continuous curve. In 1941, Stueckelberg [5, 6] observed that the relativistic generalization described by Fock cannot represent all possible spacetime curves because the evolution parameter is identified with the proper time of the motion. In particular, any worldline whose time evolution reverses direction must cross the spacelike region that separates future-oriented trajectories from past-oriented trajectories. Therefore, in curves of this type the sign of ẋ²(τ) will change twice and the computed proper time interval

ds(τ) = (1/c) √(−η_μν dx^μ dx^ν) = (1/c) √(−ẋ²(τ)) dτ

fails as a parameterization. Recognizing a physical meaning in curves of this type, Stueckelberg argued for their inclusion in relativistic mechanics, requiring the introduction of an independent evolution parameter τ, analogous to the time t in the Newtonian picture, and related to the proper time s through the dynamical relation c² ds²(τ) = −ẋ²(τ) dτ². In this, he followed Einstein's approach, by deprecating an historical abstraction he saw as an obstruction to clear physical understanding of observed phenomena.
Stueckelberg's interest in general 4D curves can be understood from Figure 1.1 (adapted from [5]). In his model, pair annihilation is observed in curve B when the worldline reverses its time direction, because laboratory apparatus registers two events (two points on the worldline) appearing at coordinate time t = t₁ but none at t = t₂. The event first propagates forward in t (with ẋ⁰ > 0) and then propagates backward in t (with ẋ⁰ < 0), continuing to earlier times while advancing in space. Stueckelberg's identification of the ẋ⁰ < 0 piece of the trajectory with an antiparticle observed in the laboratory will be discussed in detail in Chapter 2.
Figure 1.1: Three types of worldline identified by Stueckelberg.
In a similar way, curve C represents pair creation as two events are observed at t = t₂ but none at t = t₁. These curves may thus be seen as the smooth classical equivalent of a Feynman spacetime diagram, and the physical picture they present is known as the Feynman–Stueckelberg interpretation of antiparticles [7, 8].
Stueckelberg recognized that the standard Maxwell field F^μν(x) alone would not permit c² ds²(τ) = −ẋ² dτ² to change sign and proposed a modified Lorentz force

Dẋ^μ/Dτ = dẋ^μ/dτ + Γ^μ_νρ ẋ^ν ẋ^ρ = F^μν(x) g_νρ ẋ^ρ + G^μ(x)    (1.2)

with local metric g_μν and compatible connection Γ^μ_νρ. He also included a new vector field G^μ(x) that is required to overcome conservation of ẋ², as seen through

D(ẋ²)/Dτ = 2 ẋ_μ Dẋ^μ/Dτ = 2 ẋ_μ G^μ(x)  ⟶  0  as  G^μ → 0.

In the absence of G^μ, spacetime curves are single-valued in x⁰ and may, in principle, be reparameterized by the proper time of the motion.

As a simple example we consider a particle in flat space in a constant electric field E = E ẑ and take G^μ = 0. Writing the velocity ẋ = (c ṫ, 0, 0, ż), the equations of motion reduce to

c ẗ = F⁰ⁱ ẋ_i = E ż    z̈ = F³⁰ ẋ₀ = cE ṫ

with solution

t(τ) = (1/E) sinh Eτ    z(τ) = z(0) + (c/E)(cosh Eτ − 1)

which can be reparameterized by t as

z(t) = z(0) + (c/E) [ √(1 + (Et)²) − 1 ].

The velocities are

ṫ(τ) = cosh Eτ    ż(τ) = c sinh Eτ

confirming that the mass is conserved with −ẋ^μ ẋ_μ = c² ṫ² − ż² = c², and so

c² (dt/dτ)² [ 1 − (1/c²)(dz/dt)² ] = c²  ⟶  ṫ = [ 1 − v²/c² ]^(−1/2) = γ,

where v = dz/dt.

Now, by contrast, we consider the particle in a constant field G^μ = G ẑ and take E = 0, so that the equations of motion are

c ẗ = 0    z̈ = G

with solution

t(τ) = τ    z = ½ G τ² = ½ G t².

In this case the mass decreases with τ as

−ẋ_μ ẋ^μ = c² − G² τ²

and the motion may become spacelike (superluminal).
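As an illustrative check of the constant-field example, the following short numerical sketch integrates the equations of motion in τ and compares them with the hyperbolic solution above; the field strength, step size, and initial conditions are sample values chosen for the sketch (in units with c = 1) and do not appear in the text.

import numpy as np

E = 0.5                               # sample constant electric field, units with c = 1
def rhs(y):                           # y = (t, z, tdot, zdot); c*t'' = E z', z'' = c E t'
    t, z, tdot, zdot = y
    return np.array([tdot, zdot, E*zdot, E*tdot])

y = np.array([0.0, 0.0, 1.0, 0.0])    # event starts at rest in space with tdot = 1
dtau, steps = 1e-3, 4000
for _ in range(steps):                # fourth-order Runge-Kutta in the chronological time tau
    k1 = rhs(y); k2 = rhs(y + 0.5*dtau*k1)
    k3 = rhs(y + 0.5*dtau*k2); k4 = rhs(y + dtau*k3)
    y = y + (dtau/6.0)*(k1 + 2*k2 + 2*k3 + k4)

tau = steps*dtau
print("t:", y[0], "exact", np.sinh(E*tau)/E)
print("z:", y[1], "exact", (np.cosh(E*tau) - 1)/E)
print("tdot^2 - zdot^2 =", y[2]**2 - y[3]**2)   # remains 1: the mass -xdot^2 = c^2 is conserved

Replacing the driving terms E*zdot, E*tdot by 0, G reproduces the constant G^μ case, for which the printed invariant decreases as c² − G²τ² rather than remaining constant.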
1.2 THE TWO ASPECTS OF TIME
As seen in the previous section, curves B and C in Figure 1.1 cannot be parameterized by the co-
ordinate time because they are double-valued in x0, and cannot be parameterized by the proper
time of the motion s because s2 becomes negative in the region of the time reversal point.
Realization of the classical Feynman–Stueckelberg picture thus requires the introduction of a
parameter τ entirely independent of the spacetime coordinates—an irreducible chronological
(historical) time, similar in its role to the external time t in nonrelativistic Newtonian mechan-
ics. The simplicity of this picture in accounting for the observed phenomena of pair processes
strongly supports the conclusion [9] that time must be understood as two distinct physical phe-
nomena, chronology τ and coordinate x⁰. A laboratory clock registers the coordinate time of an
event occurrence much as a 3D array of detectors (meter sticks) registers the event’s coordinate
position.
The chronological time determines the order of occurrence of multiple events, with natural
implications for relations of causality. Thus, when laboratory equipment reparameterizes the
observed events along curve B in Figure 1.1 by x0, two events approaching one another will be
observed at t
0. But the underlying physics will be determined by field
interactions at the locations of four distinct, ordered events, governed by a microscopic dynamics
such as (1.2) and registered sequentially at t
t1 and again at t
0, and later at t
0, again at t
t1, t
t1.
D
D
D
D
D
D
In this sense, there are no closed timelike curves in this picture. The so-called grandfather paradoxes, by which one may return to an earlier time to interfere with the circumstances that brought about one's own physical presence and agency, are thus resolved. We notice that the return trip to a past coordinate time x⁰ must take place while the chronological time τ continues to increase. Since the occurrence of event x^μ(τ₁) at τ₁ is understood to be an irreversible process that cannot be changed by a subsequent event occurring at the same spacetime location, x^μ(τ₂) = x^μ(τ₁) when τ₂ > τ₁, the return trip cannot erase the earlier trajectory. This restriction is analogous to the conceptually simpler observation in nonrelativistic physics that a process may produce new events at any given moment, but cannot delete from the historical record events that occurred at an earlier moment.
A more complex problem is the twin scenario, in which a traveler initially at rest in an inertial frame makes a trip of total distance d at speed v, so that the round trip time measured by a clock in this frame is Δt = d/v. The coordinates assigned to the traveler in the rest frame evolve as

x = (ct, x) = (cγτ, x₀ + uτ),              0 ≤ τ ≤ Δτ/2
            = (cγτ, x₀ + uΔτ − uτ),    Δτ/2 ≤ τ ≤ Δτ,

where

u = dx/dτ = (dx/dt)(dt/dτ) = γv    Δt = (dt/dτ) Δτ = γΔτ

so that d = uΔτ = vΔt. The coordinates assigned to the traveler in a co-moving frame evolve as

x′ = (ct′, x′) = (cτ, 0)

and so the elapsed time registered on the traveler's clock is Δτ = Δt/γ. This result is consistent with the usual presentation of the twin scenario.
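A brief numerical illustration of this bookkeeping, using sample values (v = 0.8c and d = 8 light-years, chosen here for the sketch and not taken from the text):

import math
c = 1.0
v, d = 0.8*c, 8.0                     # sample speed and round-trip distance (light-years)
dt = d/v                              # round-trip time Delta t on the rest-frame clock
gamma = 1.0/math.sqrt(1.0 - (v/c)**2)
dtau = dt/gamma                       # elapsed time Delta tau on the traveler's clock
u = gamma*v                           # u = dx/dtau
print("Delta t =", dt, " gamma =", gamma, " Delta tau =", dtau)
print("d = u*Delta tau =", u*dtau, " = v*Delta t =", v*dt)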
1.3 THE "PROPER TIME" FORMALISM IN QED
Although this book focuses on relativistic classical mechanics, we make a brief digression into the application of spacetime parameterization methods by Schwinger and Feynman in developing quantum electrodynamics. In his 1951 calculation of vacuum polarization in an external electromagnetic field, Schwinger [10] represented the Green's function for the Dirac field as a parametric integral and formally transformed the Dirac problem into a dynamical theory in which the integration variable acts as an independent time. Applying his method to the Klein–Gordon equation, we express the Green's function as

G = 1 / [ (p − eA/c)² − m² + iε ]

so that writing

G(x, x′) = ⟨x| G |x′⟩ = i ∫₀^∞ ds e^(−i(m² − iε)s) ⟨x| e^(−i(p − eA/c)² s) |x′⟩    (1.3)

the function

G(x, x′; s) = ⟨x(s)|x′(0)⟩ = ⟨x| e^(−i(p − eA/c)² s) |x′⟩

satisfies

i (∂/∂s) ⟨x(s)|x′(0)⟩ = (p − (e/c)A)² ⟨x(s)|x′(0)⟩    (1.4)

with the boundary condition

lim_(s→0) ⟨x(s)|x′(0)⟩ = δ⁴(x − x′).

Schwinger regarded x^μ(s) and π^μ(s) = p^μ(s) − (e/c)A^μ(s) as operators in a Heisenberg picture that satisfy canonical relations

[x^μ, π^ν] = iη^μν    dx^μ/ds = −i[x^μ, K]    (1.5)

[π^μ, π^ν] = (ie/c) F^μν    dπ^μ/ds = −i[π^μ, K]    (1.6)

where K = (p − eA/c)². Using (1.5) and (1.6) we find

ẋ^μ(s) = −i[x^μ, K] = −i[x^μ, (p − (e/c)A)²] = 2(p^μ − (e/c)A^μ)    (1.7)

and so may perform the Legendre transformation

∫ ds L = ∫ ds (ẋ^μ p_μ − K) = ∫ ds [ ¼ ẋ² + (e/c) ẋ · A ]

whose classical limit takes the form of the Fock action (1.1). Although Schwinger found this representation useful because the scalar parameter s is necessarily independent of x^μ and ẋ^μ, and so respects Lorentz and gauge invariance, it is known [8] as the Fock–Schwinger "proper time method."

DeWitt [11] regarded (1.4) as defining the Green's function for a Schrödinger equation

i (∂/∂s) ψ_s(x) = K ψ_s(x) = (p − (e/c)A)² ψ_s(x)    (1.8)

which he used for quantum mechanical calculations in curved spacetime. Similarly, Feynman [12] used (1.8) in his derivation of the path integral for the Klein–Gordon equation. He regarded the integration (1.3) of the Green's function with the weight e^(−im²s) as the requirement that asymptotic solutions of the Schrödinger equation be stationary eigenstates of the mass operator i∂_τ. To pick the mass eigenvalue one extends the lower limit of integration in (1.3) from 0 to −∞, and adds the requirement that G(x, x′; s) = 0 for s < 0. Feynman noted that this requirement, equivalent to imposing retarded causality in chronological time s, leads to the Feynman propagator Δ_F(x − x′) whose causality properties in t are rather more complex. Related issues of causality arise in classical relativistic field theory.
1.4 THE STUECKELBERG–HORWITZ–PIRON (SHP) FRAMEWORK
In 1973, Horwitz and Piron set out to systematically construct a manifestly covariant relativistic
mechanics with interactions. They observed that the principal difficulties in previous efforts
arose when attempting to define observables that respect a priori constraints associated with the
presumed dynamics. For example, although it may seem natural to choose the proper time of
the motion as the worldline parameterization, Stueckelberg showed that this choice prohibits
a classical account of observed pair phenomena. Worse still, in the Fock–Schwinger formalism
identification of s with the proper time clashes with the formulation of quantum observables,
since √(−dx²) = √(−ẋ²) ds does not commute with x^μ, rendering the relations (1.5) and (1.6) difficult to interpret rigorously.
A closely related question is reparameterization invariance. Although one might regard the parameter τ as arbitrary, the Fock action (1.1) is clearly not invariant under τ → τ′ = f(τ) because the Lagrangian is not homogeneous of first degree in the velocities. Invariance is often
restored by replacing the quadratic term in the action with a first-order form such as
S = ∫ dτ [ mc √(−ẋ²) + (e/c) ẋ^μ A_μ ]

which leads to fixed particle masses

p_μ = ∂L/∂ẋ^μ = mc ẋ_μ/√(−ẋ²) + (e/c) A_μ  ⟶  (p − (e/c)A)² = −m²c²

and restricts the system dynamics to the timelike region by imposing ẋ² < 0.

Although the Fock action permits mass exchange, the mass of individual particles is fixed for interactions governed by Stueckelberg's force law (1.2) when G^μ = 0. Similarly, in the Fock–Schwinger formalism (1.7) shows that ẋ² = 4K and thus corresponds to a classical constant of the motion. Thus, fixed mass is demoted from the status of a priori constraint to that of a posteriori conservation law for appropriate interactions.
x2
P
D
D
Rejecting such a priori restrictions, Horwitz and Piron postulate that classical particles
and quantum states can be described in an unconstrained 8D phase space
.ct; x/
x
D
(cid:18) E
c
p
D
; p(cid:19)
with canonical equations
x(cid:22)
P
D
dx(cid:22)
d (cid:28) D
@K
@p(cid:22)
p(cid:22)
P
D
dp(cid:22)
d (cid:28) D (cid:0)
@K
@x(cid:22) ;
where K is a scalar function that determines the system dynamics and its conservation laws. This
framework is seen to include Newtonian mechanics by imposing the restrictions
which leads to
t
(cid:28)
D
K
D
H.x; p/
E
(cid:0)
dxi
dt D
@H
@pi
dpi
dt D (cid:0)
@H
@xi
dE
dt D
@H
@t
;
where i
1; 2; 3.
D
To describe a free relativistic particle one may write
so that dt=d (cid:28)
D
K
p2
2M
E=Mc2 and d x=dt
D
(cid:0)!
x(cid:22)
P
D
p(cid:22)
M
and
p(cid:22)
P
D
0
pc2=E. In particular, for a timelike particle,
D
p2
M 2 D (cid:0)
x2
P
D
m2c2
M 2 D
constant;
where the dynamical quantity m2.(cid:28)/ is conserved because @K=@(cid:28)
0. Similarly, a relativistic
e
particle in a four-potential A(cid:22).x/ is characterized by K
c A/2=2M with results compa-
rable to the classical limit of the Fock–Schwinger system. Moreover, Horwitz and Piron con-
sidered a two-body problem with a scalar interaction characterized by the Hamiltonian
.p
D
D
(cid:0)
K = p₁²/2M₁ + p₂²/2M₂ + V(|x₁ − x₂|),

where

V(|x₁ − x₂|) = V(ρ)    ρ = √( (x₁ − x₂)² − (t₁ − t₂)² )

generalizes action at a distance to action at a spacelike interval. As in nonrelativistic mechanics, the center of mass and relative motion may be separated as

K = P^μ P_μ/2M + p^μ p_μ/2m + V(ρ),

where

P^μ = p₁^μ + p₂^μ    p^μ = (M₂ p₁^μ − M₁ p₂^μ)/M    M = M₁ + M₂    m = M₁M₂/M.

The center of mass motion is thus free, satisfying Ṗ^μ = 0. For the relative motion, one has

ṗ^μ = −∂K/∂x_μ = −∂V/∂x_μ    (1.9)

in which case we may identify −∂V/∂x_μ with G^μ in (1.2) so that individual particle masses are no longer necessarily fixed. In this framework, Horwitz and Arshansky found relativistic generalizations for the standard central force problems, including scattering [13, 14] and bound states [15, 16]. This formulation of the relativistic two-body problem can be extended to many bodies in the context of classical gauge theory, providing the basis for the SHP approach to classical relativistic mechanics.
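As a concrete check of the center-of-mass separation, the following sketch verifies numerically that the two-body kinetic term equals the sum of the center-of-mass and relative terms; the masses and momenta are sample values invented for the illustration, with metric diag(−1, 1, 1, 1).

import numpy as np
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
M1, M2 = 1.0, 2.5
p1 = np.array([3.0, 0.2, -0.1, 0.4])
p2 = np.array([4.0, -0.3, 0.5, 0.1])
M, m = M1 + M2, M1*M2/(M1 + M2)
P = p1 + p2                            # total momentum P^mu
p = (M2*p1 - M1*p2)/M                  # relative momentum p^mu
lhs = (p1 @ eta @ p1)/(2*M1) + (p2 @ eta @ p2)/(2*M2)
rhs = (P @ eta @ P)/(2*M) + (p @ eta @ p)/(2*m)
print("p1^2/2M1 + p2^2/2M2 =", lhs, "  P^2/2M + p^2/2m =", rhs)   # identical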
1.5 BIBLIOGRAPHY
[1] Goldstein, H. 1965. Classical Mechanics, Addison-Wesley, Reading, MA. 3
[2] Einstein, A. 1996. Specielle Relativitätstheorie, George Braziller, New York, English and
German on facing pages. 3
[3] Einstein, A. 1956. The Meaning of Relativity, Princeton University Press, Princeton, NJ.
DOI: 10.4324/9780203449530. 3
[4] Fock, V. 1937. Physikalische Zeitschrift der Sowjetunion, 12:404–425. http://www.neo-classical-physics.info/uploads/3/4/3/6/34363841/fock_-_wkb_and_dirac.pdf 4
[5] Stueckelberg, E. 1941. Helvetica Physica Acta, 14:321–322 (in French). 4, 5
[6] Stueckelberg, E. 1941. Helvetica Physica Acta, 14:588–594 (in French). 4
[7] Halzen, F. and Martin, A. D. 1984. Quarks and Leptons: An Introductory Course in Modern
Particle Physics, John Wiley & Sons, New York. DOI: 10.1119/1.14146. 5
[8] Itzykson, C. and Zuber, J. B. 1980. Quantum Field Theory, McGraw-Hill, New York. DOI:
10.1063/1.2916419. 5, 9
[9] Horwitz, L., Arshansky, R., and Elitzur, A. 1988. Foundations of Physics, 18:1159. 7
[10] Schwinger, J. 1951. Physical Review, 82(5):664–679. https://link.aps.org/doi/10.
1103/PhysRev.82.664 8
[11] DeWitt, B. 1965. Dynamical Theory of Groups and Fields, Gordon and Breach, New York.
DOI: 10.1119/1.1953053. 9
[12] Feynman, R. 1950. Physical Review, 80:440–457. 9
[13] Arshansky, R. and Horwitz, L. 1989. Journal of Mathematical Physics, 30:213. 11
[14] Arshansky, R. and Horwitz, L. 1988. Physics Letter A, 131:222–226. 11
[15] Arshansky, R. and Horwitz, L. 1989. Journal of Mathematical Physics, 30:66. 11
[16] Arshansky, R. and Horwitz, L. 1989. Journal of Mathematical Physics, 30:380. 11
PART II
Theory
CHAPTER 2
Canonical Relativistic Mechanics
2.1 LAGRANGIAN AND HAMILTONIAN MECHANICS
In many ways, the picture underlying classical relativistic mechanics is a generalization of its Newtonian predecessor, with the replacements

3D space  ⟶  4D spacetime
chronological time t  ⟶  chronological time τ
Galilean covariance  ⟶  Lorentz covariance

made in an analogous canonical structure. A spacetime event x^μ refers to the 4-tuple (ct, x) of coordinate observables that can, in principle, be measured by a clock and an array of spatially arranged detectors in a laboratory.¹ Each event occurs at a chronological time τ such that for τ₂ > τ₁ the event x^μ(τ₂) is said to occur after the event x^μ(τ₁). Event occurrence is an irreversible process—a given event cannot be influenced by a subsequent event, although laboratory equipment may present the history of events in the order of their recorded values of x⁰ = ct.

Following Fock and Stueckelberg, we consider a relativistic particle to be a continuous sequence of events traced out by the evolution of a function x^μ(τ) as τ proceeds monotonically from −∞ to ∞. The chronological time τ is taken to be an external universal parameter, playing a role similar to that of t in Newtonian physics.

In Stueckelberg–Horwitz–Piron (SHP) theory, event dynamics are defined on an unconstrained 8D phase space (x^μ, p^μ) by the canonical equations

ẋ^μ = dx^μ/dτ = ∂K/∂p_μ    ṗ^μ = dp^μ/dτ = −∂K/∂x_μ    (2.1)

where K(x, p, τ) is a Lorentz invariant Hamiltonian. This framework thus inherits the canonical structure of Newtonian analytical mechanics, with the additional complexity of Lorentz covariance. Defining Poisson brackets as

{F, G} = (∂F/∂x^μ)(∂G/∂p_μ) − (∂F/∂p_μ)(∂G/∂x^μ)

¹Although this description oversimplifies the measurement process, it will be sufficient here.
we have for any function on phase space,

dF/dτ = (∂F/∂x^μ)(dx^μ/dτ) + (∂F/∂p_μ)(dp_μ/dτ) + ∂F/∂τ = (∂F/∂x^μ)(∂K/∂p_μ) − (∂F/∂p_μ)(∂K/∂x^μ) + ∂F/∂τ = {F, K} + ∂F/∂τ

generalizing the result in nonrelativistic mechanics. Since {K, K} ≡ 0, the Hamiltonian is a constant of the motion unless K depends explicitly on τ. Because of its unconstrained canonical structure, the conditions for the Liouville–Arnold theorem apply: the 4D system is integrable—solvable by quadratures—if it possesses 8 independent conserved quantities F_i, i = 1, ..., 8 satisfying {F_i, F_j} = 0 and {K, F_i} = 0.

Performing the Legendre transformation from the Hamiltonian to the Lagrangian

L = ẋ^μ p_μ − K,

variation of the action

δS = δ ∫ dτ L(x, ẋ, τ) = 0

leads to the Euler–Lagrange equations

(d/dτ)(∂L/∂ẋ^μ) − ∂L/∂x^μ = 0

in familiar form. Under transformations x → x′ = f(x) that leave the action invariant, the Noether theorem follows in the usual manner, so that for infinitesimal variation δx we find

(d/dτ)[ (∂L/∂ẋ^μ) δx^μ ] = 0

leading to the conserved quantity (∂L/∂ẋ^μ) δx^μ. In particular, since L is a scalar invariant under Lorentz transformations Λ with antisymmetric generators M^μν

x′ = Λx  ⟶  δx = x′ − x ≃ δω_μν M^μν x

the quantity

l^μν = (∂L/∂ẋ_λ)(M^μν)^λ_σ x^σ = x^μ p^ν − x^ν p^μ

is conserved, and the Poisson bracket relations

{l^μν, l^ρσ} = g^νρ l^σμ + g^μρ l^νσ + g^νσ l^μρ + g^μσ l^ρν

express the Lie algebra of the Lorentz group. The components of l^μν can be split into

L^i = ε^ijk x_j p_k    A^i = x⁰ p^i − x^i p⁰    (2.2)

so that

l² = 2(L² − A²)    (2.3)

generalizes the conserved nonrelativistic total angular momentum in central force problems.
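Because the canonical structure above is unchanged apart from the extra index, equations (2.1) can be integrated exactly as in nonrelativistic mechanics. The short sketch below evolves them for a sample invariant Hamiltonian K = p²/2M + (k/2) x² (this particular choice is an illustration only; scalar potentials are taken up in Section 2.3) and checks that K and a component of l^μν remain constant.

import numpy as np
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
M, k = 1.0, 0.3
def K(x, p):  return (p @ eta @ p)/(2*M) + 0.5*k*(x @ eta @ x)
def rhs(x, p):                       # xdot^mu = p^mu/M, pdot^mu = -k x^mu for this sample K
    return p/M, -k*x
x = np.array([0.0, 1.0, 0.0, 0.0])
p = np.array([1.2, 0.0, 0.5, 0.0])
K0 = K(x, p)
l01 = x[0]*p[1] - x[1]*p[0]          # l^{01} = x^0 p^1 - x^1 p^0
dtau = 1e-3
for _ in range(5000):                # RK4 integration in tau
    k1x, k1p = rhs(x, p)
    k2x, k2p = rhs(x + 0.5*dtau*k1x, p + 0.5*dtau*k1p)
    k3x, k3p = rhs(x + 0.5*dtau*k2x, p + 0.5*dtau*k2p)
    k4x, k4p = rhs(x + dtau*k3x, p + dtau*k3p)
    x = x + (dtau/6)*(k1x + 2*k2x + 2*k3x + k4x)
    p = p + (dtau/6)*(k1p + 2*k2p + 2*k3p + k4p)
print("K drift:", K(x, p) - K0)                       # ~0, since dK/dtau = {K, K} = 0
print("l01 drift:", x[0]*p[1] - x[1]*p[0] - l01)      # ~0 for a Lorentz invariant K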
We write the velocity of a general event as

ẋ(τ) = (c ṫ, ẋ) = (u⁰(τ), u(τ))

with no restrictions on its orientation—ẋ may be timelike, lightlike, or spacelike. In the timelike case, an observer can boost to a co-moving frame in which

ẋ(τ) = (c ṫ, 0)

and so by Lorentz invariance, ẋ² = −(c ṫ)² in any instantaneous frame. Still, while ṫ = 1 may be a dynamical result in the rest frame, it is not an a priori constraint.
2.2 THE FREE RELATIVISTIC PARTICLE
As in the earlier work of Fock and Schwinger, the free particle Hamiltonian is taken to be

K = p²/2M

generalizing the nonrelativistic form. Applying the canonical equations (2.1), the equations of motion are

ẋ^μ = ∂K/∂p_μ = p^μ/M    ṗ^μ = −∂K/∂x_μ = 0

with solution

x^μ = x^μ₀ + u^μ τ = x^μ₀ + (p^μ/M) τ

as seen previously in Section 1.4. From p^μ = M ẋ^μ, a Legendre transformation leads to the free particle Lagrangian

L = ẋ^μ p_μ − K = ½ M ẋ²

and so naturally,

L = ½ M ẋ² = K = p²/2M = constant.

Given the absence of constraints, the sign of p² depends on its spacetime orientation. Introducing the mass m² = −p²/c² for a timelike event, we have ẋ² = −m²c²/M², and we generally take m = M so that ṫ = 1 in the rest frame. For this case,

−c² = ẋ^μ ẋ_μ = |ẋ|² − (c ṫ)² = −c² ṫ² [ 1 − (1/c²)(dx/dt)² ]  ⟶  ṫ = ±[ 1 − β² ]^(−1/2) = ±γ,

where β = v/c, v = dx/dt, and γ is the usual relativistic dilation factor.

For a timelike free event evolving forward in coordinate time (ṫ ≥ 1), we choose ṫ = +γ and recover the standard representation of relativistic velocity:

ẋ = (u⁰, u) = γ (c, v) = (E/Mc, p/M),

where E > 0.

Choosing ṫ = −γ produces a solution of particular interest to Stueckelberg, the timelike free event evolving backward in coordinate time (ṫ ≤ −1),

ẋ = −γ (c, v) = (−|E|/Mc, p/M)

describing a negative energy event tracing out a trajectory that when reordered by the laboratory clock describes an antiparticle.

The general solution ẋ^μ = p^μ/M for a free particle can also accommodate tachyon (p² > 0) and lightlike (p² = 0) worldlines with no loss of generality.
2.3 THE RELATIVISTIC PARTICLE IN A SCALAR POTENTIAL
Adding a scalar potential V(x) to the Hamiltonian

K = p²/2M + V(x)

leads to the equations of motion

ẋ^μ = ∂K/∂p_μ = p^μ/M    ṗ^μ = −∂K/∂x_μ = −∂V/∂x_μ.

Equivalently, the Lagrangian formulation is

L = ½ M ẋ² − V(x)    (d/dτ)(∂L/∂ẋ^μ) − ∂L/∂x^μ = 0  ⟶  M ẍ^μ = −∂V/∂x_μ.

As seen in (1.9), this problem may describe the reduced interaction of a two-body problem in relative coordinates.

As a simplified but suggestive model, we consider the scalar potential

V(x) = M a · x,

where a is a constant timelike vector. We choose a frame in which

a = (cg, 0, 0, 0)  ⟶  V(x) = −Mcg x⁰

providing an analogy in the time direction to the approximate nonrelativistic gravitational field close to earth. The equations of motion are

M ẍ^μ = −∂V/∂x_μ = −M a^μ

becoming in this frame

M ẍ⁰ = −Mcg    M ẍ = 0

with solution

t(τ) = t₀ + ṫ₀ τ − ½ g τ²    x(τ) = x₀ + u₀ τ,

where g, t₀, and ṫ₀ are taken as positive constants. We recognize this parabolic trajectory as describing the pair annihilation process shown in curve B of Figure 1.1. For simplicity, we now take t₀ = 0 and x₀ = 0. Thus, the event velocity is

ṫ(τ) = ṫ₀ − gτ    ẋ(τ) = u₀ = constant

and the trajectory reverses t-direction at t* = ṫ₀²/2g when τ* = ṫ₀/g. From

p^μ = ∂L/∂ẋ_μ = M ẋ^μ  ⟶  p⁰ = E/c = Mc ṫ  ⟶  E = Mc² ṫ

we see that the event propagates forward in t with E > 0 for τ < τ* and backward in t with E < 0 for τ > τ*. The τ > τ* portion of the trajectory corresponds to Stueckelberg's interpretation of an antiparticle. The velocity remains timelike except near τ* in the interval

c² (ṫ₀ − gτ)² − u₀² < 0  ⟶  τ* − |u₀|/cg < τ < τ* + |u₀|/cg,

where it becomes spacelike (tachyonic).

The event trajectory recorded in the laboratory may be reordered according to t. Thus, at coordinate time t = 0, two events will be recorded, a positive energy event that occurred at τ = 0 and a subsequent negative energy event at τ = 2τ*. From this perspective, the two pieces of the worldline appear as a pair of events approaching one another and mutually annihilating at t = t*, with no events recorded with t > t*.

In a similar way, taking ṫ₀ and g to be negative constants, this solution describes a pair creation process. Although this account of pair processes is not physically realistic, we will present a more accurate description in Section 4.6 using the full apparatus of classical SHP electrodynamics.
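A short numerical sketch of this reordering, using sample values ṫ₀ = 1, g = 1, u₀ = 0.3 in units with c = 1 (only the functional form of the trajectory comes from the text):

import numpy as np
c, g, tdot0, u0 = 1.0, 1.0, 1.0, 0.3
tau = np.linspace(0.0, 2.2, 2201)
t = tdot0*tau - 0.5*g*tau**2            # t(tau) with t0 = 0
tdot = tdot0 - g*tau                    # E/(M c^2) = tdot changes sign at tau* = tdot0/g
tau_star, t_star = tdot0/g, tdot0**2/(2*g)
print("turning point: tau* =", tau_star, " t* =", t_star)
sample_t = 0.3                          # any laboratory time below t*
branches = tau[np.isclose(t, sample_t, atol=5e-4)]
print("two events recorded near t =", sample_t, ": tau ~", branches[0], "and", branches[-1])
spacelike = tau[np.abs(c*tdot) < np.abs(u0)]
print("spacelike (tachyonic) interval: tau in [", spacelike[0], ",", spacelike[-1], "]")

Reordering the pairs (t, tau) by t rather than by tau makes the two branches appear as a particle and an antiparticle that meet and annihilate at t = t*.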
2.4 TWO-BODY PROBLEM WITH SCALAR POTENTIAL
As we showed in Section 1.4, the two-body problem with scalar interaction can be written as an equivalent one-body problem

K = p₁²/2M₁ + p₂²/2M₂ + V(x₁ − x₂) = P^μ P_μ/2M + p^μ p_μ/2m + V(x),    (2.4)

where the center of mass motion satisfies Ṗ^μ = 0. Arshansky [1] studied classical problems of this type (for the extension to quantum mechanics, see [2]), generalizing the standard nonrelativistic central force problems by taking

V(x) = V(√(x²))  ⟶  V(x) = V(√(x² − c²t²))

for spacelike separations, x² > 0. Restriction to the spacelike region can be accomplished through a representation in hyperspherical coordinates of the type

x = ρ (sinh β, r̂ cosh β)    r̂ = (sin θ cos φ, sin θ sin φ, cos θ)    r̂² = 1.

But it was found that reasonable solutions lie in a subspace of the full spacelike region, found by choosing a spacelike unit vector n^μ and solving the equations of motion in the O(2,1)-invariant restricted space

{ x | [x − (x·n)n]² ≥ 0 }

for which the component of x orthogonal to n is itself spacelike. Arshansky has described this as a classical case of spontaneous symmetry breaking leading to a lowering of the energy spectrum. Taking n = (0, 0, 0, 1) this region has the representation

x = ρ (q̂ sin θ, cos θ)    q̂ = (sinh β, cosh β cos φ, cosh β sin φ)    q̂² = 1.    (2.5)

In addition to the O(3,1) invariant l² defined in (2.3), the O(2,1) invariant

N² = L₃² − A₁² − A₂²

with components defined in (2.2) is also conserved and plays a role in characterizing the solutions. In these coordinates, the first integrals

K = p²/2M + V(x) = ½ M ρ̇² + l²/(2Mρ²) + V(ρ) = κ,

which is cyclic in β and φ, and

l² = M² ρ⁴ θ̇² + N²/sin²θ

provide a separation of variables. As in nonrelativistic mechanics, but with an additional degree of freedom, solutions can be found from the four first-order equations

β̇ = 0    φ̇ = 0

ρ̇ = √( (2/M)[ κ − V(ρ) − l²/(2Mρ²) ] )  ⟶  τ = ∫ dρ / √( (2/M)[ κ − V(ρ) − l²/(2Mρ²) ] )

θ̇ = (1/Mρ²(τ)) √( l² − N²/sin²θ )  ⟶  ∫ dτ/(Mρ²(τ)) = ∫ dθ / √( l² − N²/sin²θ )

providing an example of Liouville integrability.
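As an illustration of the quadrature for ρ(τ), the following sketch evaluates the integral numerically for the sample choices V(ρ) = −k/ρ, M = k = 1, κ = −0.3, and l² = 0.5 (these particular values are assumptions made for the example, not values from the text):

import numpy as np
M, k, kappa, l2 = 1.0, 1.0, -0.3, 0.5
def rho_dot2(rho):                     # (2/M)(kappa - V(rho) - l^2/(2 M rho^2)) with V = -k/rho
    return (2.0/M)*(kappa + k/rho - l2/(2*M*rho**2))
rho_a, rho_b = 1.0, 2.0                # radii between which rho_dot2 > 0 for these parameters
r = np.linspace(rho_a, rho_b, 2001)
tau_ab = np.trapz(1.0/np.sqrt(rho_dot2(r)), r)   # tau = integral of d rho / rho_dot
print("chronological time to evolve from rho = 1 to rho = 2:", tau_ab)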
In the quantum case, Horwitz and Arshansky [2–4] solved the bound state problem, leading to a mass spectrum coinciding with the non-relativistic Schrödinger energy spectrum. For small excitations, the corresponding energy spectrum is that of the non-relativistic Schrödinger theory with relativistic corrections.
2.5 MANY-BODY PROBLEM AND STATISTICAL MECHANICS
The many body problem and classical and quantum statistical mechanics, along with applications to bound states, scattering, and relativistic statistical mechanics, are covered extensively in [5]. Here we provide a brief introduction to the subject as preparation for discussion of mass stabilization in Section 4.7.2.

The generalization of (2.4) to N-bodies is

K = Σ_(i=1)^N p_iμ p_i^μ / 2M_i + V(x₁, x₂, ..., x_N)

for which case one may define center of mass coordinates

M = Σ_i M_i    X^μ = Σ_i M_i x_i^μ / M    P^μ = Σ_i p_i^μ

and relative coordinates

p̂_i^μ = p_i^μ − (M_i/M) P^μ    x̂_i^μ = x_i^μ − X^μ

satisfying

Σ_i p̂_i^μ = 0    Σ_i M_i x̂_i^μ = 0

for the phase space. The Poisson brackets are

{X^μ, P^ν} = η^μν    {x̂_i^μ, p̂_j^ν} = η^μν (δ_ij − M_j/M)

and although the relative coordinates do not satisfy canonical Poisson bracket relations, these relations become canonical in the thermodynamic limit N → ∞, for which M_j/M → 0. The invariant Hamiltonian takes the form

K = P^μ P_μ/2M + Σ_i p̂_i^μ p̂_iμ / 2M_i + V(x₁, x₂, ..., x_N)

so that for relative forces, V(x₁, x₂, ..., x_N) = V(x̂₁, x̂₂, ..., x̂_N) and the center of mass motion decouples from the interacting system. The equations of motion

Ẋ^μ = P^μ/M    Ṗ^μ = 0    x̂̇_i^μ = p̂_i^μ/M_i    p̂̇_i^μ = −∂K/∂x̂_iμ = −∂V/∂x̂_iμ

are canonical in form.

In statistical mechanics, one regards the N events as elements in a relativistic Gibbs ensemble. As a generalization of the nonrelativistic formalism, we set a mass shell condition K = κ, however this is not a sufficient restriction because integration over the hyperbolic 4D phase space may run to infinity for finite p^μ p_μ. We must therefore also set an energy shell condition Σ_i E_i = E, where E_i = p_i⁰ (we take c = 1 in this section). Fixing the energy shell is equivalent to choosing a Lorentz frame for the system relative to the measurement apparatus, without which we could not give meaning to the idea of temperature. The microcanonical ensemble of events at fixed energy is then defined as

Γ(κ, E) = ∫ dΩ δ(K − κ) δ(Σ_i E_i − E),

where

dΩ = Π_i d⁴p_i d⁴x_i = d^(4N)p d^(4N)x

is the infinitesimal volume element in the phase space of the many-body system. The entropy and temperature are given by

S(κ, E) = ln Γ(κ, E)    1/T = ∂S(κ, E)/∂E,

where we take the Boltzmann constant k_B = 1.

We may construct a canonical ensemble by extracting a small subensemble Γ_s from its environment Γ_b (the bath), and summing over all possible partitions of energy and mass parameter between the subensemble and the bath,

Γ(κ, E) = ∫ dκ′ dE′ Γ_b(κ − κ′, E − E′) Γ_s(κ′, E′),

where both mass and energy may be exchanged. Similarly, a grand canonical ensemble may be constructed by summing over all possible exchanges of event number and volume between the subensemble and the bath.

We return to the relativistic statistical mechanics in Section 4.7.2 to show that a particle represented as an ensemble of events possesses a mass that tends toward a stable equilibrium, even under perturbations.
2.6
BIBLIOGRAPHY
[1] Arshansky, R. 1986. The classical relativistic two-body problem and symptotic mass con-
servation. Tel Aviv University preprint TAUP 1479-86. 20
[2] Horwitz, L. P. 2015. Relativistic Quantum Mechanics, Springer, Dordrecht, Netherlands.
DOI: 10.1007/978-94-017-7261-7. 20, 21
[3] Arshansky, R. and Horwitz, L. 1989. Journal of Mathematical Physics, 30:66.
[4] Arshansky, R. and Horwitz, L. 1989. Journal of Mathematical Physics, 30:380. 21
[5] Horwitz, L. P. and Arshansky, R. I. 2018. Relativistic Many-Body Theory and Statistical Me-
chanics, 2053–2571, Morgan & Claypool Publishers. http://dx.doi.org/10.1088/978-
1-6817-4948-8 DOI: 10.1088/978-1-6817-4948-8. 21
C H A P T E R 3
25
Classical Electrodynamics
3.1
CLASSICAL GAUGE TRANSFORMATIONS
Historically, classical electrodynamics proceeded from experiment to theory. The Maxwell equa-
tions (1860s) were initially posed as a summary of discoveries in the laboratory, including
the Cavendish experiments in electrostatics (1770s), Coulomb’s studies of electric and mag-
netic forces (1780s), and Faraday’s work on time-varying fields (1830s). But the importance of
Maxwell’s mathematical theory was not fully recognized [1, p. xxv] until its prediction of electro-
magnetic waves traveling at the speed of light was verified by Hertz in 1888. It was the successful
incorporation of optics into electrodynamics that provoked Einstein to study the spacetime sym-
metries underlying Maxwell theory in 1906 and led Fock to associate potential theory with gauge
symmetry in 1929 [2]. Building on the success of such considerations, the Standard Model of
fundamental interactions was developed by requiring invariance under more complex symmetry
groups, as were the many candidates for a successor theory.
As discussed in Chapter 1, Stueckelberg recognized that the perception of a worldline as a
sequence of events following dynamical laws could lead to pair annihilation processes in classical
mechanics. Such worldlines moving in the positive or negative direction of the Einstein time
t should be parameterized by an invariant (cid:28), progressing monotonically in the positive direc-
tion. Horwitz and Piron generalized this notion to make the parameter (cid:28) universal, and in this
way were able to study the relativistic classical dynamics of many body systems. In this chapter,
we approach classical electrodynamics in a similar manner. Instead of restricting the formal-
ism to the known features of Maxwell theory, we begin with the Lorentz invariant Lagrangian
description of a free event and introduce the maximal U(1) gauge invariance applicable to the
action, leading to a generalization of the Stueckelberg force law (1.2). We construct an action
for the field strengths, again applying general principles of Lorentz and gauge invariance, and
obtain (cid:28)-dependent Maxwell-like equations. The resulting framework can be understood as a
microscopic theory of interacting events that reduces to Maxwell electrodynamics in a certain
equilibrium limit. Thus, as we explore SHP theory, our points of comparison will be with the
Maxwell theory we hope to generalize.
The action for a free event
Z d (cid:28) L
S
D
Z d (cid:28)
D
1
2
Mg(cid:22)(cid:23)
x(cid:22)
P
x(cid:23)
P
26
3. CLASSICAL ELECTRODYNAMICS
with g(cid:22)(cid:23).x/ a local metric, is invariant under the addition of a total (cid:28)-derivative
L
(cid:0)!
d
d (cid:28)
L
C
(cid:131) .x; (cid:28) /
L
D
x(cid:22) @
@x(cid:22) (cid:131) .x; (cid:28) /
C P
@
@(cid:28)
C
(cid:131) .x; (cid:28) /
(3.1)
on condition that (cid:131) .x; (cid:28) / vanishes at the endpoints of the action integral. In analogy to x0
it is convenient to introduce the notation
ct,
D
x5
c5(cid:28)
D
x5
P
D
c5
@5
D
1
c5
@
@(cid:28)
and adopt the convention
(cid:11); (cid:12); (cid:13); (cid:14); (cid:15)
0; 1; 2; 3; 5
D
(cid:21); (cid:22); (cid:23); (cid:26); (cid:27)
0; 1; 2; 3;
D
where we skip (cid:11)
be written in the compact form
D
4 to avoid confusion with older notations for ct. In this notation (3.1) can
L
L
(cid:0)!
x(cid:11)@(cid:11)(cid:131) .x; (cid:28) /
C P
(cid:12) x(cid:12) . But we insist that in the presence
suggesting a five dimensional symmetry acting as x0
of matter, x(cid:22) and (cid:28) belong to vector and scalar representations of O(3,1). Still, free fields may
enjoy a 5D symmetry, such as
L(cid:11)
D
(cid:11)
O(4,1) for metric
O(3,2) for metric
L
L
2
2
(cid:17)(cid:11)(cid:12)
(cid:17)(cid:11)(cid:12)
diag.
(cid:0)
1; 1; 1; 1;
diag.
1; 1; 1; 1;
(cid:0)
1/
C
1/
(cid:0)
D
D
which contain O(3,1) as a subgroup. Nevertheless, the higher symmetry does play a role in wave
equations, much as nonrelativistic pressure waves satisfy
(cid:18)
2
r
(cid:0)
1
v2
@2
@t 2
(cid:19) p .x; t /
0
D
speed of sound
v
D
suggesting a 4D symmetry not physically present in the theory of acoustics.
In light of (3.1), we introduce the five potentials a(cid:11).x; (cid:28)/ into the Lagrangian and note
that the action
Z d (cid:28) (cid:18) 1
2
S
D
M
x(cid:22)
P
x(cid:22)
P
C
(cid:19)
e
x(cid:11)a(cid:11)
c P
D
Z d (cid:28) (cid:20) 1
2
M
x(cid:22)
P
x(cid:22)
P
C
e
c
(cid:0)
x(cid:22)a(cid:22)
P
C
(cid:21)
c5a5(cid:1)
is invariant under the 5D local gauge transformation
(cid:0)!
As a brief quantum aside, we may write the canonical momentum
D
C
a(cid:11) .x; (cid:28) /
a0(cid:11) .x; (cid:28) /
a(cid:11) .x; (cid:28) /
@(cid:11)(cid:131) .x; (cid:28) / :
(3.2)
p(cid:22)
D
@L
x(cid:22) D
@
P
M
x(cid:22)
P
C
e
c
a(cid:22)
x(cid:22)
(cid:0)! P
D
1
M
(cid:16)p(cid:22)
e
c
(cid:0)
a(cid:22)(cid:17)
3.2. LORENTZ FORCE 27
to find the Hamiltonian
K
p(cid:22)
x(cid:22)
P
(cid:0)
L
D
D
1
2M
(cid:16)p(cid:22)
e
c
(cid:0)
a(cid:22)(cid:17) (cid:16)p(cid:22)
e
c
(cid:0)
a(cid:22)(cid:17)
ec5
c
a5
(cid:0)
showing that under (3.2) the Stueckelberg–Schrodinger equation
@(cid:28) .x; (cid:28)/
i
(cid:132)
D
K .x; (cid:28)/
(cid:16)i
@(cid:28)
(cid:132)
C
(cid:0)!
ec5
c
a5(cid:17) .x; (cid:28) /
1
2M
(cid:16)p
2
a(cid:17)
e
c
(cid:0)
D
.x; (cid:28)/
enjoys the symmetry [3]
.x; (cid:28)/
!
exp (cid:20) ie
c
(cid:132)
(cid:131).x; (cid:28)/(cid:21) .x; (cid:28)/
expressing the local U(1) gauge transformation in the familiar form introduced by Fock.
LORENTZ FORCE
3.2
To study the interaction of an event with the gauge potentials a(cid:11).x; (cid:28)/, we write the Lagrangian
as
C
with local metric g(cid:22)(cid:23).x/. Applying the Euler–Lagrange derivative to the kinetic term we obtain
C
D
L
Mg(cid:22)(cid:23)
1
2
x(cid:22)
P
x(cid:23)
P
e
c
g(cid:22)(cid:23)
x(cid:22)a(cid:23)
P
ec5
c
a5
(3.3)
g(cid:26)(cid:27) (cid:18) d
d (cid:28)
@
@
x(cid:27) (cid:0)
P
@
@x(cid:27)
(cid:19) 1
2
Mg(cid:22)(cid:23)
x(cid:22)
P
x(cid:23)
P
M
x(cid:26)
R
D
C
M (cid:18)g(cid:26)(cid:27) @g(cid:27)(cid:22)
@x(cid:23) (cid:0)
1
2
g(cid:26)(cid:27) @g(cid:22)(cid:23)
@x(cid:27)
(cid:19)
x(cid:22)
P
x(cid:23):
P
Using the symmetry of the first term in parentheses under (cid:22)
g(cid:26)(cid:27) @g(cid:27)(cid:22)
@x(cid:23) P
x(cid:22)
x(cid:23)
P
D
1
2
(cid:18)g(cid:26)(cid:27) @g(cid:27)(cid:22)
@x(cid:23) C
(cid:23)
$
g(cid:26)(cid:27) @g(cid:27)(cid:23)
@x(cid:22)
(cid:19)
x(cid:22)
P
x(cid:23)
P
we find
where
g(cid:26)(cid:27) (cid:18) d
d (cid:28)
@
x(cid:27) (cid:0)
@
P
@
@x(cid:27)
(cid:19) 1
2
Mg(cid:22)(cid:23)
x(cid:22)
P
x(cid:23)
P
M
x(cid:26)
R
D
C
M (cid:128) (cid:26)
x(cid:22)
(cid:22)(cid:23) P
x(cid:23)
P
D
M
x(cid:22)
D
P
D(cid:28)
;
(cid:128) (cid:26)
(cid:22)(cid:23) D
1
2
g(cid:26)(cid:27) (cid:18) @g(cid:27)(cid:22)
@x(cid:23) C
@g(cid:27)(cid:23)
@x(cid:22) (cid:0)
@g(cid:22)(cid:23)
@x(cid:27)
(cid:19)
x(cid:22)=D(cid:28) is the absolute derivative of
P
x(cid:22) along a geodesic.
P
is the standard Christoffel symbol and D
For the interaction term
@
x(cid:27) (cid:0)
P
(cid:18) d
d (cid:28)
x(cid:22)a(cid:23)
P
@
@x(cid:27)
(cid:0)g(cid:22)(cid:23)
C
(cid:19)
@
c5a5(cid:1)
(cid:18)g(cid:22)(cid:23)
d
d (cid:28)
da(cid:27)
d (cid:28) (cid:0) P
x(cid:22) (cid:0)@(cid:22)a(cid:27)
x(cid:22) (cid:0)@(cid:22)a(cid:27)
D
D
D P
D P
x(cid:22)
@
x(cid:27) a(cid:23)(cid:19)
P
@
P
x(cid:22)@(cid:27) a(cid:22)
(cid:0)
@(cid:27) a(cid:22)(cid:1)
@(cid:27) a(cid:22)(cid:1)
(cid:0)
(cid:0)
@
@x(cid:27) (cid:0)g(cid:22)(cid:23)
x(cid:22)a(cid:23)
P
C
(cid:0)
c5a5(cid:1)
c5@(cid:27) a5
@(cid:28) a(cid:27)
(cid:0)
x5 .@5a(cid:27)
C
C P
c5@(cid:27) a5
@(cid:27) a5/
(cid:0)
28
3. CLASSICAL ELECTRODYNAMICS
so that the Lorentz force is
M
x(cid:26)
R
C
M (cid:128) (cid:26)
x(cid:22)
(cid:22)(cid:23) P
x(cid:23)
P
D (cid:0)
e
c
D
e
c
g(cid:26)(cid:27) (cid:0)
x(cid:22) (cid:0)@(cid:22)a(cid:27)
P
(cid:0)
@(cid:27) a(cid:22)(cid:1)
x5 .@5a(cid:27)
C P
@(cid:27) a5/(cid:1)
(cid:0)
g(cid:26)(cid:27) (cid:0)f(cid:27)(cid:22)
x(cid:22)
P
C
f(cid:27)5
x5(cid:1) ;
P
where we have introduced
f(cid:11)(cid:12) .x; (cid:28) /
D
@(cid:11)a(cid:12) .x; (cid:28) /
(cid:0)
@(cid:12) a(cid:11) .x; (cid:28) /
(3.4)
(3.5)
as the gauge invariant field strength tensor. We note that (3.4) reduces to the Stueckelberg force
(1.2) if we put
f(cid:22)(cid:23) .x; (cid:28) /
F(cid:22)(cid:23) .x/
!
f(cid:22)5 .x; (cid:28) /
G(cid:22) .x/
!
and so may be said to generalize the Stueckelberg ansatz, for which it provides a foundational
justification in gauge theory. In analogy to Maxwell theory, we may take a(cid:22)
0 in (3.3) and
approximate
D
.ec5=c/a5.x; (cid:28)/
(cid:0)
.e=c/(cid:30).x/
V .x/
D
’ (cid:0)
to identify the fifth potential with the scalar potential V .x/ used in Section 2.3.
We put the Lorentz force into a more compact form as
x(cid:26)
D
P
D(cid:28) D
M
x(cid:26)
R
C
M (cid:128) (cid:26)
x(cid:22)
(cid:22)(cid:23) P
x(cid:23)
P
D
e
c
g(cid:26)(cid:27) f(cid:27)(cid:11)
x(cid:11)
P
(3.6)
and notice that the index (cid:26) runs to 3, while the index (cid:11) runs to 5. The fifth equation is found by
evaluating
D
D(cid:28)
(cid:18)
(cid:0)
1
2
M
x2(cid:19)
P
x(cid:26)M
D (cid:0) P
x(cid:26)
P
D
x(cid:26)
D(cid:28) D (cid:0) P
e
c
g(cid:26)(cid:27) (cid:0)f(cid:27)(cid:22)
x(cid:22)
P
C
f(cid:27)5
x5(cid:1)
P
D
c5
c
ef5(cid:22)
x(cid:22);
P
(3.7)
(cid:17)
0. This expression shows that the f5(cid:22) field, expressing the action of a5.x; (cid:28) /
where we used f55
x2 and must play a role
and the (cid:28)-dependence of a(cid:22).x; (cid:28)/, permits the non-conservation of
P
is classical pair processes. We will see in Section 3.6 that this non-conservation represents an
exchange of mass between particles and fields, where total mass-energy-momentum of particles
and fields is conserved.
Notice that the mass exchange is scaled by the factor c5=c. As we shall see in Section 4.8,
this factor is a continuous measure of the deviation of SHP electrodynamics from Maxwell
theory, which is recovered in the limit c5=c
0. We will generally take this factor to be small
but finite.
!
3.3. FIELD DYNAMICS 29
3.3
FIELD DYNAMICS
To construct a dynamical action for the fields we first rewrite the interaction term as
x(cid:11)a(cid:11).x; (cid:28)/
P
X (cid:11)a(cid:11).x; (cid:28)/
(cid:0)! P
(cid:0)!
1
c
Z d 4x j (cid:11).x; (cid:28)/a(cid:11).x; (cid:28) /;
where the event current
j (cid:11).x; (cid:28)/
c
X (cid:11).(cid:28)/(cid:14)4 (cid:0)x
P
(cid:0)
D
X.(cid:28)/(cid:1)
(3.8)
is defined at each (cid:28) with support restricted to the spacetime location of the event at x
X.(cid:28)/.
The standard Maxwell current, representing the full worldline traced out by evolution of the
event X.(cid:28)/, is found from
D
J (cid:22).x/
D
Z d (cid:28) j (cid:22).x; (cid:28)/
c Z d (cid:28)
D
X (cid:22).(cid:28)/(cid:14)4 (cid:0)x
P
(cid:0)
X.(cid:28) /(cid:1)
(3.9)
as seen for example in [4, p. 612]. This integration is called concatenation [5] and can be un-
derstood as the sum at x of all events occurring at this spacetime location over (cid:28).
The choice of kinetic term for a field theory is guided by three principles: it should be
a Lorentz scalar, gauge invariant, and simple (bilinear in the fields with the lowest reasonable
order of derivatives). From experience with the Maxwell theory, we first consider the electro-
magnetic action containing a term of the form f (cid:11)(cid:12) .x; (cid:28)/f(cid:11)(cid:12) .x; (cid:28) / originally proposed by Saad
et al. [3]. However, low-energy Coulomb scattering trajectories calculated in this theory [6] can-
not be reconciled with Maxwell theory or experiment (we return to this point in Section 4.1). A
satisfactory theory is found by generalizing the kinetic term so that the action takes the form [7]
Sem D
Z d 4xd (cid:28) (cid:26) e
c2 j (cid:11).x; (cid:28)/a(cid:11).x; (cid:28)/
Z ds
(cid:21)
1
4c
(cid:0)
hf (cid:11)(cid:12) .x; (cid:28) /(cid:136).(cid:28)
s/f(cid:11)(cid:12) .x; s/i(cid:27) ;
(cid:0)
where (cid:21) is a parameter with dimensions of time. This may be written more compactly as
Sem D
Z d 4xd (cid:28) (cid:26) e
c2 j (cid:11).x; (cid:28)/a(cid:11).x; (cid:28)/
1
4c
(cid:0)
(cid:136) .x; (cid:28) / f(cid:11)(cid:12) .x; (cid:28)/(cid:27) ;
f (cid:11)(cid:12)
(3.10)
where
f (cid:11)(cid:12)
(cid:136) .x; (cid:28)/
Z ds
(cid:21)
D
(cid:136).(cid:28)
(cid:0)
s/f (cid:11)(cid:12) .x; s/
is a superposition of fields, non-local in (cid:28). The field interaction kernel is chosen to be
(cid:136).(cid:28)/
(cid:14) .(cid:28) /
(cid:0)
D
.(cid:24)(cid:21)/2(cid:14)00 .(cid:28) /
Z d (cid:20)
2(cid:25)
D
h1
C
.(cid:24)(cid:21)(cid:20)/2i e(cid:0)
i(cid:20)(cid:28) ;
where the factor
(cid:20)1
1
2
C
2(cid:21)
(cid:17)
(cid:16)
c5
c
(cid:24)
D
(3.11)
(3.12)
30
3. CLASSICAL ELECTRODYNAMICS
insures that the low-energy Lorentz force agrees with Coulomb’s law. Integrating by parts the
term in (3.10) produced by the factor (cid:14)00 .(cid:28)
s/ in (3.11),
(cid:0)
Z d (cid:28)ds f (cid:11)(cid:12) .x; (cid:28)/(cid:14)00.(cid:28)
s/f(cid:11)(cid:12) .x; s/
(cid:0)
Z d (cid:28)ds (cid:16)@(cid:28) f (cid:11)(cid:12) .x; (cid:28)/(cid:17) (cid:14)0.(cid:28)
s/f(cid:11)(cid:12) .x; s/
(cid:0)
Z d (cid:28) (cid:16)@(cid:28) f (cid:11)(cid:12) .x; (cid:28)/(cid:17) @(cid:28) f(cid:11)(cid:12) .x; (cid:28) /
D (cid:0)
D (cid:0)
so that
Sem D
Z d 4xd (cid:28) (cid:26) e
c2 j (cid:11)a(cid:11)
1
4c
(cid:0)
f (cid:11)(cid:12) f(cid:11)(cid:12)
.(cid:24)(cid:21)/2
4c
C
(cid:16)@(cid:28) f (cid:11)(cid:12) (cid:17) (cid:0)@(cid:28) f(cid:11)(cid:12) (cid:1)
(cid:27)
(3.13)
and the higher derivative in (cid:28) is seen to break the 5D symmetry of f (cid:11)(cid:12) f(cid:11)(cid:12) to O(3,1), leaving
the gauge invariance of f (cid:11)(cid:12) unaffected. It remains necessary to give meaning to raising and
(cid:17)55f5(cid:11). Expanding
lowering the 5-index through f 5
(cid:11) D
f (cid:11)(cid:12) f(cid:11)(cid:12)
f (cid:22)(cid:23)f(cid:22)(cid:23)
2(cid:17)55f (cid:22)
5 f5(cid:22)
D
C
1 as the sign of the f 2
we see that we may interpret (cid:17)55
any necessary interpretation as an element in a 5D metric.
D (cid:6)
5 term in the action, sidestepping
Variation of the electromagnetic action (3.10) with respect to the potentials a(cid:11).x; (cid:28)/ leads
to the field equations
e
c
describing a non-local superposition of fields f (cid:11)(cid:12)
j (cid:11).x; (cid:28)/. In order to remove (cid:136).(cid:28)/ from the LHS, we use the inverse function
(cid:136) .x; (cid:28)/
j (cid:11).x; (cid:28)/
@(cid:12) f (cid:11)(cid:12)
D
(cid:136) .x; (cid:28) / sourced by the local event current
(3.14)
’.(cid:28)/
D
(cid:21)(cid:136)(cid:0)
1.(cid:28)/
(cid:21) Z d (cid:20)
2(cid:25)
D
1
which satisfies
i(cid:20)(cid:28)
e(cid:0)
.(cid:24)(cid:21)(cid:20)/2 D
C
1
2(cid:24)
(cid:28)
e(cid:0)j
j
=(cid:24)(cid:21)
Z ds
(cid:21)
’ .(cid:28)
(cid:0)
s/ (cid:136) .s/
(cid:14).(cid:28)/
D
Z d (cid:28)
(cid:21)
’ .(cid:28) /
1:
D
Integrating (3.14) with (3.15), we obtain
@(cid:12) f (cid:11)(cid:12) .x; (cid:28) /
e
c
D
Z ds ’ .(cid:28)
s/ j (cid:11) .x; s/
(cid:0)
e
c
D
j (cid:11)
’ .x; (cid:28) /
(3.15)
(3.16)
(3.17)
which describes a local field sourced by a non-local superposition of event currents. While the
event current (3.8) has sharp support at one spacetime point, the current
j (cid:11)
’ .x; (cid:28) /
c Z ds
2(cid:24)
D
e(cid:0)j
(cid:28)
s
=(cid:24)(cid:21)
j
(cid:0)
X (cid:11).s/(cid:14)4 (cid:0)x
P
(cid:0)
X.s/(cid:1)
(3.18)
3.4. ENSEMBLE OF EVENT CURRENTS 31
can be interpreted as the current induced by a smooth ensemble of events distributed in a neigh-
borhood (cid:21) of a spacetime point. This interpretation is discussed further in Section 3.4.
Because the field strengths are derived from potentials, the Bianchi identity
@(cid:11)f(cid:12)(cid:13)
@(cid:13) f(cid:11)(cid:12)
@(cid:12) f(cid:13)(cid:11)
0
D
C
C
(3.19)
holds. We see that (3.17) and (3.19) are formally similar to Maxwell’s equations in 5D, and are
known as pre-Maxwell equations.
Expanding the field equations in 4D tensor, vector and scalar components, they take the
form
@(cid:23) f (cid:22)(cid:23)
1
c5
@
@(cid:28)
(cid:0)
f 5(cid:22)
e
c
D
j (cid:22)
’
@(cid:22) f 5(cid:22)
e
c
j 5
’
D
@(cid:22)f(cid:23)(cid:27)
@(cid:23)f(cid:27)(cid:22)
@(cid:27) f(cid:22)(cid:23)
0
D
C
C
@(cid:23)f5(cid:22)
@(cid:22)f5(cid:23)
(cid:0)
1
c5
@
@(cid:28)
C
f(cid:22)(cid:23)
0
D
(3.20)
which when compared with the 3-vector form of Maxwell’s equations
B
(cid:0)
r (cid:2)
1
c
@
@t
E
D
e
c
J
E
D
r (cid:1)
B
0
D
r (cid:1)
E
r (cid:2)
C
e
c
1
c
J 0
@
@t
B
0
D
suggest that f 5(cid:22) plays the role of the electric field, whose divergence provides the Gauss law,
and f (cid:22)(cid:23) plays the role of the magnetic field. It follows from (3.17) that
@(cid:11)j (cid:11)
@(cid:22)j (cid:22)
D
1
c5
@
@(cid:28)
C
j (cid:11)
0
D
(3.21)
so that j 5 .x; (cid:28) /
D
c5 (cid:26) .x; (cid:28) / plays the role of an event density, and
d
d (cid:28)
Z d 4x (cid:26) .x; (cid:28) /
Z d 4x @(cid:22)j (cid:22) .x; (cid:28) /
0
D
D (cid:0)
shows the conservation of total event number over spacetime, in the absence of injection/removal
of events at the boundary by an external process.
ENSEMBLE OF EVENT CURRENTS
3.4
The function ’.(cid:28)/ smooths the current defined sharply at the event, over a range determined
1 for all (cid:28), producing a current ensemble associated with a large
by (cid:21). For (cid:21) very large, ’.(cid:28)/
section of the worldline, approximating the standard Maxwell current. For (cid:21)
0, we approach
the limit ’.(cid:28)/=(cid:21)
(cid:14).(cid:28)/ which restricts the source current to the instantaneous current produced
by a single event.
!
!
’
32
3. CLASSICAL ELECTRODYNAMICS
Rewriting the current (3.18) as
j (cid:11)
’ .x; (cid:28) /
D
Z ds’ .(cid:28)
(cid:0)
s/ j (cid:11) .x; s/
1
2(cid:24)
D
Z ds e(cid:0)j
s
=(cid:24)(cid:21) j (cid:11) .x; (cid:28)
j
s/
(cid:0)
(cid:0)
we recognize j (cid:11)
’ .x; (cid:28) / as a weighted superposition of currents. Each of these currents originates
s/ along the worldline, occurring before or after the event X (cid:22).(cid:28)/, depend-
at an event X (cid:22).(cid:28)
ing on the displacement s. The superposition may thus be seen [8] as the current produced by
an ensemble of events in the neighborhood of X (cid:22).(cid:28)/, a probabilistic view encouraged by the
functional form of the weight ’.s/. Consider a Poisson distribution describing the occurrence
of independent random events produced at a constant average rate of 1=(cid:21)(cid:24) events per second.
The average time between events is (cid:21)(cid:24) and the probability at (cid:28) that the next event will occur
s=(cid:24)(cid:21)=(cid:24)(cid:21), which may be extended to positive
following a time interval s > 0 is just ’.s/=(cid:21)
and negative values of the displacement. The current j (cid:11)
’ .x; (cid:28) / is constructed by assembling a set
of event currents j (cid:11) .x; (cid:28)
s/ along the worldline, each weighted by ’.s/, the probability that
the event occurrence is delayed from (cid:28) by an interval of at least
. We will see that the causality
relations embedded in the pre-Maxwell equations select the one event from this ensemble for
which an interaction occurs at lightlike separation, preserving relativistic causality.
e(cid:0)
D
(cid:0)
s
j
j
We may also regard j (cid:11)
’ .x; (cid:28) / as a random variable describing the probability of finding a
current density at x at a given (cid:28). The correlation function for the event density is
(cid:10)(cid:26) .(cid:28)/ (cid:26) .s/(cid:11)
1
N
D
Z d 4x (cid:26) .x; (cid:28) / (cid:26) .x; s/ ;
where N is a normalization. In the case of an event X (cid:22).(cid:28)/
the unsmoothed event current (3.8) leads to
D
u(cid:22)(cid:28) with constant velocity u(cid:22),
(cid:10)(cid:26) .(cid:28) / (cid:26) .s/(cid:11)
c2
N
D
Z d 4x (cid:14)4 .x
u(cid:28) / (cid:14)4 .x
(cid:0)
us/
(cid:0)
D
c2(cid:14)3 .0/
u0
N (cid:14).(cid:28)
j
j
s/
(cid:0)
showing that the currents at differing times (cid:28)
defined in (3.18) the correlation becomes
⁄
s are uncorrelated. For the ensemble current
Z d (cid:28) 0ds0d 4x ’.(cid:28)
(cid:28) 0/’.s
(cid:0)
(cid:0)
s0/(cid:14)4 (cid:0)x
(cid:0)
u(cid:28) 0(cid:1) (cid:14)4 (cid:0)x
us0(cid:1)
(cid:0)
(cid:10)(cid:26)’ .(cid:28)/ (cid:26)’ .s/(cid:11)
D
D
D
c2
N
c2(cid:14)3 .0/
N
u0
j
j
c2(cid:14)3 .0/
N
u0
4(cid:24) 2
j
j
Z d (cid:28) 0 ’.(cid:28)
(cid:0)
Z d (cid:28) 0 e(cid:0)j
(cid:28)
(cid:28) 0/’.(cid:28) 0
s/
(cid:0)
=(cid:24)(cid:21)
(cid:28) 0j
s
(cid:28) 0(cid:0)
=(cid:24)(cid:21):
j
(cid:0)j
(cid:0)
Taking (cid:28) > s and evaluating the integral over three intervals punctuated by s, (cid:28) 0, and (cid:28) leads to
(cid:10)(cid:26)’ .(cid:28) / (cid:26)’ .s/(cid:11)
D
(cid:21)c2(cid:14)3 .0/
N
u0
4(cid:24)
j
j
(cid:18)1
(cid:28)
(cid:0)
(cid:24)(cid:21)
C
s
(cid:19) e(cid:0)
.(cid:28)
(cid:0)
s/=(cid:24)(cid:21)
3.5. THE 5D WAVE EQUATION AND ITS GREEN’S FUNCTIONS 33
with a time-dependence characteristic of an Ornstein–Uhlenbeck process with correlation
length (cid:21). This correlation suggests that the current ensemble may be seen as the set of instan-
taneous currents induced by an event undergoing a Brownian motion that produces random
displacement in (cid:28) under viscous drag along the worldline.
3.5
THE 5D WAVE EQUATION AND ITS GREEN’S
FUNCTIONS
Using (3.5) to expand (3.17) leads to the wave equation
@(cid:12) f (cid:11)(cid:12)
(cid:0)
D (cid:0)
@(cid:12) (cid:16)@(cid:11)a(cid:12)
(cid:0)
@(cid:12) a(cid:11)(cid:17)
D
@(cid:12) @(cid:12) a(cid:11)
D
(cid:18)@(cid:22)@(cid:22)
(cid:17)55
c2
5
C
(cid:19) a(cid:11)
@2
(cid:28)
e
c
j (cid:11)
’ ;
D (cid:0)
(3.22)
where we work in the 5D Lorenz gauge @(cid:12) a(cid:12)
0. As discussed above, this form partially pre-
D
serves 5D symmetries broken by the O(3,1) symmetry of the event dynamics. A Green’s function
solution to
(cid:18)@(cid:22)@(cid:22)
C
(cid:17)55
c2
5
(cid:19) G.x; (cid:28)/
@2
(cid:28)
(cid:14)4 .x/ (cid:14) .(cid:28) /
D (cid:0)
can be used to obtain potentials in the form
a(cid:11) .x; (cid:28) /
e
c
D (cid:0)
Z d 4x0d (cid:28) 0 G (cid:0)x
x0; (cid:28)
(cid:0)
(cid:0)
(cid:28) 0(cid:1) j (cid:11)
’ (cid:0)x0; (cid:28) 0(cid:1) :
(3.23)
The Green’s function can be expressed as the Fourier transform
G.x; (cid:28)/
1
.2(cid:25)/5
Z
C
D
d 5k
eik(cid:11)x(cid:11)
k(cid:11)k(cid:11) D
1
.2(cid:25)/5
Z
C
d 4k d (cid:20) ei.k
x
(cid:1)
C
c5(cid:17)55(cid:20)(cid:28)/
1
(cid:17)55(cid:20)2
k2
C
over an appropriate contour C . To break the 5D symmetry present in the wave equation, we
leave the (cid:20) integration for last, writing
G.x; (cid:28)/
1
2(cid:25)
D
Z d (cid:20) eic5(cid:17)55(cid:20)(cid:28) (cid:129) (cid:0)x; (cid:17)55(cid:20)2(cid:1) ;
where (cid:129).x; m2/ is Schwinger’s principal part Green’s function [9] associated with the Klein–
Gordon equation for a particle of mass m. Carefully repeating the steps of Schwinger’s deriva-
tion, while allowing (cid:17)55 to be positive or negative, we are led to
G.x; (cid:28)/
1
.2(cid:25)/2
D (cid:0)
Z d (cid:20) eic5(cid:17)55(cid:20)(cid:28) (cid:20)(cid:14) (cid:0)x2(cid:1)
(cid:18) (cid:0)
C
(cid:17)55x2(cid:1)
(cid:0)
@
1=2(cid:17)(cid:21) :
x2(cid:12)
@x2 J0 (cid:16)(cid:20) (cid:12)
(cid:12)
(cid:12)
Now performing the (cid:20) integration, the pre-Maxwell Green’s function becomes
G.x; (cid:28)/
1
2(cid:25)
D (cid:0)
(cid:14).x2/(cid:14).(cid:28)/
c5
2(cid:25) 2
@
@x2 (cid:18).
(cid:0)
(cid:17)55g(cid:11)(cid:12) x(cid:11)x(cid:12) /
(cid:0)
1
q
(cid:0)
(cid:17)55g(cid:11)(cid:12) x(cid:11)x(cid:12)
(3.24)
34
3. CLASSICAL ELECTRODYNAMICS
2
1. The first term contains the O(3,1) scalars
so that both terms have units of distance(cid:0)
time(cid:0)
x2 and (cid:28) separately, and is called GMaxwell. It has support at instantaneous (cid:28) and, as in Maxwell
theory, along lightlike separations. The second term, called GCorrelation, has support determined
by
(cid:2)
(cid:17)55(cid:17)(cid:11)(cid:12) x(cid:11)x(cid:12)
(cid:0)
8
<
:
D
(cid:0)
(cid:0)x2
(cid:0)x2
c2
5 (cid:28) 2(cid:1)
c2t 2
x2
c2
5 (cid:28) 2 > 0 ; (cid:17)55
D
x2
C
c2
5 (cid:28) 2(cid:1)
(cid:0)
c2
5 (cid:28) 2 > 0
(cid:0)
1 and spacelike separations for (cid:17)55
(cid:0)
c2t 2
D
(cid:0)
(cid:0)
1
D
; (cid:17)55
1
D (cid:0)
on timelike separations for (cid:17)55
1. Contributions from
GCorrelation are generally smaller than those of GMaxwell and drop off faster with distance from
the source. To avoid singularities, particular care must be taken in handling the distribution
functions. The derivative in GCorrelation produces two singular terms
D (cid:0)
D
GCorrelation .x; (cid:28) /
c5
2(cid:25) 2
1
2
D (cid:0)
(cid:18).
x2
(cid:0)
x2
(cid:0)
(cid:0)
(cid:0)
c2
5 (cid:28) 2/
(cid:0)
c2
5 (cid:28) 2(cid:1)
3=2 (cid:0)
(cid:0)
(cid:14) (cid:0)
x2
(cid:0)
x2
(cid:0)
(cid:0)
!
c2
5 (cid:28) 2(cid:1)
1=2
(cid:0)
c2
5 (cid:28) 2(cid:1)
but these singularities cancel when first combined under integrals of the type (3.23) prior to
applying the limits of integration. This order of operations expresses an aspect of the boundary
conditions posed by Schwinger in deriving the Klein–Gordon Green’s function.
THE MASS-ENERGY-MOMENTUM TENSOR
(cid:11)
x0
!
(cid:30) .x/
x(cid:11)
D
C
(cid:14)0(cid:30) .x/
C
(cid:14)x(cid:11) that leave the action invariant, a field undergoes
(cid:14)x(cid:30) .x/
(cid:30) .x/
C
D
(cid:14)0(cid:30) .x/
C
C
(cid:14)x(cid:11)@(cid:11)(cid:30) .x/ ;
3.6
Under transformations x(cid:11)
(cid:30) .x/
(cid:30)0 (cid:0)x0(cid:1)
D
!
where
is a variation in the form of the field at a fixed point x and
(cid:14)0(cid:30) .x/
(cid:30)0 .x/
(cid:30) .x/
D
(cid:0)
is a variation induced in the fixed form of the field by the variation of x. The action undergoes
(cid:14)x(cid:30) .x/
D
(cid:14)x(cid:11)@(cid:11)(cid:30) .x/
(cid:14)Sem D
Z
(cid:131)0
d 4x0d (cid:28) 0L0
Z
(cid:0)
(cid:131)
d 4x d (cid:28) L;
where (cid:131)
!
this becomes
(cid:131)0 is the change of volume induced by the variation in x. Expanding the first term,
(cid:14)Sem D (cid:0)
Z
(cid:131)
d 4x d (cid:28) .@(cid:11)L/ (cid:14)x(cid:11)
Z
C
(cid:131)
d 4x d (cid:28) @(cid:11)
(cid:18)Lg(cid:11)
@L
@ .@a(cid:30)/
(cid:12) (cid:0)
@(cid:12) (cid:30) .x/(cid:19) (cid:14)x(cid:12)
Z
(cid:0)
(cid:131)
d 4x d (cid:28) @(cid:11)
(cid:18) @L
@ .@a(cid:30)/
(cid:14)x(cid:12) (cid:14)(cid:12) (cid:30)(cid:19) ;
where we used the Euler–Lagrange equations
3.6. THE MASS-ENERGY-MOMENTUM TENSOR 35
@L
@(cid:30) (cid:0)
@a
(cid:18) @L
(cid:19)
@ .@a(cid:30)/
D
0:
Since (cid:14)S
D
0 and the variations are arbitrary, we obtain Noether’s theorem
(cid:18)Lg(cid:11)(cid:12)
@(cid:11)
@L
@ .@a(cid:30)/
(cid:0)
@(cid:12) (cid:30) .x/(cid:19)
@(cid:11)Q(cid:11)(cid:12)
0
D
D
for the conserved current Q(cid:11)(cid:12) .
The electromagnetic Lagrangian can be written
Lem D
e
c2 j (cid:11).x; (cid:28)/a(cid:11).x; (cid:28)/
(cid:0)
1
4c
f (cid:11)(cid:12)
(cid:136) .x; (cid:28)/f(cid:11)(cid:12) .x; (cid:28) / ;
where
f (cid:11)(cid:12)
(cid:136) .x; (cid:28) /
Z ds
(cid:21)
D
(cid:136).(cid:28)
(cid:0)
s/ f (cid:11)(cid:12) .x; s/
is the non-local convolved field. Under translations,
(cid:14)x(cid:12)
"(cid:12)
D
(cid:14)a(cid:11)
0
D
(cid:0)!
and so the conserved current is
Q(cid:18) (cid:11)(cid:12)
(cid:136) D
@L
@ (cid:0)@(cid:11)a(cid:13) (cid:1)
@(cid:12) a(cid:13)
(cid:0)
Lg(cid:11)(cid:12)
1
c
g(cid:11)(cid:12) (cid:18) 1
4
D
f (cid:14)"
(cid:136) f(cid:14)"
e
c
j
(cid:1)
a(cid:19)
(cid:0)
1
c
(cid:0)
f (cid:11)(cid:13)
(cid:136) @(cid:12) a(cid:13) :
This current may be made symmetric in the indices by adding the total divergence
(cid:129)(cid:18) (cid:11)(cid:12)
(cid:136) D
1
c
@(cid:13) (cid:16)f (cid:11)(cid:13)
(cid:136) a(cid:12) (cid:17)
e
c2 j (cid:11)a(cid:12)
C
1
c
D
f (cid:11)(cid:13)
(cid:136) @(cid:13) a(cid:12) ;
where the second form follows from the inhomogeneous pre-Maxwell equation. Now, the sym-
metric current is
(cid:136) D Q(cid:18) (cid:11)(cid:12)
(cid:18) (cid:11)(cid:12)
(cid:136) C
(cid:129)(cid:18) (cid:11)(cid:12)
(cid:136) D
(cid:18) (cid:11)(cid:12)
(cid:136)0 C
e
c2
hj (cid:11)a(cid:12)
j
(cid:1)
(cid:0)
a g(cid:11)(cid:12) i ;
where
(cid:18) (cid:11)(cid:12)
(cid:136)0 D
1
c
(cid:20)f (cid:11)(cid:13)
(cid:136) f (cid:12)
(cid:13) C
(cid:136) f(cid:14)"g(cid:11)(cid:12) (cid:21)
f (cid:14)"
1
4
is the source-free current. By explicit calculation, using the homogeneous pre-Maxwell equation,
we find
@(cid:11)T (cid:11)(cid:12)
(cid:136) D (cid:0)
e
c2 f (cid:12) (cid:11)j(cid:11);
36
3. CLASSICAL ELECTRODYNAMICS
where
T (cid:11)(cid:12)
(cid:136) D (cid:0)
(cid:18) (cid:11)(cid:12)
(cid:136)0 D
1
c
(cid:20)f (cid:11)(cid:13)
(cid:136) f (cid:12)
(cid:13) (cid:0)
g(cid:11)(cid:12) f (cid:14)"
(cid:136) f(cid:14)"
(cid:21)
1
4
(3.25)
is the conserved mass-energy-momentum tensor.
Writing the (cid:12)
D
5 component of the conservation law
@(cid:11)T (cid:11)5
(cid:136) D (cid:0)
e
c2 f 5(cid:11)j(cid:11)
and using
for the single particle current leads to
j (cid:11).x; (cid:28)/
c
X (cid:11).(cid:28)/(cid:14)4 (cid:0)x
P
(cid:0)
D
X.(cid:28)/(cid:1)
@(cid:11)T (cid:11)5
(cid:136) D (cid:0)
e
c
f 5(cid:11) .x; (cid:28) /
X(cid:11).(cid:28)/(cid:14)4 (cid:0)x
P
(cid:0)
X.(cid:28) /(cid:1) :
Integrating the LHS over spacetime leaves the (cid:28)-derivative
Z d 4x @(cid:11)T (cid:11)5
(cid:136) D
Z d 4x @(cid:22)T (cid:22)5
(cid:136) C
1
c5
d
d (cid:28)
Z d 4x T 55
(cid:136) D
1
c5
d
d (cid:28)
Z d 4x T 55
(cid:136)
and integrating the RHS gives
e
c
(cid:0)
Z d 4x f 5(cid:22) .x; (cid:28) /
X(cid:22).(cid:28)/(cid:14)4 (cid:0)x
P
(cid:0)
X.(cid:28)/(cid:1)
D (cid:0)
e
c
f 5(cid:22) .X.(cid:28)/; (cid:28) /
X(cid:22).(cid:28)/:
P
Recognizing this expression from the fifth Lorentz force equation
d
d (cid:28)
(cid:18)
(cid:0)
1
2
M
x2(cid:19)
P
D
(cid:17)55
ec5
c
f 5(cid:22)
x(cid:22)
P
the RHS and LHS combine as
d
d (cid:28)
(cid:20)Z d 4x T 55
(cid:136) C
(cid:17)55
(cid:18)
(cid:0)
1
2
M
x2(cid:19)(cid:21)
P
0
D
demonstrating that the total mass of fields and events is conserved.
c2), we see that T 55
Since M
x2 has units of energy (
P
x2
P
D (cid:0)
(energy per 4D spacetime volume).
(cid:136) has units of energy density
3.7 WORLDLINE CONCATENATION
We saw in (3.21) that the source current satisfies @(cid:11)j (cid:11)
’ .x; (cid:28) /
cannot be a divergenceless Maxwell current. However, Stueckelberg noticed that under the
boundary condition
0, and so the vector part j (cid:22)
’ .x; (cid:28) /
D
j 5
’ .x; (cid:28)/
0
(cid:28)
(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)!
!(cid:6)1
3.7. WORLDLINE CONCATENATION 37
we have
@(cid:22)
Z d (cid:28) j (cid:22)
’ .x; (cid:28) /
1
c5
C
Z d (cid:28) @(cid:28) j 5
’ .x; (cid:28) /
@(cid:22)J (cid:22).x/
0;
D
D
where using (3.16) we confirm
J (cid:22).x/
D
Z d (cid:28) j (cid:22)
’ .x; (cid:28) /
Z d (cid:28) Z ds
(cid:21)
D
’ .(cid:28)
(cid:0)
s/ j (cid:11) .x; s/
D
Z ds j (cid:22) .x; s/
in agreement with (3.9). Again, this integration, called concatenation [5], represents the sum
at the spacetime point x of all events occurring over time (cid:28). Saad and Horwitz [3] extended
Stueckelberg’s argument, showing that under the additional boundary condition
f 5(cid:22).x; (cid:28)/
0
(cid:28)
(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)!
!(cid:6)1
(cid:28)-integration of the pre-Maxwell equations leads to Maxwell’s equations in the form
@(cid:12) f (cid:11)(cid:12) .x; (cid:28) /
e
c
D
j (cid:11)
’ .x; (cid:28) /
@(cid:140)(cid:11)f(cid:12)(cid:13)(cid:141)
0
D
@(cid:11)j (cid:11)
0
D
9
>>>>>=
>>>>>;
(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)!
Z d (cid:28)
(cid:21)
8
(cid:136)(cid:136)(cid:136)(cid:136)(cid:136)<
(cid:136)(cid:136)(cid:136)(cid:136)(cid:136):
@(cid:23)F (cid:22)(cid:23) .x/
e
c
D
J (cid:22) .x/
@(cid:140)(cid:22)F(cid:23)(cid:26)(cid:141)
0
D
@(cid:22)J (cid:22).x/
0;
D
where
Z d (cid:28)
(cid:21)
Under concatenation, F (cid:22)(cid:23) become Maxwell fields while F 5(cid:22) decouples from the Maxwell sys-
tem. In addition, integrating the Green’s function (3.24) for the pre-Maxwell wave equation
f (cid:11)(cid:23).x; (cid:28)/:
F (cid:11)(cid:23).x/
D
Z d (cid:28) GMaxwell
D.x/
D
D (cid:0)
1
2(cid:25)
(cid:14).x2/
Z d (cid:28) GCorrelation
0
D
(3.26)
recovers the 4D Maxwell Green’s function. When concatenating GCorrelation, the two singular
terms arising from the derivative must once again be subtracted prior to applying the limits of
integration.
As we have seen, SHP electrodynamics can be understood as a microscopic theory of
events interacting at time (cid:28). We saw in Section 3.6 that during these interactions, particles and
pre-Maxwell fields may exchange mass under conservation of total mass-energy-momentum. As
mentioned in Section 1.3, Feynman recognized that mass exchange of this type is also permitted,
in principle, in QED. He interpreted integration over the evolution parameter, the final step in
Equation (1.3) that describes the quantum Green’s function for scalar particles, as extraction
of asymptotic mass eigenstates from these complex interactions. In much the same way, we
will see that concatenation—integration of the pre-Maxwell field equations over the evolution
38
3. CLASSICAL ELECTRODYNAMICS
parameter (cid:28)—extracts from the microscopic event interactions the massless modes in Maxwell
electrodynamics, expressing a certain equilibrium limit when mass exchange settles to zero. Thus,
we will frequently compare the concatenated form of results in pre-Maxwell electrodynamics
with the corresponding formulation in Maxwell theory, as a means of maintaining contact with
established phenomenology.
3.8
PCT IN CLASSICAL SHP THEORY
We recall from Section 1.1 that Stueckelberg’s initial motivation for consideration of (cid:28)-evolution
was his desire to formulate a classical electrodynamics that includes antiparticles and describes
pair processes through the dynamic evolution of a single type of evolution x(cid:22).(cid:28)/. And as we
saw in Section 2.3, particles and antiparticles differ only in the direction of their t-evolution,
p0c. In quantum field
x0.(cid:28)/, or equivalently the sign of the energy E
specifically the sign of
P
theory the relationship of particles and antiparticles, characterized by a charge conjugation op-
eration C , is significantly different. This operation is understood as a third discrete symmetry
of the field equations, along with the improper Lorentz symmetries—time reversal T and space
reversal P —that we expect to hold for a Lorentz covariant system. An antiparticle is obtained
by acting with C and T to produce a particle with both the sign of the charge and the temporal
ordering of its evolution reversed. Operator implementation of C generally requires a quantum
formalism with complex wavefunctions, and the combined C T operation is anti-unitary.
D
We require electrodynamics to be symmetric under the improper Lorentz transformations
T and P , and first find the transformations of the fields under these operations. We then consider
a C operation, which in Wigner’s original sense of “reversal of the direction of motion” [10],
acts on the (cid:28)-evolution.
The Lorentz equations in explicit three-vector form are
M
d 2x0
d (cid:28) 2 D
d 2x
d (cid:28) 2 D
M
(cid:20)e .t; x; (cid:28) /
(cid:1)
(cid:20)e .t; x; (cid:28) /
e
c
e
c
d x
d (cid:28) (cid:0)
dx0
d (cid:28) C
(cid:17)55 (cid:15)0 .t; x; (cid:28) /(cid:21)
d x
d (cid:28) (cid:2)
b .t; x; (cid:28) /
(cid:0)
(cid:17)55 (cid:15) .t; x; (cid:28) /(cid:21)
and under space inversion P
x
D
(cid:0)x0; x(cid:1)
xP
D
(cid:0)!P
(cid:0)x0
P ; xP (cid:1)
(cid:0)x0;
x(cid:1)
(cid:0)
D
become
M
M
d 2x0
P
d (cid:28) 2 D
d 2xP
d (cid:28) 2 D
e
c
e
c
(cid:20)eP .tP ; xP ; (cid:28) /
(cid:20)eP .tP ; xP ; (cid:28) /
d xP
d (cid:28) (cid:0)
(cid:1)
dx0
P
d (cid:28) C
(cid:17)55(cid:15)0
P .tP ; xP ; (cid:28) /(cid:21)
d xP
d (cid:28) (cid:2)
bP .tP ; xP ; (cid:28) /
(cid:0)
(cid:17)55(cid:15)P .tP ; xP ; (cid:28) /(cid:21)
so that
M
d 2x0
d (cid:28) 2 D
d 2x
d (cid:28) 2 D
M
3.8. PCT IN CLASSICAL SHP THEORY 39
e
c
e
c
(cid:20)
(cid:0)
(cid:20)
(cid:0)
eP .tP ; xP ; (cid:28) /
eP .tP ; xP ; (cid:28) /
d x
d (cid:28) (cid:0)
(cid:1)
dx0
d (cid:28) C
(cid:17)55(cid:15)0
P .tP ; xP ; (cid:28) /(cid:21)
d x
d (cid:28) (cid:2)
bP .tP ; xP ; (cid:28) /
(cid:17)55 .
(cid:15) .tP ; xP ; (cid:28) //(cid:21) :
(cid:0)
(cid:0)
Invariance under P , understood as form invariance of the interaction, requires that
eP .tP ; xP ; (cid:28) /
(cid:15)0
P .tP ; xP ; (cid:28) /
e .t; x; (cid:28) /
(cid:15)0 .t; x; (cid:28) /
D (cid:0)
D
bP .tP ; xP ; (cid:28) /
(cid:15)P .tP ; xP ; (cid:28) /
D
D (cid:0)
b .t; x; (cid:28) /
(cid:15) .t; x; (cid:28) /
and as we would generally expect, the vectors e and (cid:15) change sign, while the axial vector b and
0-component (cid:15)0 are unchanged. Under time inversion T ,
x
D
(cid:0)x0; x(cid:1)
xT
D
(cid:0)!T
(cid:0)x0
T ; xT (cid:1)
(cid:0)
(cid:0)
D
x0; x(cid:1)
I
we similarly write
M
M
d 2x0
T
d (cid:28) 2 D
d 2xT
d (cid:28) 2 D
e
c
e
c
(cid:20)eT .tT ; xT ; (cid:28) /
(cid:20)eT .tT ; xT ; (cid:28) /
d xT
d (cid:28) (cid:0)
(cid:1)
dx0
T
d (cid:28) C
(cid:17)55(cid:15)0
T .tT ; xT ; (cid:28) /(cid:21)
d xT
d (cid:28) (cid:2)
bT .tT ; xT ; (cid:28) /
(cid:0)
(cid:17)55(cid:15)T .tT ; xT ; (cid:28) /(cid:21)
so that
M
d 2x0
d (cid:28) 2 D
d 2x
d (cid:28) 2 D
e
c
e
c
(cid:20)
(cid:20)
(cid:0)
(cid:0)
M
eT .tT ; xT ; (cid:28) /
eT .tT ; xT ; (cid:28) /
d x
d (cid:28) (cid:0)
(cid:1)
dx0
d (cid:28) C
(cid:15)0
T .tT ; xT ; (cid:28) /(cid:1)
(cid:21)
(cid:17)55 (cid:0)
(cid:0)
d x
d (cid:28) (cid:2)
bT .tT ; xT ; (cid:28) /
(cid:0)
(cid:17)55(cid:15)T .tT ; xT ; (cid:28) /(cid:21) :
Now form invariance requires
eT .tT ; xT ; (cid:28) /
(cid:15)0
T .tT ; xT ; (cid:28) /
e .t; x; (cid:28) /
(cid:15)0 .t; x; (cid:28) /
D (cid:0)
D (cid:0)
bT .tT ; xT ; (cid:28) /
(cid:15)T .tT ; xT ; (cid:28) /
b .t; x; (cid:28) /
(cid:15) .t; x; (cid:28) /
D
D
and here we notice that (cid:15)0 and (cid:15) transform as expected for components of a 4-vector, but the
transformations of e and b are opposite to the behavior generally attributed to the electric and
magnetic 3-vectors under time inversion. This can be attributed to our having respected the
independence of x0 .(cid:28) / as a function of (cid:28), not constrained by the mass-shell condition
dx0
d (cid:28) D C
1
q1
(cid:0)
.d x=dt/2
:
40
3. CLASSICAL ELECTRODYNAMICS
In general, all of the field components transform tensorially as components of the f (cid:22)(cid:23) and (cid:15)(cid:22).
From the transformation properties for the field strengths, we may deduce the transfor-
mation properties of the 5-vector potential components. First, we have
ei
P D (cid:0)
and so we conclude that
ei
H)
@0ai
P (cid:0)
(cid:0)
(cid:0)
@i (cid:1) a0
P D (cid:0)
(cid:0)@0ai
@i a0(cid:1)
(cid:0)
(3.27)
which is consistent with
a0
P D
a0
ai
P D (cid:0)
ai
bi
P D
bi
D
"ij k@j ak :
Similarly,
so we see that
ei
T D (cid:0)
ei
H)
@0ai
T (cid:0)
(cid:0)
@i a0
T D (cid:0)
(cid:0)@0ai
(cid:0)
@i a0(cid:1)
again consistent with (3.27). For the second vector field,
a0
T D (cid:0)
a0
ai
T D
ai
(cid:15)i
P D (cid:0)
(cid:15)i
H)
@5ai
P (cid:0)
(cid:0)
(cid:0)
@i (cid:1) a5
P D (cid:0)
(cid:0)@5ai
(cid:0)
@i a5(cid:1)
along with ai
P D (cid:0)
ai leads to
which is consistent with
Similarly,
a5
P D
a5
(cid:15)0
P D
(cid:15)0
D
@5a0
(cid:0)
@0a5:
(cid:15)i
(cid:15)i
T D
ai leads to
H)
@5ai
T (cid:0)
@i a5
T D
@5ai
(cid:0)
@i a5
along with ai
T D
a5
T D
Thus, the 4-vector and scalar components of the potential transform tensorially under space and
time inversion.
a5:
The pre-Maxwell equations in 3-vector form, as given in (4.17) and (4.18),
e
(cid:0)
r (cid:1)
b
(cid:0)
r (cid:2)
1
c5
1
c
@
@(cid:28)
(cid:15)0
@
@t
e
e
c
j 0
’ D
e(cid:26)0
’
@
@(cid:28)
(cid:15)
D
e
c
j’
D
1
c5
(cid:15)
C
r (cid:1)
1
c
@
@t
(cid:15)0
D
ec5
c
(cid:26)’
(cid:0)
e
c
r
j 5
’ D
1
c
C
(cid:15)0
@
@t
(cid:15)
(cid:17)55 1
c5
@
@(cid:28)
e
C
0
D
r (cid:1)
b
D
1
@
c
@t
C
(cid:17)55 1
c5
0
b
0
D
@
@(cid:28)
b
0
D
e
r (cid:2)
(cid:15)
(cid:0)
r (cid:2)
are seen to be invariant under P and T using the transformations of the fields, under the choices
3.8. PCT IN CLASSICAL SHP THEORY 41
j 0
P .tP ; xP ; (cid:28) /
D
j 0 .t; x; (cid:28) /
j 0
T .tT ; xT ; (cid:28) /
j 0 .t; x; (cid:28) /
D (cid:0)
jP .tP ; xP ; (cid:28) /
D (cid:0)
j .t; x; (cid:28) /
jT .tT ; xT ; (cid:28) /
j .t; x; (cid:28) /
D
j 5
P .tP ; xP ; (cid:28) /
D
j 5 .t; x; (cid:28) /
j 5
T .tT ; xT ; (cid:28) /
D
j 5 .t; x; (cid:28) / ;
where again the 4-vector and scalar components of the current transform tensorially under space
and time inversion.
In order to discuss charge conjugation, we must make another short digression into quan-
tum mechanics. As in Section 3.1, we may write the Stueckelberg–Schrodinger equation as
(cid:16)i@(cid:28)
ec5
c
C
a5(cid:17) .x; (cid:28) /
D
1
2M
1
2M
(cid:16)p(cid:22)
(cid:0)
(cid:18)@(cid:22)
e
c
e
c
a(cid:22)(cid:17) (cid:16)p(cid:22)
ie
c
(cid:0)
a(cid:22)(cid:19) (cid:18)@(cid:22)
a(cid:22)(cid:17) .x; (cid:28) /
ie
c
a(cid:22)
(cid:0)
(cid:19) .x; (cid:28) /
(cid:0)
D (cid:0)
and, taking the complex conjugate, observe that this system will be form invariant under a charge
conjugation C that operates as
(cid:3).x;
(cid:28)/
(cid:0)
D
e
(cid:28)
eC
D (cid:0)
C .x; (cid:28)/
(cid:28)C
D (cid:0)
a(cid:22)
C .x; (cid:28)/
e
.x; (cid:28)/
(cid:28)
a(cid:22).x; (cid:28)/
a5.x; (cid:28)/
(cid:0)!C
(cid:0)!C
(cid:0)!C
(cid:0)!C
(cid:0)!C
D
a(cid:22).x;
(cid:28)/
(cid:0)
a5.x;
(cid:28)/
(cid:0)
a5
C .x; (cid:28)/
D (cid:0)
if these transformations can be made consistent with the pre-Maxwell equations and Lorentz
force. As we now show, this consistency can indeed be established. Leaving aside the quantum
wavefunction and returning to classical mechanics, transformations of the potentials lead to field
strength transformations
ek
(cid:15)k
(cid:15)0
D
D
@0ak
D
bk
D
(cid:17)55@(cid:28) ak
(cid:17)55@(cid:28) a0
@ka0
(cid:0)
"kij @i aj
@ka5
@0a5
(cid:0)
(cid:0)
(cid:0)!C
(cid:0)!C
(cid:0)!C
(cid:0)!C
ek
bk
(cid:15)k
(cid:15)0
(cid:0)
(cid:0)
so that this operation reverses the sign of tensor quantities carrying a scalar index. Under these
transformations, the pre-Maxwell equations remain form invariant as long as
(cid:0)j 0; j; j 5(cid:1)
(cid:0)j 0; j; j 5(cid:1)C D
(cid:0)j 0; j;
j 5(cid:1)
(cid:0)
(cid:0)!C
42
3. CLASSICAL ELECTRODYNAMICS
which is again a reversal of the scalar component. Similarly, the Lorentz force
M
M
d 2x0
C
d (cid:28) 2
C D
d 2xC
d (cid:28) 2
C D
e
c
e
c
(cid:20)eC
(cid:20)eC
d xC
d (cid:28)C (cid:0)
(cid:1)
dx0
C
d (cid:28)C C
(cid:21)
(cid:17)55(cid:15)0
C
d xC
d (cid:28)C (cid:2)
bC
(cid:0)
(cid:21)
(cid:17)55(cid:15)C
undergoes
becoming
M
d 2x0
d (cid:28) 2 D
d 2x
d (cid:28) 2 D
e
c
e
c
(cid:18)
(cid:20)e
(cid:1)
(cid:20)e (cid:18)
(cid:19)
(cid:19)
d x
d (cid:28)
(cid:0)
dx0
d (cid:28)
(cid:0)
M
(cid:17)55 (cid:0)
(cid:21)
(cid:15)0(cid:1)
(cid:0)
(cid:18)
C
(cid:0)
(cid:0)
d x
d (cid:28)
(cid:19)
b
(cid:2)
(cid:0)
(cid:17)55 .
(cid:0)
(cid:15)/(cid:21)
M
d 2x0
d (cid:28) 2 D (cid:0)
d 2x
d (cid:28) 2 D (cid:0)
e
c
e
c
(cid:20)e
(cid:20)e
d x
d (cid:28) (cid:0)
(cid:1)
dx0
d (cid:28) C
(cid:17)55(cid:15)0(cid:21)
d x
d (cid:28) (cid:2)
b
(cid:0)
(cid:17)55(cid:15)(cid:21) ;
M
thus implementing classical charge conjugation. We see that current conservation
@(cid:22)j (cid:22)
@(cid:28) j 5
0
D
C
@(cid:22)j (cid:22)
.
C
@(cid:28) / (cid:0)
(cid:0)
j 5(cid:1)
(cid:0)
0
D
(cid:0)!C
is preserved, but since j 5 is interpreted as the number of events in a localized spacetime volume
at a given (cid:28), the meaning of j 5
j 5 must be examined carefully.
In standard relativistic mechanics, the continuity equation leads to a conserved charge
C D (cid:0)
through integration over volume in space as
@(cid:22)J (cid:22)
0
D
(cid:0)!
dQ
d (cid:28) D
d
d (cid:28)
Z d 3x (cid:0)eJ 0(cid:1)
c Z d 3x
D (cid:0)
.eJ/
0
D
r (cid:1)
and since
J 0.x/
c Z d (cid:28)
X 0.(cid:28)/ (cid:14)4.x
P
X.(cid:28)//
(cid:0)
cannot change sign in this approach, only the conjugation e
versal. But in SHP, charge conservation follows from
D
e can account for charge re-
! (cid:0)
@(cid:11)j (cid:11)
0
D
(cid:0)!
dQ
d (cid:28) D
d
d (cid:28)
Z d 4x (cid:0)ej 5(cid:1)
D (cid:0)
Z d 4x e @(cid:22)j (cid:22)
c5
0;
D
where it is the event density
j 5.x/
c
X 5.(cid:28)/ (cid:14)4.x
P
(cid:0)
D
X.(cid:28)//
D
cc5 (cid:14)4.x
X.(cid:28)//
(cid:0)
that cannot change sign. But the effective charge of an event interacting through the Lorentz
force is associated with
3.9. BIBLIOGRAPHY 43
ej 0.x/
ec
X 0.(cid:28)/ (cid:14)4.x
P
(cid:0)
D
X.(cid:28)//
X 0.(cid:28)/ according to Stueckelberg’s prescription. Thus, the oper-
which can change sign through P
ation e
e is not a required symmetry.
! (cid:0)
Following Stueckelberg, we disentangle the symmetries of the coordinate time t from
those of the chronological parameter (cid:28) by making the following interpretations of the discrete
reflections.
1. Space inversion covariance P implies certain symmetric relations between a given experi-
ment and one performed in a spatially reversed configuration.
2. Time inversion covariance T implies certain symmetric relations between a given experi-
ment and one performed in a t-reversed configuration, which is to say one in which ad-
x0 > 0 is replaced by a trajec-
vancement in t is replaced by retreat, and so a trajectory with
P
x0 < 0. Thus, we expect symmetric behavior between pair annihilation processes
tory with
P
and pair creation processes.
3. Charge conjugation covariance C implies certain symmetric relations between a given ex-
periment and one in which the events are traced out in the reverse chronological order and
carry opposite charge.
The operations P and T are improper Lorentz transformations and therefore must be symme-
tries of any (spinless Abelian) relativistic electrodynamics. But we do not regard the operation C
defined here as connecting symmetrical dynamical evolutions. Rather, we associate the reversal
of temporal order performed by C with the re-ordering of events performed by the observer in
the laboratory, who interprets events as always evolving from earlier to later values of t. Thus,
charge conjugation exchanges the viewpoint of the events under interaction with the viewpoint
of the laboratory observer. The charge inversion (associated with the gauge symmetry) under
this exchange reinforces the view of antiparticles in the laboratory, but does not influence the
event dynamics.
3.9
BIBLIOGRAPHY
[1] Born, M. and Wolf, E. 1999. Principles of Optics: Electromagnetic Theory of Propagation,
Interference and Diffraction of Light, Cambridge University Press, Cambridge. 25
[2] Jackson, J. D. and Okun, L. B. 2001. Review of Modern Physics, 73:663. 25
[3] Saad, D., Horwitz, L., and Arshansky, R. 1989. Foundations of Physics, 19:1125–1149. 27,
29, 37
44
3. CLASSICAL ELECTRODYNAMICS
[4] Jackson, J. 1975. Classical Electrodynamics, Wiley, New York. DOI: 10.1063/1.3057859.
29
[5] Arshansky, R., Horwitz, L., and Lavie, Y. 1983. Foundations of Physics, 13:1167. 29, 37
[6] Land, M. 1996. Foundations of Physics, 27:19. 29
[7] Land, M. 2003. Foundations of Physics, 33:1157. 29
[8] Land, M. 2017. Entropy, 19:234. http://dx.doi.org/10.3390/e19050234 32
[9] Schwinger, J. 1949. Physical Review, 75(4):651–679. https://link.aps.org/doi/10.
1103/PhysRev.75.651 33
[10] Wigner, E. P. 1959. Group theory and its application to the quantum mechanics of atomic
spectra, Pure Applied Physics, Academic Press, New York (translation from the German).
https://cds.cern.ch/record/102713 38
PART III
Applications
C H A P T E R 4
47
Problems in Electrostatics and
Electrodynamics
4.1
THE COULOMB PROBLEM
Introductory treatments of electromagnetism quite naturally begin with the static Coulomb
force between two point charges at rest. However, in the framework of Stueckelberg, Horwitz,
and Piron, this seemingly simple configuration requires some clarification. A timelike event in
its rest frame can be given with velocity
so that this “static” event evolves uniformly in (cid:28) with coordinates
X 2
P
c2
D (cid:0)
(cid:0)!
.c; 0/
X
P
D
X.(cid:28)/
.ct; X/
.c.t0
C
D
(cid:28)/; X0/
D
and the displacement .ct0; X0/ at (cid:28)
0 plays a role in interactions with other events.
Taking X0
0, so that the event simply evolves along the t-axis in its rest frame, the
D
associated event current is
D
j (cid:11) .x; (cid:28) /
c
x(cid:11)(cid:14)4 .x
P
(cid:0)
D
X.(cid:28)//
(cid:0)!
8
(cid:136)(cid:136)(cid:136)<
(cid:136)(cid:136)(cid:136):
j 0 .x; (cid:28) /
j .x; (cid:28) /
D
j 5 .x; (cid:28) /
c2(cid:14) .ct
c.t0
(cid:0)
C
(cid:28)// (cid:14)3 .x/
cc5(cid:14) .ct
c.t0
(cid:0)
C
(cid:28)// (cid:14)3 .x/
D
0
D
with support restricted to the spatial origin—as in Maxwell theory—and to the time t
The source for the pre-Maxwell field is the smoothed ensemble current
t0
(cid:28).
C
D
j (cid:11)
’ .x; (cid:28) /
D
Z ds ’ .(cid:28)
s/ j (cid:11) .x; s/
(cid:0)
(cid:0)!
8
(cid:136)(cid:136)(cid:136)<
(cid:136)(cid:136)(cid:136):
j 0
’ D
j’
D
j 5
’ D
c’ .t
.t0
(cid:0)
C
(cid:28)// (cid:14)3 .x/
0
c5’ .t
.t0
(cid:0)
C
(cid:28)// (cid:14)3 .x/
(4.1)
which varies continuously in t, and as (cid:28) advances has its maximum at t
(cid:28). The potential
induced by this current may be found, as in (3.23), by integration with the Green’s function
GCorrelation. We first treat the Maxwell term, which
(3.24), containing two terms, G
GMaxwell
D
C
t0
D
C
48
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
produces a potential with the expected 1=R dependence, multiplied by a time-dependent form
factor found from ’. We then find the contribution from the correlation term which is scaled
by the small factor c5=c and drops off as 1=R2.
4.1.1
CONTRIBUTION TO POTENTIAL FROM GMaxwell
The leading term in the potential is
Z d 4x0d (cid:28) 0 (cid:14) (cid:16)(cid:0)x
Z cdt 0 (cid:14) (cid:16)c2 (cid:0)t
’ (cid:18)(cid:28)
(cid:18)t
(cid:0)
t0
(cid:0)
(cid:0)
(cid:0)
2
t 0(cid:1)
R
c
(cid:0)
(cid:19)(cid:19)
2(cid:17) (cid:18) ret(cid:14).(cid:28)
x0(cid:1)
(cid:0)
(cid:28) 0/’ (cid:0)t 0
(cid:0)
x2(cid:17) (cid:18) ret’ (cid:0)t 0
.t0
(cid:0)
(cid:28) 0/(cid:1) (cid:14)3 (cid:0)x0(cid:1)
.t0
C
(cid:28)/(cid:1)
(cid:0)
C
e
2(cid:25)
e
4(cid:25)
e
4(cid:25)R
0
a0 .x; (cid:28) /
a .x; (cid:28) /
a5 .x; (cid:28) /
D
D
D
D
D
c5
c
a0 .x; (cid:28) / ;
(cid:0)
D
(cid:18) .x
.ct; x/, this field will grow as (cid:28)
x0/ to select retarded spacetime causality, and write R
where we insert (cid:18) ret
. As
D j
R=c from be-
observed from a spacetime point x
t
(cid:0)
D
!
(cid:28), the maximum
low and then decrease. Since the time coordinate of the source is tevent D
R=c, representing a delay equal to the signal
tevent C
occurs if the observer is located at time t
transmission time at the speed of light. Put in a more familiar way, the time coordinate of the
t
event detected at time t is tevent D
.c(cid:28)
experienced by the test event becomes
To study the “static” Coulomb problem, we consider a test event evolving uniformly at x
ct test
D
0 ; x/, where x is constant. Inserting these coordinates and using (3.15), the potential
D
tretarded D
R=c.
(cid:0)
t0
x
j
C
C
t0
(cid:0)
’ (cid:18)
(cid:0)
(cid:129)t0
C
(cid:19)
R
c
D
1
2(cid:24)
e
4(cid:25)R
(cid:129)t0
e(cid:0)j
R=c
=(cid:24)(cid:21)
j
(cid:0)
(4.2)
a0 .x; (cid:28) /
a5 .x; (cid:28) /
D
D
e
4(cid:25)R
c5
c
a0 .x; (cid:28) / ;
where (cid:129)t0
may take (cid:24)
D
’
t test
0 (cid:0)
1=2 for c5=c
1, and so recover the Coulomb potential
(cid:28)
t0 defines the mutual t-synchronization between the events. From (3.12) we
in the particular case that (cid:129)t0
Yukawa potential
D
a0 .x; (cid:28) /
e
4(cid:25)R
D
R=c. By contrast, if (cid:129)t0
D
a0 .x; (cid:28) /
e
4(cid:25)R
D
e(cid:0)
2
R
j
=(cid:21)c
j
0, then a0 takes the form of a
(4.3)
4.1. THE COULOMB PROBLEM 49
suggesting a semi-classical interpretation in which the photons carrying the pre-Maxwell in-
teraction have mass m(cid:13) c2
=(cid:21). Taking m(cid:13) to be smaller than the experimental error on the
18eV =c2) [1], we may estimate (cid:21) > 104 seconds. In this approximation
mass of the photon (10(cid:0)
(cid:21)c will be larger than any practical distance in the problems we consider.
2
(cid:132)
(cid:24)
The field strength components found from the Yukawa-type potential with (cid:129)t0
f k0.x; (cid:28)/
@k e
4(cid:25)R
1
2(cid:24)
D
R=(cid:24)(cid:21)c
e(cid:0)
f k5.x; (cid:28)/
c5
c
D
f k0.x; (cid:28) /
0 are
D
so that the test event will undergo Coulomb scattering
f ij .x; (cid:28)/
f 50.x; (cid:28)/
0
D
D
M
xk
R
D
e
c
f k
x(cid:23)
(cid:23) P
(cid:0)
(cid:17)55
ec5
c
f 5k
e
c
f k0 (cid:18)
x0
P
(cid:17)55
C
(cid:19)
c2
5
c
D (cid:0)
according to the Lorentz force (3.4). Since the test event velocity is
M
x
R
D (cid:0)
e2
2(cid:24)
(cid:18)1
C
(cid:17)55 (cid:16)
2(cid:19)
(cid:17)
c5
c
e(cid:0)
R=(cid:24)(cid:21)c
!
r
4(cid:25)R
D (cid:0)
e2 1
D
x.(cid:28) /
P
c5
(cid:17)55 (cid:0)
c (cid:1)
2 r
c5
c (cid:1)
(cid:0)
2
C
1
C
.c; 0/ this becomes
R=(cid:24)(cid:21)c
e(cid:0)
4(cid:25)R
!
;
where we used (3.12) for (cid:24). Now suppose the source event were an antiparticle event evolving
c. This would change the signs of a0.x; (cid:28) / and f k0.x; (cid:28) / but not
X 0
backward in time with P
the signs of a5.x; (cid:28)/ or f k5.x; (cid:28)/. We can thus write the Coulomb force for both cases as
D (cid:0)
F.
C
=
(cid:0)
/
D (cid:7)
e2 1
2
c5
c (cid:1)
2 r
(cid:17)55 (cid:0)
c5
c (cid:1)
(cid:0)
(cid:6)
1
C
R=(cid:24)(cid:21)c
e(cid:0)
4(cid:25)R
!
;
where the upper sign is for a particle event and the lower sign is for an antiparticle event. Since
(cid:17)55
1, this expression provides an experimental bound on c5=c, given by
D (cid:6)
(cid:27) (cid:0)e(cid:0) C
(cid:27) .e(cid:0) C
eC (cid:0)!
e(cid:0) (cid:0)!
e(cid:0) C
e(cid:0) C
eC(cid:1)
e(cid:0)/ D
1
(cid:6)
experimental error
" 1
’
(cid:7)
1
c5
c (cid:1)
2
(cid:17)55 (cid:0)
c5
c (cid:1)
(cid:0)
C
2
2
#
;
where (cid:27) is the total classical scattering cross-section at very low energy.
The action (3.13) recovers the usual first-order kinetic term f (cid:11)(cid:12) f(cid:11)(cid:12) in the limit (cid:21)
0,
!
in which case
D
and the source of the pre-Maxwell field reduces to j (cid:11)
’ .x; (cid:28)/
but finite, then from (4.2) we have
!
lim
0
(cid:21)
1
(cid:24)(cid:21)
e(cid:0)j
(cid:28)
=(cid:24)(cid:21)
j
(cid:14).(cid:28)/
j (cid:11).x; (cid:28) /. If we take (cid:21) very small
!
a0 .x; (cid:28) /
e
4(cid:25)R
’ (cid:0)
(cid:21)(cid:14) (cid:18)(cid:129)t0
(cid:19)
R
c
(cid:0)
50
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
for the potential experienced by the test event. Now the support of the potential is restricted to
a lightline between the events, and for any synchronization (cid:129)t0
R=c there will be no inter-
action. As we remarked in Section 3.3, a solution for Coulomb scattering can be found in this
case [2], but the delta function potential leads to a discontinuous trajectory that is difficult to
reconcile with classical phenomenology. This discontinuity is a primary motivation for intro-
ducing the interaction kernel. We mention in passing that this difficulty is not present in SHP
quantum field theory because the definition of asymptotic states with sharp mass implies the
loss of all information about the initial t-synchronization of the scattering particles.
⁄
The significance of the small (cid:21) limit appears in a number of places. As discussed in Sec-
tion 3.4, (cid:21) characterizes the section of a worldline over which the event current is smoothed. In
this sense, (cid:21) can be seen as the correlation length of a statistical process that assembles the cur-
rent from an ensemble of events occurring along the trajectory. When (cid:21) is small, the interaction
between an event trajectory and a test event is determined by a small number of points along the
=(cid:21)
2
worldline, including only one point when (cid:21)
(cid:132)
D
of the electromagnetic field associated with the Yukawa-like potential (4.3) becomes large.
0. Moreover, the mass spectrum m(cid:13) c2
(cid:24)
By contrast, if (cid:21) is large, then the source j (cid:11)
’ .x; (cid:28)/ of the pre-Maxwell field is assembled
from a large ensemble of events along the worldline, locally approximating the concatenation
of the worldline performed in constructing the Maxwell current J (cid:22).x/. In this case, the mass
spectrum m(cid:13) c2
=(cid:21) of the electromagnetic field is small, approaching zero in the limit (cid:21)
!
2
(cid:132)
(cid:24)
.
1
From (3.16) and (4.1) the concatenated current is
J 0 .x/
c Z d (cid:28)
(cid:21)
D
’ .t
.t0
(cid:0)
C
(cid:28)// (cid:14)3 .x/
c(cid:14)3 .x/
D
J .x/
0
D
describing a static Maxwell charge at the origin, and the concatenated potential is
A0.x/
e
4(cid:25)R
Z d (cid:28)
(cid:21)
D
’ (cid:18)(cid:28)
(cid:18)t
(cid:0)
t0
(cid:0)
(cid:0)
R
c
(cid:19)(cid:19)
e
4(cid:25)R
D
A .x/
0
D
describing the static Coulomb potential. As required, J (cid:22).x/ and A(cid:22).x/ are independent of t0
and invariant under a shift of the event x(cid:22).(cid:28)/ along the time axis. The microscopic interaction
between the events is thus seen to be sensitive to the t-synchronization (cid:129)t0 of the interacting
events, a parameter not accessible by the standard Coulomb law.
CONTRIBUTION TO POTENTIAL FROM GCorrelation
4.1.2
Up to this point, we have treated only the potential found from the leading term GMaxwell in the
Green’s function. To consider the potential found from GCorrelation again we take as source the
0 and approximate ’.(cid:28) 0 (cid:0)
event X
c(cid:28); 0/, but simplify the calculation by taking t0
.ct0
D
C
D
s/
(cid:21)(cid:14).(cid:28) 0 (cid:0)
D
s/ so that
a0 .x; (cid:28) /
D (cid:0)
D (cid:0)
e
Z d 4x0d (cid:28) 0 GCorrelation (cid:0)x
c
(cid:21)ec Z d (cid:28) 0 GCorrelation (cid:0).ct
c(cid:28) 0; x/; (cid:28)
(cid:28) 0(cid:1) :
(cid:0)
(cid:0)
4.1. THE COULOMB PROBLEM 51
x0; (cid:28)
(cid:0)
(cid:0)
(cid:28) 0(cid:1) c2(cid:21)(cid:14) (cid:0)ct 0
c(cid:28) 0(cid:1) (cid:14)3 (cid:0)x0(cid:1)
(cid:0)
We introduce the function g.s/ to express terms of the type
(cid:16).x
(cid:0)
(cid:0)
X.s//2
c2
5 .(cid:28)
C
(cid:0)
s/2(cid:17)
D (cid:0)
(cid:16)..ct; x/
(cid:0)
.cs; 0//2
c2
5 .(cid:28)
C
(cid:0)
s/2(cid:17)
D
c2g .s/ ;
where
and
g .s/
.t
(cid:0)
D
s/2
R2
c2 (cid:0)
c2
5
c2 .(cid:28)
(cid:0)
(cid:0)
s/2
D
C s2
Bs
A
C
C
(cid:22)2
D
c2
5
c2
C
(cid:0)1
(cid:0)
D
(cid:22)2(cid:1)
B
2 (cid:0)t
(cid:0)
D (cid:0)
(cid:22)2(cid:28) (cid:1)
R2
c2 (cid:0)
t 2
(cid:0)
A
D
(cid:22)2(cid:28) 2
so that the potential can be written as
a .x; (cid:28) /
D
(cid:21)ec5
2(cid:25) 2c3 .c; 0; c5/ Z ds (cid:20) 1
2
(cid:18) .g .s//
g3=2 .s/ (cid:0)
(cid:14) .g .s//
g1=2 .s/
(cid:21) (cid:18) .t
s/ :
(cid:0)
The zeros of g .s/ are found to be
B
(cid:0)
(cid:6)
pB 2
2C
(cid:0)
s
(cid:6) D
4AC
D
(cid:22)2(cid:28)(cid:1)
(cid:0)t
(cid:0)
(cid:6)
r R2
c2 .1
.1
(cid:0)
(cid:0)
(cid:22)2/
(cid:22)2/
(cid:22)2 .t
(cid:28)/2
(cid:0)
C
(4.4)
and since we assume (cid:22)2 < 1 there will be roots for any values of t and R. In addition, the
condition (cid:18) ret
(cid:18).t
Attempting to set t < s
s/ requires t > s.
leads to
D
(cid:0)
(cid:0)
(cid:22)2(cid:28) (cid:1)
(cid:0)t
(cid:0)
(cid:0)
t <
q R2
c2 .1
.1
(cid:22)2 .t
(cid:28)/2
(cid:0)
C
(cid:22)2/
(cid:0)
(cid:22)2/
(cid:22)2 .t
(cid:0)
(cid:0)
(cid:28) /2 >
R2
c2
)
(cid:0)
and so t
leads to
s
(cid:0)
(cid:21)
is a condition of integration for the (cid:18) term. Similarly, attempting to set t > s
C
(cid:22)2(cid:28) (cid:1)
(cid:0)t
(cid:0)
C
t >
leading to the condition
q R2
c2 .1
.1
(cid:22)2 .t
(cid:28) /2
(cid:0)
C
(cid:22)2/
(cid:0)
(cid:22)2/
(cid:22)2 .t
(cid:0)
(cid:0)
(cid:28)/ >
R2
c2
)
(cid:0)
(cid:18).g.s// (cid:18).t
s/
0
⁄
(cid:0)
)
s
(cid:0) (cid:20)
s
t
s
C
(cid:20)
(cid:20)
52
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
from which
a .x; (cid:28) /
(cid:21)ec5
2(cid:25) 2c3
D
(cid:16)1; 0;
c5
c
(cid:17) (cid:18) 1
2
s
(cid:0)
Z
(cid:0)1
ds
1
g3=2 .s/ (cid:0)
Z 1
ds
(cid:0)1
(cid:14) .g .s//
g1=2 .s/
(cid:18) .t
(cid:0)
s/(cid:19) :
Using the well-known form [3]
Z
dx
Bx
C
.C x2
C
where
we notice from (4.4) that
2 .2C s
B/
C
Bx
;
A/1=2
C
A/3=2 D
q.C x2
C
4AC
q
D
(cid:0)
B 2
B
(cid:0)
(cid:0)
pB 2
2C
(cid:0)
s
(cid:0) D
4AC
B
(cid:0)
D
p
(cid:0)
(cid:0)
2C
q
)
p
q
(cid:0)
(cid:0)
D
2C s
(cid:0) C
B
and so
1
2
s
(cid:0)
Z
ds
(cid:0)1
1
g3=2 .s/ D
D
D
B
/ (cid:0)
2C s
(cid:0) C
qg1=2 .s
(cid:0)
q
p
(cid:0)
qg1=2 .s
1
qg1=2 .s
(cid:0)
(cid:0)
/ C
p
(cid:0)
(cid:0)
2C s
B
C
qg1=2 .s/
2pC
(cid:12)
(cid:12)
(cid:12)
(cid:12)(cid:0)1
B/2
.2C s
(cid:0) C
1
2
/ C
(cid:22)2
p1
(cid:22)2/
(cid:0)
C
(cid:22)2 .t
(cid:0)
:
(cid:28) /2
(4.5)
R2
c2 .1
(cid:0)
The second term is
and using the identity
we can evaluate
Z 1
ds
(cid:0)1
(cid:14) .g .s//
g1=2 .s/
(cid:18) .t
s/
(cid:0)
Z ds f .s/ (cid:14) .g .s//
f .s(cid:0)/
g0 .s(cid:0)/
j
j
(cid:12)
(cid:12)
(cid:12)
(cid:12)s(cid:0)D
D
1.0/
g (cid:0)
Z 1
ds
(cid:0)1
(cid:14) .g .s//
g1=2 .s/
(cid:18) .t
s/
(cid:0)
D
(cid:18) .t
/
g0 .s
j
(cid:0)
/
s
(cid:0)
(cid:0)
g1=2 .s
j
(cid:0)
/ D
g0 .s
j
(cid:0)
/
j
1
g1=2 .s
:
/
(cid:0)
Since
(cid:0)C s2
D (cid:0)
we see that this term cancels the singularity in the first term, leaving
g0 .s
2C s
(cid:0) C
(cid:0) C
(cid:0) C
A(cid:1)0
Bs
D
D
B
(cid:0)
/
q
p
(cid:0)
1
2
s
(cid:0)
Z
(cid:0)1
ds
1
g3=2 .s/ (cid:0)
Z 1
ds
(cid:0)1
(cid:14) .g .s//
g1=2 .s/
(cid:18) .t
s/
(cid:0)
D
1
2
R2
c2 .1
(cid:0)
p1
(cid:22)2/
(cid:0)
C
(cid:22)2
(cid:22)2 .t
(cid:28)/2
(cid:0)
4.2. LIÉNARD–WIECHART POTENTIAL AND FIELD STRENGTH 53
and
a .x; (cid:28) /
(cid:21)e
4(cid:25) 2 .c; 0; c5/
c5
c
D
r1
c5
c
(cid:17)
(cid:0)
c5
c
c5
c
c2 .t
:
(cid:28) /2
R2 (cid:16)1
(cid:0)
We notice that the potential has units of (cid:21)c=distance2
1/distance, as does the potential as-
sociated with GMaxwell. This contribution to the potential is smaller by a factor of c5=c than the
Yukawa potential found in (4.3), and drops off faster with distance, as 1=R2 compared to 1=R.
This term may be neglected when the contribution from GMaxwell is significant, but as we will
see in Section 4.7.1, it may lead to qualitatively important phenomena when the leading term
vanishes.
D
C
(cid:0)
4.2
LIÉNARD–WIECHART POTENTIAL AND FIELD
STRENGTH
We now consider an arbitrary event X (cid:11) .(cid:28) / for which the smoothed current is
j (cid:11)
’ .x; (cid:28) /
D
c Z ds ’ .(cid:28)
s/
X (cid:11) .s/ (cid:14)4 (cid:140)x
P
(cid:0)
(cid:0)
X .s/(cid:141)
and the Liénard–Wiechert potential found from GMaxwell is
a(cid:11) .x; (cid:28) /
D
D
e
2(cid:25)c
e
2(cid:25)
Z d 4x0d (cid:28) 0(cid:14) (cid:16)(cid:0)x
x0(cid:1)
2(cid:17) (cid:18) ret(cid:14) (cid:0)(cid:28)
(cid:0)
(cid:28) 0(cid:1) j (cid:11)
’ (cid:0)x0; (cid:28) 0(cid:1)
(cid:0)
Z ds ’ .(cid:28)
s/
X (cid:11) .s/ (cid:14) (cid:16).x
P
(cid:0)
(cid:0)
X .s//2(cid:17) (cid:18) ret;
(4.6)
where (cid:18) ret imposes retarded x0 causality. Writing the line of observation as
z(cid:22)
x(cid:22)
(cid:0)
D
X (cid:22).s/
(cid:0)!
z2
(cid:140)x
(cid:0)
D
X .s/(cid:141)2
and using the identity
we obtain
Z ds f .s/ (cid:14) (cid:140)g .s/(cid:141)
f .(cid:28)R/
g0 .(cid:28)R/
j
D
j
;
(cid:28)R
D
g(cid:0)
1 .0/
a(cid:11) .x; (cid:28) /
e
4(cid:25)
D
’ .(cid:28)
(cid:28)R/
(cid:0)
X (cid:11) .(cid:28)R/
P
X (cid:22) .(cid:28)R//
;
X(cid:22) .(cid:28)R/(cid:12)
P
(cid:12)
.x(cid:22)
(cid:12)
(cid:12)
(cid:0)
(4.7)
(4.8)
where the retarded time (cid:28)R satisfies
z2
(cid:140)x
(cid:0)
D
X.(cid:28)R/(cid:141)2
0
D
(cid:18) .x
(cid:0)
X .(cid:28)R//
1:
D
54
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
Introducing the notation for velocity
u(cid:22)
X (cid:22) .(cid:28) /
D P
(cid:12)(cid:22)
D
X (cid:22)
P
c
u5
X 5
D P
D
c5
and the scalar length
the potential becomes
R
D
1
2c
d
d (cid:28)R
(cid:140)x
(cid:0)
X .(cid:28)R/(cid:141)2
z(cid:22)u(cid:22)
z
c D j
(cid:12)
j
(cid:1)
D (cid:0)
a(cid:22) .x; (cid:28) /
e
4(cid:25)R
D
’ .(cid:28)
(cid:0)
(cid:28)R/ (cid:12)(cid:22)
a5 .x; (cid:28) /
e
4(cid:25)R
D
’ .(cid:28)
(cid:28)R/
(cid:0)
c5
c
;
(4.9)
(4.10)
where R is nonnegative because u(cid:22) is timelike and z(cid:22) is lightlike. Thus, a(cid:22) .x; (cid:28) / takes the
form of the usual Liénard–Wiechert potential from Maxwell theory multiplied by the factor
’ .(cid:28)
(cid:28)R/ which separates out the (cid:28)-dependence of the fields.
To calculate the field strengths, we need derivatives of the Liénard–Wiechert potential.
(cid:0)
Since
d
d (cid:28)R
’ .(cid:28)
(cid:28)R/
(cid:0)
D (cid:0)
1
2(cid:24)
d
d (cid:28)
(cid:28)
e(cid:0)j
(cid:28)R
(cid:0)
=(cid:24)(cid:21)
j
" .(cid:28)
(cid:28)R/
(cid:0)
(cid:24)(cid:21)
D (cid:0)
’ .(cid:28)
(cid:28)R/ ;
(cid:0)
where " .(cid:28) /
D
signum.(cid:28)/, we obtain the (cid:28)-derivative
1
c5
@(cid:28) a(cid:22) .x; (cid:28) /
1
c5
D
4(cid:25)
e
u
j
z
j
(cid:1)
’ .(cid:28)
P
(cid:0)
(cid:28)R/ u(cid:22)
D (cid:0)
e
4(cid:25)c5
’ .(cid:28)
u
j
(cid:28)R/
(cid:0)
z
(cid:1)
j
" .(cid:28)
(cid:28)R/
u(cid:22)
(cid:0)
(cid:24)(cid:21)
directly from (4.10).
The spacetime derivative is most conveniently found by applying the identity (4.7) to
expression (4.6)
@(cid:22)a(cid:12) .x; (cid:28) /
D
D
e
2(cid:25)
e
2(cid:25)
e
2(cid:25)
Z ds ’ .(cid:28)
s/
X (cid:11) .s/ (cid:18) ret @(cid:22)(cid:14) (cid:16).x
P
(cid:0)
Z ds ’.(cid:28)
s/
(cid:0)
Z ds ’.(cid:28)
(cid:0)
X (cid:12) .s/ (cid:18) ret (cid:14)0 h.x
P
X (cid:12) .s/ (cid:140)x(cid:22)
.x
X .s/
P
s/ P
(cid:0)
(cid:0)
X (cid:22) .s/(cid:141)
X .s//
(cid:0)
(cid:1)
D (cid:0)
X .s//2(cid:17)
(cid:0)
X .s//2i (cid:140)2 .x(cid:22)
(cid:18) ret d
ds
(cid:14) h.x
X (cid:22) .s//(cid:141)
X .s//2i
(cid:0)
(cid:0)
and integrating by parts to obtain
@(cid:22)a(cid:12) .x; (cid:28) /
e
2(cid:25)
e
4(cid:25)
D
D
d
ds
d
ds
Z ds
1
u
j
z
j
(cid:1)
"
’.(cid:28)
"
’.(cid:28)
(cid:0)
(cid:0)
X (cid:22) .s/(cid:141)
X .s//
#
(cid:18) ret (cid:14) h.x
X .s//2i
(cid:0)
s/ P
X (cid:12) .s/ (cid:140)x(cid:22)
X .s/
.x
P
(cid:0)
(cid:1)
z(cid:22).s/u(cid:12) .s/
#
(cid:0)
s/
:
u
z
(cid:1)
(cid:28)R
s
D
4.2. LIÉNARD–WIECHART POTENTIAL AND FIELD STRENGTH 55
Using
z(cid:22)
P
D (cid:0)
u(cid:22)
R
P
D (cid:0)
d
d (cid:28)
z
u
(cid:1)
c D
c(cid:12)2
z
(cid:1) P(cid:12)
(cid:0)
we find the field strengths as
f (cid:22)(cid:23).x; (cid:28)/
’ .(cid:28)
e
4(cid:25)
D
(cid:0)
R
(cid:28)R/
(cid:26) .z(cid:22)(cid:12)(cid:23)
z(cid:23)(cid:12)(cid:22)/ (cid:12)2
(cid:0)
R2
(cid:0)
" .(cid:28)
(cid:28)R/
(cid:0)
(cid:24)(cid:21)c
z(cid:22)(cid:12)(cid:23)
z(cid:23)(cid:12)(cid:22)
(cid:0)
R
(cid:16)z(cid:22)
P(cid:12)(cid:23)
(cid:0)
z(cid:23)
P(cid:12)(cid:22)(cid:17) R
.z(cid:22)(cid:12)(cid:23)
C
cR2
(cid:0)
z(cid:23)(cid:12)(cid:22)/ (cid:16)
P(cid:12)
z(cid:17)
9
=
(cid:1)
(cid:0)
(4.11)
f 5(cid:22).x; (cid:28)/
c5
e
4(cid:25)
D
’ .(cid:28)
(cid:0)
cR
(cid:28)R/
(cid:26)
(cid:0)
z(cid:22)(cid:12)2
C
R2
(cid:12)(cid:22)R
(cid:0)
" .(cid:28)
(cid:28)R/
(cid:0)
(cid:24)(cid:21)c
;
(cid:12)(cid:22)Rc2=c2
5
R
z(cid:22)
C
z(cid:17)
z(cid:22) (cid:16)
P(cid:12)
(cid:1)
cR2
C
:
9
=
;
(4.12)
It is convenient to express the fields as elements of a Clifford algebra [4] with basis vectors
e(cid:11)
e(cid:12)
(cid:1)
D
(cid:17)(cid:11)(cid:12)
e(cid:11)
e(cid:12)
^
D
e(cid:11)
(cid:10)
e(cid:12)
(cid:0)
e(cid:12)
(cid:10)
e(cid:11)
(4.13)
and Clifford product
Separating spacetime and scalar quantities as
e(cid:11)e(cid:12)
e(cid:11)
e(cid:12)
(cid:1)
D
C
e(cid:11)
^
e(cid:12) :
X.(cid:28)/
D
d
X (cid:22).(cid:28)/e(cid:22)
@(cid:22)e(cid:22)
D
X 5
@5
c5(cid:28)
D
1
c5
@(cid:28)
D
and writing (cid:15)(cid:22)
D
f 5(cid:22), the field strength tensors
f
D
1
2
f (cid:22)(cid:23) e(cid:22)
e(cid:23)
^
f 5
D
f 5(cid:22) e5
e(cid:22)
e5
(cid:15)
^
D
^
are Clifford bivectors, (3.5) takes the form
In this notation, the pre-Maxwell equations (3.20) are
f
d
a
^
D
(cid:15)
@5a
(cid:0)
D (cid:0)
da5:
e
c
j’
d
(cid:0)
(cid:1)
f
d
@5(cid:15)
(cid:0)
D
f
^
0
D
d
d
^
(cid:1)
(cid:15)
e
c
j 5
’
(cid:15)
D
@5f
0;
D
C
(4.14)
56
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
where we may evaluate d
(cid:1)
f and similar terms using the Clifford identity
(cid:1)
Defining the dimensionless quantities associated with acceleration P(cid:12)
D
(cid:0)
^
(cid:1)
(cid:1)
a
.b
c/
.a
b/ c
.a
c/ b:
Q
D (cid:0)
z
P(cid:12)
(cid:1)
c
W
D
P(cid:12)R
(cid:0)
c
(cid:12)cQ
D (cid:0)
P(cid:12) .(cid:12)
(cid:1)
z/
(cid:0)
c
X=c,
D R
(cid:12) (cid:16)
P(cid:12)
z(cid:17)
(cid:1)
the field strengths become
f .x; (cid:28)/
e
4(cid:25)
D
’ .(cid:28)
(cid:28)R/
(cid:0)
z
R3 ^
(cid:26)(cid:12) (cid:18)(cid:12)2
(cid:0)
" .(cid:28)
(cid:28)R/
(cid:0)
(cid:24)(cid:21)c
R(cid:19)
(cid:0)
W (cid:27)
(cid:15).x; (cid:28)/
’ .(cid:28)
c5
c
e
4(cid:25)
D
(cid:0)
R
(cid:28)R/
(cid:26)
(cid:0)
(cid:12)R
z(cid:12)2
C
R2
(cid:0)
" .(cid:28)
(cid:28)R/
(cid:0)
(cid:24)(cid:21)c
(cid:18) z
R C
(cid:19)
(cid:12)
c2
c2
5
(cid:0)
(cid:27)
zQ
R2
in which the factors ’ .(cid:28)
(cid:28)R/ and " .(cid:28)
(cid:0)
(cid:0)
(cid:28)R/ contain the (cid:28)-dependence. Since
Z d (cid:28)
(cid:21)
’ .(cid:28)/
1
D
Z d (cid:28)
(cid:21)
(cid:0)
’ .(cid:28)/
" .(cid:28)/
(cid:24)(cid:21) D
Z d (cid:28)
(cid:21)
’0 .(cid:28)/
0
D
(cid:28)R/
the concatenated fields are found by replacing ’ .(cid:28)
0, in agreement
with the standard Maxwell result. We mention again that these field strengths were obtained us-
ing only the leading term GMaxwell in the Green’s function, and neglect the smaller contributions
from GCorrelation. Although the neglected terms vanish under concatenation, they may dominate
the dynamics when the leading contribution is zero. In particular, while GMaxwell has support on
the lightcone, GCorrelation has timelike or spacelike support (depending on the choice of (cid:17)55) and
so becomes significant in self-interactions.
1 and " .(cid:28)
(cid:28)R/
!
!
(cid:0)
(cid:0)
Taking (cid:21)c
(cid:29)
imate
R and neglecting mass transfer, so that (cid:12)2
u2=c2
D
D (cid:0)
1, we may approx-
f .x; (cid:28)/
e
4(cid:25)
D (cid:0)
’ .(cid:28)
(cid:28)R/
(cid:0)
z
R3 ^
.(cid:12)
C
W /
(4.15)
(cid:15).x; (cid:28)/
c5
c
e
4(cid:25)
D
’ .(cid:28)
(cid:28)R/
(cid:0)
z .1
(cid:12)R
(cid:0)
Q/
R3
(cid:0)
and split the field strengths into the short-range retarded fields
f ret
e
4(cid:25)
that drop off as 1=R2, and the radiation fields
^
R3
D (cid:0)
’ .(cid:28)
(cid:28)R/
(cid:0)
(cid:12)
z
(cid:15)ret
c5
c
e
4(cid:25)
D
’ .(cid:28)
z
(cid:28)R/
(cid:12)R
(cid:0)
R3
(cid:0)
f rad
e
4(cid:25)
D (cid:0)
’ .(cid:28)
(cid:28)R/
(cid:0)
z
W
^
R3
(cid:15)rad
c5
c
e
4(cid:25)
D (cid:0)
’ .(cid:28)
(cid:28)R/
(cid:0)
zQ
R3
associated with acceleration that drop off as 1=R.
4.3. ELECTROSTATICS 57
As elements of a Clifford algebra, the field strengths admit geometrical interpretation. The
(cid:12) in f ret represents the plane spanned by the velocity (cid:12) and the line of observation z.
factor z
Similarly, we recognize
^
(cid:12)R
z
(cid:0)
D (cid:0)
(cid:12)2z
(cid:12) .z
(cid:12)/
(cid:1)
.z
(cid:12)/
(cid:12)
(cid:1)
^
D (cid:0)
C
representing the projection of (cid:12) onto the z
f ret
e
4(cid:25)
D (cid:0)
’ .(cid:28)
(cid:28)R/
(cid:0)
for the retarded fields. Similarly, using
(cid:0)
z
(cid:12)
^
R3
(cid:12) plane, and so we have
(cid:15)ret
c5
c
f ret
(cid:12)
(cid:1)
D
.b
a
(cid:1)
^
c
^
d /
D
.a
(cid:1)
b/ c
d
.a
(cid:1)
(cid:0)
^
c/ b
d
.a
(cid:1)
C
^
d / b
c
^
and z2
D
0, we see that
^
in f rad represent the projection of z onto the volume spanned by z, (cid:12), and P(cid:12). Similarly, (cid:15)ret is
proportional to zQ
z/z=c, the projection of z onto the acceleration P(cid:12).
. P(cid:12)
D
^
(cid:1)
D
(cid:1)
z
W
(cid:16)z
(cid:12)
^ P(cid:12)(cid:17)
z
ELECTROSTATICS
4.3
The covariant equivalent of a spatially static charge is a uniformly evolving event
X .(cid:28)/
u(cid:28)
D
D
(cid:0)u0(cid:28); u(cid:28) (cid:1)
X
with constant timelike velocity P
time axis as t
the field strengths are essentially kinematical in structure.
(cid:12)c, which in its rest frame simply advances along the
(cid:12)0(cid:28). As a result, and given the geometrical interpretation of the Clifford forms,
D
D
D
u
Writing the timelike velocity (cid:12) in terms of the unit vector O(cid:12)
(cid:12)
(cid:12)2 < 0
(cid:12)
1
j O(cid:12)
D j
O(cid:12)2
D (cid:0)
(cid:12)2
(cid:12)
D (cid:0) j
2
j
the observation line z can be separated into components
k D (cid:0) O(cid:12) (cid:16)
z
O(cid:12)
(cid:1)
z(cid:17)
z
? D
z
C O(cid:12) (cid:16)
O(cid:12)
(cid:1)
z(cid:17)
which satisfy
z2
? D
z2
k D O(cid:12)2 (cid:16)
z2
O(cid:12)
2
2 (cid:16)
z(cid:17)
O(cid:12)
(cid:1)
C
2
z(cid:17)
(cid:1)
(cid:16)
O(cid:12)
2
z(cid:17)
(cid:16)
O(cid:12)
(cid:1)
(cid:16)
O(cid:12)
D
D (cid:0)
2
z(cid:17)
(cid:1)
(cid:0)
2 (cid:16)
O(cid:12)
j
2
z(cid:17)
(cid:1)
(cid:12)
D (cid:0) j
j
z/2
.(cid:12)
(cid:1)
(cid:12)
D j
z2
k
D (cid:0)
2
:
z(cid:17)
(cid:1)
2 z2
k
(4.16)
58
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
The condition of retarded causality
z2
D
c2(cid:28) 2
R(cid:12)2
(cid:0)
2c(cid:28)R(cid:12)
x2
x
(cid:1)
C
0
D
relates the field to the location of the event along the backward lightcone of the observation
point. This implicit choice of (cid:28)R and its gradient
d.z2/
0
D
D
2 (cid:0)c2(cid:28)Rd (cid:28)R(cid:12)2
c(cid:28)R(cid:12)
cd (cid:28)R(cid:12)
x
(cid:1)
C
x(cid:1)
D
(cid:0)
(cid:0)
2 (cid:140)cRd (cid:28)R
z(cid:141)
C
lead to the following expressions:
d (cid:28)R
z
cR
D (cid:0)
d / (cid:28)R
.(cid:12)
(cid:1)
D
(cid:12)
(cid:12)
(cid:1)
(cid:1)
d .(cid:12)
z/
(cid:1)
D
d (cid:0)(cid:12)
x
(cid:1)
(cid:0)
(cid:12)2(cid:28)R(cid:1)
D
z
z D
1
.(cid:12)
(cid:1)
.z
d / (cid:28)R
(cid:1)
(cid:12)2z
D (cid:0)
(cid:12)2(cid:12)
(cid:12)
2
C
n (cid:12)
(cid:12)
Rn
(cid:0)
D
z/ (cid:12)
(cid:0)
(cid:12)
z
(cid:1)
(cid:12)2z(cid:3)
z2
cR D
0
D (cid:0)
(cid:12)2(cid:12)
(cid:12)
(cid:12)
(cid:12)
z
?
cR
z
?
d
1
Rn D
.
(cid:0)
1/n (cid:0)
n (cid:2).(cid:12)
(cid:1)
.(cid:12)
z/ (cid:12)
z/n
(cid:0)
2
C
d
z
(cid:1)
.x
d
(cid:1)
(cid:0)
D
d
x
(cid:1)
(cid:0)
D
c(cid:12)
(cid:1)
d (cid:28)R
(cid:1)
c(cid:12)(cid:28)R/
3
z
D
(cid:12)
d
z
^
D
d
d
z
^ O
D
d
^
^
z
z
j
.x
(cid:0)
c(cid:12)(cid:28)R/
cd (cid:28)R
(cid:12)
^
D (cid:0)
D
^
R
1
z
j
j
D (cid:0)
j
(cid:12)
z
z
^
R (cid:0) O
^
z
z
j
j
(cid:12)
z
^ O
R
:
2 D (cid:0)
Using these expressions, the pre-Maxwell equations (4.14) can be easily verified for the case of
’"=(cid:24)(cid:21), the exterior derivative of f
a uniform velocity event [5]. For example, recalling ’0 D (cid:0)
is
e
4(cid:25)
(cid:12)
R3 (cid:12)2
^
z
(cid:12)
^
cR2
(cid:18)’ .(cid:28)
’0 .(cid:28)
(cid:28)R/
(cid:28)R/
(cid:19)
f
d
d
z
C
D
(cid:0)
(cid:0)
^
^
which produces terms of the type:
d’.n/
.z
^
^
(cid:12)/
D (cid:0)
’.n
C
1/ z
cR ^
.z
u/
0
D
^
d
.z
(cid:12)/
.d
z/
(cid:12)
^
^
D
^
^
D (cid:0)
(cid:12)
z
^
R ^
(cid:12)
0
D
(cid:20)d
(cid:21)
1
Rn
.z
^
^
u/
D
"
(cid:0)
n (cid:12)
(cid:12)
Rn
(cid:12)2(cid:12)
(cid:12)
2
C
#
z
?
.z
^
? ^
u/
0
D
and thus we recover
d
f
^
0
D
from kinematics.
It is convenient to write the field strengths in 3-vector and scalar form
.e/i
f 0i
D
.b/i
D
"ij kf j k
.(cid:15)/i
f 5i
D
(cid:15)0
D
f 50
for which the field equations split into four generalizations of the 3-vector Maxwell equations
4.3. ELECTROSTATICS 59
e
(cid:0)
r (cid:1)
b
(cid:0)
r (cid:2)
1
c5
1
c
@
@(cid:28)
(cid:15)0
@
@t
e
(cid:0)
D
1
c5
e
c
j 0
’ D
e(cid:26)0
’
@
@(cid:28)
(cid:15)
D
e
c
j’
b
r (cid:1)
e
C
r (cid:2)
1
c
0
b
D
@
@t
0
D
and three new equations for the fields (cid:15) and (cid:15)0
(cid:15)
C
r (cid:1)
1
c
@
@t
(cid:15)0
D
e
c
j 5
’ D
1
c
C
(cid:15)0
ec5
c
(cid:26)’
(cid:17)55 1
c5
@
@(cid:28)
b
(cid:15)
(cid:0)
0
D
r (cid:2)
@
@t
(cid:15)
(cid:17)55 1
c5
@
@(cid:28)
e
C
0:
D
r
(4.17)
(4.18)
Writing d
e0@0
D
C r
and f
e0
e
^
C
D
1
2 f j kej
^
ek we find that
d
f
^
0
D
(cid:0)!
8
(cid:136)<
(cid:136):
b
0
D
r (cid:1)
e
C
r (cid:2)
1
c
@
@t
b
0
D
expressing the absence of electromagnetic monopoles.
In the rest frame of a charged event, we may set
point x
.ct; x/
D
1
t
P
D
!
(cid:12)
D
e0, so for an observation
z2
0
D
(cid:0)!
8
(cid:136)(cid:136)(cid:136)(cid:136)<
(cid:136)(cid:136)(cid:136)(cid:136):
R
z
(cid:28)R
t
D
j
x
c
j
.x
(cid:0)
e0
c(cid:28)Re0/
D (cid:0)
(cid:1)
.c .t
(cid:0)
D
(cid:0)
(cid:28)R/ ; x/
D
x
j
D j
R .e0
x/
C O
and the field strengths reduce to
f .x; (cid:28)/
e
4(cid:25)
D
’ .(cid:28)
(cid:28)R/
(cid:0)
e0
x
^ O
R2
(cid:18)1
C
" .(cid:28)
(cid:28)R/
(cid:0)
(cid:24)(cid:21)c
R(cid:19)
e0
^
D
e.x; (cid:28) /
(cid:15).x; (cid:28)/
c5
c
e
4(cid:25)
D
’ .(cid:28)
x
(cid:28)R/ (cid:26) O
R2 C
(cid:0)
" .(cid:28)
(cid:28)R/
(cid:0)
(cid:24)(cid:21)cR
(cid:20)e0
(cid:18)1
(cid:19)
c2
c2
5
C
x(cid:21)(cid:27) :
C O
60
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
We thus find that the magnetic field b is zero, while
e
4(cid:25)
e
D
(cid:18) ’ .(cid:28)
(cid:0)
R2
(cid:28)R/
’0 .(cid:28)
(cid:28)R/
(cid:19)
x
O
(cid:0)
R
c5
c
e
(cid:15)
D
and
(cid:15)0.x; (cid:28)/
D (cid:0)
’0 .(cid:28)
(cid:0)
R
(cid:28)R/
(cid:18) c5
c C
(cid:19) :
c
c5
(cid:0)
e
4(cid:25)
(4.19)
(4.20)
Because we obtained f .x; (cid:28)/ using only the leading term GMaxwell in the Green’s function, we
expect errors on the order of the neglected term GCorrelation. In particular, we notice that
(cid:18)@(cid:22)@(cid:22)
(cid:17)55
c2
5
C
(cid:19) GMaxwell
@2
(cid:28)
(cid:14)4 .x/ (cid:14) .(cid:28)/
D (cid:0)
1
2(cid:25)
(cid:17)55
c2
5
(cid:0)
(cid:14).x2/ (cid:14)00.(cid:28)/;
where the second term on the right is canceled when GCorrelation is included in the wave equation.
As a result, calculating
(cid:15)
D
r (cid:1)
c5
c
e
4(cid:25)
(cid:18)’ .(cid:28)
(cid:0)
(cid:28)R/ (cid:14)3 .x/
’00 .(cid:28)
(cid:0)
cR
(cid:0)
(cid:28)R/
(cid:19) ;
where we use
.
x=R2/
O
D
r (cid:1)
4(cid:25)(cid:14)3.x/, and
1
c
@
@t
(cid:15)0
D
e
4(cid:25)
’00 .(cid:28)
(cid:0)
cR
(cid:28)R/
(cid:18) c5
c C
(cid:19)
c
c5
leads to the Gauss law as
1
c
@
@t
(cid:15)0
C r (cid:1)
(cid:15)
D
c5
c
e
4(cid:25)
’ .(cid:28)
(cid:0)
(cid:28)R/ (cid:14)3 .x/
c
c5
e
4(cid:25)
C
’00 .(cid:28)
(cid:0)
cR
(cid:28)R/
exposing an error at the order of (cid:14)00.(cid:28)
(cid:28)R/.
(cid:0)
We now consider a long straight charged line oriented along the z-axis, with charge per
unit length (cid:21)e. In cylindrical coordinates
x
.(cid:26); z/
(cid:26)
.x; y/
D
the fields (cid:15) and e are found by replacing R
along the z-axis to find
D
D
(cid:26)
D
p(cid:26)2
(cid:26)
D
px2
y2
C
z2 in (4.19) and (4.20) and integrating
(cid:26)
O
C
(cid:21)e
4(cid:25)
e
D
Z dz
0
’ (cid:18)(cid:28)
B
B
@
t
(cid:0)
.(cid:26)2
C
C
.(cid:26)2
z2/1=2
c
C
(cid:19)
(cid:18)(cid:28)
’0
z2/3=2
(cid:0)
t
(cid:0)
C
c .(cid:26)2
.(cid:26)2
z2/1=2
c
C
(cid:19)
1
z2/
C
.(cid:26)
(cid:26); z/
O
C
C
A
(cid:15)0
D (cid:0)
(cid:21)e
4(cid:25)
c5
c
Z dz
’0
(cid:18)(cid:28)
t
C
(cid:0)
c .(cid:26)2
.(cid:26)2
z2/1=2
c
C
(cid:19)
:
z2/1=2
C
To get a sense of these expressions, we may use (3.15) to approximate ’.x/
permits us to easily carry out the z-integration to obtain
D
(cid:21)(cid:14).x/ which
4.3. ELECTROSTATICS 61
(cid:21)(cid:21)e
2(cid:25)
e
D
0
B
@
(cid:18) .t
c (cid:16).t
(cid:0)
(cid:26)=c
(cid:0)
(cid:28) /2
(cid:0)
(cid:28)/ (cid:26)
(cid:0)
(cid:26)2=c2(cid:17)
3=2 (cid:0)
(cid:14) .t
q.t
(cid:26)=c
(cid:0)
(cid:28) /2
(cid:28) /
(cid:0)
(cid:26)2=c2
(cid:0)
(cid:0)
1
(cid:26)
C
A O
which vanishes (cid:28) > (cid:28)R
D
t
(cid:26)=c as required for retarded causality. Since
(cid:0)
Z d (cid:28)
(cid:21)
’ .(cid:28)/
1
D
Z d (cid:28)
(cid:21)
’0 .(cid:28) /
0
D
the concatenated electric field is found as
(cid:21)e
4(cid:25)
Z d (cid:28)
(cid:21)
e.x; (cid:28)/
E.x/
D
D
Z dz
in agreement with the standard expression.
1
z2/3=2
C
.(cid:26)2
.(cid:26)
(cid:26); z/
O
D
(cid:21)e
2(cid:25)(cid:26)
.
(cid:26); 0/
O
To obtain the field of a charged sheet in the x
y plane with charge per unit area (cid:27), it is
convenient to start from the potential from a charged event, and integrating over x and y with
R
z2. Thus,
px2
y2
(cid:0)
D
C
C
’ (cid:18)(cid:28)
t
q.x
1
c
x0/2
.y
a0.x; (cid:28)/
(cid:27)c
4(cid:25)
D
Z dx0dy0
(cid:0)
y0/2
.c5=c/a0.x; (cid:28)/. Changing to radial coordinates .x; y/
(cid:0)
C
cq.x
(cid:0)
x0/2
.y
C
C
(cid:0)
(cid:0)
y0/2
C
z2(cid:19)
C
z2
.(cid:26); (cid:18)/ we obtain
and a5.x; (cid:28)/
D
a0.x; (cid:28)/
(cid:27)c
4(cid:25)
D
Z d(cid:18)d(cid:26)
’ (cid:16)(cid:28)
t
(cid:0)
C
cp(cid:26)2
1
c p(cid:26)2
z2
C
C
which by change of variable (cid:16)
z2 becomes
1
c p(cid:26)2
D
a0.x; (cid:28)/
C
(cid:27)c
2
D
Z 1
z
j
=c
j
’ .(cid:28)
t
(cid:0)
C
(cid:16)/ d (cid:16):
!
z2(cid:17)
We calculate the fields from
e.x; (cid:28)/
a0
D
(cid:27)
2
’ (cid:18)(cid:28)
t
(cid:0)
C
(cid:19)
j
z
c
j
z
r j
j D
(cid:27)
2
D (cid:0)r
".z/’ (cid:18)(cid:28)
t
(cid:0)
C
(cid:19)
z
j
c
j
z;
O
where (cid:15).x; (cid:28)/
D
.c5=c/e.x; (cid:28)/ and
(cid:15)0
D
D
1
c
@(cid:28) a0
(cid:17)55
c5
(cid:27)
(cid:18)(cid:17)55
c
C
c
c5 (cid:0)
@t a5
1
c
D
(cid:19) ’ (cid:18)(cid:28)
c5
c
(cid:19) @(cid:28) a0
(cid:18)(cid:17)55
t
(cid:0)
C
c
c5
c5 (cid:0)
c
z
(cid:19) :
j
c
j
62
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
By concatenation, we recover
Z d (cid:28) e.x; (cid:28)/
E.x/
D
Z d (cid:28)
D
(cid:27)
2
".z/’ (cid:18)(cid:28)
t
(cid:0)
C
(cid:19)
j
z
c
j
(cid:27)
2
z
O
D
".z/
z
O
in agreement with the Maxwell field from a charged sheet. We notice that, as expected, the
space part of the electric fields change sign at the plane of the sheet, pointing out at each side.
Consequently, an event passing through a charged sheet of equal sign will decelerate in space
on its approach and then accelerate as it retreats. However, unlike the field of a point event, the
temporal part (cid:15)0 is an even function of spatial distance and so the event may accelerate along
the time axis on both its approach to the charged sheet and its retreat. In such a case, the spatial
motion will asymptotically return to its initial condition, while the event acquires a net temporal
acceleration, corresponding to a shift in energy and mass.
PLANE WAVES
4.4
From the wave equation (3.22) for j (cid:11).x; (cid:28)/
transform [6]
D
0 we may write the field in terms of the Fourier
f (cid:11)(cid:12) .x; (cid:28)/
1
.2(cid:25)/5
D
Z d 5k eik
xf (cid:11)(cid:12) .k/
(cid:1)
1
.2(cid:25)/5
D
Z d 4k d (cid:20) ei.k
(cid:1)
x
C
k0x0
C
(cid:17)55c5(cid:20)(cid:28)/f (cid:11)(cid:12) .k; (cid:20)/;
where
(cid:20)
k5
(cid:17)55k5
D
is understood to represent the mass carried by the plane wave, much as k0 and k represent energy
and 3-momentum. This interpretation is supported by the wave equation which imposes the 5D
constraint
D
k(cid:11)k(cid:11)
k2
(cid:0)
D
.k0/2
C
(cid:17)55(cid:20)2
0
D
H)
(cid:17)55(cid:20)2
.k0/2
k2
(cid:0)
D
(4.21)
expressing (cid:20) in terms of the difference between energy and momentum. Under concatenation,
the field becomes
F (cid:11)(cid:12) .x/
Z d (cid:28)
(cid:21)
D
f (cid:11)(cid:12) .x; (cid:28)/
D
Z d 4k
.2(cid:25)/4 eik(cid:22)x(cid:22) 1
(cid:21)c5
f (cid:11)(cid:12) .k; 0/
Z d 4k
.2(cid:25)/4 eik(cid:22)x(cid:22)
D
F (cid:11)(cid:12) .k/
and recovers the 4D mass-shell constraint k(cid:22)k(cid:22)
domain, the sourceless pre-Maxwell equations take the form
D
0 for the Maxwell field. In the transform
(cid:17)55(cid:20)(cid:15)0
0
k0b
D
0
D
k
(cid:1)
k
e
(cid:2)
(cid:0)
e
(cid:0)
(cid:15)
(cid:0)
k
(cid:2)
b
k
(cid:1)
D
0
k
(cid:1)
(cid:15)
b
k
(cid:2)
C
k0(cid:15)0
0
D
(cid:17)55(cid:20)(cid:15)
(cid:0)
k0e
k0(cid:15)
k(cid:15)0
(cid:0)
(cid:0)
0
D
0
D
(cid:20)b
0
D
(cid:20)e
(cid:0)
C
4.4. PLANE WAVES 63
which can be solved by taking (cid:15)
and e
?
k
as independent 3-vector polarizations, and writing
e
k D
(cid:17)55
(cid:20)
k0 (cid:15)
k
(cid:15)
? D
(cid:20)
k0
e
?
(cid:15)0
D
1
k0
k
(cid:15)
k
(cid:1)
1
k0
b
D
k
e
?
(cid:2)
for the remaining fields. Unlike Maxwell plane waves, for which E, B, and k are mutually orthog-
onal, the pre-Maxwell electric fields e and (cid:15) have both transverse and longitudinal components.
0, we find that e, b, and k become mutually orthogonal and (cid:15) becomes a decoupled
When (cid:20)
longitudinal polarization parallel to k.
!
We use (3.11) to write the convolved field as
f (cid:11)(cid:12)
(cid:136) .x; (cid:28)/
Z ds
(cid:21)
D
(cid:136).(cid:28)
(cid:0)
s/f (cid:11)(cid:12) .x; s/
where
D
1
1
.2(cid:25)/5
Z d 4k d (cid:20) ei.k
(cid:1)
x
(cid:0)
k0x0
(cid:17)55c5(cid:20)(cid:28)/f (cid:11)(cid:12)
C
(cid:136) .k; (cid:20)/;
f (cid:11)(cid:12)
(cid:136) .k; (cid:20)/
.(cid:24)(cid:21)c5(cid:20)/2
C
f (cid:11)(cid:12) .k; (cid:20)/
D
introduces a multiplicative factor that will appear once in each field bilinear of T (cid:11)(cid:12)
the 3-vector fields, the mass-energy-momentum tensor components are
(cid:21)
(cid:136) . In terms of
e(cid:136)
(cid:2)e
(cid:1)
b
(cid:1)
C
b(cid:136)
C
(cid:17)55 (cid:0)(cid:15)
(cid:15)(cid:136)
(cid:1)
C
(cid:15)0(cid:15)0
(cid:136)(cid:1)(cid:3)
T 00
(cid:136) D
T 0i
(cid:136) D
T 50
(cid:136) D
T 5i
(cid:136) D
T 55
(cid:136) D
1
2c
1
c
1
c
1
c
1
2c
(cid:0)e
b(cid:136)
(cid:2)
C
(cid:17)55(cid:15)0(cid:15)(cid:136)(cid:1)
i
e
(cid:15)(cid:136)
(cid:1)
(cid:0)(cid:15)
b(cid:136)
(cid:2)
C
i
(cid:15)0e(cid:136)(cid:1)
(cid:15)(cid:136)
(cid:2)(cid:15)
(cid:1)
(cid:0)
(cid:15)0(cid:15)0
(cid:136) C
(cid:17)55 .e
e(cid:136)
b
(cid:1)
(cid:0)
(cid:1)
b(cid:136)/(cid:3) :
For the plane wave, the energy density is
1
c
(cid:16)e2
T 00
(cid:136) D
.(cid:24)(cid:21)(cid:20)/2
(cid:21)
b2(cid:1), is equivalent in form to the energy density in Maxwell theory
(cid:17)55(cid:15)2
k
? C
C
(cid:17)
1
which, since e2
? D
1
2 (cid:0)e2
? C
(cid:18) 00
1
2c
D
(cid:0)E2
B2(cid:1)
C
with the addition of the independent polarization (cid:15)
. The mass density is found to be
k
T 55
(cid:136) D
(cid:20)2
ck2
0
(cid:16)e2
? C
(cid:17)
(cid:17)55(cid:15)2
k
1
C
.(cid:24)(cid:21)(cid:20)/2
(cid:21)
(cid:20)2
k2
0
D
T 00
(cid:136)
64
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
expressing energy density scaled by the squared mass-to-energy ratio for the field. The energy
flux—the standard Poynting 3-vector—is
T 0i
(cid:136) (cid:0)!
T0
(cid:136) D
k
k0 T 00
(cid:136)
expressing the energy density T 00
malized to energy. Comparing the proportionality factor to that for a free particle
(cid:136) flowing uniformly in the direction of the momentum nor-
k
k0 (cid:0)!
p
E=c D
1
c
M d x=d (cid:28)
M dt =d (cid:28) D
v
c
which will not generally be a unit vector unless (cid:20)
The mass flux vector—a second Poynting 3-vector—can be written
D
0, as it must be for Maxwell plane waves.
T 5i
(cid:136) (cid:0)!
T5
(cid:136) D
k
(cid:20)
T 55
(cid:136)
expressing the mass density T 55
ized to mass. Finally,
(cid:136) flowing uniformly in the direction of the momentum normal-
so that T 5(cid:22)
(cid:136) can be written as
T 50
(cid:136) D
k0
(cid:20)
T 55
(cid:136) D
(cid:20)
k0 T 00
(cid:136)
T 5(cid:22)
(cid:136) D
k(cid:22)
(cid:20)
T 55
(cid:136) D
(cid:20)k(cid:22)
k2
0
T 00
(cid:136)
(cid:136) flowing in the direction of the 4-momentum. In this sense, T 50
expressing the mass density T 55
(cid:136)
0, as is the case
represents the flow of mass into the time direction. We notice that when (cid:20)
for Maxwell plane waves, k=k0 becomes a unit vector and T 5(cid:11)
0, so that mass density and
flow vanish. The interpretation of plane waves carrying energy and momentum (energy flux)
uniformly to infinity is thus seen to generalize to mass flow, where mass is best understood
through (4.21) as the non-identity of energy and momentum.
(cid:136) D
(cid:0)!
Suppose that a plane wave of this type impinges on a test particle in its rest frame, de-
0, the wave will interact with the event through the
.c(cid:28); 0; c5(cid:28) /. Since
scribed by x(cid:11).(cid:28)/
Lorentz force (3.6) and (3.7) as
D
x
P
D
M
x(cid:22) .(cid:28) /
R
D
e
c
(cid:2)f (cid:22)
0.x; (cid:28)/
x0 .(cid:28)/
P
C
c5f (cid:22)
5.x; (cid:28)/(cid:3)
d
d (cid:28)
.
(cid:0)
1
2 M
x2/
P
D (cid:0)
ec5
c
(cid:17)55(cid:15)0
x0
P
which for (cid:15)
k ⁄
0
t
R
D (cid:0)
(cid:17)55e
)
c5
Mc2
(cid:1)
1
k0
k
(cid:15)
0 becomes
k ⁄
k
(cid:15)
k
(cid:1)
e
M
x
R
D
(cid:16)1
he
?
C
c5
c
(cid:20)
k0
(cid:17)
(cid:16)
(cid:15)
k
c5
c C
C
(cid:17)55
(cid:20)
k0
(cid:17)i
4.5. RADIATION FROM A LINE ANTENNA 65
d
d (cid:28)
.
(cid:0)
1
2 M
x2/
P
(cid:17)55ec5
k
(cid:15)
1
k0
D (cid:0)
showing that the incident plane wave will initially accelerate the test event in such a way as to
transfer mass. If the plane wave is a far field approximation to the radiation field of an accelerating
charge, then the resulting picture describes the transfer of mass by the radiation field between
charged events.
k
(cid:1)
4.5
RADIATION FROM A LINE ANTENNA
The radiation from a dipole antenna is treated generally in Maxwell theory [7] by approximating
the oscillating current as the separable current density
J .x; t /
D
J .x/ ei!t
J .x/
i!(cid:26)
0;
D
C
r (cid:1)
where the second equation expresses represents current conservation, and of course we take the
real parts of all physical quantities. This approximation may be justified by posing a collection
of oscillating charges with position 4-vectors
Xn .(cid:28)/
D
(cid:0)ctn .(cid:28) / ; anei!(cid:28) (cid:1)
which for nonrelativistic motion includes tn.(cid:28)/
for this collection is
t
D
D
(cid:28) for each particle. The Maxwell current
J .x; t /
X
n
D
Z d (cid:28) c
Xn .(cid:28)/ (cid:14)4 .x
P
(cid:0)
Z d (cid:28) i! canei!(cid:28) (cid:14) .ct
Xn .(cid:28)//
c(cid:28) / (cid:14)3 (cid:0)x
anei!(cid:28) (cid:1)
(cid:0)
(cid:0)
X
n
"
X
n
D
D
i! an(cid:14)3 (cid:0)x
(cid:0)
anei!t (cid:1)
#
ei!t
so that replacing the term in square brackets with its time average over one cycle of oscillation
T
2(cid:25)=! we obtain
D
J .x; t /
" 1
T
T
Z
0
’
dt X
n
i! an(cid:14)3 (cid:0)x
(cid:0)
anei!t (cid:1)
#
ei!t
J .x/ ei!t :
D
Thus, J .x/ approximates the time-dependent current density by a time averaged static configu-
ration in space, rendering the antenna problem tractable.
To treat the dipole antenna in SHP electrodynamics [8] we cannot make use of this ap-
proximation because the microscopic current
j .x; t; (cid:28) /
c X
n
D
Xn .(cid:28) / (cid:14)4 .x
P
(cid:0)
Xn .(cid:28) //
X
n
D
i! anei!(cid:28) (cid:14) .t
(cid:28)/ (cid:14)3 (cid:0)x
(cid:0)
(cid:0)
anei!(cid:28) (cid:1)
66
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
is not integrated over (cid:28), and so time averaging cannot be performed in any meaningful way.
Instead, in analogy to this approximation, we pose a current of the form
c (cid:2)(cid:26)0 .x/
C
(cid:26) .x/ ei!t (cid:3) (cid:30) .(cid:28)
t/
(cid:0)
J .x/ ei!t (cid:30) .(cid:28)
t/
(cid:0)
j 0 .x; (cid:28) /
j .x; (cid:28) /
j 5 .x; (cid:28) /
D
D
D
c5
c
j 0 .x; (cid:28) /
c5 (cid:2)(cid:26)0 .x/
C
D
(cid:26) .x/ ei!t (cid:3) (cid:30) .(cid:28)
t/ ;
(cid:0)
where (cid:26)0 .x/ is a background event density. The function (cid:30) .(cid:28)
t / expresses a correlation be-
tween t and (cid:28), inserted by hand in place of a time averaging procedure. In this sense, the re-
placement
(cid:0)
j .x; t; (cid:28) /
(cid:0)!
J .x/ ei!t (cid:30) .(cid:28)
t/
(cid:0)
may be less precise than the comparable approximation in Maxwell theory, and we must be
attentive to artifacts introduced by the model. In analogy to (3.15), we choose
(cid:30) .(cid:28)
t/
(cid:0)
D
1
2(cid:27)
(cid:28)
e(cid:0)j
t
=(cid:27)
j
(cid:0)
Z d!
2(cid:25)
D
(cid:136) .!/ ei!.(cid:28)
t /
(cid:0)
(cid:136) .!/
1
.(cid:27)!/2
D
1
C
which imposes a correlation (cid:28)
t
(cid:0)
’
(cid:27) through
(cid:30) .(cid:28)
t/
(cid:0)
(cid:0)!
8
<
:
strong correlation: (cid:27)
weak correlation:
(cid:27)
0
)
large
(cid:30) .(cid:28)
(cid:0)
t
(cid:0)
)
!
!
t/
(cid:14) .(cid:28)
t/
!
(cid:0)
(cid:28) evenly distributed:
)
t
(cid:28)
D
Notice that in the strong correlation limit, the potential found from the Green’s function
a .x; (cid:28) /
D
D
e
2(cid:25)
e
4(cid:25)c
Z d 3x0d.ct 0/ (cid:14) (cid:16)(cid:0)x
1
ei!(cid:28) Z d 3x0
(cid:0)
2
x0(cid:1)
(cid:0)
(cid:14) (cid:18)(cid:28)
c2 (cid:0)t
t
C
(cid:0)
2(cid:17) J (cid:0)x0(cid:1) ei!t 0(cid:14) (cid:0)(cid:28)
x0j
(cid:19) J (cid:0)x0(cid:1)
t 0(cid:1)
(cid:0)
x
j
(cid:0)
c
t 0(cid:1)
(cid:0)
x
j
x0j
(cid:0)
describes a Coulomb-like potential oscillating in (cid:28) simultaneously across spacetime, rather than
!t /. This suppression of the expected wavelike behavior
a wave propagating with phase .kr
can be characterized by the dimensionless parameter
(cid:0)
1
!(cid:27) D
T
2(cid:25)(cid:27) D
antenna period
correlation time
which we take to be small but greater than zero. The total number of events in this system at
time (cid:28) is found from the spacetime integral
4.5. RADIATION FROM A LINE ANTENNA 67
N .(cid:28)/
D
D
D
where
Z d 4x j 5 .x; (cid:28) /
1
c5
Z d 3x (cid:26)0 .x/ Z dt (cid:30) .(cid:28)
N ei!(cid:28)
N0
C
1
C
.(cid:27)!/2 ;
t/
(cid:0)
C
Z d 3x (cid:26) .x/ Z dt ei!t (cid:30) .(cid:28)
t/
(cid:0)
Z d 3x (cid:26)0 .x/
N0
D
Z d 3x (cid:26) .x/
N
D
given as a background event number with an oscillating perturbation. We must have
to unsure that the event number remains positive. Similarly, the total charge is given by
N0 >
N
.(cid:27)!/2
1
C
Q .(cid:28)/
Q0
D
C
1
Q ei!(cid:28)
.(cid:27)!/2 ;
C
eN0 and Q
eN , so that the total charge does not change sign, which would sug-
where Q0
gest pair creation and annihilation processes. Since the background density (cid:26)0.x/ is independent
of t and (cid:28), conservation of the 5D current becomes
D
D
1
c
@
@t
j 0
0
D
1
c5
@
@(cid:28)
j
C
j 5
D
(cid:26) .x/
C r (cid:1)
t/(cid:3)
(cid:0)
C r (cid:1)
J .x/ ei!t (cid:30) .(cid:28)
t /
(cid:0)
@
(cid:2)(cid:0)ei!t (cid:1) (cid:30) .(cid:28)
@t
(cid:26) .x/ ei!t @
@(cid:28)
(cid:30) .(cid:28)
t/
(cid:0)
J .x/(cid:141) ei!t (cid:30) .(cid:28)
t/
(cid:0)
C r (cid:1)
C
(cid:140)i!(cid:26) .x/
so that
D
i!(cid:26)
C r (cid:1)
J
0
D
(cid:0)!
e Z d 3x J .x/
e Z d 3x x
J
D
r (cid:1)
D (cid:0)
e Z d 3x x .i!(cid:26)/
and we identify
Z d 3x x e (cid:26) .x/
p
D
i!p
Id
d
O
D
as the dipole moment p of the charge distribution (cid:26) .x/, so that i!p can be written as a constant
current I along a dipole of length d in the direction O
Z d 4x J .x/ ei!t (cid:30) .(cid:28)
d. The total current density is
J .(cid:28) /
t/
e
c
D
(cid:0)
D
i! pei!(cid:28)
.(cid:27)!/2
1
C
68
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
representing an oscillating dipole.
The induced potential found from the Green’s function GMaxwell is
a(cid:11) .x; (cid:28) /
e
c
D
Z d 3x0
1
x
(cid:0)
x0j
4(cid:25)
j
j (cid:11) (cid:18)c (cid:18)t
x
j
(cid:0)
c
(cid:0)
x0j
(cid:19) ; x0; (cid:28) (cid:19)
so that writing x
r
r, we make the far field approximation
O
D
R
x
(cid:12)
(cid:12)
(cid:0)
D
x0(cid:12)
(cid:12) D
(cid:16)r 2
C
2
(cid:0)x0(cid:1)
2r
r
O
(cid:1)
(cid:0)
1=2
x0(cid:17)
r
r
(cid:0) O
(cid:1)
x0
’
and the dipole approximation
k
(cid:12)
(cid:12)
r
O
(cid:1)
x0(cid:12)
(cid:12)
< kd
2(cid:25)d
D
(cid:21) (cid:28)
eik
r
(cid:1)
O
x0
1
)
1
’
x0
r
r
(cid:0) O
c
(cid:1)
(cid:18)1
r
c
’
r
(cid:0) O
x0
(cid:1) O
(cid:19)
d
r
’
r
c
to obtain
a0 .x; (cid:28) /
Q0
4(cid:25) r C
Q
4(cid:25) r
’
e(cid:0)
i .kr
!t / (cid:30) (cid:16)(cid:28)
(cid:0)
t
(cid:0)
C
(cid:17)
r
c
a .x; (cid:28) /
ik
4(cid:25) r
p
’
e(cid:0)
i .kr
!t / (cid:30) (cid:16)(cid:28)
(cid:0)
t
(cid:0)
C
(cid:17)
r
c
a5 .x; (cid:28) /
c5
c
D
a0 .x; (cid:28) / :
We define the spherical wave factor
e(cid:0)
(cid:31) .x; (cid:28) /
D
i .kr
!t /
(cid:0)
4(cid:25) r
(cid:30) (cid:16)(cid:28)
t
(cid:0)
C
(cid:17)
r
c
and split the field strengths into spacetime and polarization factors, as
b
e
(cid:15)
D r (cid:2)
1
c
D (cid:0)
@
@t
(cid:17)55
D
(cid:15)0
D
(cid:17)55
a
b (cid:31)
D b
a
(cid:0) r
@
@(cid:28)
a
(cid:0)
@
@(cid:28)
a0
1
c5
1
c5
a0
D
c5
c r
1
c
(cid:0)
@
@t
Q0
r
4(cid:25) r 2 O
c5
c
D
a0
e (cid:31)
Cb
Q0
r
4(cid:25) r 2 O
b
b
e
b
(cid:15)
b
(cid:15) (cid:31)
Cb
D
D
ikId "1
r
O
d
(cid:2) O
D (cid:0)
ik"1 (cid:16)Q
r
O
(cid:0)
Id
d(cid:17)
O
ik h
"1Q
r
O
C
i"2Id
di
O
a5
(cid:15) 0(cid:31)
D b
(cid:15) 0
b
D
ik h
i"2i Q;
"1
C
c5
c
c5
c
where we used 1=kr
1 and define
(cid:28)
"1
1
C
D
R=c/
".(cid:28)
(cid:0)
t
C
i!(cid:27)
"2
(cid:17)55
D (cid:0)
".(cid:28)
c
c5
(cid:0)
t
C
!(cid:27)
R=c/
:
4.5. RADIATION FROM A LINE ANTENNA 69
We drop the static Coulomb terms produced by Q0, as these do not contribute to radiation. Since
!(cid:27)
1 but
small tends to suppress wavelike behavior, but c5=c
z the polarizations then
d
leave "2 unchanged. Taking the orientation of the antenna to be O
simplify to
1 [9], we approximate "1
D O
(cid:28)
’
(cid:24)
ik .Q
ik h
e
’
b
(cid:15) 0
b
’
Id
z/
O
i"2i Q
(cid:0)
r
O
c5
c C
r
O
z
(cid:2) O
b
b
(cid:15)
b
ikId
’ (cid:0)
ik h
’
c5
c
Q
r
O
C
i"2Id
zi
O
(cid:15)(cid:22). Such terms are
and we notice that terms containing 1=!(cid:27) appear only in the components of
b
t/, and can be understood as the contribution
artifacts of modeling the time correlation by (cid:30).(cid:28)
to the fields required to impose this correlation across spacetime. As was seen for plane waves,
these fields will accelerate a test event initially at rest through the Lorentz force in such a way
as to transfer mass to the event.
(cid:0)
The mass-energy-momentum tensor will contain bilinear field combinations of the type
T (cid:11)(cid:12)
1
c
D
(cid:18)f (cid:11)(cid:13)
(cid:136) f (cid:12)
(cid:13) (cid:0)
(cid:136) f(cid:14)"g(cid:11)(cid:12) (cid:19)
f (cid:14)"
1
4
Re h(cid:0)A(cid:11)
i B(cid:11)
(cid:1) (cid:31)i
C
(cid:0)!
and it is convenient to separate the resulting products as T (cid:11)(cid:12)
all terms containing 1=!(cid:27). We designate
T (cid:11)(cid:12)
0 C
D
i D(cid:12) (cid:17) (cid:31)i
Re h(cid:16)C(cid:12)
C
(cid:1)
(cid:27) , where T (cid:11)(cid:12)
T (cid:11)(cid:12)
(cid:27)
includes
S .x; (cid:28) /
C .x; (cid:28) /
X .x; (cid:28) /
D
D
D
k2 (cid:30) (cid:0)(cid:28)
k2 (cid:30) (cid:0)(cid:28)
k2 (cid:30) (cid:0)(cid:28)
t
(cid:0)
4(cid:25) r
C
t
(cid:0)
4(cid:25) r
C
t
(cid:0)
4(cid:25) r
C
2
r
c (cid:1)
!
2
r
c (cid:1)
!
2
r
c (cid:1)
!
sin2 .kr
!t /
(cid:0)
cos2 .kr
!t /
(cid:0)
2 sin .kr
(cid:0)
!t / cos .kr
!t /
(cid:0)
and note that these functions drop off as 1=r 2 and so will produce nonzero surface integrals at
large r, as is characteristic of radiation fields. Using these functions, the components of T (cid:11)(cid:12)
are
0
Id cos (cid:18)/2
2 .Id /2 (cid:16)1
C
Id cos (cid:18) /
z
O
C
(cid:18)Id .Id
(cid:0)
.cos (cid:18) /2(cid:17)
Q cos (cid:18)/
C
C
(cid:0)
(cid:0)
2(cid:17)55 (cid:16)
2
(cid:17)
Q2(cid:21) S .x; (cid:28) /
(cid:17)55 (cid:16)Q
2(cid:19)
(cid:17)
r(cid:21) S .x; (cid:28) /
O
c5
c
c5
c
T 00
0 D
T0
0 D
T 50
0 D
T5
0 D
T 55
0 D
(cid:20).Q
1
2
C
(cid:20)Id .Q
c5
c
c5
c
.Q
.Q
Q
Q
1
2
(cid:17)55 .Q
(cid:0)
(cid:0)
(cid:0)
Id cos (cid:18)/ S .x; (cid:28) /
Id cos (cid:18)/ S .x; (cid:28) /
r
O
Id cos (cid:18)/2 S .x; (cid:28) /
T 50
0
r
O
D
70
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
all of which have spacetime dependence S .x; (cid:28) /. The components of T (cid:11)(cid:12)
(cid:27)
are
T 00
(cid:27) D
1
2
h"2
2
h.Id /2
C
Q2i C .x; (cid:28) /
(cid:17)55
(cid:0)
c5
c
"2Q (cid:140)Q
C
Id cos (cid:18)(cid:141) X .x; (cid:28) /i
T0
(cid:27) D (cid:0)
(cid:17)55
c5
c
"2Q (cid:20)Id (cid:18)X .x; (cid:28) /
(cid:17)55
C
c
c5
C .x; (cid:28) /(cid:19)
Q X .x; (cid:28) /
z
O
C
r(cid:21)
O
T 50
(cid:27) D (cid:0)
"2Id .Id
(cid:0)
Q cos (cid:18)/ X .x; (cid:28) /
T5
(cid:27) D (cid:0)
"2 hId (cid:140)Q
(cid:0)
Id cos (cid:18)(cid:141)
(cid:16).Id /2
z
O
C
Q2(cid:17)
ri X .x; (cid:28) /
O
(cid:0)
T 55
(cid:27) D
1
2
h"2
2
(cid:16).Id /2
(cid:0)
Q2(cid:17) C .x; (cid:28) /
c5
c
C
"2Q .Q
(cid:0)
Id cos (cid:18) / X .x; (cid:28) / i
whose spacetime dependence is determined by C .x; (cid:28) / and X .x; (cid:28) / and is thus out of phase
with the T (cid:11)(cid:12)
0 .
As expected from the transfer of mass made possible by the fields (cid:15)(cid:22), we find a nonzero
mass density T 55 and mass flux T0 and T5 into time and space. Moreover, integrating over a
sphere of radius r, the net mass flux into space will be of the form
Z d (cid:127) r 2
T5
0
r
O
(cid:1)
Z d (cid:127) r 2
c5
c
hQ
r
O
(cid:1)
r 2 k2 (cid:30) (cid:0)(cid:28)
P
D
D
D
D
Q
c5
c
k2c5
4(cid:25)c
.Q
Id .cos (cid:18)// S .x; (cid:28) /
ri
O
(cid:0)
r
c (cid:1)
2
!
t
(cid:0)
4(cid:25) r
C
sin2 .kr
(cid:0)
!t / Z d (cid:127) (cid:140)Q
Id cos (cid:18) (cid:141)
(cid:0)
Q2 (cid:16)(cid:30) (cid:16)(cid:28)
t
(cid:0)
C
2
(cid:17)(cid:17)
r
c
sin2 .kr
!t /
(cid:0)
and thus nonzero wherever (cid:28)
r=c. Just as the energy radiated by a Maxwell dipole antenna
must be provided by the amplifier that drives the oscillating current density, the mass radiated
by an SHP antenna is continuously provided by an amplifier that creates events and drives them
into the antenna.
’
(cid:0)
t
For a center-fed antenna of length d oriented along the z-axis, the charge density may be
described by
where
(cid:26) .x/
( (cid:14) .x/ (cid:14) .y/ (cid:26)z .z/
D
0
;
z
d
2 (cid:20)
(cid:20)
; otherwise;
(cid:0)
d
2
(cid:26)z.z/
1
2
D
(cid:140)(cid:26)z.z/
(cid:26)z.
z/(cid:141)
(cid:0)
C
C
1
2
(cid:140)(cid:26)z.z/
(cid:26)z.
z/(cid:141)
(cid:0)
.z/
(cid:26)
C
.z/
(cid:26)
(cid:0)
C
D
(cid:0)
divides the charge density into even and odd parts. The total oscillating charge is
4.5. RADIATION FROM A LINE ANTENNA 71
Z d 3x e(cid:26) .x/
Q
D
d
2
2e Z
0
D
dz (cid:26)
.z/
C
and the dipole moment is
Id
d
O
D
i!e Z d 3x x(cid:26) .x/
2ie!
z
O
D
d
2
Z
0
dz z (cid:26)
.z/
(cid:0)
C
describes a net charge Q driven symmetrically into the left and right segments
showing that (cid:26)
of the antenna, while (cid:26)
describes a dipole moment produced by shifting charge from one
(cid:0)
antenna segment into the other segment. Since Q
eN , we see that the amplifier driving net
charge into the antenna must be driving new events into the antenna as well, accounting for
the radiated mass. Taking Q
0 so that the amplifier shifts charged events between antenna
segments without injecting new events, the fields reduce to
D
D
ikId
e
b
(cid:15) 0
b
D (cid:0)
0
D
z
O
b
b
(cid:15)
b
ikId
r
O
z
(cid:2) O
D (cid:0)
k"2Id
z
O
D
so that the effect of the waves on a test event at rest reduces to
d
d (cid:28)
.
(cid:0)
1
2 M
x2/
P
D (cid:0)
e(cid:17)55c5(cid:15)0
0
D
and there is no transfer of mass. Similarly, the components of become T (cid:11)(cid:12)
0
T 00
0 D
T0
0 D
T 50
0 D
T5
0 D
T 55
0 D
.Id /2 (cid:18)1
.Id /2
(cid:0)
(cid:0)
1
2
(cid:0)
cos (cid:18)
cos2 (cid:18)(cid:19) S .x; (cid:28) /
z
O
r (cid:1) S .x; (cid:28) /
C O
0
0
1
2
(cid:17)55 .Id /2 cos2 (cid:18) S .x; (cid:28) /
describing no transfer of mass into space or time.
72
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
The components of T (cid:11)(cid:12)
(cid:27)
also simplify to
T 00
(cid:27) D
T0
(cid:27) D
T 50
(cid:27) D
T5
(cid:27) D
T 55
(cid:27) D
2
(cid:19)
1
2
(cid:18) c
!c5
(cid:30)0
(cid:30)
0
.Id /2 C .x; (cid:28) /
(cid:30)0
(cid:30)
.Id /2 X .x; (cid:28) /
(cid:17)55
c
!c5
T 50 .
z
O
(cid:18) c
!c5
1
2
r/
(cid:0) O
(cid:30)0
(cid:30)
2
(cid:19)
.Id /2 C .x; (cid:28) / ;
(cid:0)
"R=(cid:27) with (cid:30)0=(cid:30). These expressions involve no transfer of energy but do
where we replaced
describe nonzero transfer of mass into space and time directions. Once again, we understand
this transfer as an artifact of the time correlation model that enters through the derivative of
t/, rather than an inherent feature of radiation from an oscillating charge. In particular,
(cid:30) .(cid:28)
all of the nonzero terms in the expression for mass conservation contain (cid:30)0, so that these terms
are separately conserved among themselves. To see this we expand (3.25) as
(cid:0)
@(cid:11)T (cid:11)5
e
c
D (cid:0)
f 5(cid:11)j(cid:11)
(cid:0)!
1
c
@
@t
T 50
T5
1
c5
@
@(cid:28)
C
T 55
e
c
(cid:15)(cid:22)j(cid:22)
D (cid:0)
C r (cid:1)
which becomes
1
c
@
@t
T 50
(cid:27) C r (cid:1)
T5
(cid:27) C
1
c5
@
@(cid:28)
(cid:0)T 55
0 C
T 55
(cid:27) (cid:1)
D (cid:0)
e
c
(cid:15)(cid:22)
(cid:27) j(cid:22)
(4.22)
T5
0 D
0. We also write the field as (cid:15)(cid:22)
because T 50
(cid:27) because it contains the factor "2. Finally,
0 D
we note that because T 55
0 depends on (cid:28) only through the factor of (cid:30)2 in S .x; (cid:28) /, the derivative
@(cid:28) T 55
0 must similarly contain (cid:30)0. Thus, each term in (4.22) enters through the derivative of the
time correlation model, and these terms are conserved among themselves with no corresponding
energy transfer.
Integrating the energy Poynting vector T0 over the surface of a sphere of radius r we must
evaluate
T0
r
O
(cid:1)
D
D
D
r
O
(cid:0)
cos (cid:18)
.Id /2
.Id /2
.Id /2 sin2 (cid:18) S .x; (cid:28) /
(cid:0)
(cid:0)
cos2 (cid:18)
z
O
C O
1(cid:1) S .x; (cid:28) /
C
(cid:1)
(cid:0)
r (cid:1) S .x; (cid:28) /
to find the instantaneous radiated power
4.6. CLASSICAL PAIR PRODUCTION 73
P
D
D
D
D
Z d (cid:127) r 2 .Id /2 S .x; (cid:28) / sin2 (cid:18)
.Id /2 r 2 k2 (cid:30) (cid:0)(cid:28)
2
r
c (cid:1)
!
t
(cid:0)
4(cid:25) r
C
sin2 .kr
!t / Z
0
(cid:0)
2(cid:25)
(cid:25)
d(cid:30) Z
0
d(cid:18) sin3 (cid:18)
.Id /2 k2 (cid:30) (cid:0)(cid:28)
2
!
r
c (cid:1)
t
(cid:0)
4(cid:25)
C
sin2 .kr
!t /
(cid:0)
8(cid:25)
3
k2.Id /2
6(cid:25)
(cid:16)(cid:30) (cid:16)(cid:28)
t
(cid:0)
C
r
c
(cid:17) sin .kr
2
!t /(cid:17)
:
(cid:0)
Since we have assumed that 1=!(cid:27) is small, we may take (cid:30) (cid:0)(cid:28)
over one cycle of the wave, so that the average radiated power over one cycle is
C
(cid:0)
t
r
c (cid:1) as effectively constant
k2.Id /2
6(cid:25)
P
N
’
(cid:16)(cid:30) (cid:16)(cid:28)
t
(cid:0)
C
r
c
(cid:17)(cid:17)
2 1
T
T
Z
0
dt .sin .kr
!t //2
(cid:0)
k2.Id /2
12(cid:25)
D
(cid:16)(cid:30) (cid:16)(cid:28)
t
(cid:0)
C
2
(cid:17)(cid:17)
r
c
which agrees with the standard result up to the factor of (cid:30)2. The neutral antenna radiates energy
in agreement with the Maxwell result and radiates no mass (leaving aside the derivatives of the
arbitrarily chosen function (cid:30)).
CLASSICAL PAIR PRODUCTION
4.6
A standard technique for pair creation in the laboratory is the two-step process by which An-
derson [10] first observed positrons in 1932: high energy electrons are first scattered by heavy
nuclei to produce bremsstrahlung radiation, and electron/positron pairs are then created from
the radiation field. The Bethe-Heitler mechanism [11] describes this technique as the quantum
process,
e(cid:0)
Z
e(cid:0)
Z
Z
(cid:13)
(cid:13)
C
C
involving a quantized radiation field and the external Coulomb field of the nuclei. We now
calculate the classical trajectories that produce this two-step process, as shown in Figure 4.1.
(cid:0)!
(cid:0)!
eC
e(cid:0)
C
C
C
C
Z
(cid:0)!
Because the electromagnetic interaction is instantaneous in (cid:28), we may take both stages
of the Bethe-Heitler process as occurring at (cid:28)2: (1) the scattering of particle-2 by a nucleus
Eout > 0) and (2) the absorption of the resulting bremsstrahlung radiation
at t1 (Ein > 0
Eout > 0). In the Stueckelberg picture, the E < 0 (antiparticle)
by particle-1 at t2 (Ein < 0
trajectory of particle-1 must have been produced at the earlier chronological time (cid:28)1 < (cid:28)2. To
examine the conditions that might produce this initial negative energy trajectory, we describe
particle-1 scattering in the Coulomb field of another nucleus at t
t3 and emerging with neg-
ative energy moving backward in t.
(cid:0)!
D
74
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
Figure 4.1: Bethe–Heitler mechanism in classical electrodynamics.
In the laboratory, where events are recorded in the order determined by clock t, the process
t1 and emitting bremsstrahlung, followed by the appear-
t3, the antiparticle encounters another
t2 of a particle/antiparticle pair. Then at t
appears as particle-2 scattering at t
ance at t
particle causing their mutual annihilation.
D
D
D
Our analysis is carried out in three parts. We first consider the Coulomb scattering of
a slow incoming particle by an oppositely charged nucleus. To produce the pair annihilation
observed at (cid:28)1, the outgoing particle must have E < 0, while at (cid:28)2 the interaction must lead to
E > 0. We identify the condition that allows the energy of the outgoing particle to change sign.
In the second part, we compute the radiation field produced by the acceleration of the scattered
particle at (cid:28)2 using the Liénard–Wiechert potential for an arbitrary trajectory. In the third part
we again use the Lorentz force to treat the acceleration of the E < 0 particle absorbing the
radiation at (cid:28)2, and find the condition for its return to an E > 0 trajectory.
(cid:0)
With the function ’.(cid:28)
(cid:28)1/ in the field strengths, the Lorentz force is a set of coupled
nonlinear differential equations. By taking the correlation time (cid:21) to be small we may again ap-
proximate ’.(cid:28)
(cid:21)c and out-
going scattering trajectories are easily obtained by integration of the Lorentz force. This solution
provides a reasonable qualitative description of the classical Bethe–Heitler process, which may
be refined by numerical solution of the exact Lorentz force equations.
(cid:28)1/, so that interactions are limited to a range R
(cid:21)(cid:14).(cid:28)
(cid:28)1/
(cid:25)
(cid:24)
(cid:0)
(cid:0)
tZE>0E>0E>0E>0ZE<0τ1τ2>τ1τ2t3t2t1particle−1particle−1particle−2(anti)particle−1Initially (at time (cid:28)
! (cid:0)1
rated. We set the nucleus at rest at the origin of the laboratory frame,
4.6. CLASSICAL PAIR PRODUCTION 75
), the target nucleus Z and incoming particle are widely sepa-
and from some point x the line of observation
XZ .(cid:28) /
D
.ctZ; xZ/
D
.c; 0/ (cid:28)
z
x
(cid:0)
D
XZ .(cid:28) /
D
.ct; x/
(cid:0)
.c; 0/ (cid:28)
satisfies
z2
D
.c.t
(cid:0)
(cid:28)/; x/2
0
D
(cid:0)!
c.t
(cid:28)/
(cid:0)
D
R
D j
x
j (cid:0)!
z
D
R (cid:16)1;
R(cid:17) ;
O
where R is the scalar length defined in (4.9) as
u
z
(cid:1)
c D (cid:0)
(cid:0)
XZ
P
(cid:1)
z
.c; 0/
R (cid:16)1;
R(cid:17)
O
(cid:1)
c D (cid:0)
c
R:
D
For the observation point we use the location of the incoming particle-1, approaching the nu-
cleus on the trajectory
x
D
Xin .(cid:28) /
D
.ct; x/
u(cid:28)
s
C
D
tin .c; v; 0; 0/ (cid:28)
D P
C
(cid:0)st ; 0; sy; 0(cid:1) ;
where
d
d (cid:28)
u
D
.ct; x; y; z/
dx
d (cid:28) D
dx
tin
dt P
v
tin
P
D
tin
P
D
dt
d (cid:28) D
1
(cid:0)
(cid:12)2
p1
v=c. The scattering takes place in the plane z
and (cid:12)
we can write the spatial distance between the incoming particle and the target as
0 and since the nucleus is at the origin,
D
D
R .(cid:28) /
x
j D
D j
px2
y2
q(cid:0)v
2
tin(cid:28) (cid:1)
P
s2
y :
C
D
C
Putting (cid:21)
so that (cid:28)1 is determined from the causality conditions for the initial trajectories,
R.(cid:28)1/, the support of the fields is narrowly centered around the retarded time (cid:28)1,
(cid:25)
(cid:140)Xin .(cid:28)1/
(cid:0)
XZ .(cid:28)1/ (cid:141)2
0
D
X 0
in .(cid:28)1/
(cid:0)
X 0
Z .(cid:28)1/ > 0:
These equations have the solution
(cid:28)1
D
v
1
tin (cid:0)1
P
(cid:0)
(cid:17)2
v(cid:1)
(cid:18)(cid:17)vst
qs2
t (cid:0)
C
s2
y (cid:0)1
(cid:0)
(cid:19)
(cid:17)2
v(cid:1)
qs2
t (cid:0)
v
s2
y
;
(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)!v
(cid:28)
c
76
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
where we introduce the smooth parameter
(cid:17)v
D
(cid:18)1
1
v
(cid:0)
(cid:19)
1
tin
P
(cid:26) 0; v
1; v
(cid:0)!
0
c:
D
D
Notice that the 0-component st of the impact parameter must be positive in order for the in-
teraction to take place. The location of the incoming particle at the time of interaction is now
where
x .(cid:28)1/
R
R
O
D
t .(cid:28)1/
tin(cid:28)1
D P
C
st =c;
R
D
1
1
(cid:0)
(cid:17)2
v
(cid:18)(cid:17)vqs2
t (cid:0)
s2
y (cid:0)1
(cid:17)2
v(cid:1)
(cid:0)
C
(cid:19)
st
st
(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)!v
(cid:28)
c
(cid:16)(cid:17)vst
C
R
O
D
s2
y (cid:0)1
t (cid:0)
(cid:17)vqs2
qs2
st
C
(cid:17)2
v(cid:1); (cid:0)1
(cid:0)
t (cid:0)
s2
y (cid:0)1
(cid:0)
v(cid:1) sy; 0(cid:17)
(cid:17)2
(cid:0)
(cid:17)2
v(cid:1)
(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)!v
(cid:28)
c
s
1
0
@
s2
y
s2
t
;
sy
st
1
A
:
; 0
(cid:0)
Applying the Coulomb potential calculated in Section 4.1.1, the potential induced by the target
nucleus in this approximation is
a0 .x; (cid:28) /
(cid:21)
Ze
4(cid:25)R
D
(cid:14) .(cid:28)
(cid:28)1/
(cid:0)
ai
0
D
a5 .x; (cid:28) /
c5
c
D
a0 .x; (cid:28) /
so that the nonzero field strengths
ei
D
f 0i
D
@0ai
(cid:0)
@i a0
f 5i
(cid:15)i
D
D
@5ai
(cid:0)
@i a5
(cid:15)0
D
@5a0
(cid:0)
@0a5
are
e
D (cid:0)r
a0
(cid:15)
D (cid:0)r
a5
D
c5
c
e
where we used
@
@t
(cid:14).(cid:28)
(cid:28)1/
(cid:0)
D
(cid:18) dt
d (cid:28)
(cid:15)0
D
(cid:17)55
c5
(cid:18)1
C
c2
5
c2
1
tin
P
(cid:19) @(cid:28) a0;
@
@(cid:28)
tin
(cid:14).(cid:28)
(cid:0)
(cid:28)1/ :
1
(cid:19)(cid:0)
t
D
The nucleus and the incoming particle have opposite charge, so the Lorentz force
M
x0
R
D (cid:0)
M
x
R
D (cid:0)
e
c
e
c
(cid:0)f 0i
xi
P
C
f 05
x5(cid:1)
P
D (cid:0)
e
c
(cid:0)e
x
(cid:1) P
(cid:0)
(cid:17)55c5(cid:15)0(cid:1)
(cid:16)f k0
x0
P
C
f k5
x5(cid:17)
P
D (cid:0)
e
c
(cid:0)ec
t
P
(cid:0)
(cid:17)55c5(cid:15)(cid:1)
on the incoming particle becomes
(cid:21)Ze2
Mc2
(cid:21)Ze2
M
(cid:20)
x
P
(cid:18)
t
P
t
R
D
x
R
D
(cid:19) @(cid:28)
(cid:21) (cid:14) .(cid:28)
(cid:28)1/
(cid:0)
4(cid:25)R
c2
5
c2
1
tin
P
(cid:14) .(cid:28)
(cid:18)1
C
(cid:1) r (cid:0)
(cid:19)
c2
5
c2
r
(cid:17)55
(cid:0)
(cid:28)1/
:
(cid:0)
4(cid:25)R
The delta function enables immediate integration of the force equations as
4.6. CLASSICAL PAIR PRODUCTION 77
tf
P
tin
(cid:0) P
D
(cid:21)Ze2
Mc2
(cid:28)1
(cid:21)=2
C
Z
d (cid:28) (cid:20)
x
P
(cid:1) r (cid:0)
(cid:18)1
C
(cid:28)1
(cid:21)=2
(cid:21)Ze2
Mc2 P
(cid:21)
Mc2
(cid:0)
x .(cid:28)1/
Ze2
4(cid:25)R2 P
(cid:1) r
D
D (cid:0)
1
4(cid:25)R
x .(cid:28)1/
R
(cid:1) O
(cid:19) @(cid:28)
c2
5
c2
1
tin
P
(cid:21) (cid:14) .(cid:28)
(cid:28)1/
(cid:0)
4(cid:25)R
xf
P
xin
(cid:0) P
D
(cid:21)Ze2
M
(cid:28)1
(cid:21)=2
C
Z
d (cid:28) (cid:18)
t
P
(cid:0)
(cid:17)55
(cid:19)
c2
5
c2
r
(cid:14) .(cid:28)
(cid:28)1/
(cid:0)
4(cid:25)R
(cid:21)
M
D (cid:0)
(cid:21)=2
(cid:28)1
(cid:0)
Ze2
4(cid:25)R2
(cid:18)
t .(cid:28)1/
P
(cid:0)
(cid:17)55
(cid:19)
c2
5
c2
R;
O
where the velocities are evaluated at the interaction point as
t;
(cid:0)P
x(cid:1) .(cid:28)1/
P
D
1
2
h(cid:0)P
t;
x(cid:1)f C
P
t;
(cid:0)P
x(cid:1)in
P
i :
We introduce the dimensionless parameter for Coulomb scattering
(cid:21)
Mc
Ze2
4(cid:25)R2 D
(cid:21)c
R (cid:2)
Ze2
4(cid:25)R
ge
D
1
Mc2 D
correlation length
impact parameter (cid:2)
interaction energy
mass energy
(4.23)
(4.24)
which appears in (4.23) and (4.24) as the factor controlling the strength of the interaction.
Writing
1
2
we can expand the Lorentz force as components in the form
Rx
ge O
1
2
(cid:11)x
(cid:11)y
D
D
Ry
ge O
2
3
2
1
(cid:11)x
(cid:11)y
(cid:11)x (cid:11)y
0
1
1
0
c
tf
P
xf
P
yf
P
and solve for the final velocity,
4
4
5
3
2
5 D
4
1
(cid:11)x
(cid:11)y
(cid:0)
(cid:0)
(cid:11)x
(cid:0)
1
0
3
2
5
4
0
0
0
tin
c
P
v
tin
P
0
3
5 C
2(cid:17)55
c2
5
c2
2
4
0
(cid:11)x
(cid:11)y
3
5
c
2
3
6
4
tf
P
xf
P
yf
P
where we neglect c2
7
5 D
1
5 =c2
tin.
(cid:28) P
1
1
4 g2
e
(cid:0)
8
2
(cid:136)(cid:136)<
6
4
(cid:136)(cid:136):
c
v
tin
P
tin
P
0
3
tin
ge P
7
5 (cid:0)
2
6
4
v
c
c
Rx
O
Rx
O
Ry
O
3
7
5 C
1
4
g2
tin
e P
2
6
6
4
c
v (cid:16)
R2
O
2v
R2
x (cid:0) O
y
Ry
Rx O
O
(cid:17)
3
7
7
5
9
>>=
>>;
;
(4.25)
78
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
Before considering pair annihilation, we examine the low velocity and low interaction
energy limit of this result. Taking
j D
the initial velocity reduces to
x
jP
v
c
(cid:28)
tin
P
!
1
(cid:17)v
0
!
ge
1
(cid:28)
the final velocity becomes
Xin .(cid:28)/
P
!
.c; v; 0; 0/ ;
tf
P
tin
(cid:25) P
xf
P
x
(cid:25) P
(cid:0)
gec
R
O
and the scattering angle can be found as
0
s
D
@
s2
y
s2
t
;
sy
st
1
A
; 0
1
(cid:0)
R
O
R
st
D
cos (cid:18)
R
gec
O
(cid:0)
x
xf (cid:12)
(cid:12)
(cid:12) jP
(cid:12)P
j
If we also wish to impose the nonrelativistic condition for conservation of energy, we obtain a
new constraint in the form
gec
(cid:0)
xf (cid:12)
(cid:12)
(cid:12)
(cid:12)P
xf
P
xf (cid:12)
(cid:12)
(cid:12)P
x
(cid:1) P
x
(cid:12) jP
Rx
O
x
(cid:1) P
D
D
D
v
j
:
x2
P
x2
P
v2
x2
f D
D P
h
x
P
(cid:0)
D
gec
2
Ri
O
)
2v
Rx
O
D
gec
in which case
1
xf (cid:12)
(cid:12)
(cid:12)P
(cid:12)
Now, using the definition of ge we find
cos (cid:18)
D
hv
gec
Rxi
O
1
(cid:0)
D
R2
x:
2
O
(cid:0)
cot
(cid:18)
2 D
r 1
1
cos (cid:18)
cos (cid:18) D
Ry
O
Rx D
O
C
(cid:0)
sy
st
2v
gec D
2st
(cid:21)v (cid:2)
4(cid:25)M v2sy
Ze2
which recovers the Rutherford scattering formula if
2st
(cid:21)v D
1:
(4.26)
R .(cid:28)1/ which we assumed to be comparable to (cid:21)c. Since we
But for low energy we have st
c in this low velocity case, (4.26) cannot be maintained. This result is unsur-
cannot have v
prising because the short-range potential cannot provide an adequate model of nonrelativistic
Rutherford scattering.
D
(cid:24)
Removing these restrictions and returning to the relativistic case, the condition for pair
tf < 0 for some value of
P
annihilation at (cid:28)1 is that particle-1 scatters to negative energy, that is
ge which we call g1. From (4.25),
1
(cid:0)
tf
P
tin
D P
Rx
g1.v=c/
O
1
4 g2
1
1
(cid:0)
1
4 g2
1
C
4.6. CLASSICAL PAIR PRODUCTION 79
and we see that for small values of g1,
Since v < c and Rx < 1, the numerator has discriminant
tf
P
tin
(cid:0)! P
(cid:21)
1:
.vRx=c/2
1 < 0
(cid:0)
and so is positive definite. The denominator becomes negative when
1
4
1
(cid:0)
g2
1 < 0
)
g1
D
correlation length
impact parameter (cid:2)
interaction energy
mass energy
> 2
and since we take the correlation length (cid:21)c approximately equal to the impact parameter R, the
requirement for pair annihilation is
Ze2
4(cid:25)R
> 2 Mc2
meaning that the interaction energy is greater than the mass energy of the annihilated particles.
As g1 approaches 2 from below
tf
P
decreases from large negative values, taking the limiting value
tf becomes very large. After g1 passes this critical value,
P
tin
(cid:0)P
D (cid:0)
so that the outgoing trajectory is timelike for all values of g1 > 2.
(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)!g1
!1
H)
Ef
2(cid:1)
tf
P
C
(cid:0)
.Ein
2Mc2/
C
Having found the condition for pair annihilation at time (cid:28)1 we now consider the scattering
at time (cid:28)2, which we also treat as an incoming particle approaching a nucleus of opposite charge.
Therefore we may apply the general expression (4.25). Particle-2 approaches a second nucleus
along some trajectory x(cid:22)
f .(cid:28)/ with posi-
tive energy. The scattering and acceleration of particle-2 produces a radiation field which can be
evaluated at some point of observation y(cid:22) using the Liénard-Wiechert potential for an arbitrary
trajectory.
in.(cid:28)/ and emerges from the interaction along trajectory x(cid:22)
The support of ’.(cid:28)
(cid:28)2/ is narrowly centered on (cid:28)2, and so the line of observation z(cid:22)
(cid:0)
must be a lightlike vector, which we write as
z(cid:22)
x(cid:22) .(cid:28)2/
y(cid:22)
(cid:26)
D
We express the initial and final 4-velocities of the scattered particle as
D
D
D
(cid:0)
(cid:26)(cid:22)
O
(cid:26)
O
.1;
(cid:26)/ ;
O
(cid:26)2
O
1:
and define
(cid:12)in
xin=c
D P
(cid:12)f
xf =c
D P
(cid:129)(cid:12)
(cid:12) .(cid:28) /
P(cid:12) .(cid:28) /
(cid:12) .(cid:28)2/
D
D
D
D N(cid:12)
(cid:12)f
(cid:0)
(cid:12)in
C
(cid:129)(cid:12) (cid:14) .(cid:28)
(cid:12)in
(cid:129)(cid:12) (cid:18) .(cid:28)
(cid:28)2/
(cid:28)2/
(cid:0)
(cid:0)
(cid:2)(cid:12)f
1
2
D
(cid:12)in(cid:3) :
C
80
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
From (4.11) and (4.12) express the radiation fields produced by an arbitrary trajectory as
(cid:28)2/F 5(cid:22) (cid:16)z; (cid:12); P(cid:12)(cid:17) ;
(cid:28)2/F (cid:22)(cid:23) (cid:16)z; (cid:12); P(cid:12)(cid:17)
e’.(cid:28)
e’.(cid:28)
f (cid:22)(cid:23)
rad D (cid:0)
f 5(cid:22)
rad D
(cid:0)
(cid:0)
where
F (cid:22)(cid:23)
2
(cid:16)z
^ P(cid:12)(cid:17) (cid:26)
D
4
.z
(cid:0)
^
4(cid:25)c(cid:26)3
(cid:22)(cid:23)
(cid:12)/ (cid:16)
P(cid:12)
(cid:1)
z(cid:17)
3
5
F 5(cid:22)
2
4
z(cid:17) z
(cid:16)
P(cid:12)
(cid:1)
4(cid:25)c(cid:26)3
3
5
c5
c
D
(cid:22)
(cid:1)
z
and (cid:26)
(cid:12) is the scalar distance from the scattered particle to the point of observation.
D (cid:0)
As pictured in Figure 4.1, the radiation emitted by the scattering of particle-2 is absorbed
by the negative energy particle-1 arriving at y(cid:22). Using the Lorentz force equations we calculate
the change in velocity of particle-1 caused by the incoming radiation. Since each term in the
y(cid:22).(cid:28)2/ of
(cid:28)2/ and ’.0/
field strengths contains P(cid:12).(cid:28)/
P
particle-1 is
1=2, the change in velocity
(cid:129)(cid:12) (cid:14) .(cid:28)
D
D
(cid:0)
(cid:129)
y(cid:22)
P
D
D
e
Mc
e2
Mc
Z 1
(cid:0)1
Z 1
d (cid:28) hf (cid:22)(cid:23)
rad
y(cid:23)
P
C
d (cid:28) ’.(cid:28)
(cid:0)
(cid:28)2/ h
f (cid:22)5
rad
y5i
P
F (cid:22)(cid:23) (cid:16)z; (cid:12); P(cid:12)(cid:17)
(cid:0)
(cid:0)1
e2
2Mc
(cid:20)F (cid:22)(cid:23) (cid:0)z; N(cid:12); (cid:129)(cid:12)(cid:1)
y(cid:23)
P
(cid:17)55c5F 5(cid:22) (cid:16)z; (cid:12); P(cid:12)(cid:17)i
y(cid:23)
P
(cid:17)55c5F 5(cid:22) (cid:0)z; N(cid:12); (cid:129)(cid:12)(cid:1)
(cid:0)
(cid:21)
(4.27)
D (cid:0)
C
expressed in terms of the velocity change (cid:129)(cid:12) and average velocity N(cid:12). These are found from (4.25)
to be
(cid:129)(cid:12)
D (cid:0)
1
ge
tin
1
e P
4 g2
(cid:0)
2
6
6
4
(cid:12)
Rx
O
Rx
O
Ry
O
3
7
7
5
1
2 g2
e
tin
1
e P
4 g2
C
1
(cid:0)
2
6
6
4
(cid:12)
1
R2
(cid:12)
x
O
Ry
Rx O
O
3
7
7
5
N(cid:12)
D
1
1
tin
1
e P
4 g2
(cid:0)
2
4
1
(cid:12)
0
3
5 (cid:0)
1
1
2 ge
tin
1
e P
4 g2
(cid:0)
2
6
6
4
(cid:12)
Rx
O
Rx
O
Ry
O
3
7
7
5
1
4 g2
e
1
4 g2
e
C
1
(cid:0)
(cid:12)
tin
P
2
6
6
4
0
R2
y
(cid:0) O
Ry
Rx O
O
3
7
7
5
;
where now O
of scattering. Since particle-2 scatters at (cid:28)2 to an E > 0 outgoing trajectory, we may take v
and so we set ge
R is the unit vector from the second nucleus to incoming particle-2 at the moment
c
0 for this interaction.
g2 < 1 and g2
(cid:28)
From (4.27) the Lorentz force acting on particle-1 at (cid:28)2 can be written
D
2 (cid:25)
y(cid:22)
f C
P
e2
2Mc
" .z
^
(cid:129)(cid:12)/ (cid:26)
^ N(cid:12)(cid:1) .(cid:129)(cid:12)
(cid:0)z
(cid:0)
4(cid:25)c(cid:26)3
z/
(cid:1)
y(cid:22)
in (cid:0)
D P
e2
2Mc
" .z
^
(cid:129)(cid:12)/ (cid:26)
(cid:0)z
(cid:0)
4(cid:25)c(cid:26)3
(cid:22)(cid:23)
#
z/
(cid:1)
yin
(cid:23)
P
(cid:22)(cid:23)
#
y(cid:23)f
P
^ N(cid:12)(cid:1) .(cid:129)(cid:12)
4.6. CLASSICAL PAIR PRODUCTION 81
neglecting the term .c2
5 =c2/F 5(cid:22). Making the simplifying choice O
R
1
2
Rx
g2 O
v (cid:18)
(cid:20)1
(cid:129)(cid:12)
tin
P
(cid:19)(cid:21)
(cid:26)x
O
C
(cid:0)
(cid:26)
z
(cid:1)
D
D (cid:0)
z
N(cid:12)
(cid:1)
(cid:26)
(cid:1) O
D
0, we find
tinv(cid:26)
g2 P
Rx;
O
where again we take g2
2 (cid:25)
0. Defining a second dimensionless factor for radiation
gR
D
1
2
e2
4(cid:25)(cid:26)
1
Mc2 D
1
2
interaction energy
mass energy
using
(cid:2).z
^
and now taking (cid:12)
(cid:25)
(cid:129)(cid:12)/ (cid:26)
(cid:1)
(cid:0)
.(cid:129)(cid:12)
z/ (cid:0)z
^ N(cid:12)(cid:1)(cid:3)
y/ (cid:129)(cid:12)(cid:26)
.z
(cid:0)
(cid:1) P
z/ (cid:2)(cid:0) N(cid:12)
y(cid:1) z
(cid:1) P
0, the Lorentz force splits into the 0-component
y/ (cid:26)
(cid:1) P
.(cid:129)(cid:12)
z .(cid:129)(cid:12)
y
(cid:1) P
D
(cid:0)
(cid:1)
(cid:0)
.z
y/ N(cid:12)(cid:3)
(cid:1) P
y0
f (cid:0)
P
R
g2gR O
yf
(cid:1) P
y0
in C
D P
R
g2gR O
yin
(cid:1) P
and the space component
yf
P
(cid:0)
g2gR h(cid:16)
R
O
yf (cid:17)
(cid:1) P
(cid:26)
O
C
(cid:16)
y0
(cid:26)
f (cid:0) O
P
yin
P
yf (cid:17)
(cid:1) P
Ri
O
D
g2gR h(cid:16)
C
R
O
yin(cid:17)
(cid:1) P
(cid:26)
O
C
(cid:0)
y0
(cid:26)
in (cid:0) O
P
Ri :
yin(cid:1) O
(cid:1) P
We write the velocity of incoming negative energy particle-1 as
y0
in <
P
D
and write the Lorentz force in components, with g
yin
P
(cid:26)
(cid:1) O
(cid:0)
1
2
6
6
4
(cid:0)
(cid:0)
1
g
g
Rx
O
Ry
O
1
Rx
g
O
(cid:26)x O
g
Rx
O
Rx
(cid:26)y O
O
(cid:0)
(cid:0)
g
(cid:0)
(cid:0)
g
Ry
g
O
Ry
(cid:26)x O
O
Ry
(cid:26)y O
g
O
3
7
7
5
2
6
4
y0
f
P
yxf
P
yyf
P
3
7
5 D
(cid:0)
1
(cid:0)
0
D
2
6
6
4
yin
yin
R
j O
D jP
) P
g2gR, as
1
0 0
3
g
g
Rx
O
Ry
O
1 0
0 1
7
7
5
2
6
4
y0
i
P
yxi
P
yyi
P
3
g
yi
jP
j
7
5 C
3
7
5
2
6
4
1
(cid:26)x
O
(cid:26)y
O
so that the final velocity of particle-1 after absorbing the radiation is
"
y0
f
P
yf
P
#
1
"
D
1
g2
(cid:0)
y0
in
P
yin
jP
R
j O
#
2g
"
C
1
(cid:0)
g2
g2
"
C
1
g2
(cid:0)
yin
jP
j
yin
C jP
yin
y0
R
in O
P
y0
in C jP
P
y0
(cid:26)
2
2
in O
P
j
yin
jP
C
#
#
(cid:26)
j O
R
j O
g3
"
C
1
g2
(cid:0)
#
:
yin
jP
yin
jP
j
(cid:26)
j O
82
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
The 0-component is
y0
f D
P
approximated at low velocity as
1
1
C
(cid:0)
g2
y0
in C
g2 P
2
g
C
1
g2
C
g2
g
(cid:0)
yin
jP
j
y0
f (cid:25)
P
1
1
C
(cid:0)
g2
y0
in D (cid:0)
g2 P
(cid:11)
y0
in;
P
where
g2
D
(cid:11)
(cid:11)
1
1 )
(cid:11)
D (cid:0)
1
1
C
(cid:0)
g2
g2
C
(cid:0)
is written so that (cid:11) > 1 for a positive energy timelike particle. The exact final velocity of the
scattered particle is
#
"
y0
f
P
yf
P
1
"
(cid:11)
(cid:0)
2
D (cid:0)
y0
in
P
yin
jP
R
j O
#
p(cid:11)2
C
"
1
(cid:0)
1
"
(cid:11)
C
2
(cid:0)
yin
y0
in C jP
P
y0
(cid:26)
2
2
in O
P
C
j
yin
jP
"
r (cid:11)
(cid:11)
1
1
C
(cid:0)
yin
jP
yin
jP
j
(cid:26)
j O
#
#
y0
R
in O
P
yin
jP
j
yin
C jP
#
(cid:11)
(cid:0)
R
j O
(cid:26)
j O
1
C
2
(cid:11)
(cid:21)
1
(cid:0)
:
yin
jP
j
with 0-component
y0
f D (cid:0)
P
(cid:11)
y0
in (cid:0)
P
1
(cid:20)1
(cid:11)
C
2
C
3
(cid:0)
p(cid:11)2
A pair creation event is observed at (cid:28)2 for (cid:11) > 1 which requires that
g2gR > 1
g
D
(cid:0)!
e2
4(cid:25)(cid:26)
>
2Mc2
g2
;
where g2 < 1 and so the energy absorbed from the bremsstrahlung emitted from the scattering
at (cid:28)2 must be at least the total mass of the particle creation event observed in the laboratory. This
provides a classical equivalent of the Bethe–Heitler mechanism in Stueckelberg–Horwitz–Piron
electrodynamics.
4.7
PARTICLE MASS STABILIZATION
As we have seen, under the right circumstances a particle and an interacting pre-Maxwell field
may exchange mass. In practical examples, such as pair creation and annihilation, the mass shift
will be symmetric under evolution, so that the initial and final masses will be equal.
As another model of mass shift, consider an event propagating uniformly on-shell as
x .(cid:28)/
u(cid:28)
D
D
(cid:0)u0; u(cid:1)
u2
c2
D (cid:0)
until it passes through a dense region of charged particles inducing
4.7. PARTICLE MASS STABILIZATION 83
where X .(cid:28)/ is a small stochastic perturbation. If the typical distance scale between force centers
is d then the perturbation will be roughly periodic with a characteristic period
x .(cid:28) /
u(cid:28)
D
C
X .(cid:28) / ;
j
a fundamental frequency
d
u
j
a very short distance
a moderate velocity D
D
a very short time,
!0
D
2(cid:25) j
u
j
d D
very high frequency,
and an amplitude on the order of
for some macroscopic factor (cid:11) < 1. The perturbation can be represented in a Fourier series
X (cid:22) .(cid:28) /
j
j (cid:24)
(cid:11)d
X .(cid:28) /
Re X
n
D
an ein!0(cid:28)
(cid:11)d Re X
n
D
n ein!0(cid:28)
s(cid:22)
with four-vector coefficients
an
D
where the sn represent a normalized Fourier series (s(cid:22)
0 (cid:24)
but the perturbed velocity
n; sn(cid:1)
(cid:11)dsn
D
D
(cid:11)d (cid:0)s0
(cid:11)d (cid:0)cst
n; sn(cid:1) ;
1). The perturbed motion is of scale d ,
x(cid:22) .(cid:28) /
P
D
u(cid:22)
X (cid:22) .(cid:28) /
C P
D
u(cid:22)
(cid:11)
u
j
j
C
Re X
n
2(cid:25) n s(cid:22)
n iein!0(cid:28)
is of macroscopic scale (cid:11)
turbed mass is
u
j
. The unperturbed mass is m
j
M
x2 .(cid:28) / =c2
P
D
D (cid:0)
M and the per-
m
D (cid:0)
M
x2 .(cid:28) /
P
c2
M
c2
D (cid:0)
u
C
M
1
’
4(cid:25) (cid:11)
C
u
j
j
Re X
n
(cid:11)
Re X
n
u
j
j
n iein!0(cid:28) !
;
n st
2(cid:25) n sn iein!0(cid:28) !
2
where we neglect terms in (cid:11)2. This kind of interaction may produce a macroscopic mass shift
m
(cid:0)!
m (cid:18)1
(cid:19)
(cid:129)m
m
C
(cid:129)m
m D
4(cid:25) (cid:11)
u
j
j
Re X
n
n st
n iein!0(cid:28)
that remains significant after the interaction.
Two approaches have been suggested to explain why such mass shifts are not observed:
one involving a self-interaction of the particle and its radiation field under mass shift, and the
second a more general argument in statistical mechanics.
84
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
SELF-INTERACTION
4.7.1
We consider an arbitrarily moving event X (cid:22).(cid:28)/ at the origin of a co-moving frame so that
X .(cid:28) /
.ct .(cid:28) / ; 0/
X .(cid:28) /
P
(cid:0)c
t .(cid:28)/ ; 0(cid:1)
P
D
D
M
D
M
X 2=c2
P
t 2 can only result from a change in energy through
and a change in mass m
P
D (cid:0)
1. The Green’s function permits us
acceleration of t. We say that the event is on-shell if
to compute the field at some point x induced by the evolving event. If the motion at time (cid:28)
X.(cid:28) (cid:3)/ along the trajectory of
produces an observable field at time (cid:28) (cid:3) > (cid:28) at some point x
the event itself, then the event will experience a self-force. Because GMaxwell
0 on the event’s
timelike trajectory, only a contribution from GCorrelation can produce such a self-interaction, and,
as seen from (3.24), only if (cid:17)55
We approximate ’.(cid:28) 0 (cid:0)
s/ as in Section 4.1.2, introduce the function g.s/
1.
(cid:21)(cid:14).(cid:28) 0 (cid:0)
D C
s/
D
D
D
D
t
P
to express terms of the type
c2g .s/
(cid:16)(cid:0)X.(cid:28) (cid:3)/
2
X.s/(cid:1)
(cid:0)
c2
5 .(cid:28) (cid:3)
s/2(cid:17)
(cid:0)
D
C
D (cid:0)
c2 (cid:18)
(cid:0)t (cid:0)(cid:28) (cid:3)(cid:1)
2
t .s/(cid:1)
(cid:0)
c2
5
c2 .(cid:28) (cid:3)
(cid:0)
(cid:0)
s/2(cid:19)
and write
a(cid:11) (cid:0)X (cid:0)(cid:28) (cid:3)(cid:1) ; (cid:28) (cid:3)(cid:1)
(cid:21)ec5
2(cid:25) 2c3
D
Z ds
X (cid:11).s/
P
1
2
(cid:18) .g.s//
.g.s//3=2 (cid:0)
(cid:14) .g.s//
.g.s//1=2
!
(cid:18) ret
for the self-field experienced by the event. We designate the two terms as
For an event evolving uniformly on-shell we have
a(cid:11) (cid:0)X (cid:0)(cid:28) (cid:3)(cid:1) ; (cid:28) (cid:3)(cid:1)
a(cid:11)
(cid:18) C
a(cid:11)
(cid:14) :
D
t (cid:0)(cid:28) (cid:3)(cid:1)
(cid:28) (cid:3)
D
g.s/
(cid:18)1
c2
5
c2
(cid:0)
D
(cid:19) .(cid:28) (cid:3)
s/2
(cid:0)
and using identity (4.7) are led to
a (cid:0)X (cid:0)(cid:28) (cid:3)(cid:1) ; (cid:28) (cid:3)(cid:1)
D
0
(cid:21)ec5
2(cid:25) 2c3 .c; 0; c5/ Z ds (cid:18) (cid:0)(cid:28) (cid:3)
(cid:18) (cid:18)(cid:18)1
(cid:19) .(cid:28) (cid:3) (cid:0)
(cid:19) .(cid:28) (cid:3) (cid:0)
(cid:18)(cid:18)1
B
B
B
@
c2
5
c2
c2
5
c2
1
2
(cid:0)
(cid:0)
s(cid:1)
(cid:0)
s/2(cid:19)
3=2 (cid:0)
s/2(cid:19)
(cid:14) (cid:18)(cid:18)1
(cid:18)(cid:18)1
(cid:0)
(cid:0)
c2
5
c2
c2
5
c2
(cid:19) .(cid:28) (cid:3) (cid:0)
(cid:19) .(cid:28) (cid:3) (cid:0)
s/ 2(cid:19)
1=2
s/ 2(cid:19)
1
C
C
C
A
(cid:21)ec5 .c; 0; c5/
D
2(cid:25) 2c3 (cid:18)1
3=2
(cid:19)
c2
5
c2
(cid:0)
(cid:28) (cid:3)
Z
ds
(cid:0)1
0
@
1
2
1
.(cid:28) (cid:3) (cid:0)
s/ 3 (cid:0)
(cid:14) .(cid:28) (cid:3) (cid:0)
(cid:12)
(cid:12)
(cid:12)
s/ (cid:18) .(cid:28) (cid:3) (cid:0)
s/2(cid:12)
(cid:12)
(cid:12)
.(cid:28) (cid:3) (cid:0)
s/
1
A
:
4.7. PARTICLE MASS STABILIZATION 85
(cid:28) (cid:3)
Z
ds
(cid:0)1
1
.(cid:28) (cid:3) (cid:0)
s/ 3 D
1
2 .(cid:28) (cid:3) (cid:0)
(cid:12)
(cid:12)
(cid:12)
(cid:12)
s/ 2
(cid:28) (cid:3)
(cid:0)1
D
lim
(cid:28) (cid:3)
s
!
1
2 .(cid:28) (cid:3) (cid:0)
s/ 2
Since
and
(cid:28) (cid:3)
Z
ds
(cid:0)1
(cid:14) .(cid:28) (cid:3) (cid:0)
s/ (cid:18) .(cid:28) (cid:3) (cid:0)
s/2
.(cid:28) (cid:3) (cid:0)
s/
D
lim
(cid:28) (cid:3)
s
!
(cid:18) .(cid:28) (cid:3) (cid:0)
.(cid:28) (cid:3) (cid:0)
s/
s/2 D
lim
(cid:28) (cid:3)
s
!
1
2
s/2
.(cid:28) (cid:3) (cid:0)
we find that for uniform on-shell motion
a (cid:0)X (cid:0)(cid:28) (cid:3)(cid:1) ; (cid:28) (cid:3)(cid:1)
D
(cid:21)ec5
2(cid:25) 2c3 .c; 0; c5/ lim
(cid:28) (cid:3)
s
!
1
2 .(cid:28) (cid:3) (cid:0)
1
2
s/ 2 (cid:0)
s/2
.(cid:28) (cid:3) (cid:0)
!
0
D
the self-force vanishes.
X i
In general, because P
D
0 and a(cid:11) .X .(cid:28) (cid:3)/ ; (cid:28) (cid:3)/ does not depend on X i , we have
ai
0
D
@i a0
@i a5
0
D
D
)
f (cid:22)(cid:23)
f 5i
0
D
D
and so the field reduces to
f 50
@5a0
@0a5
(cid:0)
D
1
c5
D
a0
@(cid:28) (cid:3)
1
c
C
@t a5;
where the partial derivative @(cid:28) (cid:3) only acts on the explicit variable (not on t .(cid:28) (cid:3)/ or (cid:18) ret). Similarly,
the velocity P
X (cid:11).s/ is constant with respect to @(cid:28) (cid:3).
Inserting the potential we find
@5a0
(cid:18) (cid:0)
@0a5
(cid:18) D
3(cid:21)ec5
4(cid:25) 2c3
c5
c
Z ds
(cid:18) (cid:18).t .(cid:28) (cid:3)/
t .s//2
(cid:0)
c2
5
c2 .(cid:28) (cid:3) (cid:0)
(cid:0)
s/2(cid:19)
(cid:20).t .(cid:28) (cid:3)/
(cid:0)
(cid:14) (cid:18).t .(cid:28) (cid:3)/
t .s//2
(cid:0)
t .s//2
s/2(cid:21)
c2
5
c2 .(cid:28) (cid:3) (cid:0)
c2
5
c2 .(cid:28) (cid:3) (cid:0)
(cid:0)
(cid:0)
s/2(cid:19)
(cid:20).t .(cid:28) (cid:3)/
t .s//2
(cid:0)
c2
5
c2 .(cid:28) (cid:3) (cid:0)
(cid:0)
3=2
s/2(cid:21)
(cid:18) ret (cid:129) (cid:0)(cid:28) (cid:3); s(cid:1)
5=2
(cid:18) ret (cid:129) (cid:0)(cid:28) (cid:3); s(cid:1) ;
(cid:21)ec5
2(cid:25) 2c3
c5
c
(cid:0)
Z ds
where
(cid:129) (cid:0)(cid:28) (cid:3); s(cid:1)
t.s/.(cid:28) (cid:3)
D P
s/
(cid:0)
(cid:0)
(cid:0)t (cid:0)(cid:28) (cid:3)(cid:1)
(cid:0)
t .s/(cid:1)
86
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
characterizes the energy acceleration in the rest frame, which will be associated with mass shift.
Similarly, the derivatives of a(cid:14) produce
@5a0
(cid:14) (cid:0)
@0a5
(cid:14) D (cid:0)
(cid:21)ec5
2(cid:25) 2c3
c5
c
Z ds
(cid:21)ec5
2(cid:25) 2c3
c5
c
(cid:0)
Z ds
(cid:14) (cid:18).t .(cid:28) (cid:3)/
t .s//2
(cid:0)
c2
5
c2 .(cid:28) (cid:3) (cid:0)
(cid:0)
s/2(cid:19)
(cid:18).t .(cid:28) (cid:3)/
(cid:0)
(cid:18).t .(cid:28) (cid:3)/
2(cid:14)0
t .s//2
(cid:0)
t .s//2
(cid:0)
(cid:18).t .(cid:28) (cid:3)/
t .s//2
(cid:0)
(cid:0)
3=2
s/2(cid:19)
c2
5
c2 .(cid:28) (cid:3) (cid:0)
c2
5
c2 .(cid:28) (cid:3) (cid:0)
s/2(cid:19)
c2
5
c2 .(cid:28) (cid:3) (cid:0)
(cid:0)
s/2(cid:19)
1=2
(cid:18) ret (cid:129) (cid:0)(cid:28) (cid:3); s(cid:1)
(cid:18) ret (cid:129) (cid:0)(cid:28) (cid:3); s(cid:1)
and combining terms we find
where
f 50
(cid:18) D
3(cid:21)e
4(cid:25) 2
c2
5
c4
Z ds
f 50
(cid:14) D (cid:0)
(cid:21)e
(cid:25) 2
c2
5
c4
Z ds
f 50
(cid:14) 0 D (cid:0)
(cid:21)e
(cid:25) 2
c2
5
c4
Z ds
f 50
f 50
(cid:18) C
f 50
(cid:14) C
f 50
(cid:14) 0
;
D
(cid:18) (cid:18).t .(cid:28) (cid:3)/
t .s//2
(cid:0)
c2
5
c2 .(cid:28) (cid:3) (cid:0)
(cid:0)
s/2(cid:19)
(cid:20).t .(cid:28) (cid:3)/
(cid:0)
(cid:14) (cid:18).t .(cid:28) (cid:3)/
t .s//2
(cid:0)
t .s//2
(cid:0)
s/2(cid:21)
c2
5
c2 .(cid:28) (cid:3) (cid:0)
c2
5
c2 .(cid:28) (cid:3) (cid:0)
(cid:0)
s/2(cid:19)
3=2
(cid:20).t .(cid:28) (cid:3)/
(cid:0)
(cid:18).t .(cid:28) (cid:3)/
(cid:14)0
t .s//2
(cid:0)
t .s//2
(cid:0)
s/2(cid:21)
c2
5
c2 .(cid:28) (cid:3) (cid:0)
c2
5
c2 .(cid:28) (cid:3) (cid:0)
(cid:0)
s/2(cid:19)
(cid:18).t .(cid:28) (cid:3)/
t .s//2
(cid:0)
c2
5
c2 .(cid:28) (cid:3) (cid:0)
(cid:0)
1=2
s/2(cid:19)
(cid:18) ret (cid:129) (cid:0)(cid:28) (cid:3); s(cid:1)
(4.28)
5=2
(cid:18) ret (cid:129) (cid:0)(cid:28) (cid:3); s(cid:1)
(4.29)
(cid:18) ret (cid:129) (cid:0)(cid:28) (cid:3); s(cid:1) :
(4.30)
Notice that if the particle remains at constant velocity (in any uniform frame), then
x0 .(cid:28)/
u0(cid:28)
D
(cid:0)!
(cid:129) (cid:0)(cid:28) (cid:3); s(cid:1)
u0
c
D
.(cid:28) (cid:3)
s/
(cid:0)
(cid:0)
(cid:18) u0
c
(cid:28) (cid:3)
(cid:0)
u0
c
s(cid:19)
0
D
and so the self-force f 50 vanishes. For any smooth t .(cid:28) /, we may approximate
t (cid:0)(cid:28) (cid:3)(cid:1)
t .s/
(cid:0)
D
t .s/
C P
t.s/.(cid:28) (cid:3)
t.s/.(cid:28) (cid:3)
s/
C
(cid:0)
D P
s/
1
2 R
C
t.s/.(cid:28) (cid:3)
(cid:0)
1
2 R
t.s/.(cid:28) (cid:3)
s/2
(cid:0)
C
s/2
(cid:0)
C
o (cid:0).(cid:28) (cid:3)
s/3(cid:1)
(cid:0)
o (cid:0).(cid:28) (cid:3)
s/3(cid:1)
(cid:0)
(cid:0)
t .s/
4.7. PARTICLE MASS STABILIZATION 87
so the function
(cid:129) (cid:0)(cid:28) (cid:3); s(cid:1)
t.s/.(cid:28) (cid:3)
D P
s/
(cid:0)
(cid:0)
(cid:0)t (cid:0)(cid:28) (cid:3)(cid:1)
(cid:0)
t .s/(cid:1)
D (cid:0)
1
2 R
t.s/.(cid:28) (cid:3)
s/2
(cid:0)
C
o (cid:0).(cid:28) (cid:3)
s/3(cid:1)
(cid:0)
is nonzero only when the time coordinate accelerates in the rest frame, equivalent to a shift in
the particle mass.
As a first-order example, we consider a small, sudden jump in mass at (cid:28)
0 characterized
D
by
t .(cid:28)/
D
(cid:28)
.1
8
<
:
C
;
;
(cid:28) < 0
(cid:28) > 0
(cid:12)/ (cid:28)
)
t .(cid:28) /
P
D
1
1
8
<
:
C
;
(cid:28) < 0
(cid:12) ;
(cid:28) > 0
and calculate the self-interaction. Since (cid:18) ret enforces t.(cid:28) (cid:3)/ > t.s/, it follows that
(cid:28) (cid:3) < 0
)
s < 0
t.(cid:28) (cid:3)/
) P
t.s/
D P
D
1
)
(cid:129).(cid:28) (cid:3); s/
0:
D
Similarly,
(cid:28) (cid:3) > 0 and s > 0
t.(cid:28) (cid:3)/
) P
t.s/
D P
(cid:12)
1
C
D
)
(cid:129).(cid:28) (cid:3); s/
0:
D
But when (cid:28) (cid:3) > 0 and s < 0,
(cid:129).(cid:28) (cid:3); s/
t.s/.(cid:28) (cid:3)
D P
s/
(cid:0)
(cid:0)
(cid:0)t (cid:0)(cid:28) (cid:3)(cid:1)
t .s/(cid:1)
(cid:0)(cid:28) (cid:3)
s(cid:1)
(cid:0)
(cid:0)
(cid:2).1
C
D
(cid:0)
(cid:12)/ (cid:0)(cid:28) (cid:3)(cid:1)
s(cid:3)
(cid:0)
D (cid:0)
(cid:12)(cid:28) (cid:3)
and f 50 can be found from the contributions (4.28)–(4.30). Writing
g .s/
D
(cid:0)t (cid:0)(cid:28) (cid:3)(cid:1)
(cid:0)
2
t .s/(cid:1)
c2
5
c2 .(cid:28) (cid:3)
(cid:0)
(cid:0)
s/2
(cid:0).1
C
D
(cid:12)/ (cid:28) (cid:3)
(cid:0)
2
s(cid:1)
c2
5
c2 .(cid:28) (cid:3)
(cid:0)
(cid:0)
s/2
and solving for g.s(cid:3)/
0, we find
D
s(cid:3)
D
0
B
@
1
C
1
1
C
A
(cid:12)
c5
c
(cid:0)
(cid:28) (cid:3) > (cid:28) (cid:3)
so that g.s/ > 0 in the region of interest s < 0 < (cid:28) (cid:3) and there will be no contribution from the
terms (4.29) or (4.30). Thus,
f 50
f 50
(cid:18) D
.
(cid:0)
D
(cid:12)(cid:28) (cid:3)/
3(cid:21)e
4(cid:25) 2
c2
5
c4
0
Z
ds
(cid:0)1
1
(cid:0)
c2
5
c2 .(cid:28) (cid:3) (cid:0)
5=2
s/2(cid:21)
(cid:20).t .(cid:28) (cid:3)/
t .s//2
(cid:0)
1
.
D
(cid:12)(cid:28) (cid:3)/
(cid:0)
3(cid:21)e
4(cid:25) 2
c2
5
c4
0
Z
ds
(cid:0)1
(cid:20)..1
(cid:12)/ (cid:28) (cid:3) (cid:0)
C
s/2
c2
5
c2 .(cid:28) (cid:3) (cid:0)
(cid:0)
:
5=2
s/2(cid:21)
88
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
Shifting the integration variable as x
(cid:28) (cid:3) (cid:0)
D
s the integral becomes
0
Z
ds
(cid:0)1
1
(cid:20)..1
(cid:12)/ (cid:28) (cid:3) (cid:0)
C
s/2
c2
5
c2 .(cid:28) (cid:3) (cid:0)
(cid:0)
5=2 D (cid:0)
s/2(cid:21)
(cid:28) (cid:3)
Z
1
dx
Bx
C
;
A/5=2
.C x2
C
where
C
1
(cid:0)
D
c2
5
c2
B
D
2(cid:12)(cid:28) (cid:3)
2
(cid:0)(cid:12)(cid:28) (cid:3)(cid:1)
A
D
which can be evaluated using the well-known form [3]
Z
.C x2
dx
2.2C x
Bx
A/5=2 D
3qpC x2
C
C
C
B 2. We finally find the field strength in the form
C
C
A
C x2
B/
C
Bx
(cid:18)
1
Bx
where q
4AC
D
(cid:0)
f 50
(cid:21)e
4(cid:25) 2
D
1
5 .(cid:12)(cid:28) (cid:3)/3 Q (cid:18)(cid:12);
c2
c2
5
c2
(cid:19) ;
where Q (cid:18)(cid:12);
c2
5
c2
(cid:19) is the positive, dimensionless factor
8C
q
(cid:19) ;
A C
C
1
1
Q (cid:18)(cid:12);
(cid:19)
c2
5
c2
D
2
6
6
6
6
6
6
6
6
6
6
6
6
4
3=2
(cid:19)
2 (cid:18)1
c2
5
c2
(cid:0)
0
B
B
B
B
B
1
B
B
B
B
B
B
B
@
C
C
C
C
C
C
C
C
C
C
C
C
A
1=2
(cid:19)
(cid:18)1
c2
5
c2
(cid:0)
0
B
B
B
@
1
C
(cid:12)
(cid:18)1
(cid:19)
c2
5
c2
(cid:0)
C
C
C
A
(cid:0)
2
1
6
6
4
2(cid:12)
(cid:12)2
C
c2
5
c2
1
(cid:0)
C
c2
5
c2
1
(cid:0)
(cid:12)2 c2
5
c2
0
B
B
@
c2
5
c2
1
C
(cid:12)
c2
5
c2
1
(cid:0)
1=2
3
7
7
5
1
C
C
A
C
(cid:18)1
(cid:0)
c2
5
c2
(cid:19)
1=2 2
6
6
4
1
C
2(cid:12)
(cid:12)2
c2
5
c2
1
(cid:0)
C
c2
5
c2
1
(cid:0)
3
7
7
7
7
7
7
7
7
7
7
7
5
3=2
3
7
7
5
4.7. PARTICLE MASS STABILIZATION 89
which is seen to be finite for c5 < c, with
Q (cid:18)(cid:12);
(cid:19)
c2
5
c2
(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)!c5
!
0
2
1
(cid:0)
(cid:140)1
C
1
2(cid:12)
C
(cid:12)
(cid:12)2(cid:141)1=2
!
0:
D
C
Since f (cid:22)(cid:23)
D
0, the Lorentz force induced by this field strength is then
M
x(cid:22)
R
D
ef (cid:22)(cid:11)
x(cid:11)
P
D
ef (cid:22)5
x5
P
ef 5(cid:22)
x5
P
D (cid:0)
D (cid:0)
(cid:17)55ef 5(cid:22)
x5
P
D (cid:0)
ef 5(cid:22)c5
and since f 5i
0
D
M
M
xi
R
x0
R
0
D
D (cid:0)
c5ef 50
D
8
(cid:136)<
(cid:136):
0
;
(cid:28) (cid:3) < 0
(cid:21)e2
4(cid:25) 2
(cid:0)
1
c5 .(cid:12)(cid:28) (cid:3)/3 Q (cid:18)(cid:12);
c2
5
c2
(cid:19) ;
(cid:28) (cid:3) > 0
which causes the 0-coordinate to decelerate. When the event returns to on-shell propagation
the function (cid:129).(cid:28) (cid:3); s/ and field strength f 50 again vanish. The mass decay can also be seen in
the Lorentz force for the mass
d
d (cid:28)
(cid:18)
(cid:0)
1
2
M
x2(cid:19)
P
ef 5(cid:22)
x(cid:22)
P
D
ef 50
x
P
D
ecf 50
t
P
D (cid:0)
D (cid:0)
(cid:21)e2
4(cid:25) 2
c
5 .(cid:12)(cid:28) (cid:3)/3 Q (cid:18)(cid:12);
c2
c2
5
c2
(cid:19)
t:
P
We notice that if (cid:12) < 0 then f 50 changes sign so that the self-interaction results in damping or
anti-damping to push the trajectory toward on-shell behavior. Although this model is approx-
imate, it seems to indicate that the self-interaction of the event with the field generated by its
mass shift will restore the event to on-shell propagation.
4.7.2
STATISTICAL MECHANICS
In Section 3.4 we saw that a particle, as observed through its electromagnetic current, can be
interpreted as a weighted ensemble of events ’.s/x(cid:22).(cid:28)
s/ selected from a neighborhood of
event x(cid:22).(cid:28)/ (along a single timelike trajectory) determined by ’.s/. Here we model a particle as
an ensemble x(cid:22)
i .(cid:28)/ of N mutually interacting event trajectories given at a single (cid:28). Construct-
ing the canonical and grand canonical ensembles without an a priori constraint on the total
mass of the system, the total mass of the particle is determined by a chemical potential. Under
perturbation, such as collisions for which the final asymptotic mass of an elementary event is
not constrained by the basic theory, the particle returns to its equilibrium mass value. Here we
provide here a brief summary of the full model given in [12, 13].
C
As described in Section 2.5, we first construct a canonical ensemble by extracting a small
subensemble (cid:128)s (the particle system) from its environment (cid:128)b (the bath ensemble). Summing
90
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
over all possible partitions of energy and mass parameter between the particle and bath
(cid:128).(cid:20); E/
D
D
Z d (cid:127)bd (cid:127)sd (cid:20)bd (cid:20)s(cid:14).Kb
(cid:20)b/(cid:14).Ks
(cid:20)s/(cid:14).Es
Eb
C
(cid:0)
E/(cid:14).(cid:20)s
(cid:20)b
C
(cid:0)
(cid:20)/
(cid:0)
(cid:0)
Z d (cid:20)0dE 0(cid:128)b.(cid:20)
(cid:20)0; E
(cid:0)
(cid:0)
E0/(cid:128)s.(cid:20)0; E 0/
in which both mass and energy may be exchanged. We suppose that the integrand has a max-
imum over both variables (cid:20)0; E 0, providing an equilibrium point for the system. By analyzing
the partial derivatives, it can be shown that no saddle point configuration is possible in the
neighborhood of the maximum. The conditions for equilibrium can then be written
and
1
(cid:20)0; E
E0/
(cid:0)
(cid:128)b.(cid:20)
(cid:0)
1
(cid:20)0; E
E0/
(cid:0)
(cid:128)b.(cid:20)
(cid:0)
@(cid:128)b
@E
@(cid:128)b
@(cid:20)
.(cid:20)
(cid:0)
(cid:20)0; E
(cid:0)
E0/
max
j
D
1
(cid:128)s.(cid:20)0; E 0/
@(cid:128)s
@E
.(cid:20)0; E 0/
max
j
(cid:17)
1
T
.(cid:20)
(cid:0)
(cid:20)0; E
(cid:0)
E0/
max
j
D
1
(cid:128)s.(cid:20)0; E 0/
@(cid:128)s
@(cid:20)
.(cid:20)0; E/
max
j
(cid:17)
1
T(cid:20)
;
defining temperature in the usual way, and a new effective “mass temperature” T(cid:20). Writing
Sb.(cid:20); E/
D
ln (cid:128)b.(cid:20); E/
Ss.(cid:20); E/
D
ln (cid:128)s.(cid:20); E/
it follows that at maximum
@Sb
@E D
@Ss
@E D
1
T
@Sb
@(cid:20) D
@Ss
@(cid:20) D
1
T(cid:20)
:
By additivity of entropy, the total entropy of the system is independent of (cid:20)0; E 0 in the neigh-
borhood of the maximum, and for (cid:20)0 and E0 small compared to (cid:20) and E,
(cid:128)b.(cid:20)
(cid:20)0; E
E0/
D
in this neighborhood. Then
(cid:0)
(cid:0)
eSb .(cid:20)
(cid:0)
(cid:20)0;E
(cid:0)
E 0/
(cid:138)
eSb .(cid:20);E /
(cid:20)0
(cid:0)
@Sb
@(cid:20) (cid:0)
E 0
@Sb
@E
eSb .(cid:20);E /e(cid:0)
(cid:20)
0T(cid:20) e(cid:0)
E
0T
D
(cid:128).(cid:20); E/
D
Z d (cid:20)0dE 0(cid:128)s.(cid:20)0; E 0/eSb .(cid:20);E /e(cid:0)
(cid:20)
0T(cid:20) e(cid:0)
E
0T
eSb .(cid:20);E / Z d (cid:127)se(cid:0)
Ks
T(cid:20) e(cid:0)
Es
T
D
leading to the partition function
QN .T(cid:20); T /
D
Z d (cid:127)e(cid:0)
K
T(cid:20) e(cid:0)
E
T ;
where the overall factor Sb.(cid:20); E/ cancels out in any computation of average values. The
Helmholtz free energy A is defined through
QN .T(cid:20); T /
D
A.T(cid:20) ;T /=T
e(cid:0)
Z d (cid:127)e(cid:0)
K=T(cid:20) e.A
E /=T
(cid:0)
1
D
4.7. PARTICLE MASS STABILIZATION 91
from which it follows that
A
E
D h
i C
T
@A
@T D h
E
i (cid:0)
T S
S
D (cid:0)
@A
@T
and
K
h
i D (cid:0)
T 2
(cid:20)
T
@A
@T(cid:20)
:
Under the canonical distribution, corresponding to an equilibrium of both heat and mass, with-
out exchange of particles with the bath, we therefore obtain a mean value for
, the effective
i
center-of-mass mass of the subensemble, which is determined by T(cid:20) and T .
K
h
Computing the fluctuations in energy, one finds
(cid:10).E
E
(cid:0) h
/2(cid:11)
i
D
T 2 @
E
h
@T
i
(cid:10).K
K
(cid:0) h
/2(cid:11)
i
D
T(cid:20)
2 @
K
h
@T(cid:20)
i
showing that the mean mass rises with the mass temperature. (Since K is proportional to a neg-
T(cid:20) is a positive number, to be identified with a “mass temperature.”)
ative mass in this metric,
Repeating the above for the grand canonical ensemble, in which the system (particle) en-
semble may exchange events and volume with the bath, one decomposes the full microcanonical
in terms of its canonical subsets
(cid:0)
QN .V; T; T(cid:20)/
N
X
0
Ns
D
D
Z d (cid:127)se(cid:0)
Ks =T(cid:20) e(cid:0)
Es =T QN
Ns .V
(cid:0)
(cid:0)
Vs; T; T(cid:20)/;
where
Kb
K
(cid:0)
D
Ks and Eb
D
Vs; T; T(cid:20)/
(cid:0)
D
Z d (cid:127)be(cid:0)
Kb =T(cid:20) e(cid:0)
Eb =T
Es. Making the usual identifications
QN
Ns .V
(cid:0)
E
(cid:0)
@A
@V D (cid:0)
P
@A
@N D
(cid:22)
and defining the new mass chemical potential
leads to the grand partition function
@A
@K D (cid:0)
(cid:22)(cid:20)
Q.V; T; T(cid:20)/
eVP =T
D
D
N
X
0
Ns
D
zNs QNs .T; Ks; Es/;
where
QNs .T; Ks; Es/
D
Z d (cid:127)s(cid:16)Ks e(cid:0)
Es =T
e(cid:22)=T
z
D
e(cid:0) O
(cid:22)(cid:20) =T :
(cid:16)
D
92
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
It follows that
@
@z
Modifying the Helmholtz free energy for the grand canonical ensemble,
ln Q:
ln Q
@
@(cid:16)
K
h
N
h
i D
i D
z
(cid:16)
A
N
D h
i
T ln z
K
C h
i
T ln (cid:16)
(cid:0)
T ln Q
leads to
Q
It follows that the internal energy is
e(cid:0)
A=T z<N >(cid:16)<K> :
D
U
E
(cid:17) h
A
N
(cid:0) h
(cid:22)
i
C
i D
(cid:18)(cid:22)(cid:20)
(cid:19)
T
T(cid:20)
C
K
h
i C
T ln Q
T 2 @
@T
C
ln Q
and using the thermodynamic relation
U
A
C
D
T S
one finds
S
D
@
@T
.T ln Q/
(cid:18) (cid:22)(cid:20)
C
T C
(cid:19)
1
T(cid:20)
K
h
i (cid:0)
(cid:22)
T h
N
:
i
Finally, the Maxwell relations are
S
D (cid:0)
(cid:18) @A
@T
(cid:19)(cid:12)
(cid:12)
(cid:12)
(cid:12)V;
N
K
;
i
h
i
h
P
D (cid:0)
(cid:18) @A
@V
(cid:19)(cid:12)
(cid:12)
(cid:12)
(cid:12)<N >;<K>;T
At the critical point in
(cid:22)
D
@A
N
h
i
i
@
K
h
@A
K
h
i
@
(cid:18)(cid:22)(cid:20)
(cid:19) :
T
T(cid:20)
C
D (cid:0)
@A
K
h
i
@
0
D
T
T(cid:20) D (cid:0)
(cid:22)(cid:20)
(cid:0)!
(4.31)
and so (cid:22)(cid:20) is positive since T(cid:20) is negative.
The particle in this model is a statistical ensemble which has both an equilibrium energy
and an equilibrium mass, controlled by the temperature and chemical potentials, thus assuring
asymptotic states with the correct mass. The thermodynamic properties of this system, involve
the maximization of the integrand in the microcanonical ensemble, where both the energy and
the mass are parameters of the distribution. A critical point in the free energy is made available
by the interplay of the equilibrium requirements of the canonical ensemble (where the total
mass of the system is considered variable) as for the energy, and the equilibrium requirements of
the grand canonical ensemble (where a chemical potential arises for the particle number). The
particle mass is controlled by a chemical potential, so that asymptotic variations in the mass can
be restored to a given value by relaxation to satisfy the equilibrium conditions.
4.8. SPEEDS OF LIGHT AND THE MAXWELL LIMIT 93
SPEEDS OF LIGHT AND THE MAXWELL LIMIT
4.8
As discussed in Section 3.7, concatenation—integration of the pre-Maxwell field equations over
the evolution parameter (cid:28)—extracts from the microscopic event interactions the massless modes
in Maxwell electrodynamics, expressing a certain equilibrium limit when mass exchange settles
to zero. In this picture, the microscopic dynamics approach an equilibrium state because the
boundary conditions hold pointwise in x as (cid:28)
, asymptotically eliminating interactions that
! 1
cannot be described in Maxwell theory. The Maxwell-type description recovered by concatenat-
ing the microscopic dynamics may thus be understood as a self-consistent summary constructed
a posteriori from the complete worldlines.
We have assumed that 0
c5 < c and we must check that SHP theory remains finite
0. First we notice that c5 appears explicitly three times in the pre-Maxwell equations
(cid:20)
as c5
(3.20)
!
@(cid:23) f (cid:22)(cid:23)
1
c5
(cid:0)
@(cid:28) f 5(cid:22)
e
c
D
j (cid:22)
’
@(cid:22) f 5(cid:22)
e
c
D
j 5
’ D
c5
c
e(cid:26)’
@(cid:22)f(cid:23)(cid:26)
@(cid:23)f(cid:26)(cid:22)
@(cid:26)f(cid:22)(cid:23)
0
@(cid:23)f5(cid:22)
@(cid:22)f5(cid:23)
@(cid:28) f(cid:22)(cid:23)
0
C
D
C
twice in the form 1
@(cid:28) and once multiplying the event density (cid:26)’. The derivative term poses
c5
no problem in the homogeneous pre-Maxwell equation, which is satisfied identically for fields
1
derived from potentials. Specifically, the fields f5(cid:22) contain terms of the type @5a(cid:22)
@(cid:28) a(cid:22) that
c5
0. However,
cancel the explicit (cid:28)-derivative of f(cid:22)(cid:23), evaluated before passing to the limit c5
the homogeneous equation does impose a new condition through
D
!
C
D
(cid:0)
1
c5
c5 (cid:0)@(cid:23)f5(cid:22)
@(cid:22)f5(cid:23)(cid:1)
@(cid:28) f(cid:22)(cid:23)
0
@(cid:28) f(cid:22)(cid:23)
0
(cid:0)
C
D
requiring that the field strength f (cid:22)(cid:23) become (cid:28)-independent in this limit. For the fields derived
(cid:28)R/ unless we simul-
in Section 4.2 this condition is violated by the multiplicative factor ’.(cid:28)
taneously require c5
1, using (3.12)
1=c5
(cid:24)
D
for (cid:24). This requirement effectively spreads the event current j (cid:11)
’ uniformly along the particle
worldline, recovering the (cid:28)-independent particle current
, in which case ’.x; (cid:28)/
(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)!c5
!
(cid:0)
1=2(cid:24)
! 1
)
!
!
D
(cid:21)
0
0
j (cid:22)
’ .x; (cid:28) /
j 5
’ .x; (cid:28) /
Z ds ’ .(cid:28)
Z ds ’ .(cid:28)
D
D
@(cid:22)j (cid:22)
’ .x; (cid:28) /
C
s/ j (cid:22) .x; s/
s/ j 5 .x; s/
@(cid:28) j 5
’ .x; (cid:28) /
(cid:0)
(cid:0)
1
c5
(cid:0)!
(cid:0)!
(cid:0)!
Z ds 1
(cid:1)
j (cid:22) .x; s/
J (cid:22).x/
D
Z ds j 5 .x; s/
@(cid:22)J (cid:22) .x/
0
D
associated with Maxwell theory. Generally, because the (cid:28)-dependence of the potentials and fields
is contained in ’, the condition (cid:21)
eliminates all the terms in the pre-Maxwell equations
containing @(cid:28) . Similarly, the photon mass m(cid:13)
=(cid:24)(cid:21)c2 must vanish.
! 1
(cid:24) (cid:132)
94
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
We saw that f 5(cid:22) is generally proportional to c5 for fields of the Liénard–Wiechert type.
Therefore, we can write the inhomogeneous pre-Maxwell equations in the finite form
@(cid:23) f (cid:22)(cid:23)
e
c
D
j (cid:22)
’
@(cid:22)
(cid:18) 1
c5
f 5(cid:22)(cid:19)
e
c
D
(cid:26)’;
where we see that f 5(cid:22) decouples from the field f (cid:22)(cid:23) that now satisfies Maxwell’s equations.
To find the limiting form of the electromagnetic interactions, we consider an arbitrary
event X (cid:22) .(cid:28)/, which induces the current
j (cid:11)
’ .x; (cid:28) /
D
c Z ds ’ .(cid:28)
s/
X (cid:11) .s/ (cid:14)4 (cid:140)x
P
(cid:0)
(cid:0)
X .s/(cid:141) :
From the field strengths found in Section 4.2 the Lorentz force on a test event moving in the
field induced by this current can be written
M
x(cid:22)
R
D
e
c
(cid:2)f (cid:22)
(cid:23).x; (cid:28)/
x(cid:23)
P
C
f 5(cid:22).x; (cid:28)/
x5(cid:3)
P
e2
4(cid:25)c
D
(cid:28)
e(cid:0)j
(cid:0)
(cid:28)R
=(cid:24)(cid:21)
j
F (cid:22)
(cid:23).x; (cid:28)/
1
x(cid:23)
P
C
F 5(cid:22).x; (cid:28)/
c2
5
C
.c5=c/2
;
where
F (cid:22)(cid:23).x; (cid:28)/
e
4(cid:25)R
D
(cid:26) .z(cid:22)(cid:12)(cid:23)
z(cid:23)(cid:12)(cid:22)/ (cid:12)2
(cid:0)
R2
(cid:0)
" .(cid:28)
(cid:28)R/
(cid:0)
(cid:24)(cid:21)c
z(cid:22)(cid:12)(cid:23)
z(cid:23)(cid:12)(cid:22)
(cid:0)
R
(cid:16)z(cid:22)
P(cid:12)(cid:23)
(cid:0)
z(cid:23)
P(cid:12)(cid:22)(cid:17) R
C
.z(cid:22)(cid:12)(cid:23)
z(cid:23)(cid:12)(cid:22)/ (cid:16)
z(cid:17)
P(cid:12)
(cid:1)
(cid:0)
F 5(cid:22).x; (cid:28)/
e
4(cid:25)cR
(cid:26)
(cid:0)
D
z(cid:22)(cid:12)2
C
R2
(cid:0)
(cid:12)(cid:22)R
(cid:0)
R2
" .(cid:28)
(cid:28)R/
(cid:0)
(cid:24)(cid:21)c
z(cid:22)
C
(cid:12)(cid:22)Rc2=c2
5
R
9
=
;
0, we see that c2
and c5
In the limit (cid:21)
5
action reduces to the (cid:28)-independent expression
! 1
!
z(cid:17)
z(cid:22) (cid:16)
P(cid:12)
(cid:1)
cR2
C
:
9
=
;
F 5(cid:22).x; (cid:28)/
!
0, and so the Lorentz force inter-
M
e2
4(cid:25)c
recovering the Lorentz force in the standard Maxwell form. The parameter c5=c thus provides
a continuous scaling of Maxwell’s equations and the Lorentz force to the standard forms in
Maxwell theory. The combined limit (cid:21)
0 restricts the possible dynamics in SHP
and c5
to those of Maxwell theory, as a system in (cid:28)-equilibrium [9].
! 1
x(cid:22)
R
(cid:23).x/
x(cid:23)
P
F (cid:22)
!
D
4.9
BIBLIOGRAPHY
4.9. BIBLIOGRAPHY 95
[1] Tanabashi, M., Hagiwara, K., Hikasa, K., Nakamura, K., Sumino, Y., Takahashi, F.,
Tanaka, J., Agashe, K., Aielli, G., Amsler, C., Antonelli, M., Asner, D. M., Baer, H.,
Banerjee, S., Barnett, R. M., Basaglia, T., Bauer, C. W., Beatty, J. J., Belousov, V. I.,
Beringer, J., Bethke, S., Bettini, A., Bichsel, H., Biebel, O., Black, K. M., Blucher,
E., Buchmuller, O., Burkert, V., Bychkov, M. A., Cahn, R. N., Carena, M., Ceccucci,
A., Cerri, A., Chakraborty, D., Chen, M. C., Chivukula, R. S., Cowan, G., Dahl, O.,
D’Ambrosio, G., Damour, T., de Florian, D., de Gouvêa, A., DeGrand, T., de Jong, P.,
Dissertori, G., Dobrescu, B. A., D’Onofrio, M., Doser, M., Drees, M., Dreiner, H. K.,
Dwyer, D. A., Eerola, P., Eidelman, S., Ellis, J., Erler, J., Ezhela, V. V., Fetscher, W.,
Fields, B. D., Firestone, R., Foster, B., Freitas, A., Gallagher, H., Garren, L., Gerber,
H. J., Gerbier, G., Gershon, T., Gershtein, Y., Gherghetta, T., Godizov, A. A., Good-
man, M., Grab, C., Gritsan, A. V., Grojean, C., Groom, D. E., Grünewald, M., Gurtu,
A., Gutsche, T., Haber, H. E., Hanhart, C., Hashimoto, S., Hayato, Y., Hayes, K. G.,
Hebecker, A., Heinemeyer, S., Heltsley, B., Hernández-Rey, J. J., Hisano, J., Höcker,
A., Holder, J., Holtkamp, A., Hyodo, T., Irwin, K. D., Johnson, K. F., Kado, M., Kar-
liner, M., Katz, U. F., Klein, S. R., Klempt, E., Kowalewski, R. V., Krauss, F., Kreps,
M., Krusche, B., Kuyanov, Y. V., Kwon, Y., Lahav, O., Laiho, J., Lesgourgues, J., Lid-
dle, A., Ligeti, Z., Lin, C. J., Lippmann, C., Liss, T. M., Littenberg, L., Lugovsky, K.
S., Lugovsky, S. B., Lusiani, A., Makida, Y., Maltoni, F., Mannel, T., Manohar, A. V.,
Marciano, W. J., Martin, A. D., Masoni, A., Matthews, J., Meißner, U. G., Milstead,
D., Mitchell, R. E., Mönig, K., Molaro, P., Moortgat, F., Moskovic, M., Murayama, H.,
Narain, M., Nason, P., Navas, S., Neubert, M., Nevski, P., Nir, Y., Olive, K. A., Pagan,
G. S., Parsons, J., Patrignani, C., Peacock, J. A., Pennington, M., Petcov, S. T., Petrov, V.
A., Pianori, E., Piepke, A., Pomarol, A., Quadt, A., Rademacker, J., Raffelt, G., Ratcliff,
B. N., Richardson, P., Ringwald, A., Roesler, S., Rolli, S., Romaniouk, A., Rosenberg,
L. J., Rosner, J. L., Rybka, G., Ryutin, R. A., Sachrajda, C. T., Sakai, Y., Salam, G. P.,
Sarkar, S., Sauli, F., Schneider, O., Scholberg, K., Schwartz, A. J., Scott, D., Sharma, V.,
Sharpe, S. R., Shutt, T., Silari, M., Sjöstrand, T., Skands, P., Skwarnicki, T., Smith, J.
G., Smoot, G. F., Spanier, S., Spieler, H., Spiering, C., Stahl, A., Stone, S. L., Sumiyoshi,
T., Syphers, M. J., Terashi, K., Terning, J., Thoma, U., Thorne, R. S., Tiator, L., Titov,
M., Tkachenko, N. P., Törnqvist, N. A., Tovey, D. R., Valencia, G., Van de Water, R.,
Varelas, N., Venanzoni, G., Verde, L., Vincter, M. G., Vogel, P., Vogt, A., Wakely, S. P.,
Walkowiak, W., Walter, C. W., Wands, D., Ward, D. R., Wascko, M. O., Weiglein, G.,
Weinberg, D. H., Weinberg, E. J., White, M., Wiencke, L. R., Willocq, S., Wohl, C. G.,
Womersley, J., Woody, C. L., Workman, R. L., Yao, W. M., Zeller, G. P., Zenin, O. V.,
Zhu, R. Y., Zhu, S. L., Zimmermann, F., Zyla, P. A., Anderson, J., Fuller, L., Lugovsky,
V. S., and Schaffner, P. (Particle Data Group) 2018. Physical Review D, 98(3):030001.
https://link.aps.org/doi/10.1103/PhysRevD.98.030001 49
96
4. PROBLEMS IN ELECTROSTATICS AND ELECTRODYNAMICS
[2] Land, M. 1996. Foundations of Physics, 27:19. 50
[3] Pierce, P. O. 1899. A Short Table of Integrals, Ginn and Company, New York. 52, 88
[4] Hestenes, D. 1966. Space-Time Algebra, Documents on modern physics, Gordon and
Breach. https://books.google.co.il/books?id=OoRmatRYcs4C DOI: 10.1007/978-
3-319-18413-5. 55
[5] Land, M. 2013. Journal of Physics: Conference Series, 437. https://doi.org/10.1088%2F
1742-6596%2F437%2F1%2F012012 58
[6] Land, M. and Horwitz, L. 1991. Foundations on Physics Letters, 4:61. 62
[7] Jackson,
J. 1975. Classical Electrodynamics, 9:391, Wiley, New York. DOI:
10.1063/1.3057859. 65
[8] Land, M. 2019. Journal of Physics: Conference Series, 1239:012005. https://doi.org/10.
1088%2F1742-6596%2F1239%2F1%2F012005 65
[9] Land, M. 2017. Journal of Physics: Conference Series, 845:012024. http://stacks.iop.o
rg/1742-6596/845/i=1/a=012024 69, 94
[10] Anderson, C. D. 1932. Physical Review, 41:405. 73
[11] Bethe, H. A. and Heitler, W. 1934. Proc. Royal Society of London, A(146):83. 73
[12] Horwitz, L. P. 2017 Journal of Physics: Conference Series, 845:012026. http://stacks.i
op.org/1742-6596/845/i=1/a=012026 89
[13] Horwitz, L. P. and Arshansky, R. I. 2018. Relativistic Many-Body Theory and Statistical
Mechanics, 2053–2571, Morgan & Claypool Publishers. http://dx.doi.org/10.1088/
978-1-6817-4948-8 DOI: 10.1088/978-1-6817-4948-8. 89
C H A P T E R 5
Advanced Topics
97
5.1
ELECTRODYNAMICS FROM COMMUTATION
RELATIONS
In (2.1) we introduced an unconstrained 8D phase space .x(cid:22); p(cid:22)/ along with Poisson brackets
for which
x(cid:22); p(cid:23)
f
g D
@x(cid:22)
@x(cid:21)
@p(cid:23)
@p(cid:21) (cid:0)
@x(cid:22)
@p(cid:21)
@p(cid:23)
@x(cid:21) D
g(cid:22)(cid:23).x/
D
in curved spacetime. In 1990, Dyson [1] published a 1948 attempt by Feynman to derive
the Lorentz force law and homogeneous Maxwell equations starting from Euclidean relations
(cid:8)xi ; pj (cid:9)
(cid:14)ij on 6D phase space. Several authors noted that the derived equations have only
Galilean symmetry, and so are not actually the Maxwell theory, leading to a number of in-
teresting theoretical developments. Tanimura [2] generalized Feynman’s derivation to Lorentz
covariant form and obtained expressions similar to Maxwell theory, but including a fifth elec-
tromagnetic potential, a scalar evolution parameter that cannot be identified with proper time,
absence of reparameterization invariance, and violations of the mass-shell constraint. His result
can be identified with SHP electrodynamics. Significantly, Hojman and Shepley [3] proved that
the existence of quantum commutation relations is a strong assumption, sufficient to determine
a corresponding classical action, from which this system can be derived. We generalize Tan-
imura’s result to curved spacetime and show that this approach to SHP provides the final step
in Feynman’s program. Using the technique of Hojman and Shepley, we show that SHP elec-
trodynamics follows as the most general interacting system consistent with the unconstrained
commutation relations we have assumed [4].
We begin with the commutation relations among the quantum operators
for (cid:22); (cid:23)
0; 1;
; D
(cid:0)
(cid:1) (cid:1) (cid:1)
D
(cid:140)x(cid:22); x(cid:23)(cid:141)
0
D
D
1, and suppose equations of motion
m(cid:140)x(cid:22);
x(cid:23)(cid:141)
P
g(cid:22)(cid:23).x/
i
(cid:132)
(5.1)
m
x(cid:22)
R
D
F (cid:22).(cid:28); x;
x/:
P
We regard these quantities as operators in a Heisenberg picture, so that the field equations and
the Lorentz force may be interpreted, in the Ehrenfest sense, as relations among the expectation
values which correspond to relations among classical quantities. It follows that
(cid:140)
x(cid:22); q.x/(cid:141)
P
D
i
(cid:132)
m
@q
@x(cid:22)
(5.2)
98
5. ADVANCED TOPICS
for any function q.x/. Differentiating (5.1) with respect to (cid:28) we find
and so define W (cid:22)(cid:23)
D (cid:0)
x(cid:23)(cid:141)
P
C
m(cid:140)x(cid:22);
x(cid:23)(cid:141)
R
i
(cid:132)
D
@(cid:26)g(cid:22)(cid:23).x/
x(cid:26)
P
m(cid:140)
x(cid:22);
P
W (cid:23)(cid:22) by
W (cid:22)(cid:23)
D
m2
i
(cid:132)
(cid:140)
x(cid:22);
P
x(cid:23)(cid:141) :
P
(5.3)
From (5.1) and the Jacobi identity,
(cid:140) x(cid:21); (cid:140)
x(cid:22);
P
x(cid:23) (cid:141) (cid:141)
P
(cid:140)
x(cid:22); (cid:140)
P
x(cid:23); x(cid:21) (cid:141) (cid:141)
P
(cid:140)
x(cid:23); (cid:140) x(cid:21);
P
x(cid:23) (cid:141) (cid:141)
P
C
C
D
0
we find that
(cid:140) x(cid:21); W (cid:22)(cid:23) (cid:141)
Defining f (cid:22)(cid:23)
D (cid:0)
(cid:16)(cid:140) (cid:140) x(cid:21);
m2
i
D
(cid:132)
f (cid:23)(cid:22) by
x(cid:22) (cid:141);
P
x(cid:23) (cid:141)
P
C
(cid:140)
x(cid:22); (cid:140) x(cid:21);
P
x(cid:23) (cid:141) (cid:141)(cid:17)
P
i
(cid:132)
D
(cid:16)@(cid:23)g(cid:21)(cid:22)
(cid:0)
@(cid:22)g(cid:21)(cid:23)(cid:17) :
f (cid:22)(cid:23)
W (cid:22)(cid:23)
(cid:0)
D
m D.@(cid:23)g(cid:21)(cid:22)
@(cid:22)g(cid:21)(cid:23)/
x(cid:21)E ;
P
(cid:0)
(5.4)
where the brackets
:::
h
i
represent Weyl ordering, we find
(cid:140)x(cid:27) ; f (cid:22)(cid:23)(cid:141)
0;
D
which shows that f (cid:22)(cid:23) is independent of
x. When lowering indices, we define
P
and from
we may show that
x(cid:22)
P
D h
g(cid:22)(cid:23).x/
x(cid:23)
P
i
(cid:2)
x(cid:22);
P
x(cid:23)(cid:3)
P
D
hDg(cid:22)(cid:21)
x(cid:21)E ; (cid:10)g(cid:23)(cid:26)
P
x(cid:26)(cid:11)i
P
leading to the Bianchi relation
f(cid:22)(cid:23)
D
g(cid:22)(cid:21)g(cid:23)(cid:26)f (cid:21)(cid:26)
D (cid:0)
m2
i
(cid:132)
(cid:140)
x(cid:22);
P
x(cid:23)(cid:141)
P
(5.5)
Rearranging Equation (5.1) and using (5.3) and (5.4), we see that
@(cid:22)f(cid:23)(cid:26)
@(cid:23)f(cid:26)(cid:22)
@(cid:26)f(cid:22)(cid:23)
0:
D
C
C
where
m(cid:140)x(cid:22);
x(cid:23)(cid:141)
R
D
i
(cid:132)
m
f (cid:22)(cid:23)
2i
(cid:132)h
C
(cid:128) (cid:23)(cid:21)(cid:22)
x(cid:21)
P
;
i
(cid:128) (cid:23)(cid:21)(cid:22)
1
2
D (cid:0)
.@(cid:22)g(cid:21)(cid:23)
@(cid:21)g(cid:22)(cid:23)
C
(cid:0)
@(cid:23)g(cid:21)(cid:22)/
5.1. ELECTRODYNAMICS FROM COMMUTATION RELATIONS 99
is the Levi–Civita connection.
We now define g(cid:22) through the equation
F (cid:22)
m
x(cid:22)
R
D
D
g(cid:22).x;
x; (cid:28)/
P
C h
f (cid:22)(cid:23)
x(cid:23)
P
i (cid:0)
m
h
(cid:128) (cid:22)(cid:21)(cid:23)
x(cid:21)
P
x(cid:23)
P
i
and it follows that
(cid:140)x(cid:21); g(cid:22)(cid:141)
D
D
D
(cid:140)x(cid:21); f (cid:22)(cid:141)
i
(cid:132)
m
0
f (cid:21)(cid:22)
(cid:0)
f (cid:22)(cid:23)(cid:140)x(cid:21);
x(cid:23)(cid:141)
P
D(cid:128) (cid:22)(cid:26)(cid:21)
2 i
(cid:132)
C
C
x(cid:26)E
P
C
i
(cid:132)
m
m (cid:128) (cid:22)(cid:23)(cid:26) (cid:140)x(cid:21);
f (cid:22)(cid:23) (cid:14)(cid:21)
x(cid:23)(cid:141)
P
(cid:23) (cid:0)
i
x(cid:26)
P
(cid:132)
(cid:128) (cid:22)(cid:23)(cid:26)
x(cid:23)(cid:140)x(cid:21);
P
C
D(cid:16)(cid:128) (cid:22)(cid:23)(cid:26) (cid:14)(cid:21)
x(cid:26)
(cid:23) P
C
x(cid:26)(cid:141)
P
(cid:128) (cid:22)(cid:23)(cid:26)
(cid:17)E
x(cid:23) (cid:14)(cid:21)
(cid:26)
P
so that g(cid:22) is also independent of
x. We may write the force as
P
g(cid:22)
f (cid:22)(cid:23)
C h
x(cid:23)
P
i D
m h
x(cid:22)
R
C
D(cid:128) (cid:22)(cid:21)(cid:23)
x(cid:21)
P
x(cid:23)Ei
P
D
m
x(cid:22)
D
P
D(cid:28)
and since
we lower the index of g(cid:22) to find
m
x(cid:22)
R
D
m
d
d (cid:28) h
g(cid:22)(cid:23)
;
x(cid:23)
P
i
x(cid:26)
P
We write the first term on the right-hand side as
(cid:0) h
g(cid:23)
D
g(cid:23)(cid:21) f (cid:21)(cid:26)
g(cid:23)(cid:21) f (cid:21)
g(cid:23)(cid:21) (cid:128) (cid:21)(cid:26)(cid:27)
m
h
x(cid:26)
P
x(cid:27)
P
:
i
i C
g(cid:23)(cid:21) f (cid:21)
m
g(cid:23)(cid:21)
h
x(cid:21)
R
D
i D
m g(cid:23)(cid:21)
d
d (cid:28) h
g(cid:21)(cid:26)
x(cid:26)
P
i D
m
x(cid:23)
R
C
m
g(cid:23)(cid:21) @(cid:27) g(cid:21)(cid:26)
h
x(cid:26)
P
x(cid:27)
P
:
i
Since the indices (cid:26) and (cid:27) of @(cid:27) g(cid:21)(cid:26) occur in symmetric combination, we may write
so that
1
2
.@(cid:27) g(cid:21)(cid:26)
@(cid:26)g(cid:21)(cid:27) /
(cid:128) (cid:21)(cid:26)(cid:27)
D (cid:0)
C
1
2
C
@(cid:21)g(cid:26)(cid:27)
g(cid:23)
m
x(cid:23)
R
D
C
1
2
m
@(cid:23)g(cid:21)(cid:26)
h
x(cid:21)
P
x(cid:26)
P
i (cid:0) h
f(cid:23)(cid:21) g(cid:21)(cid:26)
x(cid:26)
P
:
i
Using (5.2) and (5.5) we obtain
x(cid:22); g(cid:23)(cid:141)
(cid:140)
P
m(cid:140)
x(cid:22);
P
x(cid:23)(cid:141)
R
D
i
(cid:132)
m
(cid:0)
m(cid:140)
x(cid:22);
P
x(cid:23)(cid:141)
R
D
i
(cid:132)
m
(cid:0)
@(cid:23)g(cid:21)(cid:26).f(cid:22)(cid:21)
(cid:28) 1
2
i
@(cid:22)@(cid:23)g(cid:21)(cid:26)
C
(cid:132)
@(cid:22).f(cid:23)(cid:21) g(cid:21)(cid:26)/
x(cid:26)
P
C
i
(cid:28) 1
@(cid:22)@(cid:23)g(cid:21)(cid:26)
2
C
(cid:132)
.@(cid:22)g(cid:21)(cid:26)/f(cid:23)(cid:21)
x(cid:26)
P
(cid:0)
(cid:0)
x(cid:21)
P
i
x(cid:26)
(cid:132)
2m
P
i
m2 f(cid:23)(cid:21) g(cid:21)(cid:26) f(cid:22)(cid:26)
(cid:132)
i
x(cid:26)
x(cid:21)
(cid:132)
m
P
P
i
@(cid:22)f(cid:23)(cid:21)g(cid:21)(cid:26)
(cid:132)
m
x(cid:26)
P
(cid:0)
(cid:29)
.@(cid:23)g(cid:21)(cid:26)/f(cid:22)(cid:21)
x(cid:26)
P
x(cid:21)f(cid:22)(cid:26)/
C P
x(cid:26)
P
i
m2 f(cid:23)(cid:21) g(cid:21)(cid:26) f(cid:22)(cid:26)
(cid:132)
(cid:29) :
C
100
5. ADVANCED TOPICS
Finally, antisymmetrization with respect to the indices (cid:22) and (cid:23) gives
x(cid:22); g(cid:23)(cid:141)
(cid:140)
P
x(cid:23); g(cid:22)(cid:141)
(cid:140)
P
(cid:0)
D
D
i
(cid:132)
m
(cid:0)
D.@(cid:22)f(cid:23)(cid:21)
(cid:0)
@(cid:23)f(cid:22)(cid:21)/g(cid:21)(cid:26)
x(cid:26)E
P
D.@(cid:22)f(cid:23)(cid:21)
@(cid:23)f(cid:21)(cid:22)/
x(cid:21)E
P
C
(cid:10).@(cid:22)f(cid:23)(cid:26)
@(cid:23)f(cid:26)(cid:22)/
x(cid:26)(cid:11)
P
C
x(cid:22)(cid:141)
R
i
(cid:132)
m
x(cid:23)(cid:141)
R
x(cid:23);
(cid:140)
P
(cid:0)
x(cid:23)(cid:141)
P
(cid:140)
x(cid:22);
P
d
d (cid:28)
f(cid:22)(cid:23)
(cid:0)
(cid:0)
i
(cid:132)
m
m(cid:140)
m
x(cid:22);
P
d
d (cid:28)
i
(cid:132)
m
i
(cid:132)
m
D (cid:0)
D (cid:0)
(cid:2)(cid:10).@(cid:26)f(cid:22)(cid:23)
@(cid:22)f(cid:23)(cid:26)
C
@(cid:23)f(cid:26)(cid:22)/
x(cid:26)(cid:11)
P
C
C
@(cid:28) f(cid:22)(cid:23)(cid:3)
and so using the Bianchi identity for f(cid:22)(cid:23),
@(cid:22)g(cid:23)
@(cid:23)g(cid:22)
(cid:0)
@f(cid:22)(cid:23)
@(cid:28) D
0:
C
Regarding these equations in the Ehrenfest sense, we may summarize the classical theory as
m
x(cid:22)
D
P
D(cid:28) D
m(cid:140)
x(cid:22)
R
C
(cid:27) (cid:22)(cid:21)(cid:23)
x(cid:21)
P
x(cid:23)(cid:141)
P
D
f (cid:22)(cid:23)
g(cid:22)
x(cid:23)
P
C
@(cid:22)f(cid:23)(cid:26)
@(cid:22)g(cid:23)
C
(cid:0)
@(cid:23)f(cid:26)(cid:22)
@(cid:26)f(cid:22)(cid:23)
0
D
C
@(cid:23)g(cid:22)
@f(cid:22)(cid:23)
@(cid:28) D
0:
C
Introducing the definitions
xD
(cid:28)
D
@(cid:28)
@D
D
f(cid:22)D
fD(cid:22)
g(cid:22) :
D
D (cid:0)
We may then combine the inhomogeneous field equations as
@(cid:11)f(cid:12)(cid:13)
@(cid:12) f(cid:13)(cid:11)
@(cid:13) f(cid:11)(cid:12)
0
D
C
C
(5.6)
(cid:1) (cid:1) (cid:1)
; D), which shows that the two form f is closed on the formal (D
(for (cid:11); (cid:12); (cid:13) = 0;
1)-
dimensional manifold .(cid:28); x/. Hence, if this manifold is contractable, then f is an exact form
da. The Lorentz
which can be obtained as the derivative of some potential with the form f
force equation becomes
D
C
m
x(cid:22)
D
P
D(cid:28) D
m(cid:140)
x(cid:22)
R
C
(cid:128) (cid:22)(cid:21)(cid:23)
x(cid:21)
P
x(cid:23)(cid:141)
P
D
f (cid:22)(cid:23).(cid:28); x/
x(cid:23)
P
C
g(cid:22).(cid:28); x/
f (cid:22)
(cid:12) .(cid:28); x/
x(cid:12) :
P
D
(5.7)
Following Dyson and Feynman, we observe that given Equation (5.6), the two-form f (cid:11)(cid:12) is
determined if we know functions (cid:26) and j (cid:22) such that
D(cid:11) f (cid:22)(cid:11)
j (cid:22)
D
D(cid:11) f d(cid:11)
(cid:26);
D
5.1. ELECTRODYNAMICS FROM COMMUTATION RELATIONS 101
where D(cid:11) is the covariant derivative. By denoting (cid:26)
pactly as
j d , these equations can be written com-
D
D(cid:11) f (cid:12) (cid:11)
j (cid:12) ;
D
(5.8)
0.
D
where, due to the antisymmetry of f (cid:12) (cid:11), we see that j (cid:12) is conserved as D(cid:11)j (cid:11)
D(cid:11)j (cid:11)
0 :
D
By comparing the Lorentz force (5.7) with (3.6), and the field Equations (5.6) and (5.8) with
(3.19) and (3.17), we see conclude that the assumption of unconstrained commutation relations
leads to a field theory equivalent to classical SHP electrodynamics.
In Sections 3.2 and 3.3 we found the Lorentz force and field equations from an action
principle. Hojman and Shepley [3] set out to prove that the assumed commutation relations
are sufficient to establish the existence of a unique Lagrangian of electromagnetic form. To
accomplish this goal, they demonstrate a new connection between the commutation relations
and well-established results from the inverse problem in the calculus of variations, a theory
which concerns the conditions under which a system of differential equations may be derived
from a variational principle. We consider a set of ordinary second-order differential equations
of the form
Fk.(cid:28); q;
q;
P
q/
R
D
0
qj
P
D
dqj
d (cid:28)
qj
R
D
d 2qj
d (cid:28) 2
j; k
1;
(cid:1) (cid:1) (cid:1)
D
; n:
Under variations of the path
q.(cid:28)/
the function Fk.(cid:28); q;
q;
P
q.(cid:28)/
dq.(cid:28)/
(cid:0)!
C
d
C
q.(cid:28)/
q.(cid:28)/
(cid:0)! P
q.(cid:28)/
P
q.(cid:28)/
P
d
d (cid:28)
d 2
d (cid:28) 2 dq.(cid:28)/
q/ admits the variational one-form defined by
R
q.(cid:28)/
R
q.(cid:28)/
R
(cid:0)! R
dq.(cid:28)/
q.(cid:28)/
q.(cid:28)/
D R
D P
C
C
C
d
dFk
@Fk
@qj dqj
C
D
@Fk
qj d
@
P
qj
P
@Fk
qj d
@
R
qj
R
C
and the variational two-form
dqkdFk
@Fk
@qj dqk
^
D
dqj
where the 3n path variations .dqk; d
(cid:1) (cid:1) (cid:1)
independent. The system of differential equations Fk.(cid:28); q;
q;
P
exists a two-form (cid:127)2.dq; d
qk; d
P
D
1;
; n are understood to be linearly
q/ is called self-adjoint if there
R
q/ such that for all admissible variations of the path,
P
d
qj
P
^
C
@Fk
qj dqk
@
R
d
qj ;
R
^
C
@Fk
qj dqk
@
P
qk/ for k
R
dqkdFk.dq/
d
d (cid:28)
D
(cid:127)2.dq; d
q/:
P
102
5. ADVANCED TOPICS
Through integration by parts, one may show [5] that such a two-form exists and is unique up to
an additive constant, if and only if
@Fi
qk D
@
R
@Fk
@
qi D
P
@Fk
@qi D
@Fi
qk C
@
P
@Fi
@qk (cid:0)
@Fk
qi
@
R
d
d (cid:28)
1
2
d
d (cid:28)
(cid:20) @Fi
qk C
@
R
(cid:20) @Fi
@
qk (cid:0)
P
(cid:21)
@Fk
qi
@
R
@Fk
qi
@
P
(5.9)
(5.10)
(5.11)
(cid:21) ;
known as the Helmholtz conditions [6, 7]. Introducing the notation
dqk
(cid:12)
(cid:14)
D
@
@qk
(cid:12)
qk
(cid:12) D
(cid:18) d
d (cid:28)
(cid:12)
(cid:19)
qk
0; 1; 2;
(cid:12)
D
it follows that
(cid:14)2
dqk
(cid:12) ^
dql
(cid:11)
D
@2
@qk
(cid:12) @ql
(cid:11) D
0;
which permits the equivalence of a set of self-adjointness differential equations to a Lagrangian
formulation to be easily demonstrated [8]. Varying the Lagrangian L,
(cid:14)L
D
@L
@qk
dqk
C
@L
qk
@
P
d
qk
P
(cid:20)
D
(cid:0)
d
d (cid:28)
@L
qk C
@
P
@L
@qk
(cid:21) dqk
d
d (cid:28)
(cid:18) @L
qk
@
P
C
dqk(cid:19)
Fkdqk
D
d
d (cid:28)
C
(cid:127)1
so that
(cid:14)2
0
D
H) (cid:0)
dqk(cid:14)Fk
d
d (cid:28)
C
(cid:14)(cid:127)1
D (cid:0)
dqk(cid:14)Fk
d
d (cid:28)
C
(cid:127)2
0
D
which demonstrates self-adjointness. Conversely, self-adjoint of Fk requires that dqk(cid:14)Fk
d
d (cid:28) (cid:127)2
0 and since (cid:14)2
0,
(cid:0)
D
D
d
d (cid:28)
(cid:127)2
(cid:14)
d
d (cid:28)
D
(cid:127)1:
Therefore,
dqk(cid:14)Fk
0
D
d
d (cid:28)
(cid:0)
(cid:127)2
D
(cid:14) (cid:18)dqkFk
d
d (cid:28)
(cid:0)
(cid:19)
(cid:127)1
(cid:14)L
D
by variation of L under (cid:28)-integration, one obtains the differential equations Fk
0.
D
For the second-order equations considered here, it follows [5] from self-adjointness that
the most general form of Fk is
Fk.(cid:28); q;
q;
P
q/
R
D
Akj .(cid:28); q;
q/
P
qj
R
C
Bk.(cid:28); q;
q/:
P
(5.12)
5.1. ELECTRODYNAMICS FROM COMMUTATION RELATIONS 103
To see this, notice that Fk is independent of d 3qi =dt 3, so that the right-hand side of (5.10) must
qi . Inserting (5.12) into (5.9)–(5.11), one finds the Helmholtz conditions on
be independent of
R
Akj and Bk
Aij
Aj i
D
@Bi
qj C
@
P
@Bi
@qj (cid:0)
@Bj
@
qi D
P
@Bj
@qi D
2 (cid:20) @
@Akj
@Aij
qi
qk D
@
@
P
P
qk @
(cid:21) Aij
@qk
qk @
@qk
@(cid:28) C P
(cid:20) @
@(cid:28) C P
1
2
(cid:21) (cid:18) @Bi
qj (cid:0)
@
P
(cid:19)
@Bj
qi
@
P
along with the useful identity
@Aij
@qk (cid:0)
@Akj
@qi D
1
2
@
qj
P
@
(cid:18) @Bi
qk (cid:0)
@
P
@Bk
qi
@
P
(cid:19) :
In the domain of invertibilty of the Aj k, one can write (5.12) as
Fk.(cid:28); q;
q;
P
and the Helmholtz conditions become
Akj .(cid:28); q;
q/(cid:140)
P
q/
R
D
qj
R
f j (cid:141)
(cid:0)
f j .(cid:28); q;
q/
P
D (cid:0)
.A(cid:0)
1/j kBk
Aij
Aj i
D
Aij
#
D
D(cid:28)
@f k
qi
@
P
Aj k
1
2
D (cid:0)
Aik
D
1
2
D
D(cid:28)
"
Aik
@f k
qj (cid:0)
@
P
@Akj
qi
@
P
Aj k
@Aij
@
"
qk D
P
@f k
Aik
qj C
@
P
@f k
@qj (cid:0)
Aj k
@f k
@qi ;
#
@f k
qi
@
P
(5.13)
(5.14)
(5.15)
where
D
D(cid:28) D
@
@(cid:28) C P
qk @
@qk C
is the total time derivative subject to the constraint
f k @
qk
@
P
The identity (5.13) becomes
qk
R
(cid:0)
f k.(cid:28); q;
q/
P
D
0 :
(5.16)
@Aij
@qk (cid:0)
(cid:20) @
qk
@
P
Within the domain of applicability of the inverse function theorem, (5.16) is equivalent to (5.12),
and the Helmholtz conditions become the necessary and sufficient conditions for the existence
qi .Ak nf n/(cid:21) :
@Akj
@qi D (cid:0)
@
qj
@
P
.Ainf n/
(5.17)
@
@
1
2
(cid:0)
P
104
5. ADVANCED TOPICS
of an integrating factor Aj k such that
Fk
D
Akj .(cid:28); q;
q/(cid:140)
P
qj
R
(cid:0)
f j (cid:141)
D
d
d (cid:28)
(cid:18) @L
qk
@
P
(cid:19)
@L
@qk
:
(cid:0)
(5.18)
Employing this apparatus, Hojman and Shepley prove that given the quantum mechanical
commutation relations
the classical function
has an inverse
(cid:140)xi .(cid:28)/;
xj .(cid:28)/(cid:141)
P
i
Gij ;
(cid:132)
D
gij
Gij
D
lim
0
(cid:132)!
!ij
.g(cid:0)
1/ij
D
which satisfies the Helmholtz conditions. Following Santilli, we take the function A(cid:22)(cid:23)
g(cid:22)(cid:23).x/ to be a Riemannian metric independent of
tomatically. Since g(cid:22)(cid:23) does not depend on
D
x, so that Equation (5.14) is satisfied au-
P
x(cid:22), Equation (5.15) becomes
P
D
D(cid:28)
g(cid:22)(cid:23)
and Equation (5.17) becomes
x(cid:27) @
@x(cid:27) g(cid:22)(cid:23)
D P
1
2
(cid:20) @f(cid:22)
x(cid:23) (cid:0)
@
P
@f(cid:23)
x(cid:22)
@
P
(cid:21)
D (cid:0)
Acting on (5.19) with @=@
(cid:21)
@g(cid:22)(cid:23)
@x(cid:27) (cid:0)
@g(cid:27)(cid:23)
@x(cid:22) :
D
@
(cid:0)
1
2
@
x(cid:23)
P
(cid:20) @f(cid:22)
x(cid:27) (cid:0)
@
P
@f(cid:27)
x(cid:22)
@
P
x(cid:27) and exchanging ((cid:23)
P
g(cid:22)(cid:27);(cid:23)
1
2
D (cid:0)
(cid:27)), we obtain
$
(cid:20) @2f(cid:22)
x(cid:27) @
@
P
x(cid:23) C
P
@2f(cid:27)
x(cid:22)@
P
x(cid:23)
P
@
(cid:21) ;
(5.19)
(5.20)
(5.21)
where g(cid:22)(cid:27);(cid:23)
D
@g(cid:22)(cid:27) =@x(cid:23). Combining (5.20) and (5.21), we find
1
2
@2f(cid:22)
x(cid:27) @
@
P
x(cid:23) D (cid:0)
P
1
2
.g(cid:22)(cid:23);(cid:27)
g(cid:22)(cid:27);(cid:23)
C
(cid:0)
g(cid:27)(cid:23);(cid:22)/
D (cid:0)
(cid:128)(cid:22)(cid:27)(cid:23);
where (cid:128)(cid:22)(cid:27)(cid:23) is the Levi-Civita connection. Thus, the most general expression for f(cid:22).(cid:28); x;
x/ is
P
f(cid:22)
(cid:128)(cid:22)(cid:23)(cid:27)
x(cid:23)
P
x(cid:27)
P
(cid:0)
D (cid:0)
(cid:26)(cid:22)(cid:23).(cid:28); x/
x(cid:23)
P
(cid:0)
(cid:27)(cid:22).(cid:28); x/:
(5.22)
Now from (5.19) we find
x(cid:27) @g(cid:22)(cid:23)
P
@x(cid:27) D
1
2
(cid:2)2(cid:128)(cid:22)(cid:23)(cid:27)
x(cid:27)
P
C
2(cid:128)(cid:23)(cid:22)(cid:27)
x(cid:27)
P
C
(cid:26)(cid:22)(cid:23)
C
(cid:26)(cid:23)(cid:22)(cid:3)
5.1. ELECTRODYNAMICS FROM COMMUTATION RELATIONS 105
and using
we find that all terms except for those in (cid:26)(cid:22)(cid:23) cancel, so that
.(cid:128)(cid:22)(cid:23)(cid:27)
(cid:128)(cid:23)(cid:22)(cid:27) /
x(cid:27)
P
g(cid:22)(cid:23);(cid:27)
x(cid:27)
P
D
C
We now apply Equation (5.16) which becomes
(cid:26)(cid:22)(cid:23)
0
D
C
(cid:26)(cid:23)(cid:22) :
1
2
D
D(cid:28)
g(cid:23)(cid:27)
@f (cid:27)
(cid:20)g(cid:22)(cid:27)
x(cid:23) (cid:0)
@
P
(cid:20) @f(cid:22)
D
1
x(cid:23) (cid:0)
@
D(cid:28)
2
P
@f (cid:27)
x(cid:22)
@
P
@f(cid:23)
x(cid:22)
@
P
(cid:21)
(cid:21)
D
D
using (5.22) to expand the left-hand side,
1
2
D
D(cid:28)
(cid:20) @f(cid:22)
x(cid:23) (cid:0)
@
P
@f(cid:23)
x(cid:22)
@
P
(cid:21)
D (cid:0)
1
2
D
D(cid:28)
(cid:20) @
x(cid:23)
@
P
g(cid:22)(cid:27)
@f (cid:27)
@x(cid:23) (cid:0)
g(cid:23)(cid:27)
f(cid:22);(cid:23)
f(cid:23);(cid:22)
(cid:0)
(cid:0)
@f (cid:27)
@x(cid:22)
g(cid:22)(cid:27);(cid:23)f (cid:27)
g(cid:23)(cid:27);(cid:22)f (cid:27) :
C
(5.23)
(cid:16)(cid:128)(cid:22)(cid:21)(cid:27)
x(cid:21)
P
x(cid:27)
P
C
(cid:27)(cid:22).(cid:28); x/(cid:17)
(cid:0)
C
(cid:20)2.(cid:128)(cid:22)(cid:23)(cid:21)
x(cid:27) @
@x(cid:27) C
g(cid:23)(cid:27);(cid:22)/f (cid:27)
x(cid:21)
(cid:0) P
x(cid:21)
(cid:128)(cid:23)(cid:22)(cid:21)/
P
f (cid:27) @
x(cid:27)
@
P
(cid:26)(cid:22)(cid:23);(cid:28)
(cid:0)
x(cid:27) .g(cid:22)(cid:21);(cid:23)(cid:27)
P
(cid:0)
(cid:0)
1
D
2
D(cid:28)
(cid:18) @
@(cid:28) C P
.g(cid:22)(cid:27);(cid:23)
D (cid:0)
D (cid:0)
D (cid:0)
x(cid:21)
P
(cid:23)/(cid:21)
(cid:21)
(cid:26)(cid:23)(cid:22)
(cid:26)(cid:22)(cid:21).(cid:28); x/
.(cid:22)
(cid:0)
$
(cid:26)(cid:22)(cid:23)
C
(cid:0)
(cid:19) h.g(cid:22)(cid:21);(cid:23)
g(cid:23)(cid:21);(cid:22)/
x(cid:21)
P
C
(cid:26)(cid:22)(cid:23)i
(cid:0)
g(cid:23)(cid:21);(cid:22)(cid:27) /
x(cid:21)(cid:26)(cid:22)(cid:23);(cid:21);
C P
(5.24)
where (cid:26)(cid:22)(cid:23);(cid:28)
D
2.(cid:128)(cid:22)(cid:23)(cid:21)
@(cid:26)(cid:22)(cid:23)=@(cid:28), and we have used
(cid:128)(cid:23)(cid:22)(cid:21)/
x(cid:21)
P
(cid:0)
x(cid:21).
g(cid:23)(cid:21);(cid:22)
D P
(cid:0)
x(cid:21).g(cid:22)(cid:21);(cid:23)
2
P
D
g(cid:22)(cid:21);(cid:23)
C
g(cid:23)(cid:21);(cid:22)/ :
C
(cid:0)
g(cid:23)(cid:22);(cid:21)
g(cid:22)(cid:21);(cid:23)
g(cid:23)(cid:21);(cid:22)
(cid:0)
(cid:0)
C
g(cid:22)(cid:23);(cid:21)/
Again using (5.22) we have
f(cid:22);(cid:23)
D (cid:0)
D (cid:0)
(cid:20)(cid:128)(cid:22)(cid:21)(cid:27)
x(cid:21)
P
(cid:20)(cid:128)(cid:22)(cid:21)(cid:27);(cid:23)
x(cid:27)
P
C
(cid:26)(cid:22)(cid:21).(cid:28); x/
x(cid:21)
P
C
(cid:27)(cid:22).(cid:28); x/(cid:21)
;(cid:23)
x(cid:21)
P
x(cid:27)
P
C
(cid:26)(cid:22)(cid:21);(cid:23)
x(cid:21)
P
C
(cid:21)
(cid:27)(cid:22);(cid:23)
so that the right-hand side of (5.23) is
f(cid:22);(cid:23)
f(cid:23);(cid:22)
(cid:0)
(cid:0)
g(cid:22)(cid:27);(cid:23)f (cid:27)
g(cid:23)(cid:27);(cid:22)f (cid:27)
D (cid:0)
C
(cid:20).(cid:128)(cid:22)(cid:21)(cid:27);(cid:23)
D
(cid:128)(cid:23)(cid:21)(cid:27);(cid:22)/
x(cid:21)
x(cid:27)
P
P
g(cid:23)(cid:27);(cid:22)/f (cid:27) :
(cid:0)
.g(cid:22)(cid:27);(cid:23)
(cid:0)
(cid:0)
.(cid:26)(cid:22)(cid:21);(cid:23)
(cid:26)(cid:23)(cid:21);(cid:22)/
x(cid:21)
P
C
(cid:27)(cid:22);(cid:23)
(cid:0)
(cid:0)
(cid:21)
(cid:27)(cid:23);(cid:22)
C
106
5. ADVANCED TOPICS
Now canceling common terms, we are left with
which, because the
@(cid:26)(cid:22)(cid:23)
@(cid:28) C P
x(cid:21)(cid:26)(cid:22)(cid:23);(cid:21)
x(cid:21).(cid:26)(cid:22)(cid:21);(cid:23)
(cid:26)(cid:23)(cid:21);(cid:22)/
(cid:27)(cid:22);(cid:23)
(cid:27)(cid:23);(cid:22)
(cid:0)
D P
(cid:0)
x(cid:21) are arbitrary, is equivalent the two expressions
P
@(cid:26)(cid:22)(cid:23)
@(cid:28) D
@(cid:27)(cid:22)
@x(cid:23) (cid:0)
@(cid:27)(cid:23)
@x(cid:22)
@(cid:22)(cid:26)(cid:23)(cid:21)
@(cid:21)(cid:26)(cid:22)(cid:23)
C
C
C
@(cid:23)(cid:26)(cid:21)(cid:22)
0:
D
Therefore, we may identify
f(cid:22)(cid:23)
(cid:26)(cid:22)(cid:23)
D (cid:0)
and
f5(cid:22)
(cid:27)(cid:22)
D (cid:0)
showing that SHP electrodynamics is the most general interaction consistent with the uncon-
strained commutation relations.
Moreover, these commutation relations are sufficient to establish the existence of an equiv-
alent Lagrangian for the classical problem associated with the quantum commutators. We ob-
serve that in flat space (5.18) implies
(cid:17)(cid:22)(cid:23)(cid:140)M
x(cid:23)
R
(cid:0)
f (cid:23)(cid:141)
D
d
d (cid:28)
(cid:18) @L
x(cid:22)
@
P
@2L
x(cid:22)@
P
x(cid:23)
x(cid:23) R
P
(cid:19)
(cid:0)
C
@L
@x(cid:22)
@2L
x(cid:23)
x(cid:22)@x(cid:23) P
P
@
@2L
x(cid:22)@(cid:28) (cid:0)
P
@L
@x(cid:22)
C
@
D
@
so that the solution
M (cid:17)(cid:22)(cid:23)
D
@
@2L
x(cid:22)@
P
x(cid:23)
P
(cid:17)(cid:22)(cid:23)f (cid:23)
(cid:0)
D
@
@2L
x(cid:23)
x(cid:22)@x(cid:23) P
P
@2L
x(cid:22)@(cid:28) (cid:0)
P
@L
@x(cid:22)
C
@
is unique. Therefore, we see that L may consist of the quadratic term integrated from the first
expression, plus terms at most linear in
x(cid:22). Thus, we may write the Lagrangian
P
e
c
which is the SHP event Lagrangian (3.3) in flat space. This demonstrates that SHP electro-
dynamics represents the conditions on the most general velocity dependent forces that may be
obtained from a variational principle.
a(cid:22).(cid:28); x/
ec5
c
x(cid:22)
P
x(cid:22)
P
x(cid:22)
P
1
2
a5
M
C
D
C
L
CLASSICAL NON-ABELIAN GAUGE THEORY
5.2
A classical non-Abelian gauge theory was given by Wong [9] possessing the following structure:
m R(cid:24)(cid:22)
I
P
F(cid:22)(cid:23)
F(cid:22)(cid:23)
F(cid:22)(cid:23)
(cid:2)
@(cid:22)F(cid:22)(cid:23)
A(cid:22)
D
gA(cid:22)
C
Aa(cid:22)I a
I.(cid:28)/ P(cid:24) (cid:23)
P(cid:24) (cid:22)
I
(cid:2)
@(cid:23)A(cid:22)
(cid:0)
gF(cid:22)(cid:23)
(cid:1)
gA(cid:22)
@(cid:22)A(cid:23)
j(cid:23)
Fa(cid:22)(cid:23)I a
D
D (cid:0)
D
D (cid:0)
D
gA(cid:22)
A(cid:23)
(cid:2)
C
(cid:140)I a; I b(cid:141)
"abcIc;
i
(cid:132)
D
5.2. CLASSICAL NON-ABELIAN GAUGE THEORY 107
where (cid:24) (cid:22).(cid:28)/ is the particle worldline and the I a.(cid:28)/ are an operator representation of the gener-
ators of a non-Abelian gauge group. From the form of the field f(cid:22)(cid:23), one has the inhomogeneous
equation
D(cid:22)F(cid:23)(cid:26)
D(cid:23)F(cid:26)(cid:22)
D(cid:26)F(cid:22)(cid:23)
C
0
D
with covariant derivative
.D(cid:22)F(cid:22)(cid:23)/a
C
D
@(cid:22)Fa(cid:22)(cid:23)
(cid:0)
" bc
a Ab(cid:22)Fc(cid:22)(cid:23) :
Lee [10] followed Feynman’s method, supplementing the phase space commutation relations
with
I
P
gAi
I
xi
P
0
(cid:140)I a; I b(cid:141)
i
"abcIc
(cid:140)xi ; I a.t/(cid:141)
0
(cid:132)
D
D
for i
1; 2; 3, and arrived at the Wong’s equations in Newtonian form. Tanimura [2] gen-
eralized Lee’s derivation to D-dimensional flat Minkowski space and a general gauge group
satisfying
D
C
D
(cid:2)
(cid:140)I a; I b(cid:141)
f ab
c
I c
i
(cid:132)
D
I a
P
D
F ab
c Ab(cid:22).x/
x(cid:22)I c
P
(5.25)
for (cid:28)-independent fields.
We now extend the presentation of Section 5.1 by generalizing the Helmholtz condi-
tions to take account of classical non-Abelian gauge fields according to Wong’s formulation. To
achieve this, we associate with variations dq of the path q.(cid:28) /, a variation dI a of the genera-
tors I a, which may be understood as the variation of the orientation of the tangent space under
q.(cid:28)/
dq.(cid:28)/. The explicit form of this variation follows from (5.25): for small d (cid:28),
q.(cid:28)/
!
C
dI a
D
f ab
c
(cid:140)Ab(cid:22).(cid:28); x/ dx(cid:22)
C
(cid:30)b.(cid:28); x/d (cid:28)(cid:141)I c;
(5.26)
where we have allowed an explicit (cid:28)-dependence for the gauge field, and have included a Lorentz
MaI a undergoes the
scalar gauge field (cid:30)a, in analogy with the Abelian case. The quantity M
variation of the path
D
.(cid:28); x/
.(cid:28)
C
(cid:0)!
d (cid:28); x
C
dx/
according to
d M
.dMa/I a
(cid:18) @Ma
@(cid:28)
d (cid:28)
D
D
x(cid:22)(cid:19) I a
R
f bc
a Ab(cid:22)Mc
(cid:21) I adx(cid:22)
C
Ma.dI a/
@Ma
@x(cid:22) dx(cid:22)
C
Ma(cid:140)f ab
C
f bc
a (cid:30)bMc
@Ma
x(cid:22) I ad
@
P
D(cid:22)Mdx(cid:22)
C
C
@Ma
x(cid:22)
x(cid:22) d
@
C
P
R
(cid:30)bd (cid:28)(cid:141)I c
@Ma
x(cid:22) d
@
C
P
c Ab(cid:22) dx(cid:22)
(cid:21) I ad (cid:28)
C
(cid:20) @Ma
@x(cid:22) (cid:0)
x(cid:22)
R
C
@Ma
x(cid:22) I ad
@
C
R
@M
x(cid:22)
x(cid:22) d
@
P
P
C
x(cid:22)
P
C
@M
x(cid:22) d
@
R
x(cid:22)
R
(cid:20) @Ma
D
@(cid:28) (cid:0)
D(cid:28) Md (cid:28)
D
108
5. ADVANCED TOPICS
in which the spacetime part of the covariant derivative D(cid:22) has the form
D
and a similar covariant derivative for the (cid:28) component appears which contains (cid:30)a.
(cid:0)
.D(cid:22)F(cid:23)(cid:26)/a
@(cid:22)Fa(cid:23)(cid:26)
f bc
a ab(cid:22)Fc (cid:23)(cid:26)
Now, the entire structure of self-adjoint equations follows with the replacements
@x(cid:22) (cid:0)!
so that the Helmholtz conditions become
@
D(cid:22)
@
@(cid:28) (cid:0)!
D(cid:28) ;
A(cid:22)(cid:23)
D
A(cid:23)(cid:22)
D
D(cid:28)
A(cid:23)(cid:27)
A(cid:22)(cid:23)
D (cid:0)
@f (cid:27)
x(cid:22)
@
P
(cid:21)
D
1
2
D
D(cid:28)
(cid:20)A(cid:22)(cid:27)
@f (cid:27)
x(cid:23) (cid:0)
@
P
@A(cid:22)(cid:23)
@A(cid:27)(cid:23)
x(cid:27) D
x(cid:22)
@
@
@f (cid:27)
P
P
(cid:20)A(cid:22)(cid:27)
A(cid:23)(cid:27)
x(cid:23) C
@
P
A(cid:22)(cid:27) D(cid:23)f (cid:27)
@f (cid:27)
x(cid:22)
@
P
A(cid:23)(cid:27) D(cid:22)f (cid:27) ;
1
2
(cid:21)
(5.27)
(5.28)
(5.29)
(cid:0)
where
is the total (cid:28) derivative subject to
D
D(cid:28) D
D(cid:28)
x(cid:27) D(cid:27)
C P
C
f (cid:27) @
x(cid:27)
@
P
Since Hojman and Shepley’s argument relates only to the commutation relations among the
coordinates, not to the structure of the forces, their result carries over unchanged.
x(cid:22)
R
(cid:0)
fa(cid:22).(cid:28); x;
x/I a
P
D
0:
In flat spacetime, with A(cid:22)(cid:23)
g(cid:22)(cid:23)
(cid:17)(cid:22)(cid:23)
D
D
D
constant, (5.27) is trivially satisfied and (5.28)
becomes
@f(cid:22)
x(cid:23) C
@
P
@f(cid:23)
x(cid:22) D
@
P
0
H)
@2f(cid:22)
x(cid:23)@
@
P
x(cid:21) C
P
@
@2f(cid:23)
x(cid:22)@
P
x(cid:21) D
P
0:
(5.30)
Recalling the identity (5.17), we may also write (since the metric carries no group indices)
@2f(cid:23)
x(cid:21) D
x(cid:22)@
P
P
and so the most general form of fa(cid:22) is
@2f(cid:22)
x(cid:23)@
P
x(cid:21) (cid:0)
P
@
@
0
(cid:0)!
@2f(cid:22)
x(cid:23)@
@
P
x(cid:21) D
P
0;
where (5.30) requires that fa(cid:22)(cid:23)
0. Finally, applying (5.29) leads to
fa(cid:22)
fa(cid:22)(cid:23).(cid:28); x/
x(cid:23)
P
C
ga(cid:22).(cid:28); x/;
(5.31)
D
fa(cid:23)(cid:22)
C
(cid:21)
D
D(cid:23)f(cid:22)
D
D
1
2
D
D(cid:28)
(cid:20) @f(cid:22)
x(cid:23) (cid:0)
@
P
@f(cid:23)
x(cid:22)
@
P
D(cid:22)f(cid:23)
(cid:0)
1
2
D
D(cid:28)
.D(cid:28)
(cid:140)fa(cid:22)(cid:23)
fa(cid:23)(cid:22)(cid:141)
(cid:0)
D(cid:23)fa(cid:22)(cid:21)
x(cid:21)
P
C
D(cid:23)ga(cid:22)
(cid:0)
D(cid:22)fa(cid:23)(cid:21)
x(cid:21)
P
C
D(cid:22)ga(cid:23)
x(cid:21)D(cid:21)/fa(cid:22)(cid:23)
C P
x(cid:21).D(cid:23)fa(cid:22)(cid:21)
D P
D(cid:22)fa(cid:23)(cid:21)/
D(cid:23)ga(cid:22)
D(cid:22)ga(cid:23)
(cid:0)
C
(cid:0)
5.2. CLASSICAL NON-ABELIAN GAUGE THEORY 109
and since
x(cid:22) is arbitrary, we obtain
P
D(cid:21)fa(cid:22)(cid:23)
C
D(cid:28) fa(cid:22)(cid:23)
D(cid:22)fa(cid:23)(cid:21)
C
D(cid:22)ga(cid:23)
(cid:0)
C
D(cid:23)fa(cid:21)(cid:22)
D(cid:23)ga(cid:22)
0
0
D
D
for the fields fa(cid:22)(cid:23) and ga(cid:22). Now, in analogy to the Abelian case, we may write
1
2
and applying the Euler-Lagrange equations, we obtain
Aa(cid:22).(cid:28); x/I a.(cid:28)/
x(cid:22)
P
x(cid:22)
P
M
C
D
L
x(cid:22)
P
(cid:30)a.(cid:28); x/I a.(cid:28)/
C
d
d (cid:28)
(cid:2)m
x(cid:22)
P
C
Aa(cid:22)I a(cid:3)
D
@
@x(cid:22) (cid:140)Aa(cid:23)I a
@Aa(cid:23)
@x(cid:22) P
x(cid:23)I a
x(cid:23)
P
C
(cid:30)aI a(cid:141)
@(cid:30)a
@x(cid:22) I a:
C
C
M
I a
x(cid:22)
R
@Aa(cid:22)
@x(cid:23) P
@Aa(cid:22)
@(cid:28)
C
Rearranging terms and using (5.26) to express P
x(cid:23)
@x(cid:22) P
@Aa(cid:22)
@x(cid:23) P
x(cid:23)(cid:19) I a
(cid:20)(cid:18) @Aa(cid:23)
Aa(cid:22) P
x(cid:23)I a
x(cid:22)
R
M
D
C
(cid:0)
(cid:0)
I a
Aa(cid:22) P
D
I a, we find
I a(cid:21)
@(cid:30)a
@x(cid:22) I a
(cid:0)
@Aa(cid:22)
@(cid:28)
I a
C
(cid:20)(cid:18) @Aa(cid:23)
x(cid:23)
@x(cid:22) P
D
(cid:0)
@Aa(cid:22)
@x(cid:23) P
x(cid:23)(cid:19) I a
(cid:0)
Aa(cid:22)f ab
c
.Ab(cid:23)
(cid:30)b/I c(cid:21)
x(cid:23)
P
C
@(cid:30)a
@x(cid:22) I a
(cid:0)
@Aa(cid:22)
@(cid:28)
I a
C
(cid:20) @Aa(cid:23)
D
@x(cid:22) (cid:0)
@Aa(cid:22)
@x(cid:23) C
f bc
a Ab(cid:22)Ac(cid:23)
(cid:21)
x(cid:23)I a
P
(cid:20) @(cid:30)a
@Aa(cid:22)
C
@x(cid:22) (cid:0)
@(cid:28) C
f bc
a Ab(cid:22)(cid:30)c
(cid:21) I a:
Comparing this with (5.31), we may express the field strengths in terms of the potentials as
f(cid:22)(cid:23)
(cid:20) @Aa(cid:23)
D
@x(cid:22) (cid:0)
@Aa(cid:22)
@x(cid:23) C
f bc
a Ab(cid:22)Ac(cid:23)
g(cid:22)
(cid:20) @(cid:30)a
@Aa(cid:22)
D
@x(cid:22) (cid:0)
@(cid:28) C
f bc
a Ab(cid:22)(cid:30)c
(cid:21)
x(cid:23)I a
P
(cid:21) I a;
from which it follows that the field equations are satisfied. Introducing the definitions
xD
(cid:28)
D
@(cid:28)
@D
D
f(cid:22)D
fD(cid:22)
g(cid:22);
D
D (cid:0)
the field equations and Lorentz force assume the form
where
M
x(cid:22)
R
f (cid:22)(cid:23)
a
x(cid:23)I a
P
D
C
@(cid:11)f(cid:12)(cid:13)
@(cid:12) f(cid:13)(cid:11)
@(cid:13) f(cid:11)(cid:12)
0
C
a I a
g(cid:22)
C
a I a
f (cid:22)(cid:23)
D
x(cid:23)
P
C
D
f (cid:22)
a DI a
xD
P
x(cid:12) ;
f (cid:22)
a(cid:12) P
D
f(cid:11)(cid:12)
(cid:20) @Aa(cid:12)
D
@x(cid:11) (cid:0)
@Aa(cid:11)
@x(cid:12) C
f bc
a Ab(cid:11)Ac(cid:12)
(cid:21) I a
recovers the usual relationship of the field strength tensor to the non-Abelian potential.
110
5. ADVANCED TOPICS
5.3
EVOLUTION OF THE LOCAL METRIC IN CURVED
SPACETIME
General relativity has been summarized as: “Space acts on matter, telling it how to move. In
turn, matter reacts back on space, telling it how to curve.” [11] The action of space on matter
is expressed in equations of motion describing geodesic evolution with respect to a local metric
g(cid:22)(cid:23).x/. Such equations were found from a Lagrangian in (3.6) and from canonical commuta-
tion relations in (5.7). They can also be described in a Hamiltonian formulation on the phase
space of position and momentum, an approach amenable to the canonical quantum dynamics
for general relativity developed in [12, 13]. To express the action of matter on space, we look
to Einstein equations that relate the local metric to sources of mass and energy, which evolve
dynamically with (cid:28). We therefore consider a (cid:28)-dependent metric that may also evolve along with
its sources. One possible approach, proposed by Pitts and Schieve [14, 15], is to develop general
relativity on the 5D manifold .x(cid:22); (cid:28)/, introducing an ADM-type foliation with (cid:28) as a preferred
time direction. In the approach followed here, we adhere to the restriction imposed in SHP
electrodynamics, maintaining the role of (cid:28) as external, non-dynamical parameter throughout.
General relativity treats the interval between a pair of instantaneously displaced points in
spacetime
(cid:14)x2
D
g(cid:22)(cid:23)(cid:14)x(cid:22)(cid:14)x(cid:23)
.x2
(cid:0)
D
x1/2
as an invariance of the manifold. To transform geometry into dynamics, a particle trajectory
maps an arbitrary parameter (cid:16) to a continuous sequence of events x(cid:22).(cid:16)/ in the manifold. For any
timelike path we may put (cid:16)
proper time, and although the path consists of instantaneous
displacements in a 4D block universe, “motion” is observed through changes in x0.s/ with proper
time. Treating the sequence as a function, the invariant interval can be written
D
D
s
(cid:14)x2
D
g(cid:22)(cid:23)(cid:14)x(cid:22)(cid:14)x(cid:23)
g(cid:22)(cid:23)
D
dx(cid:22)
ds
dx(cid:23)
ds
(cid:14)s2
suggesting a dynamical description of the path by the action
Z ds
S
D
1
2
g(cid:22)(cid:23)
dx(cid:22)
ds
dx(cid:23)
ds
x2
P
D (cid:0)
which removes the constraint
c2 associated with the usual square root form.
A physical event x(cid:22).(cid:28)/ in SHP theory occurs at time (cid:28) and chronologically precedes
events occurring at subsequent times. The physical picture that emerges in SHP electrodynamics
can thus be understood as describing the evolution of a Maxwell–Einstein 4D block universe
defined at time (cid:28) to an infinitesimally close 4D block universe defined at (cid:28)
0,
evolution slows to zero, recovering Maxwell theory as an equilibrium limit. The form of the
gauge fields draws our attention to idea that while geometric relations on spacetime, such as
O(3,1) invariance, are defined within a given block universe, the dynamics operate through the
d (cid:28). As c5
!
C
5.3. EVOLUTION OF THE LOCAL METRIC IN CURVED SPACETIME 111
(cid:28)-dependent gauge interaction, and in this sense are defined in the transition from one 4D block
manifold to another. We therefore consider the interval
between an event x(cid:22).(cid:28)/ and an event
a subsequent time, and expand as
dx(cid:22)
x(cid:22).(cid:28)
(cid:14)(cid:28)/
x(cid:22).(cid:28)/
D N
x(cid:22).(cid:28)
N
C
(cid:0)
C
(cid:14)(cid:28)/ occurring at a displaced spacetime location at
dx2
g(cid:22)(cid:23)(cid:14)x(cid:22)(cid:14)x(cid:23)
g5(cid:23)(cid:14)x(cid:23)(cid:14)x5
g55(cid:14)x5(cid:14)x5
g(cid:11)(cid:12) .x; (cid:28) / (cid:14)x(cid:11)(cid:14)x(cid:12)
D
C
referred to the coordinates of x. This interval contains both the geometrical distance (cid:14)x(cid:22) between
two neighboring points in one manifold, and the dynamical distance (cid:14)x5
c5(cid:14)(cid:28) between events
occurring at two sequential times. This leads to the Lagrangian
D
C
D
L
D
1
2
Mg(cid:11)(cid:12) .x; (cid:28)/
x(cid:11)
P
x(cid:12)
P
and equations of motion
x(cid:22)
D
P
D(cid:28) D R
x(cid:22)
0
D
(cid:128) (cid:22)
x(cid:11)
(cid:11)(cid:12) P
x(cid:12)
P
C
x5
P
D
x5
D(cid:28) D R
0
D
(cid:128) 5
x(cid:11)
(cid:11)(cid:12) P
x(cid:12) ;
P
C
where (cid:128) (cid:13)
we do not treat x5.(cid:28)/
quantities, not elements of a 5D tensor. This symmetry breaking of 5D
through the prescription
(cid:11)(cid:12) is the standard Christoffel symbol in 5D. But as in the electrodynamic Lagrangian,
c5(cid:28) as a dynamical variable, and take the 5-index to denote scalar
4D+1 is expressed
(cid:0)!
(cid:17)
(cid:128) (cid:22)
5(cid:11) D
1
2
g(cid:22)(cid:23) .@5g(cid:23)(cid:11)
@(cid:11)g(cid:23)5
(cid:0)
C
@(cid:23)g(cid:11)5/
(cid:128) 5
(cid:11)(cid:12) (cid:17)
0
(5.32)
which extends the geodesic Equations (3.6) and (5.7) to 5D.
We define n.x; (cid:28)/ to be the number of events (non-thermodynamic dust) per spacetime
volume, so that
is the 5-component event current, and
j (cid:11) .x; (cid:28) /
(cid:26).x; (cid:28)/
x(cid:11).(cid:28)/
P
D
D
M n.x; (cid:28)/
x(cid:11).(cid:28)/
P
(cid:11)j (cid:11)
r
D
@j (cid:11)
@x(cid:11) C
j (cid:13) (cid:128) (cid:11)
(cid:13)(cid:11) D
@(cid:26)
@(cid:28) C r
(cid:22)j (cid:22)
0
D
is the continuity equation. Generalizing the 4D stress-energy-momentum tensor to 5D, the
mass-energy-momentum tensor [16, 17] is
T (cid:11)(cid:12)
Mn
x(cid:11)
P
x(cid:12)
P
D
(cid:26)
x(cid:11)
P
x(cid:12)
P
D
(cid:0)!
( T (cid:22)(cid:23)
T 5(cid:12)
Mn
D
x5
D P
x(cid:23)
x(cid:22)
P
P
x(cid:12) (cid:26)
P
D
x(cid:23)
P
(cid:26)
x(cid:22)
D
P
c5j (cid:12)
112
5. ADVANCED TOPICS
combining T (cid:22)(cid:23) with j (cid:11), and is conserved by virtue of the continuity and geodesic equations.
The Einstein equations are similarly extended to
G(cid:11)(cid:12)
R(cid:11)(cid:12)
D
1
2
(cid:0)
Rg(cid:11)(cid:12)
8(cid:25)G
c4 T(cid:11)(cid:12) ;
D
where the Ricci tensor R(cid:11)(cid:12) and scalar R are obtained by contracting indices of the 5D curva-
ture tensor R(cid:14)
(cid:13)(cid:11)(cid:12) . Since conservation of T (cid:11)(cid:12) depends on prescription (5.32), we must similarly
suppress (cid:128) 5
0. Working through the
(cid:11)(cid:12) when constructing the Ricci tensor to insure
4D
r(cid:12) G(cid:11)(cid:12)
D
algebra we find that R(cid:22)(cid:23)
(cid:0)R(cid:22)(cid:23)(cid:1)
and obtain
D
R(cid:22)5
R55
1
c5
1
c5
D
D
@(cid:28) (cid:128) (cid:21)
(cid:22)(cid:21) (cid:0)
@(cid:21)(cid:128) (cid:21)
(cid:22)5 C
(cid:27)5(cid:128) (cid:27)
(cid:128) (cid:21)
(cid:22)(cid:21) (cid:0)
(cid:27)(cid:21)(cid:128) (cid:27)
(cid:128) (cid:21)
(cid:22)5
@(cid:28) (cid:128) (cid:21)
5(cid:21) (cid:0)
@(cid:21)(cid:128) (cid:21)
55 C
(cid:27)5(cid:128) (cid:27)
(cid:128) (cid:21)
5(cid:21) (cid:0)
(cid:27)(cid:21)(cid:128) (cid:27)
(cid:128) (cid:21)
55
as new components.
The weak field approximation [11] is generalized to 5D as
g(cid:11)(cid:12)
(cid:17)(cid:11)(cid:12)
h(cid:11)(cid:12)
C
D
(cid:0)!
@(cid:13) g(cid:11)(cid:12)
D
@(cid:13) h(cid:11)(cid:12)
2
(cid:0)h(cid:11)(cid:12) (cid:1)
0
(cid:25)
leading to
R(cid:11)(cid:12)
1
2
Defining Nh(cid:11)(cid:12)
’
(cid:16)@(cid:12) @(cid:13) h(cid:13)
h(cid:11)(cid:12)
D
@(cid:11)@(cid:13) h(cid:13)
@(cid:13) @(cid:13) h(cid:11)(cid:12)
@(cid:11)@(cid:12) h(cid:17)
(cid:12) (cid:0)
(cid:11) C
1
2 (cid:17)(cid:11)(cid:12) h, the Einstein equations become
(cid:0)
(cid:0)
16(cid:25)G
c4 T(cid:11)(cid:12)
@(cid:12) @(cid:13) Nh(cid:13)
(cid:11) C
@(cid:11)@(cid:13) Nh(cid:13)
(cid:12) (cid:0)
D
@(cid:13) @(cid:13) Nh(cid:11)(cid:12)
@(cid:11)@(cid:12) Nh
(cid:0)
R
(cid:17)(cid:11)(cid:12) R(cid:11)(cid:12)
’
(cid:17)(cid:11)(cid:12) h(cid:11)(cid:12) :
h
’
which take the form of a wave equation
16(cid:25)G
c4 T(cid:11)(cid:12)
@(cid:13) @(cid:13) Nh(cid:11)(cid:12)
D (cid:0)
D (cid:0)
(cid:18)@(cid:22)@(cid:22)
(cid:17)55
c2
5
C
(cid:19)
@2
(cid:28)
Nh(cid:11)(cid:12)
after imposing the gauge condition @(cid:21) Nh(cid:11)(cid:21)
for this equation leads to
D
0. Using the Green’s function GMaxwell from (3.24)
Nh(cid:11)(cid:12) .x; (cid:28) /
D
4G
c4
Z d 3x0
T(cid:11)(cid:12) (cid:16)t
(cid:0)
x
j
x
j
(cid:0)
x0jc
x0j
(cid:0)
; x0; (cid:28) (cid:17)
relating the field Nh(cid:11)(cid:12) .x; (cid:28) / to the source T(cid:11)(cid:12) .x; (cid:28) /. In analogy to the Coulomb problem, we
take a point source X
.cT .(cid:28)/; 0/ in a co-moving frame, with
D
T 00
D
mc2
T 2(cid:14)3 .x/ ’ .t
P
(cid:0)
T .(cid:28) //
T (cid:11)i
0
D
T 55
c2
c2 T 00;
5
D
5.3. EVOLUTION OF THE LOCAL METRIC IN CURVED SPACETIME 113
where ’.(cid:28)/ is the smoothing function (3.15). Writing M
m ’ .t
(cid:0)
D
T .(cid:28)// produces
Nh00 .x; (cid:28) /
4GM
T 2
c2R P
D
Nh(cid:11)i .x; (cid:28) /
0
D
Nh55 .x; (cid:28) /
2 (cid:17)(cid:11)(cid:12) Nh and neglecting c2
so using h(cid:11)(cid:12)
(cid:17)(cid:11)(cid:12) h(cid:12)(cid:13) the non-zero Christoffel symbols are
D Nh(cid:11)(cid:12)
(cid:0)
1
5 =c2
1, we see that h00
(cid:28)
c2
c2 Nh00
5
D
D Nh00. Since g(cid:11)(cid:12) h(cid:12)(cid:13)
’
1
(cid:128) (cid:22)
00 D (cid:0)
2
1
(cid:128) (cid:22)
50 D
2c5
(cid:17)(cid:22)(cid:23)@(cid:23)h00
(cid:17)(cid:22)0@(cid:28) h00
1
2
(cid:128) (cid:22)
0i D
(cid:128) (cid:22)
55 D (cid:0)
(cid:17)(cid:22)(cid:23)@i h(cid:23)0
1
2
(cid:17)(cid:22)(cid:23)@(cid:23)h55
so the equations of motion split into
.@(cid:28) h00/
t
R
D
t
P
x
C P
(cid:1)
.
r
h00/
t 2
P
c2
2
x
R
D
h00/
.
r
t 2:
P
In spherical coordinates, putting (cid:18)
D
(cid:25)=2, the angular and radial equations are
2
R
P
(cid:30)
P
R
(cid:30)
R
C
D
0
(cid:30)
(cid:0)! P
D
L
R
MR2 (cid:0)! R
L2
(cid:0)
M 2R3 D (cid:0)
GM
t 2
R2 P
T 2
P
and the t equation is
2G@(cid:28) M
t
c2R P
C
4GM
T
c2R P
T
R
t
P
(cid:0)
2GM
R
R2c2 P
T 2
P
(cid:25)
2GM
c2R
(cid:18)1
C
(cid:19)
(cid:11) .(cid:28)/
2
t
R
D
(cid:11) .(cid:28) /
P
t;
P
where we neglect P
R=c
T
P
1
C
D
(cid:25)
(cid:11) .(cid:28) /
0 and @(cid:28) ’
0 (taking (cid:21) large), and define
(cid:25)
T 2
2 (cid:0)! P
1
C
’
(cid:11) .(cid:28) /
T
(cid:0)! P
T
R
’
(cid:18)1
(cid:11) .(cid:28) /
2
(cid:19) P
(cid:11) .(cid:28) /
2
:
C
In the Newtonian case, (cid:11)
0
1, but this t equation has the solution
D
exp (cid:20) 2GM
c2R
t
P
D
t
(cid:0)! P
(cid:18)(cid:11)
C
D
1
(cid:11)2(cid:19)(cid:21)
4
t 2
(cid:0)! P
T 2
P
1
C
’
(cid:18)1
1
2
C
2GM
c2R
(cid:19) (cid:11)
which, since 2GM=c2R
1, leads finally to the radial equation in the form
l 2
M 2R2 (cid:0)
GM
R
(cid:18)1
1
2
C
(cid:11) .(cid:28) /(cid:19)(cid:27)
dK
d (cid:28) D (cid:0)
GM
2R
d
d (cid:28)
D
(cid:11) .(cid:28)/ :
(cid:28)
1
2
d
d (cid:28)
(cid:26) 1
R2
2 P
C
We recognize K on the LHS as the Hamiltonian of the particle moving in this local metric.
The mass fluctuation of the point source is seen to induce a fluctuation in the mass of a distant
particle through the field g(cid:11)(cid:12) .x; (cid:28)/, producing a small modification of Newtonian gravity.
114
5. ADVANCED TOPICS
Interactions in SHP electrodynamics form an integrable system in which event evolution
generates an instantaneous current defined over spacetime at (cid:28), and in turn, these currents induce
(cid:28)-dependent fields that act on other events at (cid:28). We expect that in a similar way, a fully developed
SHP formulation of general relativity will describe how the instantaneous distribution of mass
at (cid:28) expressed through T(cid:11)(cid:12) .x; (cid:28)/ induces the local metric g(cid:11)(cid:12) .x; (cid:28)/, which, in turn, determines
geodesic equations of motion for any particular event at x(cid:22).(cid:28)/.
ZEEMAN AND STARK EFFECTS
5.4
As discussed in Section 2.4, reasonable solutions to the relativistic central force problem are
obtained in a restricted Minkowski space (RMS) with fixed unit vector n(cid:22)
RMS.n/
x
(cid:8)x
(cid:140)x
.x
n/n(cid:141)2
0(cid:9)
(cid:1)
2
(cid:0)
(cid:21)
D
j
invariant under O(2,1) but not general Lorentz transformations. Because quantum states are
classified by their symmetry representations, Horwitz and Arshansky [18–20] generalized their
solutions to the quantum central force problem to an induced representation of O(3,1). Studying
the Lorentz transformations on n(cid:22) and the RMS(n), they found the generators h(cid:22)(cid:23) of O(3,1) for
the combined space, formed a maximally commuting set of operators, and solved for eigenstates
of these operators. The energy levels of these degenerate quantum states split in a constant elec-
tromagnetic field—the Zeeman and Stark effects. To couple the electromagnetic field to these
states, we construct a gauge theory for the induced representation in its classical form [21, 22].
.0; 0; 0; 1/ so that the parameterization (2.5) describes RMS.n(cid:14)/. Given
We denote n(cid:14)
the Lorentz transformation n(cid:14)
L.n/ n it follows that
D
D
RMS.n(cid:22)/
and
x
2
L.n/ x
y
D
and so we may characterize the full spacelike region x
transformation (cid:131) acts as n
(cid:131) n and x
n0 D
!
(cid:131)L.n/T y
D
x0
D
(cid:131) x
D
y
RMS.n(cid:14)(cid:22)/
H)
2
LT .n/y by (cid:16)
(cid:131) x , it follows that
D
D
x0 D
L.(cid:131)n/T L.(cid:131)n/ (cid:131) L.n/T y
!
L.n0/T y0:
D
.n; y/. Since a Lorentz
Thus, y transforms under the O(2,1) little group defined through
y
y0
and since D(cid:0)
(cid:28)-dependent, but one can show that since d=d (cid:28) is Lorentz-invariant and commutes with (cid:131)
D(cid:0)
D
n(cid:14), the little group preserves RMS(n(cid:14)). We have taken L.n.(cid:28)// to be
!
1.(cid:131); n/n(cid:14)
D(cid:0)
D
D
L.(cid:131)n/ (cid:131) L.n/T
1.(cid:131); n/ y
1.(cid:131); n/
D
is form-invariant. Representing the Lorentz transform (cid:131)
D
D(cid:0)
1.(cid:131); n/
D(cid:0)
1.(cid:131); n/ y
y
P
C P
.
y/0
P
(cid:131)
1
C
D
1
2
!(cid:22)(cid:23) M(cid:22)(cid:23)
o.!2/
C
d
d (cid:28)
(cid:140)D(cid:0)
1.(cid:131); n/ y(cid:141)
x
x0 as
!
W
.M(cid:22)(cid:23)/(cid:27)(cid:21)
(cid:17)(cid:27)(cid:22)(cid:17)(cid:21)(cid:23)
(cid:17)(cid:27)(cid:23)(cid:17)(cid:21)(cid:22)
(cid:0)
D
5.4. ZEEMAN AND STARK EFFECTS 115
.n0; y0/ can be represented as
(cid:131)
the Lorentz transform N
(cid:16)
W
D
.n; y/
!
and the generators are found as
1
(cid:131)
N
D
(cid:16)0 D
1
2
C
!(cid:22)(cid:23) X (cid:22)(cid:23)
o.!2/
C
X(cid:22)(cid:23)
D (cid:0)
(cid:0)xT M(cid:22)(cid:23)
x
r
C
nT M(cid:22)(cid:23)
n(cid:1)
r
D (cid:0)
(cid:0)yT LM(cid:22)(cid:23)LT
y
r
C
nT M(cid:22)(cid:23)D(cid:1) ;
where we introduce
L.@=@n(cid:22)/LT
S(cid:22)
D
D(cid:22)
.
D
n/(cid:22)
r
C
yT S(cid:22)
y :
r
It is easily shown that for a function of x alone (even as n varies with (cid:28)) D(cid:22) acts as a kind of
D(cid:22)f .L.n/T y/
covariant derivative with D(cid:22)f .n; y/
(cid:17)
(cid:16) ; P(cid:16)
g
f
As a classical Lagrangian on the phase space
0.
we put
D
L
1
2
1
2
M (cid:0)
x2
P
M h.
y
P
D
D
(cid:26)2
n2(cid:1)
C
P
n(cid:27) S(cid:27) y/2
C
C P
e .
A.x/
(cid:31).n//
V .x2/
x
P
(cid:1)
(cid:26)2
n2i
P
C
C
n
C P
(cid:1)
e h.
y
P
(cid:0)
n(cid:27) S(cid:27) y/
C P
A.n/.y/
(cid:31).n/i
n
C P
(cid:1)
(cid:0)
V .x2/;
(cid:1)
where (cid:26) is a length scale required because n is a unit vector, A.n/
little group, and we used
D
LA transforms under the
x
P
D
LT
y
P
LT y
C P
D
LT (cid:16)
y
P
L
LT y(cid:17)
P
LT .
y
P
D
C
n(cid:27) S(cid:27) y/ :
C P
This L is scalar and represents a generalized Maxwell electrodynamics including n as a new
dynamical degree of freedom. The conjugate momenta are found to be
p(cid:22)
(cid:25)(cid:22)
D
D
@L
y(cid:22) D
@
P
@L
n(cid:22) D
@
P
M (cid:16)
y(cid:22)
P
M (cid:0)(cid:26)2
n(cid:27) S(cid:27) y(cid:22)
C P
C
eA.n/(cid:17)
n(cid:22)
P
yT S(cid:22)p
e(cid:31)(cid:1)
C
(cid:0)
having used the antisymmetry of the matrices S(cid:22). The Hamiltonian is obtained from the La-
grangian as
K
y
D P
(cid:1)
p
n
C P
(cid:1)
(cid:25)
(cid:0)
L
D
1
2M
(cid:16)p
(cid:0)
2
eA.n/(cid:17)
1
2Md 2 .P
(cid:0)
C
e(cid:31)/2
V;
C
where P
(cid:31)
tion produced by a Lorentz transformation (cid:14)(cid:16)
yT S(cid:27) p . Taking A.n/
(cid:25)(cid:27)
C
D
D
D
0 and applying Noether’s theorem to the varia-
2 !(cid:22)(cid:23)X(cid:22)(cid:23) (cid:16) we obtain the conserved quantities
1
h(cid:22)(cid:23)
D
p(cid:26)X(cid:22)(cid:23)y(cid:26)
C
(cid:25) (cid:26)X(cid:22)(cid:23)n(cid:26)
D
D
yT (cid:2)L.n/M(cid:22)(cid:23)LT (cid:3) p
nT M(cid:22)(cid:23)P
C
116
5. ADVANCED TOPICS
which satisfy Poisson brackets (cid:8)h(cid:22)(cid:23); K(cid:9)
0.
D
Now interpreting p and (cid:25) as quantum operators, so that
p(cid:22)
@
@y(cid:22)
i
(cid:132)
D
(cid:25)(cid:22)
@
@n(cid:22)
i
(cid:132)
D
P(cid:22)
D(cid:22)
i
(cid:132)
D
the h(cid:22)(cid:23) are precisely the Lorentz generators found by Horwitz and Arshansky for solutions
.x; (cid:28)/ to the Stueckelberg–Schrodinger equation i@(cid:28)
0. This
system is invariant under U.1/ gauge transformations
K and satisfy (cid:2)h(cid:22)(cid:23); K(cid:3)
D
D
ie(cid:130).(cid:16)/=
e(cid:0)
(cid:132)
!
A.n/
(cid:22) (cid:0)!
A.n/
(cid:22) C
@
@y(cid:22) (cid:130)
(cid:31)(cid:22)
(cid:31)(cid:22)
C
(cid:0)!
D(cid:22)(cid:130):
For interactions cyclic in n, we may put
RMS.n/ with fixed n, so the classical and quantum dynamics reduce to
n
P
D
0 for the classical system which remains within
L
D
L0
D
1
2
M
y2
P
(cid:0)
V
K
D
K0
D (cid:0)
2
(cid:132)
2M
@
@y(cid:22)
@
@y(cid:22) C
V
and quantum wavefunctions satisfy D(cid:22)
in perturbation theory by expressing a constant field strength F (cid:22)(cid:23) as
D
0. The Zeeman and Stark effects are thus obtained
A(cid:22).x/
1
2
D (cid:0)
F (cid:22)(cid:23)x(cid:23)
(cid:31)(cid:22).n/
D
(cid:26) A(cid:22).n/
D (cid:0)
(cid:26)
2
F (cid:23)
(cid:27) n(cid:27)
and writing
D
for the potential in RMS(n(cid:14)). To first order in e, the Hamiltonian is just
D (cid:0)
A.n/
(cid:22) .y/
L(cid:22)(cid:23)A(cid:23).LT y/
.LF LT y/(cid:22)
1
2
F(cid:22)(cid:23)X (cid:22)(cid:23)
K
D
K0
e
4M
C
4m F(cid:11)(cid:12) X (cid:11)(cid:12)
so that the Zeeman effect follows from e
2m B kLk splitting the en-
ergy levels along the diagonal component Lk of angular momentum. For the Stark effect, we put
e
2m EkAk, where Ak is a boost, and to reproduce the phenomenol-
4m F(cid:11)(cid:12) X (cid:11)(cid:12)
e(cid:15)(cid:22)x(cid:22), hinting
ogy we must include an additional scalar potential V
at the 5D gauge theory.
A5, where A5
2m F0i X 0i
4m Fij X ij
D (cid:0)
!
!
!
D
C
D
V
e
e
e
e
5.5
CLASSICAL MECHANICS AND QUANTUM FIELD
THEORY
Although quantum field theory differs from classical mechanics in both methodology and re-
sults, classical SHP electrodynamics presents a number of interesting qualitative implications
for QED.
5.5. CLASSICAL MECHANICS AND QUANTUM FIELD THEORY 117
D
As seen in Sections 1.3 and 3.1, the Stueckelberg–Schrodinger equation is first-order in
(cid:28), and the Hamiltonian operator is a Lorentz scalar, so that manifest covariance is preserved
throughout the second quantization procedure. In constructing canonical momenta, the kinetic
term for the fields f (cid:11)(cid:12)
(cid:136) f(cid:11)(cid:12) formed from the cross derivatives of a(cid:11) leads to momentum fields
(cid:25)(cid:22)
@(cid:28) a(cid:22) but no (cid:25)5 component, because @(cid:28) a5 does not appear in f(cid:11)(cid:12) . In Dirac quantization
for gauge theories [23], one inserts a momentum (cid:25)5 conjugate to a5 and a Lagrange multiplier
0. The secondary constraint (that the primary constraint
to enforce the primary constraint (cid:25)5
commutes with the Hamiltonian) leads to the Gauss law @(cid:22)f 5(cid:22)
.ec5=c/j 5. But because this
system is first-order, one may apply the Jackiw quantization scheme [24], in which we first
eliminate the constraint from the Lagrangian by solving the Gauss law, and then construct
the Hamiltonian from the unconstrained degrees of freedom, which are the matter fields and
the transverse electromagnetic modes. Since the momentum modes are not constrained to be
lightlike, as we saw for plane waves in Section 4.4, there can be three transverse polarization
modes. The resulting system is amenable to both canonical and path integral quantization.
D
D
In canonical quantization, one finds the propagator G.x; (cid:28)/ for the matter fields as the
vacuum expectation value of (cid:28)-ordered operator products (equivalent to a Fourier transform of
the momentum representation with a Feynman contour). The propagator enforces (cid:28)-retarded
causality, with G.x; (cid:28)/
0 for (cid:28) < 0, so that SHP quantum field theory is free of matter loops.
Extracting the propagator for a sharp mass eigenvalue recovers the Feynman propagator for the
Klein–Gordon equation.
D
As in classical mechanics, quantum systems evolve as (cid:28) increases, with advance or retreat
of x0 treated on an equal footing. Perturbation theory is constructed in an interaction picture
obtained by a unitary transformation constructed from the scalar interaction Hamiltonian and
(cid:28). As a result, this method has been shown [25] to circumvent the Haag no-go theorem [26],
summarized as, “Haag’s theorem is very inconvenient; it means that the interaction picture exists
only if there is no interaction.” [27]
As seen in Section 4.7, particles interacting through the electromagnetic field can ex-
change mass. The treatment of Moller scattering leads to a cross-section identical to the standard
QED result for spinless particles when mass exchange is absent. When mass is exchanged, the
usual pole in the cross-section at 0o splits into a zero and two poles close to but away from the
forward beam axis, providing a small experimental signature (and one very difficult to observe).
Because there are no matter loops in this theory, the problem of renormalization reduces
to treatment of photon loops in the matter field (gauge and vertex factors become unity by the
Ward identities). Mass renormalization can be absorbed into the first order mass term (cid:3)i@(cid:28)
in the quantum Lagrangian. To remove singularities from the loop contributions to the matter
propagator in standard QED, some regularization scheme is required. However in SHP QED,
the field interaction kernel (3.11) places a multiplicative factor h1
in the photon
propagator. This factor acts as a mass cut-off rendering the theory superrenormalizable. Un-
like a momentum cut-off, this factor leaves the Lorentz and gauge symmetries of the original
1
.(cid:24)(cid:21)(cid:20)/2i(cid:0)
C
118
5. ADVANCED TOPICS
theory unaffected, recalling Schwinger’s motivation for his “proper time method” discussed in
Section 1.3.
5.6
BIBLIOGRAPHY
[1] Dyson, F. J. 1990. American Journal of Physics, 58:209–211. https://doi.org/10.1119/
1.16188 97
[2] Tanimura, S. 1992. Annals of Physics, 220:229–247. http://www.sciencedirect.com/
science/article/pii/000349169290362P 97, 107
[3] Hojman, S. A. and Shepley, L. C. 1991. Journal of Mathematical Physics, 32:142–146. ht
tps://doi.org/10.1063/1.529507 97, 101
[4] Land, M., Shnerb, N., and Horwitz, L. 1995. Journal of Mathematical Physics, 36:3263. 97
[5] Santilli, R. M. 1990. Foundations of Theoretical Mechanics I, Springer-Verlag. 102
[6] Helmholtz, H. 1887. Journal für die Reine Angewandte Mathematik, 100:137. 102
[7] Darboux, G. 1894. Leçons sur la Théory Générale des Surfaces, 3, Gauthier-Villars. 102
[8] Dedecker, P. 1950. Bulletin de l’Académie Royale des Sciences de Belgique Classe des Sciences,
36:63. 102
[9] Wong, S. K. 1970. Nuovo Cimento, 65A:689. 106
[10] Lee, C. R. 1950. Physics Letters, 148A:36. 107
[11] Misner, C. W., Thorne, K. S., and Wheeler, J. A. 1973. Gravitation, W.H. Freeman and
Co., San Francisco, CA. 110, 112
[12] Horwitz, L. P. 2019. Journal of Physics: Conference Series, 1239. https://doi.org/10.
1088%2F1742-6596%2F1239%2F1%2F012014 110
[13] Horwitz, L. P. 2019. The European Physical Journal Plus, 134:313. https://doi.org/10.
1140/epjp/i2019-12689-7 110
[14] Pitts, J. B. and Schieve, W. C. 1998. Foundations of Physics, 28:1417–1424. https://do
i.org/10.1023/A:1018801126703 110
[15] Pitts, J. B. and Schieve, W. C. 2001. Foundations of Physics, 31:1083–1104. https://do
i.org/10.1023/A:1017578424131 110
[16] Saad, D., Horwitz, L., and Arshansky, R. 1989. Foundations of Physics, 19:1125–1149. 111
[17] Land, M. 2019. Journal of Physics: Conference Series, 1239. https://doi.org/10.1088%
2F1742-6596%2F1239%2F1%2F012005 111
5.6. BIBLIOGRAPHY 119
[18] Arshansky, R. and Horwitz, L. 1989. Journal of Mathematical Physics, 30:66. 114
[19] Arshansky, R. and Horwitz, L. 1989. Journal of Mathematical Physics, 30:380.
[20] Horwitz, L. P. 2015. Relativistic Quantum Mechanics, Springer, Dordrecht, Netherlands.
114
[21] Land, M. and Horwitz, L. 1995. Jounal of Physics A: Mathematical and General, 28:3289–
3304. 114
[22] Land, M. and Horwitz, L. 2001. Foundations of Physics, 31:967–991. 114
[23] Dirac, P. 1964. Lectures on Quantum Mechanics, Yeshiva University, New York. 117
[24] Jackiw, R. 1993. https://arxiv.org/pdf/hep-th/9306075.pdf 117
[25] Seidewitz, E. 2017. Foundations of Physics, 47:355–374. 117
[26] Haag, R.
1955. Kong. Dan. Vid.
Sel. Mat. Fys. Med.,
29N12:1–37.
Philosophical Magazine Series, 746, 376. 117
[27] Streater, R. F. and Wightman, A. S. 1964. PCT, Spin, Statistics, and All That, Princeton
University Press. 117
Authors’ Biographies
121
MARTIN LAND
Martin Land was born in Brooklyn in 1953. He grew up in the
New York City area, strongly influenced by his mother, a social
worker who worked with Holocaust survivors, and his father, a
second-generation engineer in small manufacturing businesses
associated with the garment industry. In his school years he
cleaned swimming pools and stables, worked as a carpenter on
a construction site, and expedited orders in the garment center.
In 1972, he entered Reed College in Portland, Oregon, where
he received a Kroll Fellowship for original research which per-
mitted him to devote an extra year to extensive study in the
humanities along with his specialization in physics. After com-
pleting his BA in 1977, he returned to New York City where
he received an M.S. in electrical engineering from Columbia University in 1979 as a member of
the Eta Kappa Nu engineering honor society. He joined Bell Laboratories, developing special-
ized hardware for fiber optic communication with application in computer networks and video
transmission. In 1982, he worked as a telecommunications engineer at a major Wall Street bank.
Returning to theoretical physics at Hebrew University in Jerusalem, he worked with Eliezer Ra-
binovicci on supersymmetric quantum mechanics to receive a second M.S. in 1986. In 1985, he
married Janet Baumgold, a feminist therapist and co-founder of the Counseling Center for
Women. Following a year devoted to full-time fatherhood and another in compulsory national
service, he began working toward a Ph.D. in high energy physics with Lawrence Horwitz at
Tel Aviv University in 1988. He elaborated many aspects of the classical and quantum theories
known as Stueckelberg-Horwitz-Piron (SHP) theory, producing a dissertation developing the
SHP quantum field theory. Concurrently with his doctoral work, he was on the research faculty
of the Computer Science Department at Hebrew University, developing specialized hardware
for parallel computing. After submitting his dissertation in 1995, he taught communications
engineering for three years at the Holon Institute of Technology, before joining the Depart-
ment of Computer Science at Hadassah College in Jerusalem, teaching computer architecture,
microprocessors, embedded systems, and computer networking. He was a founding member
of the International Association for Relativistic Dynamics (IARD) in 1998 and has served as
IARD president since 2006. In parallel to his activities in physics and computer science, he has
122 AUTHORS’ BIOGRAPHIES
enjoyed a long collaboration with Jonathan Boyarin of Cornell University in various areas of the
humanities, critical theory, and Jewish studies. This collaboration has allowed him to communi-
cate contemporary thinking in physics, especially notions of time associated with SHP theory,
to scholars in other fields as modern context for philosophical consideration of temporality.
AUTHORS’ BIOGRAPHIES 123
LAWRENCE P. HORWITZ
Lawrence Paul Horwitz was born in New York City on Oc-
tober 14, 1930. He lived in Westchester County until 1934,
then went to London where his father founded and managed
a chain of womens wear shops, called the Richard Shops, and
then returned to the United States in 1936. After a few years
in Brooklyn, NY, his family moved to Forest Hills in Queens,
NY, where he learned tennis and attended Forest Hills High
School, a school dedicated to teaching students how to think,
where he came to love physics. He then went to the College
of Engineering, New York University, where he studied Engi-
neering Physics and graduated summa cum laude with a Tau
Beta Pi key and the S.F.B. Morse medal for physics. He met a young lady, Ruth Abeles, who
arrived from Germany in the U.S. in 1939 and became his wife before moving on to Harvard
University in 1952 with a National Science Foundation Fellowship. He received his doctorate
at Harvard working under the supervision of Julian Schwinger in 1957. He then worked at the
IBM Watson Research Laboratory where he met Herman Goldstine, a former assistant to John
von Neumann and, among other things, explored with him octononic and quaternionic Hilbert
spaces from both physical and mathematical points of view. He then moved on to the University
of Geneva in 1964, becoming involved in scattering theory as well as continuing his studies of
hypercomplex systems with L. C. Biedenharn and becoming involved in particle physics with
Yuval Neeman at CERN. He became full professor at the University of Denver in 1966–1972;
he then accepted a full professorship at Tel Aviv University. After stopping for a year to work
with C. Piron at the University of Geneva on the way to Israel, he has been at Tel Aviv Univer-
sity since 1973, with visits at University of Texas at Austin, Ilya Prigogine Center for Statistical
Mechanics and Complex Systems in Brussels, and at CERN, ETH (Honggerberg, Zurich),
University of Connecticut (Storrs, CT), IHES (Bures-sur-Yvette, Paris), and Institute for Ad-
vanced Study (Princeton, NJ), where he was a Member in Natural Sciences, 1993, 1996, 1999,
2003 with short visits in August 1990, and January 1991, working primarily with S. L. Adler.
He is now Professor Emeritus at Tel Aviv University, Bar Ilan University, and Ariel University.
His major interests are in particle physics, statistical mechanics, mathematical physics, theory
of unstable systems, classical and quantum chaos, relativistic quantum mechanics, relativistic
many body theory, quantum field theory, general relativity, representations of quantum theory
on hypercomplex Hilbert modules, group theory and functional analysis, theories of irreversible
quantum evolution, geometrical approach to the study of the stability of classical Hamiltonian
systems, and to the dark matter problem, and classical and quantum chaos. He is a member
of the American Physical Society (Particle Physics), Swiss Physical Society, European Physi-
cal Society, International Association for Mathematical Physics, Israel Physical Society, Israel
Mathematics Union, European Mathematical Society, International Quantum Structures As-
124 AUTHORS’ BIOGRAPHIES
sociation, Association of Members of the Institute for Advanced Study, and the International
Association for Relativistic Dynamics.
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=8672399.pdf&bkn=8672398&pdfType=book
|
Ancient Hindu Science
Its Transmission and Impact on World Cultures
Copyright © 2019 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations
in printed reviews, without the prior permission of the publisher.
Ancient Hindu Science: Its Transmission and Impact on World Cultures
Alok Kumar
www.morganclaypool.com
ISBN: 9781681735306
ISBN: 9781681735313
ISBN: 9781681735320
paperback
ebook
hardcover
DOI 10.2200/S00906ED1V01Y201903ENG034
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING
Lecture #34
Series ISSN
Print 1939-5221 Electronic 1939-523X
Ancient Hindu Science
Its Transmission and Impact on World Cultures
Alok Kumar
State University of New York at Oswego
SYNTHESIS LECTURES ON ENGINEERING #34
CM&cLaypoolMorganpublishers&ABSTRACT
To understand modern science as a coherent story, it is essential to recognize the accomplish-
ments of the ancient Hindus. They invented our base-ten number system and zero that are
now used globally, carefully mapped the sky and assigned motion to the Earth in their as-
tronomy, developed a sophisticated system of medicine with its mind-body approach known
as Ayurveda, mastered metallurgical methods of extraction and purification of metals, includ-
ing the so-called Damascus blade and the Iron Pillar of New Delhi, and developed the science
of self-improvement that is popularly known as yoga. Their scientific contributions made im-
pact on noted scholars globally: Aristotle, Megasthenes, and Apollonius of Tyana among the
Greeks; Al-Birūnī, Al-Khwārizmī, Ibn Labbān, and Al-Uqlīdisī, Al-Jāh. iz among the Islamic
scholars; Fa-Hien, Hiuen Tsang, and I-tsing among the Chinese; and Leonardo Fibbonacci,
Pope Sylvester II, Roger Bacon, Voltaire and Copernicus from Europe. In the modern era,
thinkers and scientists as diverse as Ralph Waldo Emerson, Johann Wolfgang von Goethe, Jo-
hann Gottfried Herder, Carl Jung, Max Müller, Robert Oppenheimer, Erwin Schrödinger,
Arthur Schopenhauer, and Henry David Thoreau have acknowledged their debt to ancient
Hindu achievements in science, technology, and philosophy.
The American Association for the Advancement of Science (AAAS), one of the largest
scientific organizations in the world, in 2000, published a timeline of the 100 most important
scientific findings in history to celebrate the new millennium. There were only two mentions from
the non-Western world: (1) the invention of zero and (2) the Hindu and Mayan skywatchers'
astronomical observations for agricultural and religious purposes. Both findings involved the works
of the ancient Hindus.
The Ancient Hindu Science is well documented with remarkable objectivity, proper citations,
and a substantial bibliography. It highlights the achievements of this remarkable civilization
through painstaking research of historical and scientific sources. The style of writing is lucid and
elegant, making the book easy to read. This book is the perfect text for all students and others
interested in the developments of science throughout history and among the ancient Hindus, in
particular.
KEYWORDS
Hindu science, History of science, Vedic science, Hindu religion, Ancient Indian
science, Indian science and technology
This book is dedicated to my parents,
Late Ganga Saran Sarswat, father
and
Late Shanti Devi, mother.
They taught me virtues of life.
Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 The Multicultural Science . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 The Ancient Hindu Science . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 About the Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2 The Building Blocks of Science . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1 Geography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2 The Power of Questioning: Śāstrārtha (Debate) to Acquire Knowledge . . . 17
2.3 Respect for Knowledge: The Role of a Guru . . . . . . . . . . . . . . . . . . . 20
2.4 Smr. ti (Memory), An Answer to Book Burning . . . . . . . . . . . . . . . . . 22
2.5 Yoga and Meditation for Self-Improvement . . . . . . . . . . . . . . . . . . . 24
3 The Hindu Mathematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.1 The Hindu Numerals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.1.1 The Word-Numerals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.1.2 The Place-value Notations . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2 From Śūnyatā and Neti-Neti to Zero and Infinity (Ananta) . . . . . . . . . 32
3.3 The Binary Number System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.4 The Fibonacci Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.5 The Square-root Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.6 Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.6.1 Sum of a Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.6.2 Sum of a Series with Σn² and Σn³ . . . . . . . . . . . . . . . . . . . . . . 42
3.6.3 Solution to a Quadratic Equation . . . . . . . . . . . . . . . . . . . . . . . 42
3.7 Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.7.1 Transforming a Square into a Circle . . . . . . . . . . . . . . . . . . . . . 43
3.7.2 Height of a Tall Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.7.3 The Value of π . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.8 The Pythagorean Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.9 Trigonometry: From Jyā to Sine . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.10 Diffusion of Hindu Mathematics to Other Cultures . . . . . . . . . . . . . 50
3.10.1 The Middle East . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.10.2 China . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.10.3 Europe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.10.4 Support of Pope Sylvester II . . . . . . . . . . . . . . . . . . . . . . . . . 58
4 Astronomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.1 Heliocentric Solar System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.1.1 Ujjain, Greenwich of the Ancient World . . . . . . . . . . . . . . . . . . 68
4.1.2 Diurnal Motion of the Earth . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.2 Hindu Calendar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.3 Hindu Cosmology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.4 Diffusion of Hindu Astronomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.4.1 The Middle East . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.4.2 China . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.4.3 Europe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5 Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.1 Space (Ākāśa) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.2 Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.3 Matter and Mass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.3.1 Conservation of matter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.4 Atom (Paramān. u) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.5 Gravitation and Ocean Tides . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6 Chemistry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.1 Mining and Metallurgy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.1.1 The Iron Pillar of New Delhi . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.2 Wootz or Damascus Steel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.3 Fermentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
7 Biology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
7.1 Sacred Rivers and Mountains: Ecological Perspectives . . . . . . . . . . . . 118
7.2 Sacred Tulsī and Sacred Cow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
7.2.1 Vegetarianism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
7.3 Life in Plants: Similarities with Humans . . . . . . . . . . . . . . . . . . . . . 123
8 Medicine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
8.1 Doctors, Nurses, Pharmacies, and Hospitals . . . . . . . . . . . . . . . . . . . 128
8.2 Ayurveda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
8.2.1 Pañca-karma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
8.3 Surgery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
8.3.1 Plastic Surgery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
8.3.2 Cataract Surgery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.3.3 Carpenter Ants Suturing and Leech Therapy . . . . . . . . . . . . . . . 141
8.4 Hindu Medicine in Other World Cultures . . . . . . . . . . . . . . . . . . . . 143
9 The Global Impact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
9.1 Impacts during the Ancient and Medieval Periods . . . . . . . . . . . . . . . 149
9.1.1 Impact on Arabia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
9.1.2 Impact on China . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
9.1.3 Impact on Greek Science and Philosophy . . . . . . . . . . . . . . . . . 151
9.2 Impacts During the Modern Period . . . . . . . . . . . . . . . . . . . . . . . . 154
9.2.1 Emerson and Thoreau–Two Celebrated American Scholars . . . . . . 156
9.2.2 Impact on Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
A Timeline of the Hindu Manuscripts . . . . . . . . . . . . . . . . . . . . . . . . . 165
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Author’s Biography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Preface
I was raised in Haridwar, a famous Indian city that is known for religion, philosophy, mysticism,
and the Ganges river. As a child, I often heard about the greatness of India from my father,
who was a learned man. Yet I did not notice this greatness in the scientific literature that
was a part of my academic curricula. This created a great emotional dilemma for me. Why could India,
with so much philosophy, intellect, and prosperity, not make a substantial contribution to
science? I found the answer only after I came to America and had access to good library facilities
in California. The history of science as we know it from the textbooks is simply incomplete. By
writing this book and other books, I am trying to fill these gaps.
It is not possible to provide details of the achievements of the ancient Hindus in one
introductory book. Their contributions are enormous and this book presents only the ‘tip of the
iceberg,’ as the phrase goes. I have chosen only those topics that interest me and of which I have
some knowledge.
The accomplishments of the ancient Hindus span many fields. In mathematics, they in-
vented our base-ten number system and zero that are now used globally, carefully mapped the
sky and assigned motion to the Earth in their astronomy, developed a sophisticated system of
medicine with its mind-body approach known as Ayurveda, mastered metallurgical methods
of extraction and purification of metals, including the so-called Damascus blade and the Iron
Pillar of New Delhi, and developed the science of self-improvement that is popularly known
as yoga. Their scientific contributions made an impact on noted scholars from all over the world:
Aristotle, Megasthenes, and Apollonius of Tyana among the Greeks; Al-Birūnī, Al-Khwārizmī,
Ibn Labbān, and Al-Uqlīdisī, Al-Jāh. iz among the Islamic scholars; Fa-Hien, Hiuen Tsang, and
I-tsing among the Chinese; and Leonardo Fibbonacci, Pope Sylvester II, Roger Bacon, Voltaire
and Copernicus from Europe. Their testimony about Hindu science provides a clear sense of
the immense contributions of the ancient Hindus. In the modern era, thinkers and scientists
as diverse as Ralph Waldo Emerson, Johann Wolfgang von Goethe, Johann Gottfried Herder,
Carl Jung, Max Müller, Robert Oppenheimer, Erwin Schrödinger, Arthur Schopenhauer, and
Henry David Thoreau have acknowledged their debt to ancient Hindu achievements in science,
technology, and philosophy.
In this book, I have used scientific norms of analysis and have sorted out the hard facts
from fantasy. In other words, the analysis here is rational and objective. For important state-
ments, I have provided citations to the peer-reviewed literature. This can help the readers to
investigate further, if needed.
No culture or civilization has prospered to great heights without knowing and preserving
their historic and existing knowledge base. Preserving knowledge is a process in which all generations
must participate; otherwise, the knowledge becomes prone to being lost forever. This is my
mindset in writing this book.
After I was done with the manuscript of one of my previous books, Sciences of the Ancient
Hindus: Unlocking Nature in the Pursuit of Salvation, I submitted it to an internationally renowned
publisher. After about two years of review process, the publisher agreed to publish the book
provided I drop the term Hindus from the title and replace it with Indian. One reviewer warned
me of “the deeply contested nature of the adjective Hindu and its association with a particular
kind of nationalist politics” in India. This was prior to Narendra Modi’s government in India. I
have no involvement in Indian politics now, nor have I ever had any at any stage. I have lived most of
my adult life in America. I rejected the suggestion since I wanted to be truthful.
It has been a challenging and rewarding experience for me to write this book. I hope the
readers enjoy reading this book as much as I have enjoyed writing it. Only the readers can judge
the validity of this endeavor.
Alok Kumar
March 2019
Acknowledgments
After I completed my book, Sciences of the Ancient Hindus, I told my wife that I would not be
writing another book on this topic. I said so since writing a book is a long arduous journey. It
was difficult for me to conduct research for the book in the absence of a network of collaborators
and proper academic support. I had to work on that book during my off hours from the job and
the family-life suffered in the process.
Much changed after I published the above-mentioned book. My family and I realized
that this book was not just another academic publication. The book struck a chord with its
readers, and we were inspired by what we observed. It changed our mindset. As a result, when I was
approached by the editors of Morgan and Claypool Publishers, I readily accepted their offer.
Any arduous task becomes much simpler with a network of capable people to assist. I
would like to thank the following people for their assistance:
• My wife, Kiran Singh-Kumar, daughter, Aarti Kumar, and mother-in-law, Chaya Singh,
provided me constant encouragement and assistance. They are the silent heroes in this
project.
• My brother, Nand Kishore Sharma, sister-in-law, Bina Sharma, and sister, Pushpa
Sharma, who take so much pride in knowing my achievements.
• Dr. Ved Chaudhary, President of Educators’ Society for Heritage of India (ESHA); Dr.
Deen Khandelwal, Founder-President, Hindu University of America; Dr. Ambalavanar
Somaskanda, a medical doctor from Rochester; and Dr. John Kares Smith, my colleague
from SUNY Oswego, for reading the first draft of the book. They made corrections on
the draft, provided valuable suggestions to improve the book, and, above all, provided
encouragement.
• Chris Hebblethwaite, librarian, who tirelessly searched databases to collect relevant infor-
mation for me. His office was my first stop when I could not find a specific piece of information.
• Dr. John Zielinski, my colleague in the physics department, who often goes for long walks
with me on campus. He was always a willing participant in any discussion related to this
book.
• Editors, Jeanine Burke and Joel Claypool, for providing me with excellent tips for effective
writing.
I have tried hard to avoid printing and scholarly mistakes. However, if some remain, please
bring them to my notice ([email protected]). If you like the book, the credit goes to the
people mentioned above. I am responsible for the errors.
Alok Kumar
March 2019
CHAPTER 1
Introduction
“The first nation (to have cultivated science) is Hind. This is a powerful nation having a large
population, and a rich kingdom. Hind is known for the wisdom of its people. Over many cen-
turies, all the kings of the past have recognized the ability of the Hindus in all branches of
knowledge.”1 This was the conclusion that S. ā‘id al-Andalusī (1029–1070) made in his book,
T. abaqāt al-‘Umam (Book of the Categories of Nations), in 1068. S. ā‘id lived in Spain and com-
piled perhaps the first popular book on the global history of science. S. ā‘id analyzed the scholarly
contributions of various nations, chose eight nations that were well versed in sciences, and ranked
Hind at the top of the list. The people that were described in his book for their contributions to
science are: the Hindus, the Persians, the Chaldeans, the Greeks, the Romans, the Egyptians,
the Arabs and the Hebrews.
S. ā‘id was a Muslim, a historian of science, and a mathematician with interest in astronomy.
S. ā‘id, his father, and his grandfather served as religious judges (qād. i) in Spain. In his role as judge,
S. ā‘id mastered the sciences of jurisprudence and law, implemented the Sharia law in resolving
conflicts, served as a mediator, and also supervised and audited the public works. Obviously, such
roles were entrusted to persons of repute and influence. One of his students, Azarquiel (Arzachel
or Zarqālī), is known for the famed Toledan Tables. These astronomical tables were used to predict
the movements of the Sun, Moon and planets relative to the fixed stars. In the view of S. ā‘id,
“[t]he Hindus, as known to all nations for many centuries, are the metal (essence) of wisdom, the
source of fairness and objectivity. They are peoples of sublime pensiveness, universal apologues,
and useful and rare inventions.” In giving examples of such rare inventions, S. ā‘id mentioned the
disciplines of mathematics, astronomy, medicine, and the invention of chess.
S. ā‘id’s book was quite popular during the medieval period. During the colonial period,
however, the book lost its repute, its contents did not fit well with the colonial agenda, and it
was conveniently forgotten. It was introduced to the English-speaking world in 1991.2
S. ā‘id was familiar with the contributions of the Egyptians, Greeks and Romans to science.
Yet, while comparing the significance of their contributions to science, he chose Hind to be the
top nation in science. This is in contrast to what we teach today in sciences. Did S. ā‘id commit an
1Salem and Kumar, 1991, p. 11. In the original manuscript, the same term, Hind, is used to define the geographical
region and the people. In today’s context, the medieval term Hind describes the present India, Pakistan, Bangladesh, Nepal,
and Afghanistan, popularly also known as the Greater India.
2Salem and Kumar, 1991. I was reading the scientific literature produced during the medieval world while researching
for my book, Sciences of the Ancient Hindus. I noticed that S. ā‘id’s book was cited by several medieval scholars. I tried to acquire
the book and did not succeed. This led to more efforts and finally the original Arabic version was acquired, authenticated and
published with proper translation and annotations.
honest scholarly mistake by placing Hind, also popularly called Bharat, India, and Hindustan,
at the top of his list? Was he the only scholar to rank Hind at the top among all other nations
in science? What are the important contributions of the Hindus to science?
Further, in the year 2000, many events were organized and some landmarks were set to cele-
brate the new millennium. The American Association for the Advancement of Science (AAAS)
tried to compile a list of the top 100 scientific findings that made significant impacts on the
world. It was a major undertaking where quite a few historians of science were involved. Only
two discoveries were selected from the non-Western world: (1) invention of zero and (2) the
astronomical observations of the Hindu and Mayan skywatchers for agricultural and religious
purposes. Both findings involved the works of the ancient Hindus. Why did the Hindus invent
zero as a mathematical entity? What was the connection of astronomical observations with agri-
cultural and religious purposes? Did they make any interesting astronomical observations in the
process? These are some of the questions that this book has tried to answer. After going through
its pages, readers will be able to make their own judgments on these issues.
1.1 THE MULTICULTURAL SCIENCE
While covering the ancient and medieval periods, most science courses focus on inventions
and discoveries from Greece and Europe. Students learn that scientific rational thinking orig-
inated with the Greeks around the seventh century BCE, and flourished there for about 800
years. Greek philosopher-scientists such as Thales (624–546 BCE), Pythagoras (562–500 BCE),
Democritus (460–370 BCE), Hippocrates (460–370 BCE), Plato (427–347 BCE), Aristotle
(384–322 BCE), and Archimedes (287–212 BCE) are responsible for most basic ideas in sci-
ence. The period after the beginning of the Common Era is defined as the Dark Ages (475–1000
CE) or the Middle Ages (475 CE to the Renaissance). The term, the Dark Ages, signifies the lack
of intellectual and scientific activities in Europe. After the fourteenth century, the Europeans
reacquainted themselves with the scientific tradition of the Greeks that led to the European
Renaissance. In relation to the Renaissance period, we learn about Galileo, Faraday, Newton,
Kepler, and Boyle who lived in Europe. There are not many examples of scientists, discoveries,
or inventions that have any connection to Asia, Africa, and Latin America.
Science evolves out of human necessities and curiosities. With the growth in science, our
lives are constantly changing/improving in myriad ways. The Earth that was considered to be
boundless by our ancestors can now be traveled around in a day or two. We have landed on the
Moon and are planning our visit to Mars. We can easily make a telephone call to our loved
ones halfway around the Earth for a nominal expense. The increased food demand in the past
century is met by the green revolution. The life expectancy is increasing all over the world. Most
civilizations in the past have found material benefits and intellectual satisfaction in attempting
to understand the world’s physical and biological phenomena. Science was bound to prosper in
most cultures. The question is why it did not happen in Asia, Africa, the Middle East, and Latin
America. Or maybe our science textbooks are simply providing incomplete information.
Indeed, this popular version of our history of science is full of gaps that are finally catch-
ing the attention of scholars. A major gap was first demonstrated by Joseph Needham (1900–
1995) with his multi-volume book, Science and Civilisation in China. Needham was a British
biochemist and historian who raised the famous question, popularly known as the Needham
question: “Why did modern science, the mathematization of hypotheses about Nature, with all
its implications for advanced technology, take its meteoric rise only in the West at the time of
Galileo [but] had not developed in Chinese civilisation or Indian civilisation?” Needham an-
swered this question in his book with a specific focus on China and proved that the history of
science that we teach in science courses is simply incomplete. No such major effort to compile
Indian science has taken place so far, for a variety of reasons. The present book is a small effort
to fill that gap.
Roger Bacon (1214–1294), a noted Franciscan natural philosopher from England, wrote
a book, The Opus Majus, under the instruction of Pope Clement IV (1190–1268). The main
purpose of the book was to improve the training of missionaries to Christianize distant ethnic
lands.3 This book clearly establishes that India was a leader in science. Bacon knew the works
of Ibn al-Haytham (Alhazen)4 (965–1040 CE), Al-Battānī (858–929 CE), and Ibn Sīnā (980–
1037 CE) from the Middle East. He also knew about the Hindu science from his days in the
University of Paris.5
Geoffrey Chaucer (c. 1343–1400), in the Prologue of his Canterbury Tales,6 writes about a physi-
cian who was well versed with the works of Serapion the Elder of Syria, al-Rāzī and Ibn Sīnā
of Persia, along with the works of Hippocrates, Rufus of Ephesus, Dioscorides and Galen:
“With us ther was a Doctour of Phisyk
In al this world ne was ther noon him lyk
To speke of phisik and surgerye,
3Roger Bacon was not the only one who worked tirelessly to produce a book to assist the training of missionaries to
Christinize India. Max Müller, Professor of Comparative Philology, Robert Boyle, Director of the East India Company, and
Monier Monier-Williams, Boden Professor of Sanskrit in Oxford University, are some other noted scholars who produced
literature or used their resources to assist missionaries to Christinize India. Monier-Williams even candidly wrote that the
purpose of translation was to aid in “the conversion of the natives of India to the Christian religion.” (Goldberg, 2010, p. 28.)
Another person who made a significant impact to achieve this goal was Lord T. B. Macaulay (1800–1859), member of the
Supreme Council of India. In this capacity, in his Minute on Indian Education, he suggested that the British Empire introduce
western-based reforms in Indian schools. This document became quite successful. Macaulay believed that (1) “a single shelf of a
good European library was worth the whole native literature of India and Arabia” and (2) “all the books written in the Sanscrit
[Sanskrit] language are less valuable than what may be found in the most paltry abridgments used at preparatory schools in
England.” With this mindset, the education policy of India was framed during the colonial period. Lord Macaulay’s reforms
largely remained in place in India even after the independence.
4The year 2015 was declared as the “Year of Light” by the United Nations to emphasize the importance of light science
and to celebrate 1,000 years of Ibn al-Haytham’s book, Kitāb al-Manazir, a book on optics. Several centuries later, many
noted scientists, such as Roger Bacon, Robert Grosseteste, Leonardo Da Vinci, Galileo Galilei, Christiaan Huygens, René
Descartes, Johannes Kepler, and Isaac Newton, had studied optics from a Latin translation or the original Arabic copy of his
book. Some of them wrote their own books on optics later.
5Smith, a chapter, The Place of Roger Bacon in the History of Mathematics, in the book by Little, 1914, p. 156.
6The Canterbury Tales, Prologue, 411–413, 429–432. It is interesting to note the evolution of the English language in the
past millennium.
Wel knew he the olde Esculapius,
And Deiscorides, and eek Rufus,
Old Ypocras, Haly, and Galien,
Serapion, Razis, and Avicen”
England, France, Spain, Portugal, and the Netherlands controlled most of the world dur-
ing the eighteenth century. Popular literature was produced and disseminated by European na-
tions to support their dream of domination. In this literature, Egypt, India, Persia, and China
entered the scientific age only through their interactions with the Europeans. Thus, the histories
of Asians, Africans, and other indigenous peoples often appear only after their encounter with
the Europeans.7
Though the British and French governments were brainwashing these ethnic civilizations
with their propaganda, they were recruiting their best scholars and scientists to learn from these
ethnic cultures. For example, when Napoleon Bonaparte (1769–1821) invaded Egypt, he took
about 150 biologists, mineralogists, linguists, mathematicians, and chemists with him to learn
Egyptian science. This group included mathematician Jean-Baptiste Joseph Fourier, mineralo-
gist Déodat Gratet de Dolomieu, and botanical artist Pierre Joseph Redouté.8 Why did
Napoleon take the best scientists of France to Egypt in a war? Napoleon had brought 400 ships,
40,000 soldiers, and 167 scientists, engineers, and artists to Alexandria. In less than a month,
he lost to the British soldiers led by Admiral Horatio Nelson. Yet his mission was a magnifi-
cent triumph. The savants (as French scientists and philosophers were called) had uncovered an
invaluable treasure of relics in Egypt, including the Rosetta Stone, and established the Institut
d’Égypte in Cairo, the first institution in the world devoted to Egypt’s ancient culture.
Thus, in Western literature, the history of Asians, Africans, or the indigenous peoples of
the Americas often appears to begin only after their encounter with European people. Science is
thus Eurocentric and incomplete in the process. This omission of the non-Western literature in
the history of science is “deeply unjust to other civilizations. And unjust here means both untrue
and unfriendly, two cardinal sins which mankind cannot commit with impunity,” writes Joseph
Needham.9
The failure to mention multicultural contributions in present-day science education has
been noted by many scholars who have provided example after example of historical gaps
in their books.10 For example, the Greek philosopher Leucippus (born ca. 490 BCE) or his
disciple Democritus (born 470 BCE) are generally credited for being the originators of the
atomic theory. The contemporaneous Indian philosopher Kan. āda, who lived sometime between
7This topic caught the attention of scholars during the later part of the twentieth century. However, considerably more
research is needed to better understand the contributions of other civilizations. For more information, consult Baber, 1996;
Goonatilake, 1984, 1992; and Said, 1978 and 1993.
8For more information, read Brier, 1999.
9Needham in Nakayama and Sivin, 1973, p. 1.
10Bernal, 1971, Harding, 1991, 1994; Needham, 1954–99; Rashed, 1996; and Teresi, 2002. This knowledge is yet to be
incorporated appropriately in introductory science textbooks.
the sixth to tenth centuries BCE, is simply ignored in most science texts. Henry Margenau
(1901–1997), a noted philosopher-physicist who served as Eugene Higgins Professor of Physics
and Natural Philosophy at Yale University, pointed to this gap in his book, Physics and Philosophy,
and wrote, “But the most remarkable feature . . . which I have never seen in American textbooks
on the history of science is the atomic theory of philosopher Kanada [Kan. āda].”11 This is not
an American issue, as listed by Henry Margenau; it is an academic issue that is global in scope.
Kan. āda’s work is still not covered in science texts even in India, the region where he was born
and lived. As mentioned earlier, the colonial education pattern established by Lord Macaulay
still continues in India, a sad reflection of the still pervasive colonial mindset of Indian academia.
Although progress is being made, it is at a very slow pace. We have a long way to go to
establish science as a truly global enterprise. A significant number of articles and books have
been published in the last 25 to 30 years to add contributions from Islamic countries, includ-
ing an Encyclopedia of the History of Arabic Science.12 The scholarship on the Indian, Egyptian,
and Mayan civilizations is still highly incomplete. Our new knowledge of Islamic science and
Chinese science came about due to the large influx of money and human resources from China
and the Middle East. In contrast, the governments of Latin American countries, India, and
Egypt have not allocated many resources for this cause. We need more Joseph Needhams
to raise resources and preserve the knowledge of these civilizations. The time is now.
India, Egypt, Baghdad, and Persia were centers of learning, along with Athens and Rome,
during the ancient and/or early medieval periods. For example, the place-value system to write
numbers that was invented by the Hindus is central to the growth of mathematics. Many of the
medicinal treatments, surgical procedures, and anatomical knowledge came to the West from the
Caraka-Sa ˙mhitā and the Suśruta-Sa ˙mhitā of India, the Edwin Smith Surgical Papyrus, the Ebers
Papyrus, and the Kahun Papyrus of Egypt, and the Book of Healing and the Canon of Medicine of
Ibn Sīnā of Persia. Similarly, modern astronomy owes a lot to the works of Āryabhat.a I (ca.
500) from India and al-Khwārizmī from Baghdad.
The Islamic influence on science is evident from the Arabic terms that are commonly used
in science: alcohol, Aldebaran, algebra, almanac, alkali, algorithm, Altair, azimuth, Betelgeuse,
calendar, Deneb, magazine, monsoon, nadir, ream, and zenith are all derived from the Arabic
language. These words have become a part of the Western heritage and are listed in most English
dictionaries. A similar list of Sanskrit words in English is provided in Chapter 9.
The three crucial inventions that influenced the modern world came from China: paper,
gunpowder, and the magnetic compass. Paper remained the most important tool for documentation
for over a millennium, until it was recently replaced by digital electronic technologies. The magnetic
compass allowed travelers to navigate across oceans, while gunpowder became a tool of
conquest and subjugation after its use in making guns and cannons was discovered.13
11Margenau, 1978, p. XXX.
12Hogendijk and Sabra, 2003; Kennedy, 1970; King, 1983 and 1993; Kunitzsch, 1989 and 1983; Rashed, 1996; Saliba,
1994; Samsó, 1994; and Selin, 1997 and 2000.
13The list of such contributions is long, and is covered by Kumar, 2014; Montgomery and Kumar, 2015.
1.2 THE ANCIENT HINDU SCIENCE
As mentioned in the previous section, the modern place-value notation system (base 10) that
is used to represent numbers, is Hindu in origin. In this system we write, for example, eleven
as one and one, side by side (11). The one on the left is in the second place, as we count from
the right to left. The magnitude of this one (1) on the left is equal to ten. Any number in this
place has to be multiplied by ten. Practically all cultures invented their own number system: the
Greeks, Romans, Egyptians, Babylonians, Mayans, and Chinese. The Greeks, the Romans, and
the Egyptians did not use a place-value notational system, although they did use a base-10 system.
In their systems, eleven was written as ten and one (for example, XI in the Roman system). The
Hindu place-value notation system made it possible to write very large numbers and simplified
mathematical calculations; therefore, it prevailed over the other systems. Just imagine reading
the values of various stocks in a newspaper. It is easy to figure out that a number with three digits
will be greater than a number with two digits. Similarly, a number with four digits will be greater
than a number with three digits. It provides a quick comparison which is not possible with other
systems. It also allows much faster arithmetical calculations. Nicolaus Copernicus (1473–1543)
in his book, On the Revolutions, used Hindu numerals in mathematical computations to provide
a heliocentric model of the solar system. He noted the usefulness of this system over the Roman
or Greek numeral system for quick computations. Copernicus was not the first one to use Hindu
numerals. Several hundred years before him, Leonardo Fibonacci (1170–1250) wrote a popular
book, Liber Abaci, and introduced the Hindu methods of numeration and computation to the
Western world. S. ā‘id al-Andalusī wrote about the presence of this numeral system in Spain
during the eleventh century. (Read Chapter 3).
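As a small worked illustration (the particular number 4507 is my own example, not from the text), the place-value principle can be written out digit by digit, each digit multiplied by the power of ten that its position indicates:

\[
4507 = 4 \times 10^{3} + 5 \times 10^{2} + 0 \times 10^{1} + 7 \times 10^{0} = 4000 + 500 + 0 + 7 .
\]

In a non-positional system, the same quantity needs a new symbol for each order of magnitude (MMMMDVII in Roman numerals), and the written length of a number no longer tracks its size, which is why comparison and column-wise arithmetic are so much harder in such systems.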
Trigonometry deals with relationships involving lengths and angles of right-angled tri-
angles. Such mathematical relationships are highly applicable in the disciplines of architecture,
mathematics, physics, and astronomy. With the work of Āryabhat.a I (ca. 500), trigonometry
began to assume its modern form. He used the half chord of an arc and the radius of a circle to
define the sine of an angle.14 The origin of the subject of trigonometry, and of the word used to
denote the trigonometric function “sin” (pronounced as sine), can be traced to the Sanskrit language.
(Read Chapter 3).
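To restate the half-chord idea in modern notation (this is a standard modern paraphrase, not a quotation from Āryabhat.a): for an arc subtending a central angle 2θ in a circle of radius R, the full chord has length 2R sin θ, so the half chord, the jyā, is

\[
\text{jy\={a}}(\theta) = \tfrac{1}{2}\,\text{chord}(2\theta) = R \sin\theta .
\]

Dividing the half chord by the radius R then gives the modern, dimensionless sine of the angle; the tabulated half chord and the modern function differ only by this scale factor.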
About a millennium before Copernicus, Hindu astronomer Āryabhat.a I assigned mo-
tion to the Earth. He considered the planets to be in motion and the stars to be stationary.
Āryabhat.a I used the analogy of a boatman in a river, observing objects on the shore moving
backward, to explain the apparent motion of the Sun and other stars. He explained the con-
cept of relative motion many centuries before its more formal discussion by the noted Parisian
scholar Nicholas Oresme in the fourteenth century.15 Interestingly, Copernicus used more or
less the same analogy of a boatman to explain the apparent motion of the Sun in his book, On
the Revolutions (Book 1, Chapter 8). (Read Chapter 4).
14Clark, 1930; Kumar, 1994.
15Kumar and Brown, 1999.
1.2. THE ANCIENT HINDU SCIENCE 7
The ancient Hindus defined the age of the universe to be of the order of billions of years.
This large number assigned to the age of the universe intrigued Carl Sagan, a noted astrophysicist
who is famed for his Cosmos TV series. He wrote: “The Hindu dharma is the only one of the
world’s great faiths dedicated to the idea that the Cosmos itself undergoes an immense, indeed
an infinite, number of deaths and rebirths. It is the only dharma in which time scales correspond
to those of modern scientific cosmology. Its cycles run from our ordinary day and night to a day
and night of Brahma, 8.64 billion years long, longer than the age of the Earth or the Sun and
about half the time since the Big Bang.” (Read Chapter 4.)
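As a brief arithmetic check of Sagan’s figure (the traditional values used here are my addition, not part of the quoted passage): a day of Brahma, or kalpa, is traditionally reckoned as 1,000 mahāyugas of 4,320,000 years each, and the night of Brahma has the same length, so

\[
2 \times 1000 \times 4{,}320{,}000~\text{years} = 8.64 \times 10^{9}~\text{years} ,
\]

which matches the 8.64 billion years that Sagan cites.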
The ancient Hindus defined standards for physical measurements of space (length), mass,
and time. The smallest unit of time used in India was of the order of 10⁻⁴ second, as noted by
al-Bīrūnī, the Islamic scholar who visited India during the eleventh century. These standards
were prevalent in India at least 500–1000 years before al-Bīrūnī. Similarly, multiple standards
of length were also defined and, in Mārkan. daya-Purān. a, the size of an atom was defined to be of
the order of 10⁻⁹ meter. Kaut.ilya (4th century BCE) defined various lever arms and scale-pans
for balances for different range of weights. Superintendents were assigned to stamp labels for
different weight-standards for public use to prevent cheating. Traders who did not use these
stamped standard weights in business transactions were fined. Kan. āda, in his Vaiśes. ika-sūtra,
defined the concept of the atom while discussing the distinctive properties of different forms of matter
and considering the infinite divisibility of matter. (Read Chapter 5).
Caraka, Suśruta, and Kaut.ilya documented chemical transformations where oxidation,
reduction, calcination, distillation, and sublimation were explained. Caraka, in his Caraka-
Sa ˙mhitā, lists gold, copper, lead, tin, iron, zinc, and mercury in making drugs. Mining was a
highly organized activity among the ancient Hindus. Kaut.ilya defines the role of mining for a
sound economy. The Iron Pillar near Qutub-Minar in New Delhi is a testimony to metal forg-
ing of the ancient Hindus. The pillar, although about 1600 years old and weathering the heat,
humidity, and rain in the open air, is still rust-free. We only hope that the car manufacturers of
today can learn this ancient technology from India to make better rust-free cars. Hardened steel
was also produced to allow a warrior to enter a battlefield without worrying about breaking or
bending his sword. This steel, though invented in India, is popularly known as Damascus steel.
The Europeans first learned of this process from Damascus where it was called “steel of India.”
King Poros of Sindh, a province of India (now in Pakistan), after receiving a gift of life from
Alexander the Great, gave 6,000 pounds of steel as a precious gift to Alexander. (Read Chapter 6).
Plants have life; they try to protect themselves from predators or attract bees for pol-
lination purposes. At a time when human activities are destroying the ecology, the ecological
perspectives of the ancient Hindus are relevant where rivers, mountains, plants, and animals
are deemed sacred. The principle of ahi ˙msā as a moral principle and its consequences on global
warming and world hunger are discussed in the chapter on biology. (Read Chapter 7).
The so-called plastic surgery and cataract surgery find their roots in the surgical skills of the
ancient Hindus. The role of a doctor and a patient, the design of a hospital, the role of food,
and the quality of air and water were considered by the ancient Hindus. They emphasized the
body-mind approach to medicine and evolved ayurveda or “science of life,” a system of medicine.
(Read Chapter 8).
Ralph Emerson, Henry Thoreau, Leonardo Fibonacci, Schrödinger, Tolstoy, Tesla,
Goethe, Schopenhauer, Robert Oppenheimer, and Brian David Josephson (known for the Josephson
junction in physics) are some of the leading Western thinkers who studied the scholarly work
of the ancient Hindus and formed their own worldview based on it. Similarly, to the east of In-
dia, Xuanzang (also known as Hiuen Tsang), Faxian (also known as Fa-Hien or Fa-Hsien),
Yijing (also known as I-Ching, I-Tsing) were the leading scholars in China. These scholars are
as much known for their wisdom as their arduous journeys to India to collect scholarly books.
Similarly, to the west of India, Al-Jāh. iz (ca. 776–868 CE), al-Khwārizmī (ca. 800–847 CE),
al-Uqlīdisī (ca. 920–980 CE) and Ibn Labbān (ca. 971–1029 CE), all noted Islamic philoso-
phers, are known as much for their scholarly activities as for their efforts to introduce Hindu
wisdom to the Middle East. These Islamic scholars were quite honest in writing their books, as
they should be, and openly acknowledged their gratitude to the Hindus. (Read Chapters 3, 4,
and 9).
1.3 ABOUT THE BOOK
This book explains the religious, social and cultural contexts that allowed some distinctive in-
ventions and discoveries in Hindu science. For the Hindus, the disciplines of physics, chemistry,
mathematics, astronomy, and medicine were sacred. A mastery of any of these disciplines al-
lowed a person to achieve moks. a (liberation from the cycle of birth and death), the highest goal
of life for any Hindu.
This book primarily deals with the ancient period when the sacred books of the Hindus
were composed (Vedas, Upanis. ads, and Purān. as) and ends after the work of Āryabhat.a I in the
fifth century. Therefore, the works of Brahmagupta, Bhāskara, and Mādhava, all prominent
Hindu natural philosophers, are not included. Later contributions are covered in a few selected
cases to show the impact of the ancient period. Since the works of the ancient Hindus took
many centuries to become known to the Arabs and then to the Europeans, the later accounts
from these cultures are discussed. For example, al-Bīrūnī’s work during the eleventh century is
discussed in relation to Āryabhat.a I’s work. The works of Emerson, Thoreau, Schrödinger, and
Oppenheimer are discussed in connection to Vedanta or the Bhagavad-Gītā. Similarly, the work
of Jagdish Chandra Bose related to plants is discussed in relation to Mahābhārata, an epic, where
life in plants is explained in detail. Copernicus’ comment about the usefulness of the place-value
notation and his work in astronomy are discussed, although the works were done during the
sixteenth century.
The religious philosophy of the ancient Hindus may have played a crucial role in the in-
vention of zero. The ancient Hindus tried to explain the nature of God that is devoid of all
attributes (Nirgun. a-swarūpa or amūrta). This religious and philosophical approach of attribute
negation or nothingness (śūnyatā) led them to the mathematical entities of infinity and zero.
In Chapter 2, readers will also learn the role of śāstrārtha (debate or discussion on the
meaning of sacred texts) in resolving personal, social, and religious disputes. Conflicts were
resolved without any rancor or violence. Even marriages were arranged using this practice
(svaya ˙mvara). Also, the ancient Hindu literature was mostly written as poetry in lucid stories
that are rich in similes and metaphors in order to facilitate its memorization and oral trans-
mission. As a result, despite the destruction of libraries in the Indian peninsula after the Islamic
invasion, this knowledge remained intact to a large extent. Pythagoreans in Greece also practiced
orality to preserve their knowledge, like the ancient Hindus.
“Why do they call it by this name?” This etymological question is often asked by curious
students. The etymology and origins of various words and concepts that are commonly used
in science texts, such as the so-called Arabic numerals, the zero, the trigonometric function
“sine,” algebra, and algorithm, are explained in this book. Information on these developments
demonstrates the migration of knowledge from one culture to another and helps the readers to
understand the multicultural nature of science.
The worldwide use of the Hindu numerals is perhaps a great triumph of the ancient Hin-
dus. Although the Hindu numerals were not accepted at first, and rivalries ensued in Arabia and
Europe while decrees were issued against their use, their practical merit and their usefulness in math-
ematical calculations finally established their supremacy, and they gradually became prevalent
worldwide. Leonardo Fibonacci (1170–1250) is well known for the Fibonacci sequence. How-
ever, little is known about his gratitude to the ancient Hindus, which he had clearly acknowledged
in Liber Abaci.
Kan. āda’s book, Vaiśes. ika-Sūtra, defines the nature of time and space, conservation of mat-
ter, gravitation, and the concept of the atom. Time is a commonly used word in our daily con-
versation. However, its nature is enigmatic and subtle. Readers will learn the subtleties in the
concept of time in Chapter 5. They will also learn the concept of the atom, as suggested by
Kan. āda, and its comparison with Democritus’ atom.
This book is written primarily for readers who are trained in the Western knowledge sys-
tem and are interested in learning about Hindus’ contributions to science. Many references are
deliberately chosen from the primary sacred literature of the Hindus as well as authentic sec-
ondary sources from the West. Such a selection is made to counter a general skepti-
cism demonstrated by some Western scholars concerning Hindu accounts of their history. These
Western scholars generally complain that scholars in the East tend to stretch their imaginations
to suit their views and do not provide logical steps and facts when deriving their conclusions.
A hallmark of this book is the documentation of the scholarly comments and acknowledge-
ments made by Greeks, Persians, Egyptians, Arabs, Chinese, and Europeans in support of the
scientific achievements of the ancient Hindus. Salient features of this book are in providing
cross-cultural perspectives and comparisons, and portraying a coherent picture of the scientific
contributions of the ancient Hindus.
This book is not encyclopedic or compendious. It is not possible to achieve that in such a
small introductory work since the ancient Hindu literature is vast. This book presents only the
“tip of the iceberg,” as the saying goes. I have chosen only those topics where my knowledge and
interests lie.
Several Sanskrit and Hindi words are now commonly used in English and have entered
English dictionaries. However, their English spellings in some cases are not in accordance with
the system of transliteration. As these words are in common use in the English-speaking world,
using any other spelling might create confusion for a broad range of readers. For this reason, we
have kept the popular usage in some cases. For example, the spelling of the word “ayurveda,” as
mentioned in most dictionaries, is incorrect; the proper spelling is āyurveda. For Sanskrit terms
that are not present in most English dictionaries, diacritic marks have been retained. To keep
the book readable, simple, and enjoyable to non-scholars in the field, an amalgamated system
of popular spelling as well as proper spelling is used. It is a common practice among scholars
dealing with non-English literature.
I have used scientific norms of analysis and have sorted out the hard facts from fantasy.
In other words, the analysis here is rational and objective. In philosophy and mysticism, there
are several areas where the ancient Hindu literature stands abreast with the later concepts in
science, such as causality and duality (or dualism). But these concepts are not covered in this
book, because the borderline between facts and opinion is hazy.16 Duality, as defined in the
concept of the Creator and the creation as two independent entities, in Hindu philosophy, is
quite different from de Broglie’s wave-particle duality of matter. The parallelism between
ancient theories and modern science is fascinating to read; but in many cases, this is where the
connection ends. The works of noted Nobel Laureate physicists such as Brian Josephson and
Erwin Schrödinger do pose an interesting dilemma in which their beliefs, based on the ancient
Hindu literature, played a significant role in their discoveries in science.17
In the Lawrence Livermore Laboratory in California, the Shiva Target Chamber is a 20-
laser-beam facility to study laser fusion. This facility was constructed in 1977. Edward Teller, the
father of the hydrogen bomb and a designer of this facility, explained the design of the chamber
in the following words: “Laser light is brought in simultaneously from ten pipes on the top and
ten pipes on the bottom. Compression and nuclear reaction occurs in a tiny dot at the middle of
the sphere. Apparatus practically filling a whole building feeds the twenty pipes, or the arms of
the god Shiva [Śiva]. According to Hindu Creed, Shiva [Śiva] had three eyes: two for seeing,
and one (usually kept closed) to emit annihilating radiation. The Hindus obviously knew about
16The Tao of Physics by Fritjof Capra, The Wu Li Masters by Gary Zukav, and Mysticism and the New Physics by Michael Talbot
are examples of such works. These books are bestsellers for their insights. These are scholarly works that brought together the
disciplines of religion and science.
17Read Capra, 1980; Josephson, 1987; Restivo, 1978 and 1982; Schrödinger, 1964; Talbot, 1981; and Zukav, 1979.
lasers.”18 Is it really true that the Hindus “knew about lasers”? Perhaps not. In my mind, what
Teller has mentioned is a mere conceptual notion of the Hindus of an immensely powerful, laser-like
light that can destroy everything in an instant, like the third eye of Lord Śiva, but it cannot be
cited as a historical fact or an established theory.
The term Hindu was commonly used in science texts in the last century.19 The leading
journals of science, like The American Mathematical Monthly, ISIS,20 Islamic Culture, Science, and
The Mathematical Gazette and many books used the term Hindu science in the past. However,
such usage is less common these days since some authors are concerned about the reaction of the readers.21
For example, Philip Goldberg, in his bestseller book, American Veda, avoided the term Hindu
because he was concerned that “many potential readers would misconstrue the nature of the
book.”22 In my previous book, Sciences of the Ancient Hindus, a reviewer warned me of “the deeply
contested nature of the adjective Hindu and its association with a particular kind of national
politics” in India. I have ignored such concerns. I do not have any involvement with the politics
of India, nor do I want to push any political agenda through this book. The Vedas, the Upanishads,
and Purān. as are the sacred books of the Hindus. I want my book to be based in truth.
Once the readers realize the truthfulness of my assertions, I am confident that such myopic
criticisms will disappear. “A class in arithmetic would be pleased to hear about the Hindoos
[Hindus] and their invention of the ’Arabic notation’,” suggested Cajori.23 “They will marvel at
the thousands of years which elapsed before people had even thought of introducing into the
numeral notation that Columbus-egg—the zero.”24 It is this Columbus-egg, the zero, that is
captivating the historians of science in AAAS as they include zero among the top 100 scientific
finds that made significant impact in human history, as discussed earlier.
18Teller, 1979, p. 216.
19Datta, 1927; Hammett, 1938; Herschel, 1915; Karpinski, 1912; Mukhopādhyāya, 1994; Ray, 1919 and 1956; Renfro,
2007; Royle, 1837; Saidan, 1965; Seal, 1915; Zimmer, 1948.
20It is a mere coincidence that this term is also associated with a terrorist organization. The journal ISIS is a premier
journal of the history of science.
21I am pleased that there are no such concerns with Islam and Buddhism where a good number of new book titles are
published every year with explicit mention of religion.
22Goldberg, 2010, p. 2–3.
23Cajori, 1980, p. 3. Florian Cajori (1859–1930) was a professor and the first chair in history of mathematics at the
University of California at Berkeley. Many of his books on the history of mathematics are still a landmark.
24Cajori, 1980, p. 3. Columbus-egg, a term that Cajori related to the invention of zero, refers to a brilliant discovery or
idea that looks simple after the fact.
CHAPTER 2
The Building Blocks of Science
As explained in the first chapter, human necessities play an important role in the evolution
of science. People set up their goals based on their personal needs or the needs of the society
and achieve them with the help of science. What were the geographical, social, and religious
conditions that allowed the growth of science among the ancient Hindus? How did the ancient
Hindus preserve their scientific knowledge and transmit it to the following generations? How
were people who contributed to Hindu science treated in their society? These are important
questions that need to be explored to understand the social, cultural, and religious contexts of
the development of science among the ancient Hindus.
The sciences of the ancient Hindus were an essential and integral part of their religious
practices. An important tenet of Hinduism is the transmigration of the soul, which is defined in
so many ways in the Vedas and Upanis. ads. This doctrine tells us that the soul is immortal and
it transmigrates or reincarnates from one life form to another.1 Our deeds in this life decide
our fate in the next life. Therefore, the notion of rebirth is tightly coupled with the notion of
karma (action) that provides a great incentive toward leading a moral life minimizing wrong
deeds. Pythagoras, Socrates, Empedocles, Plato, Plotinus, Apollonius, and other Pythagorean
philosophers also believed in the transmigration and immortality of the soul.
“The soul is neither born, nor dies. This one has not come from anywhere, has not become
anyone. Unborn, constant, eternal, primeval, this one is not slain when the body is slain. If the
slayer think to slay, if the slain think of himself slain, both these understand not. This one slays
not, nor is slain,” suggests Kathā-Upanis. ad.2
“Either as a worm, or as a moth, or as a fish, or as a bird, or as snake, or as a tiger, or as a
person, or as some other in this or that condition, he is born again here according to his deeds,
according to his knowledge,” suggests Kaushītaki-Upanis. ad.3
The goal of life is to avoid this cycle of birth and death and achieve liberation (moks. a).
As a mother nourishing her children gives capāti (chapāti or rotī, a flat bread) and dāl (lentils)
to one, and khicad. ī (a rice and lentil preparation) and yogurt to another; similarly, according to
their needs, the Hindu religion offers four major choices to human rational minds, and allows
individuals to choose their own path to liberation:
1. Karma-yoga (the Path of Action, Selfless Service to Humanity)
1Atharvaveda, 12: 2: 52.
2Kathā-Upanis. ad, 2: 18, 19.
3Kaushītaki-Upanis. ad, 1: 2.
2. Bhakti-yoga (the Path of Devotion or Love to God)
3. Jñāna-yoga (the Path of Knowledge of Ultimate Truth)
4. Rāja-yoga (the Path of Yoga and Meditation)
All paths are equally effective. It is up to an individual to select the appropriate path for himself
or herself. Jñāna-yoga (Jñāna means knowledge), the path relevant here, encourages an individual
to understand the ultimate truth by raising such questions as, “Who am I?”, “Why am I here?”, or
“What is the purpose of my life?” This mode of Socratic questioning encourages introspection;
the person discovers the ultimate truth from within. As Socrates suggested in Greece, the
Bhagavad-Gītā tells us that one can find knowledge from within and perfect it by yoga: “There
is no better means of achieving knowledge; in time one will find that knowledge within oneself,
when one is oneself perfected by yoga.”4 “We contemplate that adorable glory of the deity, that
is in the earth, the sky, the heaven! May He stimulate our mental power.”5 This hymn is the
popular Gāyatri-mantra that is chanted over and over by millions of Hindus as a ritual prayer
every day.
The mind is the primal source of our knowledge. Our five senses would not function with-
out the mind. It is the mind that unravels most of the obstacles that we face from day to day. One
can even achieve liberation (moks. a) through the exercise of mind to comprehend the ultimate
truth. For this reason, knowledge has always remained so central to the Hindus. If knowledge
is the key to salvation, then the ownership of knowledge and the spatio-temporal
attributes associated with it become trivial issues. Therefore, the ancient Hindus did
not care much to define the period or author of their knowledge. This practice contrasts greatly
with the Western tradition. As Charles Eliot elucidates: “They [Hindus] simply ask, is it true,
what can I get from it? The European critic, who expects nothing of the sort from the work,
racks his brain to know who wrote it and when, who touched it up and why?”6 Hindu tradition
has subordinated the pride of authorship, invention, or discovery to the self-satisfaction one
gains from discovering the truth and sharing it with the world in the spirit of selfless service to
humanity. While this avoids the cult of personality, it results in a lack of chronological records
of discoveries and inventions. This explains why the Hindus developed such a vast literature and yet
left no chronological records, so valued by Western historians.
The ancient Hindus did not separate the disciplines of astronomy, mathematics, chem-
istry, physics, yoga, and medicine from moral codes, prayers, and the so-called divine literature.
These scientific disciplines were labeled as sacred disciplines, necessary to know the ultimate
knowledge. The Chāndogya-Upanis. ad cites an incident in which the vagabond saint, Nārada,
approaches another sage, Sanatkumāra, to learn about the ultimate knowledge—a knowledge
4Bhagavad-Gītā, 4: 38.
5R. gveda, 3: 62: 10.
6Eliot, 1954, vol. 1, p. LXVII. Charles Norton Edgecumbe Eliot (1862–1931) was a botanist, linguist, and diplomat. He
was the British ambassador to Japan and was fluent in 16 languages and could converse in another 20 languages, including
Sanskrit.
that could provide him freedom from sa ˙msāra (worldly manifestation) and lead to liberation
(moks. a). As all good teachers do, Sanatkumāra asked Nārada to recount his existing knowledge
base, so that an appropriate lesson could be designed. Nārada pointed out astronomy (Naks. atra-
vidyā) and mathematics (rāsi-vidyā), along with logic, history, grammar, fine arts, and the four
Vedas as the knowledge that he had already mastered in his efforts to achieve moks. a.7
In Hindu tradition, secular knowledge (aparā-vidyā) is considered to be helpful in achiev-
ing liberation, along with spiritual knowledge (parā-vidyā), as advised in Mun. d. aka-Upanis. ad.8
When Śaunaka, a seeker, went to A ˙ngirās, a teacher, and asked what it is “by knowing which a man
comes to know the whole world,” A ˙ngirās’ reply included a long list of disciplines, such as as-
tronomy, the sacred Vedas, Sanskrit grammar, etymology, and metrics, as suggested in Mun. d. aka-
Upanis. ad.9
The natural philosophy of Hindus fulfills the spiritual needs of people as well as their
need for rational thinking. It is for this reason that Āryabhat.a I, in his book Āryabhat. īya, cov-
ering astronomy, mathematics, and physics, suggested that one can achieve Brahman (moks. a)
by becoming well versed in the disciplines of astronomy, physics, and mathematics.10 Similarly,
Kan. āda, in his book Vaiśes. ika-sūtra, defined the physical properties of matter and suggested that
the knowledge is helpful in achieving moks. a.11 Also, the Agni-Purān. a suggests that knowledge
of the human anatomy can also lead to moks. a. “Said the God of Fire: Now I shall describe the
system of veins and arteries [Nād. ī-cakra] that are to be found in the human body. A knowledge
of these [arteries and veins] leads to a knowledge of the divine Hari [God].”12
Scientific activities had important functions that were valued in the ancient Hindu society.
For example, the role of astronomers was to fix the calendar, to set dates of religious festivals,
and to predict eclipses and other astronomical events. These disciplines and duties became as
important as composing and promoting moral codes. Learning human anatomy and functions
helped in treating diseases in people and animals.
Knowing science had another important function: One way to know about the Creator
(God) is to learn about God’s creation. Science is an important tool to learn the physical prop-
erties of the created universe. Creation is the physical phenomenon that can tell us about the
Creator. For this reason, Albert Einstein once wrote: “I maintain that the cosmic religious feel-
ing is the strongest and noblest motive for scientific research. . . . in this materialistic age of ours
the serious scientific workers are the only profoundly religious people.”13 Science thus became
a tool to learn about God.
7Chāndogya-Upanis. ad, 7: 1: 2–4.
8Mun. d. aka-Upanis. ad, 1: 1: 3–5.
9Mun. d. aka-Upanis. ad, 1: 1: 5
10Āryabhat. īya, Daśgītika, 13. “Whoever knows Daśgītika Sūtra [ten verses] which describes the movements of the Earth
and the planets in the sphere of the asterisms passes through the paths of the planets and asterisms [stars] and goes to the
higher Brahman [God].”
11Vaiśes. ika-Sūtra, 1: 1: 4.
12Agni-Purān. a, 214: 1–5.
13Einstein, 1930.
Progress in science has never been a hindrance to spiritual growth in the history of Hin-
duism. Knowing the truth was the main focus. Thus, science could grow freely and indepen-
dently and no artificial boundaries within the moral codes limited the scientists in their investi-
gations. This led al-Mas‘udī (d. 957 CE), an Islamic historian during the tenth century, to write
that science and technology were established without the aid of religious prophets in India. In
his opinion, wise men could deduce the principles without the need of religion. It was not the
prophets who dictated the domain of science; it was the logic, intuition, and experience of dili-
gent observers that shaped it. Al-Mas‘udī considered Hind (India)
the land of “virtue and wisdom.”14 This is in contrast to prolonged periods in some parts of
the world where a chasm existed between science and religion and people had to make a choice
between the two. It is well known that Giordano Bruno and Michael Servetus were burned to death and
Galileo was imprisoned when their scientific beliefs were in conflict with the religious doctrines
during the Inquisition period in Europe.
The lofty heights reached by the ancient Hindus in the realm of philosophy and religion are
well recognized and extensive literature exists on these topics.15 However, not much is known
in the popular literature about their contributions to the natural sciences. The sciences of the
ancient Hindus are embedded in their religious books, along with other disciplines. In ancient
India, the various domains of knowledge, including science and religion, progressed hand-in-
hand and grew under the shelter of one another. Religion flourished with the help of science
and science flourished with the multi-faceted development of religion.
2.1 GEOGRAPHY
The current boundaries of India lie in the Northern Hemisphere between 8°4′N and 37°6′N lati-
tudes and from 68°7′E to 97°25′E longitudes. Thus, the latitudinal as well as the longitudinal
extent of India is about 29 degrees. Although India accounts for only 2.4 percent of the world’s
total land area, it sustains 16 percent of the world population. The Tropic of Cancer (23°30′N)
divides the country into two halves: the southern half lies in the tropical zone while
the northern half belongs to the subtropical zone. India has the Himalayan mountain range in the
north, the Vindhya mountain range in the middle, the Indian Ocean in the south, the Thar desert and Pun-
jab plain in the west, forested mountains in the north-east, and the tropical, watershed region
of the Indo-Gangetic Plain in the east. The high Himalayan range, with Mount Everest as its tallest
peak, blocks the frigid wind from the Tibetan Plateau and ensures a temperate
climate in the north. The south, in contrast, is always warm and humid. Thus, the geography of
India provides many different kinds of climate.
With this geography comes vast mineral resources, and a great diversity of flora and fauna.
It was easy for its inhabitants to meet their basic needs for food and shelter. The climate allowed
a lifestyle where they could be close to nature. The high population density in India is sus-
14Khalidi, 1975, p. 102–106.
15Dasgupta, 1922–1955; Durant, 1954; and Radhakrishnan, 1958.
tainable due to the favorable climatic conditions. Ralph Waldo Emerson (1803–1882 CE), an
eminent American philosopher and poet, who lived in the northeast region of America where
the temperature during the winter season is frigid, wrote: “The favor of the climate, making sub-
sistence easy and encouraging an outdoor life, allows to the Eastern nations a highly intellectual
organization, - leaving out of view, at present, the genius of Hindoos [Hindus] (more Orient
in every sense), whom no people have surpassed in the grandeur of their ethical statement.”16
Emerson carefully studied the Hindu literature, including Vedas, Upanis. ads, Bhagavad-Gītā, and
the works of Kālidāsa, a poet. Hindu philosophy was a source of Emerson’s transcenden-
talism and helped him in his quest to define a truly representative man.
Ancient India had acquired great fame as a society rich in spiritual as well as secular dimen-
sions. Therefore, it was visited by many travelers from Greece, Rome, China, and Arabia during
the ancient and early medieval periods who provided accounts of the prosperity in the region.
An early account is from Megasthenes (350–290 BCE), an ambassador of Seleucus I who ruled
India for a short period. Megasthenes writes that the Indians, “having abundant means of subsis-
tence, exceed in consequence the ordinary stature, and are distinguished by their proud bearing.
They are also found to be well skilled in the arts, as might be expected of men who inhale a
pure air and drink the very finest water.”17 Megasthenes is emphatic that the region never suf-
fered from “famine” and “general scarcity in the supply of nourishing food.”18 Strabo (63 BCE
- 21 CE), a Greek geographer and traveler, also visited India and found that the country was
“abounding in herbs and roots,”19 indicating prosperity.
The Chinese traveler Yijing (or I-tsing; 635–713 CE), who lived in India for about 22
years, mostly in Nalanda, a center of learning in modern Bihar, wrote: “. . . ghee [clarified butter],
oil, milk, and cream are found everywhere. Such things as cakes and fruit are so abundant that
it is difficult to enumerate them here.”20 Yijing visited India to acquire knowledge and carried
some 400 Sanskrit texts back with him to China.21
Most Chinese, Greek, Persian, Arabian, and European documents written in different
periods testify to the prosperity of the Indus-Sarasvatī Valley region. It was only during the later part
of the British occupation, in the late nineteenth century, that the region suffered from hunger
and poverty due to colonial exploitation.
2.2 THE POWER OF QUESTIONING: ŚĀSTRĀRTHA (DEBATE) TO ACQUIRE KNOWLEDGE
Curiosity raises questions and questions lead to ideas and creativity. The power of questioning as
a learning tool goes far back into history. The Socratic dialog is a standard tool for learning where
16Emerson, 1904, vol. 8, p. 239. For more information, read Acharya, 2001.
17McCrindle, 1926, p. 30.
18McCrindle, 1926, p. 31.
19Strabo, 15: 1: 22
20Takakusu, 1966, p. 44.
21Takakusu, 1966, p. xvii.
the answer to a question is another question. Socrates (470–399 BCE) did not give lectures or write
books. He propagated a dialectical technique in which he led his students by asking appropriate
questions that demanded critical thinking to arrive at the correct answer. Socrates’ questions
revealed the ways in which his students’ thinking was dogmatic and in error.
A good question is an excellent way to start a conversation. Good questions lure people
to open up about themselves and divulge their thoughts and feelings. Questions are also instru-
mental in allowing you to introspect and find answers. In a group, questions promote
discussion. Questions have the ability to spawn more questions. This process is a hallmark of
learning and has been an essential ingredient of Hindu thought.
In some religions, unquestioning faith in the scriptures is emphasized and used as the
yardstick to judge a person’s spirituality. In contrast, in Hinduism, asking questions is a com-
mon norm. The Bhagavad-Gītā, a sacred book of the Hindus, is a narrative dialog between prince
Arjuna and Lord Kr.s.n. a. Arjuna was not afraid to ask tough questions; he was not afraid of
being labeled a person of no faith. He bluntly asked, “Why should I fight with my own
family members?” Kr.s.n. a, by satisfying Arjuna’s curiosity, could teach the concepts of dharma,
duty, and moks. a. The Bhagavad-Gītā is not the only book with such dialogs in the Hindu cor-
pus; Maitreyī, in the Br. hadāran. yaka-Upanis. ad, raises questions about the futility of wealth and love
for the self. The sacred Rāmāyan. a is a compilation of questions asked by the sage Vālmīki and answers
given by Nārada Muni.
Henry David Thoreau (1817–1862 CE), an American philosopher, naturalist, and au-
thor, praised the Hindus for their openness to new ideas. “The calmness and gentleness with
which the Hindoo [Hindu] philosophers approach and discourse on forbidden themes is ad-
mirable.”22 Thoreau wrote this statement after his studies of the Vedas and Upanis. ads. Truthful-
ness, goodness, and beauty, as marked in satya ˙m, śiva ˙m, and sundara ˙m [Only truth is beneficial
and beautiful], have always been the guiding principles for the Hindus.
Interrogation, cross-examination, debate, symposium, and discussion were well-defined
tools practiced among the Hindus from the ancient period. In the Hindu tradition, scholastic
debate (vāda) is practiced not only in philosophy but also in the sciences and
religion. Questioning is a powerful tool of investigation to discover uncharted territory of
knowledge. Usually multiple possibilities are probed when people try to resolve a question. How
do you select one possible solution over the others? How do you decide, when different people
support different solutions? A debate is a way in which scholars can present their cases; such
debates can easily filter novices from scholars. The practice of debate was so ingrained and valued
among the Hindus that it was used as one of the eight ways in which a woman could select a
groom for herself. Prospective grooms debated with established scholars or among a group of
prospective grooms in public on various issues to win the bride. It was this practice that led
22Thoreau, 1906, vol. 2, p. 3.
Gautama (who later became Lord Buddha) to debate Arjuna, as mentioned in Lalitvistara, an
ancient book (See Chapter 3).23
The Br. hadāran. yaka-Upanis. ad narrates an episode in which King Janaka decided to donate
one thousand cows to the best Brahmin. Yājñavalkya, a revered saint, took all the cows and thus
infuriated several other holy men who also needed the gift for their livelihood. To resolve the
matter, a debate ensued and Yājñavalkya had to demonstrate his superior intellectual abilities
by answering questions posed by other holy men.24 Sages like Aśvala, Jāratkārava Ārtabhāga,
Bhujyu Lāhyāyani, Us.hasta Cākrāyan. a, and Kahola Kaus.ītakeya debated with him and asked
questions on philosophical subjects to which Yājñavalkya provided convincing replies. They all
lost the debate one by one. The sequence of debaters was determined by the rank of these scholars.
In the end came Gārgi Vācaknavī, the daughter of the sage Vācaknu in the lineage of the sage
Gārga, from whom she took her name. But she, too, lost to Yājñavalkya.25
A healthy śāstrārtha (debate or discussion) is an essential element and practice in the
Hindu religion. It has led to the idea of “monism”—there is only one existence (God)—as well
as the idea of “dualism”—there are two separate realities, Him (God) and me (my soul). The
quest to know the ultimate truth led to the evolution of various systems of knowledge which
outwardly seem to be divergent. The Hindu tradition allowed divergent opinions to coexist and
established śāstrārtha as a means to resolve scholarly differences. Each person was allowed to
test and discover the truth in his/her own way. Debate was considered a good way to effectively
formulate the thought process, a good way to understand the subject matter, and to become
established among scholars. Forty-four different forms of debates or discussions are described
in the Caraka-Sa ˙mhitā, demonstrating that debate was a highly evolved system.26 This book advises
deciding the purpose of a debate in advance: for example, is it for curiosity or to subjugate
the opponent? The latter purpose of subjugation was practiced in special situations, e.g.,
śāstrārtha for marriage.
The Sanskrit term ānuvīks. ikī, defined as investigation through reasoning, has a long tra-
dition among the ancient Hindus. It is a tool of investigation that is applicable in all aspects of
learning: scientific, religious, and social. The earliest text on economics, the Arthaśāstra of Kaut.ilya,
lists27 ānuvīks. ikī as one of the four cognitive (vidyā) disciplines, along with trayī (Vedic learn-
ing), dan. d. anīti (jurisprudence), and vārttā (economics). Ānuvīks. ikī is considered a “source
of all knowledge” or a “means for all activities,” and a “foundation for all social and religious
duties.” “When seen in the light of these sciences, the science of ānuvīks. ikī is most beneficial
to the world, keeps the mind steady and firm in weal and woe alike, and bestows excellence
of foresight, speech, and action. Light to all kinds of knowledge, easy means to accomplish all
23Bays, 1983, p. 224.
24The Br. hadāran. yaka-Upanis. ad, 3: 1.
25In contrast to many cultures, females could excel in philosophy and science among the ancient Hindus. The practice
continues even today.
26Caraka-Sa ˙mhitā, Vimānasthāna, 8: 27.
27King, Richard, 1999, p. 34; Arthaśāstra, 1: 2: 6–7.
kinds of acts and receptacle of all kinds of virtues, is the science of ānuvīks. ikī ever held to be,”
suggests Kaut.ilya.28
The medical treatise Caraka-Sa ˙mhitā emphasizes the importance of debate and discus-
sion in the learning process. “Discussion with a person of the same branch of science increases
knowledge and brings happiness. It contributes toward the clarity of understanding, increases
dialectical skills, broadcasts reputation, dispels doubts regarding things heard. . . Hence it is the
discussion with men of the same branch of science, that is applauded by the wise.”29 The Caraka-
Sa ˙mhitā suggests yukti or heuristic reasoning as a valid and independent means of knowledge.
Medical practitioners were advised to free themselves from bias and search for the truth dispas-
sionately.30
According to Caraka, discussions can be friendly or hostile. In the Hindu tradition, even
hostile discussions followed an organized format in which scholars with differing opinions shared
their points of view and debated with each other.31
the victor” or “any insult to the loser.”32 Since knowing truth was the purpose of a debate, it was
the knowledge that became central and not the person. This discouraged a feeling of triumph or
defeat for the participants. It was suggested that all assertions in a debate should be made in a
polite manner. A person in anger can do anything to win, even inappropriate actions. Therefore,
wise people debate in a polite manner.33 This chapter of the Caraka-Sa ˙mhitā reminds us of the
Robert’s Rules of Order that were established in the West centuries later, in the nineteenth
century.34
2.3 RESPECT FOR KNOWLEDGE: THE ROLE OF A GURU
Hinduism does not enforce any undue restraint upon the freedom of human reasoning, the
freedom of thought, or the will of an individual. Hindus’ respect for knowledge is inherent in
the core values of the religion. Respect for learning is obvious from the status that was bestowed
on gurus. A festival, guru-pūrn. imā, is celebrated every year when Hindus offer thanks, love, and
devotion to their gurus. This festival falls on the full moon day (pūrn. imā) in the month of Ās. ād. ha
( June - July). The Śvetāśvatara-Upanis. ad tells us that the Vedic knowledge is automatically
revealed to a person who has the deepest love for God and the same love toward his teacher.35
28Arthaśāstra, 1: 2: 6–7.
29Caraka-Sa ˙mhitā, Vimānasthāna, 8: 15.
30Caraka-Sa ˙mhitā, Sūtrasthānam, 25: 32.
31Caraka-Sa ˙mhitā, Vimānasthāna, 8: 16.
32Caraka-Sa ˙mhitā, Vimānasthāna, 8: 17.
33Caraka-Sa ˙mhitā, Vimānasthāna, 8: 22–23.
34Henry Martyn Robert (1837–1923) was an engineering officer in the U.S. Army. In 1875, he self-published a book,
The Pocket Manual of Rules of Order for Deliberative Assemblies, in two parts. The book gained popularity and the first formal
edition was published in 1876 with a new title: Robert’s Rules of Order. The book is still a classic and frequently consulted by
groups for smooth interactions and discussion.
35Śvetāśvatara-Upanis. ad, 6: 23.
In Hindu tradition, gurus36 have a very high status that is comparable to parents and
God. For example, Kabir, a famous medieval poet, shared a situation where his guru and God
both appeared before him at the same time. In the Hindu tradition, dignitaries are greeted and
honored by the host by touching their feet in the order of their comparative stature. Kabir’s
dilemma was whom he should greet first: his guru or God? Kabir quickly resolved this dilemma as
he realized that the guru should be first, since the guru had enabled him to see God.37 Gurus
shared their own personal experiences with disciples, guided disciples in their interactions with
the community, taught social rules, taught various intellectual disciplines, and most important
of all, became the spiritual guides of disciples. It was the guru who assigned a skilled trade or
profession (varn. a), later known as jāti (or caste), to a disciple after the completion of his or her
education.
Hindus divided the human lifespan of 100 years into four stages (āśrama): brahmacarya
(learning stage), gr. hastha (householder stage), vānaprastha (stage of teaching, and doing com-
munity service) and sanyasa (stage of contemplation and renunciation). Brahmacarya is the first
stage of life where a child, after the infancy stage, joins gurūkula (gurū = teacher, kula = home or
lineage), a boarding school run by a guru in a natural environment (or forest). Parents left their
children in the care of the guru, to live in the school compound with the guru’s family rather than
with their parents, in order to receive an education suited to their aptitudes and inclinations.
Irrespective of the social and economic status of the parents, each child had to live at the same
standard of living as the guru’s family. This allowed disciples to live near the guru, who could
observe their aptitudes and inclinations firsthand. The disciples understood the complex-
ities of life quickly because of their continued association with the guru. Young children not
only gained content-based knowledge from their guru, they also learned virtuous lifestyles and
ethics. This system worked quite well for the ancient Hindus and they excelled in the sciences,
along with other skilled trades and disciplines.38
Gurus were considered to be authority figures, but were not considered infallible. It was
considered a healthy practice for the disciples to raise questions about gurus’ teachings to under-
stand reality in their own way. The Prasna-Upanis. ad (Prasna = question), one of the principal
Upanis. ads, is entirely based on the questioning between disciples and their teacher.
The role of the guru was and still is paramount to most Hindus. It does not stop after
the brahmacarya stage; it continues for the rest of one’s life. Carl Jung, a noted psychoanalyst,
emphasized the importance of the guru in the following words: “. . . practically everybody of a
certain education, at least, has a guru, a spiritual leader who teaches you and you alone what you
36Śiks. ak, ācārya, śrotriya Upādhyāya, purohit are other words for guru.
37Guru Govind dāū khare, kāke lāgu pāya, balihārī guru āpnu jin Govind diyo batāya. My teacher and God are in front of
me. Whom should I prostrate first? It has to be the guru who taught me to recognize God.
38Joshi, 1972.
ought to know. Not everybody needs to know the same thing and this kind of knowledge can
never be taught in the same way.”39
Some of the famous teachers in Hindu history are Kaut.ilya (fl. 300 BCE), teacher of
Candragupta Maurya (reigned c. 321–297 BCE) and Vasis.t.ha (pre-historic), teacher of Lord
Rāma. Kaut.ilya laid the rules of administration for Candragupta, especially for psychological
warfare, political philosophy, and economics, which are compiled in his book, Arthaśāstra. It
was Kaut.ilya’s strategies that ended the Greek rule in a western state of India (now in Pakistan).
Similarly, it was Vasis.t.ha who convinced King Daśaratha to allow Lord Rāma to relinquish the
worldly comforts of his palace and live in a forest to protect sage Viśvāmitra’s gurukula from
rāks. asa (evil mongers).
2.4 SMR. TI (MEMORY), AN ANSWER TO BOOK BURNING
In the gurukulas, the ancient Hindus memorized their literature (mostly poetry) verbatim. The
spoken words, not the written words, have been the basis of literary and scientific traditions of the
Hindus. The people who memorized the texts were highly respected as they became the tools that
could keep the tradition alive. This tradition continues even today. People who memorized Vedas
or Upanis. ads are highly respected in today’s Hindu society.40 This memorization tradition was
facilitated by composing the literature in Sanskrit either as stories or as poetry with a rhythm
or pattern. Stories have characters who are linked to one another. Similarly, it is much easier to
memorize a poem than prose because of the rhythm. Sanskrit grammar was developed by Hindus
to facilitate the composition of poetry. Even math problems were composed in beautiful poetry in
Sanskrit. The first written accounts in India are from the period of Aśoka (r. 269–232 BCE), the
third emperor in line of the Maurya dynasty. He erected stone pillars and rock edicts, with his edicts
inscribed in the stone, all over India. These edicts are a useful guide to life in ancient India.
However, such inscriptions or manuscripts are limited in number.
Yijing (also I-tsing, 635–713), a Chinese traveler who visited India, was impressed when
he met people who could recite hundreds of thousands of verses of Vedas. “The Vedas have been
handed down from mouth to mouth, not transcribed on paper or leaves. In every generation
there exist some intelligent Brahmans who can recite 100,000 verses . . . This is far from being
a myth, for I myself have met such men,” writes Yijing.41
39Jung, Carl, CW Letters, 1973, vol. 1, p. 237. Jung (1875–1961) was an influential psychiatrist and thinker of the twentieth
century. Hindu philosophy played an important role in his theories on symbolism and the unconscious mind.
40Vedi (or Bedi), Dvivedi, Trivedi, and Caturvedi are common last names (surnames) among brahmins, the people who
were entrusted to preserve the Vedas. These names literally symbolize the number of Vedas memorized by them. Dwi means
two and Dwivedi is the surname of a person who has memorized two Vedas. Similarly, tri and catur mean three and four,
respectively. Thus, Trivedi and Caturvedi are the people who have memorized three or four Vedas, respectively. Of course,
today these surnames are inherited from father to children without any connection to memorization.
41Takakusu, 1966, p. 182. I have thought of this statement along with the people who memorized Vedas in India. Suppose
it takes 10 seconds to narrate one verse. To narrate 100,000 verses thus requires one million seconds. One million seconds
equal a period of non-stop chanting of more than 11 days. How can one remember voluminous texts that take more than
11 days to read? Obviously, the person had to breathe in between, take rest, eat food, and sleep. This means that, in a rough
estimate, the person memorized a text that took about 25–30 days to narrate. Is it possible? Historical documents support
such memorization.
Al-Bīrūnī, an Islamic scholar who lived in India for some thirteen years during the
eleventh century, wrote of the importance of poetic literature in popularizing science: “By com-
posing their books in metres [poetry] they [Hindus] intend to facilitate their being learned by
heart, and to prevent people in all questions of science ever recurring to a written text, save
in case of bare necessity. For they think that the mind of man sympathizes with everything in
which there is symmetry and order, and has an aversion to everything in which there is no or-
der. Therefore, most Hindus are passionately fond of their verses, and always desirous of reciting
them, even if they do not understand the meaning of words, and the audience will snap their
fingers in token of joy and applause. They [Hindus] do not want prose compositions, although
it is much easier to understand them.”42
During the Islamic invasion of India, libraries were burnt. As an example, the libraries in
Nalanda and Vikramsila were destroyed around 1200 CE by Bakhtiyar Khilji. However, most
of the sacred literature of the Hindus was easily reproduced because Hindus had memorized the
poetic verses of their sacred literature. The fictional novel Fahrenheit 451 by American writer Ray
Bradbury dramatizes this feat. The title is based on the temperature at which paper catches fire.
This novel deals with a futuristic society where books are outlawed and firemen burn books.
A small group of people countered this situation by memorizing books.
The tradition of memorization of sacred texts (or lack thereof ) had a profound outcome for
world cultures. Comparatively speaking, when the textual riches of Alexandria, China, Baghdad,
and Rome were burned, the glory of these cultures dissipated like smoke in the sky. In contrast,
the Hindus could still salvage much of their textual riches.43
Memorization was facilitated by the abundant use of polysemous words in Sanskrit to
maintain rhythm and tone. In some situations, the meanings of a word are so divergent that
multiple interpretations can be made of the same sentence. It creates a dilemma. What is the
original intended meaning of a verse? To make matters more complex, the authors of the Hindu lit-
erature deliberately intended multiple meanings of their verses, depending on the expertise of
a person. A layperson as well as a scholar could enjoy the hymns. As a result, scholars today
argue with each other to validate their own interpretations. The problem becomes tense when
a native interpretation is deemed biased by foreign experts. In contrast to Catholicism, Ju-
daism, and Islam, the contemporary Hindu literature is mostly produced by non-practitioners.
For example, Max Müller, Monier-Williams, and Rudyard Kipling were devout Christians who
translated the sacred Hindu texts with the intent of changing the religious landscape of India.
Even after a century has elapsed, their translations are still popular in the US, the UK, and Europe.
42Sachau, 1964, vol. 1, p. 137.
43Montgomery and Kumar, 2000.
2.5 YOGA AND MEDITATION FOR SELF-IMPROVEMENT
Our physical body and mind are the basic instruments for all our actions, perceptions, and
thoughts. It is the body and mind (popularly defined as two separate entities) that are the ba-
sic tools of our knowledge, virtues, and happiness. In modern culture, more emphasis
is placed on the body. Gyms have become popular places where people go to exercise and
sculpt their bodies for self-improvement. Our icons are supermodels and superathletes. What
about the mind? Or, more importantly, a union of body and mind?
The first organizing principle underlying human movements and postures is our existence
in the gravitational field. The earth’s gravitational field has an influence on every movement
we make. The combination of the nervous system and skeletal muscles functioning together
in gravity forms the basis of various yoga postures.44 The practice of yoga is perhaps the most
cost-effective and non-invasive treatment for many medical conditions. It is an integrated
science of self-improvement in which body and mind are both nourished, allowing people to reach
their fullest potential.
The Sanskrit term yoga comes from the root ‘yuj’ which means “union,” “join” or “balance.”
The term “union” defines the union of our physical self and the mind for some, while for others,
a union of self with the divine. However, in both unions, a person transcends everyday
mundane existence and reaches his/her fullest potential, which leads to an understanding of the unity of
all living creatures and ultimately to moks. a—the ultimate goal for all Hindus. Body and mind
are the vehicles for the Hindus that allowed them to liberate their soul from the cycle of birth
and death. Body and mind are the Siamese twins that create synergy for this ultimate goal of
liberation.
Yoga is all about balance; it is a balance of mind and body, a balance of strength and
flexibility, a balance in a particular posture, even a balance in breathing from left and right
nostrils that harmonizes the left and right brains. It is a philosophy; it is a way of life. It helps
to avoid sickness. It helps a person to reach his/her best potential. Yoga is perhaps the oldest
effective system of personal development.45 Although the Vedas were the first to mention
the importance of yoga, the Upanis. ads were the first to provide a systematic form of yoga. The
first major treatise of yoga known to us is Patañjali’s Yoga-Sūtra. The Indus seals of Harappa
and Mohenjo-daro depicting human figurines in lotus postures in a meditative state provide
archaeological support for its long existence (Figure 2.1).46
In Patañjali’s yoga-sūtra, yoga is defined as a system of eight limbs (as. t. ā ˙nga-yoga):47
“Restraint (yama), observance (niyama), posture (āsana), breath-control (prān. āyāma), sense-
44Coulter, 2001, 23.
45Coulter, 2001; Cowen, 2010; Eliade, 1969; Feuerstein, 1989; Iyengar, 1966; and Kulkarni, 1972. For a beginner who is
interested in the medical aspects of yoga, Coulter’s book is good. For the philosophical aspects, consult Eliade’s book.
46Worthington, 1982, p. 9.
47Patañjali’s yoga-sūtras, 1: 40.
Figure 2.1: A monk sitting in a lotus posture in a meditative state (taken from Wikimedia).
withdrawal (pratyāhāra), concentration (dhāran. ā), meditative-absorption (dhyāna) and enlight-
enment (samādhi) are the eight members [of Yoga].”48
Yoga postures (āsana) are an outcome of biomimicry practiced by the ancient Hindus.
They observed various life forms—small and big, their life styles, the ways they exercised, the
ways they cured themselves, the ways they relaxed, and the ways they avoided sickness. These
studies evolved into a system of medicine called Ayurveda (science of life). It is no coincidence
that many āsana are named after animals. The names of various postures are either based on
their geometry or their similarity to an object, bird or animal. The following are some popular
āsana: dhanura-āsana (bow posture), garud. a-āsana (eagle posture), krounc-āsana (heron posture),
makar-āsana (crocodile posture), man. dūka-āsana (frog posture), mayur-āsana (peacock posture),
padma-āsana (lotus posture), trikon. a-āsana (triangle posture), vakra-āsana (curved posture), and
hala-āsanas (plough posture).
Yoga is also a science that is cognizant of the bones, muscles, joints, organs, glands and
nerves of the human body (biology). It uses the physics of balance in designing postures, the
physics of motion and balance to allow changes in postures and a deep understanding of the
strength and connectivity of various muscles to make the body strong and flexible. At the same
48Patañjali’s Yoga-Sūtra, 2: 29.
time it is based on a deep understanding of the power and functioning of the mind (psychology)
in controlling the thought processes (meditation) for optimum or holistic health.
The word yoga appears in the R. gveda to denote yoking, connection, and achieving
the impossible.49 By yoga, one gains contentment, endurance of the pairs of opposites, and
tranquility, says the Maitreyī-Upanis. ad.50 “When cease the five senses, together with the mind,
and the thoughts do not stir. That, they say, is the highest course. This they consider as yoga,
firm holding back of the senses. Then one becomes undistracted. Yoga, truly, is the origin and
end,” suggests the Kathā-Upanis. ad.51 In other words, yoga is the alpha and omega of the art of
self-improvement.
Here “origin and end” means that yoga is essentially involved in knowledge and experi-
ence; it is a process at all stages. Yoga is advocated for the knowledge or realization of the self
(ātman).52 Patañjali equated yoga with samādhi (tranquil state) in the very first verse of his book.
Yoga is not a mere abstract speculation of the human mind; it is real with a concrete referent: “[This
supreme ecstasy] is near to [him who is] extremely vehement [in his practice of Yoga].”53
Our body and mind are intimately linked. If the muscles of our body are toned and relaxed,
it is easier for our mind to relax. Similarly, if our mind is anxious, it directs stimuli to our physical
body and changes the chemical composition and physical state of our muscles. The outcome is
the drainage of physical and emotional energies. The ancient Hindus realized the intimate link
of mind and body, and formulated exercises for both. The physical body received its vital energies
through yogic postures (āsana), while mind gained its vital energies through meditation.
An active organ receives a larger flow of blood than an inactive organ. Blood is an essential
ingredient for the proper functioning of the various organs of the body, and these organs get enriched
due to a higher and more efficient transfer of oxygen through our lungs. The yoga postures
work on the body frame as well as on the internal organs, glands, and nerves. Most joints and
organs are put through isometric or other ranges of motion.
Most people take short and shallow breaths throughout the day using their chest. This
kind of breathing does not allow our “lungs to expand and soak up the oxygen.”54 Although
hyperventilation or heavy breathing can be useful in the short term by boosting sympathetic
nervous system activity, a better way to breathe is to use the abdomen and diaphragm, called the
belly breath, and to take slow and steady breaths. You need to use your diaphragm, which is the
muscle underneath your lungs. When the diaphragm flexes, it pulls down and opens the lower
lobes of your lungs, allowing more air inside. Chest breathing comes from stress; it is a reaction
to stress.
49R. gveda, 1: 34: 9; 3: 27: 11; 7: 67: 8; 10: 114: 9; Dasgupta, 1963, vol. 1, p. 226.
50Maitreyī-Upanis. ad, 6: 29.
51Kathā-Upanis. ad, 6: 10–11.
52Kathā-Upanis. ad, 2: 12; Mun. d. aka-Upanis. ad, 3: 2: 6; Śvetāśvatara-Upanis. ad, 1: 3: 6–13; Maitreyī-Upanis. ad, 6: 18, 19,
27.
53Patañjali’s Yoga-Sūtra, 1: 21.
54Dollemore, Giuliucci, Haigh, Kirchheimer and Callahan, 1995, p. 152; Gilbert, 1999a and 1999b.
All yoga exercises start with basic deep breathing techniques along with proper āsanas.
This serves two purposes: first, to bring an optimum amount of oxygen into the lungs; second, to
control the mind by controlling the breath. Deep breathing is not the fast pumping action of our
lungs where we take a fast deep breath and puff it out; it involves a controlled rhythmic action
to fill the lungs with air, to retain it for a brief period, and to exhale slowly. The whole process
has four parts: inhalation (pūraka), retention (kumbhaka), exhalation (recaka), and suspension
(kumbhaka).
The word meditation derives from a Latin word, mederi, which means to heal. It is the
healing of a mental affliction caused by psychological stress. Managing our thought processes is
a key to managing the stress that affects our lives. Meditation can help a person find new ideas
and practical answers to problems. It gives ample stillness to the mind to think properly in order
to make proper judgments. It helps the mind to control emotion without suppression but with
an outlet where the emotional waste could be discarded in order to become more at peace with
the world. For this reason, the Maitreyī-Upanis. ad considers meditation as essential to realize
God, along with knowledge (vidyā) and austerity (tapas).55
Meditation is a process of knowing for the Hindus. The Sanskrit words that reflect medita-
tion are: cintan, dhyāna, or manana. Nowhere in the world was the art of meditation as perfected
as it was by the ancient Hindus. The Śvetāsvetara-Upanis. ad tells us that meditation and yoga
are the way to know the self-power (ātma-śakti) of God within us.56 The Chāndogya-Upanis. ad
tells us that all people who achieved greatness in the past did so with the help of meditation.57
The practice of concentration (ekāgratā or dhārn. a) is the endeavor to control the two generative
sources of mental fluidity: sensory activity (indriya) and subconscious activity (sam. skāra).58 It
is difficult to achieve pursuit of one object (ekāgratā, single-mindedness) with a tired body or
restless mind, and with unregulated breathing.
Meditation can decrease our reaction time, increase alertness and improve the efficiency
of a person. Insomnia, headache, lack of appetite, shaking of the hands and other symptoms
can be either reduced or nearly eliminated. It also helps in asthma, anxiety, high blood pressure,
back pain, heart disease, etc. Concentration of the mind acts like a convex lens focusing the
Sun’s rays. It concentrates thought and imparts remarkable power to the mind’s activities. The
only way to understand the impact of yoga is to go through the experience.
The diaphragmatic movements provide a “massaging action to the heart” as well as to the
inferior vena cava as the latter passes through the diaphragm, thus propelling the blood forward
toward the heart; the diaphragm can thus be labeled a “second heart.”59 At the Universidade Federal de
São Paulo, Brazil, Danucalov et al. investigated the changes in cardiorespiratory and metabolic
intensity resulting from prān. āyāma and meditation during the same hatha yoga session. Nine
55Maitreyī-Upanis. ad, 4: 4.
56Śvetāsvetara-Upanis. ad, 1: 3.
57Chāndogya-Upanis. ad, 7: 6: 1.
58Eliade, 1975, p. 62.
59Thomas, 1993; taken from Gilbert, 1999a
yoga instructors were subjected to analysis of the air exhaled during three periods, each of
30 minutes’ duration: rest, respiratory exercises, and meditation. The oxygen uptake and carbon
dioxide output were found to be statistically different during the active sessions (prān. āyāma
and meditation) compared to the rest phase. In addition, the heart rate showed decreased lev-
els during rest as compared to meditation. Therefore, the results from this study suggest that
meditation reduces the metabolic rate while the prān. āyāma techniques increase it.60
Research has shown that yoga can help people suffering from asthma, heart disease,
high blood pressure, type 2 diabetes, and obsessive-compulsive disorder lower their dosage of
medications, and sometimes eliminate the use of medication altogether.61 Practicing yoga not only relieves
temporary symptoms such as headaches, sinus pressure and hot flashes, but it can also improve
health during more serious medical conditions such as cancer, diabetes, anxiety, and heart dis-
ease, to name a few. “Yoga is not a panacea, but it is powerful medicine indeed for body, mind,
and spirit,” suggests Dr. McCall, a medical doctor who has studied and practiced
yoga.62
60Danucalov et al., 2008.
61McCall, 2007, p. 43.
62McCall, 2007, p. XIX.
CHAPTER 3
The Hindu Mathematics
“Of the development of Hindu mathematics we know but little. A few manuscripts bear testi-
mony that the Indians had climbed to a lofty height, but their path of ascent is no longer trace-
able,” wrote Cajori in his book, A History of Mathematics.1 This indicates the status of scholarship
on Hindu mathematics in 1893 when Cajori first published this book. However, the scholarship
of the last hundred years has filled many gaps in our understanding of Hindu mathematics.
Most mathematics textbooks are devoid of historical anecdotes and stories. Mathematics
is taught as a discipline that someday somehow appeared in fully developed form. Therefore,
students of mathematics do not learn much about the dynamic aspect of the discipline. This led
George Sarton, a leading historian of science and chemistry in the twentieth century and a former
professor at Harvard University, to write, “if the history of science is a secret history, then the
history of mathematics is doubly secret, a secret within a secret.”2 Mathematics is also a
discipline in which non-Western cultures have made significant contributions. Mathematics
owes a lot to its Hindu and Middle Eastern roots.3
3.1 THE HINDU NUMERALS
The number system that we use today in most of the civilized world has come to us from the
ancient Hindus. This system of counting is so simple that it is difficult to realize its profundity
and importance.4 Young children all over the world are generally taught this counting system
first and the alphabet of their native language later. Children are taught to write eleven as one
and one (11), written side-by-side, which they learn without much difficulty. “Our civilization
uses it unthinkably, so to speak, and as a result we tend to be unaware of its merits. But no one
who considers the history of numerical notations can fail to be struck by the ingenuity of our
system, because its use of the zero concept and the place-value principle gives it an enormous
advantage over most of the other systems that have been devised through the centuries.”5
1Cajori, 1980, p. 83.
2Taken from Dauben, Joseph W., Mathematics: a Historian’s Perspective, a chapter in the book by Chikara, Mitsuo and
Dauben, 1994, p. 1.
3Bag, 1979; Colebrooke, 1817; Datta and Singh, 1938; Ifrah, 1985; Joseph, 1991; Rashed, 1996; Smith, 1925; Srini-
vasiengar, 1967.
4Ifrah, 1985, p. 428. Ifrah’s book is still one of the most engaging and scholarly book on numerals in various ethnic
cultures. The other noted books are by Calinger, 1999; Cook, 1997; Katz, 1993; Smith, 1925; Smith and Karpinski, 1911;
Suzuki, 2002.
5Ifrah, 1985, p. 428.
Several civilizations such as the Egyptians, Chinese, and Romans used additive deci-
mal systems, although their symbols were different. In their systems, symbols were repeated and
added to increase magnitude. Thus, in the Roman system, X means 10 and
XX becomes (10 + 10) = 20. Also, XVI amounts to 10 + 5 + 1 = 16. The Romans also used
subtractive symbolism, where IX and IL represent 9 (as 10 − 1) and 49 (as 50 − 1), respectively.
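As a modern aside (not part of the original text), the additive and subtractive readings described above can be expressed as a short procedure. The following is a minimal sketch; the function name and the acceptance of the non-standard form IL are illustrative assumptions, not a standard convention.

```python
# A minimal sketch of additive/subtractive Roman notation, as described above.
ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    """Add each symbol's value; subtract it when a larger symbol follows (e.g., IX)."""
    total = 0
    for i, ch in enumerate(numeral):
        value = ROMAN_VALUES[ch]
        if i + 1 < len(numeral) and value < ROMAN_VALUES[numeral[i + 1]]:
            total -= value          # subtractive pair: smaller symbol before a larger one
        else:
            total += value          # additive case
    return total

print(roman_to_int("XVI"))  # 16 = 10 + 5 + 1
print(roman_to_int("IX"))   # 9  = 10 - 1
print(roman_to_int("IL"))   # 49 = 50 - 1 (the non-standard form mentioned above)
```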
3.1.1 THE WORD-NUMERALS
In an oral tradition, numbers are chosen as words that can be used in a narrative. This happened
with the ancient Hindus where words representing numbers were chosen in verses of the sacred
literature. The word-numerals in the Sanskrit language are formed much the way numbers are
written in the German language. For example, 23 is called three and twenty (drei und zwanzig) in
German, while it is twenty-three in the English system. Similarly, fourteen is called vier-zehn
(four and ten) and fifty-three is called drei und fünfzig (three and fifty). In the Sanskrit language,
the mother language of all Indo-European languages, the language of the Vedas, twelve is written
as two and ten,6 34 as four and thirty,7 53 as three and fifty,8 and 77 as seven and seventy,9 just like the
German system.
For compound numerals, the number of lower order was placed as the qualifier and the higher
as qualified. For example, eleven is defined as ten qualified by the addition of one, thus giving
eka-dasa, translated as one-ten. The number 720 is denoted as seven-hundred-twenty10 and the
number 60,099 is written as sixty-thousand-nine-ninety11 in the R. gveda. The R. gveda mentions
“three thousand and three hundred and nine-thirty (3339)”12 as the count of people in a yajna,
a holy gathering where worship is performed around a fire. The Atharvaveda defines a hundred, a
thousand, a myriad, and a hundred-million.13 Similarly, consistent with the practice in the R. gveda, the
Baudhāyana-śulbasūtras expressed the number 225 as (200 and 5 and 20)14 and the number 187
as (100 + 7 + 80).15
For the numbers provided in the Vedas and Śulbasūtras, the following points are clearly
established:
1. The numbers from one to ten have specific names.
2. After ten, specific words are designated to 20, 30, . . . 100. All other numbers in between
are defined as a combination of these words.
6R. gveda, 1: 25: 8.
7R. gveda, 1: 162: 18; R. gveda, 10: 55: 3.
8R. gveda, 10: 34: 8.
9R. gveda, 10: 93: 15.
10R. gveda, 1: 164: 11.
11R. gveda, 1: 53: 9.
12R. gveda, 10: 52: 6.
13Atharvaveda, 8: 8: 7.
14Baudhāyana-śulbasūtras, 16: 8.
15Baudhāyana-śulbasūtras, 11: 2.
3. After 100, new words are assigned to 1,000, 10,000, 100,000, etc.
4. In word numerals, numbers between 10 and 20 were defined as a number above ten and
then 10. For example, 12 is defined as 2 and 10 (dvi-daśa). A similar practice was followed
for other numbers. For example, 99 is written as (9 + 90), 76 as (6 + 70), etc., as illustrated
in the sketch below.
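The composition pattern summarized in these points can be sketched programmatically. The following is a minimal illustration only (not from the original text): the transliterations are simplified ASCII forms and the sketch ignores Sanskrit sandhi, so the actual compound forms (e.g., dvādaśa for 12) differ from the hyphenated output shown here.

```python
# A minimal sketch of the word-numeral pattern described above: specific words
# for 1-9 and for the tens, with the lower-order part named before the higher.
# Transliterations are simplified and sandhi is ignored (illustrative only).
UNITS = {1: "eka", 2: "dvi", 3: "tri", 4: "catur", 5: "panca",
         6: "sat", 7: "sapta", 8: "asta", 9: "nava"}
TENS = {10: "dasa", 20: "vimsati", 30: "trimsat", 40: "catvarimsat",
        50: "pancasat", 60: "sasti", 70: "saptati", 80: "asiti", 90: "navati"}

def word_numeral(n: int) -> str:
    """Name a number below 100 with the lower-order word first, e.g., 12 -> dvi-dasa."""
    if n in TENS:
        return TENS[n]
    if n < 10:
        return UNITS[n]
    units, tens = n % 10, n - (n % 10)
    return f"{UNITS[units]}-{TENS[tens]}"   # lower order named first, then the ten

print(word_numeral(12))  # dvi-dasa    (2 and 10)
print(word_numeral(76))  # sat-saptati (6 and 70)
print(word_numeral(99))  # nava-navati (9 and 90)
```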
The ancient Hindus wondered about the total number of atoms in the universe, the age of
the universe, the size of an atom, etc. To define these physical quantities, they used very large numbers
as well as very small numbers. Al-Bīrūnī, an Islamic philosopher who lived in India during the
eleventh century, criticized the Hindus for their passion for large numbers: “I have studied the
names of the orders of the numbers in various languages with all kinds of people with whom
I have been in contact, and have found that no nation goes beyond thousand. The Arabs, too,
stop with the thousand, which is certainly the most correct and the most natural thing to do.
Those, however, who go beyond the thousand in their numeral system are the Hindus, at least in
their arithmetical technical terms.”16 This criticism of al-Bīrūnī demonstrates that the ancient
Hindus were simply much ahead of their time. I am fairly confident that if al-Bīrūnī were to
write his book today, he would not criticize the Hindus for their fondness of large numbers.
Al-Bīrūnī mentioned 10¹⁹ as the largest number used by the Hindus.
3.1.2 THE PLACE-VALUE NOTATIONS
The Greeks, Egyptians, and Romans did not use place-value notation in writing numbers, while
the Babylonians, Chinese, Mayans, and Hindus did. The Babylonian system was base-sixty
while the Mayan system was base-twenty. The current place-value system (also called the position-
value system) that is base-10 is certainly Hindu in origin. The uniqueness of the Hindu
system lies in the fact that the position of a digit qualifies its magnitude. Tens, hundreds,
or thousands are not represented by different signs; they are represented by placing digits in
different positions. For example, the one is in the second place in 10 (ten), in the third place in 100
(hundred), and in the fourth place in 1,000 (thousand).
In a positional- or place-value system, a number represented as x₄x₃x₂x₁ can be con-
structed as follows:

x₁ + (x₂ × 10¹) + (x₃ × 10²) + (x₄ × 10³)

where x₁, x₂, x₃, and x₄ are nonnegative integers that have magnitudes less than the chosen
base (ten in our case). As you may have noticed, the magnitude of a number increases from right
to left. For example, the number 1234 will be written as

4 + (3 × 10¹) + (2 × 10²) + (1 × 10³)

Similarly,

12,345 = 5 + (4 × 10¹) + (3 × 10²) + (2 × 10³) + (1 × 10⁴)

16Sachau, 1964, vol. 1, p. 174.
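As a modern illustration (not part of the original text), the place-value rule above reduces to a one-line computation. The function below is a sketch under the stated convention that the first digit supplied is the rightmost digit x₁.

```python
# A minimal sketch of the place-value rule above:
# value = x1 + x2*10^1 + x3*10^2 + ..., where digits[0] is the rightmost digit x1.
def place_value(digits: list[int], base: int = 10) -> int:
    return sum(d * base**i for i, d in enumerate(digits))

print(place_value([4, 3, 2, 1]))       # 1234  = 4 + 3*10 + 2*100 + 1*1000
print(place_value([5, 4, 3, 2, 1]))    # 12345 = 5 + 4*10 + 3*100 + 2*1000 + 1*10000
```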
In India, Āryabhat.a I (born 476 CE) used a positional-value system in describing numbers,
did not bother to explain much about it, and claimed it to be ancient knowledge. This
indicates that the system was already prevalent by his time.17
The earliest written record of the place-value notation comes from Vāsumitra, a leading
figure of Kaniska’s Great Council. According to Xuan Zang (also known as Hiuen Tsang, 602–
664), Kusāna King Kaniska (144–178 CE) called a convocation of scholars to write a book,
Mahāvibhāsa. Four main scholars under the chief monk, named Pārśva, wrote the book in 12
years. Vāsumitra was one of the four scholars. In this book, Vāsumitra tried to explain that
matter is continually changing, as it is defined by an instant (time), shape, mass, etc. Since time
is continually changing, matter is different in each situation, although its appearance
and mass do not change. He used an analogy of the place-value notation to emphasize his point.
Just as the digit one (1) placed in the hundreds position is called a hundred (100) and in the thousands
position is called a thousand (1,000), so matter changes its state (avasthā) in different time
designations.18
New explanations are generally given in terms of known and established facts. Thus, the
very reason Vāsumitra used place-value notation as an example to define change in matter es-
tablishes that the place-value notation was considered as an established knowledge during the
early Christian era.
From a modern perspective, just imagine reading the values of various stocks in a newspaper.
In a quick scan, you can recognize easily that 1089 is greater than 951. All you need to see is
that the first number has four digits while the second number has only three. This is enough
for a quick comparison. In contrast, in Roman numerals, XC (90) is five times greater in
magnitude than XVIII (18). This is not easy to figure out at a quick glance. Also, the mathematical
operations of multiplication, division, addition, and subtraction become much simpler in a place-
value notation.
3.2
FROM ŚŪNYATĀ AND NETI-NETI TO ZERO AND
INFINITY (ANANTA)
Zero and infinity are perhaps the two grandest concepts ever invented in mathematics. Zero is one
of the top hundred discoveries/inventions ever produced in history, as listed by the American Asso-
ciation for the Advancement of Science in its flagship journal, Science, in the year 2000. It is the last
numeral invented by the Hindus, and it prompted the natural philosophers to dump the abacus.
It became much easier to perform calculations on a tablet or paper. “In the simple expedient of
17Āryabhat. īya, Gola, 49–50.
18Ruegg, 1993; Sinha, 1983, p. 130.
cipher [zero], which was permanently introduced by the Hindus, mathematics received one of
the most powerful impulses,” writes Cajori in his book A History of Mathematics.19
Zero is a numeral with the same status as any other numeral, even though it denotes the absence of any
magnitude. The presence of zero indicates a specific absence of the symbols of 1, 2, . . . , 9 at that
location. Zero is thus a sign that defines no value or a missing value in a particular location in a
number. It also defines the starting point in measurements, such as the coordinate axes, meter
sticks, stop watches, and thermometers. Zero is the denial of number in a particular location and
gains its meaning from digits that are on the left of it. Zero plays the role of a number and at the
same time signifies the metaphysical reality of the absence of substance (emptiness). In Hindu
philosophy, the terms śunyata and neti-neti define emptiness which evolved into a mathematical
reality in the form of zero, as explained later.
Zero, as a “void,” is an integral part of the Hindu philosophy. The nirgun. a-rūpa
(nonmanifested-form, amūrta-rūpa) of God is worshiped by the Hindus. In this form, no at-
tributes can be assigned to God. Nothingness (śūnyatā, emptiness or void) as such, in Hindu
tradition, has a substance; it is not an absence of everything with 100% mathematical certitude;
it is an absence of all attributes that are within the realm of māyā (illusion of the manifested
world).
In the nirgun. a manifestation, God is beyond any attributes yet the source of all. He is
nowhere and yet everywhere. The mathematical symbol zero has similar qualities. Zero has no
magnitude and, therefore, is present in every number, as

a + 0 = a

It shows its presence when associated with a number in the decimal system.
The concept of neti-neti (not this, not that; essential for nirgun. a-rūpa, as we cannot
assign any particular attribute to God) dates back to the Vedic and Upanis. adic periods. The
Br. hadāran. yaka-Upanis. ad explains the Supreme God (ultimate reality) by defining God as the
absence of all the attributes (neti-neti).20
The absence of a number in place-value notation has a meaningful function. It does not
have a similar usefulness in any other system that is not place-value. For example, the location
of two digits 5 and 4 can be fifty four (54), five hundred four (504), five hundred forty (540),
etc., depending on the location of the two digits. It is only natural to assign a symbol—a circle
or a dot—for this absence of a number, for convenience. This is how zero became so central to
the place value system and mathematics.
The concept of zero was known philosophically in ancient India during the Upanis. ad
period. We do know that zero was used as a mathematical entity in Chandāh. -sūtra. When it
evolved from a philosophical entity into a mathematical reality is not clearly established.
In a short aphorism, to find the number of arrangements of long and short syllables in a meter
containing n syllables, zero is defined: “[Place] two when halved, when unity is subtracted then
19Cajori 1980, p. 147; Accounts on the invention of zero are provided by Bronkhorst, 1994; Datta, 1926; Gupta, 1995;
Kak, 1989, 1990; Ruegg, 1978. Ruegg has provided perhaps the best review that includes philosophical insights and historical
developments.
20Br. hadāran. yaka-Upanis. ad, 3: 9: 26.
(place) zero . . . multiply by two when zero . . .”21 Pi ˙ngala was the younger brother of eminent
grammarian Pān. ini, who lived near Peshawar around 2850 BCE.22
King Devendravarman of Kalinga, Orissa inscribed his deed on a copper plate in 681 CE.
This deed provides an archaeological evidence of place-value notation. It lists twenty as two
and zero (20) in a place-value notation.23 In the Chaturbhuj Temple, Gwalior Fort, in Gwalior,
India, a plaque is mounted where the number 270 is listed with zero as a symbol. The plaque is about
1500 years old. The Bakhshālī manuscript mentions śūnya for zero at hundreds of places in the
text.24 By the seventh century, this concept had already reached the Far East, where an inscription
of zero, in the writing of the number 605, is carved in sandstone. This stone was discovered at the archaeological
site of Trapang Prei, in Kratie province, in northeastern Cambodia.
By the eighth-century, the concept of zero was already in China. Zero is mentioned and
symbolized as a dot in Chapter 104 of the Kaiyun Zhanjing (Astronomico-astrological Canon), a
book written during the reign of Kaiyun (713–741 CE): “Each individual figure is written in
one piece. After the nine, the ten is written in the next row. A dot is always written in each empty
row and all places are occupied so that it is impossible to make a mistake and the calculations
are simplified.”25 The dot is an obvious reference to zero. This book was written by Qutan Xida
(Gautama Siddārtha), an Indian scholar who settled in China, between 718 and 729 CE during
the Tang dynasty.26 This book also contains Āryabhat.a’s sine tables.
When the Arabs learned about zero, they literally translated the Sanskrit word śūnya
(empty) into sifr (empty) in Arabic. “While the Arabs, as we have learned, did not invent the
cipher [zero], they nevertheless introduced it with the Arabic numerals into Europe and taught
Westerners the employment of this most convenient convention, thus facilitating the use of
arithmetic in everyday life . . . al-Khwārizmī . . . was the first exponent of the use of numerals,
including the zero, in preference to letters. These numerals he called Hindu, indicating the In-
dian origin.”27 Leonardo Fibonacci (1170–1250) called zero zephir in Latin in his book, Liber
Abaci.28 Adelard of Bath (1080–1152) used the term cifrae in his translation of al-Khwārizmī’s
21Pi ˙ngala’s Chandāh. -sūtra, 7: 28, 29, 30; Datta and Singh, 1938, vol. 1, p. 75. This quotation does not indicate the origin
of number zero but provides a testimony that the zero was used as a mathematical number in India. Similar accounts of zero
are also made elsewhere in the same manuscript (Chandāh. -sūtra, 3: 2 and 17; 4: 8, 11, and 12; 18: 35, 44, 48 and 51).
22Shyam Lal Singh, Pi ˙ngala Binary Number, in the book by Yadav and Mohan, 2011, p. 121.
23Filliozat, 1993.
24Hayasi, 1995, p. 210, 213. The Bakhshālī manuscript consists of seventy fragmentary leaves of birch bark and is presently
preserved in the Bodleian Library at Oxford University. The original size of a leaf is estimated to be about 17 cm wide and 13.5
cm high, containing mathematical writings. The manuscript was accidentally found in 1881 near a village, Bakhshālī, that is
now near Peshawar in Pakistan. In his detailed analysis, Hayasi assigns the seventh century CE as the date when this manuscript
was written. However, researchers at the Bodleian Library at Oxford University recently investigated an old copy of the Bakhshālī
manuscript using carbon dating and found it to be written during the third or fourth century, some five hundred years earlier
than previously thought. For more information, http://www.bodleian.ox.ac.uk/news/2017/sep-14.
25Martzloff, 1997, p. 207
26Yoke, 1985, p. 83
27Hitti 1963, p. 573.
28Horadam, 1975; Sigler, 2002, p. 17.
Zīj al-Sindhind.29 The word zero in the English language evolved from the terms in Latin and
Italian.
Opposite to the nirgun. a-rūpa approach to define God, the ancient Hindus tried to assign
attributes connected to God. Once you start the process of assigning attributes, there is no limit;
you can continue forever. This led to the concept of infinity. For this reason, in the ancient
literature of the Hindus, God is also named ananta (infinity). The Br. hadāran. yaka Upanis. ad
(5: 1) tells us: “The world there is full; The world here is full; Fullness from fullness proceeds.
After taking fully from the full, It still remains completely full.” When you take away something
from a given quantity, it becomes less. Simple arithmetic dictates this. However, it all fails when
we deal with infinity. You subtract something from infinity, it still remains infinity. So when you
take away “fully from the full,” the remaining is still the same, infinity. In Surya Prajnapati, a
text that was written around 400 BCE, numbers are defined as enumerable, innumerable, and
infinite. This text defined infinity in one direction, infinity in two directions (infinity in area),
infinity in three directions (infinity in volume), and perpetually infinite.
3.3 THE BINARY NUMBER SYSTEM
A system in which numbers are represented as linear combinations of powers of two (2) is called
the binary number system. This is a positional-numeral system employing only two kinds of bi-
nary digits, namely 0 and 1. The importance of this system lies in the convenience of representing
decimal numbers using a two-state system in computer technology. A simple “on-off ” or “open-
closed” system can effectively represent a number. Similarly, a set of condensers “charged” or
“not-charged” can represent a number, or a set of two different voltages on a device can effec-
tively represent a number. For this reason, the binary number system is popular in electronic
circuitry.
The binary number system is another example of a place-value notational system.
In this system, the number 3 can be written as (1 × 2¹) + 1. In the binary system, we write 3
as 11. To understand the system, a few examples are provided in Table 3.1.
Using this system, a somewhat larger number, 444, is represented as 110111100 [(1 × 2⁸) +
(1 × 2⁷) + (0 × 2⁶) + (1 × 2⁵) + (1 × 2⁴) + (1 × 2³) + (1 × 2²) + (0 × 2¹) + 0].
To convert a number into a binary number, divide the number by two. If the remainder is
one, then write one (1). If it divides evenly, then write zero (0). For example, to write 45 in the
binary system, let us divide by two:
45 ÷ 2. The resultant is 22 and the remainder is 1. Let us write 1. 22 ÷ 2. The resultant is 11 and
the remainder is 0. Let us write 0. 11 ÷ 2. The resultant is 5 and the remainder is 1. Let us write 1.
5 ÷ 2. The resultant is 2 and the remainder is 1. Let us write 1. 2 ÷ 2. The resultant is 1 and the
remainder is 0. Let us write 0. 1 ÷ 2. The resultant is 0 and the remainder is 1. Let us write 1.
29Neugebauer, 1962, p. 18.
Table 3.1: Decimal Numbers and Their Binary Equivalent

Decimal Numbers                                        Binary Equivalent
1 = (0 × 2¹) + 1                                       01
2 = (1 × 2¹) + 0                                       10
3 = (1 × 2¹) + 1                                       11
4 = (1 × 2²) + (0 × 2¹) + 0                            100
5 = (1 × 2²) + (0 × 2¹) + 1                            101
6 = (1 × 2²) + (1 × 2¹) + 0                            110
7 = (1 × 2²) + (1 × 2¹) + 1                            111
8 = (1 × 2³) + (0 × 2²) + (0 × 2¹) + 0                 1000
9 = (1 × 2³) + (0 × 2²) + (0 × 2¹) + 1                 1001
10 = (1 × 2³) + (0 × 2²) + (1 × 2¹) + 0                1010
11 = (1 × 2³) + (0 × 2²) + (1 × 2¹) + 1                1011
15 = (1 × 2³) + (1 × 2²) + (1 × 2¹) + 1                1111
17 = (1 × 2⁴) + (0 × 2³) + (0 × 2²) + (0 × 2¹) + 1     10001
By taking the remainders in the reverse order, we can write 45 in binary number: 101101. This
number implies (1 × 2⁰) + (0 × 2¹) + (1 × 2²) + (1 × 2³) + (0 × 2⁴) + (1 × 2⁵), which converts to
(1 + 4 + 8 + 32) and equals 45. Interestingly, this rule is provided in Pi ˙ngala-Sūtra (8: 24–25) in
a cryptic language which is nicely explained by Barend A. van Nooten using historical documents
that provided commentary to Pi ˙ngala's verse.30 Recently, Shyam Lal Singh has explained the
rules of poetic metrics in an article, Pi ˙ngala Binary Numbers, in the book, Ancient Indian Leaps
into Mathematics.31
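For illustration only, the repeated halving described above can be sketched in Python; the helper name to_binary is an invented one and is not anything from Pi ˙ngala or his commentators.

```python
def to_binary(n):
    """Convert a positive integer to its binary string by repeated division
    by two, collecting the remainders in reverse order."""
    bits = []
    while n > 0:
        bits.append(str(n % 2))   # remainder of the current division: 0 or 1
        n //= 2                   # quotient carried to the next step
    return "".join(reversed(bits))

assert to_binary(45) == "101101"
assert to_binary(444) == "110111100"
```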
The ancient Hindus carefully defined the methodology to write hymns which involved the
study of language and prosody. This allowed a verse to have specific rhythm (metrical structure)
in chanting. The first comprehensive treatise that is known to us was written by Pi ˙ngala, called
Chandāh. -sūtra (or Candāh) or Chandāh. -śāstra.32
In Sanskrit, the science of versification (prosody) consists of verse feet, padas, that are
composed of syllables: light (laghu) and heavy (guru). A knowledge of metrics is essential which
is determined by the permutations and combinations of short and long syllables. Each verse
usually consists of a set of four quarter verses, called pādas. All verses contain the same number
of syllables. For example, most verses in the Sanskrit language are either 8-, 11-, or 12-syllabic.
30van Nootan, 1993. Barend A. van Nooten is a former professor of Sanskrit at the University of California at Berkeley.
He is known for his co-authored books, Rigveda: A Metrically Restored Text and The Sanskrit Epics.
31Yadav and Mohan [Editors], 2011.
32Weber, 1863; taken from van Nooten, 1993, also reproduced in Rao and Kak, 1998.
These quarters (pādas) are again subdivided into various groups or subgroups, depending
on the number of syllables (aks. ara) in each quarter and the placement of short and long syllables.
The vowels, such as a, i, u, r., and l. are short syllables while ā, e, ai, ī, o, au, and ū are long syllables.
There are some more rules to define the short and long syllables which are beyond the scope of
this book.33
Pi ˙ngala gave the following rule: “(Place) two when halved” “when unity is subtracted then
(place) zero;” “multiply by two when zero;” “square when halved.”34 It was van Nooten’s expertise
in Sanskrit metrics that allowed him to discover this unique binary system. In this system, each
syllable is assigned a numerical value, based on its position in the meter. The work of Pi ˙ngala
in Candāh. -sūtra definitely shows that he knew the place-value system of numeric notations and
used a binary numerical base, and not base-10.35
The discovery of binary numbers is generally attributed to Gottfried Leibniz (1646–1716
CE) at the end of the 17th century. Leibniz is said to have come up with the idea when he interpreted
Chinese hexagram depictions of Fu Hsi in I-Ching (The Book of Changes) in terms of a binary
code.36 Pi ˙ngala, in van Nooten’s views, did not provide the further applications of the discovery.
However, this knowledge was available to Sanskrit scholars of metrics.37 “Unlike the case of
the great linguistic discoveries of the Indians which directly influenced and inspired Western
linguistics, this discovery of the theory of binary numbers has so far gone unrecorded in the
annals of the West,” remarks van Nooten.
3.4 THE FIBONACCI SEQUENCE
Leonardo Fibonacci (1170–1250) was an Italian mathematician who popularized Hindu nu-
merals in the Western world by writing a book, Liber Abaci. This book played an important role
in the growth of mathematics in Europe. Fibonacci came in contact with Hindu mathematics
during his stay in Bugia, located on the Barbary Coast of Africa. In Fibonacci’s own account, his
father was a public official there to help the visiting Pisan merchants. His father wanted
Fibonacci to learn mathematics for “a useful and comfortable future.” He arranged some lessons
in mathematics for the young Fibonacci from well known scholars. This is how Leonardo learned
about Hindu numerals. Leonardo recognized the superiority of this new system over his native
Roman numeral system and wrote the above-mentioned book about it.
The Fibonacci sequence is connected with cumulative growth, and plays a role in various
number games and natural phenomena, including a botanical phenomenon called phyllotaxis
33For more information, read Singh, Shyam Lal, Pi ˙ngala Binary Numbers, in the book, Ancient Indian Leaps into Mathe-
matics. Van Nooten, 1993 and Datta and Singh, 1962 are the other resources to understand this system.
34Datta and Singh, 1962, vol. 1, p. 76; van Nooten has provided a slightly different translation. However, both systems
provide similar results.
35van Nooten, 1993
36Loosen and Vonessen, 1968, p. 126–131; reference taken from van Nooten, 1993.
37van Nooten, 1993.
where the arrangement of leaves on a stem is studied.38 The seed pattern in a sunflower
follows the Fibonacci sequence.
The following is the problem that Fibonacci posed in Liber Abaci that is well known today
as the Fibonacci sequence: “A certain man had one pair of rabbits together in a certain enclosed
place, and one wishes to know how many are created from the pair in one year when it is the
nature of them in a single month to bear another pair, and in the second month those born to
bear also. Because the above written pair in the first month bore, you will double it; there will
be two pairs in one month. One of these, namely the first, bears in the second month, and thus
there are in the second month 3 pairs; of these in one month 2 are pregnant, and in the third
month 2 pairs of rabbits are born, and thus there are five pairs in the month; in this month 3
pairs are pregnant, and in the fourth month there are 8 pairs, of which 5 pairs bear another 5
pairs; these are added to the 8 pairs making 13 pairs in the fifth month; these 5 pairs that are
born in this month do not mate in this month, but another 8 pairs are pregnant, and thus there
are in the sixth month 21 pairs; to these are added the 13 pairs that are born in the seventh
month; there will be 34 pairs in this month; to this are added the 21 pairs that are born in the
eighth month; there will be 55 pairs in this month; to these are added the 34 pairs that are born
in the ninth month; there will be 89 pairs in this month; to these are added again the 55 pairs
that are born in the tenth month; there will be 144 pairs in this month; to these are added again
the 89 pairs that are born in the eleventh month; there will be 233 pairs in this month. To these
are still added the 144 pairs that are born in the last month; there will be 377 pairs, and this
many pairs are produced from the above written pair in the mentioned place at the end of the
one year.”39
Based on the assumptions, at the beginning of the second month there will be two pairs.
After the second month, there will be three pairs. In the third month there will be 5 pairs. In
the consecutive months there will be 8, 13, 21, 34, 55, 89, 144, 233, and 377 pairs. Thus, 1, 2, 3,
5, 8, 13, 21, 34, 55, 89, 144, 233, and 377 pairs of rabbits will be available. As one can notice,
in this number sequence,

xₙ = xₙ₋₁ + xₙ₋₂

Also,

xₙ₊₁ × xₙ₋₁ = xₙ² + (−1)ⁿ
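A short Python sketch (my own illustration) generates the sequence from the recurrence and spot-checks the second identity; note that the identity as written assumes the standard indexing x₁ = x₂ = 1, whereas the rabbit count above begins 1, 2, 3 because the first pair bears immediately.

```python
def fibonacci(count):
    """Return the first `count` Fibonacci numbers, 1, 1, 2, 3, 5, ...,
    built from the recurrence x_n = x_(n-1) + x_(n-2)."""
    seq = [1, 1]
    while len(seq) < count:
        seq.append(seq[-1] + seq[-2])
    return seq[:count]

x = fibonacci(15)
# Spot-check x_(n+1) * x_(n-1) = x_n**2 + (-1)**n (1-based indices).
for n in range(2, 14):
    assert x[n] * x[n - 2] == x[n - 1] ** 2 + (-1) ** n
```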
It was the French mathematician Édouard Lucas (1842–1891) who named this mathe-
matical series "the Fibonacci sequence." Later, an Oxford botanist, A. H. Church, recognized
that the number of seeds in the spiral pattern of a sunflower head matches the Fibonacci sequence
numbers. In 1963, an International Fibonacci Society was formed to study related topics.40
In India, this sequence appeared in the science of hymn-composing, or metrics, just like
the binary numbers that were discussed in the previous section. In one particular category, the
38For more information, see Hoggatt, 1969.
39Sigler, 2002, p. 404–405.
40Gies and Gies, 1969, p. 81–83.
number of variations of meters having 1, 2, 3, . . . morae (syllabic instant) are 1, 2, 3, 5, 8, 13,
. . ., respectively, the so-called Fibonacci numbers. Ācārya Virāhanka (lived sometime between
600–800 CE), Gopāla (before 1135 CE) and Hemachandra (ca. 1150 CE) had provided this
sequence before it was suggested by Leonardo Fibonacci. To understand this metrical science, a
knowledge of Sanskrit is required. Readers can get more information on this from the work of
Permanand Singh that is published in a prestigious journal, Historia Mathematica. He concludes
that “the concept of the sequence of these numbers in India is at least as old as the origin of the
metrical sciences of Sanskrit and Prakrit poetry.”41
3.5 THE SQUARE-ROOT OPERATION
The approximate value of the square root of the number 2 can be calculated using an arithmetical
equation that is an alternative form of the Pythagorean theorem, as explained later in Section 3.8.
This theorem is provided in several Śulbasūtras.42 If we apply the expression to a square of side
1 in any unit system, we can find the value of √2. According to this theorem, the diagonal of a
right-angle triangle with two equal sides (a) is
L = a + a/3 + a/(3 × 4) − a/(3 × 4 × 34)

For a = 1, this gives us the approximate value of √2:

√2 = L = 1 + 1/3 + 1/(3 × 4) − 1/(3 × 4 × 34) = 577/408 ≈ 1.41
This is the accepted value. If one tries to get higher accuracy in the results, the value is
correct to the fifth decimal place, 1.41421. After the fifth decimal place, the value given in this
formula is slightly higher (1.4142156) than the actual value (1.4142135).
Professor John F. Price of the University of New South Wales, Australia, provides the
rationale of this formula in Baudhayana Śulbasūtra.43 According to him, we know the value of
√2 is between 1 and 2. If we equate √2 to 1 or 2 and square both sides we get 2 on one side
while 1 or 4 on the other side. Using similar considerations, we know that √2 will be less than
(1 + 1/2) and more than (1 + 1/3). Therefore, our first initial approximation is

√2 ≈ 1 + 1/3

If we improve this value by adding a small term (x) to our value of √2, square both sides,
assume x² to be quite small and neglect it, we go through the following sequence:
41Singh, 1985.
42Baudhayāna-Śulbasūtra, 2: 12; Āpastambā-Śulbasūtra, 1: 6; and Kātyāyana-Śulbasūtra, 2: 9.
43Price, in Gorini, 2000, p. 46–55.
√2 = 1 + 1/3 + x = 4/3 + x

2 = (4/3 + x)²

2 = (4/3)² + (8/3)x + x²

In our next approximation,

x = 1/(3 × 4)

√2 ≈ 1 + 1/3 + 1/(3 × 4) + y

This reduces to

√2 = 4/3 + 1/(3 × 4) + y

Again, square and ignore the y² term. The next step yields

2 = 16/9 + 1/(9 × 16) + (2 × 4)/(3 × 3 × 4) + 2y[4/3 + 1/(3 × 4)]

If we simplify this equation and calculate y, we get

y = −1/(3 × 4 × 34)

If we continue further, the next approximation will be

L = 1 + 1/3 + 1/(3 × 4) − 1/(3 × 4 × 34) − 1/(3 × 4 × 34 × 2 × 577)
This will give us √2 = 1.414213562374, a value accurate up to the thirteenth place. This
approximation was done by David W. Henderson of Cornell University44 and Price.45 It is inter-
esting that the Śulbasūtra does mention that the value is only approximate and a little bit higher
than the actual value (saviśes. a).46 This process is called the method of successive approximations
in most mathematics books, a modern technique.
44Read, Square Roots in the Śulba Sūtras by Henderson, in Gorini, 2000, p. 39–45.
45Price, John F., Applied Geometry of the Śulba Sūtras, in the book by Gorini, 2000, p. 46–55.
46Read, Square Roots in the Śulba Sūtras by Henderson, in Gorini, 2000, p. 39–45.
This process continues with more terms added. It always remains only an approximate
value. Depending on the type of accuracy needed in everyday situations, the ancient Hindus felt
it appropriate to stop after reaching the fifth decimal place. Henderson also compares the
popular "divide-and-average" method, also called Newton's method, with Baudhayāna's
method and concludes that Baudhayāna's method "uses significantly fewer computations" than
Newton's method to reach the same order of accuracy.
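As a rough numerical illustration (my own sketch, not the original derivation), the Śulbasūtra value and the divide-and-average iteration can be compared in Python:

```python
import math

# Baudhayana's correction series for the diagonal of a unit square.
sulba = 1 + 1/3 + 1/(3*4) - 1/(3*4*34)
print(sulba, abs(sulba - math.sqrt(2)))            # 1.4142156..., error of about 2e-6

# One more correction term, as in the Henderson/Price continuation (1154 = 2 x 577).
sulba_next = sulba - 1/(3*4*34*1154)
print(sulba_next, abs(sulba_next - math.sqrt(2)))  # error of roughly 1e-12

# "Divide-and-average" (Newton's) method for comparison, starting from 1.5.
x = 1.5
for _ in range(3):
    x = (x + 2 / x) / 2   # average the guess with 2 divided by the guess
print(x, abs(x - math.sqrt(2)))
```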
3.6 ALGEBRA
Al-Khwārizmī, Muhammad ibn Mūsā is one of the earliest known astronomers and mathemati-
cians from the glorious period of Baghdad/Islam. He was native to the Khwarizm Region in
Persia, as the name suggests. He later moved to Baghdad and served in the court of al-Mamūn
(813–833 CE). His best known works are: Kitāb al-jabr wa‘l Muqābala (The Book of Manipula-
tion and Restoration), Kitāb al Hisāb al-Hindī (Book of Indian Mathematics), and Zīj al-Sindhind
(Astronomy Table from India). The last two books are obviously the works of the Hindus as the
titles suggest. Even the third book was based on the work of the Hindus. The impact of al-
Khwārizmī was so great in Europe that the title of one of his books became synonymous with the
theory of equations. The word "algebra" stemmed from the word al-jabr which appeared in the
title of al-Khwārizmī's book. Al-jabr literally means bone-setting, indicating the manipulation of
equations. Al-Khwārizmī played a crucial role as a disseminator of science. Latinization of his
name took many forms: from algorizmus to algoritmus to algorithmus. The mathematical term,
algorithm, has possibly stemmed from his Latinized name.47
John Wallis (1616–1703), an English mathematician who taught at Oxford University, wrote
a book, A Treatise of Algebra both Historical and Practical, in 1685. He is credited with introducing
the symbol for infinity, ∞. In this book, he writes: “However, it is not unlikely that the Arabs,
who received from the Indians the numerals figures (which the Greeks knew not), did from
them also receive the use of them, and many profound speculations concerning them, which
neither Latins or Greeks did know, till that now of late we have learned them from thence.
From the Indians also they might learn their algebra, rather than from Diophantus, (who only
of the Greeks wrote of it, and he but late, and in a method very different from theirs) . . . And
the name they [Arabs] gave it (Al-gjabr W‘al-mokabala) seems to have no affinity with any Greek
name . . . ”48
By the fifth century CE, the following are some of the rules that were known to the ancient
Hindus. All these rules are taken from Āryabhat. īya. Āryabhat.a I did not provide the derivation
of the rules in most cases and called the knowledge ancient.
47For more information, read Crossley and Henry, 1990; Hughes, 1989; King, 1983; Sesiano, in the book by Selin, 1997.
48Wallis, 1685, page 4, Chapter 2.
3.6.1 SUM OF A SERIES
“The desired number of terms minus one, halved, then increase by the number of the preceding
terms (if any), multiply by the common difference between the terms, and then increase by
the first term of the (whole) series. The result is the arithmetic mean (of the given number of
terms). This multiplied by the given number of terms is the sum of the given terms. Alternatively,
multiply the sum of the first and last terms by half the number of terms.”49
This rule can be written mathematically for a series with initial term a and common difference
d between the terms:

a + (a + d) + (a + 2d) + ⋯ + [a + (n − 1)d]

The sum of the series for n terms is:

Sₙ = n[((n − 1)/2)d + a] = (n/2)[a + (a + (n − 1)d)] = (n/2)[first term + last term]
These results are correct.
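A quick Python check (an illustration of my own, with arbitrary values of a, d, and n) confirms the rule against direct summation:

```python
def aryabhata_sum(a, d, n):
    """Sum of an arithmetic series by the rule quoted above: the mean term
    ((n - 1) / 2) * d + a, multiplied by the number of terms n."""
    return n * (((n - 1) / 2) * d + a)

a, d, n = 7, 3, 25
direct = sum(a + k * d for k in range(n))
assert aryabhata_sum(a, d, n) == direct == (n / 2) * (a + (a + (n - 1) * d))
```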
3.6.2 SUM OF A SERIES WITH Σn² AND Σn³
“The continued product of the three quantities viz, the number of terms, number of terms plus
one, and twice the number of terms increased by one when divided by 6 gives the sum of the
series of squares of natural numbers. The square of the sum of the series of natural numbers gives
the sum of the series of cubes of natural numbers.”50
Mathematically, the above verse translates as follows:
For the series 1² + 2² + 3² + ⋯ + n², the sum equals n(n + 1)(2n + 1)/6. This provides the
correct value for all values of n.
Similarly, for the series 1³ + 2³ + 3³ + ⋯ + n³, according to Āryabhat.a I, the sum equals
(1 + 2 + 3 + ⋯ + n)². This reduces to [n(n + 1)/2]², which provides the correct result.
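Both formulas are easy to spot-check numerically; the following Python lines are my own illustration, not part of the original text.

```python
for n in range(1, 50):
    # Sum of squares: n(n + 1)(2n + 1) / 6
    assert sum(k**2 for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    # Sum of cubes: (1 + 2 + ... + n)^2 = [n(n + 1) / 2]^2
    assert sum(k**3 for k in range(1, n + 1)) == (n * (n + 1) // 2) ** 2
```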
3.6.3 SOLUTION TO A QUADRATIC EQUATION
Āryabhat.a I also gave the solution of a complex money-transaction problem: “Multiply the sum
of the interest on the principal and the interest on this interest by the time and by the principal.
Add to this result the square of the half of the principal. Take the square-root of this. Subtract
49Āryabhat. īya, Gan. itpada, 19.
50Āryabhat. īya, Gan. itpada, 22.
half the principal and divide the remainder by the time. The result will be the interest on the
principal.”51
The problem can be written as follows: “A certain amount of money (P for principal
money) was given on loan for one month with unknown interest x. The unknown interest x
that was accrued in one month was again loaned for time T months. On the completion of this
period, the original interest x and the interest on this interest, all together, became I. Find out
the rate of interest x on the amount P.” The answer provided by Āryabhat.a I is as follows:
x = [√(IPT + (P/2)²) − P/2] / T
The solution provided by Āryabhat.a I is correct.
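To see that the formula indeed solves the underlying problem, here is an illustrative Python sketch of my own, with arbitrary example values for P, T, and the true monthly interest x:

```python
from math import isclose, sqrt

def monthly_interest(P, T, I):
    """Āryabhata I's solution x = (sqrt(I*P*T + (P/2)**2) - P/2) / T, where P is
    the principal, T the time in months, and I the total returned by re-lending
    the first month's interest x at the same rate for T months."""
    return (sqrt(I * P * T + (P / 2) ** 2) - P / 2) / T

P, T = 100.0, 6.0
x = 5.0                      # assume a true monthly interest of 5 on a principal of 100
I = x + x * (x / P) * T      # x plus the interest accrued on x over T months
assert isclose(monthly_interest(P, T, I), x)
```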
3.7 GEOMETRY
The ancient Hindus used an elaborate knowledge of geometry for the construction of altars
for religious purposes, for arranging various battalions of soldiers in wars, and for planning the
design of their cities.52 In scientific warfare, a smaller army can prevail with proper geometri-
cal constructions in achieving their tactical goals. For example, Pān. d. ava in Mahābhārata used
their much smaller army and these tactical geometrical constructions against a much larger army
of Kaurava. Cakravyūha was an important geometrical construction of the placements of war-
riors to trap enemy warriors during the period of Mahābhārata. Abhimanyu, son of Arjuna, as
mentioned in Mahābhārata, knew how to break and enter into the geometrical fortification of
cakravyūha. However, he did not know the ways to come out of it. He was trapped and lost
his life. In Mahābhārata war, multiple geometrical constructions, defined by their similarity to
mostly animal or object shapes, were used in organizing soldiers to fight.
The word śulba means a chord, a rope, or a string and Śulbasūtra signifies geometry using
strings. The Śulbasūtra are not the books of geometry or mathematics; these books deal mostly
with rituals. The knowledge of geometry and mathematics is used to perform the rituals. One
can consider them as the “first applied geometry text in the world” to “combine geometry and
numerical techniques.”53
3.7.1 TRANSFORMING A SQUARE INTO A CIRCLE
“If it is desired to transform a square into a circle, (a cord of length) half the diagonal (of the
square) is stretched from the center to the east (a part of it lying outside the eastern side of the
square); with one-third (of the part lying outside) added to the remainder (of the half diagonal),
the (required) circle is drawn.”54
51Āryabhat. īya, Gan. itpada, 25.
52Datta, 1932; Kulkarni, 1983; Sarasvati Amma, 1979; Sen and Bag, 1983; Staal, 1999; Thibaut, 1875.
53Henderson, in Gorini, 2000
54Baudhayāna-Śulbasūtra, 2: 9.
Let PQRS be the given square and O is the center of the square (see Figure 3.1). The half
diagonal OP is drawn and an arc PAQ is formed, implying OP = OA. A circle of radius OB (r)
is drawn where OB = OC + CB and CB = (1/3)CA.
This implies that OB = OC + (1/3)CA, which equals OB = OC + (1/3)(OA − OC).
Let 2a be the side of the square PQRS. Then
r = a + (1/3)(√2a − a)

r = a[1 + (1/3)(√2 − 1)] = (a/3)(2 + √2)

The area of the circle is πr² = 3.14[(a/3)(2 + √2)]² = 4.06a². This is in rough agreement
with the area of the given square (4a²).
Figure 3.1: Transforming a square into a circle of the same area.
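The construction is easy to check numerically; the following Python lines are my own illustration of the calculation above.

```python
import math

a = 1.0                              # half the side, so the square has side 2a and area 4a^2
r = (a / 3) * (2 + math.sqrt(2))     # radius of the circle from the Sulbasutra construction
print(4 * a * a, math.pi * r * r)    # 4.0 versus about 4.069, a rough agreement
```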
Hindus’ mathematical books did not provide axioms and postulates; the mathematical
rules are based on experimental verification. As a result, some rules are only approximately cor-
rect. Seidenberg tried to assign a possible period for the general use of geometry among the
Hindus but realized, “no matter how far we go back in ’history’, we find geometrical rituals.”55
3.7.2 HEIGHT OF A TALL OBJECT
“(When two gnomons of equal height are in the same direction from the lamp-post), multiply
the distance between the tips of the shadows (of the two gnomons)[CD] by the (larger [GD]
or shorter [EC]) shadow and divide by the larger shadow diminished by the shorter one [GD -
EC]: the result is the upright (i.e., the distance of the tip of the larger [BD] or shorter shadow
[BC] from the foot of the lamp-post). The upright multiplied by the height of the gnomon and
divided by the (larger or shorter) shadow gives the base (i.e. height of the lamp-post).”56
Mathematically,

BD = (CD × GD) / (GD − EC)

and

AB = (h × BC) / EC = (h × CD) / (GD − EC)
The proof of this theorem is as follows (see Figure 3.2). Let AB be the unknown height
that we want to find out. We place a gnomon of height h at point E. This gives us the shadow
EC. Now we place the same gnomon or another one of the same size at point G. This gives us
shadow GD.
Figure 3.2: Determining height of a tall object using a small stick.
Using similar triangles,
AB/h = BC/EC = BD/GD
55Seidenberg, 1962.
56Āryabhat. īya, Gan. itapada,16. For more information, read Mishra and Singh, 1996; and Shukla and Sarma, 1976, p. 58.
As given before,

AB = (h × BC) / EC and BC/EC = BD/GD

Since BD = BC + CD,

BC/EC = (BC + CD)/GD

GD/EC = (BC + CD)/BC = 1 + CD/BC

CD/BC = GD/EC − 1 = (GD − EC)/EC

Therefore,

BC = (CD × EC) / (GD − EC)

AB = (h × BC) / EC = (h × CD) / (GD − EC)
Thus, by measuring the lengths of shadows at two different locations for a stick, one could
measure the height of the unknown object. Or, by aligning the stick in such a way that one could
see the top of the object in line with the top of stick while looking from the ground level from
two different locations, one could measure the height of the unknown object.
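A small Python sketch (my own illustration; the helper name lamppost_height and the numbers are invented for the check) applies the final formula to synthetic shadow measurements:

```python
def lamppost_height(h, EC, GD, CD):
    """Height AB of the lamp-post from two shadows of a gnomon of height h:
    EC and GD are the shorter and larger shadows, CD the distance between the
    shadow tips, using AB = (h * CD) / (GD - EC)."""
    return h * CD / (GD - EC)

# Synthetic check: a 10-unit post at B = 0 and a 1-unit gnomon placed at E = 18
# and then at G = 27 casts shadows EC = 2 and GD = 3 (by similar triangles),
# so the shadow tips are at 20 and 30 and CD = 10.
assert lamppost_height(1.0, 2.0, 3.0, 10.0) == 10.0
```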
3.7.3 THE VALUE OF π
The Greek letter π (pronounced as pi) indicates the ratio of the circumference of a circle to
its diameter, which is a constant for any circle. The ancient Hindu books generally provide two
values of π: one for rough calculations and a second for precise measurements. The knowledge of
π is useful in the construction of altars, the wheels of a cart, the metallic rims of a wheel, and in
geometry.
The Mānava-Śulbasūtra provided the value of π to be 3.2: “The fifth part of the diameter
added to the three times the diameter gives the circumference (of a circle). Not a hair of length
is left over.”57 This provides the value of the circumference, C, from the diameter, D, as

C = 3D + D/5 = (16/5)D = 3.2D

This gives a value of π which is close to the actual value of 3.14. Since the purpose of these
books was to prepare altars, these calculations were good enough for that purpose.
Āryabhat.a I gave the value of π that is correct to the fourth decimal place: “Add four to
hundred, multiply by eight, and add sixty two thousand. The result is approximately the circum-
ference of a circle [C] of which the diameter [D] is twenty thousand.”58
Mathematically, we know that C = πD. Therefore, the value of π, based on Āryabhat.a
I's method, is equal to:

π = [(4 + 100) × 8 + 62,000] / 20,000 = 62,832 / 20,000 = 3.1416
This is equal to the presently accepted value of 3.1416, up to 4 decimal places.
Āryabhat.a I called this value approximate. This makes sense since the value of π can only
be determined approximately: the ratio of circumference to diameter does not work out evenly;
it has an innumerable number of significant figures. It has been an endeavor of many mathe-
maticians to calculate a more precise value of π. The value of π to a large number of significant
figures is commonly used to check the speed, efficiency, and accuracy of computers.
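Both rules can be evaluated in a line of Python each; this is my own illustration, not part of the original text.

```python
# Manava-Sulbasutra: C = 3D + D/5, i.e., pi taken as 16/5.
print(16 / 5)                            # 3.2

# Aryabhata I: add four to a hundred, multiply by eight, add sixty-two thousand,
# for a circle whose diameter is twenty thousand.
print(((4 + 100) * 8 + 62000) / 20000)   # 3.1416
```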
3.8 THE PYTHAGOREAN THEOREM
The so-called Pythagorean theorem connects the three sides of a right-angle-triangle with the
relation,
a² + b² = c²

where a, b, and c are the base, perpendicular, and hypotenuse, respectively, of a right-angle
triangle (see Figure 3.3). Pythagoras, a Greek philosopher, is said to be the originator of
the theorem sometime during the sixth century BCE.
Let us provide a few statements from the Śulbasūtra that indicate the understanding of
the Pythagorean theorem among the Hindus:
57Mānava-Śulbasūtra, 11: 13.
58Āryabhat. īya, Gan. itapada, 10. Also, Read Hayashi et al., 1989 and Kulkarni, 1978a. The work of Hayasi et al. has a good
review of the later developments on the issue in India. For example, Madhav (14th century) calculated the value of π that was
correct to eleven places.
Figure 3.3: A right-angled triangle, PQR.
“The diagonal of a square produces double the area (of the square). It is √2 (dvikaran. ī) of
the side of the square (of which it is the diagonal).”59 The word dvikaran. ī literally means “that
which produces 2,” implying the twice area of the square constructed using dvikaran. ī as one side
in comparison to the original square.
“The diagonal (of a right angle triangle) of which the breadth is pada and the length 3
padas is √10 padas.”60
“The diagonal (of a right triangle) of which the breadth is 2 pada and the length is 6 pada
is √40 pada.”61
The Baudhayāna-Śulbasūtra provides another rule for a right-angle triangle: “The areas (of
the squares) produced separately by the length and the breadth of a rectangle together equal the
area (of the square) produced by the diagonal.”62
Similar rules are given elsewhere in the ancient Hindu literature: “The areas (of the
squares) produced separately by the length and the breadth of a rectangle together equal the
area (of the square) produced by the diagonal. By the understanding of these (methods) the
construction of the figures as stated (is to be accomplished).”63
59Āpastambā-Śulbasūtra, 1: 5.
60Kātyāyana-Śulbasūtra, 2: 4.
61Kātyāyana-Śulbasūtra, 2: 5.
62Baudhayāna-Śulbasūtra, 1: 48.
63Āpastambā-Śulbasūtra, 1: 4.
There is a curious similarity between Baudhayāna and Pythagoras on the so-called
Pythagorean theorem. It is important to understand that Baudhayāna’s work was compiled at
least around 1700 BCE, while the Pythagorean theorem was formulated over a millennium later,
i.e., around sixth-century BCE.64 There are indications that Pythagoras perhaps came in contact
with India or Indian wisdom.
In most cultures, the Pythagorean theorem is given in terms of geometry. The Hindus,
however, used a mathematical expression for the Pythagorean theorem which is unique to them.
Several Śulbasūtras provided an arithmetical method to find the diagonal of a
right-angled triangle with two equal sides: “The measure is to be increased by its third and this
(third) again by its own fourth less the thirty-fourth part (of the fourth); this is (the value of )
the diagonal of a square (whose side is the measure).”65
Mathematically, for a right-angled triangle with two equal sides of length a, the hy-
potenuse is equal to
L = a + a/3 + a/(3 × 4) − a/(3 × 4 × 34)
No other culture has provided a similar mathematical form to calculate the hypotenuse.
3.9 TRIGONOMETRY: FROM JYĀ TO SINE
Trigonometry is a branch of science which deals with specific functions of angles and their
application to calculations in geometry. The sine function, as defined in trigonometry, is essential
to the study of geometry. For a right triangle, if θ is one of its acute angles, the mathematical
symbol sin, pronounced as sine, of θ is the ratio of the side opposite (b) and the hypotenuse (c).
Mathematically, sin θ = b/c.
Why do we call this function sine? Who chose this word for the scientific community
for what reason? What is the meaning of this word? These are simple questions that intrigue
curious minds when they first learn about this trigonometric function. The answer is as follows:
The sine function was called jyā by Āryabhat.a I. Al-Khwārizmī, when he borrowed this concept,
chose an Arabic word that is similar in pronunciation. He used geib or jaib for this term. This word
has a specific meaning: fold or pocket. The Latin word for pocket or fold is sinus, and, thus, the
term sine for this trigonometric function evolved.
“The most significant contribution of India to medieval mathematics is in trigonometry.
For a circle of unit radius the length of an arc is a measure of the angle it subtends at the center
[center] of the circle. The Greeks, to facilitate calculations in geometry, tabulated values of the
chord of arcs. This method was replaced by Hindu mathematicians with half chord of an arc,
known as sine of the angle . . . No influence on the West was exerted by the development in
India . . . Thus, methods had been known in India were not rediscovered until 1624 by the
64Seidenberg, 1962. It is likely that the date assigned by Seidenberg will be revised with new scholarship.
65Baudhayāna-Śulbasūtra, 2: 12; Āpastambā-Śulbasūtra, 1: 6; and Kātyāyana-Śulbasūtra, 2: 9.
French mathematician Claude-Gaspar Bachet, sieur de Méziriac,”66 suggests the Encyclopedia
Britannica (see Figure 3.4).
Figure 3.4: Āryabhat.a I’s method to define an arc.
Āryabhat.a I used the half chord on an arc (in Figure 3.4, the half chord is h = R sin(λ/2) =
R sin θ), defined the sine function, and gave a table of sines of the angles. In his book, the
circumference of a circle was divided into 360 × 60 = 21,600 equal segments. He also divided the
quadrant of a circle into 24 equal parts. The smallest is thus 225′ or 3°45′. Āryabhat.a I defined
jyā (or sin θ) for a circle of radius, R, of 3438 units. What is the mystery behind this strange
number 3438? Well, we know that there are 21,600 minutes of arc in a circle of 360 degrees. If we
divide this number by 2π, we get 3437.74 ≈ 3438. A consequence of this is that if angle θ is
measured in minutes of arc and is small, then 3438 sin θ is approximately equal to θ. For example,
a 5-degree angle comes out to be 300 in minutes, and R sin θ = 3438 sin(300′) = 299.6 ≈ 300,
implying that R = 3438 is similar to using radians instead of degrees in modern mathematics,
where we have that sin x is approximately equal to x.67 In Table 3.2, a comparison of Āryabhat.a I's
values with the modern values is provided.
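A few entries of Table 3.2 can be reproduced with a short Python sketch (my own illustration):

```python
import math

R = 3438   # radius chosen so that 21,600 minutes of arc / (2 * pi) is about 3438
for deg, minutes, aryabhata in [(3, 45, 225), (30, 0, 1719), (45, 0, 2431), (90, 0, 3438)]:
    theta = math.radians(deg + minutes / 60)
    print(f"{deg}deg {minutes}'", aryabhata, round(R * math.sin(theta)))
```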
3.10 DIFFUSION OF HINDU MATHEMATICS TO OTHER
CULTURES
3.10.1 THE MIDDLE EAST
Owing to their gratitude to the Hindus, numerals were always called arqam hindiya in the Middle
East, meaning the Hindu numerals.68 These numerals were known as Hindu numerals through-
out the medieval period in the Arab world and are known so even today. Al-Jāh. iz, (ca. 776–868
CE), al-Khwārizmī (ca.800–847 CE), al-Uqlīdisī (ca. 920–980 CE) and Ibn Labbān (ca. 971–
1029 CE), several famous natural philosophers from the Middle East or nearby regions, have
testified to the Hindu origin of the so-called Arabic numerals.
66Encyclopedia Britannica, vol. 23, p. 605, 1989.
67For more details, read Achar, 2002; Clark, 1930; Hayasi, 1997; and Shukla and Sarma, 1976.
68Sarton, 1950.
Table 3.2: Āryabhat.a I's and Modern sine values

Angle      Āryabhat.a I's Value    Modern sine value × 3438
3°45'      225                     225
7°30'      449                     449
11°15'     671                     671
15°0'      890                     890
18°45'     1105                    1105
22°30'     1315                    1315
26°15'     1520                    1521
30°0'      1719                    1719
33°45'     1910                    1910
37°30'     2093                    2092
41°15'     2267                    2266
45°0'      2431                    2431
48°45'     2585                    2584
52°30'     2728                    2728
56°15'     2859                    2859
60°0'      2978                    2977
63°45'     3084                    3083
67°30'     3177                    3176
71°15'     3256                    3256
75°0'      3321                    3321
78°45'     3372                    3372
82°30'     3409                    3409
86°15'     3431                    3431
90°0'      3438                    3438
Interestingly, in conformity with Arabic tradition, these numerals were called Hindu all
through the medieval and early Renaissance periods in Europe by their top scholars. Adelard
of Bath (1116–1142 CE) and Roger Bacon (1214–1292 CE) in England, Leonardo Fibonacci
(1170–1250 CE) in Italy, S. ā‘id Al-Andalusī (1029–1070 CE) and Ibn Ezra (11th century CE)
in Spain, and Voltaire (1694–1778 CE) in France called them Hindu numerals. They were la-
beled Arabic only after the sixteenth century by some.69 This was the beginning of the colonial
period. Why the name of the Hindu numerals was changed to Arabic numerals is a mystery in the history of
mathematics. Did it happen due to a coincidence or was it a result of complex political ambitions
of the West? This question is not yet resolved. Today, most textbooks call these
numerals the Hindu-Arabic numerals.
Al-Jāh. iz, (776–868 CE), a well-known Arab theologian and natural philosopher, at-
tributed the origin of numerals to the Hindus.70 Using Hindu numerals, Kūshyār Ibn Labbān
(ca. 971–1029 CE) wrote a book on Hindu arithmetic, Principles of Hindu Reckoning,71 which
became quite popular during the eleventh-century Islamic world. Kūshyār ibn Labbān was pri-
marily an astronomer from Jilan, a village in Persia on the south of Caspian Sea, and wrote his
manuscripts in Arabic. He started his book with the following sentence: “In the name of Allāh,
the Merciful and Compassionate. This book on the principle of Hindu arithmetic is an arrange-
ment of Abu al-Hasan Kushyār bin [Ibn] Labbān al-Jīlī, may God have mercy on him.”72
In the time of communal tensions around the world, Labbān’s book sets an example for
all—a Muslim who wrote a book on the Hindu arithmetic with the mercy of Allāh. It is com-
pletely in line with the doctrines of Prophet Mohammad who suggested that his followers travel
around the world for knowledge: “You should insist on acquiring knowledge even if you have to
travel to China.”
Ibn Sīnā (980–1037 CE), a noted Persian scholar of the eleventh century mentions that
the Hindu method of calculation was common in Persia during his period. Even merchants
with small businesses used the Hindu mathematical methods for calculations. Ibn Sīnā himself
learned such methods from a vegetable merchant, Mahmūd al-Massāhī.73 Ibn Sīnā was a prolific
writer with books on medicine, metaphysics, natural philosophy, and mathematics. He explained
the Greek works of Aristotle, Plato, and Plotinus within the framework of Islam.
Since the Greek numerals were already in use in Arabia when the Hindu numerals were
introduced, both numeral systems competed with each other. Lobbies were formed where the
superiority of one system over the other was debated and discussed. Severus Sebokht (died 662
CE), a Syrian natural philosopher, mentioned such a rivalry between the Greek and Hindu
numerals.74
“I will omit all discussion of the science of the Hindus, a people not the same as Syrians,
their subtle discoveries in the science of astronomy, discoveries which are more ingenious than
those of the Greeks and the Babylonians; their valuable method of calculation; their computing
69Clark, 1929, p. 217.
70Pellat, 1969, p. 197; Levy and Petruck, 1965, p. 6.
71Levy and Petruck, 1965.
72Levy and Petruck, 1965.
73Gohlman, 1974, p. 21 and 121.
74Wright, 1966, p. 137. Sebokht was born in Nisibis in Persia and taught Greek philosophy in Syria. Later, he became
the bishop of the convent of Ken-neshre (Qenneshre), a center of Greek learning in Upper-Euphrates in West-Syria. He
had a good grasp of various sciences and wrote on astronomy, geography, and mathematics. His manuscripts are preserved in
museums in London and Paris. Sebokht believed in the universal nature of science and was of the opinion that many cultures,
not just the dominant Greeks, contributed to the Arab scientific pool.
that surpasses description. I wish only to say that this computation is done by means of nine
signs. If those who believe, because they speak Greek, that they have reached the limits of sci-
ence, should know these things, they would be convinced that there are also others [Hindu]
who know something.”75 This quotation is proof that the Hindu numerals were in practice in
Arabia by the seventh century. Also, by the seventh century, people had started to discard the Greek
numerals in favor of the Hindu numerals in Arabia.
Al-Uqlidīsī (920–980 CE, also written as Uklidisi, meaning ‘the Euclid-man’ for his role
as copyist of Euclid’s work) in his book, Kitāb al-Fusūl f ī al-hisāb al-Hindī defined the Hindu
numeral system and explained Hindu arithmetic: “I have looked into the works of the past
arithmeticians versed in the arithmetic of the Indians [Hindus, in the original] . . . We can thus
dispense with other works of arithmetic. The reader who has read other works will realize this
fact and thus adhere to this work and prefer it to others, new or old, for it contains more than
any other books of this kind.”76 The title of the above book indicates that the book contained
the mathematics of the Hindus. According to al-Uqlidīsī, “even the blind and the weak sighted
will find in our explanations and summaries something to benefit from without toil or cost,
by the will and help of God.”77 He mentioned the use of the dust-board abacus for the Hindu
arithmetic [hisāb al-Ghubār, hisāb al-takht (board), al-Hisāb al-Hindī or Hisāb al-turāb (dust)].78
This kind of board became so popular for mathematical calculations, arithmetic was called Hisāb
al-Ghubār, meaning mathematics of the dust-board. This term was common in the medieval
Spain.79 On the practicality of a dust board in mathematical calculations, al-Uqlidīsī writes,
“If some persons dislike it because it needs the takht (board), we say that this is a science and
technique that needs a tool.”80
Al-Khwārizmī wrote a book in the Arabic language on the use of Hindu numerals, Kitāb
al-hisāb al-Hindī. Although the original book in the Arabic language has been lost, its Latin
translation by Robert of Chester has survived.81 A thirteenth century manuscript from Spain
has been found recently which is more complete than the first Latin translation. This manuscript
along with its German translation was published by Menso Folkerts in 1997.82 Al-Khwārizmī’s
book is a compilation of the mathematical procedures used by the Hindus, as acknowledged
by himself.83 Adelard of Bath (1080–1152 CE), an English philosopher, mathematician, and
75Datta and Singh, pt. 1, p. 96; Clark, 1929, p. 220.
76Saidan, 1978, p. 35.
77Saidan, 1978, p. 189. This book was composed in Damascus in 952–953 CE and a 1186 CE manuscript has survived.
Read Burnett, 2006.
78Saidan, 1978, p. 12.
79Read, Salem and Kumar, 1991.
80Saidan, 1978, p. 35; Saidan has also compiled a list of books on Hindu-Arabic arithmetic that were in the Arab world.
See, Saidan, 1965.
81Hughes, 1989.
82Folkerts, Menso, Die älteste Lateinische Schrift über das Indische Rechnen nach al-H. wārizmī, Übersetzung und Kommentar
von M. Fokerts under Mitarbeit von Paul Kunitzsch, Bayerische Akademie der Wissenschaften, Philosophisch-historische
Klasse, Abhandlungen, Neue Folge, Heft 113, 1997. Taken from Folkerts, 2001.
83Folkerts, 2001. Al-Khwārizmī was perhaps the most influential Islamic philosopher in Europe before the European
Renaissance. The mathematical terms sine, algorithm, and zero are attributed to him.
scientist, translated al-Khwārizmī’s astronomical tables, that had been previously revised by al-
Majrīt.ī (d. 1007 CE), from Arabic to Latin. These tables included tables of sine, and, thus, the
sine function was introduced to the Latin world.84 Further, this Latin translation was translated
into English by O. Neugebauer.85 It is this English translation that is our major source of al-
Khwārizmī’s knowledge in astronomy and trigonometry.
Trigonometry was introduced to the Arab world by India.86 Al-Battānī, Abu ‘Abd Allāh
Muhammad ibn Jābir ibn Sinān al-Raqqī al-Harrānī al-Sābi‘ (858–929 CE) used the half-cord
(that leads to sine function) instead of the chord in his book Kitāb al-Zīj(Book of Tables) fol-
lowing the examples of his predecessors who used the Hindu method rather than the Greek
method.87 Al-Battānī’s Zīj was translated into Latin by Robert of Ketton and Plato of Tivoli
during the twelfth century. This table was also translated into Spanish under the patronage of
Alfonso X.88
3.10.2 CHINA
Yoshio Mikami, in his book, The Development of Mathematics in China and Japan, wrote of the
Indian influence on Chinese mathematics: “Things Indian exercised supremacy in art and liter-
ature, in philosophy, in the mode of life and the thoughts of the inhabitants, in everything. It
is even said, astronomy and calendrical arts had also felt their influence. How then could arith-
metic remain unaffected? No doubt the Chinese studied the arithmetical works of the Hindoos
[Hindus].”89 On the popularity of Indian literature in China, Mikami wrote: “we read in history,
the Indian works were read in translation ten times more than the native [Chinese] classics, a
fact that vividly tells how the Indian influence had swept over the country [China].”90
The biggest export of India to China is definitely the religion founded on the teaching of
Lord Buddha, Buddhism. When Buddhism was introduced to China, this country was already
an old civilization with a powerful tradition and history. Philosophers such as Confucius guided
Chinese society with their philosophies. Confucianism mostly deals with ethical rules that are
applicable to this world; it does not deal with the spiritual life. Buddhism filled that niche to
allow people to think about the “ultimate questions” of life, liberation, etc. There was a contrast
in the intentions of these visitors who came from China in comparison to the Europeans. Most
Chinese travelers visited India for their spiritual pursuits while Marco Polo and Vasco da Gama
visited to collect wealth. For example, Xuan Zang (b. 603 CE, also known as Hiuen Tsang,
Huan Chwang, Yuan Chwang, Hiouen Thsang, and Hsuan Tsang)91 studied in Nālandā and
carried books on twenty-two horses on his way back to China. He built a pagoda in Xian,
84Sarton, 1931, vol. 2, p. 167.
85Neugebauer, 1962.
86Needham and Ling, 1959, vol. III, p. 108.
87Dictionary of Scientific Biographies, vol. I, p. 508
88Julio Samsó in Selin, 1997
89Mikami, 1913, p. 57.
90Mikami, 1913, p. 57.
91Bernstein, 2001, p. 22.
China to house the books. Kumārjīva, an Indian scholar, went to Chang-an in 401 CE and
served as “Grand Preceptor” of an enormous translation project involving thousands of monks
and scholars that advanced the philosophy of the great Indian philosopher Nāgārjuna.92 Nāgārjuna
was from the Andhra Pradesh region and was the founder of the Madhyāmika (Middle Path) school
of Buddhism. This school became popular in China under the name Sanlun. Nāgārjuna was
based at Nālandā University.
Ch’u-t’an Hsi-ta from Tang’s Court translated a Sanskrit text into Chinese and intro-
duced decimal notation and the arithmetic rules during the early eighth century. His work was
continued further by Yijing (or I-tsing, 643–713 CE), under the order of the emperor.93 (See
Chapter 2.)
The trigonometric term jyā, as defined by Āryabhat.a I, was called ming in China.94 In the
section, Tui yueh Chien Liang Ming (On the prediction of the Moon’s positions) of Chiu-Chih-
li, an astronomical book that is based on Sanskrit works, chia is the term used in this book to
define the trigonometric function sine, which is obviously a minor change in the pronunciation
of the term jyā.95 The term ming may have been adopted from the title of this book later as
the book became popular in China. In 718 CE, a new calendar, called Jiuzhi li (Nine Upholder
Calendar), was compiled in China.96 This calendar is based on Varāhamihira’s Pañca Siddhānta
and contains a table of sines and the Hindu numeral zero in the form of a dot (bindu).97
3.10.3 EUROPE
Monk Vigilia of the monastery of Albelda in the Rioja, Asturias (an autonomous community
in Spain), in 976 CE, made a copy of the Etymologies that was originally written by Isidore of
Seville (560–636 CE). In this copy, the monk provided information about the Hindu numerals
with the following remark: “We must know that the Indians have a most subtle talent and all
other races yield to them in arithmetic and geometry and the other liberal arts. And this is clear
in the nine figures with which they are able to designate each and every degree of each order (of
numbers).”98
S. ā‘id al-Andalusī (1029–1070 CE), a Spanish scholar of the eleventh century, writes about
the Hindu numerals and arithmetic: “That which has reached us from their [Hindus’] work on
numbers is Hisāb al-Ghubār (dust board arithmetic) which was simplified by Abū Jā‘far Muh.ammad ibn Mūsā al-Khwārizmī. This method of calculating is the simplest, fastest, and easiest
method to understand and use and has a remarkable structure. It is a testimony to the intelligence
of the Hindus, the clarity of their creativity, and the power of their inventiveness.”99
92Bernstein, 2001, p. 95
93Sarton, 1927, vol. 1, p. 513.
94Martzloff, 1997, p. 90 and 100.
95Sen, 1970.
96Yoke, 1985, p. 162.
97Yoke, 1985, p. 162.
98Burnett, 2006.
99Salem and Kumar, 1991, p. 14.
S. ā‘id al-Andalusī was not the only one in Spain to write about the dominance of India in
the sciences. Rabbi Abraham Ibn Ezra (1096–1167 CE), a poet, scholar, and author of numer-
ous books on grammar, philosophy, medicine, astronomy, astrology and mathematics, wrote on
the ways the Hindu numerals were introduced in Arabia. Ibn Ezra mainly lived in Spain and
translated al-Bīrūnī’s and al-Khwārizmī’s work into Hebrew. Ibn Ezra mentions the role of the
Jewish scholars in bringing a Hindu scholar, named Kanaka, who taught place-value notations
to the Arabs.100
Leonardo Fibonacci (1170–1250 CE) is well known for his contributions in arithmetic
and geometry. He is also known for the Fibonacci Sequence. He learned the Hindu mathemat-
ics from the Arabs and popularized Hindu numerals in the western world. “Of all the methods
of calculation, he [Fibonacci] found the Hindu [method] to be unquestionably the best,”101
concludes Cajori in his studies. Fibonacci correctly attributed the numerals to the Hindus. In his book Liber Abaci, Fibonacci wrote a statement which erases any confusion about his knowledge of the
Hindu numerals: “The nine Indian figures are: 9 8 7 6 5 4 3 2 1. With these nine figures, and
with the sign 0 . . . any number may be written, as is demonstrated below.”102 He further writes
to explain the place value notation: “The first place in the writing of the numbers begins at the
right. The second truly follows the first to the left. The third follows the second. . . . the nine
figures that will be in the second place will represent as many tens . . . the figure that is in the
third place denotes the number of hundreds. . . if the figure four is in the first place, and the unit
in the second, thus 14, undoubtedly xiiii will be denoted . . if the figure of the unit is in the first
place, and the figure four in the second, thus 41, xli will be denoted.”103
Fibonacci was fascinated with Hindu mathematics. “. . . the art of the nine Indian figures,
the introduction and knowledge of the art pleased me so much above all else, and I learned from
them, whoever was learned in it, from nearby Egypt, Syria, Greece, Sicily and Provence, and
their various methods, to which locations of business I traveled considerably afterward for much
study, and I learned from the assembled disputations,” writes Fibonacci.104
Leonardo used the mathematical knowledge of the Hindus for business transactions from
one currency to another, investment of money, calculation of simple and compound interest, and
in defining the rules of barter. Fibonacci found the prevalent Roman numerals inferior to the
Hindu numerals. The purpose of writing his book Liber Abaci was to introduce the Romans to
the very best available tools of mathematical calculations and their applications.
Fibonacci started the book with the following sentence: “You, my Master Michael Scott,
most great philosopher, wrote to my Lord about the book on numbers which some time ago
I composed and transcribed to you; whence complying with your criticism, your more subtle
examining circumspection, to the honor of you and many others I with advantage corrected this
100Goldstein, 1996.
101Cajori, 1980, p. 121.
102Boncompagni, 1857–1862; Horadam, 1975.
103Sigler, 2002, p. 17–18.
104Sigler, 2002, p. 15–16.
work. In this rectification I added certain necessities, and I deleted certain superfluities. In it
I present a full instruction on numbers close to the method of the Indians whose outstanding
method I chose for this science.”105
Roger Bacon (1214–1294 CE), a noted Franciscan natural philosopher from England,
studied at Oxford University and the University of Paris. Several manuscripts about the Hindu
mathematics were available in French at the University of Paris that were read by Bacon.106 Ba-
con emphasized the role of mathematics for Europe in his Opus Majus and deemed it “necessary to the culture of the Latins.”107 Fibonacci’s Liber Abaci was published thirteen years prior to the birth of Bacon, and Bacon became acquainted with the book as an adult. Bacon suggested that people
should study mathematics from “all the sages of antiquity” and chastised them for being “ig-
norant” of its “usefulness.”108 Bacon felt that mathematics is “the gate and key” to learn other
sciences. In his view, a study of mathematics was essential for a study of philosophy that, in turn,
was essential for the study of theology. Therefore, mathematics “has always been used by all the
saints and sages more than all other sciences,”109 and is essential for theology and to know about
the creatures and other creations.110 Thus, mathematics, in the mind of Roger Bacon, was a tool
to know more about theology or to know about God’s creations. This is similar to how Nārada
viewed mathematics to achieve salvation (Chapter 2).
The triumph of the Hindu numeral system over the Roman numerals came neither rapidly nor without conflict. It was not easy to discard the Roman numerals as they were deeply rooted in the system. Initially, some mixed the new Hindu system with the Roman one, practicing place-value notation with Roman symbols. For example, 1482 was written
as M.CCCC.8II, whereas 1089 was written as I.0.VIII.IX.111 In the twelfth and early thirteenth
century, it was common to write numbers below 360 in Arabic, Greek or Latin symbols and the
numbers above 360 in Hindu numerals.112 In 1299 CE, the City Council of Florence forbade the
use of Hindu numerals in official accounting books. Similarly, as late as the fifteenth century, the
mayor of Frankfurt issued an order to the master calculators to abstain from the use of the Hindu
numerals.113 Some even tried to create a parallel system with the first nine Roman numerals.
For example, H. Ocreatus, a student of Adelard of Bath, devised such a system, which could have been scientifically adequate. However, it did not catch on among scholars.114 As late
105Sigler, 2002. Michael Scot was Leonardo’s mentor.
106Smith, The Place of Roger Bacon in the History of Mathematics, a chapter in the book by Little, 1914, p. 156.
107Bacon, 1928, vol. 1, p. 27.
108Bacon, 1928, vol. 1, p. 27.
109Bacon, 1928, vol. 1, p. 116.
110Bacon, 1928, vol. 1, p. 195.
111Gupta, 1983; Menninger, 1970, p. 287. In the number 1482, an amalgamation of the Roman and Hindu system is
practiced. The number 1 represents 1000 and M is the corresponding symbol. Similarly, C represents 100. The second number
1089 is written in place-value notations with Roman symbols for different numbers. As mentioned earlier, symbols can easily
be changed without compromising quality.
112Burnett, 2006.
113Gazalé, 2000, p. 48.
114Burnett, 2006.
as the eighteenth century, the National Audit Office (Cour des Comptes) in France was still using
Roman numerals in its accounting.115
As mentioned earlier, the Arabic term geib or jaib is a metamorphosed form of the term
jyā used by the Hindus. The Arabic word jaib has multiple meanings: pocket, fold, or bosom.
Adelard of Bath used the term elgeib for the trigonometric function sine, following the word
used by al-Khwārizmī who used geib or jaib for this term.116 Rabbi Ben Ezra (born c. 1093), of
Toledo, Spain, also used a similar term, al-geib, for this trigonometric function.117 Eventually,
Gherardo of Cremona (ca. 1114–1187 CE, sometimes also spelled as Gerard or Gerhard, born
in Carmona, Spain) literally translated the Arabic term into Latin and used the term sinus to
define the operation.118 The term sinus means bosom, fold, or pocket in Latin. Pocket or bosom
has nothing to do with the trigonometric function. However, this term has been in use for about
a millennium.
In 1631, Richard Norwood (1590–1675), a British mathematician and surveyor, pub-
lished a book on trigonometry where the word sine was used for this trigonometric function and
the symbol “s” was used to depict the function in his mathematical equations.119 In 1634, the
French mathematician Pierre Hérigone (1580–1643) became the first person to use the symbol
“sin” for sine and the practice has continued since then.120 David Eugene Smith has provided
an excellent review of the sine function.121
3.10.4 SUPPORT OF POPE SYLVESTER II
Pope Sylvester II was instrumental in popularizing the Hindu numerals, in his life as Gerbert
d’Aurillac (945–1003 CE), his name prior to becoming pontiff. He was born into a poor family
in France and was first educated at Aurillac, France. He later moved to Catalonia, Spain. He
visited the cities of Barcelona, Seville, and Cordova in Spain which were leading centers of
learning during the period. Gerbert became the headmaster of the cathedral school at Reims, and later became the archbishop of Reims and then of Ravenna in Italy. His education in Latin grammar, science, and mathematics in France and Spain, together with his political astuteness, allowed him to become the hundred forty-sixth pope. With the patronage of Otto III of Saxony, on
April 9, 999, he was elected as pontiff and chose Pope Sylvester II as his new name. It was
an important period for Christianity as it was about to complete one thousand years since the
birth of Christ. Pope Sylvester II is known for his efforts to translate Greek and Arabic texts
on natural philosophy into Latin. He raised the profile of natural philosophy and reinforced
intellectual aspects of theology in the mainstream activities of the Church.
115Sarton, 1950.
116Neugebauer, 1962, p. 44, 45.
117Goldstein, 1996.
118Smith, 1925, vol. II, p. 616.
119Smith, 1925, vol. II, p. 618.
120Smith, 1925, vol. II, p. 618.
121Smith, 1925, vol. II, p. 614–619.
Pope Gerbert had a unique training: monastic life from Christian teachers, pagan life
from Latin classics, and academic learnings (astronomy and mathematics) from the Muslim
teachers of tenth century Spain. His interests included literature, music, philosophy, theology,
mathematics and the natural sciences. He became familiar with the Hindu numerals during his
stay in Spain. Later, he wrote and taught in his own school in Rheims about the Hindu numerals
with its positional notations and rules related to the arithmetical operations.122 Gerbert used the
nine signs of the Hindus to construct his abacus and “gave the multiplication or the division of
each number, dividing and multiplying their infinite numbers with such quickness that, as for
their multiplication, one could get the answer quicker than he could express in words.”123
Gerbert wrote five books dealing with science and mathematics that are now lost. What
we know about Gerbert is from the writings of his students, particularly Richer, the son of a
French nobleman,124 and through his letters.125 Richer later became a monk in St. Rémy of
Rheims and wrote a popular book on the history of France, Historia Francorum.126 Nikolai
Bubnov, a Russian scholar, extensively studied the works of Gerbert and published a book about
his contributions to mathematics, in Latin.127 Recently, another book on the life of Gerbert was
published by Pierre Riché in French.128
In one letter that was written to Constantine, monk of Fleury in 980 CE, Gerbert ex-
plained the Hindu numerals and mathematics to his monk friend using the abacus that was pop-
ular in Spain. Following are the excerpts of the letter: “Do not let any half-educated philosopher
think they [Hindu numerals and mathematics] are contrary to any of the arts or to philosophy.
. . For, who can say which are digits, which are articles, which the lesser numbers of divisors, if
he disdains sitting at the feet of the ancient?”129
Richer described Gerbert’s abacus in the following words: “in teaching geometry, indeed,
he expended no less labor [than in teaching astronomy]. As an introduction to it he caused an
abacus, that is, a board of suitable dimensions, to be made by a shield maker. Its length was
divided into 27 parts [columns] on which he arranged the symbols, nine in number, signify-
ing all numbers. Likewise, he had made out of horn a thousand characters, which indicated
the multiplication or division of each number when shifted about in the 27 parts of the abacus.
[He manipulated the characters] with such speed in dividing and multiplying large numbers
that, in view of their very great size, they could be shown [seen] rather than be grasped men-
tally by words. If anyone desires to know this more thoroughly let him read his book which he
122Lattin, 1961.
123Darlington, 1947.
124Darlington, 1947.
125Lattin, 1961.
126Richer, 1964–67.
127Bubnov, 1899.
128Riché, 1987.
129Lattin, 1961, p. 45.
wrote to Constantine, the grammaticus; for there he will find these matters treated completely
enough.”130
Another disciple of Gerbert, Bernelinus, has described his abacus as consisting of a smooth
board with a uniform layer of blue sand. For arithmetical purposes, the board was divided into
30 columns, of which 3 were reserved for fractions. The remaining 27 columns were grouped three columns to a group, and the columns were marked as C (centum), D (decem), and S (singularis) or M (monas). Bernelinus gave the nine numerals and suggested that the Greek letters could also be
used instead.131
Pope Sylvester II lived before the times of Fibonacci of Italy and Roger Bacon of England, both credited with introducing the Hindu numerals into Europe. Unfortunately, Pope Sylvester II died in 1003, less than four years after he was crowned as Pope. He was a champion of scientific scholarship alongside his usual duties as a religious leader.
***********
In summary, this chapter provides a brief sketch of the contributions of the Hindus, and reminds
us of the mathematical tools that were perfected by the Hindus more than a thousand years ago
and which have become a part of mainstream mathematics. “Think of our notation of numbers,
brought to perfection by the Hindus, think of the Indian arithmetical operations nearly perfect
as our own, think of their elegant algebraic methods, and then judge whether the Brahmins
on the banks of the Ganges are not entitled to some credit,” questions Cajori.132 The answer is
definitely in the affirmative.
130Lattin, 1961, p. 46. The 27 columns in his abacus form an interesting number: the ancient Hindus divided the ecliptic circle into 27 parts. (See Section 4.2.)
131Cajori, 1980, p. 116.
132Cajori, 1980, p. 97.
CHAPTER 4
Astronomy
As mentioned in Chapter 1, the American Association for the Advancement of Science (AAAS)
ranked Hindus’ contributions to astronomy among the top 100 scientific discoveries in human
history. This recognition was a result of careful and systematic observations of the sky by the
ancient Hindus. They noticed changes in the positions of some luminaries (planets, meteors,
comets) against the fixed background of other luminaries (stars), tried to determine the shape of the Earth, looked for explanations of the various phases of the Moon and the changing seasons, designed their own luni-solar calendar, assigned motion to the Earth in their cosmological model, and correctly estimated the age of the universe to be of the order of billions of years. Once scientific
observations were made and conclusions were drawn, the ancient Hindus documented the facts
in stories or poetic verses. For example, a story was written in which Rohin. ī (Aldebaran; a
star in constellation Taurus), the celestial female deer, was pursued by the stars (male) of Orion,
labeled as the celestial stag. Sirius in his role as a hunter pins Orion with his arrow to protect
Aldebaran. The line that connects Sirius with Aldebaran goes through the Belt of Orion, indicat-
ing the arrow’s flight. This story is provided in the R. gveda.1 Similarly, the Big Dipper or Ursa
Major, with its seven stars, was labeled saptrishi mandal (group of seven sages).
The Sanskrit word for astronomy is khagola-śāstra. Another term, jyotis. a-śāstra (astrology)
covers considerable astronomy too. Hindu astronomy partly flourished out of the need to per-
form religious rituals on proper days at particular times that were governed by the positions of
the Sun or the Moon with respect to various constellations. The ancient Hindus performed rites
at sunrise and sunset, at the rising and setting of the Moon, and at other well-defined entrances
of the Moon or the Sun into particular constellations. These needs required keen observations
and mapping of the sky. A special class of priests (khagola-śāstri, scholars of the sky) made obser-
vations of astronomical events, including the motions of the planets, and documented them in
their hymns.2 The Yajurveda mentions Naks. atra-daraśa3 as the term for astronomer. This term
is made up of two words: naks. atra means constellation or a prominent star and daraśa means
seer or observer, signifying a person who studies astronomy. The Chāndogya-Upanis. ad mentions
naks. atra-vidyā (knowledge of constellations or astronomy) as a discipline. Most Hindu festivals
are governed by astronomical events. For example, Sa ˙mkrānti, an important festival in India, is
1Krupp, October 1996. Krupp is an internationally known astronomer and works at the Griffith Observatory in Los
Angeles. Krupp failed to provide a proper reference in his article. I have not been able to trace it in the R. gveda. However,
knowing the reputation of Krupp, I have decided to share it with the reader.
2see Brennard, 1988; Burgess, 1893; Paramhans, 1991; Saha and Lahri, 1992; Shukla and Sarma, 1976; Somayaji, 1971;
Subbarayappa and Sarma, 1985.
3Yajurveda, 30: 10.
celebrated on the day when the Sun moves (apparent motion) away from one rāśi (zodiac) and
enters in a new zodiac. There are 12 zodiac signs and, therefore, twelve Sa ˙mkrānti festivals every
year. The most popular of these festivals is makar-Sa ˙mkrānti. This is the day when the Sun moves
from the dhanu zodiac (Sagittarius) to makar (Capricorn) zodiac. On this day, the Hindus visit
temples, fast for the day, donate money to needy people, feed hungry people, and some bathe in
the holy rivers. This religious need required regular scientific observations of the constellations
and the apparent motion of the Sun.
The ancient Hindus designed sophisticated instruments to facilitate their observations.
Yukio Ōhashi, a Japanese scholar, based on his research, identified the following instruments that the Hindus had for astronomical observations during the period of Āryabhat.a I:4
1. Chāyā-yantra (Shadow-Instrument)
2. Dhanur-yantra (semi-circle instrument)
3. Yas.t.i-yantra (staff )
4. Cakra-yantra (circular instrument)
5. Chatra-yantra (umbrella instrument)
6. Toya-yantrān. i (water instrument)
7. Ghat.ikā-yantra (clepsydra)
8. Kapāla-yantra (clepsydra)
9. Śa ˙nku-yantra (gnomon)
The Taittirīya-Brāhman. a advised the khagola-śāstris to study the stars before sunrise
to figure out the exact time of the rituals. “The position of an auspicious star [relative to the Sun]
has to be determined at sunrise. But when the Sun rises, that star would not be visible. So, before
the Sun rises, watch for the adjacent star. By performing the rite with due time adjustment, one
would have performed the rite at the correct time.”5
The ancient Hindus knew the shape of the Earth as spherical from the earliest periods.
The Śatapatha-Brāhman. a, an ancient book of the Hindus, mentions the spherical shape of the
earth: “. . . womb is spherical and moreover this terrestrial world doubtless is spherical in shape.”6
In his book Geography, Strabo (ca. 63 BCE - 25 CE), a Greek traveler and historian, mentions
that the Indians, like the Greeks, believed in the spherical shape of the Earth: “According to the
Brachmanes, the world . . . is of a spheroidal figure.”7
4Ōhashi, 1994.
5Taittirīya-Brāhman. a, 1: 5: 2: 1.
6Śatapatha-Brāhman. a, 7: 1: 37.
7Strabo, 15: 1: 59.
Al-Bīrūnī (973–1050 CE) also affirms this view: “According to the religious traditions of
Hindus, the Earth on which we live is round.”8 The key word in this quotation is “traditions.”
There is a time gap of about a thousand years between Strabo and al-Bīrūnī. Since the Vedic period, the Earth has been considered to be spherical. Al-Bīrūnī also quoted the Hindu astronomers
to indicate that the size of the Earth was very small in comparison to the visible part of the
universe:9 “These are the words of Hindu astronomers regarding the globular shape of heaven
and earth, and what is between them, and regarding the fact that the earth situated in the center
of the globe, is only a small size in comparison with the visible part of heaven.”
Āryabhat.a I used an analogy of a kadamba flower (neolamarckia cadamba) (Figure 4.1) to
demonstrate the distribution of various life forms on the Earth: “Half of the sphere of the Earth,
the planets, and the asterisms is darkened by their shadows, and half, being turned toward the
Sun, is lighted according to their size. The sphere of the earth, being quite round, situated in
the center of space, in the middle of the circle of asterisms [constellations or stars], surrounded
by the orbits of the planets, consists of Earth, water, fire, and air. Just as the ball formed by a
kadamba flower is surrounded on all sides by blossoms just so the Earth is surrounded on all
sides by all creatures terrestrial and aquatic.”10
Figure 4.1: Kadamba flower (taken from Wikimedia).
There are numerous accounts in the Hindu literature indicating that the Moon gets its
luminosity from the Sun. The R. gveda tells us that “He [the Moon] assumed the brilliancy of
the Sun,”11 “He [the Moon] is adorned with Sun’s beams . . .,”12 or, “the Sun-God Savitar
bestows his sunlight to his Lord, the Moon.”13 The Yajurveda (Śukla) also tells us that “the
Moon whose rays are the Sun’s ray.”14
Al-Bīrūnī (973–1050 CE), in stating the Hindu view that planets are illuminated by the
Sun, wrote: “the Sun aloft is of fiery essence, self-shining, and ’per accidens’ illuminates other
8Sachau, 1964, vol. 1, p. 233.
9Sachau, 1964, vol. 1, p. 269.
10Āryabhat. īya, Gola, 5–7.
11R. gveda, 9: 71; 9.
12R. gveda, 9: 76: 4.
13R. gveda, 10: 85: 9; Atharvaveda, 14: 1: 9.
14Yajurveda(Śukla), 18: 40.
stars when they stand opposite to him.”15 “The Moon is watery in her essence,” writes Al-Bīrūnī.
“therefore the rays which fall on her [Moon] are reflected, as they reflect from the water and the
mirror toward the wall.”16
Varāhamihira stated, in the words of al-Bīrūnī: “The Moon is always below the Sun,
who throws his rays upon her, and lit up the one half of her body, whilst the other half remains
dark and shadowy like a pot which you place in the sunshine. The one half which faces the
Sun is lit up, whilst the other half which does not face it remains dark.”17 “If the Moon is in
conjunction with the Sun [new moon], the white part of her turns toward the Sun, the black
part toward us. Then the white part sinks downward toward us slowly, as the Sun marches away
from the Moon.”18
The above statements from al-Bīrūnī demonstrate that the ancient Hindus knew that the
moonlight in the night sky is actually reflected sunlight. The night sky does not have the Sun; if so, how could the Sun illuminate the Moon? Based on the statements provided, the ancient Hindus understood the changing geometry of the Earth, the Moon, and the Sun.
Information to support this argument is provided later, especially in relation to Rāhu and Ketu.
The ancient Hindus knew that the Sun does not dissolve after the sunset. The Sun does not set
or rise. (Section 4.1.) Al-Bīrūnī states that “every educated man among Hindu theologians, and
much more so among their astronomers, believes indeed that the Moon is below the Sun, and
even below all the planets.”19
Āryabhat.a I (Figure 4.2) explained the geometry of the solar system and the universe:
“Half of the globe of the Earth, the planets, and the stars are dark due to their own shadows;
the other halves facing the Sun are bright in proportion to their sizes.”20 “Beneath the stars are
Saturn, Jupiter, Mars, the Sun, Venus, Mercury, and the Moon, and beneath these is the Earth .
. .”21 This observation of the heavenly objects, as given by Āryabhat.a I, is correct for an observer
on the Earth.
The apparent path of the Sun as viewed from the Earth against the background of stars
is called the ecliptic. This path results from the Earth’s motion around the Sun against the
backdrop of the celestial sphere. The Moon with its changing phases moves around the Earth
in a plane that is close to the ecliptic plane. The Moon’s orbital plane is inclined by about 5° to the ecliptic plane. The two points at which the Moon’s path crosses the ecliptic plane are known as the ascending and descending nodes. The nodal points are not fixed and move along the ecliptic plane of the Earth’s motion. The ancient Hindus called them Rāhu and Ketu, respectively. The
rotational periods of Rāhu and Ketu as defined in Hindu astrological charts, the horoscopes, are
15Sachau, 1964, vol. 2, p. 64.
16Sachau, 1964, vol. 2, p. 66.
17Sachau, 1964, vol.2, p. 66.
18Sachau, 1964, vol. 2, p. 67.
19Sachau, 1964, 2, p. 67.
20Āryabhat. īya, Gola, 5.
21Āryabhat. īya, Kālakriyā, 15.
Figure 4.2: Āryabhat.a I statue on the premise of the Inter-University Center for Astronomy and
Astrophysics, Pune, India (taken from Wikimedia).
the same as that of these nodal points. These two nodes are diametrically opposite to each other
and so are the Rāhu and the Ketu in the horoscope.
These nodal points describe the relation of the Moon and the Sun to the Earth. Along
with the individual effects of the Sun and Moon, there is also a collective effect exhibited by the
Sun and the Moon, as noticed in lunar and solar eclipses. The solar eclipse and the lunar eclipse
can be explained with the help of these nodal points. The ancient Hindus knew this fact and
tried to explain eclipses by the motion of Rāhu and Ketu, the two nodal points.
The R. gveda mentions a solar eclipse,22 a common observation in most cultures. An explanation for its cause was warranted. In Hindu scriptures, as given in the Ādi-parva of
Mahābhārata, eclipses are described as the swallowing of the Sun or the Moon by two demons,
Rāhu or Ketu. As the story goes, once the gods decided to churn the ocean to get the divine
nectar (amr. ta) to become immortal. As amr. ta came out from the churning process, all gods
started to drink it. A demon took the guise of a god and drank amr. ta too. When he gulped it,
the Sun and the Moon recognized him. They both warned Lord Vis.n. u about his transgression.
Lord Vis.n. u acted promptly and slit his throat. By then, however, amr. ta had already entered his body and, therefore, he could not die. His head is called Rāhu and his torso Ketu. Ever since, there has been a long-lasting feud of Rāhu and Ketu with the Sun and the Moon. They chase them across the sky and try to swallow them. Whenever Rāhu or Ketu succeeds, an eclipse occurs. An eclipse therefore symbolizes a momentary victory of Rāhu and Ketu over the Sun or the
22R. gveda, 5: 40: 5–9.
Moon. In essence, it is a temporary victory of evil over good. Therefore, an eclipse is considered an inauspicious occasion in Hindu scriptures. Hindus pray during an eclipse and bathe in their holy rivers to purify themselves afterward.
The Chāndogya-Upanis. ad provided an account of the lunar eclipse: “From the dark, I go
to the varicolored. From the varicolored, I go to the dark. Shaking off evil, as a horse his hairs;
shaking off the body, as the Moon releases itself from the mouth of Rāhu.”23 Here “varicolored”
is indicative of the corona rings that appear during an eclipse. The mention of Rāhu is indicative
of the role of the nodal points.
The Atharvaveda alludes to the cause of a solar eclipse: “Peace to us during lunar eclipse, peace to us during the period when Rāhu swallow the Sun.”24 Āryabhat.a I used scientific terms and explained the cause of solar and lunar eclipses: the Moon blocks the Sun, or the Earth comes in between the Sun and the Moon. He also provided the method to calculate the area of
the Moon or the Sun that would be affected during an eclipse: “The Moon obscures the Sun and
the great shadow of the Earth obscures the Moon.”25 This is a clear indication that the geometry
of eclipse was known to Āryabhat.a I.
“When at the end of the true lunar month the Moon, being near the node, enters the
Sun, or when at the end of the half-month the Moon enters the shadow of the Earth that is the
middle of the eclipse which occurs sometimes before and sometimes after the exact end of the
lunar month or half-month.”26
“Multiply the distance of the Sun by the diameter of the Earth and divide (the product)
by the difference between the diameters of the Sun and the Earth: the result is the length of the
shadow of the Earth (i.e., the distance of the vertex of the Earth’s shadow) from the diameter
of the Earth (i.e., from the center of the Earth).”27
Thus,
$$\text{Length of the Earth's shadow} = \frac{\text{Sun's distance} \times \text{Earth's diameter}}{\text{Sun's diameter} - \text{Earth's diameter}}.$$
“The difference between the length of the Earth’s shadow and the distance of the Moon
from the Earth multiplied by the diameter of the Earth and divided by the length of the Earth’s
shadow is the diameter of the Earth’s shadow (in the orbit of the Moon).”28
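In the same notation, this verse (Gola, 40) can be transcribed as the relation below; the wording of the symbols is introduced here only for readability and is not Āryabhat.a I's own:
$$\text{Diameter of the Earth's shadow at the Moon} = \frac{\left(\text{Length of the Earth's shadow} - \text{Moon's distance}\right) \times \text{Earth's diameter}}{\text{Length of the Earth's shadow}}.$$
This is simply the similar-triangles relation giving the width of the Earth's shadow cone at the Moon's distance.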
Al-Bīrūnī confirms that the cause of solar and lunar eclipse was known to the Hindus. “It
is perfectly known to the Hindu astronomers that the Moon is eclipsed by the shadow of the
Earth, and the Sun is eclipsed by the Moon.”29
23Chāndogya-Upanis. ad, 8: 13: 1.
24Atharvaveda, 19: 9: 10.
25Āryabhat. īya, Gola, 37.
26Āryabhat. īya, Gola, 38.
27Āryabhat. īya, Gola, 39, taken from Shukla and Sarma, 1976, p. 152.
28Āryabhat. īya, Gola, 40.
29Sachau, 1964, vol. 2, p. 107.
4.1 HELIOCENTRIC SOLAR SYSTEM
Most planetary models during the ancient period considered a geocentric system where the Earth remained stationary, as in Ptolemy’s model. In these models, the planets moved around the Earth in epicyclic motions. Āryabhat.a I, on the contrary, came up with a detailed and
innovative model of the solar system in which the Earth was in axial motion. With the known
spherical shape of the Earth, Āryabhat.a I assigned diurnal motion to the Earth, and was able to
explain the repeated occurrence of day and night. He somehow knew the time difference between
various locations on earth: “Sunrise at Lanka (Sri-Lanka) is sunset at Siddhapura, mid-day
at Yavakoti (or Yamakoti, Indonesia), and mid-night at Romaka (Rome).”30 With the current
knowledge, we can safely say that Āryabhat.a I somehow knew the longitudes of these locations
and could correctly infer the time differences.
The above statement is remarkable since it involves information that requires simultaneous
observation. Needless to say, Āryabhat.a I had no telephone, and no other means of contacting people in Rome was available to a person sitting in India. The only possibility was to predict it using geometry and astronomy, with the relative motion of the Sun and the Earth, including the Earth’s shape, known to the person.
This is similar to what Eratosthenes did when he made simultaneous measurements at Syene
and Alexandria, both in Egypt, to measure the size of the Earth.31
Several Hindu scriptures indicate that the Sun constantly illuminates the Earth. Sunrise
or sunset happens depending on the side of the Earth illuminated by the Sun at that particular instant. Following are some representative statements: “Actually the Sun neither rises nor sets.
. . When the Sun becomes visible to people, to them He [the Sun] is said to rise; when He [the
Sun] disappears from their view, that is called his [the Sun] setting. There is in truth neither
rising or setting of Sun, for he is always there; and these terms merely imply his presence and
his disappearance,” suggests Vis. n. u-Purān. a.32
“He [the Sun] never sets or rises. When [we] think that he is setting, he is only turning
round, after reaching the end of the day, and makes night here and day below. Then, when
[we] think he is rising in the morning, he is only turning round after reaching the end of the
night, and makes day here and night below. Thus, he (the Sun) never sets at all,” suggests the
Āitareya-Brāhman. a.33
“Never did the Sun set there nor did it rise. . . the Sun neither rises nor sets. He who
thus knows this secret of the Vedas, for him, there is perpetual day,”34 suggests the Chāndogya-
Upanis. ad.
30Āryabhat. īya, Gola, 13. The location of Siddhapura is not known. Assuming Āryabhat.a I to be correct, this longitude
falls somewhere in the continent of America, near Mexico.
31Brown and Kumar, 2011.
32Vis. n. u-Purān. a, 2: 8.
33Āitareya-Brāhman. a, 14: 6; taken from Subbarayappa and Sarma, 1985, p. 28.
34Chāndogya-Upanis. ad, 3: 11: 1–3; taken from Subbarayappa and Sarma, 1985, p. 28. Some translators have translated
the text somewhat differently. However, in most translations, “Sun never sets” is stated.
4.1.1 UJJAIN, GREENWICH OF THE ANCIENT WORLD
Sri Lanka was used as a reference point because the prime meridian of Ujjain, a city in India (longitude 75°43′E, latitude 23°13′N), intersects the equator near Sri Lanka. The location of
Ujjain played an important role in astronomy during the ancient and medieval periods. Ujjain
was the Greenwich of the ancient and medieval world and most ancient astronomers in India
and Arabia used this city as a reference point for their astronomical observations. Āryabhat.a I
defined the distance between Ujjain and Sri Lanka as one-sixteenth of the Earth’s circumference
in the north direction.35 This gives the latitude of Ujjain as 360°/16 = 22°30′, which is quite close to the actual value. A difference of 1° in latitude creates an error of 69 miles in distance. Therefore, the value given by Āryabhat.a I for the latitude differs from the actual value by 43′, which is equivalent to less than 50 miles. People in most metropolitan cities would agree that
this is a small distance. For example, suburban sites in New York or Los Angeles can be 50
miles away from the city center and still be considered a part of these cities. Since the settlements around Ujjain have evolved over the last thousand years, this may easily be the distance between the modern city center and the ancient location of the observatory.
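The arithmetic above is easy to verify. The sketch below only checks the numbers quoted in this paragraph; the figure of 69 miles per degree of latitude is taken from the text, and the latitude of Ujjain is the value given earlier (23°13′N).

```python
# Check of the Ujjain latitude estimate discussed above.
# Assumption: 1 degree of latitude corresponds to about 69 miles, as stated in the text.

aryabhata_latitude = 360 / 16          # one-sixteenth of a full circle, in degrees (22.5)
actual_latitude = 23 + 13 / 60         # Ujjain, 23 degrees 13 minutes north

error_degrees = actual_latitude - aryabhata_latitude   # ~0.72 degrees, i.e., ~43 minutes of arc
error_miles = error_degrees * 69                       # ~49 miles, "less than 50 miles"

print(f"Aryabhata I's latitude: {aryabhata_latitude:.2f} deg (22 deg 30')")
print(f"Difference: {error_degrees * 60:.0f} minutes of arc, about {error_miles:.0f} miles")
```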
During the medieval period, Ujjain was considered the prime meridian in the Middle East. More details on Ujjain are provided in Section 4.4.
4.1.2 DIURNAL MOTION OF THE EARTH
Āryabhat.a I assigned diurnal motion to the Earth and kept the Sun stationary in his astronomical
scheme. According to Āryabhat.a I, the motion of the stars that we observe in the sky is an illusion.
To explain the apparent motion of the Sun, Āryabhat.a I used an analogy of a boat in a river. “As
a man in a boat going forward sees a stationary object moving backward just so in Sri-Lanka a
man sees the stationary asterisms (stars) moving backward exactly toward the West.”36
Āryabhat.a I. Āryabhat.a I was the head of the famous Nālandā University near modern
Patna. He composed a book, Āryabhat. īya, dealing with mathematics (gan. ita-pada), spheri-
cal astronomy (gola-pada), and time-reckoning (kāla-kriyā-pada). The book was composed
in 499 CE by Āryabhat.a I when he was only 23 years old. His work made a major impact
in India for several centuries as later writers such as Brahmagupta (born around 598 CE) and Varāhamihira (505–587 CE) wrote extensive commentaries on it.
Āryabhat. īya is an invaluable document for the historians of science; it provides an
account of the ancient sciences of the Hindus. The content of this book was not new knowledge created by Āryabhat.a I. He was emphatic in not taking the credit and labeled the content of his book as “old knowledge.”
35Āryabhat. īya, Gola, 14; Shukla and Sarma, 1976, p. 123–126.
36Āryabhat. īya, Gola, 9; to read more about Āryabhat.a I, read Hooda and Kapur, 1997; and Shukla and Sarma, 1976.
Āryabhat. īya has 118 metrical verses subdivided into four chapters. The first chap-
ter, Daśa-gītikā, has ten verses and provides astronomical constants. The second chapter is
on mathematics. The third chapter is on time-reckoning, and the fourth chapter concerns
spherical astronomy. In this book, Āryabhat.a I provided the value of π as approximately equal to 3.1416, the solution of indeterminate and quadratic equations, the theory of planetary motions, and calculations of the latitudes of planets. And most important of all,
a millennium before Copernicus, he assigned axial motion to the Earth in his astronomical
model and kept the stars stationary.
In Āryabhat.a I’s honor, the first artificial satellite of India was launched in 1975 and
was named after him. The International Astronomical Union has also named a lunar crater
after Āryabhat.a I in 1979.
The interpretation is that a person standing on the equator, which rotates from west to east, would see the asterisms (constellations) moving westward. This clear grasp of the Earth’s rotational motion is splendidly conveyed by Āryabhat.a I’s boatman analogy.
Interestingly, about one millennium after Āryabhat.a I, Copernicus used a similar argu-
ment to assign motion to the Earth. “For when a ship is floating calmly along, the sailors see
its motion mirrored in everything outside, while on the other hand they suppose that they are
stationary, together with everything on board. In that same way, the motion of the earth can
unquestionably produce the impression that the entire universe is rotating.”37 This similarity be-
tween Āryabhat.a I’s statement and Copernicus’ statement is intriguing. Did Copernicus know
of the work of Āryabhat.a I? This issue is not clearly resolved as yet. However, there is a possibility
of Copernicus knowing the work of Āryabhat.a I, as discussed in Section 4.4.
******
Once the rotational (axial or spin) motion of the Earth is established, what kinds of issues does it create in explaining the observed phenomena? Why do we have different seasons? Does it lead to the heliocentric theory of the solar system? Let us investigate further to answer these questions.
With spin motion assigned to the Earth, a set of other questions immediately arises: Is there a motion of the Sun? This question pops up since there is no longer a necessity to explain day and night with the Sun’s motion. According to Āryabhat.a I, the Earth spins on its axis like a merry-go-round. However, we do not experience any fly-away feeling on the Earth as we do on a merry-go-round. The spin motion of the Earth also creates problems in explaining the flight of birds. How do they get back to their nests with the Earth spinning so fast, especially if they fly to the West? Assigning any motion to the Earth seems contrary to human experience. It is a
much bigger triumph to assign any kind of motion to the Earth than to just add orbital motion
to the already known spinning (rotational) motion of the Earth.
37Copernicus’ On the Revolution, Book 1, Chapter 8; Copernicus, 1978, p. 16.
This leads to the next question. If day and night are due to the Earth spinning in one place,
why do we have different seasons? Why do we observe northward or southward motions of the
Sun? Āryabhat.a I considered constellations and stars to be stationary in the sky and attributed
their apparent motion to the moving Earth. Can the Sun be also stationary like other stars?
Nowhere did Āryabhat.a I struggle in dealing with such questions in his book. His statements are fairly conclusive and straightforward. In summary, the axial rotation of the Earth compli-
cates the simplicity of the geocentric system. On the contrary, in a heliocentric system, the axial
rotation is a necessity.
In describing the spin motion of the Earth, Āryabhat.a I makes another explicit statement,
“The revolutions of the Moon multiplied by 12 are zodiac signs [rāśi]. The signs multiplied by 30 are degrees [360°]. The degrees multiplied by 60 are minutes. . . . The Earth moves one minute
in a prān. a.”38
Āryabhat.a I provided the following definition of prān. a, a unit of time: “One year consists
of twelve months. A month consists of thirty days. A day consists of sixty nād. ī. A nād. ī consists
of sixty vinād. ikā. Sixty long syllables or six prān. as make a sidereal vinād. ikā. This is the division
of time.”39
Let us transcribe it into a modern set of standards. Assuming a day to be 24 hours long with a 360° rotation (86,400 seconds), one nād. ī comes out to be 1,440 seconds, a vinād. ikā equals 24 seconds, and a prān. a comes out to be four seconds. Therefore, Āryabhat.a I’s statement can be restated as follows: “The Earth rotates by an angle of one minute (1′) in 4 seconds.” One minute of arc multiplied by 21,600 gives us 360°. Thus, 4 seconds multiplied by 21,600 should give us the time that is equal to one day (or 24 hours). This is indeed the case when we multiply the two numbers and change the units to hours. Therefore, not only did Āryabhat.a I assign spin motion to the Earth, he also correctly provided the speed of the spin. Āryabhat.a I makes an explicit statement elsewhere: “The Earth rotates through [an angle of ] one minute of arc in one prān. a.”40
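A quick numerical check of the unit conversion and of the rotation rate described above follows; the 24-hour day (86,400 seconds) is a modern convention used here only to express the units.

```python
# Aryabhata I's time units, expressed in modern seconds, and the implied rotation rate.
day_seconds = 24 * 60 * 60          # 86,400 s in a day (modern convention)
nadi = day_seconds / 60             # 1,440 s
vinadika = nadi / 60                # 24 s
prana = vinadika / 6                # 4 s

arc_minutes_per_rotation = 360 * 60               # 21,600 minutes of arc in a full circle
rotation_time = arc_minutes_per_rotation * prana  # one arc-minute of rotation per prana

print(prana)                    # 4.0 seconds per prana
print(rotation_time / 3600)     # 24.0 hours, i.e., one full rotation per day
```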
Āryabhat.a I provided the periods of revolution for different planets, the Moon, and the
Sun in one yuga: “In a yuga the revolutions of the Sun are 4,320,000, of the Moon 57,753,336,
of the Earth 1,582,237,500, of Saturn 146,564, of Jupiter 364,224, of Mars 2,296,824, of Mer-
cury and Venus the same as those of the Sun; of the Moon’s apogee, 4,88,219; of [the śighrocca
(conjunction)] of Mercury, 1,79,37,020; of (the śighrocca) of Venus, 70,22,388; of (the śighrocca)
of the other planets, the same as those of the Sun; of the Moon’s ascending node in the opposite
direction (i.e., westward), 2,32,226. These revolutions commenced at the beginning of the sign
Aries on Wednesday at sunrise at Sri Lanka (when it was the commencement of the current
yuga.)”41. Yuga is an important concept in Hindu cosmology, as given in the Hindu scriptures
(Section 4.3).
38Āryabhat. īya, Dasgītika, 4.
39Āryabhat. īya, Kālakriyā, 2.
40Āryabhat. īya, Dasgītika, 6.
41Āryabhat. īya, Dasgītika, 3–4.
The “Moon’s apogee” defines the point of the Moon’s orbit when it is farthest from the
Earth. The śighrocca of a planet is the imaginary body which is supposed to revolve around the
Earth with the heliocentric mean velocity of the planet. Shukla and Sarma have done a careful
analysis of these periods in their translation of Āryabhat. īya, calculated the sidereal period in
terms of days, and compared them with the modern values.42 (Table 4.1.)
Table 4.1: Mean motion of the planets

Planet                  Revolutions in 4,320,000 Years   Sidereal Period, Āryabhat.a I (days)   Sidereal Period, Modern (days)
Sun                     4,320,000                        365.25868                              365.25636
Moon                    57,753,336                       27.32167                               27.32166
Moon's apogee           488,219                          3,231.98708                            3,232.37543
Moon's asc. node        232,226                          6,794.74951                            6,793.39108
Mars                    2,296,824                        686.99974                              686.9797
Śighrocca of Mercury    17,937,020                       87.96988                               87.9693
Śighrocca of Venus      7,022,388                        224.69814                              224.7008
Jupiter                 364,224                          4,332.27217                            4,332.5887
Saturn                  146,564                          10,766.06465                           10,759.201
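The sidereal periods in Table 4.1 follow from the revolution counts quoted above: the number of civil days in a yuga (the Earth's rotations minus the Sun's revolutions) divided by a body's revolutions gives its period in days. The snippet below is a sketch of that arithmetic, not Shukla and Sarma's actual procedure.

```python
# Sidereal periods (in days) from Aryabhata I's revolution counts per yuga.
revolutions = {
    "Sun": 4_320_000,
    "Moon": 57_753_336,
    "Moon's apogee": 488_219,
    "Moon's asc. node": 232_226,
    "Mars": 2_296_824,
    "Sighrocca of Mercury": 17_937_020,
    "Sighrocca of Venus": 7_022_388,
    "Jupiter": 364_224,
    "Saturn": 146_564,
}

earth_rotations = 1_582_237_500                      # sidereal days in a yuga (quoted above)
civil_days = earth_rotations - revolutions["Sun"]    # 1,577,917,500 civil days in a yuga

for body, revs in revolutions.items():
    print(f"{body:22s} {civil_days / revs:12.5f} days")
# The Sun comes out to 365.25868 days and the Moon to 27.32167 days, as in Table 4.1.
```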
As one can notice, all values are quite close to the modern accepted values. Interestingly, the periods of Mercury and Venus are explicitly considered equal to that of the Sun by Āryabhat.a I. Since Mercury and Venus are the only two planets inside the Earth’s orbit, their orbital periods, as observed from the Earth, come close to that of the Sun. Obviously, this observation is in a geocentric system. Was it because Āryabhat.a I believed in the geocentric solar system, or because it was a prevalent practice of the period to describe motion as viewed from the Earth? In the very next verse, Āryabhat.a I erased any doubt about the two different motions assigned to Mercury and Venus. Thus, the sidereal periods of Mercury and Venus come out to 87.97 and 224.70 days in the calculation of Shukla and Sarma.43 These compare well with the modern values of 87.97 and 224.70 days, respectively, for Mercury and Venus. Thus, Āryabhat.a I provides one statement in the geocentric system and the other in the heliocentric system, with the Earth as the point of observation. This has intrigued astronomers through the ages.
42Shukla and Sarma, 1976, p. 7.
43Shukla and Sarma, 1976, p. 7.
According to a detailed analysis given by B. L. van der Waerden,44 the motions of Mercury and Venus as given by Āryabhat.a I were in a heliocentric model.45 Van der Waerden makes the
following assertions to back up his conjecture that Āryabhat.a I proposed a heliocentric model
and not a geocentric model:46
1. In a geocentric system, there is no need to assume axial rotation for the Earth as most
observations, though wrong, are easier to explain. However, in a heliocentric system, we
are forced to think of axial rotation. Āryabhat.a I clearly argues for the axial rotation of the Earth and provides an accurate period for the axial motion.
2. In the Midnight system of Āryabhat.a I, the apogees (farthest point from the earth) of
the Sun and Venus are both at 80°, and their eccentricities (a measure of an orbit’s deviation from circularity) are also equal. This fact can be explained by assuming
that the system was originally derived from a heliocentric system.47 “In genuine epicycle
theory for Venus, the eccenter carrying the epicycle is independent of the Sun’s orbit. Its
apogee and eccentricity are determined from observations of Venus, whereas the apogee
and eccentricity of the Sun are determined from eclipse observations. The probability that
the apogee and eccentricity of Venus coincide with those of the Sun is very small.”48
3. The periods of revolution for the outer planets are essentially the same in the geocentric and
heliocentric models. It is the planetary periods of the inner planets, Mercury and Venus, that separate the two theories. In a geocentric theory, these two periods will essentially be
the same as the solar period. However, in a heliocentric system, these periods are quite dif-
ferent. Āryabhat.a did assign different periods for the Sun, Mercury and Venus, indicating
a heliocentric system.
4. The revolutions of Mercury and Venus considered by Āryabhat.a I are heliocentric revolu-
tions, not geocentric. These periods provided by Āryabhat.a I have “no importance what-
soever” in a geocentric system.49
Based on the descriptions given in Āryabhat. īya, Van der Waerden concludes that “it is
highly probable that the system of Āryabhat.a [I] was derived from a heliocentric theory by
setting the center of the Earth at rest.”50 The reason for this kind of tortuous path in the work
44B. L. van der Waerden (1903–1996) was a prolific author with several books on algebra, geometry, and astronomy to
his credit. He taught at the University of Leipzig in Germany, the University of Amsterdam in the Netherlands, and the
University of Zurich in Switzerland.
45van der Waerden, The Heliocentric System in Greek, Persian, and Hindu Astronomy, in the book by King and Saliba,
1987, p. 534.
46van der Waerden, The Heliocentric System in Greek, Persian, and Hindu Astronomy, in the book by King and Saliba,
1987, p. 530–535.
47van der Waerden, in the book by King and Saliba, 1987, p. 532.
48van der Waerden, in the book by King and Saliba, 1987, p. 531.
49van der Waerden, in the book by King and Saliba, 1987, p. 534.
50Van der Waerden in the book by King and Saliba, 1987, p. 534.
of Āryabhat.a I is perhaps due to an overwhelming tendency among all early astronomers and
their students, in the words of Van der Waerden, “to get away from the idea of a motion of
the Earth.”51 In the history of astronomy, for convenience, astronomers did transform the heliocentric theory into an equivalent geocentric one. This was done by Tycho Brahe when he transformed the Copernican heliocentric model into a geocentric one.52
Van der Waerden’s conclusion that Āryabhat.a I proposed a heliocentric model of the so-
lar system has received independent support from other astronomers.53 For example, Hugh
Thurston came up with a similar conclusion in his independent analysis. “Not only did Āryabhat.a
believe that the Earth rotates, but there are glimmerings in his system (and other similar Indian
systems) of a possible underlying theory in which the earth (and the planets) orbits the Sun,
rather than the Sun orbiting the earth.”54 The evidence used by Thurston is in the periods of the
outer planets and the inner planets. Āryabhat.a I’s basic planetary periods are relative to the Sun, which is not so significant for the outer planets. However, it is quite important for the inner planets (Mercury and Venus).
The motion that Āryabhat.a I assigned to the Earth is not a mere speculation of modern
astronomers. Āryabhat.a I’s thesis was well known in the Middle East even six centuries later.
Al-Bīrūnī (973–1050 CE) erroneously criticized Hindu astronomers for assigning motion to the
Earth (Figure 4.3). He referred to the work of Varāhamihira, a Hindu astronomer, to support
his idea of the geocentric universe: “If that were the case, a bird would not return to its nest
as soon as it had flown away from it toward the west,”55 and “stones and trees would fall.”56
In the first drawing, Figure 4.3, the bird flies to the West and leaves its nest. After a while, when it comes back, the nest has moved considerably due to the motion of the Earth. This was
the argument of al-Bīrūnī against the moving Earth. A similar argument was used by Aristotle
(384–322 BCE) to favor his theory of the geocentric universe. Al-Bīrūnī’s criticism validates the
work of Āryabhat.a I in India and its existence in the Middle East prior to the eleventh century.
After viewing various possibilities of the motion of the Earth, al-Bīrūnī favored the sta-
tionary Earth: “The most prominent of both modern and ancient astronomers have deeply stud-
ied the question of the moving of the Earth, and tried to refute it. We, too, have composed a
book on the subject called Miftah-ilm-alhai‘a (Key of astronomy), in which we think that we have
surpassed our predecessors, if not in the words, at all events in the matter.”57 This shows that at least some six centuries after the heliocentric theory was proposed by Āryabhat.a I, the Islamic philosophers from Arabia still could not grasp the idea of a moving Earth.
51Van der Waerden in the book by King and Saliba, 1987, p. 530.
52Van der Waerden in the book by King and Saliba, 1987, p. 530.
53Billard, 1977; Thurston, 1994 and 2000.
54Thurston, 1994, p. 188.
55Sachau, 1964, vol. 1, p. 276.
56Sachau, 1964, vol. 1, p. 277.
57Sachau, 1964, vol. 1, p. 277.
Figure 4.3: Al-Bīrūnī’s argument against Āryabhat.a’s assignment of the motion of the earth.
(Designed with the help of David Valentino.)
Science texts still do not cover Āryabhat.a’s work along with the work of Copernicus.
The opponents of Āryabhat.a’s heliocentric system are totally silent on why Āryabhat.a assigned spin motion to the Earth and yet did not struggle to explain simple astronomical observations with this spin motion. They are also silent on why the periods of Mercury and Venus are not equal to the period of the Sun, as they should be in a geocentric system. Obviously, a lot
more research on Āryabhat. īya is needed to resolve this issue.
4.2 HINDU CALENDAR
Pañcā ˙nga (Pañcā = five and a ˙nga = limb, meaning five-limbed) is the term used for the Hindu al-
manac. The five limbs are: day (vāra), date (tithi), naks. atra (asterism), yuga (period), and kāran. ā
(half of tithi). Pañcā ˙nga is popular even today among the Hindus. Hindu priests use it for pre-
dicting eclipses, defining time of various rituals, including marriage, casting horoscopes, and for
solemn entrance into a house (gr. ah-praveśa) or business. Hindu families commonly use Pañcā ˙nga
to check the day of fasting, auspicious times for worshiping, and days of festivals.
Most cultures have calendars that are either based on the motion of the Moon or the
Sun. The regular appearance of the new or the full moon forms a basis of most lunar calendars,
like the Islamic calendar. Solar calendars are based on the cyclic motion of the Sun in different
zodiacs that is due to the orbital motion of the Earth around the Sun. The Hindu calendar is
luni-solar in which the months are based on the motion of the Moon while the year is defined by
the Sun. A year is the time the Earth takes to complete one revolution around the Sun, starting
from Mes. a (Aries). This calendar is similar to the Jewish or Babylonian calendar that are also
luni-solar.
In the Hindu Calendar, a month is divided into two equal parts, known as paks. a, each of
roughly fifteen days depicting the waxing and waning of the Moon. The paks. a starting from the
new moon to the full moon is considered the bright-half (Śukla-paks. a) while the second part
starting from a full moon to a new moon, is known as the dark-half (Kr. s. n. a-paks. a).58 The new
moon day, when the longitudes of the Sun and the Moon are equal, is called amāvāsya. The full moon night, when the Sun and the Moon are 180° apart in longitude, is known as pūrn. imā. This gives a mean lunar year of 354 days 8 hours 48 minutes and 34 seconds.
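The quoted length of the mean lunar year follows directly from twelve mean synodic months. The check below uses the modern mean value of about 29.5306 days for the synodic month, an assumption not stated in the text.

```python
# Twelve mean synodic months give the mean lunar year quoted above.
synodic_month = 29.530589                 # modern mean value, in days (assumed input)

lunar_year = 12 * synodic_month           # ~354.367 days
days = int(lunar_year)
hours = (lunar_year - days) * 24
minutes = (hours - int(hours)) * 60
seconds = (minutes - int(minutes)) * 60

print(days, int(hours), int(minutes), round(seconds))
# -> 354 8 48 35, i.e., roughly 354 days 8 hours 48 minutes, as quoted in the text
```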
A day (vāra) begins at sunrise. The date (tithi) is indicative of the position of the Moon
relative to the Sun. A month (māsa) starts and ends on amāvāsya (meaning dwelling together,
implying conjunction of the Sun and the Moon, the new moon). The word amāvāsya is used in
Atharvaveda59 which signifies that the ancient Hindus knew the cause of the new moon during
the Vedic period. The days in between amāvāsya and pūrn. imā are counted as numbers: ekādaśī
(eleventh day of the fortnight), caturthī (fourth day of the lunar fortnight), etc. The ecliptic circle
was divided into 27 parts, each consisting of 13°20′, called naks. atra.
To understand the features of the Hindu calendar, let us compare it with the Western cal-
endar that is popular internationally. The Western calendar, also called the Gregorian calendar,
was proposed by Pope Gregory XIII in 1582. It was a modified form of the calendar estab-
lished by Julius Caesar, known as the Julian calendar. This calendar was based on the Egyptian
calendar of the period. The Catholic kingdoms adopted the Gregorian calendar soon after its
inception. However, England resisted its use and adopted it only in 1752, amid some resistance from the Protestant majority.
The Western calendar is irregular and inconvenient to use because:
1. There exists no easy way to figure out the date of a particular day from simple observations.
2. Different months have different numbers of days. This creates difficulties in the business
world where, at times, monetary transactions are made based on the day devoted to a
particular task.
3. Because the span of a month is different for different months, performance records are
difficult to compare.
4. There is a problem of the leap year. One has to remember the year to decide the number of
days in the month of February. There is no possible way to figure it out using astronomical
observations. One has to remember the empirically defined rules to figure this out.
Because most people have used the Western calendar since their childhood and are not familiar with alternatives, they do not realize its weaknesses. In the lunar calendar, one year equals 354 days
58see Arthaśāstra, 108.
59defined in Atharvaveda, 7: 79.
and, in the solar calendar, one year is roughly equal to 365 days. In a lunar calendar, the difference of 11 days per year can cause radically different seasons for the same month in two years that are about 15–17 years apart. This is the case with the Islamic calendar.
Āryabhat.a I explained the civil and sidereal days: “The revolutions of the Sun are solar
years. The conjunctions of the Sun and the Moon are lunar months. The conjunctions of the
Sun and Earth are [civil] days. The rotations of the Earth are sidereal days.”60
This defines the sidereal day as the period from one star-rise to the next, civil days as one
sun-rise to the next, and the lunar month, or synodic month, as from one new moon to the next
new moon. The ancient Hindus, who knew both the lunar and solar calendars, realized that 62
solar months are equal to 64 lunar months. Therefore, they added one extra month after every
30–35 months.
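A rough check of this intercalation rate, using round modern values for the solar year and the synodic month (the specific numbers below are ours, not the text's):

solar_year = 365.2422                            # days
synodic_month = 29.5306                          # days, new moon to new moon
drift_per_year = solar_year - 12 * synodic_month # about 10.9 days of drift per lunar year
months_per_extra_month = 12 * synodic_month / drift_per_year
print(round(drift_per_year, 1), round(months_per_extra_month, 1))   # 10.9 32.6

One whole synodic month of drift thus accumulates after roughly 32–33 months, consistent with inserting an extra month every 30–35 months.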
The R. gveda described the Moon as “the maker of months” (māsa-krt).61 “True to his holy
law, he knows the twelve Moons with their progeny: He knows the Moon of later birth.”62
Here “the twelve Moons with their progeny” means the twelve months and “the Moon of the
later birth” means the 13th month, the supplementary or the intercalary month of the luni-solar
calendar. This is a clear indication of the luni-solar calendar during the R. gvedic-period.
The Atharvaveda also mentions the 13th month in some years. “He [Sun] who metes
out the thirteenth month, constructed with days and nights, containing thirty members, . . .”63
The creation of 13th month or the intercalary month of thirty days is ascribed to the Sun, the
Moon being the originator of the ordinary months of the year. This is a clear indication that the
thirteenth month was added to keep up with the seasons since it is ascribed to the Sun.
Al-Bīrūnī explained the Hindu luni-solar calendar in his book, Alberuni’s India: “The
months of the Hindus are lunar, their years solar; therefore their new year’s day must in each solar
year fall by so much earlier as the lunar year is shorter than the solar (roughly speaking, by eleven
days). If this precession makes up one complete month, they act in the same way as the Jews, who
make the year a leap year of thirteen months . . . The Hindus call the year in which a month is
repeated in the common language malamasa [malamāsa]. Mala means the dirt that clings to the
hand. As such dirt is thrown away out of the calculation, and the number of months of a year
remains twelve. However, in the literature, the leap month is called adhimāsa.”64 Kaut.ilya (ca.
300 BCE), in his Arthaśāstra, mentions a separate intercalary month and calls it malamāsa.65
This month is generally added every third year.66
In the Hindu calendar, most of the festivals have religious, social, and seasonal importance.
In societies where the lunar calendar is in practice, the seasonal festivals are not much celebrated.
India is an agricultural country where approximately 65% of the population still lives in villages.
60Āryabhat. īya, Kālakriyā, 5.
61R. gveda, 1: 105: 18.
62R. gveda, 1: 25: 8.
63Atharvaveda, 13: 3: 8.
64Sachau, 1964, vol. 2, p. 20.
65Arthaśāstra, 60; Shamasastry, 1960, p. 59.
66Arthaśāstra, 109; Shamasastry, 1960, p. 121.
During the Holī festival, a big fire is burned every year to symbolize the death of Holikā, the
aunt of Lord Dhruva. People bring a sample of their harvest, and place it over a fire to roast the
wheat or barley seeds which they tie to sugarcane. They share these seeds and sugarcane with
friends and family members and decide whether the crop of wheat, barley and cane sugar is
ready for harvest or not. During Daśaharā, the quality of barley and wheat seeds is tested in a
social gathering; people carry sprouted seeds and share them with their friends. Similarly, after
the monsoon season from July to September, one needs to get ready for the winter in India.
Cleaning spider webs, dusting rooms, painting walls, and decorating the houses are common
chores before Dīpāvalī (Dīvālī).
Most Hindu festivals are defined either by the position of the Moon or the Sun. Makara-
Sa ˙mkrānti (the Sun enters the sign of Makara (Capricorn) constellation in its northward jour-
ney), Gan. eśa-caturthī (fourth day of the Moon, starting from amāvāsya, the new moon), Kr. s. n. a-
Janmās. t. amī (eighth day of the Moon) are all defined by the phase of the Moon or the Sun in
a particular constellation. Basant-Pañcamī (fifth day of the new Moon), Rām-Navamī (a day
to honor Lord Rāma, falls on the ninth day of the new moon), Guru-pūrn. imā (a day to honor
teachers, always falls on the full moon), and Nāga-Pañcamī (a day to honor snakes, falls on the
fifth day of the new moon) are some of these festivals that are defined by the Moon.
The Moon is seen in the sky on almost all nights unless it is close to the Sun. The position
of the Sun can be fixed against a constellation only a little before sunrise or after sunset—the
time when the sunlight is too weak to suppress the light of other stars. Hindu astronomy, unlike
Western astronomy, mapped the sky with the phases of the Moon rather than with the stars. It
simplified their calculations—at full moon, the position of the Sun can automatically be given by
that of the Moon. Similarly, the position of the Sun can easily be determined with the different
phases of the Moon.
The following are the months in the Hindu calendar:
1. Caitra (March-April)
2. Vais. ākha (April-May)
3. Jyais. t. ha (May-June)
4. Ās. ād. ha ( June-July)
5. Śrāvan. a ( July-August)
6. Bhādrapad (August-September)
7. Āśvina or Kwār (September-October)
8. Kārttika (October-November)
9. Agrahāyana or Aghan (November-December)
10. Paus. a (December-January)
11. Māgha ( January-February)
12. Phālguna (February-March)
The names of the months are derived from the names of the naks. atra (star or constellation)
in which the Sun dwells (or nearby). Xuan Zang (or Hiuen Tsang), a Chinese traveler who
visited India during the seventh century, and al-Bīrūnī, who traveled in India during the eleventh
century, also used similar names for the various months in India.67
Nothing was more natural for the sake of counting days, months, or seasons than to ob-
serve the twenty-seven places which the Moon occupied in her passage from any point in the
sky back to the same point. The location of the Moon and its shape provided the tithī (date) as
well as the particular time in the night sky for astute observers. This procedure was considerably
easier than determining the Sun’s position either from day to day or from month to month. As
the stars are not visible during daytime and barely visible at sunrise and sunset, the motion of
the Sun in conjunction with certain stars was not an easily observable task. On the contrary, any
Vedic shepherd was able to decide day and time easily with the observation of the Moon.
The ancient Hindus formulated a theory of creation which was cyclic in nature. The uni-
verse followed a cycle of manifested and non-manifested existence. A new unit of time,
yuga, was chosen to define the period of this cycle. The yuga system is based on astronomical
considerations, and is frequently mentioned and explained in the Purān. ic literature.
In the yuga system of the Hindus, a mahā-yuga (mahā means big in the Sanskrit language)
is divided into four yugas: Satya or Kr. ta, Tretā, Dvāpar, and Kali. The spans of Satya-yuga, Tretā-
yuga, Dvāpara-yuga, and Kali-yuga are in the ratio 4: 3: 2: 1. According to the Hindu scriptures,
life on the Earth diminishes in goodness as we go from Satya- to Kali-yuga. At present, we live
in the age of Kali-yuga that started in 3102 BCE of the Julian Calendar.
A Kali-yuga is equal to 432,000 solar years. A maha-yuga is equal to 4 + 3 + 2 + 1 = 10
Kali-yuga, equivalent to 4,320,000 solar years. Seventy maha-yuga constitute a manvantara and
fourteen manvantara constitute one kalpa. A kalpa is the duration of the day of Brahmā, the
creator of the universe. The night is equally long and the creation dissolves into the unman-
ifested form during the night of Brahmā. After that, it starts again.68 Thus, the total cycle of
creation is about 8.5 billion years. It is this number that caught the attention of Carl Sagan, who
recognized it to be close to the age accepted by modern science, which is also of the order of
billions of years. In contrast, just two centuries ago, a large number of scientists and scholars
in the West considered the Earth to be just six thousand years old.
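A minimal sketch of the yuga arithmetic as given above (the counts follow the text; the variable names are ours):

KALI = 432_000                       # solar years in one Kali-yuga
maha_yuga = (4 + 3 + 2 + 1) * KALI   # 4,320,000 years; Satya:Treta:Dvapara:Kali = 4:3:2:1
manvantara = 70 * maha_yuga          # seventy maha-yugas, as stated in the text
kalpa = 14 * manvantara              # one day of Brahma
cycle = 2 * kalpa                    # day plus the equally long night of Brahma
print(maha_yuga, kalpa, cycle)       # 4320000 4233600000 8467200000 -> about 8.5 billion years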
67For Hiuen Tsang, see Beal, book II, p. 72; for al-Bīrūnī, see Sachau, vol. 1, p. 217.
68Garud. a-Purān. a, Chapter 233.
4.3 HINDU COSMOLOGY
The R. gveda raises questions about the creation: “What was the tree, what wood, in sooth, pro-
duced it, from which they fashioned forth the Earth and Heaven?”69 Or, “What was the place
whereon he took his station? What was it that supported him? How was it? Whence Visvakar-
man [God], seeing all, producing the Earth, with mighty power disclosed the heaven.”70
In Hindu theory of creation, matter was not created from nothing, as the Bhāgavad-Gītā
(2: 16) tells us: “Nothing of non-being comes to be, nor does being cease to exist.” The R. gveda
gives an account of creation and explains the state of the universe just before the blast: “Then
was neither non-existent nor existent: there was no realm of air, no sky beyond it. What covered
in, and where? And what gave shelter? Was water there, unfathomed depth of water? Death was
not then, nor was there aught immortal: no sign was there, the day’s and night’s divider. That one
thing, breathless, breathed by its own nature: apart from it was nothing whatsoever. Darkness
there was: at first concealed in darkness this all was indiscriminate chaos. All that existed then
was void and formless: by the great power of warmth was born that unit.”71
The R. gveda uses the analogy of a blast furnace of a blacksmith to explain the creation
process: “These heavenly bodies produced with blast and smelting, like a smith. Existence, in an
earlier age of Gods, from non-existence sprang. Thereafter were the regions born. This sprang
from the productive power.”72 This is quite similar to the Big-Bang Theory in which the universe
was created with a Big Bang, like a bomb explosion.
The Srimad-Bhāgvatam provides a description of the process of creation in which the
whole universe was in noumenal form and came to existence under the desire of the Creator
(God). The existence will continue for a period of time before the universe again goes
back to its noumenal form of matter. This cyclic (oscillating) process will continue
forever.73 The concept of matter that cannot be experienced was not a part of science only 100
years ago. Today, dark matter and dark energy are scientific realities.
The Bhagavad-Gītā describes the creation of the universe as a transformation of noumenal matter
into matter: “From the noumenal all the matter sprung at the coming of the day; at this coming
of the night they dissolve in just that called the noumenal.”74
For the Hindus, the end of each creation comes with heat death, somewhat similar to
the accelerated global warming which is causing concerns to modern scientists. This process of
destruction is described in the Vis. n. u-Purān. a. “The first, the waters swallow up the property of
Earth, which is the rudiment of smell; and Earth, deprived of its property, proceeds to destruc-
tion. Devoid of the rudiment of odor, the Earth becomes one with water. The waters then being
much augmented, roaring, and rushing along, fill up all space, whether agitated or still. When
69R. gveda, 10: 31: 7
70R. gveda, 10: 81: 2.
71R. gveda, 10: 129: 1–4.
72R. gveda, 10: 7: 2, 3.
73Srimad-Bhāgvatam, 3, 11–12.
74Bhāgavad-Gītā, 8: 8.
the universe is thus pervaded by the waves of the watery element, its rudimental flavor is licked
up by the element of fire, and, in consequence of the destruction of the rudiments, the waters
themselves are destroyed. Deprived of the essential rudiment of flavor, they become one with
fire, and the universe is therefore entirely filled with flame, which drinks up the water on ev-
ery side, and gradually overspreads the whole of the world. While space is enveloped in flame,
above, and all around, the element of wind seizes upon the rudimental property, or form, which
is the cause of light; and that being withdrawn, all becomes of the nature of air.”75
Al-Bīrūnī used the views of Varāhamihira, a sixth-century Hindu philosopher, to explain
the Hindu view of creation. “It has been said in the ancient books that the first primeval thing
was darkness, which is not identical with the black color, but a kind of non-existence like the
state of a sleeping person.”76 “Therefore, they [Hindu] do not, by the word creation, understand
a formation of something out of nothing.”77 “By such a creation, not one piece of clay comes
into existence which did not exist before, and by such a destruction not one piece of clay which
exists ceases to exist. It is quite impossible that the Hindus should have the notion of a creation
as long as they believe that matter existed from all eternity.”78
Let us sum up the Hindus’ theory of creation as provided above: There was a void in the
beginning. This void was not the one that we perceive in a strict physical sense; this void was
full of energy, in analogy, similar to the fields of modern physicists. The voluminous writings in
Hindu literature do not describe the special creation as arising out of nothing. Almost all the
scriptures of the Hindus, including the earliest R. gveda, advocate that the present form of the
universe evolved from the noumenal form of matter.
“Void” or “nonexistence” is like the noumenal matter or dark matter proposed by the mod-
ern scientists and philosophers; like the wavefunctions of Schrödinger to explain the microscopic
reality that cannot be experienced but, when squared, gives the probability of existence of a par-
ticle. The noumenal matter is beyond the senses’ experience but gives rise to a manifested form
of matter with a blast under the desire of God.79 The language used to describe the process of
creation in the Vedas is at least four thousand years old. The account of the Hindu theory of
creation by al-Bīrūnī is about 1,000 years old. Yet, there are striking similarities in the modern
theory of creation and the Hindu theory of creation. In both theories, the present universe was
created with a blast. In addition, the ancient Hindus believed that the creation process is cyclic in
nature, i.e. it goes through the cycle of creation and destruction. The destruction will start with
75Vis. n. u-Purān. a, 6: 4.
76Sachau, 1964, vol. 1, p. 320.
77Sachau, 1964, 1, p. 321.
78Sachau, 1964, vol. 1, p. 323.
79Kant defined the term, noumenon, which means a thing that cannot be perceived and can only be inferred from experi-
ence. It is a product of intellectual intuition, like the interaction of electrical fields that gives rise to a force on a charged particle
when placed near other charged particles. Such intuitions are an integral part of most religions as well as science, where realities are
defined from inferred experiences. The analogy mentioned above is an effort to describe nonexistence of the ancient Hindus by
sharing similar concepts in physics. However, we are dealing with two different domains of knowledge and the nonexistence
of the ancient Hindus is not the dark matter or Schrödinger’s wavefunction.
an increase in heat that will give rise to an increase in the water levels of the oceans. Eventually,
the heat will become so high that it will destroy all life forms.
Since everything animate in the universe should have a cause or a beginning, what was
the beginning of the void or the beginning of the noumenal matter? Instead of drawing a picture
of the beginning of the universe, the Hindus exposed the limit of human inquiry. The R. gveda
says: “Who verily knows and who can here declare it, whence it was born and whence comes
this creation? The Gods are later than this world’s production. Who knows then whence it first
came into being? He, the first origin of this creation, whether he formed it all or did not form
it, Whose eye controls this world in highest heaven, He verily knows it, or perhaps He knows
not.”80 In this way, the R. gveda exposes the limits of human inquiries to define the first cause of
creation.
4.4 DIFFUSION OF HINDU ASTRONOMY
The sacred books of the Hindus and Āryabhat.a I’s and Brahmgupta’s books on astronomy
reached the Middle East and China, and caused a spurt of new books on astronomy in these
regions. With the influence of the Middle East in Europe, it also indirectly impacted there too.
Several noted European astronomers, including Copernicus and Kepler, read translations of the
books from the Middle East that were based on Hindu astronomy.
4.4.1 THE MIDDLE EAST
If we had to choose just ten astronomy books in human history, Zīj al-Sindhind of al-Khwārizmī
(c. 800–850 CE) would be among these books. Al-Khwārizmī, Abu Ja‘far Muh. ammad ibn Mūsā
(ca. 800–847 CE) was a member of al-Ma‘mun’s Bayt al-Hikma. The motion of seven celestial
bodies, the mean motions, and the positions of apogee and the nodes are described in his Zīj
al-Sindhind.81 As the title indicates, this book was based on Hindu astronomy. The contents and
the mathematical procedures82 used by al-Khwārizmī in his book agree well with Brahmsphuta-
siddhānta (composed in 628 CE) of Brahmgupta, a Hindu astronomer.83
“al-Khwārizmī’s . . . treatise on astronomy was . . . a set of tables concerning the movements
of the Sun, the Moon and the five known planets, introduced by an explanation of its practical
use. Most of the parameters adopted are of Indian origin, and so are the methods of calculation
described, including in particular use of the sine,” concludes Régis Morelon in his analysis.84 al-
Khwārizmī’s book has been translated into English by Neugebauer.85 This book of al-Khwārizmī
80R. gveda, X: 129: 6–7; for more information on Hindu cosmology, see Jain, 1975 and Miller, 1985.
81Translated by Neugebauer, 1962.
82Goldstein, 1967; Kennedy and Ukashah, 1969.
83Salem and Kumar, 1991, p. 47.
84Régis Morelon, Eastern Arabic Astronomy Between the Eighth and the Eleventh Centuries, in the book by Roshdi
Rashed, 1996, vol. 1, p. 21. He taught in CNRS in Paris for many years and later served as director of IDEO (Dominican
Institute of Oriental Studies) in Cairo from 1984 to 2008.
85Neugebauer, 1962.
became one of the most documented books of astronomy in Europe during the medieval period.
The famous Toledo Tables and the Alfonsine Tables were based on this book. In the title of his
book, al-Khwārizmī acknowledged the Hindus’ contribution to astronomy. Several medieval and
modern historians have written about the connection of Zīj al-Sindhind to Hindu astronomy.
“Three Indian astronomical texts are cited by the first generation of Arab scientists: Aryab-
hatiya [Āryabhat. īya], written by Aryabhata [Aryabhat.a I] in 499 [CE] and referred to by Arab
authors under the title al-arjabhar; Khandakhadyaka by Brahmgupta (598–668 CE), known in
Arabic under the title Zīj al-arkand; and Mahasidhanta [Mahāsiddhānta], written toward the
end of the seventh or at the beginning of the eighth century, which passed into Arabic under
the title Zīj al-Sindhind,” writes Régis Morelon.86 A multitude of Zījs were written in India
and Afghanistan first, and in Persia and Baghdad later. A typical Zīj covered information on
trigonometry; spherical astronomy; solar, lunar, and planetary mean motions; solar, lunar and
planetary latitudes, parallax, solar and lunar eclipses, and geographical coordinates of various
locations, particularly to locate qibla.87 “The Arabic text is lost and the work has been transmit-
ted through a Latin translation made in the twelfth century by Adelard of Bath from a revision
made in Andalusia by al-Majrīt.ī (d. 1007 CE).”88
Al-Khwārizmī even used metamorphosed Sanskrit terms in his astronomical calculations.
For example, in the rules for finding the sizes of the Sun, the Moon, and the Earth’s shadow,
al-Khwārizmī used the term elbuht, which comes from the Sanskrit word bhukti, where the shadows
on the Earth from the Sun and the Moon were observed at the same time daily, indicating that the
mathematical procedures were essentially taken from Hindu astronomy.89
S. ā‘id al-Andalusī (1029–1070 CE) writes that “a person originally from Hind came to
Caliph al-Mans.ūr in A.H. 156 [773 CE] and presented him with the arithmetic known as
Sindhind for calculating the motion of stars. It contains ta‘ādyal [equations] that give the posi-
tions of stars with an accuracy of one-fourth of a degree. It also contains examples of celestial
activities such as the eclipses and the rise of the zodiac and other information. . . . Al-Mans.ūr
ordered that the book be translated into Arabic so that it could be used by Arab astronomers as
the foundation for understanding celestial motions. Muhammad ibn Ibrahim al-Fazārī accepted
the charge and extracted from the book that astronomers called al-Sindhind. . . . This book was
used by astronomers until the time of Caliph al-Ma‘mūn, when it was abbreviated for him by
Abu Ja‘far Muh. ammad ibn Mūsā al-Khwārizmī, who extracted from it his famous tables, which
were commonly used in the Islamic world.”90
86Régis Morelon, General Survey of Arabic Astronomy, in the book by Roshdi Rashed, 1996, vol. 1, p. 8.
87King, in the book by Selin, 1997, p. 128; Mercier in Selin, 1997, p. 1057. Zij is a common term used in Arabic for tables
and qibla is the direction of Kaaba (Mecca) from your location.
88Régis Morelon, Eastern Arabic Astronomy Between the Eighth and the Eleventh Centuries in the book by Roshdi
Rashed, 1996, vol. 1, p. 21; see also Toomer, G. J., 1973, Dictionary of Scientific Biographies, vol. 7, p. 360.
89Neugebauer, 1962, p. 57; Goldstein, 1996.
90Salem and Kumar, 1991, p. 46–47. This tells us that Europeans were aware that al-Khwārizmī’s work was taken from
Hindu astronomers.
Caliph al-Mans.ūr was a ruler of Baghdad and his Abbasid dynasty was known for its
respect for knowledge. He established Bayt al-Hikma (House of Wisdom), which became a model
for other empires in Arabia and Europe. The House of Wisdom was a court or school where
scholars worked in history, jurisprudence, astronomy, mathematics, and medicine, etc. These
scholars were supported by Caliph al-Mans.ūr and, in return, they helped the Caliph in his
personal and kingdom affairs. It was a practice of the rulers of Abbasid dynasty to patronize
scholars from foreign lands. With time, Baghdad became a center of learning. Scholars from
the nearby regions visited Baghdad to acquire knowledge.
Abū Ma‘sher, Jafar ibn Moh. ammad ibn Amar al-Balkhī (787–886 CE), a Persian as-
tronomer who mostly lived in Baghdad, also mentioned Kanaka’s role in Baghdad. He labeled
Kanaka as the foremost astronomer among all the Indians of all times.91 We do not know much
about Kanaka. Most of the information about him has come from the manuscripts written later
in Arabia, the Mediterranean region and Europe. Apparently, Kanaka made his impact outside
India.
In Persia, under the Sasanids (226–651 CE), observational astronomy was practiced under
the influence of Indian and Greek astronomy. We know from al-Hashīmī (fl. ninth century) that
Shāh Anūshirwān compared the work of Āryabhat.a I’s Arkand with Ptolemy’s Almagest. He
found Āryabhat.a I’s work better than Ptolemy’s. Thus, the king asked his astronomers to compile
a Zīj on Āryabhat.a I’s system. This is how the “Royal tables” (Zīj al-Shāh) were compiled.92 Al-
Mans.ur, while deciding the auspicious time for the foundation of the capital Baghdad, asked
his astronomers to use a Pahlavi version of Zīj al-Shāh to calculate this time.93
Just as the Greenwich observatory in England became the standard location to define
the time and longitude of various locations in the world, al-Khwārizmī used Arin (Ujjain), the
Greenwich of the ancient and medieval worlds, as the central place of the Earth.94 This is an
important piece of information. Almost any point on the Earth can be chosen as a standard for
this. Al-Khwārizmī could have chosen Baghdad as the prime meridian, his place of residence.
However, perhaps due to the prevalent practice in Arabia and al-Khwārizmī’s dependence on
the Hindu astronomical tables, he preferred to choose Ujjain. It is the city that was also chosen
by Āryabhat.a I and Brahmgupta, from whom al-Khwārizmī derived his work, as zero meridian.
al-Khwārizmī defined one sidereal year to be equal to 365;15,30,22,30 days (in sexagesimal notation). This is exactly the same
value used by Brahmgupta.95 In Al-Khwārizmī’s book, the “era of flood” was the era of Kaliyuga
(February 17, 3102 BCE). Al-Khwārizmī’s elwazat, a procedure to calculate the mean positions
of the planets, was similar to the ahargan. a method of Hindu astronomy.96
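A minimal sketch converting the sexagesimal value quoted above into decimal days (the conversion routine is ours):

# Sexagesimal day fraction 365;15,30,22,30 -> decimal days
whole, digits = 365, (15, 30, 22, 30)
sidereal_year = whole + sum(d / 60 ** (i + 1) for i, d in enumerate(digits))
print(round(sidereal_year, 5))   # 365.25844 days, the sidereal-year length attributed to Brahmgupta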
91Pingree, 1968, p. 16.
92van der Waerden, The Heliocentric System in Greek, Persian, and Hindu Astronomy, in the book by King and Saliba,
1987; Régis Morelon, General Survey of Arabic Astronomy, in the book by Roshdi Rashed, 1996, vol. 1, p. 8. On a different
note, under Anūshirwān’s reign, chess was introduced from India, and the famous book, Kalilah and Dimnah was translated.
93David King, in the book by Selin, 1997, p. 126; F. Jamil Ragep, in the book by Selin, 1997, p. 395.
94Neugebauer, 1962, p. 10, 11.
95Neugebauer, 1962, p. 131.
96Sen, 1970.
4.4.2 CHINA
The influence of the Hindu thought on China can be judged by the fact that, between 67 CE
and 518 CE, in less than five centuries, some 1,432 works in 3,741 fasciculi were translated
from Sanskrit to Chinese and were cataloged in 20 disciplines.97 By 433 BCE, the Chinese as-
tronomers recorded a system of 28 hsiu (lunar mansions or Moon stations) marked by a promi-
nent star or constellation. The Moon traveled past and lodged in each of these mansions. This
system probably originated from the Hindu system of 28 naks. atra (star or constellation).98 The
Vedas provide a complete list of these naks. atra. One difference between these two systems is that
whereas the Hindus named their naks. atra after their gods, the Chinese honored their emperors,
queens, princes, and even bureaucrats by assigning their names to stars.
The Navagraha-siddhānta (Nine Planet Rule), a popular Indian astronomy book, was trans-
lated into Chinese as Kaiyuan Zhanjing in 718 CE by Indian astronomer Gautama Siddhārtha
(Qutan Xida, his Chinese name). This book is still preserved in a collection from the Tang pe-
riod. It contains the Indian calendar, known as navagr. aha (nine houses; used for the five planets,
the Sun, the Moon, and the two nodal points, Rāhu and Ketu). This calendar was known as Ji-
uzhi li (Nine Upholder Calendar) in China.99 This calendar is based on Varāhamihira’s Pañca
Siddhānta that was written during the sixth century. It contains the astronomical tables along
with the methods to calculate an eclipse. It has a table of sines and the Hindu numeral zero
in the form of a dot (bindu).100 The Chinese, in following the Hindu tradition, also used nine
planets in their astronomical work that included the Sun, the Moon, Mercury, Venus, Saturn,
Jupiter, Mars, and the ascending and descending nodes known as Rāhu and Ketu in the Hindu
work.101 In May 1977, a tomb of an Indian astronomer from the Gautama family was excavated
at Chang-an county in modern Shaanxi province. This tomb had manuscripts and inscriptions
that have provided valuable information about the Gautama clan. As a result of this excavation, we
now know that Gautama Siddhārtha was not the only famous person in the Gautama clan. Gau-
tama Zhuan also played an important role in the Bureau of Astronomy and at the Tang court.
He married a Chinese woman, and the following generations were assimilated into the
Chinese culture.102
During the Tang Dynasty (618–907 CE), three schools of Indian astronomical systems
were based in China to guide the emperors. These schools are: Siddhārtha school, Kumāra
school, and Kaśyapa school.103 At least two experts from the Siddhārtha school served as Di-
rector of the astronomical bureau during the Tang dynasty. It is important to mention that these
directors were crucial for emperors in their day-to-day activities as they had to find auspicious
97Gupta, 1981; Mukherjee, 1928, p. 32, taken from Gupta, 1981.
98Sarma, 2000.
99Yoke, 1985, p. 162
100Yoke, 1985, p. 162.
101Bagchi, 1950, p. 169 and Yoke in Selin, 1997, p. 78.
102Sen, Tansen, 1995.
103Yoke in Selin, 1997, p. 110.
times for rituals and government actions, consult the director on astrological matters, and even
take advice on dealing with people. This was perhaps the most powerful position after the emperor
himself. This position was occupied by Hindus for several generations during the Tang dynasty.
This fact tells a lot about the status of Hindu astronomy and Hindus in China.
Amoghavajra (Chinese name, Bukong), a brahmin from India, arrived in China in 719
at the age of 15 with his uncle. In 741, he went back to India and again returned to China
in 746. He was given the title of Zhizang by the Tang Emperor Xuanzong (713–756). In 759
CE, Amoghavajra wrote Xiu yao jing (Lunar mansion and planet sutra). A commentary to this
work by Yang Jingfeng mentioned the three Indian schools of astronomy prevalent in China
and occupants of powerful Bureau of Astronomy: “Those who wish to know the positions of the
five planets adopt Indian calendrical methods. One can thus predict what hsiu (a planet will be
traversing). So we have the three clans of Indian calendar experts, Chia yeh ( Jia ye Kasyapa),
Chhu than (Qutan Gautama) and Chu mo lo ( Jiu mo lo Kumara), all of them hold office at
the Bureau of Astronomy. But now most use is made of the calendrical methods of Master
Chhu than chuan (Qutan zhuan) together with his Great Art.”104 Another person from this
clan, Gautama Rahula, was the director of astronomy from 627 to 649 and compiled two
calendars, viz. Jingweili and Guangzaili. As the family settled down in China and married Chinese,
later generations were no longer labeled as Indian in Chinese records.
Rāhu (Chiao chhu) and Ketu (chiao chung) were frequently mentioned in the Chinese texts
written during and after the Tang Dynasty (618–907 CE).105 The book, Qi Yao Rang Zai Jue
(Formulae for Warding off Calamities According to the Seven Luminaries), was compiled by an In-
dian Buddhist monk, Jin Ju Zha, in China. This book has detailed ephemerides of Rāhu and
Ketu. According to this book: “The luminary E Luo Shi Rāhu is also known by the following
names: The Yellow Standard (Huang Fan), The Head of the God of Eclipse (Shi Shen Tou), Su-
perposition (Fu), and The Head of the Sun (Tai Yang Shou). It always moves invisibly and is
never seen. When it meets the Sun or the Moon, an eclipse occurs; if the meeting is at a new
moon or a full moon, then an eclipse necessarily occurs; when it is opposite to the Sun or the
Moon, there will also be an eclipse.”106 In summary, the text supports the following points:
1. Rāhu and Ketu are invisible luminaries.
2. The motion of Rāhu and Ketu have a bearing on the occurrence of eclipses.
3. The theory depicted in the tales of Rāhu and Ketu is different than the ancient theory of
eclipses in China.
4. Rāhu and Ketu execute a uniform motion against the background of fixed stars and their
speed does not vary.
104Deshpande, 2015; Sen, 1995. A similar account is also provided by Needham and Ling, 1959, vol. 3, p. 202 and Yoke,
in Selin, 1997, p. 78.
105Needham and Ling, 1959, vol. 3, p. 416.
106Wei-xing, 1995.
4.4.3 EUROPE
As mentioned earlier, during the eleventh century, the European scholars in Spain knew that al-
Khwārizmī’s work in the Middle East was an extension of Hindu astronomy. S. ā‘id al-Andalusī,
a prominent scholar from Spain, provided ample information about the work of Āryabhat.a I,
Brahmgupta, etc. He believed that the Hindus’ work on astronomy formed the basis for Arab
astronomy,107 as mentioned in the previous subsection. S. ā‘id al-Andalusī was not the only person
stating this fact. Rabbi Abraham Ibn Ezra (1096–1167 CE) provided a similar story of this
transfer of knowledge from India to Arabia.108 Ibn Ezra was based in Spain and wrote about
Hindu mathematician and astronomer, Kanaka, who shared his knowledge and allowed the
Arabs to know about Hindu astronomy: “The scholar, whose name was Kanaka, was brought to
the king, and he taught the Arabs the basis of numbers, i.e., the nine numerals. Then, from this
scholar with the Jew as Arabic-Indian interpreter, a scholar named Jacob b. Sharah translated a
book containing the tables of seven planets [five planets, the Sun, and the Moon]. . .”109
Abū Ishaq Ibrāhim ibn Yahya al-Naqqash, better known as al-Zarqalī (ca.1029–1087
CE), a Spanish astronomer who worked under S. ā‘id al-Andalusī, compiled the famous Toledan
Tables that are based on the Sindhind system.110 “One of the first Latin authors to use tables of
Arabic origin was Raymond of Marseilles. In 1141 [CE], he composed a work on the motions
of the planets, consisting of tables preceded by canons and an introduction in which he claims
to draw on al-Zarqāllu [Zarqalī].”111
Prior to al-Zarqalī, around 960 CE, ‘Arīb bin Sa‘īd and Mozarab bishop Rabī b. Zayd
compiled the Calendar of Córdoba for al-H. akam II after his accession to the caliphate. The
calendar provided the dates when the Sun enters in different zodiacs. These dates are provided
according to the Sindhind and used the mathematics of Brahmgupta.112
“The Toledan tables are a composite collection, including the parts taken from the tables
of al-Zarqāllu [Zarqalī] alongside extracts from al-Khwārizmī (notably the planetary latitudes),
elements from Al-Battānī (in particular, the tables of planetary equations), and yet other parts
derived from the Almagest or the Handy Tables of Ptolemy.”113 The Toledan tables were translated
by Gerard of Cremona114 and played an important role in the growth of European astronomy.
Al-Zarqalī is not the only person in Europe who studied and wrote astronomy books
on the Hindu system of Sindhind. Al-Majrīt.ī (d. 1007 CE) also modified the Sindhind of al-
107Salem and Kumar, 1991
108Goldstein, 1967, p. 1478.
109Goldstein, 1996.
110Sarton, 1927, vol. I, p. 759.
111Henri Hugonnard-Roche, Influence of Arabic Astronomy in the Medieval West, in the book by Roshdi Rashed,1996,
vol. 1, p. 287; also see, Toomer, 1968.
112Juan Vernet and Julio Samsó, Development of Arabic Science in Andalusia, in the book by Roshdi Rashed,1996, vol. 1,
p. 250–251.
113Henri Hugonnard-Roche, Influence of Arabic Astronomy in the Medieval West, in the book by Roshdi Rashed, 1996,
vol. 1, p. 289.
114Henri Hugonnard-Roche, Influence of Arabic Astronomy in the Medieval West, in the book by Roshdi Rashed, 1996,
vol. 1, p. 292.
Khwārizmī. A student of al-Majrīt.ī, Ibn al-Samh. (979–1035 CE), composed a zij that was
based on the Sindhind system.115 Another student of al-Majrīt.ī, Ibn al-Saffār, also authored
a brief astronomical table that was based on the Sindhind system.116 The third scholar from
Spain, Ibn al-Ādāmi also compiled the astronomical tables known as Kitab Naz. m al-‘Iqd (Book
on the Organization of the Necklace) that was completed after his death by his student al-Qāssim
bin Moh. ammad ibn Hashim al Madā’inī, better known as al-’Alawī, in 950 CE. “This book
contains all that was known about astronomy and the calculation of the motions of the stars
according to the system of the Sindhind, including certain aspects of the trepidant motions of
the celestial bodies, which were never mentioned before,” writes S. ā‘id al-Andalusī.117
It is the Latin version of al-Khwārizmī’s text that was used by Adelard of Bath, a British
scientist and Arabist of the early twelfth century, to teach Henry Plantagenet, the future King
Henry II of England.118 Thus, the work of the Brahmins on the banks of Ganges became the
subject matter of learning to the Royalty in England.
Alfonso X, King of Castile119 (1252–1284 CE), gathered a group of Muslim, Jewish, and
Christian scholars in the tradition of Bayt al Hikma or House of Wisdom of Baghdad. These schol-
ars translated the earlier works in Arabic that were based on Hindu astronomy and compiled
them into a single book in the Castilian language, called the Alfonsine Tables. This book reached
Paris in the early 14th century and was translated into Latin. Soon the book spread throughout
Europe and was used as a computing tool by European astronomers.120 Thus, medieval astronomy in
Europe was a derived and improved mixture of Hindu, Persian, and Arabic contributions. It is
established that a bound volume of the Alfonsine Tables was owned by Copernicus and he did
copy planetary latitudes from these tables.121 The extent to which Copernicus was indebted to
Hindu science is a matter of interpretation and debate. However, there is no dispute that he did
use Hindu numerals and mathematics in his calculations. His dependence on Hindu astronomy
is an open question that only future scholarship will settle.
Johannes Kepler gave his table of parallax of the Moon which is essentially identical
with the theory of parallax given in Khandakhadyaka of Brahmgupta.122 The theory of Brah-
mgupta was proposed about one millennium before Kepler. A possible connection could be
al-Khwārizmī’s book on Hindu astronomy, Zīj al-Sindhind. This book was quite popular in Europe
when Kepler was learning astronomy in his youth.
115Salem and Kumar, 1991, p. 64.
116Salem and Kumar, 1991, p. 65.
117Salem and Kumar, 1991, p. 53.
118Burnett, 1997, p. 31.
119Castile is presently a part of Spain. As a side note, soaps made from olive oil and sodium hydroxide or any hard soap
from fat or oil are called Castile soaps. It is the legacy of this region.
120Chabás, 2002; King and Saliba in the book by Gingerich, 1993.
121Swerdlow and Neugebauer, 1984, p. 4 and Chabás, 2002.
122Neugebauer, 1962, p. 124. This is the conclusion of Neugebauer (1899–1990), a noted historian of science, who was
trained in Gröningen, Germany, and taught at Brown University, USA.
CHAPTER 5
Physics
Physics deals with matter and energy and their interactions. Measurements are central to the
growth of physics; length (space), time, and mass are the three most important physical
quantities, called the fundamental quantities. Most other physical quantities are generally expressed
in terms of mass, length, and time. For example, speed is measured in miles per hour (or kilometers
per hour) and involves a measurement of space (distance) and time. This means that a car moving
at 65 miles/hour covers 65 miles (one measurement) in one hour (second measurement). Similarly,
force is measured in terms of mass, length, and inverse-square of time. Therefore, for convenience,
most civilizations defined standards for these fundamental quantities. The ancient Hindus also
methodically and carefully defined these standards.
5.1 SPACE (ĀKĀŚA)
Space is a three dimensional matrix into which all objects are situated and move without produc-
ing any interaction between the object and the space. In the classical sense, space allows a physical
ordering of objects without any reference to time. Objects appear to be near or distant due to
this physical ordering. In the Newtonian (classical) world, where objects move with speeds much
smaller in comparison to the speed of light, space and time are independent of each other, and
are considered as separate fundamental quantities. In the relativistic world, where objects move
with a velocity that is comparable to the velocity of light (3 × 10⁸ m/s), space and time do not
have their independent status; they are integrated and form a new space-time (spatio-temporal)
reality. In the present context, only the classical picture of space and time as two independent
realities is considered.
Ākāśa is one of the terms used to describe “space” in the Sanskrit language. According to
the Chāndogya-Upanis. ad, space was the first entity in the creation of the universe. To a question,
“To what does the world go back?,” the Chāndogya-Upanis. ad answers: “To space, all things are
created from space and they dissolve into space. Space alone is greater than any manifestation;
space is the final goal.”1 The Chāndogya-Upanis. ad identified space as an entity that contains the
Sun, the Moon, and other material objects: “In Space are both the Sun and the Moon, light-
ening, the stars, and fire. Through space one calls out; through Space one hears; through Space
one answers. In Space one enjoys himself. In space one is born. Reverence Space.”2 Thus, even
the heavenly objects were a part of something larger and more basic than them; this was space
1Chāndogya-Upanis. ad, 1: 9: 1.
2Chāndogya-Upanis. ad, 7: 13: 1–2.
that encompassed humans, sound, the Sun, the Moon, and the stars. It is space that provides
individuality to objects as they are recognized based on the space they occupy: “Space is the
accomplisher of name and form (individuality).”3
The Vaiśes. ika-Sūtra of Kan. āda explained the attribute of space: “That which gives rise to
such cognition and usage as, this is remote from this, is the mark of space.”4 Thus, distance is
an attribute of space. Kan. āda considered the various directions (east, west, north, south) as the
attributes of space.5
A standard of length is imperative for any measurement of space. The Mārkan. daya-purān. a
provides the following standards of length: “A minute atom, a para-suks. ma, the mote in a sun-
beam, the dust of the earth, and the point of a hair, and a young louse, and a louse, and the body
of a barley-corn; men say each of those things is eight times the size of the preceding thing.
Eight barley-corns equal an “a ˙ngula” (finger-breadth), six finger-breadths are a “pada” (step), and
twice that is known as bitasti; and two spans make a cubit measured with the fingers closed in at
the root of the thumb; four cubits make a bow, a “dan. d. a” (stick), and equal to two “nadikayaga;”
two thousand bows make a “gavyūti;” and four times that are declared by the wise to be a “yojna;”
this is the utmost measure for the purpose of calculation.”6
In Sanskrit, “pāda” literally means a human step. If we consider “pāda” to be equal to
one foot and calculate the size of an atom using the Mārkan. daya-purān. a, its value comes out to be
2.9 × 10⁻⁹ meter, which is strikingly close to the actual value (of the order of 10⁻¹⁰ m).
Kaut.ilaya, the teacher and Prime Minister of Chandragupta Maurya (Candragupta,
reigned 322–298 BCE), defines several standards of length in his book, Arthaśāstra7 that are
compiled in Table 5.1.
Table 5.1: A Standard of Length from Kaut.ilaya’s Arthaśāstra
8 paramāṇavah (atoms) = 1 particle thrown off by the wheel of a chariot
8 particles = 1 likṣā (egg of louse or young louse)
8 likṣā = 1 yūka (louse of medium size)
8 yūka = 1 yava (barley of medium size)
8 yava = 1 aṅgula (finger breadth)
4 aṅgula = 1 dhanurgṛaha
8 aṅgula = 1 dhanurmuṣaṭi
12 aṅgula = 1 vitasti
14 aṅgula = 1 … or pāda
2 vitasti = 1 aratni or prajāpatya hasta (arm-length)
42 aṅgula = 1 kiṣku (forearm)
54 aṅgula = 1 hasta, used in measuring timber forests
84 aṅgula = 1 vyama
4 aratini = 1 rajju (rope)
10 daṇḍa = 1 rajju
2 rajju = 1 a…
3 rajju = 1 nivartana
66.5 nivartana = 1 goruta
4 goruta = 1 yojna
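A minimal sketch of the atom-size estimate above, assuming pāda = 1 foot (0.3048 m) and reading the Mārkan. daya-purān. a chain as seven successive factors of eight from the atom up to the barley-corn, with eight barley-corns to the aṅgula and six aṅgula to the pāda:

PADA_IN_M = 0.3048                    # assumption: one pada taken as one foot
barley_in_atoms = 8 ** 7              # atom -> ... -> barley-corn: seven factors of eight
angula_in_atoms = 8 * barley_in_atoms # eight barley-corns per angula
pada_in_atoms = 6 * angula_in_atoms   # six angula per pada
print(f"{PADA_IN_M / pada_in_atoms:.1e} m")   # about 3.0e-09 m, the same order as the 2.9e-09 m above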
5.2 TIME
It is easy to measure time. Most people carry a watch and some of these watches can be bought
for just a few dollars. To define the nature of time, however, is a much more difficult task. Despite
consistent efforts for at least two millennia, it is still an open question. In other words, it is
far easier to set a standard for time than to give an abstract definition of time that
can exemplify its nature.8
Time and action (event) are related to each other. Events are the basis of time. Time
deals with the order and the duration of events. Since events are discrete in nature, this makes
time discrete in nature. If there is a sequence of events at a definite time interval, careful mea-
surements of such a sequence constitute a clock. In simple words, time divides two events from one
another. What happens when events move faster and the time between the two events becomes
3Chāndogya-Upanis. ad, 7: 14: 1.
4Vaiśes. ika-Sūtra, 2: 2: 10.
5Vaiśes. ika-Sūtra, 2: 2: 11–13. For this book, I have used a Hindi edition by Pundit Shriram Sharma Ā ¯carya.
6Mārkan. deya-purān. a, 49: 37.
7Arthaśāstra, Chapters 106–107; Book 2, Chapter 20; Shamasastry, 1960, p. 117.
8For the history of time and clocks, read Balslev, 1983; Panikkar, 1984; and Prasad, 1992.
smaller and smaller? Do we have continuous events and, therefore, continuous time? Philoso-
phers have considered the question as to whether time is continuous or discrete for ages, without
coming to a definite conclusion.
Any temporal existence is evolved with time and vice-versa. It is time that is the origin of
the first cause, as the Atharvaveda tells us: “Time created living things and first of all, Prajāpati.”9
In Hindu scriptures, “Prajāpati” is the first God created who, in turn, created the present uni-
verse. In this view, it is time that unfolds the spatio-temporal universe and divides the past from
the present and the future. “In time, texts are produced—what is, and what is yet to come.”10
“Time is the supreme power in the universe.”11
The Maitreyī-Upanis. ad relates time to the mūrta (existent) world. “There are two forms
of Brahmā: Time and Timeless. That which is before the Sun is Timeless and that which begins
9Atharvaveda, 19: 53: 10.
10Atharvaveda, 19: 54: 3.
11Atharvaveda, 19: 54: 6.
with the Sun is Time.”12 This quotation states that the Sun was created with the creation of
universe and that time existed only after the creation. Prior to the creation, time did not exist,
in the absence of any event. “[A]nything before the Sun” defines the period before the physical
manifestation of the world, the situation before the Big Bang, when matter was in noumenal
form, and no events were possible. Therefore, time could not have existed in the absence of
events. Only after the creation, matter became accessible to experience.
The Vaiśes. ika-Sūtra considered time as an entity that exists only in the manifested world
(non-eternal), like in the Maitreyī-Upanis. ad mentioned above. “The name Time is applicable
to a cause, in as much as it does not exist in eternal substances and exists in noneternal sub-
stances.”13 Therefore, time can only be noticed in a dynamic world (temporal world) where
events are happening and serve as distinguishing factors. In a void, after this temporal world
dissolves into “darkness” or “non-existence,” time cannot exist.14
The Vis. n. udharmotra-Purān. a explains the subtlety of time and its quantized nature: “if one
pierces 1,000 lotus petals (put on top of each other) with a needle, the foolish man thinks that
they are pierced simultaneously, but in reality they were pierced one after the other and the subtle
difference between the instants in which the successive petals have been pierced represents the
subtlety of time.”15
Āryabhat.a I explains the nature of time and defines a method to measure it: “Time, which
has no beginning and no end, is measured by (the movements of ) the planets and the asterisms
on the sphere.”16 To measure time, the apparent motion of the Sun defined the events; sunrise
and sunset were two easily observed events. A scale based on the average-solar day was estab-
lished by the ancient Hindus. The duration of an average day was divided into several segments
that became a standard of time.
To measure the magnitude of time, a simple method is defined in the Srimad-
Bhāgvatam.17 (Table 5.2.) The duration of “nād. ikā” is measured as follows: “Take a copper vessel
measuring six “pala;” make a hole into the copper vessel by a pin made of gold of which the
length shall be four fingers and measure four “ma ˙sa” (unit of mass). Put the vessel on water.
The time taken to make the vessel filled with the water and sink constitutes one nād. ikā.” If we
consider the average length of a day to be 12 hours then the smallest unit of time, “trasaren. u,”
comes out to be about 1.7 × 10⁻⁴ seconds.18
Within the accuracy of a solar clock, the method provided in the Srimad-Bhāgvatam
for the measurement of time is fairly good. All factors that can influence a measurement are
provided—the size of the copper vessel, the size of the hole is defined from the length of the
gold pin and its weight that defines the diameter of the pin, and the type of liquid.
12Maitreyī-Upanis. ad, 6: 15.
13Vaiśes. ika-Sūtra, 2: 2: 9.
14see R. gveda, 10: 129: 1–4; see Chapter 4.
15Vis. n. udharmotra-Purān. a, 1: 72: 4–6.
16Āryabhat. īya, Kalākriya, 11.
17Srimad-Bhāgvatam, 3: 11: 6–11.
18Srimad-Bhāgvatam, 3: 11: 6–11.
Table 5.2: A Standard of Time from Srimad-Bhāgvatam
3 trasareṇu = 1 truti
100 truti = 1 bedha
3 bedha = 1 lāva
3 lava = 1 nimeṣa
3 nimeṣa = 1 kṣaṇa
5 kṣaṇa = 1 kāṣṭha
15 kāṣṭha = 1 laghu
15 laghu = 1 nāḍikā or daṇḍa
2 nāḍikā = 1 muhūrta
6 or 7 daṇḍa = 1 prahara (one-fourth of a day or night)
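A minimal check of the trasaren. u figure quoted earlier, assuming a 12-hour day and taking 1 prahara = 7 dan. d. a (nāḍikā) from Table 5.2, i.e., 28 nāḍikā in the day; the conversion itself is ours:

DAY_SECONDS = 12 * 3600                                   # 12-hour day, as assumed in the text
trasarenu_per_nadika = 3 * 100 * 3 * 3 * 3 * 5 * 15 * 15  # Table 5.2 chain up to one nadika
nadika_per_day = 4 * 7                                    # 4 prahara per day, 7 nadika per prahara
print(f"{DAY_SECONDS / (trasarenu_per_nadika * nadika_per_day):.1e} s")   # about 1.7e-04 s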
Table 5.3 defines a standard of time provided in Kaut.ilaya’s Arthaśāstra.19 One nālika is
defined as the time during which one adhaka (a measurement of mass) of water passes out of a
pot through an aperture of the same diameter as that of a wire of 4 a ˙ngula and made of 4 ma ˙sa
(measurement of mass) of gold. The length and mass of gold defines the diameter of the wire.
Here, trut.i comes out to be 0.06 s, if we assume 1 day to be 12 hours.
Table 5.3: A Standard of Time from Kaut.ilaya’s Arthaśāstra
2 truṭi = 1 lava
2 lava = 1 nimeṣa
5 nimeṣa = 1 kāṣṭha
30 kāṣṭhā = 1 kalā
40 kalā = 1 nālika
2 nālika = 1 muhūrta
15 muhūrta = 1 day or night
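The same check for the units of Table 5.3, again with a 12-hour day of 15 muhūrta as the table states:

DAY_SECONDS = 12 * 3600
# 2 truti = 1 lava, 2 lava = 1 nimesha, 5 nimesha = 1 kashtha,
# 30 kashtha = 1 kala, 40 kala = 1 nalika, 2 nalika = 1 muhurta, 15 muhurta = 1 day
truti_per_day = 2 * 2 * 5 * 30 * 40 * 2 * 15
print(round(DAY_SECONDS / truti_per_day, 2))   # 0.06 s, as stated above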
Al-Bīrūnī explained that the smallest unit of time used in India was an. u, equal to
(24 × 60 × 60)/(88,473,600) = 9.766 × 10⁻⁴ seconds.20 He did not see a use of such a small
unit for time, just like he did not like the Hindus to define large numbers. “The Hindus are
foolishly painstaking in inventing the most minute segment of time, but their efforts have not
resulted in a universally adopted and uniform system.”21
19Arthaśāstra, 107; Shamasastry, 1960, p. 119.
20Sachau, 1964, vol. 1, p. 337.
21Sachau, 1964, vol. 1, p. 334.
With nano-seconds (10⁻⁹ s) or femto-seconds (10⁻¹⁵ s) prevalent in scientific works today, it simply shows that the Hindus were ahead of their time.
5.3 MATTER AND MASS
In order to define and distinguish matter, Kan. āda classifies matter (padārtha) into six categories.22
1. Substance (dravya)
2. Quality (gun. a)
3. Action (karma)
4. Generality (sāmānya)
5. Individuality (viśes. a)
6. Inherence (samavāya)
Different materials have different attributes. For example, identical sizes of iron
and cotton would have different weights. Apples can have differences in their colors and yet be
called apples. It is the assembly of attributes that defines matter. Table 5.4 defines the attributes
of matter as defined by Kan. āda.
Table 5.4: Attributes of matter from Vaiśes. ika-Sūtra (1: 1: 6)
1. color (rūpa)
2. taste (rasa)
3. smell (gandha)
4. touch (sparśa)
5. number (saṅkhyā)
6. measure (parimāṇa)
7. individuality (pṛthaktā)
8. conjunction (saṁyoga)
9. disjunction (vibhāga)
10. priority (paratva)
11. posteriority (aparatva)
12. intellect (buddhi)
13. pleasure (sukha)
14. pain (duḥkha)
15. desire (icchā)
16. aversion (dveṣa)
17. volition (prayatnaḥ)
To explain what he meant by different attributes of matters, Kan. āda provides examples:
Earth has the attributes of color, taste, and touch;23 water has the attributes of color, taste, touch,
fluidity, and viscidity;24 fire with color, and touch;25 air has the attribute of touch;26 while space
22Vaiśes. ika-Sūtra, 1: 1: 4.
23Vaiśes. ika-Sūtra, 2: 1: 1.
24Vaiśes. ika-Sūtra, 2: 1: 2.
25Vaiśes. ika-Sūtra, 2; 1: 3.
26Vaiśes. ika-Sūtra, 2; 1: 4.
has no such properties.27 While analyzing air, Kan. āda concludes that “air is a substance since it
has action and attributes.”28 “Air is matter, but its non-perception, in spite of being substance, is
due to the non-existence of color in it.”29 With the non-perception of air, Kan. āda defines reality
beyond appearance. Not all that exists in the world is visible to human eyes. In his attempt to
clarify a difference between space and air, Kan. āda used the attribute of touch which is absent in
space: “Air has the property of touch while space does not have such a property.”30 To support
that air is matter, Kan. āda argued that a breeze can easily move the leaves of grass.31 Thus, air,
though invisible, can exert force and move things.
Kaut.ilaya’s Arthaśāstra provided standards of mass and suggested that the standards should
be made of iron or stones, available in Magdha and Mekala, both parts of Central India. A
replacement could be objects that do not change their physical condition. These standards
should not contract or expand when wet, and a change of temperature should not affect them.32
For a measurement of mass, Kaut.ilaya defined balances with different lever arms and scale-
pans for a range of masses:33 With the fulcrum in the middle, two identical pans hang on both
sides of the fulcrum at equal distances. Standard appropriate masses are placed on one side and
the unknown mass on the other so that the beam is balanced when the device is suspended by the
fulcrum. Kaut.ilaya provides descriptions of sixteen types of such balances. He suggested different
balances for different accuracy: “public-balance” (vyāvahārikā), “servant-balance” (bhājini) and
“harem-balance” (antah. pura-bhājini).34 Superintendents were assigned to stamp mass-standards
for public use to avoid cheating, and money was charged for such services.35 Traders were not
allowed to use their own standards, and were fined if found guilty of doing so.36
Much before Kaut.ilaya, beam balances with two identical pans of metallic copper or
bronze hanging from each end of the beam and a fulcrum in the middle have been found in the
Harappa and Mohenjodaro archaeological excavations. The beams are thicker in the middle
around the fulcrum and tapered at the ends where the pans are suspended. Most pans have three
symmetrical holes for suspension using strings. The diameter of these pans ranges from 5.50 to 8.25 cm.
From Lothal, Gujarat, even terracotta pans have been excavated. Some micro-balances with standard
weights ranging from 0.89 to 1.2 g were also excavated.37
27Vaiśes. ika-Sūtra, 2; 1: 5.
28Vaiśes. ika-Sūtra, 2: 1: 12.
29Vaiśes. ika-Sūtra, 4: 1: 7.
30Vaiśes. ika-Sūtra, 2: 1: 4, 5.
31Vaiśes. ika-Sūtra, 5: 1: 14.
32Arthaśāstra, 103; Book 2, Chapter 19; Shamashastri, 1960, p. 113.
33Arthaśāstra, 103; Book 2, Chapter 19; Shamasastri, 1960, p. 114.
34Arthaśāstra, 104; Book 2, Chapter 19; Shamasastri, 1960, p. 114.
35Arthaśāstra, 105; Book 2, Chapter 19; Shamasastri, 1960, p. 116.
36Arthaśāstra, 105; Book 2, Chapter 19; Shamasastri, 1960, p. 116.
37Sharma and Bhardwaj, 1989.
5.3.1 CONSERVATION OF MATTER
Conservation of matter is a fundamental law in physics that was introduced by John Dalton
(1766–1844 CE) in 1803. Einstein’s mass-energy relation added energy to the conservation of
mass since mass can be transformed into energy and vice-versa. Mass and energy are two dif-
ferent manifestations of the same reality. Therefore, the law of conservation of matter is now
transformed into a broader law of conservation of energy to incorporate mass-energy transfor-
mations.
Conservation of matter, in the ancient Hindu literature, is an extension of the theory of
reincarnation. The ancient Hindus noted that life is a continual process and birth and death
are just two different stages of life. In an extension of birth and death to the inorganic world,
they believed that matter, like the soul, cannot be destroyed or created; only a transformation
from one form to another is possible. As discussed in Chapter 4, the possibility of creation from
“nothing” is rejected by the ancient Hindus.
The Vaiśes. ika-Sūtra tells us that a substance is produced from other substances.38 Mixing
or separation of different substances creates new substances and it is impossible to create matter
from “nothing.”39 In his attempt to explain the properties of air, Kan. āda suggests the law of
conservation of matter that is strikingly similar to the definition accepted by scientists about
a hundred years ago. “Matter is conserved. Since air and fire are made up of atoms, these are also
conserved.”40 Kan. āda also makes an explicit statement that “molecules and materials are eternal,”
implying that they are conserved.41 The Sanskrit term nitya is used to define that atoms and
matter are conserved. The most common translation of nitya is “eternal,” defining something
that will exist forever. This essentially means that atoms are eternal or conserved.
5.4 ATOM (PARAMĀN. U )
Karl Marx (1818–1883), the father of communism, wrote his doctoral thesis in 1841 on the differences between the philosophies of Democritus and Epicurus. While studying the life of Democritus, Marx suggested that Democritus came in contact with Indian gymnosophists and that the concept of the atom was already present in India during that period. Marx was not the first person to indicate this. The connection of the Greek philosophers with India has been accepted by some scholars throughout antiquity and the medieval world. This is also the view reported by Diogenes Laërtius (Third Century CE; IX: 35), whom Marx studied extensively for his work. Marx writes,
“Demetrius in the Homonymois [Men of the Same Name] and Antisthenes in the Diadochais [Suc-
cessions of Philosophers] report that he [Democritus] traveled to Egypt to the priests in order to
38Vaiśes. ika-Sūtra, 1: 1: 23 and 27.
39Vaiśes. ika-Sūtra, 1: 1: 25 and 28.
40Vaiśes. ika-Sūtra, 7: 1: 4; 7: 1: 19–20.
41Vaiśes. ika-Sūtra, 7: 1: 8.
learn geometry, and to the Chaldeans in Persia, and that he reached the Red Sea. Some maintain
that he also met the gymnosophists in India . . .”42
Kan. āda’s atomic theory was formulated sometime between the tenth and the sixth centuries BCE.43 Kan. āda was from Pabhosa, Allahabad, and was perhaps the first atomist in
India.44 His theory was quite popular in ancient and medieval India. The Vāyu-Purān. a, Padma-
Purān. a, Mahābhārata, and Srimad-Bhāgvatam list Kan. āda’s work and bear infallible testimony
to the antiquity and popularity of the Vaiśes. ika-Sūtra.
In Kan. āda’s theory, atoms can combine into molecules (an. u), and the non-perception of atoms disappears when they amass together and become bigger.45 This indirectly defines a limit on the resolving power of the human eye. The smallest state of matter is the paramān. u (atom), which
cannot be seen. These atoms aggregate to form the world we see. The shape of atoms is spher-
ical.46 Two atoms combine to form a dyad. Three dyads of the same type form a triad which is
large enough in size to be visible, ‘as motes in a sunbeam.’ Lucretius, who wrote about the con-
nection of Democritus and India, used a similar analogy to show how invisible objects become
visible.47 The above statement of combination of atoms tells us that “atoms possess an innate
propensity to aggregate. This idea is thus a forerunner of the modern concept of van der Waal
forces," suggests Prof. S. K. Bose of the University of Notre Dame in Indiana, USA.48 In his view, the Vaiśes. ika-Sūtra "recognizes the existence of properties of the composite body that result from the manner in which the atoms are put together and organized." As an example, one can consider the difference between water as a liquid and as a solid: both are made of the same atoms, yet the bulk materials have distinct properties.
Attributes such as the color, taste, smell, and touch of earth, water, fire, and air are non-eternal on account of their substrata.49 These attributes disappear with the disappearance of their substance. Some attributes of the substrata are prone to change through combinative (chemical action; pākajah) causes under the action of heat.50 Thus, odor, flavor, and touch are attributes of atoms in the Vaiśes. ika-Sūtra, whereas Democritus denied such connections. The Greek atoms
possessed only minimal properties: size, shape, and weight.51
The invisibility of an object, as we divide it into small parts, can simply be observed from the experience of wet clothes hung in the air. The water in the clothes can be seen and experienced with
42Marx, 1841, The Difference Between the Democritean and Epicurean Philosophy of Nature, 1841, Doctoral Thesis,
Part 1, Chapter 3, online version.
43Sinha, 1911, p. VI; Sarsvati, 1986, p. 302; Subbarayappa, 1967. A recent paper by Karin Preisendanz assigns the second
century CE to the commentary of Candrananda on Vaiśes. ika-Sūtra.
44Sarasvati, 1986, p. 295.
45Vaiśes. ika-Sūtra, 4: 1: 6; 7: 1: 9–11.
46Vaiśes. ika-Sūtra, 7: 1: 20. The statement of Kan. āda is pretty clear. On the other hand, it was not at all possible for the
ancient scholars to see an atom. How can you define a shape for something that you cannot even see? This statement is
presented here to promote further scholarship on this issue.
47Bose, 2015. This article provides an excellent review of atomism in different cultures.
48Bose, 2015.
49Vaiśes. ika-Sūtra, 7: 1: 2.
50Vaiśes. ika-Sūtra, 7: 1: 6.
51Horne, 1960.
touch. As the sunlight and air dry the cloth, the moisture escapes from the cloth and becomes
invisible. The same water drops that were visible to the human eye disappear with extended
exposure to air and sunlight. Thus, the smallness of an object causes this invisibility, as suggested by Kan. āda. Continual division of matter makes it invisible, but does not make it vanish from existence. It is for this reason that "an absolute non-existence of all things is not possible because an
atom remains in the end.”52 The Vāyu-Purān. a suggested a similar ultimate division of matter, “A
paramān. u (atom) is very subtle. It cannot be seen by the eye. It can be imagined. What cannot
be (ultimately) split in the world should be known as atom.”53
The Srimad-Bhāgvatam suggested a theory of the atom that is similar to Kan. āda’s: “That
which is the ultimate division of matter, that which has not gone through any change, that which
is separated from others, and that which helps the perception of objects, that which remains
after all is gone, all those go under the name of paramān. u (atom)."54 "Two atoms make one an. u
(molecule) and three an. u make one trasaren. u. This trasaren. u is discovered in the line of solar
light that enters into a room through a window and due to its extreme lightness such trasaren. us
courseth the way to sky."55
Cyril Bailey (1871–1957), a British philologist, analyzed Kan. āda’s theory in his book, The
Greek Atomists and Epicurus. In his view, “It is interesting to realize that at an early date Indian
philosophers had arrived at an atomic explanation of the universe. The doctrines of this school
were expounded in the Vaicesika Sutra [Vaiśes. ika-Sūtras] and interpreted by the aphorisms of
Kanada [Kan. āda]. While, like the Greek Atomists, they reached atomism through the denial
of the possibility of infinite division and the assertion that indivisible particles must ultimately
be reached in order to secure reality and permanence in the world, there are very considerable
differences between the Indian doctrines and that of the Greeks.”56 On the relation of atoms
and matter in Kan. āda’s system and its similarity to Greek atomism, Bailey continues, “Kanada
[Kan. āda] works out the idea of their combinations in a detailed system, which reminds us at once
of the Pythagoreans and in some respects of modern science, holding that two atoms combined
in a binary compound and three of these binaries in a triad which would be of a size to be
perceptible to the senses.”57
There are marked differences between Kan. āda and Democritus. Atoms, as defined by
Kan. āda, are indestructible, indivisible, and have the minutest dimension. In the case of Dem-
ocritus, atoms of all elements (substances) are made up of just one substance; they differ only with regard to form, dimension, position, etc. These atoms were indivisible, as Kan. āda suggested, and had infinite shapes and, therefore, formed infinite types of substances.58 The roots of Indian
atomism were epistemological and based on empiricism and observation while those of Greek
52Nyāya-sūtra, 4: 2: 16.
53Vāyu-Purān. a, 2: 39: 117.
54Srimad-Bhāgvatam, 3: 11: 1.
55Srimad-Bhāgvatam, 3: 11: 5.
56Bailey, 1928, p. 64.
57Bailey, 1928, p. 65.
58McDonell, 1991, p. 11–12.
atomism were ontological or a priori.59 Kan. āda gained knowledge of atoms through empiricism,
intuition, logic, observation, and experimentation.
5.5 GRAVITATION AND OCEAN TIDES
The Vaiśes. ika-Sūtra suggested that objects fall downward when dropped as a result of gravity.
“The falling of water, in absence of conjunction, is due to gravity,”60 and “flowing results from
fluidity."61 Kan. āda used the Sanskrit word gurutva for heaviness or gravity; gurutvākars. an. (implying attraction by the earth) is the Hindi term for the gravitational force today.
To elaborate on the gravitational properties, Kan. āda wrote: "In absence of conjunction, falling
results from gravity.”62 In the absence of any conjunction, due to gravity, only downward motion
is possible: “In absence of propulsive energy generated by action, falling results from gravity.”63
Kan. āda also explained why water falls due to the gravitational effect, even though it goes up in the form of rain-clouds during the evaporation process: "The Sun's rays cause the ascent of water, their conjunction with air."64 Kan. āda explains the upward motion of objects: "Throwing
upward is the joint product of gravity, volition, and conjunction.”65 An example is the upward
motion of water in a tree.66
Al-Bīrūnī (973–1050 CE) explains the shape of the Earth and force of attraction between
the Earth and human: “The difference of the times which has been remarked is one of the results
of the roundity of the earth, and of its occupying the center [center] of the globe. . . the existence
of men on earth is accounted for by the attraction of everything heavy toward its center [center],
i.e., the middle of the world.”67 He continues, “. . . we say that the earth on all its sides is the
same; all people on earth stand upright, and all heavy things fall down to earth by a law of nature,
for it is the nature of the earth to attract and to keep things. . . .”68
Al-Bīrūnī provides an analogy of a kadamba flower (Fig. 4.1) to describe humans on the
Earth. In this analogy, spikes are like humans. The force of gravitation allows humans to live on
all sides of the Earth where nothing is up or down. He ascribed this knowledge to Varāhmihir
(505–587 CE).69 As mentioned in Chapter 4, Āryabhat.a I also knew this because he used the
same analogy. He said that there was no “up” or “down” side of the Earth. What this means is
that, from its location in the great vastness of space, among the immense number of stars and
59Horne, 1960
60Vaiśes. ika-Sūtra, 5: 2: 3.
61Vaiśes. ika-Sūtra, 5: 2: 4.
62Vaiśes. ika-Sūtra, 5: 1: 7.
63Vaiśes. ika-Sūtra, 5: 1: 18.
64Vaiśes. ika-Sūtra, 5: 2: 5.
65Vaiśes. ika-Sūtra, 1: 1: 29.
66Vaiśes. ika-Sūtra, 5: 2: 7.
67Sachau, 1964, 1, p. 270.
68Sachau, 1964, vol. 1, p. 272.
69Sachau, 1964, vol. 1, p. 272.
other celestial bodies, it is impossible to designate an “up” or “down” side, or direction, with the
Earth.
Ocean tides have been noticed since antiquity. Are they a phenomenon like the overflow of water in a cup? Do we have a similar effect in the ocean? Vālmīki, in his famous epic Rāmāyan. a, connects
high ocean tides to the full moon: “the roaring of the heaving ocean during the fullness of the
Moon.”70 The Vis. n. u-Purān. a clearly suggests that the amount of water is not increased: “In all
the oceans the water remains at all times in the same quantity, and never increases or diminishes;
but like the water in a pot, which, expands with heating, so the water of the ocean expand and
contract with the phase of the Moon.”71
The Matsya-Purān. a also gives a similar picture, “When the Moon rises in the East, the
sea begins to swell. The sea becomes less when the Moon wanes. When the sea swells, it does
so with its own waters, and when it subsides, its swelling is lost in its own water . . . the store of
water remains the same. The sea rises and falls, according to the phases of the Moon.”72
This is an excellent analogy: it indicates that the water expands during high tides in the ocean and that the quantity of water in the ocean does not increase or decrease with a change in the phase of the Moon. Additionally, it suggests that the Hindus knew of the thermal expansion of matter.73
Al-Bīrūnī claims that “the educated Hindus determine the daily phases of the tides by the
rising and setting of the Moon, the monthly phases by the increase and waning of the Moon .
. .”74 Johannes Kepler (1571–1630 CE) is generally credited with suggesting the correlation of
ocean tides with the phases of the Moon. Kepler lived about five centuries after Al-Bīrūnī and at least a millennium after the Vis. n. u-Purān. a was written. Since the Moon and the Sun are so far away from the Earth, it was difficult for scientists of that period to comprehend the theory of ocean tides. Newton's concept of action-at-a-distance later established its validity.
70Vālmīki’s Rāmāyan. a, 2: 6: 27.
71Vis. n. u-Purān. a, 2: 4.
72Matsya-Purān. a, 123: 30–34.
73The expansion of matter with an increase in temperature has been known to modern scientists only for the last two to three hundred years.
74Sachau, 1964, 2, p. 105.
C H A P T E R 6
Chemistry
Chemistry, like medicine, evolved primarily as a science of rejuvenation among the ancient Hin-
dus. Suśruta defined chemistry (Rasāyana-tantra) as the science “for the prolongation of human
life, and the invigoration of memory and the vital organs of man. It deals with the recipes that
enable a man to retain his manhood or youthful vigor up to a good old age, and which generally
serve to make the human system immune to disease and decay.”1 Soma—the elixir of life—was
produced with a knowledge of chemistry, and is mentioned in the R. gveda and Atharvaveda.2
Soma is a tonic to rejuvenate a person and to slow down the aging process. The Atharvaveda
suggests: “Invest this Soma for long life, invest him for great hearing power.”3
Rasāyana is the rejuvenation therapy in ayurveda which was practiced to prolong life. This
therapy is used to replenish the rasa (soup or sauce) and other dhātus (elements) in our physical
body.4 Caraka explains the purpose of rasāyana: “Long life, excellent memory and intelligence,
freedom from disease, a healthy glow, good complexion, a deep powerful voice, strong bodily
and sensory powers, and beauty can be obtained from rasāyana.”5
Caraka and Suśruta define calcination, distillation, and sublimation processes in the chem-
ical transformation or purification of metals.6 Caraka mentions gold, copper, lead, tin, iron, zinc,
and mercury used in drugs and prescribed various ointments made from copper sulphate, iron
sulphate, and sulfur, for external application in various skin diseases.7 The oxides of copper, iron,
lead, tin and zinc were also used for medicinal purposes. All these metals are native to the Indian
Peninsula. Thin sheets of gold, silver, and iron were treated with salts and alkali in the prepara-
tion of drugs by Caraka.8 The Chāndogya-Upanis. ad tells us of the alloys (reaction or joining) of
gold with salt (borax), silver with gold, tin with silver, and copper with lead.9
Poisons and the drugs to counter poison were developed. The Mahābhārata, Rāmāyan. a,
and Kaut.ilaya’s Arthaśāstra mention chemical weapons (astra) that were used in wars. For exam-
ple, Kaut.ilaya provided a recipe of poison gas that was deadly: “The smoke caused by burning
the powder of śatakardama, uchchidi ˙nga (crab), karavira (nerium odorum), kat. utumbi (a kind of
1Suśruta-Sa ˙mhitā, Sūtrasthanam, 1: 10.
2R. gveda, 10: 57: 3–4; R. gveda, 9: 62: 1; R. gveda, 9: 2: 1; R. gveda, 8: 2: 1; Atharvaveda, 19: 24: 3.
3Atharvaveda 29: 24: 3.
4Caraka-Sa ˙mhitā, Cikitsāstānam, 1: 5
5Caraka-Sa ˙mhitā, Cikitsāstānam, 1: 6–7.
6Biswas and Biswas, 1996 and 2001; Ray, 1948; and Bhagvat, 1933.
7Ray, 1956, p. 61–62.
8Ray, 1956, p. 62.
9Chāndogya-Upanis. ad, 4: 17: 7.
bitter gourd), and fish, together with chaff of the grains of madana and kodrave (paspalam scro-
biculatum), or with the chaff of the seeds of hastikarn. a (castor oil tree) and palāśa (butea frondosa)
destroys animal life as far as it is carried off by the wind.”10
Several deadly poisons were also concocted using complicated procedures: “The smoke
caused by burning the powder made of the mixture of the dung and urine of pigeons, frogs,
flesh-eating animals, elephants, men, and boars, the chaff and powder of barley mixed with
kāsīsa (green sulphate of iron), rice, the seeds of cotton, kut. aja (nerium antidysentericum), and
kośātaki (luffa pentandra), cow’s urine, the root of bhān. di (hydroeotyle asiatica), the powder of
nimba (nimba meria) śigru (hyperanthera morunga), phan. irjaka (a kind of basil or tulsī plant),
kshibapiluka (ripe coreya arborea), and bhā ˙nga (a common intoxicating drug), the skin of a snake
and fish, and the powder of the nails and tusk of an elephant, all mixed with the chaff of madana
and kodrava (paspalam scrobiculatum) or with the chaff of the seeds of hastikaran. a (castor oil tree)
and palāśa (butea frondosa) causes instantaneous death wherever the smoke is carried off by the
wind."11 One can notice that the concoction is fairly complicated to produce and the ingredients are not so common. The dung of pigeons and frogs, and the skin of a snake, are not common items in most chemical preparations. However, this is how most drugs and poisons were made during the ancient and medieval periods.
The ancient Hindus used metals, minerals, gems, and jewels to treat obstinate, incurable diseases. Complex chemical transformations were performed before a concoction was prepared. Metals were converted into bhasmas using oxidation or reduction processes. In some cases, these were transformed into biologically active nanoparticles.12 The concept of reducing particle size to improve the effectiveness of a drug is as old as the Caraka-Sa ˙mhitā. In some cases, metals are heated to a high temperature and quenched with plant extracts. In this process, flakes of metal turn into a fine nano-size powder with a new chemical structure and properties. One criterion used to judge whether the process was complete was that the bhasmas should be lusterless. Al-Bīrūnī provides accounts of the alchemical practices of the Hindus.13 "They [the Hindus] have
a science similar to alchemy which is quite peculiar to them. They call it Rasāyana, a word
composed with rasa, i.e., gold.14 It means an art which is restricted to certain operations, drugs,
and compound medicines, most of which are taken from plants. Its principles restore the health
of those who are ill beyond hope, and give back youth to fading old age, so that people become
again what they were in the age near puberty; white hair becomes black again, the keenness of
the senses is restored as well as the capacity for juvenile agility, and even for cohabitation, and
the life of people in this world is even extended to a long period.”15
10Arthaśāstra, 411; Book 14, Chapter 1; Shamasastri, 1960. p. 442.
11Arthaśāstra, 411; Book 14, Chapter 1; Shamasastri, 1960. p. 442–443.
12Chaudhary and Singh, 2010; Sarkar and Chaudhary, 2010
13Sachau, vol. 1, p. 187–193.
14Al-Bīrūnī is mistaken in translating rasa as gold. Since gold is used in some recipes in Rasāyana, it may have caused him
to make this mistake.
15Sachau, vol. 1, p. 188–189.
“In India the earliest allusions to alchemical ideas appear in the Atharva Veda (Athar-
vaveda), where mention is made of the gold which is born from fire,” writes C. J. Thompson, in
his book The Lure and Romance of Alchemy.16 The R. gveda mentions surā as an intoxicating drink
that was used along with soma. Today, surā is a term used in India for alcoholic drinks. The
Arthaśāstra of Kaut.ilaya gives recipes for fermented alcoholic drinks made from rice (similar to
Sake, a popular Japanese drink), sugarcane (similar to rum), grapes (similar to wine), and various
spices. Suśruta wrote a complete chapter on the elixirs for rejuvenation of humans and suggested
recipes for people to become immune to disease and decay.17
Cyavanaprāśa is a very popular tonic for rejuvenation in India. The name stems from Cyavana r. s. i (seer), who rejuvenated his body with the help of the Aśvin Kumāras, two Vedic doctors. This is mentioned at several places in the R. gveda.18 People in northern In-
dia take Cyavanaprāśa as a precautionary measure particularly in the winter season against the
common cold and cold-related symptoms. The use of Cyavanaprāśa is fairly ancient as Caraka
has referred to it for rejuvenation.19
Caraka provided several recipes for making iron tonics in his Caraka-Sa ˙mhitā. The process
of making these elixirs is known as the killing of metal in Sanskrit, and the final product is called
bhasma. To accomplish the killing of iron, Caraka recommended the use of fine thin plates of
iron with āmlā (a fruit) extract and honey for one year in an underground pot; it reduces iron to
a ferrous compound. Also, Caraka mentioned the conversion of yellow gold into a red colloidal
form using plant extract, before being used as a tonic. Many of these reactions were done to make
the metals nontoxic. Suśruta devoted one whole chapter on the use of alkalis (ks. āra) in various
diseases and noticed its corroding, digestive, mucous destroying, and virile potency destroying
properties.20
6.1 MINING AND METALLURGY
The ancient Hindus excelled in mining and metallurgical processes. In Persia, due to the high
quality of the so-called Damascus steel that was originally produced in Hind, steel was called "foulade Hind," indicating the steel of India. Similarly, after his conquest of Poros, Alexander the Great received steel as a precious gift that the Greeks did not have (see Section 6.2). Aristotle, in his book On Marvelous Things Heard, writes that Indian copper is known to be good and is indistinguishable from gold.21 This mention was made perhaps to acknowledge the
high quality of the Indian bronze made from copper that was “indistinguishable from gold” due
to the mixing of zinc and other metals or minerals.
16Thompson, 1932, p. 54.
17Suśruta-Sa ˙mhitā, Cikitsāstānam, 27: 1.
18R. gveda, 5: 74: 5 and 6; R. gveda, 7: 71: 5.
19Caraka-Sa ˙mhitā, Cikitsāstānam, 1: 74.
20Suśruta-Sa ˙mhitā, Sūtrasthānam, Chapter 11.
21Aristotle, On Marvelous Things Heard, 49, part of Aristotle’s Minor Work.
The smelting operation is described in the R. gveda,22 where ore was dumped into fire and bellows
were used to fan the fire. A good example of its existence is the theory of creation, as shared in
Chapter 4, provided in R. gveda, where the analogy of a smelting device was used to describe the
creation of the universe. The Atharvaveda tells us that the chest of the earth contains gold, indi-
cating gold mines and mining operations.23 Most ancient metals such as iron, copper, gold, silver,
and lead were naturally available in various parts of India.24 The remains of Mohenjo-daro and
Harappa contain metallic sheets and pottery that are inscribed. Bronze, copper, iron, lead, gold,
and silver were used to make axes, daggers, knives, spears, arrow heads, swords, drills, metal mir-
rors, eating and cooking utensils, and storage utensils. The 10-centimeter-high dancing girl of Mohenjo-daro is made of copper, with an armful of bangles (Figure 6.1). Out of the 324 objects from these archaeological sites that have been analyzed, about 184 are of pure copper,25 and four copper-processing kilns have been discovered at the Harappa, Lothal, and Mohenjo-daro sites.26
Timber from an old mine in Hutti, Karnataka, was collected at a depth of about 200 meters and carbon dated. It was concluded that mining was carried out in this mine around the 4th century BCE.27 Ktesias, a Greek traveler who lived in Persia during the 5th century BCE, mentions the vast amount of gold mined from "high-towering mountains."28 He also mentions a congealing process to obtain quality gold.29 Several ancient Greek
historians, such as Herodotus, Ktesias, Arrian, and Megasthenes, mentioned the use of metals
in India.30
The chemical analysis of brass artifacts from Taxila (third-fourth century BCE) reveals a 35–40 percent zinc content, giving them a golden appearance. This was achieved by mixing zinc with copper. It is quite natural to assume that a metallurgical process for obtaining pure zinc was practiced. Zinc is found as sphalerite, a sulphide of zinc. This ore is first converted into an oxide in a combustion process, and then into zinc in a reduction process, which is quite cumbersome. However, the smelting technicians found an innovative procedure to obtain pure zinc. In contrast, such extraction methods were practiced in Europe only from around the sixteenth century.31
Zinc was smelted in a downward distillation process in which the zinc vapors were swiftly cooled to prevent them from re-oxidizing, which happens at a slightly higher temperature. The main concern is that zinc oxide needs a minimum temperature of 1150 degrees Celsius for the
22R. gveda, 9: 112.
23Atharvaveda, 12:1:6.
24Agarwal, 2000; Biswas and Biswas, 1996.
25Lahiri, 1995.
26Agrawal, 2000, p. 40.
27Radhakrishna and Curtis, 1991, p. 23–24.
28McCrindle, 1973, p. 16, 17.
29McCrindle, 1973, p. 68.
30Bigwood, 1995.
31Subbarayappa, 2013 p. 299.
Figure 6.1: The dancing girl of Mohenjo-daro. (Taken from Wikimedia).
reduction of the oxide, while the boiling point of zinc is about 900 degrees Celsius. Thus, the zinc vaporizes and
escapes. The excavated furnaces at Zawar, in Udaipur district, Rajasthan, have two chambers,
top and bottom, separated by a thick perforated brick plate. The top chamber is sealed from
the top, forcing vapor to go to the lower chamber which was filled with water. Water reduced
the temperature and solidified the zinc. Even today, this area is known for mining operations, and
Hindustan Zinc Ltd. is still in operation there.32
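The temperature argument in this paragraph can be summarized in a short illustrative sketch. The two thresholds used below are simply the figures quoted above (a minimum of about 1150 degrees Celsius to reduce ZnO and a boiling point of about 900 degrees Celsius for zinc); the function is hypothetical and only restates that reasoning:

# Illustrative sketch of the Zawar retort reasoning described above.
# The temperature thresholds are the figures quoted in the text.
ZNO_REDUCTION_MIN_C = 1150   # minimum temperature quoted for reducing ZnO
ZN_BOILING_POINT_C = 900     # boiling point of zinc quoted in the text

def zinc_state(furnace_temp_c: float) -> str:
    """Describe what happens to the zinc at a given furnace temperature."""
    if furnace_temp_c < ZNO_REDUCTION_MIN_C:
        return "ZnO is not reduced; no metallic zinc is produced"
    # Any furnace hot enough to reduce ZnO is already above zinc's boiling
    # point, so the metal leaves the charge as vapor and must be condensed
    # quickly, which the water-backed lower chamber at Zawar achieved.
    return "zinc forms as vapor and must be condensed rapidly"

print(zinc_state(1200))  # zinc forms as vapor and must be condensed rapidly
print(zinc_state(1000))  # ZnO is not reduced; no metallic zinc is produced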
The copper statue of Buddha in Sultanganj (Figure 6.2), Bihar, is over 2.3 m high, 1 m
wide, and weighs over 500 kg. It was found in 1864 in the North Indian town of Sultanganj,
Bhagalpur district, Bihar, by an East India Company engineer. Today, the statue is housed in
Birmingham Museum and Art Gallery, Birmingham, England. The statue was made using a
technique popularly known as the 'lost wax' technique. It is a method of metal casting in which molten metal is poured into a mold, usually of clay, formed around a wax model. Once the soft clay solidifies, the whole assembly is baked so that the wax melts and is extracted. This clay mold is then used for the metal casting.
The Painted Grey Ware of the Gangetic Valley was manufactured around 600 BCE.33
Two glass bangles found at Hastinapur, near New Delhi, are from the 1100–800 BCE period.
The bangles are made from soda-lime-silicate glass with ornamental coloring from iron. Similar
bangles of green color were also found elsewhere in northern India.34 Glass bangles are ornamental jewelry that is worn by Hindu women even today. Beads of green glass with cuprous
oxide coloring were found in Taxila belonging to the 700–600 BCE period.35
Kaut.ilya realized the importance of mining to the state economy. He suggested that
“mines are the source of treasury; from treasury comes the power of government.”36 He described
various metallic ores and mining operations along with the processes of distillation, refinement,
and quality control. Kaut.ilya even defined the duties of the Director of mining: “must possess
the knowledge of the science dealing with copper and other minerals (sulbadhātu-śāstra), expe-
rienced in the art of distillation and condensation of mercury (rasapāka) and of testing gems,
aided by experts in mineralogy and equipped with mining laborers and necessary instruments,
the Director of mines shall examine mines which, on account of their containing mineral excre-
ment, crucibles, charcoal, and ashes, may appear to have been once exploited or which may be
newly discovered on plains or mountain slopes possessing mineral ores, the richness of which
can be ascertained by weight, depth of color, piercing smell, and taste.”37
Kaut.ilaya provided clues for finding good locations for mining: soil, stones, or water that appear to be mixed with metal, a bright color, unusual heaviness, or a strong smell all indicate the possibility of a mine nearby.38 He provides techniques to
locate glass, gold, silver, iron, and copper mines.39 Mining operations were leased to private
investors at a fixed rate.40 According to him, the head of mining operations must know metal-
32Subbarayappa, 2013, p. 300; Srinivasan, 2016.
33Dikshit, 1969, p. 3.
34Dikshit, 1969, p. 3.
35Dikshit, 1969, p. 4.
36Arthaśāstra, 85; Book 2, Chapter 12; Shamasastri, 1960. p. 89.
37Arthaśāstra, 82; Book 2, Chapter 12; Shamasastri, 1960. p. 83.
38Arthaśāstra, 82; Book 2, Chapter 12; Shamasastri, 1960. p. 84.
39Arthaśāstra, 82; Book 2, Chapter 12; Shamasastri, 1960. p. 84.
40Arthaśāstra, 83; Book 2, Chapter 12; Shamasastri, 1960. p. 86.
Figure 6.2: Statue of Buddha from Sultanganj. (Taken from Wikimedia.)
lurgy, chemistry, and the refinery processes.41 This tells us that metallurgy was an established
discipline during the third century BCE in India. Kaut.ilaya defined the ethical rules of conduct
for goldsmiths. Severe penalties were prescribed for goldsmiths who fraudulently adulterated
gold. Any mixing of inexpensive impurities of tin, copper, and brass during the melting process
of gold and silver was considered a crime and the criminals were prosecuted. Thieves of mineral
products were punished with a financial penalty that was eight times the value of the stolen
products.42
41Arthaśāstra, 81; Book 2, Chapter 12; Shamasastri, 1960. p. 83.
42Arthaśāstra, 83; Book 2, Chapter 12; Shamasastri, 1960. p. 86.
Kaut.ilaya described gold, silver, arsenic, copper, lead, tin, and iron ores and their pu-
rification processes.43 Coins, as currency, were used in business transactions during his period.
Compositions of different coins are defined in Kaut.ilaya’s book,44 and rules of punishment are
defined for people making counterfeit coins.45 Kaut.ilaya also explained the methods to catch
manufacturers of counterfeit coins46 and defined a penalty for the coin-examiner to accept a
counterfeit coin into the treasury.47
Kaut.ilaya defined sixteen gold standards based on the percentage of gold content in a specimen, a system similar to the current "carat" system. Copper was mixed with gold to form alloys of different carat standards.48 Several gold alloys were defined by Kaut.ilaya that were blue, red, white, yellow, and parrot (green) in color. This was achieved by the processing
of gold with rock salt, lead, copper, silver, mercury, etc. The processes of chemical reactions were
well-defined with exact proportions of each element or compound.49 Gold-plating and other
metal-plating procedures are defined using amalgams, heating processes, rock salt, mica, wax,
etc.50 Kaut.ilaya also defined color, weight, characteristics, hammering, cutting, scratching, and
rubbing as tools to test precious stones.51
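For comparison with the modern scale mentioned above, gold content maps to carats linearly. The sketch below is not from the Arthaśāstra; it only applies the modern convention of 24 carats for pure gold:

# Convert a gold mass fraction to the modern carat scale (24 = pure gold),
# for comparison with Kaut.ilaya's sixteen-grade system mentioned above.
def gold_carat(gold_mass_fraction: float) -> float:
    """Carat value for a gold alloy, e.g. 0.75 (75% gold) -> 18 carats."""
    if not 0.0 <= gold_mass_fraction <= 1.0:
        raise ValueError("gold_mass_fraction must lie between 0 and 1")
    return 24.0 * gold_mass_fraction

print(gold_carat(1.0))    # 24.0 (pure gold)
print(gold_carat(0.75))   # 18.0 (a typical gold-copper alloy)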
6.1.1 THE IRON PILLAR OF NEW DELHI
The Iron Pillar near Qutab-Minar in New Delhi is a testimonial to the metal forging skills
of the ancient Hindus (Figure 6.3). The pillar marks the renunciation of kingly duties and the beginning of an ascetic life for King Candra. We know of this from the inscription on the pillar.
The inscription is dated between 400–450 CE and the inscribed letters have minimal corrosion
despite 1600 years of weathering in the open air.52 Air, heat, and heavy rains of Northern India
have not caused significant rusting of the pillar, even with high heat and humid weather from
July to September. The pillar is indisputably a long-standing permanent record of the excellent metallurgical and engineering skills of the ancient Hindus.
King Candra is most likely the famous King Candragupta Vikramāditya II (375–413 CE), and the current site of the pillar was chosen by the Tomar king Ana ˙ngpāla, who erected it on the site of a temple. In 1739 CE, Nadir Shah occupied the city and tried to break the pillar using cannons, but failed. Several spots where the cannon balls hit the pillar can still be seen.53
43Arthaśāstra, 93; Book 2, Chapter 14; Shamasastri, 1960. p. 98.
44Arthaśāstra, 84; Book 2, Chapter 2; Shamasastri, 1960. p. 86–87.
45Arthaśāstra, 59; Book 2, Chapter 5; Shamasastri, 1960. p. 57, 58.
46Arthaśāstra, 212; Boook 4, Chapter 4; Shamasastri, 1960. p. 239.
47Arthaśāstra, 203; Book 4, Chapter 1; Shamasastri, 1960. p. 230.
48Arthaśāstra, 86; Book 2, Chapter 13; Shamasastri, 1960. p. 90.
49Arthaśāstra, 88; Book 2, Chapter 13; Shamasastri, 1960. p. 92.
50Arthaśāstra, 92; Book 2, Chapter 14; Shamasastri, 1960. p. 97.
51Arthaśāstra, 92; Book 2, Chapter 14; Shamasastri, 1960. p. 97.
52Balasubramaniam, 2001.
53Balasubramaniam, 2001; Balasubramaniam, Prabhakar and Shanker, 2009.
Figure 6.3: The Iron Pillar of New Delhi. (Taken from Wikimedia.)
The pillar is about 7.16 m long (23 feet, 6 inches) with a 42.4 cm (16.4 inches) diameter
near the bottom and about 30.1 cm (11.8 inches) at the top. It weighs over six tons. The pillar is
a solid body with a mechanical yield strength of 23.5 tons per square inch and ultimate tensile
strength of 23.9 tons per square inch. It is made of wrought iron. There is no other pillar from
the early medieval period of that size anywhere else in the world.54
The composition of the wrought iron is as follows:55 carbon 0.15%, silicon 0.05%, sulfur
0.005%, manganese 0.05%, copper 0.03%, nickel 0.05%, nitrogen 0.02%, phosphorus 0.25%,
and 99.4% pure iron.
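As a quick arithmetic check on the analysis just quoted, the trace elements and the iron fraction should account for essentially the whole specimen. The short sketch below simply restates the numbers listed above:

# Sanity check on the Delhi Iron Pillar composition quoted above (footnote 55).
composition_percent = {
    "carbon": 0.15, "silicon": 0.05, "sulfur": 0.005, "manganese": 0.05,
    "copper": 0.03, "nickel": 0.05, "nitrogen": 0.02, "phosphorus": 0.25,
    "iron": 99.4,
}
total = sum(composition_percent.values())
print(f"total = {total:.3f}%")  # ~100%, as expected for a complete analysis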
Iron of such purity is not naturally available in Indian mines. This composition is a strong
indication of an iron refining process in the making of this pillar. The pillar has a coating of a
thin protective layer of Fe3O4 using salts and quenching. The excavation of the buried portion
revealed that the base of the pillar was covered by a sheet of lead, about 3 mm in thickness. This
absence of rusting is not unique to the Iron Pillar. The iron beams of the Sun Temple in coastal
54Lal, B. B., On the Metallurgy of the Meharauli Iron Pillar, in the book by Joshi and Gupta, 1989, p. 25 and 26;
Balasubramaniam, 2001.
55Biswas and Biswas, 1996, vol. 1, p. 394.
Orissa and the Iron Pillar of Dhar (Madhya Pradesh), both in relatively high-humidity regions,
do not show much rusting either. This is perhaps due to a higher phosphorus content of the iron.
The Sun Temple of Kon. ārka, a ninth century construction, in Orissa has 29 iron beams of
various dimensions used in its construction. The largest beam is 10.5 meter (35 feet) in length
and about 17 cm (about 7 inches) in width with a square cross-section and is about 2669 kg
(6000 lbs.) in weight.56 Iron nails are used to connect stone pieces. The composition of the iron
beams and nails is similar to those of the famed Iron Pillar of Delhi. Similarly, the iron pillar of
Dhar is about thousand years old and has little or no rusting.57 The pillar is 7.3 tons in mass and
42 feet long. Dhar is an old capital of Malwa, about 60 km from the well-known city
of Indore. The pillar was possibly constructed by King Bhoja (1010–1053 CE) and currently
lies in three parts in front of Lāt. mosque in Dhar. King Bhoja was well versed in metallurgy
and wrote a book on metallurgical processes and metal weapons, called Yuktikalpataru. Bahadur
Shah, a Muslim king, captured the region and decided to move the erected pillar to Gujarat. In the process of digging up the ground to take out the pillar, it fell down and broke into pieces, as we
know from the memoirs of Emperor Jahāngir. The pillar has been weathering the monsoons of
India for over a thousand years and has little rusting. The surface of the pillar is coated with a thin
optically dull layer and on top of it is another thick layer of optically bright material. Professor
R. Balasubramaniam of the Indian Institute of Technology, Kanpur, India has compiled a nice
summary of the history of the pillar and its chemical constitution.58
Recent research has figured out why these pillars do not rust. The main reason is the mixing of phosphorus with the iron. Once the surface of the iron rusts, it becomes porous and allows the phosphorus to react with other chemical compounds, forming phosphoric acid. This acid interacts with the iron to form iron dihydrogen phosphate. The chemical reactions are as follows:59
Fe + 2H3PO4 → Fe(H2PO4)2 + H2
FeO + 2H3PO4 → Fe(H2PO4)2 + H2O

This further dissociates into two forms:

Fe(H2PO4)2 → FeHPO4 + H3PO4
3Fe(H2PO4)2 → Fe3(PO4)2 + 4H3PO4
Both of these phosphates are amorphous and insoluble in water. These amorphous phosphates reorganize themselves into crystalline ferric phosphate, which drastically reduces the porosity of the surface.
56Ray, 1956, p. 212.
57Balasubramaniam and Kumar, 2003.
58Balasubramaniam, 2002.
59Balasubramaniam, 2001.
This large reduction in porosity reduces any further rusting of the iron.60 Thus, it is the phosphorus content of the pillar that does the trick, making the pillar rust free. Can we use the same technology in making car bodies, which are prone to rusting in cold regions where salt is used on roads?
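As a sanity check on the reactions written above, a short script can confirm that each one conserves every element. This is only an illustrative sketch (the parser and reaction strings are written for this example, not taken from the cited corrosion studies):

# Check that each phosphate reaction quoted above conserves every element.
import re
from collections import Counter

TOKEN = re.compile(r"[A-Z][a-z]?|\d+|[()]")

def count_atoms(species: str) -> Counter:
    """Atom counts for a species such as '2H3PO4' or 'Fe(H2PO4)2'."""
    lead = re.match(r"\d+", species)
    coeff = int(lead.group()) if lead else 1
    tokens = TOKEN.findall(species[lead.end():] if lead else species)

    def parse(i):
        counts = Counter()
        while i < len(tokens) and tokens[i] != ")":
            if tokens[i] == "(":
                inner, i = parse(i + 1)
                i += 1                      # skip the closing ")"
            else:
                inner = Counter({tokens[i]: 1})
                i += 1
            mult = 1
            if i < len(tokens) and tokens[i].isdigit():
                mult = int(tokens[i])
                i += 1
            for element, n in inner.items():
                counts[element] += n * mult
        return counts, i

    counts, _ = parse(0)
    return Counter({el: n * coeff for el, n in counts.items()})

def balanced(reaction: str) -> bool:
    """True if both sides of 'A + B -> C + D' contain the same atoms."""
    def side_total(side):
        return sum((count_atoms(s.strip()) for s in side.split("+")), Counter())
    lhs, rhs = reaction.split("->")
    return side_total(lhs) == side_total(rhs)

reactions = [
    "Fe + 2H3PO4 -> Fe(H2PO4)2 + H2",
    "FeO + 2H3PO4 -> Fe(H2PO4)2 + H2O",
    "Fe(H2PO4)2 -> FeHPO4 + H3PO4",
    "3Fe(H2PO4)2 -> Fe3(PO4)2 + 4H3PO4",
]
for r in reactions:
    print(r, "=> balanced:", balanced(r))   # all four print True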
6.2 WOOTZ OR DAMASCUS STEEL
The word steel comes from the Old High German (German language, around 11th century
CE) word stahal which is related to the Sanskrit word stakati, meaning “it resists”61 or “strike
against.” Sword-making was a popular use of this steel due to its hardness. Steel is an alloy of
iron containing 0.10 to 1.5% carbon in the form of cementite (Fe3C).62 The properties of steel
vary greatly with a minor change in carbon content, along with other elements. Metals such as
manganese, silicon, chromium, molybdenum, vanadium, or nickel are purposely mixed in the
process depending on the desired outcome. For example, stainless steel has approximately 12%
or more chromium content.
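The composition ranges just described can be restated as a tiny classifier. The thresholds below are only the figures quoted in this paragraph (0.10-1.5% carbon for steel, roughly 12% chromium for stainless steel), not a metallurgical standard:

# Minimal sketch of the composition thresholds quoted in the paragraph above.
def classify_iron_alloy(carbon_pct: float, chromium_pct: float = 0.0) -> str:
    """Rough label for an iron alloy using only the figures quoted in the text."""
    if 0.10 <= carbon_pct <= 1.5:
        if chromium_pct >= 12.0:
            return "stainless steel (approximately 12% or more chromium)"
        return "steel (carbon present as cementite, Fe3C)"
    return "outside the quoted steel range of 0.10-1.5% carbon"

print(classify_iron_alloy(0.8))          # steel
print(classify_iron_alloy(0.3, 13.0))    # stainless steel
print(classify_iron_alloy(0.02))         # outside the quoted steel range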
Steel was prepared and used in India for various purposes since ancient times.63 Ktesias, who was at the court of Persia during the 5th century BCE, mentioned two high-quality Indian steel swords that were presented to him: one by the King of Persia and the other by the King's mother, Parysatis.64 Nearchus (fl. 360–300 BCE), an officer in the army of Alexander the Great, mentions that the Indians generally carried a broad, three-cubit-long sword with them.65
Poros, a king from Punjab, lost a major battle with Alexander the Great and was imprisoned. He was brought to the court, and Alexander asked him how he should now be treated. Poros replied: like a king. This surprised Alexander, and the ensuing conversation between them made him realize the futility of wars. Alexander decided to release Poros and gave his kingdom back to him. This was a highly unusual move by Alexander; usually, defeated kings were slaughtered or imprisoned. Poros was grateful to receive the gift of life and wanted to show his gratitude to Alexander the Great. Poros wanted to give something that was precious and that Alexander did not have—more precious than gold, gems, or spices. He opted to give 100 talents
(6000 pounds)66 of steel as a precious gift to Alexander, as we know from the accounts of Quintus
Curtius (9: 8: 1), a Roman historian during the first century CE who wrote a biography of
Alexander the Great. Steel was called ferrum candidum, meaning white iron, by Curtius.67
60Balasubramaniam, 2001.
61Le Coze, 2003.
62Bhardwaj, 1979, p. 159.
63Prakash and Igaki, 1984.
64Fragments, 1: 4; McCrindle, 1973, p. 9.
65Bigwood, 1995.
66Casson, 1989, p. 114.
67Casson, 1989, p. 114.
The Periplus Maris Erythraei text, written around 40–70 CE,68 provides information on
the importation of steel to the Roman Empire. It was subject to a customs duty under Marcus Aurelius (121–180 CE) and his son Commodus (161–192 CE).69 During the period of Emperor Justinian (482–565 CE) of Rome, the Digest of the Roman Law (39, 15, 5–7) was compiled
to run the state. Indian iron (Ferrum Indicum) is in the list of objects that were subject to im-
port duty.70 Similarly, iron was taxed and traded in Alexandria, Egypt in the early first century
CE and was called koos. Steel is called wuz in the Gujarati language, and wooku (or ukku) in the Kannada language, a prominent language in South India.71 Perhaps this explains the origin of wootz, the Syrian word for steel. This word was new to the Syriac literature and appeared late in its chronology.
The Arabs called steel Hundwáníy, meaning Indian.72 This word perhaps evolved into
andanic or ondanique for swords and mirrors, used by the medieval writers. This also led to the
words alhinde [or al-Hind, meaning India] and alinde [for steel mirror] in Spain.73 The best steel
in Persia was called foulade Hind, meaning steel of India. Another kind of steel, jawābae Hind,
meaning a Hindu answer, was also popular because it could cut a steel sword.74
Steel was called wootz in India and was traded in the form of castings (cakes) of the
size of ice-hockey pucks.75 Persians made swords from wootz and these swords were later erro-
neously known as Damascus swords.76 As with the Arabic numerals, the Europeans learned about steel-making in the Middle East during the Crusades in 1192 CE.77 Noticing the remarkable hardness and strength, they became interested in knowing the secrets of making ultra-high-carbon steel. The Europeans did not know at that time that the manufacturing process had originated in India.
Archaeological sites in the Periyar district of Tamil Nadu, which date back to about 250
BCE, provide indications of crucibles used to mix iron and carbon for the steel-making pro-
cess.78 Varāhmihir, during the sixth century CE, wrote a chapter on swords (khad. galaks. anam)
in his Br. hat-Sa ˙mhitā (50: 23–26), and provided a recipe for the hardening of steel. His processes
used chemical techniques as well as heating and quenching. Swords treated with these processes
did not “break on stones” or become “blunt on other instruments,” as reported by Varāhmihir.79
Steel production was very much limited due to the high consumption of fuel and the
required high temperature for melting. The situation prevailed until 1850 CE, when high tem-
68Casson, 1989, p. 7.
69Casson, 1989, p. 114.
70Schoff, 1915.
71Agarwal, 2000, p. 198; Biswas and Biswas, 1996, vol.1, p. 121, Le Coze, 2003.
72Schoff, 1915.
73Schoff, 1915.
74Royle, 1837, p. 47.
75Sherby and Wadsworth, 2001
76Sherby and Wadsworth, 2001
77Joshi, Narayan, R., Tough Steel of Ancient India, p. 293, in the book by Sharma and Ghose, 1998.
78Agarwal, 2000, 197; Prakash and Igaki, 1984.
79Biswas and Biswas, 1996, vol. 1, p. 276.
perature furnace technology improved. Thus, the use of steel was largely for making blades for
knives, daggers, and swords. With proper processing, these steels can be made to a strength
that is about five times greater than that of the strongest wrought iron.80 Damascus steel has an
attractive swirling surface pattern that is an outcome of the cooling process. The patterns result
from the alignment of the Fe3C particles that form on the surface during the cooling process.81
Thus, Damascus swords became famous for their hardness and could absorb blows in combat without breaking; such a blade did not fail a medieval warrior during combat.
Making wootz steel is a complex process, as even minute impurities change the outcome. Temperature and the cooling period also affect the quality of the steel. Giambattista della Porta (1535–1615), an Italian scholar from Naples, wrote in 1589 CE on the importance of temperature in treating wootz and suggested avoiding "too much heat."82 Oleg D. Sherby and Jeffrey Wadsworth, two researchers from Stanford University, figured out that slow equilibrium cooling is the best way to produce quality steel. When iron and carbon (1.3–1.9%) are heated to 1200 °C, they reach a molten state, and the slow cooling allows the carbon to diffuse through the iron to form white cementite patterns that result from the alignment of the Fe3C (cementite) particles. With polishing, the Fe3C particles appear white in the near-black steel matrix. The carbide particles serve the role of strengthening without making the metal brittle.83 The blade was hardened by heating it to 727 °C, which allows a change in the crystal structure: iron atoms arranged in the body-centered ferrite structure begin to form a face-centered lattice. The blade was then quenched in water.84 If the heating was done above 800 °C before quenching, it made the metal brittle.
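The temperature windows just described can be summarized in a small sketch. It is purely illustrative and uses only the thresholds quoted above (727 °C for the change to a face-centered lattice, and roughly 800 °C beyond which quenching embrittled the blade):

# Illustrative sketch of the hardening thresholds quoted above (not a model).
def quench_outcome(heating_temp_c: float) -> str:
    """Qualitative outcome of quenching a wootz blade heated to heating_temp_c."""
    if heating_temp_c < 727:
        return "below 727 °C: no change to a face-centered lattice; blade not hardened"
    if heating_temp_c <= 800:
        return "727-800 °C: face-centered lattice forms; quenching hardens the blade"
    return "above 800 °C: quenching leaves the metal brittle"

for t in (700, 750, 850):
    print(t, "->", quench_outcome(t))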
Michael Faraday (1791–1867), a fellow of the Royal Society who made many electromagnetic discoveries, tried to duplicate Damascus steel and incorrectly concluded that aluminum oxide and silica additions contributed to the properties of the steel. Faraday and Stodart also attempted to make steel by alloying nickel and noble metals like platinum and silver. All their efforts were of no avail. Thus, the steel-making technology of ancient India eluded even great European scientists until the nineteenth century.85
6.3 FERMENTATION
The R. gveda mentions fermentation of barley, a key ingredient in beer: “A mixture of a thick juice
of soma with barley powder.”86 In another place, there is a mention of fermented alcoholic drink
which took about 15 days of processing. “Fifteenth day old highly intoxicating soma,”87 which
80Sherby and Woodworth, 2001
81For more information, please read, Sache, 1994; Figiel, 1991.
82Smith, 1982.
83Sherby and Wadsworth, 1985.
84Sherby and Wadsworth, 1985.
85Faraday, 1819; Stodart and Faraday, 1822; Day, 1995
86R. gveda IX: 68: 4.
87R. gveda X: 27: 2.
probably refers to the broth fermented in the vat for 15 days. The Yajurveda-Śukla tells us about
various kinds of alcoholic drinks:88 “Âtithya’s sign is Mâsara, the Gharm’s symbol Nagnahu.
Three nights with Surâ poured, this is the symbol of the Upasads. Emblem of purchased Soma
is Parisrut, foaming drink effused.” Elsewhere in the same book, the fermentation process is
mentioned. “Like shuttle through the loom the steady ferment mixes the red juice with the
foaming spirit.”89 Here Nagnahu is root of a plant that is used as yeast, Parisrut is a kind of beer,
and Surâ is another word for liquor.
The Chāndogya-Upanis. ad tells us that drinking liquor on a regular basis is as bad as stealing
gold, killing a Brahmin, or having an affair with your teacher’s wife.90 Elsewhere, five extremely
wealthy and immensely learned householders decided to examine the following two questions:
What is our self (ātman)? What is brahman? They decided to visit Aśvapati Kaikeya, a highly learned scholar, to have him teach them about these two questions. Aśvapati Kaikeya tells his visitors that "no one drinks" in his kingdom, indicating that alcoholic beverages were known at the time.91
Caraka mentions some 84 different kinds of alcoholic liquors.92 For the fermentation,
Caraka mentions the following sources of sugar: sugarcane juice, gud. a (jaggery), molasses,
honey, coconut water, sweet palmyra sap and mahua flowers. Some sweet fruits such as grape,
date, mango, banana, apricot, jackfruit, rose-apple, jāmun, pomegranate, kādamba, bilva, etc.
are also used in the fermentation. Similarly, rice and barley were used from the grain category
for fermentation. By the time of Kaut.ilaya, the superintendent of liquor was designated to su-
pervise this industry. The manufacture of liquor and the traffic of liquor in and out of village boundaries were monitored.93 Liquor shops were required to have beds and seats, and the
rooms were filled with the scents of flowers. Some recipes for liquor making are also provided:
“Medaka is manufactured with one dron. a (measure of capacity, volume) of water, half an ād. haka
(unit of mass) of rice, and three prasthas (mass) of kin. va (ferment).”94
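To get a feel for the proportions in the Medaka recipe just quoted, the three quantities can be expressed in a single unit. The conversion factors below (4 prasthas to one ād. haka, 4 ād. hakas to one dron. a) follow the standard Arthaśāstra capacity table and are stated here as assumptions; the snippet is only an illustrative sketch:

# Illustrative sketch of the Medaka proportions quoted above.
# Assumed Arthaśāstra capacity units: 1 dron.a = 4 ād.haka, 1 ād.haka = 4 prastha.
PRASTHA_PER_ADHAKA = 4
ADHAKA_PER_DRONA = 4
PRASTHA_PER_DRONA = PRASTHA_PER_ADHAKA * ADHAKA_PER_DRONA  # 16

water_prastha = 1.0 * PRASTHA_PER_DRONA    # one dron.a of water
rice_prastha = 0.5 * PRASTHA_PER_ADHAKA    # half an ād.haka of rice
ferment_prastha = 3.0                      # three prasthas of kin.va

print(f"water : rice : ferment = "
      f"{water_prastha:g} : {rice_prastha:g} : {ferment_prastha:g}")
# -> water : rice : ferment = 16 : 2 : 3 (in prasthas, under these assumptions)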
Some ferments were used for medicinal purposes. Kaut.ilaya suggested that patients should
learn the preparation of these aris. t. a (fermented and distilled liquor and medicine) from physi-
cians. He has even provided several recipes: “one hundred pala of kapittha (Feronia Elephantum),
500 pala (mass) of sugar, and one prastha (mass) of honey form āsava.”95 Some hard liquors with
rice and lentils were also suggested: “one dron. a of either boiled or unboiled paste of māsa (Phrase-
olus Radiatus), three parts more of rice, and one kars.a of morata (Alangium salviifolium) and the
like form kin. va (ferment).”96 Additives were mixed in order to improve the taste or appearance
88Yajurveda-Śukla, 19: 13–15.
89Yajurveda-Śukla, 19: 83.
90“A man who steals gold, drinks liquor, and kill a Brahmin; A man who fornicates with his teacher’s wife—these four
will fall.” Chāndogya-Upanis. ad, 5: 10: 9.
91Chāndogya-Upanis. ad, 5: 11: 5.
92Achaya, 1991.
93Arthaśāstra, 119; Shamasastri, 1960. p. 131.
94Arthaśāstra, 120; Book 3, Chapter 25; Shamasastri, 1960. p. 133.
95Arthaśāstra, 120; Book 2, Chapter 25; Shamasastri, 1960. p. 132.
96Arthaśāstra, 120; Shamasastri, 1960. p. 133.
of liquors. These additives included sweeteners, spices, and astringents.97 The variety listed in the Caraka-Sa ˙mhitā and the Arthaśāstra is comparable to, or even greater than, the variety present in the market today.
97For more information, read Achaya, 1991; Prakash, 1961; and Singh et al., 2010.
C H A P T E R 7
Biology
In the Caraka-Sa ˙mhitā, an early Hindu treatise on medicine, biology was considered the most
important of all sciences. “The science relating to life is regarded by the philosophers as the most
meritorious of all the sciences because it teaches mankind what constitutes their good in both
the worlds [spiritual and physical].”1 Good health helps people to fulfill their four purposes in
life: dharma (duty), artha (prosperity), kāma (sensuality), and moks. a (liberation).2
The ancient Hindus systematically studied various life forms and noticed interdependencies and commonalities among them. These commonalities between plants and animals led Suśruta to suggest that new medical practitioners should dissect plants first before they performed any dissection on animals or humans. About 739 plants and over 250 animals are mentioned in the ancient literature of the Hindus.3 The 24th chapter of the Yajurveda mentions a large variety of birds, animals, and snakes. Plants and animals are characterized
with respect to their utility, habitat, or some other special features: Four-footed, reptiles, claws,
born from an embryonic sac, born from an egg, born from sprouts (plants), or domesticated.4
The botanical world was divided into grasses, creepers, shrubs, herbs, and trees.5 The R. gveda
mentions the heart, lung, stomach, and kidneys. The Atharvaveda6 refers to the heart as a “lotus
with nine gates," a correct description of the heart when held with its apex upwards. From this top
view, one can observe nine openings in the heart: three in right atrium, four in the left atrium,
and one in each of the right and left ventricles.7 Similarly, perhaps there is a mention of blood
circulation in the Atharvaveda:8 “Who stored in him floods moving in all diverse directions and
formed to flow in rivers pink, rosy red, and coppery dark running in all ways in a man, upward
and downward.”
1Caraka-Sa ˙mhitā, Sūtrasthānam, 1: 43.
2Caraka-Sa ˙mhitā, Sūtrasthānam, 1: 15.
3Kapil, 1970.
4Smith, 1991.
5Kapil, 1970; Smith, 1991.
6Atharvaveda, 10: 8: 43.
7Narayana, 1995; Rajgopal et al., 2002.
8Atharvaveda, 10: 2: 11.
7.1 SACRED RIVERS AND MOUNTAINS: ECOLOGICAL PERSPECTIVES
Ecology (paryāvaran. a) primarily deals with the relationships of organisms with their environ-
ment. The Hindus strongly believe in the mighty Earth as a living system that has several billion
years of experience in developing processes that are sustainable and cyclic. It is the respect for
the ecosystem that led the ancient Hindus to promote vegetarianism. Trees, mountains, rivers,
and animals were worshiped by the ancient Hindus. Cutting trees, dumping waste products in
a river, and killing animals are considered sins. It is a common saying in India that God lives in every particle of the universe (kan. a kan. a mei Bhagwān hai). The Bhagavad-Gītā9 suggests the omnipresence of God. Therefore, in the worldview of the Hindus, the ecology should be preserved and respected. The Manu-Smr. ti tells us that all plants and animals have specific functions and we must protect them: "Brahma, the God of Creation, has created all the
plants and animals with specific characteristics and functions, and none should disturb these
creatures.”10
The ancient Hindus considered nature (prakr. ti) as a living sacred entity. Nature is not
a property that is owned by the humans for their use; on the contrary, humans are a part of
nature. Nature is represented by its five forces: ākāśa (space or sky), vāyu (air), teja (fire), ap
(water) and pr. thivī (earth)—popularly known as pañcā-bhūta (five elements). All animate and
inanimate objects in nature are made up of these five elements, including humans and animals.
The ancient Hindus realized that they could neither control nature nor overpower its order; they
just tried to live with it in a manner that was based on respect and appreciation for the natural
forces and order. Rivers, mountains, and oceans were treated with reverence. It was not ethical
to dump waste into a river or deplete a mountain of its trees. The ecosystem connects living
beings with inanimate objects. For example, a little misuse of water, an inanimate object, has
dire consequences to life forms that drink it.
The geographical region of the Indus Valley and the surrounding area was considered
sacred by the ancient Hindus. It was only natural for them to preserve the ecology of the region.
The Earth is worshiped as mother; it is considered to be a devī (Goddess) and has numerous
names: Bhūmi, Pr. thivī, Vasudha, and Vasundharā. Even today, India is personified as Bhārat-
mātā (Mother India). The Earth is considered as mother and the inhabitants as her children,
as suggested in the Atharvaveda,11, and supports people of all races and remains fertile, arable
and nourisher of all.12 This chapter of the Atharvaveda does not discriminate between “us” and
“them;” it includes everyone. Another verse in the same book suggests that the Earth caters to
9Bhagavad-Gītā, 7: 19; 13:13.
10Manu-Smr. ti, 1: 28: 39–35.
11Atharvaveda, 12: 1: 12.
12Atharvaveda, 12: 1: 11
people of different religions and languages.13 A request is made for Mother Earth to provide
medicines, medicinal plants and prosperity,14 and prestige.15
Every day, hundreds of thousands of people visit Ganges [Ga ˙ngā] or Yamunā rivers and
worship them. These rivers are considered to be purifiers of sins, and bathing in the sacred wa-
ters is a popular activity which is also associated with a prayer. In the evenings, group āratī
(prayer) is performed where people worship God by worshiping the river. David Gosling, a nuclear physicist who has studied South Asian ecology, suggests that the Hindu traditions serve
as an important model in “raising social and environmental awareness, underscoring the con-
tinuities between past and present and their possible transformations within an environmental
paradigm.”16
7.2 SACRED TULSĪ AND SACRED COW
Trees and plants are highly efficient chemical and pharmaceutical factories. Human survival is
dependent on the quality of the foods provided by trees and plants. It is only natural to respect
plant life and worship them. The respect that the Hindus bestow on plants can be observed by
noticing stone deities placed under many trees, with flags and streamers adorning the branches.
The mythological kalpavr. ks. a, a tree that fulfills all human desires, is well known in Hindu tradi-
tion. In their popular belief, each tree has a vr. ks. a-devatā, or tree-deity, who is worshiped by the
Hindus with prayers and offerings of water, flowers, and sweets, and encircled by a sacred thread. The
sacred thread symbolizes the wishes of the praying person. For example, in the Mahābhārata,
Sukra says: “Rubbed with the astringent powder of the hanging roots of the Banyan tree (Ficus
bengalensis) and anointed with the oil of priyango (panicum italicum), one should eat the shashlika
paddy mixed with milk. By so doing one gets cleansed of all sins."17 The Padma-Purān. a advo-
cated that planting trees was a simple way to reach nirvān. a (liberation), to escape the cycle of
birth and death.18
Since the Hindus realized the common connection between plants, animals and humans,
and the importance of trees for human life, it became a religious code to preserve plant life.
Aśoka (reigned 272–232 BCE) issued a decree against the burning of forests.19 It was a hunting
practice during the period to set fire to grass and trees. As the animals fled the fire, they were trapped and killed by the hunters. Thus, Aśoka’s decree simultaneously protected plants as well as animals.
Tulsī (Ocimum sanctum) is perhaps the most sacred plant in Hindu households. It is a small shrub, 30–60 cm in height. Tulsī is worshiped by the Hindus as a goddess, just like
13Atharvaveda, 12: 1: 45.
14Atharvaveda, 12: 1: 17 and 27.
15Atharvaveda, 12: 1: 63.
16Gosling, 2001, taken from Van Horn, 2006.
17Mahābhārata, Anuśāsana Parva, 10.
18Padma-Purān. a, Srs. t. i-Kān. d. a, Chapter 28: 19–22.
19Smith, 1964, p. 187.
Laks.mī, Sītā, or Rādhā. It is common for the Hindus to worship a tulsī plant in the morning and
feed it water while facing the Sun. When a Hindu dies, Tulsī leaves mixed with Ganges (Ga ˙ngā) water are placed in the mouth of the deceased for purification. In most pujas (Hindu prayers), a nectar made from milk, honey, curd, and tulsī, called pancāmrt (meaning nectar made from five substances), is given to all participants. In a Hindu wedding, the groom, his family members, and family friends generally go to the bride’s house in a procession. In the midst of this joyous occasion, they first go to a pipal or banyan tree and pray. Similarly, the bride and groom visit a tree as a couple and pray before they enter their house after marriage. The sacred areas
of most Hindu temples are surrounded by trees.
The Hindus reinforce the sanctity of plants and animals by associating them with gods and
goddesses. Swans are associated with Sarasvatī, mice with Lord Ganesha, snakes and bulls with
Lord Śiva, lions with the Goddess Durgā, Lord Kr.s.n. a with snakes and cows, and monkeys with
Lord Rāma. Cows are worshiped, as are snakes and many other life-forms. The following is a list of some trees and their corresponding deities: Goddess Laks.mī in tulsī (Ocimum sanctum),
Goddess Śītalā in neem, God Vis.n. u in pīpal (Ficus religiosa), and Lord Śiva in Vata (Ficus in-
dica).20 Similarly, in other interpretations, tulasī is beloved of Lord Kr.s.n. a; pīpal is inhabited by
Brahmā, God Vis.n. u, and Lord Śiva; aśoka (Saraca indica) is dedicated to God Kāma; palāś, a
tree, to the Moon; bakula (Mimusops elangi) to Lord Kr.s.n. a; and rudrāks. a (Eleaecarpus ganitrus)
to Lord Śiva. “The Hindu spiritual heritage can provide new ways of valuing, thinking, and act-
ing that are needed if respect for the nature is to be achieved and future ecological disasters are
to be averted.”21
Feeding weak and old animals is considered a charitable act. Like the household dogs and
cats in the Western world, all animals were accorded proper treatment by the Hindus.22 When Hindus eat, they set aside some food from their plate for birds, dogs, and cows. The first
chapāti usually goes to a cow and the last one to a dog in many Hindu households.23
The sanctity of cows in Hinduism is well known and much publicized in the West. In
legends and popular stories, the Earth is said to assume the form of a cow, especially in times
of distress, to implore the help of the gods. The sacredness of the cow represents an anomaly
to social scientists, who wonder why, in a country with a hungry population, the cow is virtually untouched.24 The Mahābhārata tells us that the killing of a cow is wrong, has bad consequences, and causes a person to go to hell.25 The relationship between the cow and man is not competitive but symbiotic. The cow meets the needs of human beings by providing them
20Dwivedi, 1997.
21Dwivedi, 1997.
22Narayanan, 2001.
23Most Hindu families do not keep dogs as pets. Usually, each street has a few stray dogs that are patronized by the families
living on that street. These dogs recognize people who live there and protect the street from strangers at night. Mahatma Gandhi used to say that the kindness of a culture can be judged by the way it treats animals.
24Weizman, 1974.
25 “All that kill, eat, and permit the slaughter of cows rot in hell for as many years as there are hairs on the body of the
cow so slain.” (Mahābhārata, 13: 74: 4.)
milk, bullocks, dung, and hides. Judging the value of Indian cattle by Western standards is
inappropriate.26 Due to inherent differences in the value systems, such an evaluation generally
leads to confusion or misleading conclusions.
Kaut.ilya’s Arthaśāstra instructed the director of forests and the superintendents of cattle, horses, and elephants to prevent cruelty to animals and to protect wildlife. Punishments were inflicted on those who violated these rules. For example, killing an elephant was punished with death.27 Similarly, killing or injuring animals in reserve parks and sanctuaries, especially protected species, was punished:28 “Elephants, horses, or animals having the form of a man, bull or an ass living in oceans, as well as fish in tanks, lakes, channels and rivers; and such game birds . . . shall be protected from all kinds of molestation. Those who violate the above rule shall be punished first with amercement.” Animals useful for riding, milk, hair, or stud were protected, and hurting these animals was a crime.29
Because the earth is a closed system, we need processes that will sustain us in the long run. We need to find effective methods to deal with population growth. We also must develop new technologies for food production and new family planning practices, and we must change our dietary choices. Christian J. Peters, a scientist from Tufts University, Boston, along with a team of researchers from other universities, studied the environmental impact and food security of various diets. The purpose of the study was to compare the per capita land requirements and potential carrying capacity of the land base of the continental United States (U.S.) under a diverse set of dietary scenarios. The team considered ten diet scenarios using a biophysical simulation model. Eight of the diets complied with the 2010 Dietary Guidelines for Americans. Diets high in meat supported the fewest people per unit of land. The highest carrying capacity was achieved with the lacto-vegetarian diet, even higher than with the vegan diet.30 This is basically the diet of most Hindus.
The ancient Hindus, in their analysis, observed a balance in prakr. ti (primal nature); every
species plays an important role in the balance. Their understanding of this balance was based on knowledge gained through empiricism, intuition, and experimentation. Once a common connection between plants, animals, and humans was established, the coexistence of all living forms followed. Hanumāna and Gan. apati (Gan. eśa), as well as trees, are revered widely by the Hindus.31
7.2.1 VEGETARIANISM
Livestock production accounts for 70% of all agricultural land use and 30% of the land surface
of the planet, excluding the polar ice-caps. The livestock business contributes about 4.6 to 7.1
26Harris, The Myth of the Sacred Cow, in Leeds and Vayda, 1965.
27Arthaśāstra 2: 2: 9; Shamasastry 50, p. 49.
28Arthaśāstra, 2: 26: 1; Shamasastry, 112, p. 135.
29Arthaśāstra,4: 13: 21; Shamasastry, 235, p. 263.
30Peters et al., 2016
31Mythological stories are associated with these gods. It may be demeaning to some that the Hindu gods have some animal attributes. However, most Hindus do not even think about that. Crowds in temples all over India on Tuesdays testify to their reverence for Lord Hanumāna. Similarly, it is common for people to worship Lord Gan. eśa first on any auspicious
occasion. For more information, see Nelson, Lance [Editor], 1998; and Chapple and Tucker [Editors], 2000; Roy, 2005.
billion tons of greenhouse gases each year to the atmosphere, which accounts for 15% to 24% of
the total current greenhouse gas production, according to a report from the United Nations.32
The greenhouse gas emissions from livestock production are higher than those from all forms of transportation combined. Yes, the climate change that the world is facing is directly connected to our
dinner plate.
To produce one pound of beef, a steer must eat sixteen pounds of grain and soy; the remaining fifteen pounds are used to produce the energy the steer needs to live.33 The production of one kilogram of beef in a US feedlot causes the emission of about 14.8 kg of CO2. In comparison, one liter of gasoline emits about 2.4 kg of CO2.34 An average American’s meat intake is about 124 kg per year, the highest in the world. In comparison, the average for a global citizen is 31 kg a year.35
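A rough back-of-the-envelope comparison puts these figures side by side. It is only a sketch: it hypothetically treats the entire 124 kg of per-capita meat intake as feedlot beef, which overstates the result since much of that meat is not beef, and it converts gasoline at about 2.4 kg of CO2 per liter (roughly 8.9 kg per US gallon):

$$124\ \text{kg} \times 14.8\ \tfrac{\text{kg CO}_2}{\text{kg}} \approx 1{,}835\ \text{kg CO}_2, \qquad \frac{1{,}835\ \text{kg CO}_2}{8.9\ \text{kg CO}_2/\text{gal}} \approx 206\ \text{gallons (about 780 liters) of gasoline}.$$

Under this admittedly crude assumption, one person’s annual meat consumption carries a carbon footprint comparable to burning roughly two hundred gallons of gasoline.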
The Manu-Smr. ti tells us that people who give consent to killing, who dismember a living
entity, who actually kill, who purchase or sell meat, who purify it, who serve it, and who eat
the meat are all sinners.36 Bhīs.ma explains to Yudhis.t.hira that the meat of animals is like the
flesh of one’s own son, and a person who eats meat is considered a violent human being.37 The
Mahābhārata38 also suggests that “dharma exists for the general welfare of all living beings; hence
by which the welfare of all living creatures is sustained, that is real dharma.”
In the Hindi language, the word ācar-vicār (ācar = conduct, vicār = thought) signifies
the connection of mind and physical body. This term is commonly used to describe the nature
of a person. Diet and the state of mind are closely related. The Vaiśes. ika-Sūtra suggests that an improper diet instigates violence.39
The ancient Hindus, in their astute observation of prakr. ti (nature), noticed the distinct role of each life-form and realized the importance of “balance” in prakr. ti: “A man who does
no violence to anything obtains, effortlessly, what he thinks about, what he does, and what he
takes delight in. You can never get meat without violence to creatures with the breath of life, and
the killing of creatures with the breath of life does not get you to heaven; therefore, you should
not eat meat. Anyone who looks carefully at the source of meat, and at the tying up and slaughter
of embodied creatures, should turn back from eating any meat,” suggests the Manu-Smr. ti.40
Aśoka (reigned 272–232 BCE), grandson of Chandragupta (Candragupta, reigned 324–
300 BCE), tried to sanctify animal life as one of his cardinal doctrines. The Greek and Aramaic
edicts in Kandahar in Afghanistan show Aśoka’s resolve against animal killing. “The
King abstains from the slaughter of living beings, and other people including the king’s hunters
32Steinfeld, et al., 2006.
33Kaza, 2005.
34Subak, 1999 and Fiala, 2008.
35Fiala, 2008.
36Manu-Smr. ti, 5: 51–52.
37Mahābhārata, Anuśāsana Parva, 114: 11.
38Śānti Parva, 109: 10
39Vaiśes. ika-Sūtra, 6: 1: 7.
40Manu-Smr. ti, 5: 47–49.
and fishermen have given up hunting. And those who could not control themselves have now
ceased not to control themselves as far as they could. . .”41 Aśoka issued the following decree, as recorded in the Rampurva text: “Those she-goats, ewes (adult female sheep) and sows (adult female
pig), which are either pregnant or milch, are not to be slaughtered, nor their young ones which
are less than six months old. Cocks are not to be caponed. Husks containing living beings should
not be burnt. Forests must not be burnt either uselessly or in order to destroy living beings.”42
These were moral codes, not merely administrative rules for governing. Kinship with nature became a basis of their kinship with God.
Hindu nonviolence made such an impact on the Muslim King Jahāngīr (r. 1605–1627
CE) that, in 1618 CE, he took a vow of nonviolence. In 1622 CE, this vow was broken when he
had to pick up his gun to save his own life and his reign against his own son Khurram. This was
the period when rulers used cruelty against animals and humans to demonstrate their imperial
authority. Jahāngīr took this vow after killing some 17,167 life forms over 37 years. He ordered that animals not be slaughtered or eaten on Thursdays and Sundays, Sunday being the birthday of his father Akbar and Thursday the day of his own accession to the throne.43
Obviously, the ancient Hindus were not concerned with energy or water consumption
issues when they propagated vegetarianism. However, they were concerned with long life, good
health for themselves, morality, animal rights, and sustaining ecology. They studied nature and
came up with relationships between various life forms: plants, animals, and humans. These sci-
entific studies led them to a lifestyle centered on vegetarianism. Their concerns have found new meaning in today’s world. Their lifestyle, if adopted today, could help address the energy crisis the world is facing, improve the quality of water, greatly reduce the problem of hunger even with current population growth, and conserve ecology and nature.
7.3 LIFE IN PLANTS: SIMILARITIES WITH HUMANS
The Manu-Smr. ti suggests that plants are susceptible to pain and pleasure. “There are various
kinds of shrubs and bushy plants, and various kinds of weeds and grass, creepers and trailing
plants, some of which grow from seeds and others from graft. Variously enshrouded by the
quality of tamas (ignorance), the effects of their own acts, they retain their consciousness inward,
susceptible to pleasure and pain.”44
The Mahābhārata mentions a discourse between two monks, Bhr.gu and Bhārdvāja, on the
epistemology of plants. Bhr.gu asks: “Sir, all plant-life and animals are guided by the same five
primal elements (pañcabhūta). But I don’t see it in the plants. Plants do not possess body heat,
do not move their parts, remain at one place, and do not seem to have the five elements. Plants
41Sircar, 1957, p. 45.
42Sircar, 1957, p. 73–74. For more information, please also see, Smith, 1964; Barua, 1946.
43Findley, 1987.
44Manu-Smr. ti, 1: 48–49.
neither hear, nor see, nor smell, nor taste. They cannot feel the touch of others. Why then do you call them a composite of the five elements? They do not seem to possess any liquid material, any body heat, any earth, any wind, or any empty space. How then can plants be considered a
product of five elements?”45
To such questions, Bhārdvāja replied: “Though trees are fixed and solid, they possess space within them. The blooming of fruits and flowers regularly takes place in them. The body heat of plants is responsible for the dropping of flowers, fruit, bark, and leaves from them. They sicken and dry up. This proves the sense of touch in plants. It has been seen that the sound of fast wind, fire, and lightning affects plants. This indicates that plants must have a sense of hearing. Climbers twine around the tree from all sides and grow to its top. How can they proceed unless they have sight? Plants therefore must have vision. Diseased plants may be cured by specific fumigation. This proves that plants possess a sense of smell and breathe. Trees drink water through their roots. They also catch diseases from contaminated water, and these diseases are cured by quality water. This shows that they have a perception of taste. As we suck water through a tube, so the plants take up water through their roots under the action of air in the atmosphere. Trees, when cut, produce new shoots, and they are favored or troubled by certain factors; so they are sensitive and living. Trees take in water through their roots. Air and heat combine with water to form various materials. Digestion of the food allows them to grow, while some food is also stored.”46
A close analysis of the above lengthy discourse between Bhr. gu and Bhārdvāja shows that the Hindus believed in life in plants, metabolism in plants, a need for food in plants, diseases in plants, and possible cures of plant diseases using medicine. All these properties are also characteristic of animals and humans. The above quotation also tells us of the Hindus’ belief that plants are sensitive to sound, touch, and the quality of irrigation water, just like humans.
The R. gveda mentions the hearing qualities in plants: “All plants that hear this speech, and
those that have departed away, Come all assembled and confer your healing power upon this
herb.”47 Thus, though plants do not have organs like humans, they do have similar functions—they breathe, eat, sleep, listen, reproduce, and die, just as humans do.
The Br. hadāran. yaka-Upanis. ad compares a man to a tree: “A man is indeed like a mighty
tree; his hairs are like his leaves and his skin is its outer bark. The blood flows from the skin [of
a man], so does the sap from the skin [of a tree]. Thus, blood flows from a wounded man in the
same manner as sap from the tree that is struck. His flesh [corresponds to what is] within the
inner bark, his nerves are as though the inner fibers [of a tree]. His bones lie behind his flesh as
the wood lies behind the soft tissues. Marrow [of a human bone] resembles the pith [of a tree].”48
The Chāndogya-Upanis. ad recognized the presence of life in plants. “Of this great tree,
my dear, if someone should strike at the root, it would bleed, but still live. If someone should
45Mahābhārata, Śānti-Parva, 184: 6–18.
46 Mahābhārata, Śānti-Parva, 184: 10–18.
47R. gveda, 10: 97: 21
48Br. hadāran. yaka-Upanis. ad, 3: 9: 28.
strike its middle, it would bleed, but still live. If someone should strike at its top, it would bleed,
but still live. Being pervaded in ātman [soul or self ], it continues to stand, eagerly drinking in
moisture and rejoicing. If the life leaves the one branch of it, then it dries up. It leaves a second;
it dries up . . . Verily, indeed, when life has left it, this body dies.”49
The presence of life in plants and similarities between plants and humans are quite clearly
defined in the ancient Hindu literature. It was only natural for someone to test these ideas. A
major attempt came from Sir J. C. Bose (1858–1937 CE) of Calcutta during the early part of the twentieth century. Though Bose was trained in physics, he ventured into the discipline of
botany and made valuable contributions. As a result of his contributions, Bose was knighted by
the British monarchy in 1917 and was elected a Fellow of the Royal Society of London in 1920.
He was thus the third Indian and the first natural scientist from India to be so honored. The first Indian to be elected a Fellow of the Royal Society was the engineer Ardaseer Cursetjee, in 1841, and the second was the mathematician Srinivasa Ramanujan, elected in 1918. Bose demonstrated life-like responses in plants through experiments beginning in the late nineteenth century. His contributions are:50
1. Power of response (reflex action). In a mimosa plant, if a single leaf is touched, all the leaves on the branch slowly close. Bose devised an instrument to study such behavior in other plants. He concluded that all plants, like human beings, possess reflex actions.
2. Food habits. Bose demonstrated that plants need food, like humans; nourishment of a plant comes from its roots as well as from its body. Bose experimentally demonstrated that, like humans, plants nourish themselves by distributing nutrients through a continuous cycle of contraction and expansion of cells. If a poison is administered to a plant, it may succumb to it. On the removal of the poison, it may gradually revive and become normal. Plants also sway abnormally, like a drunk person, if treated with an alcoholic substance and become normal when the cause is removed.
3. Mind activity. A sunflower plant adjusts itself throughout the day so that it always faces the Sun. Many other plants close their leaves during the night and turn to face a brighter light source if the light is not uniformly distributed, indicating a mind-like activity in plants.
4. Nerves. Plants also have nerves and feel sensation through leaves, branches, or trunks. If certain areas of a plant are made numb with ice, less pain is felt in the plant.
Scientific American, a popular science magazine in America, published an article on J.C.
Bose and summarized his work in the following words: “By a remarkable series of experiments
conducted with instruments of unimaginable delicacy, the Indian scientist [J. C. Bose] has dis-
covered that plants have a nervous system . . . With the possibilities of Dr. Bose’s crescograph,
49Chāndogya-Upanis. ad, 6: 12: 1–3.
50For more information, read Subrata Dasgupta, 1998.
in less than quarter of an hour the action of fertilizers, foods, electric currents and various stim-
ulants can be fully determined.”51
51Scientific American, April 1915.
CHAPTER 8
Medicine
Diseases are as old as life itself. Manu (progenitor of humanity), or Adam and Eve, must have
also needed the science of medicine. The problems of colds, fever, headache, and exhaustion have been experienced by people at one time or another. The ancient Hindus developed a system that
is popularly known as Ayurveda, literally translated as the science of life. According to Suśruta,
a noted ancient Hindu surgeon, Brahmā (creator of the universe) was the first to inculcate the
principles of Ayurveda and taught it to Prajāpati. The Aśvin Kumaras, two brothers, learned
from Prajāpati and taught it to Indra. Indra taught it to Dhanvantari who later taught it to
Suśruta.1 Dhanvantari’s status in India is similar to that of Aesculapius in the Western world. Caraka,
another ancient Hindu physician, considered Ayurveda to be eternal:2 “The science of life has
always been in existence, and there have always been people who understood it in their own
way: it is only with reference to its first systematized comprehension or instruction that it may
be said to have a beginning.”
The R. gveda suggests the human life expectancy to be 100 years.3 The Yajurveda-Śukla tells
everyone to aspire to an active life until they reach 100.4 The Upanis. ads also suggest that we should expect to live an active life for a hundred years. “Even while doing deeds here, one may desire to live a hundred years,” suggests the Iśā-Upanis. ad.5 During the ancient period, in most cultures, life expectancy was about 40 years. Among the ancient Hindus, however, that age marked only the second stage of life. Marco Polo, when he visited India, mentioned the long life of the Hindus.
The materia medica of the Indus-Sarasvatī region is particularly rich due to a large number
of rain forests around the country. The Indian climate is unique, as all four seasons are experi-
enced. The abundance of materia medica has been recorded by several foreigners who visited India
during the ancient and medieval periods. Although India’s total land area is only 2.4% of the
total geographical area of the world, today it accounts for 8% of the total global biodiversity with
an estimated 49,000 species of plants, of which 4,900 are endemic, that is, found only in India or nearby regions.6 There are around 25,000 effective plant-based formulations available
1Suśruta-Sa ˙mhitā, Sūtrasthānam, 1: 16.
2Caraka-Sa ˙mhitā, Sūtrasthānam, 30, 26–28.
3“That Indra for a hundred years may lead him safe to the farther shore of all misfortune.” R. gveda, 10: 161: 3. “With the
most saving medicines which thou givest, Rūdra, may I attain a hundred winters.” The R. gveda, 2: 33: 2. As a comparison, the
life expectancy among the ancient Egyptians was only about the mid-thirties, while for the Romans it was the mid-forties.
4Yajurveda-Śukla, 40: 2.
5Iśā-Upanis. ad, 2.
6Ramakrishnappa, K., 2002. This report can be accessed at http://www.fao.org/DOCREP/005/AA021E/AA021e00.htm.
in Indian medicine. It is estimated that there are over 7,800 medicinal drug manufacturing units
in India, which consume about 2,000 tonnes of herbs annually.7 The Caraka-Sa ˙mhitā lists 341
plants and plant products for use in medicine. Suśruta described 395 medicinal plants, 57 drugs
of animal origin and 64 minerals and metals as therapeutic agents.8
The ancient Hindus understood that disease is a process that evolves over time. They tried
to prevent diseases by using proper hygiene, a healthy lifestyle, a balanced diet, yoga, internal
and external medicine, surgery, and prayer. Good health was considered essential to achieve moks. a.9 Detailed experiments were performed to develop new medical knowledge. For example, Suśruta selected a cadaver, cleaned it, and removed the wastes. Afterward, the body was
wrapped in grass and placed in a cage to protect it from animals. The cage was then lowered into
a river or a rushing stream of water to ensure constant cleaning. Once the decay process had run its course, the body was taken out of the water. Skin, fat, and muscles were brushed
away from the corpse with grass brushes, and the human anatomy was observed.10
In evaluating the diagnostic techniques of the ancient Hindus, Erwin Heinz Ackerknecht,
in his book A Short History of Medicine, writes that “[d]iagnostics were highly developed. The In-
dian healer used questioning, very thorough inspection (e.g., to diagnose pulmonary consump-
tion from the loss of weight), touch (including observation of the pulse), and examination with
other senses, such as tasting the urine for diabetes. Indians knew the sweet taste of diabetic urine
long before Europeans did.”11
8.1 DOCTORS, NURSES, PHARMACIES, AND HOSPITALS
During the Vedic period, doctors were divided into surgeons (śalya-vaidya) and physicians
(bhis. ak).12 These doctors were well respected in society. “He who stores herbs [medicine
man] is like a king amid a crowd of men, Physician is that sage’s name, fiend-slayer, chaser of
disease,” suggests the R. gveda.13
The inscriptions of King Aśoka (third century BCE) refer to the cultivation of medicinal
plants and the construction of hospitals in his kingdom. The rock edict of Girnar in India,
erected by Aśoka (r. 269–232 BCE), indicates that separate hospitals for humans and animals
were built. Aśoka suggests ahi ˙msā in the treatment of all animals: “Meritorious is abstention
from the slaughter of living beings.”14 Indeed, hospitals were built all over India from the earliest
periods.
7Mukherjee and Wahile, 2006.
8Mukherjee and Wahile, 2006. Krishnamurthy (1991) has given quite a different number for the medicinal plants. According to Krishnamurthy, Suśruta defined 793 plants for medical use; Krishnamurthy has compiled the list of these plants in the ninth chapter of his book.
9Caraka-Sa ˙mhitā, Sūtrasthānam, 1: 15–16.
10Rajgopal et al., 2002.
11Ackerknecht, 1982, p. 38.
12Mukhopādhyāya, 1994.
13R. gveda, 10: 97: 6.
14Sircar, 1957, p. 47.
Caraka defines the following qualities of a doctor: good knowledge of the medical texts; experience in curing various diseases; skill in the preparation of drugs and their uses; quick thinking in difficult situations; and good hygiene.15 The doctor should also have control
over his hands when in surgery or difficult situations. A person with shaking hands is not a good
surgeon.16 “Drugs are like nectar; administered by the ignorant, however, they become weapons,
thunderbolt or poison. One should therefore shun the ignorant physician,” suggests Suśruta.17
Emphasizing constant practice in diagnosis, Suśruta suggests that “the physician who studies
the science of medicine from the lips of his preceptor, and practices medicine after acquiring experience in his art by constant practice, is the true physician.”18
Caraka advised people to avoid incompetent doctors,19 as they can even aggravate their condition.20 Caraka has provided a detailed code of conduct for doctors which is similar to the
Hippocratic Oath in the Western world.21 This code of conduct forbids doctors to indulge in
money-making and lust: “He, who practices not for money nor for caprice but out of compas-
sion for living beings is the best among all doctors.”22 And, most importantly, a doctor must
continue to help his patients, despite adversity. This is the only way to save the life of a person.23
Physicians were instructed to be friendly, kind, eager to help underprivileged people, and to remain calm when treating difficult diseases.24 “The underlying objective of a treatment is kindness. Kindness to another human being is the highest dharma. A doctor becomes successful and finds happiness with this objective.”25 Doctors were advised to concentrate only on the treatment of the patient and not on materialistic rewards; the patient, in turn, was advised to reward the doctor with money that he could afford and to be respectful toward the doctor.26
According to an ancient legend, Jivaka Kumarabhacca was a revered physician who pro-
vided medical care to Lord Buddha and his disciples. Jivaka studied medicine under the leg-
endary and revered physician Ātreya, in the city of Taxila. After many years of training, one
day Jivaka decided to have his own independent practice and shared his views with his teacher.
Ātreya decided to test Jivaka before allowing him to have his own practice. He asked Jivaka to go around the āshram (monastery) and collect all plants with no medicinal value. Jivaka walked around and toiled to find a plant with no medicinal value. After considerable effort
and somewhat dejected, he went back to Ātreya with no plant in his hand and told him that he
15Caraka-Sa ˙mhitā, Sūtrasthānam, 9: 6.
16Caraka-Sa ˙mhitā, Sūtrasthānam, 29: 5–6.
17Suśruta-Sa ˙mhitā, Sūtrasthānam, 3: 51.
18Suśruta-Sa ˙mhitā, Sūtrasthānam, 4: 7
19Caraka-Sa ˙mhitā Sūtrasthānam, 9: 15–17.
20Caraka-Sa ˙mhitā, Sūtrasthānam, 16: 4.
21Caraka-Sa ˙mhitā, Vimānasthānam, 8: 13.
22Caraka-Sa ˙mhitā, Vimānasthānam, 8: 13.
23Caraka-Sa ˙mhitā, Sūtrasthānam, 1: 134.
24Caraka-Sa ˙mhitā, Sūtrasthānam, 9: 26.
25Caraka-Sa ˙mhitā, Cikitsāsthānam, 4: 63.
26Caraka-Sa ˙mhitā, Cikitsāsthānam, 4, 55.
could not find even a single plant with no medicinal value. Ātreya was pleased with the answer
and gave his blessing to Jivaka to start his own practice for the welfare of human beings.
Caraka and Suśruta emphasized that a good doctor must have all needed equipment with
him.27 Similarly, Suśruta suggested that a surgeon keep a multitude of surgical tools, leeches, fire for sanitizing tools, cotton pads, suture threads, honey, butter, oil, milk, ointments, hot and cold water, and so forth. The tools included a round-headed knife, scalpel, nail-cutter, finger-knife, a lancet as sharp as the leaf of the lotus, single-edged knife, needle, bistoury, scissors of various shapes, curved bistoury, ax-shaped hammer, trocar, hooks, narrow-bladed knife, toothpick, and more. A surgeon must know how to hold these instruments properly for surgery,28 and surgery should be avoided when one could effectively use leeches to do the same job.29
“One who knows the text but is poor in practice gets confounded on seeing a patient; he is like a coward on the battlefield. One who is bold and dexterous but lacking in textual knowledge fails to win the approval of peers and risks capital punishment from the king. A physician with only half the knowledge, whether textual or practical, is unfit for the job and is like a one-winged bird,” suggests Suśruta.30 In the view of Suśruta, a person with just theoretical knowledge of ayurveda is “like an ass that carries a load of sandalwood
[without ever being able to enjoy its pleasing scent].”31
Suśruta advised that doctors must involve all five senses in the physical examination of
the patient: inspection, palpation, auscultation (listening to the sound of the heart, lungs, etc.),
taste, and smell.32 For incisions, surgeons were advised to use the tool rapidly, only once, and to the desired depth.33 Obviously, this was done to avoid or minimize pain to patients. It was also advised that the surgeon must be courageous, quick in action and decision-making, and self-confident; he should not sweat in tense situations, his hands should not shake, and he should have sharp instruments.34
Caraka advised doctors to differentiate between the diseases that can be cured and those
that cannot be cured.35 He advised doctors not to treat patients with incurable diseases in advanced stages, but to alleviate the suffering of such patients.36 Since disease is a process with various stages, he suggested that even serious diseases can be cured if caught in their early stages. Proper medication at the early stage and a proper lifestyle are essential to control these serious ailments.37
27Caraka-Sa ˙mhitā, Sūtrasthānam, 16: 1; Suśruta-Sa ˙mhitā, Sūtrasthānam, 5: 6.
28Suśruta-Sa ˙mhitā, Sūtrasthānam, 8: 3–4.
29Suśruta-Sa ˙mhitā, Sūtrasthānam, 13: 3–4.
30Suśruta-Sa ˙mhitā, Sūtrasthānam, 3: 48–50
31Suśruta-Sa ˙mhitā, Sūtrasthānam, 4: 2
32Suśruta-Sa ˙mhitā, Sūtrasthānam, 10: 4.
33Suśruta-Sa ˙mhitā, Sūtrasthānam, 5: 7.
34Suśruta-Sa ˙mhitā, Sūtrasthānam, 5: 5–6.
35Caraka-Sa ˙mhitā, Sūtrasthānam, 18: 39.
36Caraka-Sa ˙mhitā, Sūtrasthānam, 18: 40–44. This is a sensitive issue that has moral and cultural implications. Most people incur major medical expenses in the last five years of their lives. Should we treat a patient with little or no chance of recovery? The Edwin Smith Papyrus dealt with the same issue more than 3,000 years ago in Egypt. This issue is still under
discussion in many countries around the world.
37Caraka-Sa ˙mhitā, Sūtrasthānam, 18: 38.
It is essential for a doctor to recognize the disease by knowing the symptoms of the ailment.
However, since the number of diseases is very large, a doctor should not feel ashamed if he can-
not identify the disease from the symptoms.38 This is an example of honesty that is expected of
all physicians.
Doctors were responsible for treatment and were punished for wrongdoings. The penalty
for such wrongdoings depended on the extent of damage. Humans received more attention
than animals in their medical treatment.39 Physicians were not above the law; their greed and
carelessness were checked with laws during the period of Chandragupta Maurya (Candragupta,
reigned 322–298 BCE) in India.40
According to Caraka, the following are the four constituents that are crucial for the quick
recovery of a patient: a qualified doctor, a quality drug, a qualified nurse, and a willing patient.41
This clearly signifies the importance of nursing in medical care. The qualities of a nurse are complete knowledge of nursing, skill, affection for the patient, and cleanliness.42 A good patient is defined as someone who has a good memory, follows the instructions of the doctor, is free of fear, and can explain the symptoms of the disease well. This combination of doctor, nurse,
drug, and patient is essential for a successful cure. Otherwise, the cure is incomplete.43
Patients were advised to ignore doctors who practiced just for money, were not qualified, or
were too proud of their status.44 Patients should go to a doctor who has studied śāstra (texts), is
astute, is pure, understands the job well, has a good reputation, and has good control of his mind
and body, suggested Caraka.45 A good doctor should be respected like a guru.46 “The patient,
who may mistrust his own parents, sons, and relatives, should repose an implicit faith in his own
physician, and put his own life into his hands without the least apprehension of danger; hence
a physician must protect his patient as his own begotten child,” suggests Suśruta.47
Pharmacies prepared medicines as powders, pastes, infusions, pills, confections, or liquids
using chemical techniques of oxidation, fermentation, and distillation. Pharmacists were trained
in grafting, the general care of plants, extracting juices from flowers, concocting various medical prepa-
rations, and making medicinal bhasms and extracts.
In providing the design of a hospital, Caraka suggests that it should have good ventila-
tion and the rooms should be big enough for patients to walk comfortably. The rooms should be shielded from intense sunlight, water, smoke, and dust. The patient should not
experience any unpleasant sound, touch, view, taste, or smell. Expert chefs, masseuses, servants,
38Caraka-Sa ˙mhitā, Sūtrasthānam, 18: 45–46.
39Manu-Smr. ti, 9: 284.
40Kaut.ilya’s Arthaśāstra, 144.
41Caraka-Sa ˙mhitā, Sūtrasthānam, 9: 3.
42Caraka-Sa ˙mhitā, Sūtrasthānam, 9; 8.
43Caraka-Sa ˙mhitā, Sūtrasthānam, 9: 9–13.
44Caraka-Sa ˙mhitā, Sūtrasthānam, 29: 12.
45Caraka-Sa ˙mhitā, Sūtrasthānam, 29: 13.
46Caraka-Sa ˙mhitā, Sūtrasthānam, 4: 51.
47Suśruta-Sa ˙mhitā, Sūtrasthānam, 25: 42–45.
nurses, pharmacists, and doctors should be hired to take care of the patients. Birds, deer, cows
and other animals, along with singers and musicians should be kept in the hospital compound.48
Hospitals should have enough quality food in the storage room, stored drugs, and a compound with medicinal plants. The doctors must be well versed in the texts as well as in the use of equipment; only then were they allowed to treat a patient.49
8.2 AYURVEDA
Ayurveda is the Hindu science of healing and rejuvenation, which is curative as well as preventive. It is a holistic medicine that focuses on mind and body; a person needs to practice the doctrines of ayurveda while healthy in order to avoid sickness. Dietary measures and lifestyle changes
are recommended to delay the aging process at the cellular level and to improve the functional
efficiency of body and mind. The ancient Hindus recognized the self-healing capacity of our
body and focused on creating a balance that allows the body to cure itself.
Ayurveda is a combination of two words, āyur and veda. Āyur means the span of life, while veda means unimpeachable knowledge or wisdom. Ayurveda thus signifies a life science for the well-being of body and mind. This stemmed from the Hindus’ belief that health and disease co-inhere in body and mind. Ayurveda has its origin in the Vedas for the “principal elements of its general doctrines.”50 In the Hindu tradition, Ayurveda is considered an Upa-Veda (subordinate Veda) that deals with medicine and medical treatment for good health, longevity, and elimination of disease. It is considered to have been in a well-evolved form by the time of the Atharvaveda. Suśruta
wrote: “The Ayurveda originally formed one of the subsections of the Atharvaveda and originally
consisted of 100,000 verses.”51
Ayurveda uses herbal medicine, dietetics, surgery, psychology, and spirituality to cure
diseases. Pañca-karma (detoxification of body), laxatives, herbal oil massages, and nasal thera-
pies are some of the treatments. Though ancient in origin, it is still the most popular system
of medicine in modern India. Several Indian universities provide doctoral degree programs in
Ayurveda. These universities train doctors who practice throughout India and provide low cost
medical care to poor people. These days, Banaras, Chennai, Haridwar, Lucknow, Patna, and
Trivandrum are some of the prominent cities that house universities to train ayurvedic doctors.
Caraka suggested that physical ailments should be cured by proper diet, changing daily
routines that included moderate physical exercise,52 and medicine, while mental diseases should be cured by self-control, meditation, and even by charms (bhuta-vidyā).53 Caraka defines
48Caraka-Sa ˙mhitā, Sūtrasthānam, 15: 7.
49Caraka-Sa ˙mhitā, Sūtrasthānam, 16: 1.
50Filliozat, 1964, p. 188.
51Suśruta-Sa ˙mhitā, Sūtrasthānam, 1: 3.
52Caraka-Sa ˙mhitā, Sūtrasthānam, 7: 30.
53Caraka-Sa ˙mhitā, Sūtrasthānam, 1–8.
that the purpose of Ayurveda is to preserve the well-being of a healthy person and to get rid of
the diseases of a sick person.54
In the ayurvedic tradition, the human body is considered a conglomeration of five el-
ements, known as mahābhūtas—earth (pr. thivī), air (vāyu), water (apah. ), fire (tejas), and space
(ākāśa). The five mahābhūtas produce seven dhātus: rasa (juice-plasma), rakta (blood), mām. sa
(flesh), medas (fat), asthi (bone), majjā (marrow), and śukra (reproductive fluids in males and females).55 The five mahābhūta, or the seven dhātu, constitute the prakr. ti (attributes) of a per-
son. These dhātu are a product of the five elements. When in equilibrium, these dhātus make
a person happy and healthy. The various combinations of these dhātu constitute the tri-gun. as
(three attributes) in a person: sattva, rājas, and tāmas. A person with dominant sattva values truth and honesty; one with dominant rājas values power, prestige, and authority. A tāmas-dominant person lives in ignorance, fear, and servility.
The basic theory of Ayurveda rests on the three humors (tridos. a; tri = three, dos. a = literally “defect,” popularly rendered as “elements”): kapha (phlegm), a heat-regulating system and mucous and glandular secretion; vata (wind), a nerve force; and pitta (bile), metabolism and heat production. These three dos. a control the normal functions of the human body, and a balance
is necessary. The term dos. a is generally translated as “corrupting agents,” “defect,” or “imperfec-
tion.” However, in Ayurveda, these dos. a are the keys to good health. The theory of dos. a is quite complex, as it “is affected by an almost infinite number of exogenous factors and combinations
of factors that make up human ecology: diet, rate, time, and context of food consumption, cli-
mate, direction of wind, age, behavioral patterns, rest, accidents, exercise, state of mind, and so
forth.”56 An imbalance of these three dos. as causes most diseases. It is the purpose of all treat-
ments to keep the balance of the dos. as by balancing the seven dhātus.57 The Suśruta-Sa ˙mhitā
gives various symptoms of the deficiency of these tridos. as and their effects on body.58
Balance is essential in the natural world and is a key concept in Hindu philosophy. It
is the balance of body and mind in āsana that is crucial in yoga. It is the balance of tridos. as
in ayurveda that is crucial in the well-being of a person, just like the yin and yang in Chinese
philosophy. Therefore, before ayurvedic doctors look for the cure of a disease, they inquire about
the patient, his/her constitution, eating habits, and nature to know the imbalance in dhātu. For
example, in describing the cause of diabetes and its cure, Suśruta and Caraka cite lack of exercise, laziness, sweet foods, alcoholic foods and beverages, elevation of kapha, and excess of newly harvested food grains as the causes. A continual balance of tridos. a results in contentment,
longevity, and enlightenment. We eat food to nourish the seven dhātus in our body. By changing
diet, people can change the equilibrium of dhātus and thus their attributes. Therefore, among
the Hindus, it is customary to describe the attributes of a person from a combination of two
54Caraka-Sa ˙mhitā, Sūtrasthānam, 30: 26.
55For the roles of these dhātu, read Suśruta-Sa ˙mhitā, Sūtrasthānam, Chapter 15.
56Alter, 1999; Lad, 1984; and Zysk, 1991.
57Caraka-Sa ˙mhitā, Sūtrasthānam, 16: 35–36.
58Suśruta-Sa ˙mhitā, Sūtrasthānam, Chapter 15.
words, ācar-vicār, describing the thinking and eating habits of a person. Suśruta suggested eating
in moderation for a healthy life.59 Caraka clearly specifies the quantity of food that a person
should eat: “The food must be digested in time and should not cause any inconvenience in activity
of the person.”60
Patients should seek effective drugs immediately after recognizing a sickness.61 Thus, quick diagnosis is the key to fighting a disease. The following are typical drugs used in ayurveda:
1. Animal products (honey, milk, blood, urine, fat, horn, etc.)
2. Minerals and metals (salt, gypsum, alum, gold, lead, copper, iron, phosphorus, sulfur, potassium, calcium, iodine, lime, mud, precious stones, etc.)
3. Herbs and plants (for example, aloe vera, barley, beans, cinnamon, clove, coriander, cumin,
black pepper, fenugreek, neem, etc.)
In Ayurveda, metals are oxidized, reduced, and transformed into a chemical compound
that is non-toxic, to be used as a drug.62 Several elements, such as gold, lead, copper, iron, phosphorus, sulfur, potassium, calcium, and iodine, are processed with physico-chemical techniques. Wa-
ter, sunlight, milk, honey, and fermented extracts of fruits are also used as medicine in Ayurveda.
For example, Drāks. aśava, an extract of grape and apple vinegar along with several minerals, prepared with a specific fermentation technique, is perhaps the best natural medicine for indigestion.
It cleans the blood, improves the functioning of kidneys and liver, improves the pH value of
urine, and provides minerals. It is not only a curative medicine but could also be used on a
regular basis as a preventive medicine for good health.
Minerals are also used for curative purposes; śilājīta, perhaps the most important one, is a gelatinous substance secreted from mountain stones during the scorching heat of June and July in Northern India, and it contains traces of tin, lead, copper, silver, gold, and iron. Śilājīta has heat-producing and body-purifying properties and is used in the treatment of diabetes, leprosy, internal tumors, and jaundice.63
The daily routine (dinacaryā) is crucial for good health. Caraka deals with the issue of lifestyle in great detail in the first chapter of the Caraka-Sa ˙mhitā.64 Suśruta also devoted a long chapter to prescribing a good lifestyle for a healthy person.65 Almost as a rule, it was considered good to get up early in the morning and practice yoga and meditation after toiletry functions and a bath. To optimize health and as preventive measures for the various body organs, dental cleaning with herbs, mouthwash, tongue scraping, a daily bath, massage, clean clothes,
59Suśruta-Sa ˙mhitā, Sūtrasthānam, 46: 145.
60Caraka-Sa ˙mhitā, Sūtrasthānam, 5: 4.
61Caraka-Sa ˙mhitā, Sūtrasthānam, 11: 63.
62For more information, read Zimmer, 1948.
63Suśruta-Sa ˙mhitā, Cikitsāstānam, Chapter 13.
64Caraka-Sa ˙mhitā, Sūtrasthānam, sections 5, 7, and 8.
65Suśruta-Sa ˙mhitā,Cikitsāstānam, 24: 3–132.
good shoes, daily exercise, and good thought processes were suggested. The role of the mind was known to the ancient Hindus; thus, they suggested a good thought process as essential to good health.
Mouth hygiene received much attention in ancient writings. Caraka suggested brushing the teeth twice a day66 and cleaning the tongue using a metallic scraper.67 He also suggested keeping many herbs, such as clove, cardamom, betel nut, and nutmeg, in the mouth for a pleasant breath.68 Suśruta provides the following instructions for the hygiene of the mouth: “A man should
leave his bed early in the morning and brush his teeth. The tooth-brush should be made of a
fresh twig of a tree or a plant grown on a commendable tract and it should be straight, not
worm-eaten, devoid of any knot or at most with one knot only (at the handle side), and should
be twelve fingers in length and like the small finger in girth.”69 “The teeth should be cleansed
daily with (a compound consisting of ) honey, powdered trikat. u, trivarga, tejovatī, saindhave and
oil. Each tooth should be separately cleansed with the preceding cleansing paste applied on (the
top of the twig) bitten into the form of a soft brush, and care should be taken not to hurt the
gum any way during the rubbing process. This tends to cleanse and remove the bad smell (from
the mouth) and uncleanliness (of the teeth) as well as to subdue the kapha (of the body). It
cleanses the mouth and also produces a good relish for food and cheerfulness of mind,” suggests
Suśruta.70 Afterward, the person should cleanse the tongue by scraping it with gold, silver, or
wood foil.71
8.2.1 PAÑCA-KARMA
Pañca-karma is an ayurvedic treatment to cleanse the body system to remove toxins. This treat-
ment has three phases: purva-karma (preparation), pradhā-karma (main treatment), and paśca-
karma or Uttara-karma (post-treatment).72 Before the actual process, a dietary and herbal treatment is prescribed to cleanse the intestinal system, reduce the excess or vitiated dos. as, and increase the gastric fire. In the oleation process, oily substances like sesame, flaxseed, or mustard oil or ghee are taken internally or applied externally in massage, and sudation (sweating) techniques are used. Ghee is butter clarified by removing its water content and milk solids. It becomes lactose-free and can be used by people with lactose intolerance. Ghee is antimicrobial and antifungal and, therefore, can be preserved without refrigeration for an extended period.73 The oleation process lubricates bodily tissues and should be used in moderation, especially by people with dominant kapha. The
exact prescription in purva-karma depends on the individual and the diet includes simple foods
66Caraka-Sa ˙mhitā, Sūtrasthānam, 5: 71.
67Caraka-Sa ˙mhitā, Sūtrasthānam, 5: 74.
68Caraka-Sa ˙mhitā, Sūtrasthānam, 5: 76.
69Suśruta-Sa ˙mhitā, Cikitsāstānam, 24: 3.
70Suśruta-Sa ˙mhitā, Cikitsāstānam, 24: 6–7.
71Suśruta-Sa ˙mhitā, cikitsāsthānam, 24: 11–12.
72Ninivaggi, 2008, p. 207
73Ninivaggi, 2008, p. 209.
such as khicar. ī, a combination of basmati rice, split mung lentils and some mild spices, such as
turmeric, cumin and coriander.
The five actions of Pañca-karma are as follows: vamana, virecana, vasti, nasya, and rakta-
moksha.74 These actions are described by Suśruta. First, the body is detoxified by inducing vomiting with an herbal mixture (vamana); then an herbal mixture is taken to induce numerous bowel movements (virecana); and oil or herbs are administered anally for some time and then expelled (vasti). Nasya is practiced by using neti-kriyā (lukewarm water passed through the sinuses), and oil is applied inside to clean the sinus system. In some special cases, the blood is purified or even drawn out using needles, incisions, or leeches.75 In the post-treatment, dietary and lifestyle lessons are provided to maintain a balanced dinacaryā.
Pañca-karma is advised for a variety of medical conditions: arthritis, rheumatism, respi-
ratory disorders, gastrointestinal disorders, menstrual problems, obesity, etc. A team at Harvard Medical School in Massachusetts conducted a clinical study of this practice to observe the role of psychosocial factors in the process of behavior change and the salutogenic process. The 20 female participants underwent a 5-day ayurvedic cleansing retreat program. Measurements were taken before the program, immediately after the program, and after three months to assess quality of life and psychosocial and behavioral changes. This study indicates that pañca-karma “may
be effective in assisting one’s expected and reported adherence to new and healthier behavior
patterns.”76
8.3 SURGERY
Suśruta, in his Suśruta-Sa ˙mhitā, describes some one hundred metal surgical instruments in various shapes and sizes. These include probes, loops, hooks, scalpels, bone-nippers, scissors, needles, lancets, saws, forceps, syringes, and rectal speculums. Many surgical
operations, including those for hernia, amputation, tumors, restoration of the nose, skin grafting, extraction of cataracts, and cesarean section, are described in the Caraka-Sa ˙mhitā and/or the Suśruta-Sa ˙mhitā. In both books, blunt as well as sharp tools are described, depending on the need. Some of these instruments are sharp enough to dissect a hair longitudinally.77 Suśruta performed lithotomy, cesarean section, excision of tumors, and the removal of hernia through
the scrotum.78
New doctors practiced incisions first on plants or dead animals. The veins of large leaves
were punctured and lanced to master the art of surgery. Cadavers were advocated as an indis-
pensable aid to the practice of surgery. Bandaging, amputations, and plastic surgery were first
practiced on flexible objects or dead animals just as is done by medical students today. Suśruta
74Ninivaggi, 2008, p. 212.
75Ninivaggi, 2008, p. 219
76Conboy, Edshteyn, and Garivsaltis, 2009.
77For a detailed account of the tools, see Mukhopādhāya, 1994.
78Singh, Thakral, and Deshpande, 1970. It is an excellent article with basic information.
suggests that “a surgeon seeking reliable knowledge must duly prepare a dead body and carefully ascertain with his own eyes what he has learned from books.”79
Suśruta classified surgery into eight parts: extraction of solid bodies, excising, incising,
probing, scarifying, suturing, puncturing, and evacuating fluid.80 He designed his surgical in-
struments based on the shapes of the beaks of various birds and the jaws of animals. In his attempt
to explain surgery, Suśruta writes: “The scope of surgery, a branch of medical science, is to re-
move an ulcer and extraneous substances such as fragments of hay, particles of stone, dust, iron, ore, bone, splinters, nails, hair, clotted blood or condensed pus, or to draw out of the uterus a
dead fetus, or to bring about safe parturitions in cases of false presentation, and to deal with
the principle and mode of using and handling surgical instruments in general, and with the
application of cautery and caustics, together with the diagnosis and treatment of ulcers.”81
All surgical procedures were defined in terms of three stages: pre-operative measures, the operation, and post-operative measures. These included the preparation of the surgical room, the patient, and the tools. Starvation or mild dieting was advised on the day of surgery. Intoxicating beverages were recommended to dull the pain of surgery. Post-operative measures included bandaging, redressing, healing, and cosmetic restoration.82 Anesthesia, antiseptic, and sterile techniques were used by the ancient Hindus.
Suśruta described fourteen different kinds of bandages (bandha).83 On the harmful effects
of non-bandaging, Suśruta suggests that flies and gnats are attracted to the wound. Foreign matter, such as dust and weeds, can settle on the wound. Sweat, heat, and cold can make the
wound malignant.84 Suśruta advises doctors to clean wounds first to get rid of dust, hairs, nails,
loose pieces of bones, and other foreign objects before suturing to avoid suppuration. If not done,
it can cause severe pain and other problems.85 Suśruta found innovative ways to perform various
surgical procedures: horse hairs were used as suture material while black carpenter ants were
used in closing incisions in soft tissues. Blood or pus was drawn from the body by using leeches.
The amputation of the leg and other body parts was done during the R. gvedic period. The
R. gveda tells us that the lower limb of Queen Viśpalā was severed in King Khela’s battle. The Aśvins attached an artificial limb to her at night, and she was able to fight the battle again.86
A 4,000-year-old Neolithic skeleton with a multiply trepanated skull was recently found in Kashmir. The trepanation on this skull was perhaps accomplished using drills of various diameters and could be the result of an elaborate medico-ritual ceremonial procedure.87 Similarly, nearly perfect tiny holes, drilled as dental treatment, were found in excavations at the Mehrgarh site,
79Suśruta-Sa ˙mhitā, Śarīrsthānam, 5: 6.
80Suśruta-Sa ˙mhitā, Sūtrasthānam, 5: 5
81Suśruta-Sa ˙mhitā, Sūtrasthānam, I: 4.
82Suśruta-Sa ˙mhitā, Sūtrasthānam, 5: 3.
83Suśruta-Sa ˙mhitā, Sūtrasthānam, 18: 14–18; also read Kutumbiah, 1962, p. 163.
84Suśruta-Sa ˙mhitā, Sūtrasthānam, 18: 23.
85Suśruta-Sa ˙mhitā, Sūtrasthānam, 25: 18.
86R. gveda, 1: 116: 15.
87Sankhyan and Weber, 2001.
in Baluchistan; they are about 7,500 to 9,000 years old. Eleven drilled molar crowns from
nine adults were studied from this site. The drilled holes are about 1.3–3.2 mm in diameter
and angled slightly to the occlusal plane, with a depth of about 0.5 to 3.5 mm.88 Cavity shapes
were conical, cylindrical or trapezoidal. At least in one case, “subsequent micro-tool carving
of the cavity wall” was performed after the removal of the tooth structure by the drill.89 In all
cases, marginal smoothing confirms that drilling was performed on a living person who later
continued to chew on the tooth surfaces.90 Using flint drill heads similar to the ones found at Mehrgarh, mounted in a bow-drill, the team of Coppa et al. could drill a hole in human enamel in less than a minute. When Andrea Cucina, a researcher from the University of Missouri-Columbia, examined the cavities using an electron microscope, he found that the sides of the cavities were too perfectly round to have been caused by bacteria. He also noticed concentric grooves left by the drill. According to him, some plants or other substances were put into the holes to prevent any further decay in the molars. These fillings decayed over the intervening 9,000 years.91
8.3.1 PLASTIC SURGERY
In the Rāmāyan. a, the attractive and voluptuous Sūrpan. akhā, sister of Rāvan. a, tried to devour Lord Rāma, a married man. Lord Laks.man. a, younger brother of Lord Rāma, cut off her nose and earlobes as punishment. King Rāvan. a took care of the problem by asking his surgeon to reconstruct his sister's nose and earlobes.92 In time, nasal amputation also worked its way into metaphor: the Hindi phrase nāk kat gai (the nose is chopped off) implies that a person has been insulted, and "saving one's nose" (nāk bacā lī) is a colloquial expression for getting through difficult circumstances without embarrassment.
Suśruta described techniques for grafting skin, now grouped under the general umbrella term "plastic surgery." He repaired the nose or earlobes using an adjacent skin flap. Today, this procedure is popularly called "the Indian method of rhinoplasty,"93 although no plastic is used in the process. Live flesh from the thigh, cheek, abdomen, or forehead was used to fashion the new artificial parts.
This procedure was not practiced in the West until the second half of the 15th century in Sicily, a region with considerable contact with Arabia.94 In England, the first article on rhinoplasty appeared in the Gentleman's Magazine in 1794, written by Colly Lyon Lucas, a British surgeon and member of the Medical Board at Madras, India.95 He described the process in
88Coppa et al., 2006
89Coppa et al., 2006
90Coppa et al., 2006
91Coppa et al, 2006.
92Brain, 1993.
93Sorta-Bilajac and Muzur, 2007.
94Pothula, 2001.
95Brain, 1993; Lucas, The Gentleman’s Magazine, 64, pt. 2, no. 4, 891–92, October, 1794. In this article, only the initials
of the author are provided.
his letter to the Editor, describing the process as "long known in India" but not known to the British. Lucas had witnessed a case in which a local Indian serving the British army in the war of 1792 CE was captured by Tipu Sultan. Unable to defeat the British outright, the sultan tried to starve his enemies by ambushing the Indian bullock drivers who transported grain to the British, and he decided to humiliate the bullock drivers by mutilating their noses and ears. Lucas describes one such victim of this practice, the Mahratta bullock driver Cowasjee, who, on his capture, had his nose and one of his hands amputated by the sultan. After a year, this man decided to have his nose repaired. The operation was performed by Indian doctors, who took skin from the forehead and placed it on the nose in proper form. The whole process took about 25 days. The artificial nose looked "as well as the natural one" and the scar on the forehead was not very observable after a "length of time." The procedure generated great interest among the
British surgeons. Lucas writes:96 “For about 12 months he remained without a nose, when he
had a new one put on by a man of the Brickmaker cast near Poonah [Poona]. This operation is
not uncommon in India, and has been practiced from time immemorial. . . . A thin plate of wax
is fitted to the stump of the nose, so as to make a nose of good appearance. It is then flattened,
and laid on the forehead. A line is drawn round the wax, and the operator then dissects off as
much skin as it covered, leaving undivided a small slip between the eyes. This slip preserves the
circulation till an union has taken place between the new and old parts. The cicatrix of the stump
of the nose is next pared off, and immediately behind this raw part an incision is made through
the skin, which passes around both alae, and goes along the upper lip. The skin is now brought
down from the forehead, and, being twisted half round, its edge is inserted into this incision,
so that a nose is formed with a double hold above, and with its alae and septum below fixed in
the incision. A little Terra Japonica is softened with water, and being spread on slips of cloth,
five or six of these are placed over each other, to secure the joining. No other dressing but this
cement is used for four days. It is then removed and cloths dipped in ghee (a kind of butter)
are applied.. . . For five or six days after the operation, the patient is made to lie on his back;
and on the tenth day, bits of soft cloth are put into the nostrils, to keep them sufficiently open.
The artificial nose is secure, and looks nearly as well as the natural one; nor is the scar on the
forehead very observable after a length of time.”
In England, the first rhinoplasty operation was performed by Joseph Constantine Carpue on October 23, 1814, in front of a large group of surgical colleagues and his students. Carpue performed his second operation on an army officer who had lost his nose during the Peninsular War against Napoleon, and he later wrote a monograph about the procedure.97
In his book A Short History of Medicine, Erwin Heinz Ackernecht states that "[t]here is little doubt that plastic surgery in Europe, which flourished first in medieval Italy, is a direct descendant of classic Indian surgery."98 A well-known European surgeon who restored
96taken from Ang, 2005.
97Brain, 1993; Carpue, 1816; a brief summary of the history of this procedure in India is provided by Ang, 2005; Rana
and Arora, 2002.
98Ackernecht, 1982. p. 41.
a lost nose was Branca de Branca of Sicily, who used Suśruta's adjacent-flap method. His son, Antonio Branca, used tissue from the upper arm as the reparative flap in his operations (around 1460), and "the Italian method," which uses a distant flap, was born. The method was most extensively described by Gaspare Tagliacozzi in his Chirurgia curtorum of 1597,99 almost two centuries before the British learned of the technique.
The following procedure was provided by Suśruta in the repair of a nose: “The portion of
the nose to be covered should be measured with a leaf. A piece of skin of the required size should
then be dissected from the cheek, and turned back to cover the nose. The part of the nose to
which this skin is to be attached or joined, should be made raw, and the physician should join
the two parts quickly but evenly and calmly, and keep the skin properly elevated by inserting
two tubes in the position of nostrils, so that the new nose may look normal. When the skin has
been properly adjusted a powder composed of licorice, red sandal-wood, and extract of barberry
should be sprinkled on the part. It should be covered with cotton, and white sesame oil should
be constantly applied. The patient should take some clarified butter. When the skin has united
and granulated, if the nose is too short or too long, the middle of the flap should be divided and
an endeavor made to enlarge or shorten it.”100
The Suśruta-Sa ˙mhitā describes a procedure to mend an earlobe with a patch of skin flap taken from the neck or the adjoining parts: "The modes of bringing about an adhesion of the two severed parts of an ear-lobe are innumerable; and a skilled and experienced surgeon should determine the shape and nature of each according to the exigencies of a particular case."101
“A surgeon well-versed in the knowledge of surgery should slice off a patch of living flesh
from the cheek of a person devoid of ear-lobes in a manner so as to have one of its ends attached
to its former seat. Thus, the part, where the artificial ear-lobe is to be made, should be slightly
scarified (with a knife), and the living flesh, full of blood and sliced off as previously directed,
should be attached to it (so as to resemble a natural ear-lobe in shape).”102
Two thousand years after Suśruta, this operation still follows essentially the same procedure. In these operations, "the Indians became masters in a branch of surgery that Europe ignored for another two thousand years," acknowledges Majno, a well-known historian of medicine.103 Ilza Veith and Leo M. Zimmerman, in their book Great Ideas in the History of Surgery, reach a similar conclusion: "It is an established fact that Indian plastic surgery provided basic pattern for Western efforts in this direction."104
99Sorta-Bilajac and Muzur, 2007.
100Suśruta-Sa ˙mhitā, Sūtrasthānam, 16: 46–51.
101Suśruta-Sa ˙mhitā, Sūtrasthānam, 16: 25.
102Suśruta-Sa ˙mhitā, Sūtrasthānam, 16: 4.
103Majno, 1975, p. 291.
104Veith and Zimmerman, 1993, p. 63.
8.3.2 CATARACT SURGERY
The earliest form of cataract surgery, now known as "couching," was first introduced by the ancient Hindus. Suśruta explains the procedure, in which a curved needle is used to push the opaque phlegmatic matter in the eye out of the line of vision.105 Immediately after the surgery, the eye is sprinkled with breast milk and clarified butter.
This procedure for the surgical removal of cataracts was also introduced into China from India. Two poets of the Tang dynasty, Bo Juyi (772–846 CE) and Liu Yuxi (772–842 CE), wrote about a brahmin removing cataracts with a golden probe.106 According to Guido Majno, a professor of pathology at the University of Massachusetts, the cataract operation described by Aulus Cornelius Celsus (25 BCE–50 CE) is perhaps derived from Suśruta.107 Celsus was a celebrated Roman medical writer and physician, and his book De Medicina was a standard medical text in Rome.
8.3.3 CARPENTER ANT SUTURING AND LEECH THERAPY
Suśruta used methods that are unconventional by modern standards but were highly successful in dealing with certain surgical problems. For intestinal surgery to remove obstructing matter, Suśruta advised the surgeon to "bring the two ends of intestines that needed to join together. The intestines should be firmly pressed together and large black ants should be applied to grip them quickly with their claws. Then the bodies of the ants having their heads firmly adhering to the spots, as directed, should be severed and the intestines should be gently reintroduced into the original position and sutured up."108 This may seem a crude method to some; however, it was an effective procedure and worked well.
Suśruta used leeches to suck pus and blood in the treatment of boils, tumors, and other similar diseases. He defined the types of leeches that should be used and provided details of the process. Leeches worked well when patients were not fit for operations and bleeding was a concern, since, with leeches, very little healthy blood was lost.109 Suśruta first describes twelve different kinds of leeches and suggests that only six types be used for human treatment.
Suśruta suggests that the “leeches should be caught with a piece of wet leather, or by some
similar article, and then put in to a large-sized new pitcher filled with the water and ooze or
slime of a pool. Pulverized zoophytes and powder of dried meat and aquatic bulbs should be
thrown into the pitcher for their food, and blades of grass and leaves of water-plants should be
put into it for them to lie upon. The water and the edibles should be changed every second or
third day, and the pitchers should be changed every week.”110 “The affected part [of the patient]
should be sprinkled over with drops of milk or blood, or slight incisions should be made into it
105Suśruta-Sa ˙mhitā, Uttaratantra, 17, verses 55–69.
106Deshpande, 2008.
107Majno, 1975, p. 378.
108Suśruta-Sa ˙mhitā, Cikitsāsthānam, 14: 20–21.
109Suśruta-Sa ˙mhitā, Sūtrasthānam, Chapter 13
110Suśruta-Sa ˙mhitā, Sūtra-sthāna, 13: 15.
in the event of their refusing to stick to the desired spot. . . while sucking, the leeches should
be covered with a piece of wet linen and should be constantly sprinkled over with cold water. A
sensation of itching and of a drawing pain at the seat of the application would give rise to the
presumption that fresh blood was being sucked, and the leeches should be forthwith removed.
Leeches refusing to fall off even after the production of the desired effect, or sticking to the
affected part out of their fondness for the smell of blood, should be sprinkled with the dust of
powdered rock salt."111
To use these leeches again, Suśruta suggests that the “leeches should be dusted over with
rice powder and their mouths should be lubricated with a composition of oil and common salt.
Then they should be caught by the tail-end with the thumb and the forefinger of the left hand
and their backs should be gently rubbed with the same fingers of right hand from tail upward to
the mouth with a view to make them vomit or eject the full quantity of blood they had sucked
from the seat of the disease. The process should be continued until they manifest the fullest
symptoms of disgorging.”112
Using leeches for surgical procedures is not just an ancient practice. In 2004, the Food and Drug Administration in America cleared the use of leeches (Hirudo medicinalis) as a "medical device" appropriate for certain procedures. Today, surgeons often use leeches in procedures that involve skin grafting or the reattachment of amputated appendages, such as fingers or toes. If the veins cannot carry blood away from the reattached tissue, leeches are used to drain the congested blood until the body's own venous circulation is re-established and the tissue survives.113 Also, the saliva of leeches contains an anti-clotting agent, called hirudin, which allows the blood to flow freely and prevents congestion of the skin flap. The saliva also contains hyaluronidase, histamine-like vasodilators, collagenase, inhibitors of kallikrein and of superoxide production, and poorly characterized anaesthetic and analgesic compounds; it is thus both analgesic and anaesthetic.114 The therapy is relatively inexpensive, with each session taking about 40 minutes and each leech costing about $7–10.115 For each session a new group of leeches is used, and they are discarded as infectious waste after the treatment. In Germany alone, about 350,000 leeches were sold in 2001.116 Their use is even more prevalent now.
In a study conducted at the Department for Internal and Integrative Medicine, University of Essen, Germany, sixteen people were recruited who had endured persistent knee pain for more than six months and had definite radiographic signs of knee osteoarthritis. Ten of the sixteen proceeded with leech therapy and avoided conventional treatment, while the remaining six received only the conventional treatment. In the patients treated with leeches, no adverse effect or local infection was observed. Some patients described the initial bite of the leeches as painful. It was
111Suśruta-Sa ˙mhitā, Sūtrasthānam, 13: 18.
112Suśruta-Sa ˙mhitā, Sūtrasthānam, 13: 19–20.
113Michalsen et al., 2001; Michalsen et al., 2006; Rados, 2004.
114Michalsen et al., 2001; Oevi et al., 1992.
115Rados, 2004.
116Michalsen et al., 2001.
found that the group with leech therapy did considerably better than the group with conventional therapy: their pain was reduced significantly after three days, and the beneficial effects lasted up to four weeks. Later, a second study with 52 people tested a similar therapy and yielded similar results.117
8.4 HINDU MEDICINE IN OTHER WORLD CULTURES
Hindu medicine was known in China, the Middle East, and Europe from the ancient period. Ktesias, a Greek physician who lived at the Persian court for some 17 years during the fifth century BCE, wrote that the Indians did not suffer from headaches, toothaches, or ophthalmia.118 They also did not have "mouth sores or ulcers in any part of their body."119 Claudius Aelianus (175–235 CE), who lived in Rome during the reign of Emperor Septimius Severus, preferred drugs from India to Egyptian drugs. He wrote: "So let us compare Indian and Egyptian drug and see which of the two was to be preferred. On the one hand the Egyptian drug repelled and suppressed sorrow for a day, whereas the Indian drug caused a man to forget his trouble for ever."120 This statement implies that Indian herbs and drugs were well received by the Romans.
Al-Bīrūnī (973–1050 CE) mentions the availability of an Arabic translation of the Caraka-Sa ˙mhitā in the Middle East.121 As Arab medicine became popular in Europe, the name of Caraka was repeatedly mentioned in medieval Latin, Arabic, and Persian literature on medicine.122 Al-Jāh. iz (ca. 776–868 CE), a Muslim natural philosopher from Arabia, acknowledged that the Hindus possess a good knowledge of medicine and "practice some remarkable forms of treatment."123 "They were the originators of the science of fikr, by which a poison can be counteracted after it has been used."124
S. ā‘id al-Andalusī (d. 1070 CE) of Spain also acknowledged the vast knowledge of the Hindus in medicine: "They [Hindus] have surpassed all the other peoples in their knowledge of medical science and the strength of various drugs, the characteristics of compounds and the peculiarities of substances."125 In England, Roger Bacon (1214–1292), a thirteenth-century natural philosopher, wrote a book for the Pope and observed that the Indians "are healthy without infirmity and live to a great age."126 Marco Polo, an Italian traveler, also noticed the long lives of the Hindus: "[T]hese Brahmins live more than any other people in the world, and this comes about through little eating and drinking, and great abstinence which they practise [practice]
117Moore and Harrar, 2003.
118McCrindle, 1973, p. 18.
119McCrindle, 1973, p. 18.
120Aelianus, 1958, 4: 41.
121Sachau, 1964, p. XXVIII.
122Royle, 1837, p. 153.
123Pellat, 1969, p. 197.
124Pellat, 1969, p. 197.
125Salem and Kumar, 1991, p. 12.
126Bacon, 1928, Opus Majus, p. 372.
more than any other people. And they have among them regulars and orders of monks, according to their faith, who serve the churches where their idols are, and these are called yogis, and they certainly live longer than any others in the world, perhaps from 150 years to 200."127 It is likely that Marco Polo was overstating their longevity in his reference to brahmins living up to 150 to 200 years. However, the point remains: the Hindus were known to have lived long lives.
“Hindoo [Hindu] works on medicine having been proved to have existed prior to Arabs, little doubt can be entertained," writes John Forbes Royle (1798–1858) in his book Antiquity of Hindoo Medicine, advocating the antiquity of Hindu medicine and its role in modern medicine.128 "We can hardly deny to them [Hindu] the early cultivation of medicine; and this so early, as, from internal evidence, to be second, apparently to none with whom we are acquainted. This is further confirmed by the Arabs and Persians early translating of their works; so also the Tamuls [Tamils] and Cingalese [Singhalese, people of Sri Lanka] in the south; the Tibetans and Chinese in the East; and likewise from our finding, even in the earliest of the Greek writers, Indian drugs mentioned by corrupted Sanscrit [Sanskrit] names. We trace them at still earlier periods in Egypt, and find them alluded to even in the oldest chapters of the Bible."129 Royle served as a surgeon for the East India Company and lived in India. His interest in traditional Hindu botanical remedies for various diseases led him to write his classic book.
Abu Yusuf Ya‘qub ibn Ishaq al-Kindī, popularly known as al-Kindī (ca. 800–870 CE), is considered the "philosopher of Arabia." He lived and worked in Baghdad. He wrote a medical formulary, called Aqrābādhīn, which was translated into English by Martin Levey.130 In studying the materia medica defined by al-Kindī, Levey concluded that about 13 percent of it comes from the Indian region. In his view, "many of the Persian materia medica may more properly be considered to be Indian."131 In that case, the combined Indian-Persian materia medica in al-Kindī's work amounts to about 31 percent.132
Alī B. Rabban Al-T. abarī (783–858 CE), a Persian physician from Tabaristan who later lived in Baghdad serving Caliph al-Mutawakkil (reigned 847–861 CE) as his physician, wrote an encyclopedic book on medicine, Firdaws al-H. ikmah (Paradise of Wisdom).133 It was translated into English by Muh. ammad Zubair Siddiqi in 1928.134 The book contains some thirty-six chapters on Hindu medicine and refers to the works of noted Indian physicians and thinkers such as Caraka, Suśruta, Cān. akya (Kaut.ilya), Mādhavakara, and Vagbhata II. Al-T. abarī mentions the three dos. a and seven dhātu of ayurveda for medical treatments, in accordance with the tradition of ayurveda in India.
127Needham, 1981, p. 81.
128Royle, 1837, p. 62.
129Royle, 1837, p. 152.
130al-Kindī, 1966.
131al-Kindī, 1966, p. 20.
132Levey and al-Khaledy, 1967, p. 33.
133see Meyerhof, 1931; Dictionary of Scientific Biography, 13: 230.
134Siddiqi, 1928.
Max Meyerhof has written excellent accounts of al-T. abarī's work and connected it with the Suśruta-Sa ˙mhitā, the Caraka-Sa ˙mhitā, the Nidāna, and the As. t. ā ˙ngahr. daya-Sa ˙mhitā. The Nidāna is a work on pathology written by Mādhavakara; it was translated into Arabic under the patronage of Hārun al-Rashīd. The As. t. ā ˙ngahr. daya-Sa ˙mhitā is the work of Vagbhata II.135 Meyerhof confirms the presence of Indian drugs in Arabic treatises: "We find indeed, in the earliest Arabic treatises on medicine, the mention of many Indian drugs and plants which were unknown to Greeks, and all of them bearing Sanskrit names which had passed through the New Persian language."136
Triphalā is a popular drug in India. The word triphalā means "three fruits," since the drug is made from three ingredients: haritaki or simply har (Terminalia chebula), bahera (Terminalia belerica), and āmalaki (Phyllanthus emblica). The Arabs used the term atrifal for triphalā, which is similar in pronunciation, while the Chinese translated the term literally and called it san-teng, meaning three herbs.137 Such literary evidence exemplifies the transmission of medical knowledge from India to Arabia and China.
In studying the transmission of ayurvedic knowledge to China, Chen Ming, a Chinese scholar, notes that "scholars of medical history have long been well aware of the Indian influence on Chinese medicine."138 As examples of such influence, he cites the division of medicine into eight branches in Chinese books, following Caraka and Suśruta. Dharmaks.ema, a scholar, followed the same division in his book Daban nie jing in 421 CE. Paramārtha (499–569 CE) translated a Sā ˙mkhya text, jinqishilum, that refers to eight divisions of medical remedies. Sun Simiao (581–682), a medical writer and physician of the Sui and Tang dynasties, wrote about ayurvedic practices in his work Beiji Qianjin yao fang (Important medicinal formulae [worth] a thousand [pieces of ] gold). He mentions Jīvaka's story, discussed in Section 8.1, that all plant life has medicinal value. Simiao is known for the text "On the Absolute Sincerity of Great Physicians," popularly known as the Chinese Hippocratic Oath, which forms the first chapter of the above-mentioned book; this oath is still required reading for Chinese physicians. Simiao also attributed several medicinal formulations to Jīvaka: Jīvaka's ball medicine for ten thousand illnesses, Jīvaka's medicines for illnesses caused by evil spirits, Jīvaka's soup, and Jīvaka's medicine for prolonging life without growing old. He introduced chan or dhyāna (meditation) and yoga into China, and also introduced a particular massage technique that he called the Brāhmin's method.139
Tombs in Turfan (also Turpan, 42°59′N, 89°11′E, near the Silk Road) have yielded medical fragments in Sanskrit and in Tocharian, the local language. Fragments of the ayurvedic texts Bhela-Sa ˙mhitā, Siddhasāra, and Yoga-śataka have been found in these tombs. The epitaph of Lüsbuai (Battalion Commander) Zhang Xianghuan (681 CE) mentions two ayurvedic physicians, Jī-
135Meyerhof, 1931 and 1937.
136Meyerhof, 1937.
137Mahdihassan, 1978.
138Ming, 2006.
139Deshpande, 2008.
vaka and Nāgārjuna. In his studies, Chen Ming concludes that “Indian medicine also influenced
medical practices in Medieval Turfan.”140
In the seventh century, Su Jing revised the Tang pharmacopoeia, the Xinxiu bencao (Newly revised materia medica), and recorded a remedy for beriberi, which he labeled the "Brahmin's prescription." In 752 CE, Wang Tao of the Tang dynasty wrote a large compendium, the Waitai miyao fang (Medical secrets of an official), which again records a treatment of beriberi using the "Brahmin's prescription." Both of these records suggest that a successful cure for beriberi came to China from India.141
140Ming, 2006.
141Deshpande, 2008.
CHAPTER 9
The Global Impact
The preceding chapters covered the multi-faceted concepts of Hindu science: the numeral system with its place-value notation; mathematical methods covering arithmetic, algebra, and trigonometry; the shape of the Earth and planetary motions; the constitution and properties of matter; standards for mass, length, and time; and the physical and chemical processes involved in the making of drugs, poisons, and new compounds, the so-called plastic surgery, the rust-free Iron Pillar, yoga, and more. These chapters cover most branches of modern science and document the substantial contributions that the ancient Hindus made to science. As mentioned in Chapter 1, they also provide the rationale for why the American Association for the Advancement of Science (AAAS) decided to credit the Hindus for their knowledge of mathematics and astronomy, and they describe the global impact of specific discoveries and inventions of the ancient Hindus. The current chapter focuses on some additional information.
If we had to mention just one thought that is central to Hinduism and that could play as important a role in the future of humanity as it did in the past, it would be a hymn of the R. gveda: "To what is One, sages call by different names."1 The underlying meaning is that God is one and people call Him by different names. Though this doctrine is not a part of science, it is perhaps the most timely message in view of the religious strife around the world. Al-Bīrūnī, an Islamic philosopher, understood this well when he wrote: "The Hindus believe with regard to God that he is one, eternal, without beginning and end, acting by free will, almighty, all-wise, living, giving life, ruling, preserving, . . ."2 It is due to the impact of this doctrine that the Hindus did not subjugate other religions but tried to spread their own message by winning the hearts of people. Christians, Jews, Zoroastrians, and Muslims found refuge in India at different times in history and could flourish there. Of the roughly 2.6 million Zoroastrians in the world today, most live in India, a country of refuge. In contrast, elsewhere in world history, there have been more wars in the name of religion than over anything else.
For the ancient (and modern) Hindus, exploring the truth, including science, was consid-
ered helpful in achieving liberation (moks. a). Therefore, scientific investigations were encouraged
through religious codes, as explained in Chapter 2. Long before the Christian era, the ancient
Hindus had established an educational system which was comparable to the present university
system. The intellectual centers of Nālandā (near Patna, India), Taxila (Taks.aśilā, near Islam-
1R. gveda, 6: 22: 46.
2Sachau, 1964, vol. 1, p. 27.
abad, Pakistan), Kānchipuram (Tamil Nadu, India), Vikramśilā (Bihar, India), Varanasi (Uttar Pradesh, India), and Valabhi (Gujarat, India) were perhaps the best known during the ancient period. Taxila was visited by Alexander the Great and later supported by Aśoka. It had a large number of stūpas (dome-shaped shrines) and monasteries. The center of Taxila was destroyed during the fifth century by the Huns. Nālandā (near Patna, Bihar) is the place where Nāgārjuna and Āryabhat.a I taught. Nālandā was visited by three famous Chinese visitors: Faxian (or Fa-Hien), Xuan Zang (or Hiuen Tsang), and Yijing (or I-tsing). At one time, it had about 10,000 students, 1,500 teachers, and 300 classrooms. It was completely destroyed in 1193 by the army of Bakhtiyar Khilji. The library was set on fire and, owing to the extensive collection of books, it took over three months for the fire to be extinguished. Other intellectual centers also suffered: Varanasi was destroyed in 1194 CE by Qutb-ud-din Aibak, and Mathura in 1018 CE by Mahmud of Ghazni.3
Substantially greater effort is required to unearth and fully comprehend the foundation stones of the vast intellectual treasure that we owe to the ancient Hindus. For some reason, scholarship on Hindu science during the last century has been scant and has not attracted the interest of scholars as it did during the nineteenth century. There are only a small number of courses on Hinduism in the West, while courses that focus on anthropological studies of India and on sensational issues, such as sati, the caste system, eroticism in literature, and human trafficking, have been steadily growing. At the time of this writing, there is not a single course in America that deals primarily with Hindu science. In contrast, the ancient Greeks, whose contributions to science were comparable to those of the Hindus, are covered in various courses at most universities. The main reason is that most academic institutions are shaped by the interests of donors. It may sound ridiculous to some; however, it is the brutal reality of academia. Princeton University offered 13 courses just on Greece during the Spring 2008 semester, in part due to Stanley J. Seeger's endowment to the university.4 Similar endowments for Hinduism are lacking, and there is little or no support from the governments of various countries. This is not the case with other religions.
Most of the knowledge of the ancient Hindus was transferred to the West via the Islamic domination of Europe. This transfer of knowledge is evidenced by the Sanskrit words that are now part of the English language in original or metamorphosed form. A list of these words makes a strong case for Hindu science, as the words are related to the scholarly tradition of the ancient Hindus. Numbers (two, three, six, seven, eight, and nine), pundit, guru, read, sine (as a trigonometric function), geometry, zero, sulfur, gold, silver, lead, uterus, and yoga are some of the
3I have decided to avoid any description of the destruction that the ancient Hindus and their institutions had to deal
with. The destruction of Baghdad by Hulagu Khan, the destruction of Spain during the Inquisition, and the destruction of
Alexandria by a religious zealot group are some examples of how a country or a culture can be subdued. Most examples of
destruction cited here outside the Indus-Valley region occurred for a short period. In contrast, the Hindus faced subjugation
and destruction of their intellectual centers for nearly a millennium. It is remarkable how India, with its majority Hindu population, has bounced back in science and technology even after many centuries of foreign rule during the Mughal and colonial periods, which tried to destroy the educational system of the Hindus and extinguish their spirit of searching for truth.
4Arenson, 2008.
examples of Sanskrit words in English.5 All cultures have words for a scholar in their own languages; however, it is the Sanskrit words "guru" and "pundit" that have prevailed in English.
9.1 IMPACTS DURING THE ANCIENT AND MEDIEVAL PERIODS
During the ancient and medieval periods, the Hindu corpus attracted great minds from all over the world: Pythagoras, Apollonius of Tyana, and Plotinus from the Greek or
Roman tradition; al-Bīrūnī, al-Fazārī, Ibn Sīnā, al-Jāh. iz, al-Khwārizmī, al-Mas‘udī, Kūshyār
Ibn Labbān, and al-Uqlidīsī from the Islamic tradition; Faxian (Fa-Hien), Xuan Zang (or Hi-
uen Tsang), and Yijing (or I-tsing) from the Chinese tradition; and Pope Sylvester, S. ā‘id al-
Andalusī, Roger Bacon, Adelard of Bath, and Leonardo Fibonacci from the European tradition.
The ancient Hindus invented the game of chess, called catura ˙nga in Sanskrit. The word signifies "four members of an army"—elephants, horses, chariots, and foot soldiers. It was called al-śatranj in Arabic. In the game, a key move is made to trap the king, called eschec in French and "check" in English; this led to the word chess for the game. The game epitomizes the science of warfare and aided ancient and medieval strategists. The roles of the horses, elephants, foot soldiers, chariots, and king are well defined in a set configuration, and the main task is to protect one's own king while capturing the enemy's king. S. ā‘id al-Andalusī (d. 1070 CE) of Spain considered the invention of chess a testimony to the "clear thinking and the marvels of their [Hindu] inventions. . . . While the game is being played and its pieces are being maneuvered, the beauty of structure and the greatness of harmony appear. It demonstrates the manifestation of high intentions and noble deeds, as it provides various forms of warnings from enemies and points out ruses as well as ways to avoid dangers. And in this there is considerable gain and useful profit."6 Today, chess is played all over the world, and international tournaments are conducted to select a world champion.
9.1.1 IMPACT ON ARABIA
The work of al-Khwārizmī in Baghdad was made possible by the discoveries and inventions of the ancient Hindus. Al-Khwārizmī's work led to the inclusion of several new words in the English dictionary: algebra, zero, algorithm, and the sine function, to name a few. His books were read by established scholars in Europe, such as Copernicus, Adelard of Bath, and Leonardo Fibonacci, either in Arabic or in translation. Al-Khwārizmī's main contribution was to compile Hindu
5amrita, Aryan, ashram, atman, acharya, ahimsa, bandana, basmati, candy, cheetah, cot, jackal, jungle, karma, kin, kundalini, lemon, mahout, man, Mandarin, mantra, maya, mix, namaste, nirvana, orange, pajamas, puja, samadhi, sapphire, shampoo, sugar, swastika, tantra, yoga, yogi, and zen are representative non-science words that are Sanskrit in origin. The followers of Lord Kr.s.n. a celebrate festivals by carrying images of Him in a huge wagon all over India. Another name of Lord Kr.s.n. a is Jagan-nāth, the lord of the world. To Europeans, this appeared to be a waste of time and money on senseless devotion; thus, the word juggernaut evolved to denote large, overpowering, and destructive forces or objects. Similarly, thug and dacoit are also Sanskrit in origin. In some cases the path of adoption is a bit convoluted.
6Salem and Kumar, 1991, p. 14.
knowledge of mathematics and astronomy in his books, as their titles indicate. It was Hindu mathematical tools that allowed Copernicus to work out his heliocentric model of the solar system.
Copernicus knew some of the mathematical tools of the ancient Hindus, which he may have learned during his stay in Italy, where people like Pope Sylvester and Leonardo Fibonacci had already documented the mathematics of the Hindus. As mentioned earlier, Copernicus opted to use Hindu arithmetic in writing his book De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres). Copernicus is not the only celebrated scientist who benefited from the contributions of the ancient Hindus; the table of the parallax of the Moon suggested by Johannes Kepler (1571–1630) was identical to the one given by Brahmagupta in the Khan. d. a-Khādyaka almost a millennium earlier, as inferred by Otto E. Neugebauer in his analysis.7
Kalīla wa Dimna is one of the celebrated books of the Islamic world. It is based on an Indian book, the Pañca-tantra, meaning "five principles." The original Sanskrit book was written before the Christian era by Vis.n. u Sharma to educate the sons of an Indian king in human conduct and the art of governing (nīti) through animal fables. Burzuwaih (or Borzuy), a personal physician of Anoushiravan (fl. 550 CE) of the Sassanid dynasty, made a trip to India to collect medicinal herbs for the king. He came back with the book and translated it into the Persian language. After the Islamic conquest of Persia, the book was translated by Ibn al-Muqaffa (8th century CE) into Arabic as Kalīla wa Dimna, where Kalīla and Dimna are the two jackal characters who appear in several of the book's stories. Each story ends with a question, and the next story provides the answer; the stories convey ethical, social, and political wisdom. Kalīla wa Dimna became popular in the Middle East and was known in Europe by the eleventh century. S. ā‘id al-Andalusī praised its contents in his book T. abaqāt al-'Umam and correctly identified Hind as the country of origin. In his view, the book is useful for the "improvement of morals and the amelioration of upbringing" and "is a book of noble purpose and great practical worth."8
9.1.2 IMPACT ON CHINA
Mandarin is one of the two most prominent languages in China and sets the norms of aristocratic communication—a language used by the upper class. It was the spoken language of the bureaucrats, who were courteous and polite in their behavior. The word itself is derived from the Sanskrit word mantrin, meaning a minister or councilor. Even today in India, all cabinet ministers are called mantrī and the Prime Minister is called Pradhān-mantrī, meaning the prime councilor. The Sanskrit language influenced the aristocrats of China, who adopted it, and as a result some Sanskrit words entered the Chinese language. For example, ks. an. a (a moment), śarīra (body), and nirvana in Sanskrit became chānā, shélizi, and niepān, respectively, in Chinese. Of course, the most significant example of this influence is the fact that Buddhism
7Neugebauer, 1962, p. 124.
8Salem and Kumar, 1994, p. 14.
became the most prominent religion in China without any war or conflict. This is in contrast to several other nations where people had to choose between their lives and their religion. Such was the influence of India on China.
9.1.3 IMPACT ON GREEK SCIENCE AND PHILOSOPHY
Cyrus the Great (558–530 BCE) of the Achaemenid dynasty ruled all the way from Greece to the Indus River. Being part of a common empire, the Greeks learned about India. Later, Alexander the Great struck back, conquered Persia, and in 327 BCE invaded India. If the army of Alexander could endure hostile conditions to reach India, it was also possible for a lone scholar to make the journey. Historical documents provide information about the influence of Indian philosophy on two prominent Greek or Roman philosophers.
Pythagoras
Pythagoras advocated vegetarianism; metempsychosis (transmigration of the soul from one physical body to another, just as we cast off old clothes and put on new ones); fasting as a means of purification; chastity as a virtue; and the sanctity of animal life, which must be honored. He promoted orality in the preservation of knowledge and monism as an ideology, and he believed in the existence of a soul in plants, animals, and humans. These doctrines were essential to his school and were practiced by his followers in the West for almost a millennium. They were new to Greece in the period of Pythagoras. There is only one place in the world where all of the above-mentioned doctrines existed prior to Pythagoras: the land of the Hindus, the Indus-Sarasvatī region. As is the case in the Hindu tradition, Pythagoras did not write any book, since he contended that knowledge should be veiled from undesirable people; only oral communication was used to transfer knowledge.
Apollonius of Tyana (15–100 CE), a noted Pythagorean philosopher, learned the doctrine of nonviolence, or the sanctity of human and animal life, in the tradition of Pythagoras, who, in Apollonius' view, had himself been taught by Hindu philosophers. Philostratus quotes the words of Apollonius: "I did not sacrifice, I do not: I do not touch blood, even on the altar, since that was the doctrines of Pythagoras. . . it was the doctrines of the Naked Philosophers of Egypt and the Wise Men of India, from whom Pythagoras and his sect derived the seeds of their philosophy."9
Apollonius correctly considered the Indian civilization to be much more ancient than the period of Pythagoras. "Pythagoras was anticipated by the Indians, lasts not for brief time, but for an endless and incalculable period," opines Apollonius.10 Apollonius was a person of repute who visited India and was compared with Jesus of Nazareth by some Christians during the early centuries of Christianity.
Titus Flavius Clemens (c.150–215 CE), popularly known as Clement of Alexandria, was
a Christian theologian who was familiar with classical Greek philosophy and literature. He
9Book 8, Chapter 7, 12; Philostratus, 1960, vol. 2, p. 339.
10Philostratus, 1960, vol. 2, p. 49.
believed that Greek philosophy had non-Greek origins. Similar to Apollonius, Clement, in his book Stromata (The Miscellanies), writes that Pythagoras went to Persia and came into contact with the Brahmins. In his view, Pythagoras learned philosophy from his interactions with these brahmins: "Pythagoras was a hearer of . . . the Brahmins."11
Eusebius of Caesarea (263–339 CE), a Roman Christian historian, in his book Praeparatio
Evangelica (Preparation for the Gospel) mentions Pythagoras’ visit to Babylon, Egypt and Persia.
Similar to Apollonius of Tyana and Clement of Alexandria, Eusebius also shares the opinion
that Pythagoras “studied under the Brahmans [or Brahmins],” and learned geometry, arithmetic,
and music from these foreign lands. Eusebius is also quite clear in his statement that Pythagoras
learned “nothing” from the Greek philosophers and “became the author of instruction to the
Greeks in the learning which he had procured from abroad.”12
The visit of Pythagoras to India was conclusive in the mind of Voltaire (1694–1778; pen name of Francois Marie Arouet), who writes that "Pythagoras, the gymnosophist, may alone
serve an incontestable proof that true science was cultivated in India. . . . It is even more prob-
able that Pythagoras learned the properties of the right-angled triangle from the Indians, the
invention of which was afterward ascribed to him.”13 According to Voltaire, “The Orientals, and
particularly the Indians, treated all subjects under the veil of fable and allegory: for that reason
Pythagoras, who studied among them, expresses himself always in parables.”14 Voltaire had no
doubt about Pythagoras’ visit to India and wrote: “All the world knows that Pythagoras, while
he resided in India, attended the school of Gymnosophists and learned the language of beasts
and plants.”15
D. E. Smith, a noted historian who is known for his classic book, History of Mathematics,
points out a resemblance between the Hindu and Pythagorean philosophies: “In spite of the as-
sertions of various writers to the contrary, the evidence derived for the philosophy of Pythagoras
points to his contact with the Orient. The mystery of the East appears in all his teaching . . .
indeed his [Pythagoras’] whole philosophy savors much more of the Indian than of the Greek
civilization in which he was born.”16
On a possible Indian influence on Pythagoras, as compared with an Egyptian influence, H. W. Rawlinson (1810–1895 CE) concludes: "It is more likely that Pythagoras was influenced by India than by Egypt. Almost all the theories, religious, philosophical, and mathematical, taught by the Pythagoreans were known in India in the sixth century B.C. [BCE]."17 Rawlinson is the person who first deciphered the cuneiform script of Babylon after copying Darius' Behistun inscriptions. He also served the British Empire and lived in India.
11Stromata, Book 1, Chapter 15.
12Praeparatio Evangelica, Book 10, Chapter 4.
13Voltaire, 1901, vol. 29, p. 174.
14Voltaire, 1901, vol. 24, p. 39.
15Voltaire, vol. 4, p. 47.
16Smith, 1925, vol. I, p. 72.
17Rawlinson, p. 5 in the book by Garratt, 1938.
Plotinus
Plotinus (205–270 CE), the founder of the Neo-Platonic school, attempted to visit India to learn from the Brahmins while he was in Alexandria. In hopes of reaching India, he joined the army of Emperor Gordian III on its expedition of 242 CE to Persia against the Sassanian king Sapor. It was obviously a major sacrifice for a philosopher to take up weapons and fight, just for the chance to learn from India. However, he was not the first: before Plotinus, Pyrrhon had pursued a similar dream of visiting India by joining Alexander's army. Pyrrhon succeeded, while Plotinus failed, since Gordian III was assassinated in Mesopotamia and the Indian expedition never materialized.18 Plotinus moved to Antioch and later to Rome, where he spent the rest of his life. Like Pythagoras, Socrates, and the Hindus, Plotinus did not write any book; similar to the oral tradition of the Hindus, he only gave lectures while living in Rome. His lectures were later compiled by his followers under the title Enneads.
Ammonius Sakkas, Plotinus' teacher, was perhaps an Indian, since his name is not a typical Greek name and is similar to the Sanskrit Śākya, meaning a monk.19 Porphyry, a disciple of Plotinus, tells us that Plotinus "acquired such a mastery of philosophy, that he became eager to gain knowledge of the teaching prevailing among the Persians, as also among the Indians."20 It seems clear that his association with Ammonius kindled in Plotinus the desire to join Emperor Gordian's military expedition to Persia and India. The influence of Hindu philosophy on Neo-Platonism has been suggested by scholars over the years. Rawlinson writes: "It certainly appears probable that Neo-Platonism was affected by Oriental [Hindu] philosophy . . . Hence we may suppose that the doctrines it inculcates—abstinence from flesh, subjection of the body by asceticism, and so on—are derived from Oriental [Hindu] sources."21 Rawlinson concludes: "the debt of Neo-Platonism to Oriental sources is indisputable . . ."
Plotinus “produced a new philosophical synthesis: Greek rationalism in the service of
Oriental mysticism,” as suggested by Jean W. Sedlar.22 In Plotinus’ philosophy, the human soul
is a part of the world-soul, implying that we all have divinity within us. When Plotinus was close
to death, as we know from his student Porphyry, he said, “I am striving to give back the divine
which is in me to the divine in the universe.”23 This implies that his individual soul will merge
with the cosmic divine soul. This is the core essence of the Hindu philosophy of Aham brahmāsmi
(I am the God), which is central to the Vedas and Upanis. ads24 and is currently propagated by the
New Age movement in America.
Thomas McEvilley, in his book The Shape of Ancient Thought, explored the Greek works and compared them with Indian thought. On the question of exchanges between the two cultures, McEvilley suggests that "[i]t can be argued that Plato was already Indianized through
18Sedlar, 1980, p. 292; McEvilley, 2002, p. 550.
19Sedlar, 1980, p. 292; Halbfass, 1988, p. 17.
20Gregorios, 2002, p. 17; McEvilley, 2002, p. 550.
21Rawlinson, 1926, p. 175.
22Sedlar, 1980, p. 292.
23MacKenna, 1956, sec. 1, p. 1.
24Br.hadāran. yaka Upanis.ad, 1: 4: 10.
Orphic and Pythagorean influences, and on that basis alone some, at least, of his works cannot be
regarded as ‘purely Greek thought.’ Plotinus, then may have received the Indian influence from
Gymnosophists in Alexandria, or from the works of Plato, or both; it comes to the same thing:
he was philosophizing in an Indianized tradition. It is not just a question of whether Plotinus’
philosophy was derived from India by him. Its major outlines, in the view presented in this book,
had been derived from India almost a thousand years earlier and handed down through what
might be called the Indianized, or Indian-influenced, strand of Greek philosophy, to which
Plotinus emphatically belonged. He could, then, and perhaps would, have come up with his
model of things without any additional Indian input in his lifetime, though it seems clear, in
any case, that he had some.”25
9.2 IMPACTS DURING THE MODERN PERIOD
The literature of the ancient Hindus has continued to attract modern scientists and philosophers: Ralph Waldo Emerson, Johann Wolfgang von Goethe, Johann Gottfried Herder, Aldous Huxley, Carl Jung, Max Müller, Robert Oppenheimer, Erwin Schrödinger, Arthur Schopenhauer, Nikola Tesla, Henry David Thoreau, and Hideki Yukawa, to name a few. These scholars found the origin or validation of their ideas in Hindu literature. "India has had a significant impact upon the manner in which Europe has articulated, defined, and questioned itself and its fundamental and symptomatic concepts of theory, science and philosophy," writes Wilhelm Halbfass.26 This view is supported by the fact that many European philosophers, including Goethe, Jung, Herder, Humboldt, and Schopenhauer, studied Hindu books and introduced Hindu concepts into their own works.27
Voltaire (actual name Francois-Marie Arouet; 1694–1778 CE), a French historian, philosopher, and dramatist, wrote about India, Egypt, and Arabia during the eighteenth century and was well versed in the history of these regions. In the view of Voltaire, "As India supplies the wants of all the world but is herself dependent for nothing, she must for that very reason have been the most early civilized of any country. . ."28 In Voltaire's view, Indian science was more ancient than Egyptian science, and the recent excavations at Mehrgarh are proving him right. Voltaire writes, "If the question was to decide between India and Egypt, I should conclude that the sciences were much more ancient in the former [India]."29 Voltaire is not alone in assigning priority to the Indians over Egypt. Benjamin Farrington, a noted historian of science, expressed similar views in his analysis of the Pythagorean Theorem: "The degree of this knowledge, and the possibility of this diffusion from a common centre [center], are questions that may one day be answered with a confidence that is now impossible. But when the answer is given, if it ever is, perhaps neither Babylon nor Egypt will appear as the earliest exponent of
25McEvilley, 2002, p. 550–551.
26Halbfass, 1988, p. 159.
27Sedlar, 1982 and 1980.
28Voltaire, 1901, vol. 29, p. 180.
29Voltaire, 1901, vol. 24, p. 41.
civilization. The Nile and the Euphrates may have to yield place to the Indus [India]."30 This is in reference to the ancient Hindus' Śulbasūtra, which contains the so-called Pythagorean Theorem long before it was proposed by Pythagoras. "I am convinced that everything has come down to us from the banks of the Ganges—astronomy, astrology, metempsychosis, etc. . . . It is very important to note that some 2,500 years ago at the least Pythagoras went from Samos to the Ganges to learn geometry. . . . But he would certainly not have undertaken such a strange journey had the reputation of the Brahmins' science not been long established in Europe," wrote Voltaire in a letter in 1775.
Johann Gottfried Herder (1744–1803), a German philosopher and poet, considered the Ganges region "the primordial garden" where human wisdom began and was nourished, and the birthplace of all languages, with Sanskrit as their mother. He also considered Sanskrit poetry the mother of all other poetry, indicating Vedic literature as the source of most other poetic works elsewhere.31 Herder influenced another famous German philosopher and poet, Johann Wolfgang von Goethe (1749–1832 CE). Goethe's interest in India began with his reading of Kālidāsa's famous works, Śakuntalā and Meghduta, and flourished through his interaction with noted philosophers such as Herder.32 Goethe even tried to learn Sanskrit in order to read Hindu literature.33 He wished to visit India, but without success;34 his poor health did not allow him to undertake the arduous sea journey. His popular drama Faust was inspired by his reading of Forster's 1791 German translation of Śakuntalā.35 He also wrote two ballads dealing with India: "Der Gott und die Bajadere" and "Der Paria."36
Carl Gustav Jung (1875–1961), a Swiss psychiatrist and psychoanalyst who as a young man collaborated with Sigmund Freud on human psychology, was greatly influenced by Hindu scriptures. Jung's contributions to psychology, dream analysis, and the New Age movement are immense. Freud and Jung developed differences over the role of the libido in human personality, and this caused a break-up of their collaboration and personal friendship. Jung thereby lost much of his social circle and had few colleagues with whom to discuss his research. He sought refuge in Hindu literature for philosophical ideas. The ancient works of Patañjali influenced Jung greatly. The books of the Hindus not only influenced his thinking but also provided him with confirming parallels for his independent insights. "In the absence of like-minded colleagues, Hinduism provided him with evidence that his differences with Freud were founded on experiences shared by other human beings, therefore were not simply the products of a deranged mind," writes Harold
30Farrington, 1969, p. 14.
31Patton, 1994, p. 207–208.
32Herder was called German Brahmin by some for his interest in Hinduism. Herder wrote Gedanken einiger Brahmanen
(Thoughts of Some Brahmins) in 1792 which was based on Hitopadesha and the Bhagavad-Gītā. To read more about Herder and
Hinduism, read Ghosh, 1990.
33Dasgupta, 1973, p. 21.
34Steiner, 1950, p. 1.
35Remy, 1901, p. 20.
36Remy, 1901, p. 20.
Coward.37 Jung wrote two articles that dealt directly with his impressions of India: "The Dreamlike World of India" and "What India Can Teach Us." Both articles were published in volume 39 of the journal Asia in 1939. In the second article, Jung provided a very positive view of the Indian civilization. In his view, India was "more balanced psychologically than the West and hence less prone to the outbreaks of barbarism which at that time were only too evident" in Europe.38 It has been suggested that the concept of the "self," as developed by Jung, was largely based on the Upanis.adic notion of ātman.39 As Jung writes, "The East [India] teaches us another broader, more profound, and higher understanding—understanding through life."40 According to Jung, the influence of Hindu literature is nothing new; such influences "may be found in the works of Meister Ekhart, Leibniz, Kant, Hegel, Schopenhauer, and E. von Hartmann."41
9.2.1 EMERSON AND THOREAU—TWO CELEBRATED AMERICAN SCHOLARS
Ralph Waldo Emerson (1803–1882) and Henry David Thoreau (1817–1862 CE) were two of
the most celebrated American scholars of the nineteenth century. They proposed the philosophy
of transcendentalism and thereby influenced generations of Americans and others worldwide.
The worldviews of Emerson and Thoreau were influenced by the sacred books of the Hindus
and other classics of India. Emerson was fond of reading the Bhagavad-Gītā and popularized
the book among his friends by loaning his personal copy. This personal copy became quite worn
in time due to its extensive use.42 Emerson shared his appreciation for the Hindu philosophy
in the following language: “This belief that the higher use of the material world is to furnish us
types or pictures to express the thoughts of the mind, is carried to its extreme by the Hindoos
[Hindus] . . . I think Hindoo [Hindu] books are the best gymnastics for the mind.”43 Thoreau
also wrote quite approvingly about the Hindus: “the Hindoos [Hindus] . . . possess in a wonder-
ful degree the faculty of contemplation,” “their religious books describe the first inquisitive &
contemplative access to God,” or “their method is pure wisdom or contemplation.”44 Similar to
the Hindu tradition, Thoreau promoted vegetarianism: “I believe that every man who has ever
been earnest to preserve his higher or poetic faculties in the best condition has been particularly
inclined to abstain from animal food, and from much food of any kind.”45
Russell B. Goodman, in his analysis of Emerson’s work, concludes that “Emerson’s phi-
losophy, from his college days onward, grew up together with his knowledge of and interest in
37Coward, 1984.
38Clarke, 1994, p. 62.
39Nicholson, Hanson, Stewart, 2002, p. 116.
40Jung, Collected Works, vol. 13, p. 7.
41Jung, Letters, vol. 1, p. 87.
42Horton and Edwards, 1952, p. 118; taken from Gangadharan, Sarma and Sarma, 2000, p. 311. For more information
on Emerson, read Acharya, 2001.
43Emerson, 1904, vol. 8, p. 14.
44Scott, 2007.
45Henry David Thoreau, Walden; or, Life in the Woods, Dover, 1995, p. 139.
Hindu philosophical writings.”46 “His relationship with Hinduism, as with many other systems
of thought, was transformative and respectful at once—reconstructive rather than deconstruc-
tive.” At the youthful age of seventeen, Emerson wrote that during the ancient period in India
“fair science pondered” and “sages mused.”47 Emerson wrote to John Chapman in May 1845 that
he "very much want[ed]" the Bhagavad-Gītā to read the dialogs of Lord Kṛṣṇa and Arjuna.48
In August 1845, Emerson went to the mountains of Vermont and read the Viṣṇu-Purāṇa. He
commented that "[n]othing in theology is so subtle as this & the Bhagwat [perhaps Bhāgvat-
purāṇa]."49 Emerson's poem "Hamatreya" is based on a passage from the Viṣṇu-Purāṇa, and
his essay "Immortality" is based on the Kathā Upaniṣad. The beginning of his poem "Brahma" is
also derived from the Kathā Upaniṣad and the Bhagavad-Gītā, concludes Philip Goldberg, in his
book American Veda.50 “The borrowings were not aesthetic embellishments; they were central to
Emerson’s worldview.”
Thoreau praised Hindu literature in his writing: “What extracts from the Vedas I have
read fall on me like the light of a higher and purer luminary . . . It rises on me like the full
moon after the stars have come out, wading through some far summer stratum of the sky.”51
A. K. B. Pillai, in his book Transcendental Self: A Comparative Study of Thoreau and the Psycho-
Philosophy of Hinduism and Buddhism, evaluated the connections between the tenets of Hinduism
and transcendentalism and concluded that “Walden is the closest to the Yogic system of all major
American writings."52 Walden is perhaps Thoreau's most popular book.
“Whenever I have read any part of the Vedas, I have felt that some unearthly and unknown
light illuminated me. In the great teaching of the Vedas, there is no touch of sectarianism. It
is of all ages, climes and nationalities, and is the royal road for the attainment of the Great
Knowledge,” writes Thoreau.53
In Walden, Thoreau writes, "In the morning I bathe my intellect
in the stupendous and cosmogonal philosophy of the Bhagvat-Geeta [Bhagavad-Gītā], since
whose composition years of the gods have elapsed, and in comparison with which our modern
world and its literature seem puny and trivial. . . I lay down the book and go to my well for
water, and lo! there I meet the servant of the Brahmin, priest of Brahma [Brahmā] and Vishnu
[Viṣṇu] and Indra, who still sits in his temple on the Ganges reading the Vedas, or dwells at the
root of a tree with his crust and water jug. . . The pure Walden water is mingled with the sacred
water of the Ganges.”54
46Goodman, 1990.
47Goodman, 1990.
48Rusk, 1939, vol. III, p. 288; taken from Goodman, 1990.
49Goodman, 1990.
50Goldberg, 2010, p. 34.
51Thoreau, 1906, 2: 4.
52Pillai, 1985, p. 4, 88.
53Goldberg, 2010, p. 39.
54Taken from Scott, 2007
Thoreau and Emerson were the leaders of the American Transcendentalism movement of
the nineteenth century.55 They were drawn to Hindu texts because these texts supported their
own way of thinking. "There is little doubt that both Emerson and Thoreau shared
an overwhelming and sustained enthusiasm for Vedic literature and Vedāntic philosophy and be-
came their ardent votaries,” concludes Sundararaman.56 This support for Hinduism by Emerson
and Thoreau has also been noticed by others. Raj Kumar Gupta comments on the enthusiasm and
struggle Emerson and Thoreau dealt with: "[i]n their fervent and enthusiastic admiration for
Hindu idealism and spiritualism, Emerson and Thoreau lost sight of the fact that they were
contrasting American practice with Indian theory.”57 In his analysis, Gupta made the following
judgment about Emerson and Thoreau: “Hindu ideas and ideals are used to bring out, explicitly
or by implication, the failures and deficiencies of nineteenth century American life and thought,
and in an attempt to fill the void and supply the deficiency. Thus, Hinduism represents not only
ideals against which American values are judged and found wanting, but also a corrective and
an antidote to those values.”58
9.2.2 IMPACT ON PHYSICS
J. Robert Oppenheimer (1904–1967), an atomic physicist, director of the Manhattan Project,
and the so-called father of the atomic bomb, not only read books of the ancient Hindus but even
learned the Sanskrit language to grasp their original intended meaning. While working
with Ernest Lawrence in Berkeley, he wrote to his brother Frank in
1933: "Lawrence is going to the Solvay conference on nuclei, and
I shall have double chores in his absence . . . I have been reading the Bhagwad Gita [Bhagavad-
Gītā] with Ryder and two other Sanskritists. It is very easy and quite marvelous. I have read it
twice but not enough. . ."59 In another letter to his brother Frank, he writes, "Benevolences
starting with the precious Meghduta [a book by Kālidāsa] and rather too learned Vedas . . ."60
Within a year of study on the side, Oppenheimer became well versed in reading classics in
Sanskrit on his own.61 Reading the sacred books of the ancient Hindus clearly shaped
Oppenheimer's worldview and helped him in his research in physics. Oppenheimer
considered the Bhagavad-Gītā “the most beautiful philosophical song existing in any known
tongue.” He kept a well-worn copy of it conveniently on hand on the bookshelf closest to his
desk and often gave the book (in translation) to friends as a present.62
55Christy, 1932; Riepe, 1970.
56Sundararaman, David Thoreau: The Walden-Ṛṣi, in the book by Gangadharan, Sarma, and Sarma, 2000, p. 311.
57Gupta, 1986, p. 81.
58Gupta, 1986, p. 81.
59Smith and Weiner, 1980, p. 162. The reference alludes to Arthur W. Ryder (1877–1938), a professor of Sanskrit at the
University of California, Berkeley.
60Smith and Weiner, 1980, p. 180.
61Nisbet, p. 136.
62Royal, 1969, 64.
In his condolence message upon the death of President Roosevelt, he asked people to have
courage and continue with the task of the Manhattan Project at Los Alamos: “In the Hindu
Scripture, in the Bhagwad Gita [Bhagavad-Gītā], it says, ‘Man is a creature whose substance is
faith. What his faith is, he is.' "63 Most of his speech was taken from the Bhagavad-Gītā. It is also
well known that he began chanting verses of the Bhagavad-Gītā when he witnessed the test
explosion of the atom bomb as director of the Manhattan Project. In the main verse, the aura of
God is described as brighter than a thousand suns. This verse inspired an engaging and popular
book on the atomic bomb with the same title.64
Brian Josephson (born 1940) received the Nobel Prize in physics in 1973 for his discov-
ery of the Josephson quantum tunneling effect in superconductors. SQUID (Superconducting
Quantum Interference Device), a magnetometer for measuring very weak magnetic fields,
is based on the Josephson effect. Josephson suffered from insomnia, especially after he received
the Nobel Prize, and became dependent on tranquilizers. In his attempt to protect his health, he started
practicing transcendental meditation, as propagated by Maharishi Mahesh Yogi (1918–2008).
These regular practices of meditation gave him “inner peace” and sound sleep. He is currently
the director of the Mind-Matter Unification Project at the Cavendish Laboratory in England
and a regular practitioner of yoga and meditation. His current research mostly deals with
subjects uncommon in science: consciousness, the role of the observer and mind in quantum me-
chanics, analogies between quantum mechanics and Hindu mysticism, physics and spirituality,
and yoga.65 Josephson believes that scientists “can enhance their abilities through meditation.”66
Erwin Schrödinger: the Guru of Dual Manifestations
Erwin Schrödinger, a Nobel Laureate in physics and one of the prominent architects of modern
physics, writes on the connectedness of the various events in nature in his book, My View of
the World: “Looking and thinking in that manner you may suddenly come to see, in a flash,
the profound rightness of the basic conviction in Vedanta . . . Hence this life of yours is not
merely a piece of the entire existence, but in a certain sense the whole. This, as we know, is
what the Brahmins express in that sacred, mystical formula which is yet really so simple and so
clear: Tat tvam asi (that is you). Or, again in such words as ‘I am in the east and in the west, I am
below and above, I am this whole world.”’67 Vedanta is one of the six orthodox schools of Hindu
philosophy. The word means “end of the Vedas,” reflecting concepts that are documented in the
Upaniṣads. Tat tvam asi is a Sanskrit phrase that is mentioned in the Chāndogya-Upaniṣad (6: 8: 7)
and connects the self with the ultimate reality.
63Smith and Weiner, 1980, p. 288.
64Jungk, Robert, 1958. He was an Austrian writer who wrote extensively on nuclear weapons. He even ran for the presi-
dency of Austria in 1992 for the Green Party.
65Josephson, 1987.
66Horgan, 1995
67Schrödinger, 1964, p. 21.
It is interesting to see the similarities between Schrödinger's philosophy of life, which he derived from
the books of the Hindus, and the quantum mechanical wavefunction he created to describe
microscopic reality. Though anecdotal, it is worth noting the parallel between the all-pervading
nature of the quantum mechanical wavefunction, as defined by Schrödinger,
and the all-pervading (omnipresent) description of God. The wavefunction, though abstract,
materializes itself in a region of space, in either its particle or its wave aspect, when squared
(as a probability density). This is similar to the Nirguṇa-svarūpa (amūrta, without form) of God
that manifests itself in human form (saguṇa-svarūpa) from time to time in different regions. It
is important to mention that these analogies are not real connections. However, they do play
an important role in the thinking of the inventors, as mentioned in Chapter 1. Schrödinger’s
worldview helped him in “hatching” scientific and mathematical ideas.
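For readers who want the formal statement behind "when squared (as a probability density)": in textbook quantum mechanics (a general fact of the subject, not something drawn from this book's sources), the Born rule for a normalized wavefunction \psi(x) gives the probability of finding the particle between a and b as

    P(a \le x \le b) = \int_a^b \lvert \psi(x) \rvert^2 \, dx,
    \qquad \int_{-\infty}^{+\infty} \lvert \psi(x) \rvert^2 \, dx = 1.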
Schrödinger was influenced by the philosophy of Arthur Schopenhauer (1788–1860).
Several other noted European intellectuals and philosophers were influenced by Schopenhauer,
including Friedrich Nietzsche, Thomas Mann, Sigmund Freud, Albert Ein-
stein, Carl Jung, and Leo Tolstoy.68 Schopenhauer’s philosophy was greatly influenced by the
teachings of Hinduism and Buddhism. He advocated a lifestyle of negating desires, similar to
Pythagoras and the Hindu philosophy. His book, The World as Will and Representation, empha-
sizes the Upaniṣadic teaching: Tat tvam asi (that is you). Schopenhauer had a dog named Atman
(meaning soul in Sanskrit). Though Schopenhauer was popular in Europe, his philosophy was
not popular among physicists. Schrödinger realized that his views might not be appreciated by his
physicist colleagues: “I know very well that most of my readers, despite Schopenhauer and the
Upanis.ads, while perhaps admitting the validity of what is said here as a pleasing and appropri-
ate metaphor, will withhold their agreement from any literal application of the proposition that
all consciousness is essentially one.”69
Observation is central to the growth of science. However, it yields a plurality of realities,
since observations may differ from person to person, depending on the aspects these observers
are interested in. This plurality receives little attention
from scientists. Schrödinger did consider the issue of plurality in life. "For philosophy, then, the
real difficulty lies in the spatial and temporal multiplicity of observing and thinking individuals.
. . the plurality that we perceive is only an appearance; it is not real. Vedantic philosophy, in which
this is fundamental dogma, has sought to clarify it by a number of analogies, one of the most
attractive being the many faceted crystal which, while showing hundreds of little pictures of
what is in reality a single existent object, does not really multiply that object.”70 Schrödinger
endorsed the Hindu philosophy of oneness: "In all the world, there is no kind
of framework within which we can find consciousness in the plural; this is simply something we
construct because of the spatio-temporal plurality of individuals, but it is a false construction. . .
68Moore, 1992, p. 112
69Schrödinger, 1964, p. 29.
70Schrödinger, 1964, p. 18.
The only solution to this conflict, in so far as any is available to us at all, is in the ancient wisdom
of the Upanishads.”71
In his essay on The Diamond Cutter, Schrödinger writes: “The ego is only an aggregate of
countless illusions, a phantom shell, a bubble sure to break. It is Karma. Acts and thoughts are
forces integrating themselves into material and mental phenomena—into what we call objec-
tive and subjective appearances . . . The universe is the integration of acts and thoughts. Even
swords and things of metal are manifestations of spirit. There is no birth and death but the birth
and death of Karma in some form or condition.”72 These statements led Walter Moore, who
has written a book on the life of Schrödinger, to infer that Schrödinger “thought deeply about
the teachings of Hindu Scriptures, reworked them into his own words, and ultimately came to
believe in them."73 Also, his worldview helped him create wave mechanics, which is
counterintuitive to Newtonian thinking: “Perhaps these thoughts recurred to Erwin when he
made his great discovery of wave mechanics and found the reality of physics in wave motions,
and also later when he found that this reality was part of an underlying unity of mind.”74
The philosophy of multiple representations of truth, thin boundaries between abstract
thoughts (amūrta), actual realities (mūrta), and universal consciousness played an important role
in the science that Erwin Schrödinger created. Moore concludes, “Vedanta and gnosticism are
beliefs likely to appeal to a mathematical physicist, a brilliant only child, tempted on occasion
by intellectual pride. Such factors may help to explain why Schrödinger became a believer in
Vedanta, but they do not detract from the importance of his belief as a foundation for his life
and work. It would be simplistic to suggest that there is a direct causal link between his religious
beliefs and his discoveries in theoretical physics, yet the unity and continuity of Vedanta are
reflected in the unity and continuity of wave mechanics.”75 In the view of Moore, “Schrödinger
and Heisenberg and their followers created a universe based on superimposed inseparable waves
of probability amplitudes. This view would be entirely consistent with the Vedantic concept of
the All in One.”76
* * * * * * * *
Just think of the global impact of the efforts of the ancient Hindus to know the ul-
timate truth. They created disciplines that were for the welfare of all humanity. Hinduism has
transcended the boundaries of what is called ’religion’ in the West and has permeated the whole
world. Yoga is practiced by devoted Christians, Muslims, and atheists, without discrimination.
It is currently prescribed in many hospitals in the Western world for pain management and the
treatment of various diseases. The United Nations has declared June 21 the International Day of
Yoga, which is celebrated all over the world.
71Schrödinger, 1964, p. 31.
72taken from Moore, 1992, p. 114.
73Moore, 1992, p. 113.
74Moore, 1992, p. 114.
75Moore, 1992, p. 173.
76Moore, 1992, p. 173.
Think of the kids that are schooled in Africa, Asia, or Europe. Everywhere in the civilized
world, kids learn to count even before they learn the alphabet of their own language. In different
languages, the numbers may be named differently, but they all use the same (base 10) mathe-
matical principles (known as the decimal system) in counting and in arithmetic operations that
were invented long ago by the ancient Hindus. These mathematical methods are so effective
and important that most kids learn them without ever inquiring about the alternatives. "The Indian
system of counting has been the most successful intellectual innovation ever made on our planet.
It has spread and been adopted almost universally, far more extensively even than the letters of
the Phoenician alphabet which we now employ. It constitutes the nearest thing we have to a
universal language,”77 writes John D. Barrow in his book, Pi in the Sky: Counting, Thinking, and
Being.
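As a small aside, here is a minimal sketch (my illustration, not something from the source) of the place-value principle described above: a number is just a short list of digits weighted by powers of ten, which is why the same counting and arithmetic procedures work in every language.

    # Minimal illustration of base-10 place value: every natural number
    # is a sum of its digits times powers of ten.
    def decimal_digits(n: int) -> list:
        digits = []
        while True:
            digits.append(n % 10)   # least significant digit first
            n //= 10
            if n == 0:
                break
        return digits[::-1]        # most significant digit first

    n = 4096
    digits = decimal_digits(n)     # [4, 0, 9, 6]
    rebuilt = sum(d * 10 ** i for i, d in enumerate(reversed(digits)))
    assert rebuilt == n            # 4*10^3 + 0*10^2 + 9*10^1 + 6*10^0 = 4096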
In dealing with numbers, the size of the atom conceived by the ancient Hindus comes
close to 10⁻¹⁰ m. For the age of the universe, they came up with a number that is of the order
of a billion years. Both of these numbers were considered mere speculations and ridiculed by
foreign scientists during the early medieval period. However, these numbers are in tune with
modern science. Is this a coincidence? How can one reach such striking conclusions
that were considered ridiculous by scientists just a century ago? For the Hindus, this information
was valuable enough to be preserved in their most sacred books. How did they arrive at these
numbers? It is not an easy question to answer. The answer is perhaps in the role of the mind in
the meditative state to learn the mysteries of nature that are not otherwise accessible through the
faculties of the five senses we human beings are endowed with. This role of mind or conscious-
ness is what Nobel Laureate Brian Josephson is investigating these days at the University of
Cambridge, and it has attracted the minds of Robert Oppenheimer, Erwin Schrödinger,
and others.
The social attitudes in India toward accepting new scientific theories were in contrast to
those prevailing in Europe. Unlike Galileo and Copernicus, Āryabhaṭa never faced any hostile
sentiments or violent reaction from the priestly class or the possibility of any kind of persecution
from the king when he assigned motion to the earth. On the contrary, invention and promotion
of such ideas led a person to the greatest honor on earth while alive, and mokṣa - the ultimate goal of
liberation - after death, as suggested by Āryabhaṭa I. Thus, becoming an astronomer, a medical
doctor, or a musician was as honorable a profession as becoming a priest or swami. This is perhaps
the reason why Caraka and Suśruta were so deeply involved in medical research and practice,
and Āryabhaṭa in astronomy and mathematics. Ultimately, they were working toward achieving
mokṣa, the liberation of their soul.
Multicultural Cooperation in Science
Many intellectual centers in our history used multicultural approaches to learning. Damascus
and Baghdad became leading centers of learning when they started attracting Greek, Indian,
77Barrow, 1992, p. 92.
Persian, and Egyptian scholars. Similarly, Spain became a center of learning from the eighth
century to the eleventh century, because it attracted scholars from all over the world.78 However,
these centers eventually declined when the scholars were judged on ill-conceived grounds (race,
gender, religion, or culture), rather than on the basis of merit. Such declines occurred in Spain,
Egypt, the Middle East, and elsewhere in Europe.
Modern science is international in character. Scientific ideas are studied and developed in
many languages from all over the world. Thus, science education with a multicultural approach
can be a broadening and humanizing experience. A multicultural approach teaches us how human
beings are alike in the pursuit of knowledge. It inspires us to learn from each other and appreciate
each other. It allows us to function as a cohesive society. However, so far, science education has
not utilized this opportunity to its fullest potential.
Preserving knowledge is a process in which every generation must participate, or risk
losing some valuable knowledge. Museums are created to slow down or stop this
tide of entropy. However, they mostly secure artifacts and have limitations. Perhaps libraries
are the places where knowledge can be documented and preserved. However, these libraries are
prone to destruction, as we know from the history of Alexandria, Nālandā, and Taxila. Therefore,
dissemination of knowledge to the next generation is a duty in which each generation must take
part. This is the guru-ṛṇa (debt to the guru) that each disciple owes to his or her own teacher:
to continue the tradition of knowledge, to carry forward the torch (light) of knowledge to the
next generation.
No culture or civilization has prospered to great heights without knowing and preserving
its historic and existing knowledge base. This book is written to preserve the knowledge of the
ancient Hindus and to recognize their rightful place in world history. Of course, it is only a small
step; much more effort needs to be expended on a continual basis to research, disseminate, and
preserve this ancient knowledge of the Hindus from the earliest period of the Indus-Sarasvati
civilization to the end of the last millennium when this knowledge was deliberately suppressed
in the pursuit of colonial objectives. The twenty-first century is the harbinger of a free, open,
and honest pursuit of scientific knowledge in all its varied dimensions, unfettered by dogma
or national or cultural chauvinism.
78Salem and Kumar, 1991.
APPENDIX A
Timeline of the Hindu Manuscripts
A lack of well-accepted chronology is the weakest point of the Hindu manuscripts. There are
several reasons for it. First, the Hindu culture practiced orality to preserve its ideas, unlike the
Egyptians (papyrus) and the Babylonians (clay tablets). It is difficult to find archaeological records in
an oral culture. Second, the Hindu culture basically followed the same tradition from the earliest
periods. As discussed in Chapter 5, events are essential to define time not only in the physical
sense but in the cultural sense too. Events such as atrocities, changes in value systems, changes in
prosperity, and the like are the landmarks of a culture. However, in the case of the Hindus, a similar
pattern of prosperity and cultural traits continued for so long that the culture paid little or no
attention to defining events and time frames.
We know that the Ṛgveda is the earliest book of Hindu literature, and it is certainly one
of the earliest known manuscripts in human history. The ancient Hindus did not erect monu-
ments to document history until the period of Emperor Aśoka, who reigned in India from
268 to 232 BCE. The ancient Hindus did not even mention the names of the authors in their
books. Some Western historians look for "hard copy" or "hard evidence" to accept a particular
date for the antiquity of the Hindu manuscripts. Due to the prevalent oral tradition, such evi-
dence is not available for the ancient Hindus. Jessica Frazier, a Lecturer in Religious Studies
at the University of Kent and a Fellow of the Oxford Centre for Hindu Studies, in her quest
to define history, historiography, and timeline of various Hindu events, shares her experience
in the following words: “The search for an Indian historiography is like the work of a geologist
who must sift innumerable layers of compacted material, or perhaps better, a zoologist who can
never stop seeking new species of history, and never assume that any given specimen will remain
long in a single place or form." She criticizes too much reliance on the accounts of travelers such
as Herodotus and others who visited India and wrote their accounts. In her view, "[t]he ideal
Western historian has classically been seen as an itinerant Godlike traveller." These travellers
had limited exposure to the culture and wrote from a bird's-eye view. Even the
astronomical observations of the heavenly bodies, which should provide a linear scale of time, fail
because of "multiple parallel chronologies." She concludes that "while India's Hindu past was
ever-present in divine reality and recursive myth, it was nevertheless unreachable by accepted
historiographical methods.”1
1Frazier, 2009.
During the colonial period, scholars in the West worked hard to define Hindu religion
and culture in their efforts to Christianize India. At that time in Europe, it was a prevalent
belief among Christians that the world was created around 4004 BCE on one Friday afternoon.
This was based on the genealogy presented in the book of Genesis, in the Bible, as concluded by
Archbishop James Ussher (1581–1656 CE), who was also the Vice-Chancellor of Trinity
College, Dublin. In 1642, Dr. John Lightfoot (1602–1675 CE), then Vice-Chancellor of the
University of Cambridge, refined the calculation further and made it more precise: 9 AM,
October 23, 4004 BCE.2 These calculations were generally agreed on by Western scholars
at the beginning of the twentieth century. As a result, Western scholars assigned dates to
the Vedas and Upaniṣads that made sense against the misconstrued date of 4004 BCE. Once these
dates were arbitrarily defined, they have not been modified in light of current research.
Al-Bīrūnī (973–1050 CE) noticed the problem of historiography in assigning actual dates
to various events in the eleventh century and criticized the ancient Hindus: “Unfortunately the
Hindus do not pay much attention to the historical order of things, they are very careless in
relating the chronological succession of their kings.”3
Carl Jung (1875–1961 CE), a psychoanalyst and philosopher, aptly answered such crit-
icisms of the absence of a chronological history in India by attributing it to the antiquity of
the Hindu civilization. “After all why should there be recorded history [in India]? In a country
like India one does not really miss it. All her native greatness is in any case anonymous and im-
personal, like the greatness of Babylon and Egypt. History makes sense in European countries
where, in a relatively recent, barbarous, and inhistorical past, things began to shape up . . . But in
India there seems to be nothing that has not lived a hundred thousand times before.”4 Even the
ancient writers such as Caraka (around 600 BCE), Suśruta (around 1000 BCE), and Āryabhaṭa
I (5th century CE) do not represent the first efforts for most theories in their books; they all claim
to be merely reproducing old ideas in new form.
The first book in the modern era on the Hindu sciences was published by Brajendranath
Seal in 1915.5 Seal opted to deal with the absence of a proper chronology right in the first
paragraph of his Foreword. He basically assigned an umbrella period of 500 BCE to 500 CE
to all Hindu manuscripts considered in his book. Bose et al. in 1971 and Chattopadhyaya in
1986, the authors of two prominent books on the history of Hindu science, encountered the same
issue. This situation has not improved much in the last century; we still cannot assign accurate dates to
ancient manuscripts.
The issue of antiquity can only be solved either from the artifacts of the ancient Hindus
or from the astronomical maps provided in their oral literature. Previously, the maps provided
in the Hindu oral literature were ignored by scholars because they implied a period for the Ṛgveda
more ancient than 1500 BCE, the arbitrary date assigned by Max Müller. Since the recent
2White, 1897, p. 9.
3Sachau, 1964, vol. 2, p. 10.
4Jung, Collected Works, vol. 10, p. 517.
5Seal, 1915
excavations have unearthed artifacts that are some 10,000 years old, a revision of this history
is warranted.6 A massive effort is needed to resolve this issue.
Though the dates are not certain, there is general acceptance of the chronological
order of these books, and periods can be assigned to them, as listed in Table A.1. The
content of my book will not change much with corrections in this table. Most conclusions will
remain valid even with a different chronology.
Table A.1: The Antiquity of Hindu Books
6Kenoyer, 1998.
Manuscript                      Date of Period
Āryabhaṭīya                     5th century CE
Bakhshālī Manuscript            Seventh century CE
Brāhmaṇas                       Between 2000–1500 BCE
Mahābhārata Battle              Between 2449 BCE and 1424 BCE
Pingala's Chandah Sutra         Between 480–410 BCE
Purāṇa                          Between the Christian era and the 11th century
Manu-Saṁhitā                    Between 500–100 BCE
Caraka-Saṁhitā                  Around 600 BCE
Suśruta-Saṁhitā                 Around 1000 BCE
Śulbasūtra                      Between 1700 BCE and 1000 BCE
Upanishads                      800 to the beginning of the Christian era
Vaiśesika-Sūtra                 Between 900–600 BCE
Vedas                           Around 1500 BCE or earlier

References
[1] Atharvaveda: The Hymns of the Atharvaveda, Ralph T. H. Griffith, Chowkhamba Sanskrit
Series, Varanasi, 1968.
[2] Bhagavad-Gītā, S. Radhakrishnan, Unwin Paperbacks, London, 1948.
[3] Caraka Saṁhitā: The Caraka Saṁhitā, Satyanārāyaṇa Śāstrī, Chowkhamba Sanskrit Series,
Varanasi, 1992, 2 volumes, in Hindi.
[4] Kauṭilya's Arthaśāstra: Kauṭilya's Arthaśāstra, Shamasastry, R., Mysore Printing and Pub-
lishing House, Mysore, 1960, 4th ed. (first published in 1915).
[5] Purāṇa, Translator: Pandit Shreeram Ji Sharma, Sanskrit Sansthan, Bareilly, 1988. In
Hindi and Sanskrit. All eighteen Purāṇas have been translated by the same author and
published by the same publisher.
[6] Ṛgveda: The Hymns of the Ṛgveda, Ralph T. H. Griffith, Chowkhamba Sanskrit Series,
Varanasi, 1963.
[7] Suśruta-Saṁhitā: The Suśruta-Saṁhitā, Bhishagratna, Kaviraj Kunjalal (Translator),
Chaukhambha Sanskrit Series, Varanasi, 1963, 3 volumes.
[8] Upaniṣads, Translator: Patrick Olivelle, Oxford University Press, Madras, 1996.
[9] Upaniṣads: The Thirteen Principal Upanishads, Translator: Robert Ernst Hume, Oxford
University Press, Madras, 1965.
[10] Upaniṣads: 108 Upanishads, Translator: Pandit Shreeram Ji Sharma, Sanskrit Sansthan,
Bareilly, 1990. In Hindi and Sanskrit.
[11] Vedas, Translator: Pandit Shreeram Ji Sharma, Sanskrit Sansthan, Bareilly, 1988. In
Hindi and Sanskrit. All four Vedas have been translated by the same author and published
by the same publisher. DOI: 10.1093/acprof:oso/9780195658712.003.0022.
[12] Viṣṇu Purāṇa, Wilson, H. H. (Translator), Punthi Pustak, Calcutta, 1961.
[13] Yajurveda (Vājasaneyisamhitā): The Texts of the White Yajurveda, Ralph T. H. Griffith,
Chowkhamba Sanskrit Series, Varanasi, 1976.
[14] Achar, B. N. Narahari, Āryabhaṭa and the tables of Rsines, Indian Journal of History of
Science, 37(2), 95–99, 2002.
[15] Acharya, Shanta, The Influence of Indian Thought on Ralph Waldo Emerson, The Edwin
Mellen Press, Lewiston, 2001.
[16] Achaya, K. T., Alcoholic fermentation and its products in ancient India, Indian Journal
of History of Science, 26(2), 123–129, 1991.
[17] Ackernecht, Erwin Heinz, A Short History of Medicine, Johns Hopkins University Press,
Maryland, 1982.
[18] Aelianus, Claudius, On the Characteristics of Animals, A. F. Scholfield (Translator), Har-
vard University Press, Cambridge, 1958.
[19] Agarwal, D. P. Ancient Metal Technology and Archaeology of South Asia: A Pan-Asian Per-
spective, Aryan Books International, New Delhi, 2000.
[20] Al-Kindī, The Medical Formulary or Aqrābādhīn of Al-Kindī, Martin Levey (Translator),
The University of Wisconsin Press, Madison, 1966.
[21] Alter, Joseph S., Heaps of health, metaphysical fitness, Current Anthropology, 40, S43–
S66, 1999. DOI: 10.1086/200060.
[22] Ang, Gina C., History of skin transplantation, Clinics in Dermatology, 23, 320–324, 2005.
DOI: 10.1016/j.clindermatol.2004.07.013.
[23] Aristotle, Aristotle Minor Works, W. S. Hett (Translator), Harvard University Press, Cam-
bridge, 1963 (first published 1936).
[24] Arenson, Karen W., When Strings are Attached, Quirky Gifts can Limit Universities,
The New York Times, April 13, 2008.
[25] Baber, Zaheer, The Science of Empire: Scientific Knowledge, Civilization, and Colonial Rule
in India, State University of New York, Albany, 1996.
[26] Bacon, Roger, The Opus Majus of Roger Bacon, Translator: Robert Belle Burke, University
of Pennsylvania Press, Philadelphia, 1928. (Originally published with Oxford University
Press). DOI: 10.1017/cbo9780511709678.
[27] Bag, A. K., Mathematics in Ancient and Medieval India, Chaukhambha Orientalia, Delhi,
1979.
[28] Bagchi, P. C., India and China, Hind-Kitab, Bombay, 1950.
[29] Bailey, Cyril, The Greek Atomists and Epicurus, Clarendon Press, Oxford, 1928.
[30] Balasubramaniam, R., V. N. Prabhakar, and Manish Shankar, On technical analysis of
cannon shot crater on Delhi iron pillar, Indian Journal of History of Science, 44(1), 29–46,
2009.
[31] Balasubramaniam, R. and A. V. Ramesh Kumar, Corrosion resistance of the Dhar iron
pillar, Corrosion Science, 45, 2451–2465, 2003. DOI: 10.1016/s0010-938x(03)00074-x.
[32] Balasubramaniam, B., A new study of Dhar iron pillar, Indian Journal of History of Science,
37(2), 115–151, 2002.
[33] Balasubramaniam, B., New insights on the 1,600 year-old corrosion resistant Delhi iron
pillar, Indian Journal of History of Science, 36(1–2), 1–49, 2001.
[34] Balslev, A. N., A Study of Time in Indian Philosophy, Otto Harrassowitz, Wiesbaden,
1983.
[35] Barua, Beni Madhab, Aśoka and His Inscriptions, New Age Publishers, Calcutta, 1946.
[36] Barrow John D., Pi in the Sky: Counting, Thinking, and Being, Oxford University Press,
New York, 1992.
[37] Bays, G., The Voice of Buddha, Dharma Publishing, California, 1983 (original title, Lal-
itvistara).
[38] Beal, Samuel, Si-Yu-Ki Buddhist Records of the Western World, Paragon Book Reprint
Corp., New York, 1968. DOI: 10.4324/9781315011783.
[39] Bernal, J. D., Science in History, The MIT Press, Cambridge, 1971.
[40] Bernstein, Richard, Ultimate Journey, Alfred A. Knopf, New York, 2001.
[41] Bhagvat, R. N., Knowledge of the metals in ancient India, Journal of Chemical Education,
10, 659–666, 1933. DOI: 10.1021/ed010p659.
[42] Bhardwaj, H. C., Aspects of Ancient Indian Technology: A Research Based on Scientific Meth-
ods, Motilal Banarsidas, Delhi, 1979.
[43] Bigwood, J. M. Ctesias, His royal patrons and Indian swords, Journal of Hellenic Studies,
115, 135–140, 1995. DOI: 10.2307/631649.
[44] Billard, Roger, Āryabhaṭa and Indian astronomy, Indian Journal of History of Science, 12(2),
207–224, 1977.
[45] Biswas, Arun Kumar, and Sulekha Biswas, Minerals and Metals in Ancient India, D. K.
Printworld, New Delhi, 1996, 3 volumes.
[46] Biswas, Arun Kumar, and Sulekha Biswas, Minerals and Metals in Pre-Modern India, D.
K. Printworld, New Delhi, 2001.
[47] Boncompagni, B., Ed., Scritti di Leonardo Pisano, Rome, 1857–1862, 2 volumes (in Ital-
ian).
[48] Bose, D. M., S. N. Sen, and B. V. Subbarayappa, A Concise History of Science in India,
Indian National Science Academy, New Delhi, 1971.
[49] Bose, S. K., The atomic hypothesis, Current Science, 108, 998–1002, March 10, 2015.
[50] Brain, David J., The early history of rhinoplasty, Facial Plastic Surgery, 9(2), 91–88, April
1993. DOI: 10.1055/s-2008-1064600.
[51] Brennard, W., Hindu Astronomy, Caxton Publications, Delhi, 1988.
[52] Brier, Bob, Napoleon in Egypt, Archaeology, 52(3), 44–53, May/June 1999.
[53] Bronkhorst, Johannes, A note on zero and the numerical place-value system in ancient
India, Asiatische Studien Études Asiatiques, 48(4), 1039–1042, 1994.
[54] Brown, Ronald A. and Alok Kumar, A new perspective on Eratosthenes' measurement of
earth, The Physics Teacher, 49, 445–447, October 2011. DOI: 10.1119/1.3639158.
[55] Bubnov, Nikolai Michajlovic, Gerberti, Opera Mathematica, Berlin, 1899 (in Latin).
[56] Burgess, J., Notes on Hindu astronomy and the history of our knowledge of it, Journal of
Royal Asiatic Society, 717–761, 1893. DOI: 10.1017/s0035869x00022553.
[57] Burnett, Charles, The Introduction of Arabic Learning into England, The British Library,
London, 1997.
[58] Burnett, Charles, The semantics of Indian numerals in Arabic, Greek and Latin, Journal
of Indian Philosophy, 34, 15–30, 2006. DOI: 10.1007/s10781-005-8153-z.
[59] Cajori, Florian, A History of Mathematics, Chelsea Publishing Company, New York, 1980.
[60] Calinger, Ronald, A Contextual History of Mathematics, Prentice Hall, NJ, 1999 (first pub-
lished in 1893).
[61] Capra, Fritjof, The Tao of Physics, Bantam New Age Books, New York, 1980. DOI:
10.1016/b978-0-08-028127-8.50009-x.
[62] Carpue, J. C., An Account of Two Successful Operations for Restoring Lost Nose, from
the Integuments of the Forehead, Longman, London, 1816. DOI: 10.1097/00006534-
198208000-00030.
[63] Casson, Lionel, The Periplus Maris Erythraei, Princeton University Press, 1989. DOI:
10.1017/s0009838800031840.
[64] Chabás, José, The diffusion of the Alfonsine tables: The case of the tabulae resolutae,
Perspectives on Science, 10(2), 168–178, 2002.
[65] Chapple, Christopher Key and Mary Evelyn Tucker, Eds., Hinduism and Ecology: The
Intersection of Earth, Sky and Water, Harvard Divinity School, Cambridge, 2000.
[66] Chaudhary, Anand and Neetu Singh, Herbo mineral formulations (rasaoushadhies) of
Ayurveda: An amazing inheritance of Ayurvedic pharmaceutics, Ancient Science of Life,
30(1), 18–26, 2010.
[67] Chattopadhyaya, D., History of Science and Technology in Ancient India, Firma KLM Pvt.
Ltd., Calcutta, 1986.
[68] Chikara, Sasaki, Mitsuo Sugiura, and Joseph W. Dauben, Eds., The Intersection of History
and Mathematics, Birkhäuser Verlag, Basel, 1994.
[69] Christy, Arthur, The Orient in American Transcendentalism: A Study of Emerson, Thoreau and
Alcott, Columbia University Press, New York, 1932.
[70] Clark, W. E., The Āryabhaṭīya of Āryabhaṭa I, The University of Chicago Press, Illinois,
1930.
[71] Clark, W. E., Indian Studies in the Honor of Charles Rockwell Lanman, Harvard University
Press, Cambridge, 1929.
[72] Clarke, J. J., Jung and Eastern Thought: A Dialogue with the Orient, Routeledge, New York,
1994. DOI: 10.4324/9780203138533.
[73] Colebrooke, Henry Thomas (Translator), Algebra with Arithmetic and Mensuration from
the Sanscrit of Brahemegupta and Bhascara, John Murray, London, 1817.
[74] Conboy, Lisa. A., Ingrid Edshteyn, and Hilary Garivsaltis, Ayurveda and panchakarma:
Measuring the effects of a holistic health intervention, The Scientific World Journal, 9, 272–
280, 2009. DOI: 10.1100/tsw.2009.35.
[75] Cooke, Roger, The History of Mathematics, John Wiley & Sons, Inc., New York, 1997.
DOI: 10.1002/9781118033098.
[76] Copernicus, Nicholas, On the Revolutions, E. Rosen (translator), The Johns Hopkins Uni-
versity Press, Maryland, 1978.
[77] Coppa, A., L. Bondioli, A. Cucina, D. W. Frayer, C. Jarrige, J. -F. Jarrige, G. Quivron,
M. Rossi, M. Vidale, and R. Macchiarelli, Palaeontology: Early neolithic tradition of
dentistry, Nature, 440, 755–6, April 6, 2006.
[78] Coulter, H. David, Anatomy of Hatha Yoga, Body and Breath, Pennsylvania, 2001.
[79] Coward, H., Jung and Hinduism, The Scottish Journal of Religions, 5, 65–68, 1984, also
read Jung and Eastern Thought, State University of New York Press, 1985.
[80] Cowen, Virginia, Functional fitness improvements after a worksite-based yoga initiative,
Journal of Bodywork and Movement Therapies, 14, 50–54, 2010. DOI: 10.1016/j.jbmt.2009.02.006.
[81] Crossley, J. N. and A. S. Henry, Thus Spake al-Khwarizmi, Historia Mathematica, 17,
103–131, 1990.
[82] Danucalov, Marcello A., Roberto S. Simões, Elisa H. Kozasa, and José Roberto Leite,
Cardiorespiratory and metabolic changes during yoga sessions: The effects of respiratory
exercises and meditation practices, Applied Psychophysiology and Biofeedback, 32(2), 77–81,
2008. DOI: 10.1007/s10484-008-9053-2.
[83] Darlington, Oscar, G., Gerbert, The teacher, The American Historical Review, 52(3), 456–
476, 1947.
[84] Dasgupta, Subrata, Jagadis Bose, Augustus Waller and the discovery of “vegetable elec-
tricity,” Notes and Records of the Royal Society of London, 52(2), 307–322, 1998. DOI:
10.1098/rsnr.1998.0052.
[85] Dasgupta, S. N., History of Indian Philosophy, Cambridge University Press, Cambridge,
1922–1955, 5 volumes.
[86] Dasgupta, A., Goethe and Tagore, South Asia Institute, Heidelberg, 1973.
[88] Dasgupta, Surendra, A History of Indian Philosophy, Cambridge University Press, Cam-
bridge, 1963.
[89] Datta, B. B., The Science of Sulba, University of Calcutta, Calcutta, 1932.
[90] Datta, Bibhutibhusan, On mula, the Hindu term for “root,” The American Mathematical
Monthly, 34(8), 420–423, October, 1927. DOI: 10.2307/2299170.
[91] Datta, B. B., Early literary evidence of the use of zero in India, American Mathematical
Monthly, 33, 449–454, 1926. DOI: 10.2307/2299608.
[92] Datta, Bibhutibhusan and Avadhesh Narayan Singh, History of Hindu Mathematics, Asia
Publishing House, New York, 2 volumes, 1961 (first published in 1938).
[93] Deshpande, Vijaya Jayant, Glimpses of ayurveda in medieval Chinese medicine, Indian
Journal of History of Science, 43(2), 137–161, 2008.
[94] Deshpande, Vijaya J., Allusions to ancient Indian mathematical sciences in an early
eighth century Chinese compilation by Gautama Siddha, Indian Journal of History of Sci-
ence, 50(2), 215–226, 2015. DOI: 10.16943/ijhs/2015/v50i2/48237.
[95] Dikshit, Moreshwar G., History of Indian Glass, University of Bombay, Bombay, 1969.
[96] Dollemore, Doug, Mark Giuliucci, Jennifer Haigh, Sid Kirchheimer, and Jean Callahan,
New Choices in Natural Healing, Rodale Press, Inc., Emmaus, Pennsylvania, 1995.
[97] Durant, Will, Our Oriental Heritage, Simon and Schuster, New York, 1954.
[98] Dwivedi, O. P., Vedic heritage for environmental stewardship, Worldviews: Environment,
Culture, Religion, 1, 25–36, 1997.
[99] Einstein, Albert, On Prayer; Purpose in Nature; Meaning of Life; the Soul; A Personal
God, New York Times Magazine, 1–4, November 9, 1930.
[100] Eliade, Mircea, Yoga: Immortality and Freedom, Translated by Willard R. Trask, Princeton
University Press, NJ, 1969.
[101] Eliot, Charles, Hinduism and Buddhism, Routledge and Kegan Paul Ltd., London, 1954,
3 volumes.
[102] Emerson, Waldo, The Complete Works of Ralph Waldo Emerson, Houghton, Miffin and
Company, Cambridge, 1904, 10 volumes.
[103] Faraday, M., An analysis of wootz or Indian steel, Quarterly Journal of Science, Literature,
and the Arts, 7, 319–330, 1819.
[104] Farrington, B., Science in Antiquity, Oxford University Press, London, 1969 (first pub-
lished in 1936).
[105] Feuerstein, Georg, The Yoga-sutra of Patañjali, Inner Traditions International, Rochester,
Vermont, 1989.
[106] Fiala, Nathan, Meeting the demand: An estimation of potential future greenhouse
gas emissions from meat production, Ecological Economics, 67, 412–419, 2008. DOI:
10.1016/j.ecolecon.2007.12.021.
[107] Figiel, L. S., On Damascus Steel, Atlantas Arts Press, Atlanta, 1991.
[108] Filliozat, J., The Classical Doctrine of Indian Medicine: Its Origins and its Greek Parallels,
(Translator) Dev Raj Chanana, Munshiram Manoharlal, Delhi, 1964.
[109] Filliozat, Pierre-Sylvain, Making something out of nothing, UNESCO Courier, 30–34,
November 1993.
[110] Findly, Ellison B., Jahāngīr’s vow of non-violence, Journal of the American Oriental Society,
107(2), 245–256, 1987. DOI: 10.2307/602833.
[111] Folkerts, Menso, Early texts on Hindu-Arabic calculation, Science in Context, 14(1/2),
13–38, 2001. DOI: 10.1017/s0269889701000023.
[112] Frazier, Jessica, History and historiography in Hinduism, The Journal of Hindu Studies, 2,
1–16, 2009. DOI: 10.1093/jhs/hip009.
[113] Gangadharan, N., S. A. S. Sarma, and S. S. R. Sarma, Studies on Indian Culture, Science,
and Literature, Sree Sarada Education Society, Chennai, 2000.
[114] Garratt, G. T., Legacy of India, Clarendon Press, Oxford, 1938.
[115] Gazalé, Midhat, Number: From Ahmes to Cantor, Princeton University Press, NJ, 2000.
[116] Ghosh, Pranabendra Nath, Johann Gottfried Herder’s Image of India, Visva-Bharati Re-
search Publication, Calcutta, 1990.
[117] Gies, Joseph and Frances Gies, Leonardo of Pisa and the New Mathematics of the Middle
Ages, Thomas Y. Crowell Company, New York, 1969.
[118] Gilbert, Christopher, Yoga and breathing, Journal of Bodywork and Movement Therapies,
3(1), 44–54, 1999a. DOI: 10.1016/s1360-8592(99)80042-4.
[119] Gilbert, Christopher, Breathing and the cardiovascular system, Journal of Bodywork and
Movement Therapies, 3(4), 215–224, 1999b. DOI: 10.1016/s1360-8592(99)80006-0.
[120] Gingerich, Owen, Ed., The Eye of Heaven: Ptolemy, Copernicus, and Kepler, American In-
stitute of Physics, New York, 1993.
[121] Gohlman, William E., The Life of Ibn Sina, State University of New York Press, Albany,
1974.
[122] Goldberg, Philip, American Veda, Three Rivers Press, New York, 2010.
[123] Goldstein, B. R., Astronomy and astrology in the works of Abraham Ibn Ezra, Arabic
Sciences and Philosophy, 6(1), 9–21, 1996. DOI: 10.1017/s0957423900002101.
[124] Goldstein, B. R., Ibn al-Muthannâ’s Commentary on the Astronomical Tables of al-
Khwârizmī, Yale University, New Haven, 1967.
[125] Goodman, Russell B., East-west philosophy in 19th-century America: Emerson and
Hinduism, Journal of the History of Ideas, 51(4), 635–645, 1990. DOI: 10.2307/2709649.
[126] Goonatilake, S., Aborted Discovery: Science and Creativity in the Third World, Zed Books,
London, 1984.
[127] Goonatilake, S., The voyages of discovery and the loss and rediscovery of “others” knowl-
edge, Impact of Science on Society, 167, 241–264, 1992.
[128] Gorini, Catherine A., Ed., Geometry at Work, The Mathematical Association of America,
2000.
[129] Gosling, David L., Religion and Ecology in India and Southeast Asia, Routledge, New York,
2001.
[130] Gregorios, Paulos Mar, Ed., Neoplatonism and Indian Philosophy, State University of New
York Press, 2002.
[131] Gupta, R. C., Indian astronomy in China during ancient times, Vishveshvaranand Indo-
logical Journal, India, 9, 266–276, 1981.
[132] Gupta, R. C., Spread and triumph of Indian numerals, Indian Journal of the History of
Science, 18, 23–38, 1983.
[133] Gupta, R. C., Who invented the zero?, Gaṇita Bhāratī, 17(1–4), 45–61, 1995.
[134] Gupta, Raj Kumar, Great Encounter: A Study of India-American Literature and Cultural
Relations, Abhinav Publications, New Delhi, 1986.
[135] Halbfass, W., India in Europe, State University of New York, Albany, 1988.
[136] Hammett, Frederik S., The ideas of the ancient Hindus concerning man, ISIS, 28(1),
57–72, 1938. DOI: 10.1086/347304.
[137] Harding, Sandra, Whose Science? Whose Knowledge?, Cornell University Press, Ithaca,
1991. DOI: 10.7591/9781501712951.
[138] Harding, Sandra, Is science multicultural? Challenges, resources, opportunity, uncertain-
ties, Configuration, 2, 301–330, 1994. DOI: 10.1353/con.1994.0019.
[139] Hayashi, T., Āryabhaṭa's rule and table for sine-differences, Historia Mathematica, 24, 396–
406, 1997. DOI: 10.1006/hmat.1997.2160.
[140] Hayashi, T., Takanori Kusuba, and Michio Yano, Indian values for π from Āryabhaṭa's
value, Historia Scientiarum, 37, 1–16, 1989.
[141] Hayashi, T., The Bakhshālī Manuscript: An Ancient Indian Mathematical Treatise, Egbert
Forsten, Groningen, 1995.
[142] Herschel, Sir John, Sir John Herschel on Hindu mathematics, The Monist, 25(2), 297–
300, April 1915. DOI: 10.5840/monist191525213.
[143] Hitti, P. K., History of the Arabs, Macmillan & Company Ltd., New York, 1963. DOI:
10.1007/978-1-137-03982-8.
[144] Hogendijk, Jan P. and Abdelhamid I. Sabra, The Enterprise of Science in Islam, The MIT
Press, Cambridge, 2003.
[145] Hoggatt, V. E., Fibonacci and Lucas Numbers, Fibonacci Association, San Jose, 1973.
[146] Hooda, D. S. and J. N. Kapur, Āryabhaṭa: Life and Contributions, New Age International,
New Delhi, 2nd ed., 2001.
[147] Horadam, A. F., Eight hundred years young, The Australian Mathematics Teacher, 31, 123–
134, 1975.
[148] Horgan, John, Josephson's inner junction, Scientific American, 272(5), 40–42, May 1995.
DOI: 10.1038/scientificamerican0595-40.
[149] Horne, R. A., Atomism in ancient Greece and India, Ambix, 8, 98–110, 1960. DOI:
10.1179/amb.1960.8.2.98.
[150] Horton, Rod W. and Herbert W. Edwards, Backgrounds of American Literary Thought,
New York, 1952, first self published, later by Prentice Hall.
[151] Hughes, Barnabas, Robert of Chester’s Latin Translation of al-Khwarizmi’s al-Jabr, Franz
Steiner Verlag, Stuttgart, 1989.
[152] Ifrah, Georges, From One to Zero, (Translator) Lowell Bair, Viking Penguin Inc., New
York, 1985 (original in French).
[153] Iyengar, B. K. S., Light on Yoga, Schocken Books, New York, 1966.
[154] Jain, G. R., Cosmology Old and New, Bhartiya Jnanpith Publications, New Delhi, 1975.
[155] Josephson, Brian, Physics and spirituality: The next grand unification?, Physics Education,
22, 15–19, 1987. DOI: 10.1088/0031-9120/22/1/002.
[156] Joshi, Rasik Vihari, Notes on guru, Dīkṣā, and mantra, Ethnos, 37, 103–112, 1972. DOI:
10.1080/00141844.1972.9981054.
[157] Joshi, M. C. and S. K. Gupta, Eds., King Chandra and the Meharauli Pillar, Kusumanjali
Prakashan, Meerut, 1989.
[158] Jung, C. G., Letters, G. Adler, Ed., Princeton University Press, 1973–75, 2 volumes.
[159] Jung, C. G., The Collected Works of C. G. Jung, Eds., H. Read, M. Fordham, and G. Alder,
Princeton University Press, Princeton, 1954–1979, 20 volumes.
[160] Jungk, Robert, Brighter Than a Thousand Suns, HarCourt Brace, New York, 1958.
[161] Kak, S. C., Some early codes and ciphers, Indian Journal of science, 24, 1–7, 1989.
[162] Kak, S. C., The sign for zero, The Mankind Quarterly, 30, 199–204, 1990.
[163] Kapil, R. N., Biological sciences: Biology in ancient and medieval India, Indian Journal
History of Science, 5(1), 119–140, 1970.
[164] Karpinski, Louis C., The Hindu-Arabic numerals, Science, 35(912), 969–970, June 21,
1912. DOI: 10.1126/science.35.912.969.
[165] Katz, Victor J., History of Mathematics, HarperCollins College Publishers, New York,
1993.
[166] Kaza, Stephanie, Western Buddhist motivations for vegetarianism, Worldviews, 9(3),
211–227, 2005. DOI: 10.1163/156853505774841650.
[167] Kennedy, E. S., The Arabic heritage in the exact sciences, Quarterly Journal of Arab Studies,
23, 327–344, 1970.
[168] Kennedy, E. S. and W. Ukashah, Al-Khwārizmī’s planetary latitude tables, Centaurus,
14, 89–96, 1969. DOI: 10.1111/j.1600-0498.1969.tb00138.x.
[169] Khalidi, T., Islamic Historiogaphy, State University of New York Press, Albany, 1975.
[170] King, David, Al-Khwārizmī and New Trends in Mathematical Astronomy in the Ninth Cen-
tury, Hagop Kevorkian Center for Near Eastern Studies, New York, 1983.
[171] King, D. A., Astronomy in the Service of Islam, Variorum, Aldenshot 1993. DOI:
10.1007/978-1-4614-6141-8_13.
[172] King, D. A. and G. Saliba, Eds., From Deferent to Equant, The New York Academy of
Science, New York, 1987.
[173] King, Richard, Indian Philosophy: An Introduction to Hindu and Buddhist Thought, George-
town University Press, Washington D.C., 1999.
[174] Krishnamurthy, K. H., The Wealth of Suśruta, International Institute of Ayurveda, Coim-
batore, 1991.
[175] Krupp, E. C., Arrows in flight, Sky and Telescope, 92(4), 60–62, October 1996.
[176] Kulkarni, R. P., The value of π known to Śulbasūtrakarās, Indian Journal of History of
Science, 13, 32–41, 1978a.
[177] Kulkarni, R. P., Geometry According to Śulba Sūtra, Vaidika Samsodhana Mandala, Pune,
1983.
[178] Kulkarni, T. R., Upanishads and Yoga, Bhartiya Vidya Bhavan, Bombay, 1972.
[179] Kumar, Alok, Improving science instruction by utilizing historical ethnic contributions,
Physics Education, 11(2), 154–163, 1994.
[180] Kumar, Alok, Sciences of the Ancient Hindus: Unlocking Nature in the Pursuit of Salvation,
CreateSpace, South Carolina, 2014.
[181] Kumar, Alok and Ronald A. Brown, Teaching science from a world-cultural point of
view, Science as Culture, 8(3), 357–370, 1999. DOI: 10.1080/09505439909526551.
[182] Kunitzsch, P., How we got our Arabic star names, Sky and Telescope, 65, 20–22, 1983.
[183] Kunitzsch, P., The Arabs and the Stars: Texts and Traditions on the Fixed Stars and their
Influence in Medieval Europe, Variorum, Northampton, 1989. DOI: 10.4324/9781315241340.
[184] Kutumbiah, P., Ancient Indian Medicine, Orient Longmans, New Delhi, 1962.
[185] Lad, Vasant, Ayurveda: The Science of Self-healing, a Practical Guide, Lotus Press, Santa Fe,
1984.
[186] Lahiri, Nayanjot, Indian metal and metal-related artifacts as cultural signifiers:
An ethnographic perspective, World Archaeology, 27(1), 116–132, 1995. DOI:
10.1080/00438243.1995.9980296.
[187] Lattin, Harriet Pratt (Translator), The Letters of Gerbert: With his Papal Privileges as
Sylvester II, Columbia University Press, New York, 1961.
[188] Le Coze, J., About the signification of Wootz and other names given to steel, Indian
Journal of History of Science, 38(2), 117–127, 2003.
[189] Leeds, Anthony and Andrew Vayda, Eds., The Role of Animals in Human Ecological Adjust-
ments, American Association for the Advancement of Science, Washington D.C., 1965.
[190] Levey, Martin and Marvin Petruck (Translator), Principles of Hindu Reckoning, by Kush-
yar Ibn Labban, University of Wisconsin Press, Milwaukee, 1965.
[191] Levey, Martin and Noury al-Khaledy, The Medical Formulary of Al-Samarqandi, Univer-
sity of Pennsylvania Press, Philadelphia, 1967. DOI: 10.9783/9781512803921.
[192] Little, A. G., Ed., Roger Bacon, Clarendon Press, Oxford, 1914.
[193] Loosen, Penate and Franz Vonnessen, Ed., Gottfried Wilhelm Leibniz. Zwei Briefe über
das Binäre Zahlensystem und die Chinesische Philosophie, Belser-Presse, Stuttgart, 1968 (in
German).
[194] MacKenna, Stephen, Porphery on the Life of Plotinus in Plotinus’ Enneads, Faber and
Faber Limited, London, 1956.
[195] Mahdihassan, S., Triphalā and its Arabic and Chinese synonyms, Indian Journal of History
of Science, 13(1), 50–55, 1978.
[196] Majno, Guido, The Healing Hand: Man and Wound in the Ancient World, Harvard Univer-
sity Press, Cambridge, 1975. DOI: 10.1097/00006534-197602000-00022.
[197] Margenau, H., Physics and Philosophy, D. Reidel Publishing Company, London, 1978.
[198] Martzloff, Jean-Claude, A History of Chinese Mathematics, Springer-Verlag, New York,
1997. DOI: 10.1007/978-3-540-33783-6.
[199] Marx, Karl, The difference between the democritean and epicurean philosophy of nature,
1841, Doctoral Thesis, www.marxists.org/archive/marx/works/1841/dr-theses/
[200] McCall, Timothy B., Yoga as Medicine: The Yogic Prescription for Health and Healing, Ban-
tam Books, New York, 2007.
[201] McCrindle, J. W., Ancient India as Described by Megasthenes and Arrian, Chuckervertty,
Chatterjee and Company, Calcutta, 1926.
[202] McCrindle, J. W., Ancient India as Described by Ktesias the Knidian, Trubner and Com-
pany, London, 1882. Reprint by Manohar Reprints, Delhi, 1973.
[203] McDonell, John, J., The Concept of an Atom from Democritus to John Dalton, 11–12, The
Edwin Mellen Press, Lewiston, 1991.
[204] McEvilley, Thomas, The Shape of Ancient Thought, Allworth Press, New York, 2002.
[205] Menninger, Karl, Number Words and Number Symbols: A Cultural History of Numbers, MIT
Press, Massachusetts, 1970.
[206] Meyerhof, Max, ʿAli at-Tabarî's "Paradise of Wisdom," one of the oldest Arabic com-
pendiums of medicine, ISIS, 16(1), 6–54, 1931.
[207] Meyerhof, M., On the transmission of Greek and Indian science to the Arabs, Islamic
Culture, 11, 17–29, 1937.
[208] Michalsen, Andreas, Gustav J. Dobos, and Manfred Roth, Medicinal Leech Therapy,
Thieme Publishing, 2006. DOI: 10.1055/b-002-66250.
[209] Michalsen, A., U. Deuse, T. Esch, G. Dobos, and S. Moebus, Effect of leech therapy
(Hirudo Medicinalis) in painful osteoarthritis of the knee: A pilot study, Annals of the
Rheumatic Diseases, 60(6), 986, October 2001.
[210] Mikami, Yoshio, The Development of Mathematics in China and Japan, Chelsea Publishing
Company, New York, 1913. DOI: 10.2307/3604893.
[211] Miller, Jeanine, The Vision of Cosmic Order in the Vedas, Routledge and Kegan Paul, Boston,
1985.
[212] Ming, Chen, The transmission of Indian ayurvedic doctrines in medieval China: A study
of Aṣṭāṅga and Tridoṣa fragments from the silk road, Annual Report of the International
Research Institute for Advanced Buddhology at Soka University for the Academic Year 2005,
9, 201–230, March 2006.
[213] Mishra, V. and S. L. Singh, Height and distance problems in ancient Indian mathematics,
Gan. ita Bhāratī, 18, 25–30, 1996.
[214] Montgomery, Scott L. and Alok Kumar, A History of Science in World Cultures: Voices of
Knowledge, Routledge, London, 2015. DOI: 10.4324/9781315694269.
[215] Montgomery, Scott L. and Alok Kumar, Telling stories: Some reflections on orality in
science, Science as Culture, 9(3), 391–404, 2000. DOI: 10.1080/713695250.
[216] Moore, Arden and Sari Harrar, Leeches cut knee pain, Prevention, 55(8), 161, 2003.
[217] Moore, Walter, Schrödinger: Life and Thought, Cambridge Univesity Press, London, 1992.
[218] Mukherjee, Pulok K. and Atul Wahile, Integrated approaches towards drug development
from Ayurveda and other Indian system of medicines, Journal of Ethnopharmacology, 103,
25–35, 2006. DOI: 10.1016/j.jep.2005.09.024.
[219] Mukherjee, P. K., Indian Literature Abroad, Calcutta Oriental Press, Calcutta, 1928.
[220] Mukhopādhyāya, Girinath, Ancient Hindu Surgery, Cosmo Publications, New Delhi,
1994, 2 volumes.
[221] Müller, M., India: What can it Teach us?, Funk and Wagnalls Publishers, London, 1883.
[222] Nakayama, S. and N. Sivin, Eds., Chinese Science, MIT Press, Massachusetts, 1973.
[223] Narayana, A., Medical science in ancient Indian culture with special reference to Athar-
vaveda, Bulletin of the Indian Institute of History of Medicine, 25(1–2), 100–110, 1995.
BIBLIOGRAPHY 183
[224] Narayanan, Vasudha, Water, wood, and wisdom: Ecological perspectives from the Hindu
traditions, Daedalus, 130(4), 179–197, 2001.
[225] Needham, J., Science and Civilization in China, Oxford University Press, London, 1954–
1999, 6 volumes.
[226] Needham, Joseph, Science in Traditional China: A Comparative Perspective, Harvard Uni-
versity Press, Cambridge, 1981. DOI: 10.1119/1.13293.
[227] Needham, J. and W. Ling, Science and Civilization in China, Cambridge University Press,
Chicago, 1959.
[228] Nelson, Lance, Ed., Purifying the Earthly Body of God: Religion and Ecology in Hindu India,
State University of New York Press, Albany, 1998.
[229] Neugebauer, Otto E., The Astronomical Tables of al-Khwārizmī, Hist. Filos. Skr. Dan. Vid.
Selsk, Copenhagen, 1962.
[230] Nicholson, Shirley J., Virginia Hanson, and Rosemarie Stewart, Karma: Rhythmic Return
to Harmony, Motilal Banarsidass, New Delhi, 2002.
[231] Ninivaggi, Frank John, Ayurveda: A Comprehensive Guide to Traditional Medicine for the
West, Praeger, Connecticut, 2008.
[232] Nisbet, Robert A., Teachers and Scholars: A Memoir of Berkely in Depression and War, Trans-
action Publishers, 1992. DOI: 10.4324/9781315130675.
[233] Oevi M., M. Rigbi, E. Hy-Am, Y. Matzner, and A. Eldor, A potent inhibitor of platelet
activating factor from the saliva of the leech Hirudo medicinalis, Prostaglandins, 43, 483–
95, 1992. DOI: 10.1016/0090-6980(92)90130-l.
[234] Ōhashi, Yukio, Astronomical instruments in classical Siddhāntas, Indian Journal of His-
tory of Science, 29(2), 155–313, 1994.
[235] Panikkar, R., Time and history in the tradition of India: Kala and Karma, in the book
Culture and Time, The UNESCO Press, Paris, 1976.
[236] Paramhans, A. A., Astronomy in ancient India—its importance, insight and prevalence,
Indian Journal of History of Science, 26(1), 63–70, 1991.
[237] Patton, Laurie L., Authority, Anxiety, and Canon: Essays in Vedic Interpretation, SUNY
Press, Albany, 1994.
[238] Pellat, C., The Life and Works of Al-Jahiz, University of California, Los Angeles, 1969.
184 BIBLIOGRAPHY
[239] Peters, Christian J., Jamie Picardy, Amelia F. Darrouzet-Nardi, Jennifer L. Wilkins,
Timothy S. Griffin, Gary W. Fick, Carrying capacity of U.S. agricultural land: Ten
diet scenarios, Elementa: Science of the Anthropocene, 4, 1–15, 2016. DOI: 10.12952/jour-
nal.elementa.000116.
[240] Philostratus, Life of Apollonius, C. P. Jones (Translator), Penguins Books, New York,
1970; The Life of Apollonius, Translated by F. C. Conybeare, Harvard University Press,
Cambridge, 1960 (first published in 1912).
[241] Pillai, A. K. B., Transcendental Self: A Comparative Study of Thoreau and the Psycho-
Philosophy of Hinduism and Buddhism, University Press of America, Lanham, 1985.
[242] Pingree, D., The Thousands of Abu Masher, Warburg Institute, London, 1968.
[243] Pothula, V. B., T. M.
J. Lesser, Ontology in ancient In-
dia, The Journal of Laryngology and Otology, 115, 179–183, March 2001. DOI:
10.1258/0022215011907091.
Jones, and T. H.
[244] Prakash, B. and K. Igaki, Ancient iron making in Bastar district, Indian Journal History
of Science, 19, 172–185, 1984.
[245] Prakash, Om, Food and Drinks in Ancient India, Munshi Ram Manohar Lal, Delhi, 1961.
[246] Prasad, Hari Shanker, Ed., Time in Indian Philosophy, Sri Satguru Publications, Delhi,
1992.
[247] Radhakrishna, B. P. and L. C. Curtis, Gold: The Indian Scene, Geological Society of India,
Bangalore, 1991.
[248] Radhakrishnan, Sarvepalli, Indian Philosophy, Macmillan, New York, 1958, 2 volumes.
DOI: 10.1093/mind/xxxvii.145.130.
[249] Rados, Carol, Beyond bloodletting: FDA gives leeches a medical makeover, FDA Con-
sumer, p. 9, September-October, 2004.
[250] Rajgopal, L., G. N. Hoskeri, G. S. Seth, P. S. Bhulyan, and K. Shyamkishore, History
of anatomy in India, Journal of Postgraduate Medicine, 48(3), 243–5, 2002.
[251] Ramakrishnappa, K., Impact of Cultivation and Gathering of Medicinal Plants on Biodiver-
sity: Case Studies from India, FAO, Rome, 2002. http://www.fao.org/DOCREP/005/AA
021E/AA021e00.htm
[252] Rana, R. E. and B. S. Arora, History of plastic surgery in India, Journal of Postgrad. Med.,
48, 76–78, 2002.
BIBLIOGRAPHY 185
[253] Rao, T. R. N. and Subash Kak, Computing Science in Ancient India, The Center for Ad-
vanced Computer Studies, University of Southwestern Louisiana, Lafayette, 1998.
[254] Rashed, Roshdi, Ed., Encyclopedia of the History of Arabic Science, Routledge, New York,
1996. DOI: 10.4324/9780203329030.
[255] Rawlinson, H. G., Intercourse between India and the Western World, Cambridge University
Press, Cambridge, 1926. DOI: 10.2307/1842656.
[256] Ray, Praphulla Chandra, Chemical knowledge of the Hindus of old, ISIS, 2(2), 322–325,
1919.
[257] Ray, Priyada Ranjan, Chemistry in ancient India, Journal of Chemical Education, 25, 327–
335, 1948. DOI: 10.1021/ed025p327.
[258] Ray, P. C., A History of Hindu Chemistry, Indian Chemical Society, 1956. It was first
published in 1902. There are several editions of this book with different publishers.
[259] Remy, A. F. J., The Influence of India and Persia on the Poetry of Germany, Columbia Uni-
versity Press, New York, 1901.
[260] Renfro, Dave L., The Hindu method for completing the square, The Mathematical
Gazette, 91(521), 198–201, July 2007. DOI: 10.1017/s0025557200181525.
[261] Restivo, Sal P., Parallels and paradoxes in modern physics and eastern mysticism: I-A crit-
ical reconnaissance, Social Studies of Science, 8(2), 143–181, 1978; Parallels and paradoxes
in modern physics and eastern mysticism: II–A sociological perspective on parallelism,
Social Studies of Science, 12(1), 37–71, 1982. DOI: 10.1177/030631282012001003.
[262] Riché, Pierre, Gerbert d’Aurillac, le Pape de L’an mil, Fayard, Paris, 1987 (in French).
[263] Riepe, Dale, The Philosophy of India and its Impact on American Thought, Thomas Press,
Springfield, 1970.
[264] Roy, Mira, Environment and ecology in the Rāmāyan. a, Indian Journal of History of Science,
40(1), 9–29, 2005.
[265] Royal, Denise, The Story of J. Robert Oppenheimer, St. Martin’s, New York, 1969.
[266] Royle, J. F., Antiquity of Hindo Medicine, W. H. Allen and Company, London, 1837.
[267] Ruegg, D. Seyfort, Mathematical and linguistic models in Indian thought: The case of
zero and Śûnatā, Wiener Zeitschrift für die Kunde Südasiens und Archiv für Indische Philoso-
phie, 22, 171–181, 1978.
186 BIBLIOGRAPHY
[268] Rusk, Ralph L., Ed., The Letters of Ralph Waldo Emerson, Columbia University Press, New
York, 1939.
[269] Sachau, E. C., Alberuni’s India, S. Chand & Company, Delhi, 1964. DOI:
10.4324/9781315012049.
[270] Sache, M., Damascus Steel, Myth, History, Technology Applications, Stahleisen, Düsseldorf,
1994.
[271] Saha, M. N. and N. C. Lahri, History of the Calendar, Council of Scientific and Industrial
Research, New Delhi, 1992 (First published in 1955).
[272] Said, Edward W., Orientalism, Pantheon Books, New York, 1978.
[273] Said, Edward, Culture and Imperialism, Vintage Books, New York, 1993.
[274] Saidan, A. S., The development of Hindu-Arabic arithmetic, Islamic Culture, 39, 209–
221, 1965.
[275] Saidan, A. S., The Arithmetic of al-Uqlīdisī, D. Reidel Publishing Company, Dordrecht,
1978.
[276] Salem, S. I. and A. Kumar, Science in the Medieval World, University of Texas Press,
Austin, 1991.
[277] Saliba, George, A History of Arabic Astronomy: Planetary Theories During the Golden Age of
Islam, New York University Press, New York, 1994.
[278] Samsó, J., Islamic Astronomy and Medieval Spain, Variorum, Aldenshot, 1994.
[279] Sankhyan, Anek R. and George H. J. Weber, Evidence of surgery in ancient India: Trepa-
nation at Burzahom (Kashmir) over 4,000 years ago, International Journal of Osteoachael-
ogy, 11, 375–380, 2001. DOI: 10.1002/oa.579.
[280] Sarasvati Amma, T. A., Geometry in Ancient and Medieval India, Motilal Banarsidas,
Delhi, 1979.
[281] Sarkar, Prasanta Kumar and Anand Kumar Chaudhary, Ayurvedic Bhasma: The most
ancient application of nano medicine, Journal of Scientific and Industrial Research, 69, 901–
905, 2010.
[282] Sarma, Nataraja, Diffusion of astronomy in the ancient world, Endeavour, 24(4), 157–
154, 2000. DOI: 10.1016/s0160-9327(00)01327-2.
[283] Sarsvati, Svami Satya Prakash, Founders of Sciences in Ancient India, Govindram Hasaram,
Delhi, 2 volumes, 1986.
BIBLIOGRAPHY 187
[284] Sarton, G., Introduction to the History of Science, Carnegie Institute, Washington, 1927–
1947, 3 volumes.
[285] Sarton, George, Decimal systems early and late, Osiris, 9, 581–601, 1950. DOI:
10.1086/368540.
[286] Schoff, Wilfred H., The eastern iron trade of the Roman empire, Journal of the American
Oriental Society, 35, 224–239, 1915. DOI: 10.2307/592648.
[287] Schrödinger, Erwin, My View of the World, Cambridge University Press, England, 1964.
DOI: 10.1017/cbo9781107049710.
[288] Scott, David, Rewalking Thoreau and Asia: “Light from the East” for “a Ver Yankee sort
of oriental,” Philosophy East and West, 57(1), 14–39, 2007. DOI: 10.1353/pew.2007.0011.
[289] Seal, Brajendranath, The Positive Sciences of the Ancient Hindus, Longmans, Green and
Company, New York, 1915. This book has been reprinted several times, most recently in
1985 by Motilal Banarsidass, Delhi.
[290] Sedlar, J. W., India and the Greek World, Rowman and Littlefield, New Jersey, 1980.
[291] Sedlar, J. W., India in the Mind of Germany, University Press of America, Washington,
D.C., 1982.
[292] Seidenberg, A., The ritual origin of geometry, Archive for History of Exact Sciences, 1, 488,
1962. DOI: 10.1007/bf00327767.
[293] Selin, H., Ed., Encyclopedia of the History of Science, Technology, and Medicine in Non-
Western Cultures, Kluwer Academic Publishers, 1997. DOI: 10.1007/978-1-4020-4425-
0.
[294] Selin, Helaine, Astronomy Across Cultures: The History of Non-Western Astronomy, Kluwer
Academic Publishers, Boston, 2000.
[295] Sen, S. N., Influence of India science on other culture areas, Indian Journal History of
Science, 5(2), 332–346, 1970.
[296] Sen, S. N. and A. K. Bag, The Śulbasūtras of Baudhāyana, Āpastamba, Kātyana, and Mā-
nava, Indian National Science Academy, New Delhi, 1983.
[297] Sen, Tansen, Gautama Zhuan: An Indian astronomer at the Tang Court, China Report,
31(2), 197–208, 1995. DOI: 10.1177/000944559503100202.
[298] Shamasastry, R., Kaut. ilya’s Arthśāstra, Mysore Printing and Publishing House, Mysore,
1960, 4th ed., (first published in 1915).
188 BIBLIOGRAPHY
[299] Sharma, Vijay Lakshmi and H. C. Bhardwaj, Weighing devices in ancient India, Indian
Journal of History of Science, 24(4), 329–336, 1989.
[300] Sharma, Bhu Dev and Nabarun Ghose, Revisiting Indus-Sarasvati Age and Ancient India,
World Association for Vedic Studies, 1998.
[301] Sherby, Oleg D. and Jeffrey Wadsworth, Damascus steel, Scientific American, 112–120,
February 1985. DOI: 10.1038/scientificamerican0285-112.
[302] Shukla, Kripa Shankar and K. V. Sarma, Āryabhat. īya of Āryabhat. a, Indian National Sci-
ence Academy, New Delhi, 1976.
[303] Siddiqi, M. Z., Paradise of Wisdom by Ali B. Rabban at-T. abarî, Berlin, 1928; reprinted
by Hamdard Press, Karachi, 1981.
[304] Sigler, Laurence (Translator), Liber Abaci by Leonardo Fibonacci, Springer-Verlag, New
York, 2002.
[305] Singh, L. M., K. K. Thakral, and P. J. Deshpande, Suśruta’s contributions to the funda-
mentals of surgery, Indian Journal History of Science, 5(1), 36–50, 1970.
[306] Singh, Nand Lal, Ramprasad, P. K. Mishra, S. K. Shukla, Jitendra Kumar, and Ramvi-
jay Singh, Alcoholic fermentation techniques in early Indian tradition, Indian Journal of
History of Science, 45(2), 163–173, 2010.
[307] Singh, Permanand, The so-called Fibonacci numbers in ancient and medieval India, His-
toria Mathematica, 12, 229–244, 1985. DOI: 10.1016/0315-0860(85)90021-7.
[308] Sinha, Nandlal, Vaisesika Sutras of Kanada, Bhuvaneswari Ashram, Allahabad, 1911.
[309] Sinha, Braj M., Time and Temporality in Sā ˙mkya-Yoga and Abhidharma Buddhism, Mun-
shiram Manoharlal Publishers, New Delhi, 1983.
[310] Sircar, D. C., Inscriptions of Aśoka, Ministry of Information and Broadcasting, Govern-
ment of India, New Delhi, 1957.
[311] Smith, A. K. and C. Weiner, Eds., Robert Oppenheimer, Harvard University Press, 1980.
[312] Smith, Brian K., Classifying animals and humans in ancient India, Man, 26(3), 527–548,
1991. DOI: 10.2307/2803881.
[313] Smith, C. S., Damascus steel, Science, 216, 242–244, 1982.
[314] Smith, David Eugene and Louis Charles Karpinski, The Hindu-Arabic Numerals, Ginn
and Company, Boston, 1911.
BIBLIOGRAPHY 189
[315] Smith, David Eugene, History of Mathematics, Ginn and Company, New York, 1925,
2 volumes.
[316] Smith, Vincent A., Aśoka, the Buddhist Emperor of India, S. Chand & Company, Delhi,
1964.
[317] Somayaji, Dhulipala Arka , A Critical Study of the Ancient Hindu Astronomy in the Light
and Language of the Modern, Karnatak University, Dharwar, 1971.
[318] Sorta-Bilajac, Iva and Amir Muzur, The nose between ethics and aesthetics:
Sushruta’s legacy, Otolaryngology-Head and Neck Surgery, 137, 707–710, 2007. DOI:
10.1016/j.otohns.2007.07.029.
[319] Srinivasan, Sharada, Metallurgy of zinc, high-tin bronze and gold in Indian antiquity:
Methodological aspects, Indian Journal of History of Science, 51(1), 22–32, 2016.
[320] Srinivasiengar, C. N., The History of Ancient Indian Mathematics, World Press, Calcutta,
1967.
[321] Staal, Fritz, Greek and Vedic Geometry, Journal of Indian Philosophy, 27, 105–127, 1999.
[322] Steiner, Rudolf, Goethe the Scientist, O. D. Wannamaker (Translator), Anthroposophic
Press, New York, 1950.
[323] Steinfeld, Henning, Pierre Gerber, Tom Wassenaar, Vincent Castel, Mauricio Rosales,
and Cees de Haan, Livestock’s Long Shadow: Environmental Issues and Options, Food and
Agriculture Organization of the United Nations, Rome, Italy, 2006.
[324] Stodart J. and M. Faraday, On the alloys of steel, Philosophical Transactions of the Royal
Society of London, Ser. A, 112, 253–70, 1822. DOI: 10.1098/rspl.1815.0182.
[325] Strabo, The Geography of Strabo, Horace Leonard Jones (Translator), Harvard University
Press, Cambridge, 1959.
[326] Subak, S., Global environmental costs of beef production, Ecological Economics, 30, 79–91,
1999; DOI: 10.1016/s0921-8009(98)00100-1.
[327] Subbarayappa, B. V. and K. V. Sarma, Indian Astronomy, Nehru Centre, Bombay, 1985.
[328] Subbarayappa, B. V., Science in India: A Historical Perspective, Rupa Publications, New
Delhi, 2013.
[329] Subbarayappa, B. V., An Estimate of the Vaiśesika Sūtra in the history of science, Indian
Journal of History of Science, 2(1), 22–34, 1967.
[330] Suzuki, Jeff, A History of Mathematics, Prentice Hall, New Jersey, 2002.
190 BIBLIOGRAPHY
[331] Swerdlow, N. M. and O. Neugebauer, Mathematical Astronomy in Copernicus’s De Revo-
lutionibus, Springer, New York, 1984. DOI: 10.1007/978-1-4613-8262-1.
[332] Takakusu, J. (Translator), The Buddhist Religion by I-tsing, Munshiram Manoharlal,
Delhi, 1966 (first published 1896).
[333] Talbot, Michael, Mysticism and New Physics, Routledge, London, 1981.
[334] Teller, Edward, Energy from Heaven and Earth, W. H. Freeman and Company, San Fran-
cisco, 1979. DOI: 10.1119/1.12078.
[335] Teresi, Dick, Lost Discoveries, Simon and Schuster, New York, 2002.
[336] Thibaut, G., On the Śulva-Sūtra, Journal Asiatic Society, Bengal, India, 1875, p. 227.
[337] Thompson, C. J., The Lure and Romance of Alchemy, George G. Harrap and Company
Ltd., London, 1932.
[338] Thomas, A. P., Yoga and cardiovascular function, Journal of the International Association of
Yoga Therapists, 4, 39–41, 1993.
[339] Thoreau, H. D., The Writings of Henry David Thoreau, AMS Press, New York, 1906.
[340] Thoreau, Henry David, Walden; or, Life
in the Woods, Dover, 1995. DOI:
10.5962/bhl.title.146169.
[341] Thurston, Hugh, Planetary revolutions in Indian astronomy, Indian Journal of History of
Science, 35(4), 311–318, 2000.
[342] Thurston, Hugh, Early Astronomy, Springer Verlag, New York, 1994. DOI: 10.1007/978-
1-4612-4322-9.
[343] Van Horn, Gavin, Hindu tradition and nature, Worldviews, 10(1), 5–39, 2006.
[344] Van Nooten, B., Binary numbers in Indian antiquity, Journal of Indian Philosophy, 21,
31–50, 1993. DOI: 10.1007/bf01092744.
[345] Veith, Ilza and Leo M. Zimmerman, Great Ideas in the History of Surgery, Norman Pub-
lishing, 1993.
[346] Voltaire, The Works of Voltaire, John Morley, William F. Fleming, and Oliver Herbrand
Gordon Leigh, Eds., E. R. Du Mont, London, 1901.
[347] Wei-Xing, Niu, An inquiry into the astronomical meaning of Rāhu and Ketu, Chinese
Astronomy and Astrophysics, 19(2), 259–266, 1995. DOI: 10.1016/0275-1062(95)00033-
o.
BIBLIOGRAPHY 191
[348] Weizman, Howard, More on India’s sacred cattle, Current Anthropology, 15, 321–323,
1974.
[349] White, Andrew D., A History of the Warfare of Science with Theology in Christendom,
D. Appleton and Company, New York, 1897 (reprinted by Prometheus, 1993). DOI:
10.1017/cbo9780511700804.
[350] Wright, W., A Short History of Syriac Literature, Philo Press, Amsterdam, 1966 (first pub.
1894).
[351] Yadav, B. S., Man Mohan, Eds., Ancient Indian Leaps into Mathematics, Birkhäauser,
2011. DOI: 10.1007/978-0-8176-4695-0.
[352] Yoke, Ho Peng, Li, Qi and Shu: An Introduction to Science and Civilization in China, Dover
Publications, inc., New York, 1985.
[353] Zimmer, Henry, Hindu Medicine, The Johns Hopkins Press, Baltimore, 1948. DOI:
10.1097/00007611-195104000-00031.
[354] Worthington, Vivian, The History of Yoga, Routledge & Kegan Paul, 1982.
[355] Zukav, Gary, The Dancing Wu-Li Masters, Fontana/Collins, London, 1979.
[356] Zysk, Kenneth, Asceticism and Healing in Ancient India: Medicine in the Buddhist
Monastery, Oxford University Press, New York 1991.
Author’s Biography
ALOK KUMAR
Alok Kumar is a Distinguished Teaching Professor of physics at the State University of New
York at Oswego. He was born and educated in India. Later, he taught at California State Univer-
sity at Long Beach and received the Meritorious Performance and Professional Promise Award
for excellence in teaching and research in 1990. He has been teaching in American higher
education for about four decades. In Oswego, Kumar received the Chancellor’s Award for
Excellence in Teaching, a life-time SUNY award, in 1997 and the President’s Award for Creative
and Scholarly Activity or Research, a life-time award, in 2002. He is a fellow of the Alexander
von Humboldt Foundation, Germany, and a NOVA/NASA fellow. Kumar is active in the fields
of atomic physics, chemical physics, history of science, and science education.
He has about 75 peer-reviewed publications, and has authored/coauthored three books:
(1) Science in the Medieval World, (2) Sciences of the Ancient Hindus: Unlocking Nature in the Pursuit
of Salvation, and (3) A History of Science in World Cultures: Voices of Knowledge. All three books
deal with cultural heritage studies in science, including non-Western cultures. Kumar
believes that, to understand modern science, it is essential to recognize that many of the most
fundamental scientific principles are drawn from knowledge amassed by ancient civilizations.
Kumar strongly believes that, in a gadget-filled world, scientific literacy is becoming an
essential requirement for everyday life. It is the duty of a scientist to disseminate scientific knowl-
edge to the general public. He has done so through articles and interviews in the popular media,
making documentary films on archaeological sites that are rich in science, offering institutes for
underprivileged and underrepresented middle school students to pursue careers in science
and technology, and lecturing about science for the general public. There are about 120 arti-
cles/reports about his activities in the popular media. These include press releases from Reuters,
the Press Trust of India, articles in The Washington Post, Family Life, The Scientists, The Post Stan-
dard, The Palladium Times, India Abroad, India West, The South Asian Times, Hinduism Today,
AramcoWorld, Organiser, and radio interviews.
Index
Abū Ma‘sher, 83
Adelard of Bath, 34, 51, 53, 57, 82, 87, 149
Agni-Purān. a, 15
Ahi ˙msā, 128
Al-Andalusī, S. ā‘id, 51, 55, 82, 86, 143, 149,
150
Al-Battānī, 3, 54, 86
Al-Jāh. iz, 8, 50, 52, 149
Al-Bīrūnī, 8, 23, 31, 56, 63, 64, 76, 80, 93,
99, 102, 147
Al-Khwārizmī, 8, 34, 50, 53, 81, 83, 87, 149
Al-Kindī, 144
Al-Majrīt.ī, 54, 82, 86, 87
Al-Mas‘udī, 16, 149
Al-T. abarī, 144
Al-Uqlidīsī, 53, 149
Almagest, 83, 86
Āpastambā-Śulbasūtra, 49
Apollonius of Tyana, 13, 149
Archimedes, 2
Aristotle, 2, 52, 73, 103
Arjuna, 19, 43
Arthaśāstra, 19, 22, 76, 90, 95, 101, 103, 106,
114
Āryabhat.a I, 5, 6, 8, 15, 41, 66, 68, 92
Aśoka, 22, 119, 128, 148
Atharvaveda, 13, 30, 75, 91, 101, 118, 132
Bakhshālī, 34
Baudhayāna-Śulbasūtra, 41
Bayt al-H. ikma, 81
Bhagavad-Gītā, 8, 79, 118, 157
Bhārdvāja, 123
Bhīs.ma, 122
Biomimicry, 25
Bonaparte, Napoleon, 4
Boyle, Robert, 2, 3
Brahmgupta, 68, 81–83, 87, 150
Cakravyūha, 43
Caraka-Sa ˙mhitā, 5, 7, 20, 101, 103, 117,
127, 128, 130, 132, 134, 136, 145
Chandāh. -sūtra, 33, 36
Chāndogya-Upanis. ad, 14, 27, 61, 67, 89, 90,
101, 114, 124
Clement of Alexandria, 151
Copernicus, 6, 69, 81, 87, 149, 150
da Vinci, Leonardo, 3
Daśaratha, 22
Daśaharā, 77
Democritus, 2, 4
Descartes, 3
Dhātu, 101, 144
Dincarayā, 134
Dolomieu, Déodat, 4
Babylon, 31, 52, 75, 154
Bacon, Roger, 3, 51, 57, 143
Egypt, 56, 96, 112, 130, 154
Emerson, 17, 154, 156
Eusebius, 152
Faraday, Michael, 2
Fibonacci, 9
Fourier, Joseph, 4
Galileo, 2, 3, 162
Ga ˙ngā, 119, 120
Grosseteste, Robert, 3
Hippocrates, 2
Holī, 77
Huygens, Christiaan, 3
Ibn al-Haytham, 3
Ibn Labbān, 8, 50, 52, 149
Ibn Sīnā, 3, 5, 52, 149
Indus-Sarasvatī, 127
Janaka, 19
Kan. āda, 4, 7, 9, 90, 94–96, 99
Kanaka, 56, 83, 86
Kathā-Upanis. ad, 13, 26
Kātyāyana-Śulbasūtra, 48
Kaurava, 43
Kaut.ilaya, 19, 90, 95, 101, 108, 114, 131, 144
Kepler, 2, 3, 81, 87, 100, 150
Ketu, 64, 84
Khalīl wa Dimna, 150
Khandakhadyaka, 87
Khan. d. a-Khādyaka, 150
Kr.s.n. a, 120, 149, 157
Laks.man. a, 138
Laks.mī, 120
Lalitvistara, 19
Leucippus, 4
Macaulay, T. B., 3
Mahābhārata, 8, 43, 65, 97, 120, 122, 123
Manu-Smr. ti, 122, 123
Margenau, 5
Mārkan. daya-purān. a, 90
Megasthenes, 17
Monier-Williams, Monier, 3
Müller, Max, 3
Nāgārjuna, 55, 146, 148
Naks. atra, 61, 78, 84
Nālandā, 54, 68, 147, 163
Nārada, 14, 57
Needham, 3
Newton, Isaac, 2, 3
Pañca-karma, 132, 135, 136
Pān. d. ava, 43
Paramān. u, 97, 98
Patañjali, 24, 26, 155
Plastic surgery, 136, 138
Plato, 2, 13, 52, 153
Plotinus, 13, 52, 149, 153
Polo, Marco, 144
Prakr. ti, 118, 121, 122, 133
Pr. thivī, 118, 133
Pythagoras, 2, 13, 47, 149, 155
Rāhu, 64, 66, 84, 85
Rāma, 22, 138
Rāvan. a, 138
Rāmāyan. a, 100, 101, 138
Redouté, Joseph, 4
R. gveda, 26, 61, 63, 65, 79, 81, 101, 103, 113,
124, 127, 128, 137, 147
Royle, John F., 144
Sanatkumāra, 14
Sanskrit, 3
Sarasvatī, 120
Śāstrārtha, 9, 19
Sebokht, 52
Śilājīta, 134
Sītā, 120
Śiva, 10, 120
Socrates, 13
Srimad-Bhāgvatam, 79, 92, 97, 98
Śulbasūtra, 40, 43, 47, 155
Sūrpan. akhā, 138
Suśruta-Sa ˙mhitā, 5, 101, 103, 127, 136, 140,
141, 145
Sylvester, Pope, 58, 60, 149, 150
T. abaqāt al-‘Umam, 1
Thales, 2
Thoreau, 18, 156
Upanis. ad, 127, 153
Vaiśes. ika-Sūtra, 7, 9, 15, 90, 92, 94, 96, 97,
99, 122
Varāhamihira, 55, 64, 68, 73, 80, 84
Vāyu-Purān. a, 97, 98
Vedas, 15, 18, 30, 67, 80, 132, 158
Vis.n. u, 65, 120, 157
Vis. n. u-Purān. a, 67, 79, 100, 157
Viśvāmitra, 22
Yājñavalkya, 19
Yajurveda, 61, 114, 117
Yudhis.t.hira, 122
Yuga, 70, 78
Zarqālī, 1
Zīj al-Sindhind, 35, 41, 81, 82, 87
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=9340651.pdf&bkn=9340650&pdfType=book
|
Biologically Inspired Design
A Primer
Torben A. Lenau, Danmarks Tekniske Universitet
Akhlesh Lakhtakia, The Pennsylvania State University
As the existence of all life forms on our planet is currently in grave danger from the climate emergency
caused by Homo sapiens, the words “sustainability” and “eco-responsibility” have entered the daily-use
vocabularies of scientists, engineers, economists, business managers, industrialists, capitalists, and policy
makers. Normal activities undertaken for the design of products and systems in industrialisms must
be revamped. As the bioworld is a great resource for eco-responsible design activities, an overview of
biologically inspired design is presented in this book in simple terms for anyone with even high-school
education.
Beginning with an introduction to the process of design in industry, the book presents the bioworld
as a design resource along with the rationale for biologically inspired design. Problem-driven and
solution-driven approaches for biologically inspired design are described next. The last chapter is
focused on biologically inspired design for environment.
ABOUT SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis Digital Library of
Engineering and Computer Science. Synthesis Lectures provide concise original presentations
of important research and development topics, published quickly in digital and print formats.
For more information, visit our website: http://store.morganclaypool.com
Biologically Inspired Design
A Primer

Synthesis Lectures on Engineering, Science, and Technology
Each book in the series is written by a well known expert in the field. Most titles cover subjects
such as professional development, education, and study skills, as well as basic introductory
undergraduate material and other topics appropriate for a broader and less technical audience.
In addition, the series includes several titles written on very specific topics not covered
elsewhere in the Synthesis Digital Library.
Biologically Inspired Design: A Primer
Torben A. Lenau and Akhlesh Lakhtakia
2021
Engineering Design: An Organic Approach to Solving Complex Problems in the Modern
World
George D. Catalano and Karen C. Catalano
2020
Integrated Process Design and Operational Optimization via Multiparametric
Programming
Baris Burnak, Nikolaos A. Diangelakis, and Efstratios N. Pistikopoulos
2020
The Art of Teaching Physics with Ancient Chinese Science and Technology
Matt Marone
2020
Scientific Analysis of Cultural Heritage Objects
Michael Wiescher and Khachatur Manukyan
2020
Case Studies in Forensic Physics
Gregory A. DiLisi and Richard A. Rarick
2020
iv
An Introduction to Numerical Methods for the Physical Sciences
Colm T. Whelan
2020
Nanotechnology Past and Present
Deb Newberry
2020
Introduction to Engineering Research
Wendy C. Crone
2020
Theory of Electromagnetic Beams
John Lekner
2020
The Search for the Absolute: How Magic Became Science
Jeffrey H. Williams
2020
The Big Picture: The Universe in Five S.T.E.P.S.
John Beaver
2020
Relativistic Classical Mechanics and Electrodynamics
Martin Land and Lawrence P. Horwitz
2019
Generating Functions in Engineering and the Applied Sciences
Rajan Chattamvelli and Ramalingam Shanmugam
2019
Transformative Teaching: A Collection of Stories of Engineering Faculty’s Pedagogical
Journeys
Nadia Kellam, Brooke Coley, and Audrey Boklage
2019
Ancient Hindu Science: Its Transmission and Impact on World Cultures
Alok Kumar
2019
Value Rational Engineering
Shuichi Fukuda
2018
v
Strategic Cost Fundamentals: for Designers, Engineers, Technologists, Estimators,
Project Managers, and Financial Analysts
Robert C. Creese
2018
Concise Introduction to Cement Chemistry and Manufacturing
Tadele Assefa Aragaw
2018
Data Mining and Market Intelligence: Implications for Decision Making
Mustapha Akinkunmi
2018
Empowering Professional Teaching in Engineering: Sustaining the Scholarship of
Teaching
John Heywood
2018
The Human Side of Engineering
John Heywood
2017
Geometric Programming for Design Equation Development and Cost/Profit
Optimization (with illustrative case study problems and solutions), Third Edition
Robert C. Creese
2016
Engineering Principles in Everyday Life for Non-Engineers
Saeed Benjamin Niku
2016
A, B, See... in 3D: A Workbook to Improve 3-D Visualization Skills
Dan G. Dimitriu
2015
The Captains of Energy: Systems Dynamics from an Energy Perspective
Vincent C. Prantil and Timothy Decker
2015
Lying by Approximation: The Truth about Finite Element Analysis
Vincent C. Prantil, Christopher Papadopoulos, and Paul D. Gessler
2013
Simplified Models for Assessing Heat and Mass Transfer in Evaporative Towers
Alessandra De Angelis, Onorio Saro, Giulio Lorenzini, Stefano D’Elia, and Marco Medici
2013
vi
The Engineering Design Challenge: A Creative Process
Charles W. Dolan
2013
The Making of Green Engineers: Sustainable Development and the Hybrid Imagination
Andrew Jamison
2013
Crafting Your Research Future: A Guide to Successful Master’s and Ph.D. Degrees in
Science & Engineering
Charles X. Ling and Qiang Yang
2012
Fundamentals of Engineering Economics and Decision Analysis
David L. Whitman and Ronald E. Terry
2012
A Little Book on Teaching: A Beginner’s Guide for Educators of Engineering and
Applied Science
Steven F. Barrett
2012
Engineering Thermodynamics and 21st Century Energy Problems: A Textbook
Companion for Student Engagement
Donna Riley
2011
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo
2011
Systems Engineering: Building Successful Systems
Howard Eisner
2011
Fin Shape Thermal Optimization Using Bejan’s Constructal Theory
Giulio Lorenzini, Simone Moretti, and Alessandra Conti
2011
Geometric Programming for Design and Cost Optimization (with illustrative case study
problems and solutions), Second Edition
Robert C. Creese
2010
Survive and Thrive: A Guide for Untenured Faculty
Wendy C. Crone
2010
Geometric Programming for Design and Cost Optimization (with Illustrative Case Study
Problems and Solutions)
Robert C. Creese
2009
vii
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey and Jeffrey W. Holmes
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Analog Multimedia
Explorations
Lina J. Karam and Naji Mounsef
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Digital Multimedia and
Robotics Explorations
Lina J. Karam and Naji Mounsef
2008
CAD/CAM of Sculptured Surfaces on Multi-Axis NC Machine: The DG/K-Based
Approach
Stephen P. Radzevich
2008
Tensor Properties of Solids, Part Two: Transport Properties of Solids
Richard F. Tinder
2007
Tensor Properties of Solids, Part One: Equilibrium Tensor Properties of Solids
Richard F. Tinder
2007
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
2007
Project Management for Engineering Design
Charles Lessard and Joseph Lessard
2007
Relativistic Flight Mechanics and Space Travel
Richard F. Tinder
2006
Copyright © 2021 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations
in printed reviews, without the prior permission of the publisher.
Biologically Inspired Design: A Primer
Torben A. Lenau and Akhlesh Lakhtakia
www.morganclaypool.com
ISBN: 9781636390475 paperback
ISBN: 9781636390482 ebook
ISBN: 9781636390499 hardcover
DOI 10.2200/S01064ED1V01Y202012EST014
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING, SCIENCE, AND TECHNOLOGY
Lecture #14
Series ISSN
Print 2690-0300 Electronic 2690-0327
Biologically Inspired Design
A Primer
Torben A. Lenau
Danmarks Tekniske Universitet
Akhlesh Lakhtakia
The Pennsylvania State University
SYNTHESIS LECTURES ON ENGINEERING, SCIENCE, AND
TECHNOLOGY #14
ABSTRACT
As the existence of all life forms on our planet is currently in grave danger from the climate emer-
gency caused by Homo sapiens, the words “sustainability” and “eco-responsibility” have entered
the daily-use vocabularies of scientists, engineers, economists, business managers, industrialists,
capitalists, and policy makers. Normal activities undertaken for the design of products and sys-
tems in industrialisms must be revamped. As the bioworld is a great resource for eco-responsible
design activities, an overview of biologically inspired design is presented in this book in simple
terms for anyone with even high-school education.
Beginning with an introduction to the process of design in industry, the book presents the
bioworld as a design resource along with the rationale for biologically inspired design. Problem-
driven and solution-driven approaches for biologically inspired design are described next. The
last chapter is focused on biologically inspired design for environment.
KEYWORDS
bioinspiration, biomimicry, biomimetics, bioreplication, bionik, bionics, nature-
inspired design, circular economy, contraindicated performance, design for envi-
ronment, eco-efficiency, engineered biomimicry, multifunctionality, sustainability
Dedicated to sustainable societies
Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix

1   Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
    1.1   References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

2   What is Design? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
    2.1   Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
    2.2   Design Thinking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
    2.3   The Design Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
    2.4   Design Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
          2.4.1   Task Clarification . . . . . . . . . . . . . . . . . . . . . . . 11
          2.4.2   Function Analysis . . . . . . . . . . . . . . . . . . . . . . . . 13
          2.4.3   Design Brief and Product Specification . . . . . . . . . . . . . 15
          2.4.4   Conceptual Design . . . . . . . . . . . . . . . . . . . . . . . . 15
          2.4.5   Concept Evaluation . . . . . . . . . . . . . . . . . . . . . . . 17
          2.4.6   Toward Detailed Design . . . . . . . . . . . . . . . . . . . . . 17
    2.5   References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

3   Engineered Biomimicry: Solutions from the Bioworld . . . . . . . . . . . . . . 21
    3.1   The Case for Engineered Biomimicry . . . . . . . . . . . . . . . . . . . 21
    3.2   Engineered Biomimicry . . . . . . . . . . . . . . . . . . . . . . . . . . 22
          3.2.1   Bioinspiration . . . . . . . . . . . . . . . . . . . . . . . . . 23
          3.2.2   Biomimetics . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
          3.2.3   Bioreplication . . . . . . . . . . . . . . . . . . . . . . . . . 24
    3.3   Examples of Engineered Biomimicry . . . . . . . . . . . . . . . . . . . . 24
          3.3.1   Bioinspired Computational Techniques . . . . . . . . . . . . . . 24
          3.3.2   Biomimetic Production of Human Insulin . . . . . . . . . . . . . 26
          3.3.3   Bioreplicated Visual Decoys of Insects . . . . . . . . . . . . . 29
    3.4   Design Teams for Bioworld Solutions . . . . . . . . . . . . . . . . . . . 32
    3.5   References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

4   Rationale for Biologically Inspired Design . . . . . . . . . . . . . . . . . . 37
    4.1   Energy Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
    4.2   Circular Economy of Materials . . . . . . . . . . . . . . . . . . . . . . 38
    4.3   Multifunctionality . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
    4.4   Multicontrollability . . . . . . . . . . . . . . . . . . . . . . . . . . 40
    4.5   Suboptimality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
    4.6   Contraindicated Performance . . . . . . . . . . . . . . . . . . . . . . . 41
    4.7   References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

5   Problem-Driven Biologically Inspired Design . . . . . . . . . . . . . . . . . . 47
    5.1   Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
    5.2   Phases of Problem-Driven BID . . . . . . . . . . . . . . . . . . . . . . 48
          5.2.1   First Phase: Problem Analysis . . . . . . . . . . . . . . . . . . 49
          5.2.2   Second Phase: Search . . . . . . . . . . . . . . . . . . . . . . 54
          5.2.3   Third Phase: Understand . . . . . . . . . . . . . . . . . . . . . 55
          5.2.4   Fourth Phase: Transfer . . . . . . . . . . . . . . . . . . . . . 56
          5.2.5   Fifth Phase: Design . . . . . . . . . . . . . . . . . . . . . . . 57
    5.3   Engineers and Biologists . . . . . . . . . . . . . . . . . . . . . . . . 58
    5.4   References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

6   Solution-Driven Biologically Inspired Design . . . . . . . . . . . . . . . . . 61
    6.1   Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
    6.2   Examples of Solution-Driven BID . . . . . . . . . . . . . . . . . . . . . 62
          6.2.1   Mycelium Bio-Composites . . . . . . . . . . . . . . . . . . . . . 62
          6.2.2   Bombardier-Beetle Spray . . . . . . . . . . . . . . . . . . . . . 63
          6.2.3   Tubercles for Flow Control . . . . . . . . . . . . . . . . . . . 65
          6.2.4   Abalone-Shell Armor . . . . . . . . . . . . . . . . . . . . . . . 66
    6.3   Steps for Solution-Driven BID . . . . . . . . . . . . . . . . . . . . . . 68
          6.3.1   Application Search . . . . . . . . . . . . . . . . . . . . . . . 68
          6.3.2   Eight-Step Procedure . . . . . . . . . . . . . . . . . . . . . . 70
    6.4   References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

7   Biologically Inspired Design for the Environment . . . . . . . . . . . . . . . 77
    7.1   Sustainability and the Environment . . . . . . . . . . . . . . . . . . . 77
    7.2   Matter of Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
    7.3   Sustainable Practices from Nature . . . . . . . . . . . . . . . . . . . . 80
    7.4   Circular Economy of Materials . . . . . . . . . . . . . . . . . . . . . . 81
    7.5   Mutually Beneficial Coexistence . . . . . . . . . . . . . . . . . . . . . 82
    7.6   Energy Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
    7.7   Design Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
          7.7.1   Environmental Guidelines . . . . . . . . . . . . . . . . . . . . 84
          7.7.2   Circular Design . . . . . . . . . . . . . . . . . . . . . . . . . 85
          7.7.3   Impact Assessment . . . . . . . . . . . . . . . . . . . . . . . . 87
    7.8   Grafting “Biologically Inspired Design” onto “Design for Environment” . . 87
    7.9   References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

Authors’ Biographies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Preface
This primer on biologically inspired design (BID) was initiated during a sabbatical semester
spent by Akhlesh Lakhtakia at Danmarks Tekniske Universitet (DTU) during the second half of
2019, at the invitation of Torben A. Lenau. The close collaboration between both of us resulted
not only in the descriptions of BID approaches and the case stories required to make the reading
of this book interesting to undergraduate students enrolled for BID courses, but it also made
a collaboration possible with Daniela C. A. Pigosso and Tim C. McAloone for grafting BID
onto design for environment. The combination of the two design foci makes it possible to tap into
the enormous knowledge bank that the bioworld represents and apply well-proven solutions in
the quest to secure sustainable societies and ecosystems on our planet.
Torben A. Lenau started teaching BID to engineering students at DTU in 2009. More
than 400 students have marched through the course since then. The course is focused on the
problem-driven approach to BID illustrated by around a hundred case studies.
The solution-driven approach to BID complements the problem-driven approach. Both
approaches are treated in separate chapters of this book. They are described in sufficient detail to allow practi-
tioners as well as students to follow and apply the approaches to their own BID activities. As
this book explains BID in simple terms for anyone with even high-school education, we hope
that not only engineering and design students but also members of the general public interested
in sustainability will profit from the time they will spend on reading this primer on BID.
Torben A. Lenau and Akhlesh Lakhtakia
January 2021
Acknowledgments
Torben A. Lenau thanks the many students of Danmarks Tekniske Universitet (DTU) who
took his course 41084 Biologically Inspired Design over the years for providing the empirical
experience and contextual setting that stimulated the development of methodological support
tools. He is also highly grateful for insightful discussions with and support from his wife Ingrid.
Akhlesh Lakhtakia is grateful to the Trustees of The Pennsylvania State University for a
sabbatical leave of absence, the Otto Mønsted Foundation for partial financial support, and the
Department of Mechanical Engineering, DTU, for gracious hospitality during the Fall 2019 semester.
He also thanks Mercedes for wonderful spousal support during that period.
Both of us are grateful to Daniela C. A. Pigosso and Tim C. McAloone for discussions
on grafting biologically inspired design onto design for environment. We thank Patrick D. McAtee
for several suggestions as well as for alerting us to several errors in a draft manuscript, and the
staff of Morgan & Claypool for splendid cooperation in producing this book.
Torben A. Lenau and Akhlesh Lakhtakia
January 2021
CHAPTER 1

Definitions
“Begin at the beginning,” the King said, very gravely,
“and go on till you come to the end: then stop.”
Lewis Carroll, Alice in Wonderland (1865)
First things first, we must begin with definitions. This is all the more necessary for a rapidly
emerging area such as engineered biomimicry, which encompasses both basic research on
outcomes and mechanisms of diverse phenomena displayed by living organisms and the appli-
cation of fundamental principles uncovered by that basic research to devise useful processes and
products [1]. Engineered biomimicry can thrive in an industrialism, which is a society replete
with manufacturing industries for mass production of a diverse array of products.
Biomimicry lies within the ambit of engineered biomimicry. Although the two terms
are often used as synonyms of each other, biomimicry additionally incorporates the attributes of
sustainability evinced by the bioworld. Sustainability is defined as the maintenance of natural
resources for ecological balance; hence, present-day needs are satisfied without endangering the
ability of future generations to do the same [2]. Sustainability mandates the formation of those
industrial ecosystems that are founded on the principles of circular economy. The main out-
puts, byproducts, and wastes of every segment of a circular economy become inputs to one or
more of the other segments of that economy, thereby minimizing the overall resource inputs to
the circular economy [3]. The inter-relationships of engineered biomimicry, biomimicry, sus-
tainability, and industrialism are schematically depicted in Fig. 1.1.
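As a purely illustrative aside (not taken from the cited literature), the routing of outputs and wastes in a circular economy can be sketched in a few lines of Python; every segment name and quantity below is hypothetical.

    # Toy circular economy: the outputs and wastes of one segment feed other segments,
    # so the external ("virgin") resource input to the whole economy is minimized.
    # Segment names and quantities are invented for illustration.
    segments = {
        # segment: (resources it needs, resources it emits as outputs/wastes)
        "farming":     ({"compost": 4.0, "water": 6.0}, {"crop_waste": 3.0}),
        "bioplastics": ({"crop_waste": 3.0},            {"packaging": 2.0}),
        "retail":      ({"packaging": 2.0},             {"organic_waste": 5.0}),
        "composting":  ({"organic_waste": 5.0},         {"compost": 4.0}),
    }

    def external_input(segments):
        """Demand that cannot be covered by the outputs of the other segments."""
        supply, demand = {}, {}
        for needs, outputs in segments.values():
            for resource, amount in outputs.items():
                supply[resource] = supply.get(resource, 0.0) + amount
            for resource, amount in needs.items():
                demand[resource] = demand.get(resource, 0.0) + amount
        return {r: max(0.0, demand[r] - supply.get(r, 0.0)) for r in demand}

    print(external_input(segments))
    # Only water remains as an external input; every other flow is closed inside the loop.

In a linear economy, each of these demands would instead be met from virgin resources and each output would end up as waste; closing the loops is what drives the external resource input down.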
Design and manufacture are the two main engineering activities in any industry. Accord-
ingly, engineered biomimicry encompasses both biologically inspired design and manufacture,
as depicted in Fig. 1.2. The scope of biologically inspired design is the formulation of de-
sign strategies to reproduce desirable outcomes, mechanisms, and structures from the bioworld.
A manufacturing action may or may not be provenanced in the bioworld.
The history of Homo sapiens is marked by numerous approaches to the solution of engineer-
ing problems based on solutions from the bioworld. These approaches of engineered biomimicry
can be classified as bioinspiration, biomimetics, and bioreplication, as shown also in Fig. 1.2.
The goal in bioinspiration is to reproduce a biological outcome without reproducing the
underlying physical mechanism(s) and the biological structure(s). As an example, powered flying
machines were inspired by birds in self-powered flight. But airplanes do not flap their wings like
birds, and the tails of birds are horizontal unlike the vertical tails of airplanes. Rotorcraft do
not fly like birds either. But these engineered structures do reproduce the natural outcome of
moving from one location to another without being in physical contact with the ground.

Figure 1.1: Engineered biomimicry and biomimicry within the contexts of sustainable actions
and mass production.

Figure 1.2: Conceptual anatomy of engineered biomimicry.
Biomimetics is the reproduction of a physical mechanism responsible for a specific func-
tionality exhibited by a biological structure. The classic example of biomimetics is Velcro™ com-
prising dense assemblies of hooks and loops, the former emulating the hooked barbs on a bur-
dock seed and the latter the fur of a furry animal. When a furry animal brushes against a burdock
seed, the hooks get fastened to the fur. The International Standards Organization (ISO) has
formulated a set of criteria for whether a product can be considered as biomimetic [4]. The criteria
relate to the biomimetic design process which was applied to develop the product (an illustrative
encoding of the criteria is sketched after the list below) and require
that
(i) a function analysis has been performed on an available biological system,
(ii) the essential mechanisms in that biological system have been abstracted into a model, and
(iii) the model has been transferred and applied to design the product.
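The illustrative encoding mentioned above is sketched below in Python; it is ours, not ISO's or the book's, and all field and variable names are invented. It merely records whether the three criteria have been met, using the Velcro™ example from the text.

    from dataclasses import dataclass

    # Minimal record of the three ISO 18458-style criteria for calling a product
    # biomimetic (illustrative only; the field names are not from the standard).
    @dataclass
    class BiomimeticClaim:
        biological_system: str
        function_analysis_done: bool          # criterion (i)
        mechanisms_abstracted_to_model: bool  # criterion (ii)
        model_applied_in_design: bool         # criterion (iii)

        def qualifies(self) -> bool:
            return (self.function_analysis_done
                    and self.mechanisms_abstracted_to_model
                    and self.model_applied_in_design)

    velcro = BiomimeticClaim(
        biological_system="hooked barbs of burdock seeds fastening to animal fur",
        function_analysis_done=True,
        mechanisms_abstracted_to_model=True,
        model_applied_in_design=True,
    )
    print(velcro.qualifies())  # True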
Bioreplication is the direct replication of a structure found in a biological organism
in order to reproduce one or more functionalities exhibited by the biological structure copied.
Decoys created by nanoscale replication of the actual elytra of a female emerald ash borer for the
purpose of sexually attracting male emerald ash borers provide an example of bioreplication [5].
The term biomaterial refers to either a material harvested from a biological organism
to be used for the same purpose as in the organism or an artificial material used for a biological
purpose. In the latter case, the term biocompatible material is also used. Bovine milk is a
biomaterial of the first kind, being produced in the bioworld as a nutrient for calves and also
used by humans as food. Prostheses for human hips and knees are made of biomaterials of the
second kind.
Biomanufacturing uses a biological process to produce a synthetic product. Thus, synthetic
insulin is produced by inserting the human insulin genes into open loops of bacterial DNA so as
to close those loops; the closed loops are inserted into bacteria, which multiply rapidly in a fermen-
tation chamber; and the insulin is then harvested from the bacteria produced in that chamber.
The bacterium Escherichia coli is commonly used for this purpose, although the yeast (a fungus)
Saccharomyces cerevisiae often replaces the bacteria in this biomanufacturing process [6]. A specific
biomanufacturing process has at least one component that is either biomimetic or bioreplicatory.
A structure is multifunctional if it can perform two or more distinct functions that
are not highly related to each other [7]. An example of multifunctionality is displayed in the
bioworld by skin, which contains the organism, defines its shape and size, hosts a variety of sen-
sors, and may be used to camouflage as well as to advertise. The fuselage of an aircraft functions
both as a thermal isolator and an acoustic isolator. The well-known Swiss Army™ knife is a
multifunctional tool.
The output of a multicontrollable structure can be controlled independently by more
than one mechanism [8]. As natural examples: the same sound can be uttered using two or three
different configurations of the tongue and the buccal cavity, and multiple modes of locomotion
can be used by an organism to propel itself from one location to another.
1.1 REFERENCES
[1] A. Lakhtakia and R. J. Martín-Palma (Eds.), Engineered Biomimicry, Elsevier, Waltham,
MA, 2013. DOI: 10.1016/c2011-0-06814-x. 1
[2] M. Mulligan, An Introduction to Sustainability: Environmental, Social and Personal Perspec-
tives, 2nd ed., Routledge, Abingdon, Oxford, UK, 2018. DOI: 10.4324/978131588852.
1
[3] W. R. Stahel, Circular Economy: A User’s Guide, Routledge, Abingdon, Oxford, UK, 2019.
1
[4] ISO 18458:2015, Biomimetics—Terminology, Concepts and Methodology, International
Standards Organization, Geneva, Switzerland, 2015. https://www.iso.org/standard/
62500.html DOI: 10.3403/30274979. 3
[5] M. J. Domingue, A. Lakhtakia, D. P. Pulsifer, L. P. Hall, J. V. Badding, J. L. Bischof, R. J.
Martín-Palma, Z. Imrei, G. Janik, V. C. Mastro, M. Hazen, and T. C. Baker, Bioreplicated
visual features of nanofabricated buprestid beetle decoys evoke stereotypical male mating
flights, Proceedings of U.S. National Academy of Sciences, 111:14106–14111, 2014. DOI:
10.1073/pnas.1412810111. 3
[6] N. A. Baeshen, M. N. Baeshen, A. Sheikh, R. S. Bora, M. M. M. Ahmed, H. A. I.
Ramadan, K. S. Saini, and E. M. Redwan, Cell factories for insulin production, Microbial
Cell Factories, 13:141, 2014. DOI: 10.1186/s12934-014-0141-0. 3
[7] A. Lakhtakia, From bioinspired multifunctionality to mimumes, Bioinspired, Biomimetic
and Nanobiomaterials, 4:168–173, 2015. DOI: 10.1117/12.2258683. 3
[8] A. Lakhtakia, D. E. Wolfe, M. W. Horn, J. Mazurowski, A. Burger, and P. P. Banerjee,
Bioinspired multicontrollable metasurfaces and metamaterials for terahertz applications,
Proceedings of SPIE, 10162:101620V, 2017. DOI: 10.1117/12.2258683. 3
CHAPTER 2

What is Design?
It is not enough that we build products that function, that are
understandable and usable, we also need to build products
that bring joy and excitement, pleasure and fun,
and, yes, beauty to people’s lives.
Donald A. Norman (2004)1
2.1 INTRODUCTION
Design has been around for as long as humans have created things. Design and making were not
separate until the rise of the age of factories, since the craft-person designed the product while
making it [1]. For example, a potter would make a pot by working with clay without first making
drawings. This was possible as long as the product was simple and the production process was
implemented close to the people using the product. However, modern products are usually very
complicated and are often produced at locations far away from their users.
This development engendered the need for more formalized design activity whereby
designers analyze user needs and create documentation so that the product can be later man-
ufactured by others elsewhere. The documentation must be detailed and accurate in specifying
form, materials, dimensions, and other variable parameters.
A design activity need not be formal and often it is not; however, it must be effective.
Methods and tools are therefore developed to improve the likelihood of matching user needs to
a good new product whose production is cost effective and which can be expediently disposed
of after use. In writing this book, we expect that biologically inspired design will help
designers identify good solution principles and even obtain detailed inputs for how to realize
the product structure and functionality.
Apart from finding solutions to functional needs, design is also about product appearance
and the messages the product sends. This is clearly obvious for clothing and automobiles, because
high premiums are paid for exclusive looks. Many automobiles are designed to communicate the
impression of speed and power. This is done by borrowing design features from animals with
those characteristics. For example, automobile headlights are designed to remind the bystander
of the eyes of tigers or lions. Cute animals inspire children’s toys, and sports equipment draws
on visual inspiration from agile animals such as cheetahs. Biological inspiration for product
appearance is a huge area, but this book is focused on how to utilize functional solutions found
in the bioworld.

1 D. A. Norman, Introduction to this special section on beauty, goodness, and usability, Human-Computer Interaction,
19:311–318, 2004.
2.2 DESIGN THINKING
The concept of design thinking is often invoked to distinguish design activities from scientific
problem solving wherein underlying principles are uncovered systematically to find optimal so-
lutions. In contrast, design thinking requires multiple explorations to identify a range of possible
solutions from which a satisfactory one is selected.
The major difference in the thought processes of scientists and designers was exposed in
an experiment more than four decades ago [2]. A group of fifth-year students from architecture
and a similar group from science were asked to arrange building blocks with colored sides with
the goal of maximizing the number of sides of a specific color. The results suggested that science
students selected blocks in order to discover the structure of the problem, whereas architecture
students generated sequences of blocks until a combination proved acceptable.
Design thinking is claimed to be suitable for solving an ill-posed problem by sketching
several possible solutions to understand it from different viewpoints [1, 3]. It calls for a mindset,
as can be seen from analyzing the preferred ways of working of many designers. The mindset
includes a strong user focus and the will to understand the core of the problem by generating a
large space of many solutions. Several of these solutions will be visualized and even prototyped
before a solution is finally selected for production.
2.3 THE DESIGN OBJECT
From a first look, it seems obvious that the product is the design object. However, further
analysis clarifies that the design object also includes other elements such as single components
or parts within the product; the overall system within which the product functions; and the
non-material services associated with its merchandizing, use, and eventual disposal.
When designing, the goal is to produce a thing to satisfy a need. This thing can be a phys-
ical product such as a toothbrush or an automobile, or it can be a service such as linen laundry
in a hotel. Clearly, the complexity of the design process varies with the type of the product or
service to be designed. The delimitation of the design object is therefore important. The tooth-
brush is a single component even though it is permanently assembled from a plastic handle
and several clumps of brushing hairs. On the other hand, an automobile is a larger collection
of single components that are configured in subsystems which together are assembled into the
complete product: the automobile. However, an automobile is part of a larger system includ-
ing gas stations, repair shops, roads, and parking spots, which together are necessary to provide
the transportation functionality to the user. Furthermore, the product is part of a larger con-
text which has a major impact on how the product is designed. Automobiles of different types
are made to satisfy needs in diverse contexts, as exemplified by minivans to transport families
with children, mobile homes for leisure activities, mobile workshops for mechanics, and taxis to
transport visitors with luggage.
The bioworld shows similar features. Organisms of many different types co-exist in a
mutualistic relationship system that is the prerequisite for the existence of a single organism in
that system. In other words, the system comprises its constituent organisms as subunits with
specific roles and interfaces to the rest of the organisms. Removal of organisms of a certain
type from the system can seriously alter, and even demolish, the latter. In the same way, each
organism consists of several organs and other subunits with specific roles and interfaces to the
rest of the organism. When seeking inspiration from the bioworld, it is therefore beneficial to
also look at the larger system that the organism is part of.
One major difference between the bioworld and design activity must be noted. A new
feature in an organism arises in the bioworld as a result of random modifications of parental
DNA. Most of these mutations are either inconsequential or harmful, but a certain mutation
may confer reproductive success in the prevailing environment. That mutation becomes more
prevalent in succeeding generations. A new species emerges in consequence of a succession of
numerous mutations, which makes sudden innovation impossible in the bioworld [4], the oc-
currence of elevated emergence rates of new species in the fossil record [5, 6] notwithstanding.
For example, a marine species cannot evolve into an avian one through just one mutation. In
contrast, although design activity is greatly limited by the availability of materials, tools, and
expertise, disruptive innovation is possible by the interjection of a radically new concept. As
an example, the emergence of smartphones in 1992 from their predecessor telephones was a single-step achievement inasmuch as a smartphone possesses a touch screen, can email, store notes, keep a calendar, and run diverse apps and widgets that would become widespread within
a decade. Furthermore, smartphones began to provide very convenient access to the internet,
thereby taking away a market segment from laptop manufacturers [7].
2.4 DESIGN PROCESS
While there is a general agreement that every design process starts with a user need and is
expected to end with a solution, there are many models for structuring, organizing, and docu-
menting design processes. The Pahl–Beitz model shown in Fig. 2.1 encompasses the following
stages in sequence: task clarification, development of concepts (i.e., principal solutions), pre-
liminary layout, definitive layout, and documentation [8]. Even though the model is sequential,
it is recognized that many iterative loops will be made if the result of an activity is unsatisfactory.
The Cross model shown in Fig. 2.2 is organized so the different stages in a design activity
form a circle [1]. The model makes it more apparent that design is an iterative activity wherein
all decisions are revisited several times before a good final solution is found. Both the Pahl–Beitz
model and the Cross model require a function analysis to be undertaken before the design is
specified. In the Pahl–Beitz model, function analysis takes place during concept development,
Figure 2.1: The Pahl–Beitz model of systematic design activity [8].
Figure 2.2: The Cross model of systematic design activity [1].
with problem identification linked to the search for working principles. In the Cross model,
function analysis is used to analyze the overall design problem and break it into subproblems.
A difference between the two models is the explicit focus on alternatives in the Cross model.
Generating alternative solutions is an important way to figure out how the design problem is
best solved. The Pahl–Beitz model, of course, recognizes this matter but the linear format of the
model does not invite consideration of alternatives.
Illustrated in Fig. 2.3, the Tjalve model is a sequential model of design activity contain-
ing iterative loops [9]. With more emphasis on concept development than in the previous two
models, the Tjalve model encompasses the stages of problem analysis; identification of main
functions; identification of sub-functions and means; formulation of the basic structure of the
product; quantification of the product structure; delimitation of materials, dimensions, and sur-
faces; and the overall form of the product. Whereas the Pahl–Beitz and Cross models are well
suited for managing and coordinating design processes, the Tjalve model aims at guiding the
designer in creative activities.
Indeed, the Tjalve model defines the product in terms of five basic attributes. These are
the structure of the product with its constituent elements and relations, along with the form,
material, dimensions, and surface of each element. Thus, this model is a journey from an abstract
description of the product using functions toward gradually more and more concrete descriptions
of the overall structure and the constituent elements. Solutions are identified for each function,
the arrangement of the solutions being called the basic structure. The basic structure is typically
described using symbolic graphs rather than drawings illustrating the appearance of the prod-
uct. The quantified structure developed thereafter contains dimensions as well as the physical
arrangement of the constituent elements.
Figure 2.3: The Tjalve model of systematic design activity [9].
The Tjalve model emphasizes the need for alternatives and gives detailed inspiration for
how to systematically explore different basic structures that will satisfy the functional require-
ments. Several alternatives are also generated for the quantified structure, wherein the con-
stituent elements can be configured differently in relation to each other. A useful division be-
tween the total form of the product and the forms of the constituent elements allows for the
search for partial solutions for single functions which later are combined into the total solution.
Finally, the integrated-product-development model shown in Fig. 2.4 emphasizes that
product design is not done in isolation but in parallel and close collaboration with market- and
production-oriented activities [10]. While designers consider the type of product to design, the
marketing team investigates competing products and determines whether there is room for a new
Figure 2.4: The integrated-product-development model of systematic design activity [10].
product in the market, and the production team investigates diverse options for manufacturing
the product.
2.4.1 TASK CLARIFICATION
A design process can be initiated by many different triggers [11]. A common trigger is the user
need for a good solution, inadequate and barely adequate solutions being unsatisfactory. Another
trigger is the introduction of a new technology, as exemplified by the emergence of social media
triggered by the introduction of smartphones. Yet another trigger is the economic affordability
of a technology. For instance, the low prices of efficient batteries have caused a boom in the use
of electrical scooters and bicycles.
Once a user need has been identified, the design task has to be articulated. This is most
often done either verbally or in writing, but a powerful alternative way is to provide diagrams. In
order not to restrict the design process, the focus should be on highlighting the need but not on
describing how it has been solved previously. This can be done by describing the consequences
of satisfying the need through before/after pictures like the ones commonly used in advertise-
ments for diet pills and other weight-reduction regimens. By visualizing the need, the designer
avoids fixation on existing solutions and becomes receptive to innovative solutions. Being readily
understood, graphics are also effective in communicating with stakeholders.
Many design processes actually involve re-design of existing products. Re-design can be
initiated either for improved functionality or to alleviate shortcomings experienced in existing
products. A detailed analysis of how users behave in the situations calling for the use of a product
Figure 2.5: Three stages in the use of a venflon catheter for injecting a polymer tube in a vein.
(1) A metal needle labeled A penetrates the skin and guides a soft polymer tube labeled B into
the vein. (2) The metal needle is retracted and disposed of, leaving the polymer tube in the vein.
(3) The polymer tube is ready for use.
and interact with that product is typically carried out by the marketing department, but personal
experiences of the designers will often lead to better results. Many companies therefore encour-
age their designers to directly meet users in order to understand their needs and constraints as
well as how the users will actually interact with the product. User contact is also valuable for
getting feedback on design proposals comprising sketches and/or prototypes.
An example of re-design is furnished by new types of the venflon catheter. A soft polymer
tube in the venflon catheter is injected into a vein with the help of a stiff metal needle, as shown
in Fig. 2.5. The metal needle is retracted after the polymer tube is in place and is then disposed
of. Nurses revealed in interviews that the retraction as well as the disposal of the metal needle
are problematic for them. Not only do two extra processes have to be carried out, but the sharp
needle also represents a hazard. For this reason, the venflon catheter is nowadays equipped with
a small safety device which prevents the nurse from touching the needle tip after it has been
retracted.
When analyzing the needs and constraints during a design activity, it can be advantageous
to meet not only the direct users but also other stakeholders such as sales personnel, repair and
maintenance personnel, and other persons who will come in contact with the product. Face-to-
face interviews, questionnaires requiring both qualitative and quantitative answers, and personal
observations provide insights. Personal experience with the product can yield similar benefits, but
it also carries the risk of introducing bias. The observations and experiences of a designer are
not necessarily the same as those of users. Observations are valuable since they reveal the true
behavior of a user. When interviewed, users tend to give more favorable descriptions of their use
patterns.
The results obtained from the analysis of user needs and existing products are described in
a user-need document. This document can include information on the actual use (and misuse) of
the existing products and the context of use. Sometimes, the context is explained using personas
which are descriptions of typical users and their use patterns.
2.4.2 FUNCTION ANALYSIS
A way to stimulate creativity and generate new and better ways of solving problems is to for-
mulate an abstract description of how a product functions. Instead of describing a product as an
assemblage of its components, it can be described as a set of abstract functions. Another advan-
tage is to avoid product fixation which is a risk when the names of previously used components
are used. If a product is described in terms directly coupled to a specific action or form, the
designer may become fixated on a specific solution and find it difficult to imagine alternatives.
For example, a concrete functional description such as “drive a person from point A to point
B” will fixate the designer in thinking of vehicles, whereas the abstract functional description
“transport a person from point A to point B” will foster more open thinking so that a wider
palette of solutions can emerge, e.g., conveyor belts and horseback riding.
A product function is normally formulated using a verb/noun combination, such as “con-
taining a liquid” or “cutting a piece of paper.” A complete description of the functions of and
within a product can be made using a functions-means tree diagram [1, 9]. The sole trapezoid
at the top of the diagram describes the main function that justifies why the product
exists or should exist, whereas sub-functions describe what the constituent elements do. Means
are physical manifestations that carry out a function. Both functions and means are explained in
general terms without considering details such as shape or dimension. Figure 2.6 is a functions-
means tree diagram in which the main function and sub-functions pertinent to drug delivery are
identified along with the various means to accomplish each function. Note a difference in the
way functions and means are described. All of the associated sub-functions are required for the
realization of the main function. The more means that are recorded under a function, the more
numerous are the ways of realizing that function.
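A functions-means tree lends itself to a simple software representation: a mapping from each function to the candidate means recorded under it. The Python sketch below encodes an illustrative reading of a few entries of the drug-delivery example in Fig. 2.6; the specific entries and the helper function are added here only to make the structure concrete, not to reproduce the figure in full.

# Illustrative functions-means tree for drug delivery (subset of Fig. 2.6).
# Keys are functions (verb/noun phrases); values are lists of candidate means.
functions_means = {
    "to deliver medicine": {
        "sub_functions": {
            "to transfer liquid": ["oral intake",
                                   "injection through the skin",
                                   "absorption through the skin"],
            "to penetrate skin": ["cutting process",
                                  "tearing process"],
        }
    }
}

def count_means(tree):
    """Count the candidate means recorded under every sub-function."""
    total = 0
    for main_function, body in tree.items():
        for sub_function, means in body["sub_functions"].items():
            print(f"{sub_function}: {len(means)} means")
            total += len(means)
    return total

count_means(functions_means)  # more means per function => more ways to realize it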
A functional surface identifies where a specific function resides within the product [9].
A functional surface is typically marked on a sketch of the design object using hatched lines, as
illustrated in Fig. 2.7. The figure shows that the two knife edges in a pair of scissors can be
marked with hatched lines to indicate where the cutting function resides. Similarly, the holding
function resides in a handle. The hatched lines do not invoke specific shapes, thus leaving the
design assignment more open to innovation. A functional surface indicates the existence of
an interface from the product to something else.
Another notion used to describe the functionality of a product in an abstract way is that
of an organ [11]. Like functional surfaces, an organ does not include information about shape
and materials, but it does describe in abstract terms how a function is carried out. Two or more
Figure 2.6: A functions-means tree diagram for ways of delivering medicine inside a patient.
Each trapezoidal block contains a function, each rectangular block a means.
Figure 2.7: Examples of functional surfaces and organs for a pair of scissors.
functional surfaces can be combined into an organ. For example, the two cutting surfaces in a
pair of scissors form a cutting organ.
Another example of an organ is the sealing organ in a container such as a bottle or a jar. A
sealing organ can be realized as a lid with sealing surfaces in the lid and on the container. Using
the term “lid” will automatically bring up mental pictures of existing solutions for bottles and
jars. But referring to a “sealing organ” instead will make it easier to think freely of conceptually
different solutions. The designer could then propose a flexible bottle where the opening is closed like a bag, or the sealing organ could be a valve.
2.4.3 DESIGN BRIEF AND PRODUCT SPECIFICATION
A design assignment is typically specified using two different documents: the design brief
and the product specification. The design brief is a visionary document that explains the
context of the intended product and how future users will benefit, without going into details of
the specification. The design brief is targeted toward the conceptual-design phase discussed in
Section 2.4.4.
The product specification includes more detailed descriptions and is targeted toward the
design phases that follow the conceptual-design phase. The product specification can be formu-
lated in various ways. In the performance-specification approach [1], several requirements are
formulated, e.g., the performance metrics that the product must satisfy and the performance
characteristics that it must exhibit. For instance, the product must be manufactured in a certain
range of colors and/or that it must be small enough to be stored in a standard storage space. The
performance-specification approach is useful as it delivers a checklist to ensure that a design
proposal is within acceptable limits. However, this approach is less helpful when comparing
different design proposals and existing solutions.
The product-specification approach [10] overcomes that problem. In this approach, both
requirements and criteria are stated. The requirements are fixed and must be met by the design
proposal; otherwise, the design proposal is unviable. The criteria are used to compare different
design proposals using evaluation matrices (Section 2.4.5) and need to be formulated to indicate
a desirable direction but not set limits. For a new toothbrush as an example, the requirements
could include maximum dimensions and color; the criteria could specify that the toothbrush
should be pleasant to the mouth, be easy to clean, and have a low weight.
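The distinction between requirements and criteria can be made concrete with a minimal sketch. The Python fragment below checks a hypothetical toothbrush proposal against fixed requirements and then reports its scores on directional criteria; all names and numerical values are invented for illustration and are not part of the product-specification method itself.

# Hypothetical product specification for a toothbrush (illustrative values only).
requirements = {               # fixed limits: must all be met, otherwise the proposal is unviable
    "max_length_mm": 190,
    "allowed_colors": {"white", "blue", "green"},
}
criteria = ["mouth_feel", "ease_of_cleaning", "low_weight"]   # desirable directions, not limits

proposal = {"length_mm": 180, "color": "blue",
            "scores": {"mouth_feel": 4, "ease_of_cleaning": 3, "low_weight": 5}}

def meets_requirements(p):
    return (p["length_mm"] <= requirements["max_length_mm"]
            and p["color"] in requirements["allowed_colors"])

if meets_requirements(proposal):
    # criteria are only compared across proposals; here we simply report the scores
    print({c: proposal["scores"][c] for c in criteria})
else:
    print("proposal rejected: a requirement is not met")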
2.4.4 CONCEPTUAL DESIGN
Conceptual design is the creative process of generating ideas for solving the overall design
problem and finding partial solutions to each of the functional challenges identified in the
functions-means tree diagram.
A typical way to come up with ideas is to form a brainstorming team. There are many prac-
tical approaches [1] for brainstorming, but all have in common that as large a number of ideas
be generated as possible and that criticism of ideas be avoided during brainstorming sessions.
Different brainstorming approaches introduce different ways of viewing the design problem;
hence, using several different approaches increases the number of ideas.
Brainstorming can be done by single persons on their own, but doing it together with
other people will drastically increase the chance of finding new ideas because participants will
inspire each other. Also, the brainstorming session can have many different formats. One format
involves a whiteboard on which a moderator writes up all the ideas that the participants propose.
In this way, a participant needs to articulate each of his/her ideas so that it is intelligible to
the moderator, which also facilitates the uptake of that idea by other participants to generate
additional as well as downstream ideas. Another brainstorming format lets each participant write
down their ideas on colored post-it notes that are then stuck on a whiteboard. The advantage
of this format is the delivery of multiple ideas by each participant, but there is a risk that the
participants do not see the ideas of others and do not use those ideas to generate more ideas.
Brainstorming is a creative idea-generation method whereby the participants are not re-
quired to follow rigid rules or constraints. In contrast, systematic idea-generation methods set
rigid rules for the participants, those rules possibly stimulating the participants to examine is-
sues that they would not have otherwise thought of. Examples of systematic idea-generation
methods include analysis of existing or competing solutions, TRIZ, and biologically inspired
design.
Analysis of existing or competing solutions requires identification of their weaknesses and
strengths so that a comparative assessment may be made of the opportunities offered by each.
TRIZ is the Russian acronym for the Theory of Inventive Problem Solving which is a
method for developing innovative solutions [12]. It is based on a large (39 × 39) matrix of contradictory features (e.g., larger in volume but lighter in weight). Each field in the matrix contains a list of possible solutions developed from analyses of a very large number of patents.
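Purely as an illustration of how such a contradiction matrix is consulted, the Python sketch below maps a pair of contradictory features to a list of suggested solution principles. The two cells and their entries are placeholders and do not reproduce the published 39 × 39 matrix.

# Placeholder excerpt of a TRIZ-style contradiction matrix (not the published values).
# Keys: (feature to improve, feature that worsens); values: suggested solution principles.
contradiction_matrix = {
    ("volume", "weight"): ["segmentation", "composite materials"],
    ("strength", "weight"): ["local quality", "preliminary action"],
}

def suggest_principles(improving, worsening):
    return contradiction_matrix.get((improving, worsening), ["no entry recorded"])

print(suggest_principles("volume", "weight"))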
Biologically inspired design emerges from the thought that the bioworld offers a palette of
solutions to numerous technological problems. A function analysis must be made of a candidate
structure or process in the bioworld, the relevant principles must be extracted from that analysis,
and finally those principles must be applied to the problem being addressed in the design activity.
It is important to differentiate between ideas and concepts. An idea in creative design
work is basically just a principle for how to solve a problem. Application of that principle in a
specific context transforms the idea into a concept that satisfies the context-specific constraints
and conditions. For example, the idea of using a lid to seal a container becomes a concept in the
context of closing a beer bottle, a context-specific constraint being that the lid must be able to
withstand the pressure within the bottle.
Conceptual design normally starts by an examination of partial solutions to each of the
functions required in the product. It is assumed that the superposition principle applies so the
partial solutions can be combined to form the overall solution. One way of combining partial
solutions into an overall solution is by using a morphology chart [1]. This chart is a table con-
taining lists of possible partial solutions for each of the functions required in the product. The
partial solutions can be described in words but even better with small icons for easier communi-
cation within the design team. A candidate overall solution can be formulated by selecting one
partial solution for each function.
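Because a candidate overall solution is simply one partial solution chosen per function, a morphology chart can be explored exhaustively with a few lines of code. The Python sketch below uses a hypothetical three-function chart; the functions and partial solutions are invented for illustration.

from itertools import product

# Hypothetical morphology chart: one row per function, listing partial solutions.
chart = {
    "transfer liquid": ["gravity feed", "syringe pressure", "capillary action"],
    "penetrate skin":  ["metal needle", "micro-needle array"],
    "fixate skin":     ["stretch by hand", "adhesive ring"],
}

# Each candidate overall solution picks one partial solution per function.
candidates = [dict(zip(chart, combo)) for combo in product(*chart.values())]
print(len(candidates), "candidate overall solutions")   # 3 * 2 * 2 = 12
print(candidates[0])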
2.4.5 CONCEPT EVALUATION
The diverse concepts gathered for a given design activity can be comparatively assessed in dif-
ferent ways. Often, the concepts in the form of prototypes can be ranked by major stakehold-
ers including prospective users, who can also provide feedback in the form of written and oral
comments. A risk in this type of assessment is that the evaluation may be based on parame-
ters other than what is important for the product. For example, a poorly produced prototype
could be ranked poorly even though the underlying concept could lead to superior performance.
To avoid such biased evaluations, the major criteria must be clearly formulated in the product
specification and used for structuring the interviews with stakeholders and users.
A good tool for concept evaluation is the evaluation matrix which can be one of two
types [1]:
(i) the comparison matrix, also referred to as the Pugh matrix, and
(ii) the rating matrix, also called the weighted-objective-method matrix.
The comparison matrix compares each concept with a reference product, which is typically an existing product. For every criterion in the product specification, each concept is rated better (+1), worse (−1), or similar (0) relative to the reference product, and the ratings are added to produce the overall rating for the concept. The major advantage of the comparison matrix is its simplicity, which makes it easy to understand and discuss its results. A limitation is that all criteria need to be of equal importance since they are weighted equally.
That limitation is taken care of in the rating matrix. Each criterion is given a weight w de-
pending on its importance. For every criterion in the product specification, each concept is given
a numerical score. The overall rating for the concept is found by multiplying the score for each criterion by its weight and summing the products. The rating matrix should
produce a better comparison of the diverse concepts than the comparison matrix. However, the
rating matrix is more complicated than the comparison matrix, and stakeholders and users may
find it harder to understand and accept the outcomes of a comparison.
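Both matrices amount to short computations, as the Python sketch below shows for two hypothetical concepts: a Pugh-style comparison against a reference product using ratings of +1, -1, and 0, followed by a weighted rating. The criteria, weights, and scores are invented for illustration.

criteria = ["pleasant to mouth", "easy to clean", "low weight"]

# Pugh comparison matrix: each concept rated +1 / -1 / 0 against the reference product.
pugh = {
    "concept A": [+1, 0, -1],
    "concept B": [+1, +1, 0],
}
for name, ratings in pugh.items():
    print(name, "comparison total:", sum(ratings))

# Rating matrix: weight w per criterion, numerical score per concept.
weights = [0.5, 0.3, 0.2]                      # relative importance of each criterion
scores = {
    "concept A": [4, 3, 2],
    "concept B": [5, 4, 4],
}
for name, s in scores.items():
    overall = sum(w * x for w, x in zip(weights, s))
    print(name, "weighted rating:", round(overall, 2))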
2.4.6 TOWARD DETAILED DESIGN
What has been described so far in this chapter is normally referred to as conceptual design. The
results are design concepts describing the product at a general level without going into detail
about materials, shapes, and dimensions. Conceptual-design proposals are often made as hand
drawings to signal that they are not finished and many details can still be altered. The more
detailed design work is done thereafter: the product embodiment is carefully planned, typically
using software for computer-aided design; precise dimensions are determined; tolerances are
added; materials are selected; and manufacturing techniques are specified.
Besides drawings by hand and the more detailed drawings produced using computer-aided
design tools, physical models are often made. Depending on their fidelity and purpose, physical
models are referred to using different terms. A rudimentary model using rough materials such as
cardboard, clay, and paper is typically called a mock-up. It serves to demonstrate only a few rele-
vant aspects of the product such as physical size and how it interfaces to other products. When a
physical model appears close to the final product, it is referred to as a visual model. Another type
of physical model is a functional model, which only serves to demonstrate that a given principle
and embodiment will actually fulfill the requirements. Finally, prototypes are traditionally used
to denote models that come very close to the final product but are typically made as one-offs.
However, the terminology is drifting and many designers use the term prototype as a synonym
for a mock-up or a functional model.
While all of these more detailed design activities are important elements in the overall
design process, we will not go into more detail with them in this book. Biologically inspired
design is typically incorporated in the conceptual-design phase.
2.5 REFERENCES
[1] N. Cross, Engineering Design Methods—Strategies for Product Design, Wiley, Chichester,
UK, 2008. 5, 6, 7, 9, 13, 15, 16, 17
[2] B. R. Lawson, Cognitive strategies in architectural design, Ergonomics, 22:59–68, 1979.
DOI: 10.1080/00140137908924589. 6
[3] N. Cross, Design Thinking: Understanding how Designers Think and Work, Berg, Oxford,
UK, 2011. DOI: 10.5040/9781474293884. 6
[4] D. Adriaens, Evomimetics: The biomimetic design thinking 2.0, Proceedings of SPIE,
10965:1096509, 2019. DOI: 10.1117/12.2514049. 7
[5] N. Eldredge and S. J. Gould, Punctuated equilibria: An alternative to phyletic gradual-
ism, Models in Paleobiology, T. J. M. Schopf, Ed., pages 82–115, Freeman Cooper, San
Francisco, CA, 1972. 7
[6] M. J. Benton and P. N. Pearson, Speciation in the fossil record, Trends in Ecology and
Evolution, 16:405–411, 2001. DOI: 10.1016/s0169-5347(01)02149-8. 7
[7] C. M. Christensen, M. Raynor, and R. McDonald, What is disruptive innovation?, Har-
vard Business Review, 93(12):44–53, 2015. 7
[8] G. Pahl, W. Beitz, J. Feldhusen, and K.-H. Grote, Engineering Design: A Systematic Ap-
proach, 3rd ed., Springer, London, UK, 2007. DOI: 10.1007/978-1-84628-319-2. 7, 8
[9] E. Tjalve, A Short Course in Industrial Design, Butterworth, London, UK, 1979. DOI:
10.1016/C2013-0-00824-9. 9, 10, 13
[10] M. M. Andreasen and L. Hein, Integrated Product Development, IFS (Publications) Ltd.,
Kempston, UK, 1987. 10, 11, 15
[11] M. M. Andreasen, C. T. Hansen, and P. Cash, Conceptual Design: Interpretations, Mindset
and Models, Springer, Cham, Switzerland, 2015. DOI: 10.1007/978-3-319-19839-2. 11,
13
[12] J. F. V. Vincent, O. A. Bogatyreva, N. R. Bogatyrev, A. Bowyer, and A.-K. Pahl,
Biomimetics: Its practice and theory, Journal of the Royal Society Interface, 3:471–482, 2006.
16
CHAPTER 3
Engineered Biomimicry:
Solutions from the Bioworld
If a group of engineers, mindful of our need to tap natural energy sources,
were to embark on designing a machine that would pump water out of
the ground over an area of 100 square meters continuously, and would
boil off the water into steam, using only the energy directly from the
sun for the whole process, it is possible that they might do it. But
their finished machine would certainly never resemble a tree!
Eric R. Laithwaite (1988)1
Although we humans have long been envious of feats of performance displayed by a variety of an-
imal species [1], and we have been creative in emulating and even surpassing some of those feats,
biomimicry began to acquire an organizational framework only during the 1990s. Coinage of
the term biomimetics is usually attributed to Otto Schmitt during the late 1950s [2]. The simi-
lar term biomimesis coined during the next decade [3] does not have much currency nowadays.
The term bionics, once synonymous with biomimetics [4], is nowadays employed in English
exclusively for the science and practice of replacing an organ in a living being with a prosthesis. The
umbrella term biomimicry has come to subsume its precedents, although one (namely, bionics)
survives as bionik in German.
Biomimicry opens “the possibility of a new industrialism that is more attuned to nature’s
needs” [5] and therefore intersects with the discipline of sustainable design. As discussed
in Chapter 1, engineered biomimicry does not require consideration of sustainability. In this
chapter, we first lay out the case for engineered biomimicry, then present a few representative ex-
amples, identify some characteristics of the solutions available in the bioworld for technological
problems, and finally discuss the importance of having biologists on design teams for bioworld
solutions.
3.1 THE CASE FOR ENGINEERED BIOMIMICRY
Charles Darwin used the word evolve only once in the first edition [6] and just 16 times in the
sixth edition [7] of his most famous book On The Origin of Species. Instead, he used the term
1E. R. Laithwaite, Gaze in wonder: an engineer looks at biology, Speculations in Science and Technology, 11:341–345, 1988.
descent with modification to describe the origin of new species. Most traits of a child are
derived from those of its parents, but some modifications may occur.
Later scientists realized that genes are the vehicles for heritability or descent and that
imperfect replication of parental DNA results in random modifications called mutations. Most
mutations are either inconsequential or harmful, but a certain mutation may confer reproductive
success in the prevailing environment. That mutation becomes more prevalent in succeeding
generations, the process being called natural selection.
Whereas mutations are random, natural selection is not. Only those mutations that lead
to better adaptation to altering or altered environments are successful. A continuum of mor-
phological varieties thus arises in a species. A series of successful mutations, genetic transfer
from one population to another as a result of migration, and random changes in the frequencies
of certain genes are mechanisms which eventually result in a new species that does not have
morphological intermediates between it and the older species.
As of now, about 1.3 million species have been identified, but some 86% of terrestrial
species and 91% of marine species are estimated to still await description [8]. Add the 4 billion
species that are estimated to have gone extinct [9] since life began on our planet some 4 billion
years ago [10]. Each of those species can be considered as being successful for a certain period,
dying out only when the environmental conditions could no longer sustain it.
The success of any mutation cannot be predicted and there is no prescient agency for nat-
ural selection. Still, looking at the history of the bioworld, both recent and in the prehistoric
past, we may regard all species as data points in a multidimensional space. The mutually orthog-
onal axes of this space are physical variables (such as ambient temperature, ambient pressure,
and mass density) and performance characteristics (such as speed of locomotion, longevity, and
fecundity). Each species as a data point represents a successful experiment.
Since the laws of physics hold sway over every biological process just as completely as over
every technological operation, we should then consider the bioworld as a repository of answers
to billions of technological questions [11]. Some of those answers may not be optimal for our
technological requirements but can still illuminate possible research directions. Other answers
may be used by us without much fuss. Furthermore, the bioworld contains a plethora of processes, some of which can be replicated either partially or wholly in industrial operations. No wonder
humans have long been inspired by attractive outcomes and functionalities evident in plants and
animals.
3.2 ENGINEERED BIOMIMICRY
Engineered biomimicry encompasses both basic research on outcomes and mechanisms of di-
verse phenomena displayed by living organisms and the application of fundamental principles
uncovered by that basic research to devise useful processes and products. Engineered biomimicry
is classified into bioinspiration, biomimetics, and bioreplication, as shown in Fig. 3.1 [12], based
Figure 3.1: Classification of engineered biomimicry into bioinspiration, biomimetics, and
bioreplication.
on whether outcomes, mechanisms, or structures in the bioworld are targeted for reproduction in
technoscientific settings.
3.2.1 BIOINSPIRATION
Ancient stories provide numerous examples of the human desire to fly. After rescuing two chil-
dren from a sacrificial altar, a flying ram became the constellation Aries in Greek mythology.
Zeus, the king of Greek gods, had a winged steed named Pegasus. Quetzalcoatl, the Aztec god
of wind and learning, was a winged serpent. Hindu mythology is replete with flying chariots
and palaces. Mohammad, the prophet of Islam, was flown to heaven by a white mule-donkey
hybrid named Burāq.
Some 500 years ago, Leonardo Da Vinci (1452–1519) studied birds to conceptualize sev-
eral flying contraptions which evidently never took off. Sir George Cayley (1773–1857) made a
pilotless glider that did fly in 1804. Orville and Wilbur Wright were the first to successfully fly
a heavier-than-air machine with a person onboard, on December 17, 1903. The emergence of
aeroplanes inspired by birds in self-powered flight is an excellent example of bioinspiration, but
birds and aeroplanes have different flying mechanisms. The goal in bioinspiration is to reproduce
a biological outcome but not the underlying biological mechanism(s) and structure(s).
3.2.2 BIOMIMETICS
Biomimetics is the reproduction of a physical mechanism responsible for a specific outcome or
functionality exhibited by a biological structure. Greek mythology furnishes the classical exam-
ple of biomimetics through Icarus, a flying human who escaped from a Cretan prison using
wings made of feathers and wax. Sadly, he perished after the wax melted when he flew too close
to the sun.
A modern example is that of insulin, a hormone produced naturally in mammalian pan-
creas but nowadays modified and synthesized in either yeasts or Escherichia coli bacteria [13, 14].
Yet another example of biomimetics is Velcro™ that comprises dense assemblies of hooks and
loops, the former emulating the hooked barbs on a burdock seed and the latter, the fur of a furry
animal. The commercialization of this biomimetic analog of a natural mechanism of adhesion
is a fascinating story of determination [15].
3.2.3 BIOREPLICATION
Bioreplication is the direct replication of a structure found in a biological organism in order
to reproduce one or more functionalities exhibited by the biological structure copied. During
the last ten years, diverse physical techniques have been harnessed to replicate several biological
structures such as the eyes and wings of several types of insects [16]. The techniques include the
sol-gel method; atomic layer deposition; physical vapor deposition; and some combination of
imprint lithography, casting, and stamping [17]. Some of these techniques are more suitable for
reproducing surface features, others for bulk three-dimensional structures.
3.3 EXAMPLES OF ENGINEERED BIOMIMICRY
3.3.1 BIOINSPIRED COMPUTATIONAL TECHNIQUES
Every multicellular organism contains one or more networks in which information is sensed,
transmitted, processed, transmitted again, and then acted upon. Relying on physical and chem-
ical phenomena, all of these processes are quantitative and therefore may be mathematically
modeled by us, albeit not always easily.
Mathematical models of many biological processes employ differential equations to re-
late spatial and temporal gradients of physical quantities, such as the concentrations of some
chemicals, partial pressure of various fluids, and the electric charge density transported by ions.
Initial and boundary conditions therefore must be concurrently considered [18, 19]. Successful
examples include models of oxygen-deficient dermal wounds [20] and cancer growth [21].
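As a generic illustration of such differential-equation models, and not of the specific models reported in [20, 21], the Python fragment below integrates a logistic growth equation, dC/dt = r C (1 - C/K), with a simple Euler step; the rate r and the carrying capacity K are arbitrary.

# Euler integration of logistic growth dC/dt = r*C*(1 - C/K); values are arbitrary.
r, K = 0.4, 100.0           # growth rate per day, carrying capacity
C, dt = 1.0, 0.1            # initial quantity and time step (days)
for step in range(int(30 / dt)):    # simulate 30 days
    C += dt * r * C * (1.0 - C / K)
print(round(C, 1))          # approaches K as the quantity saturates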
Often, the data gathered about a biological process is both discrete and huge, as exem-
plified by tumor growths [22] and neuronal activity [23]. To analyze these data, mathematical
methods commonly used for time series [24] and dynamical systems [25] are pressed into service.
The two foregoing paragraphs provide examples of mathematical methods applied to un-
derstand biological processes. Are some mathematical methods to analyze non-biological phe-
nomena inspired by the bioworld? An affirmative answer to that question has emerged in modern
times [26]. Inspired by the structure of animal brains, artificial neural networks (ANNs)
are being used for pattern recognition tasks, including speech recognition, machine translation,
video games, and traffic control; fuzzy logic seeks to emulate human cognition for automated
decision making; swarm intelligence guides mathematical investigations of emergent phe-
nomena; genetic algorithms are often used for optimization; and so on. Let us focus on two
of these bioinspired computational techniques.
Artificial Neural Networks
ANNs have been inspired by animal brains which are networks of neurons connected to other
neurons through synapses [27, 28]. In an ANN, neurons are replaced by nodes and synapses by
connections, as depicted schematically in the top panel of Fig. 3.2.
Figure 3.2: Top: Schematics for artificial neural networks.
The middle panel of Fig. 3.2 shows two nodes providing inputs I1 and I2 to a node whose output is denoted by O. The output is related to the inputs by a nonlinear function f(x) such that

O = \begin{cases} 0, & w_1 I_1 + w_2 I_2 < b, \\ f(w_1 I_1 + w_2 I_2), & w_1 I_1 + w_2 I_2 \ge b, \end{cases} \qquad (3.1)

The on/off characteristic of real neurons is simulated by the conditionality on the right side of Eq. (3.1), with b as the bias or the threshold value of the argument x of f(x), and the relative importance of the inputs coded through the weights w1 and w2.
An ANN can have several input nodes arranged in a layer and several output nodes ar-
ranged in a different layer. In between is at least one layer of hidden nodes, called thus because
these nodes have no direct connection to: (i) the sensors providing data to the input layer and
(ii) the actuators implementing actions controlled by the output layer. The bottom panel of
Fig. 3.2 shows an ANN in which information moves in the forward direction, i.e., from the
input nodes, through the hidden nodes, to the output nodes. ANNs of other types can have
backward connections and even loops.
Known sets of input-output data are used to train an ANN, i.e., determine the weights.
More training data will determine the weights better (usually but not always), the assumption
being that the ANN learns just like a biological brain. After the ANN is deemed to have learned
enough, it can be fed data to predict the output with confidence.
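A node obeying Eq. (3.1), and a tiny feedforward pass built from such nodes, can be sketched in a few lines of Python. The sigmoid chosen for f(x) is one common option, and the weights and biases below are arbitrary placeholders rather than trained values.

import math

def f(x):
    return 1.0 / (1.0 + math.exp(-x))        # one possible nonlinear function f(x)

def node(inputs, weights, b):
    """A single node implementing Eq. (3.1)."""
    s = sum(w * i for w, i in zip(weights, inputs))
    return 0.0 if s < b else f(s)             # on/off conditionality of Eq. (3.1)

# A minimal feedforward pass: two inputs -> two hidden nodes -> one output node.
# All weights and biases are arbitrary placeholders, not trained values.
I = [0.8, 0.3]
hidden = [node(I, [0.5, -0.2], b=0.1), node(I, [0.9, 0.4], b=0.2)]
output = node(hidden, [1.2, -0.7], b=0.0)
print(output)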
Genetic Algorithms
Genetic algorithms are commonly used to design a device or structure to meet a numerical cri-
terion for performance [29]. The device performance depends on the values of a certain number
(say N ) of characteristic variables. The algorithm begins by randomly selecting M1 > 1 sets of
the N characteristic variables. A performance function denoted by p is calculated for every one
of the M1 sets. If p ≥ b1 for a specific set, where b1 is a threshold value, then that particular set is retained; if not, that set is eliminated. The result is that no more than M1 sets survive to reproduce the next generation comprising M2 new sets.
The simplest reproduction method is mutation, whereby each new set is based on a single surviving set of the previous generation. If the population is being doubled by mutation (i.e., M2 = 2M1), each set of the old generation is reproduced twice, once as itself and once by multiplying its characteristic variables by a random factor. A more complex method of reproduction is crossover, whereby each set of the new generation is based on some combination of the surviving sets of the previous generation. The performance function p is calculated for each one of the M2 sets. If p ≥ b2 for a specific set, where b2 > b1 is a new threshold value, then that particular set is retained; if not, that set is eliminated.
This process of creating new generations continues until a criterion for terminating it is satisfied. At that stage, several devices satisfying the performance criterion p ≥ max{b1, b2, ...} could have been identified. Then comes the task of selecting and making at least one of those devices.
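A minimal sketch of the procedure just described is given below in Python: an initial population of random sets, a threshold that is raised from one generation to the next, and reproduction by mutation in which each surviving set is copied once unchanged and once with its variables multiplied by random factors. The performance function p and all numerical settings are placeholders.

import random

N = 3                                     # number of characteristic variables per set

def p(x):                                 # placeholder performance function
    return -sum((xi - 0.7) ** 2 for xi in x)

def evolve(generations=5, m1=20):
    population = [[random.uniform(0, 1) for _ in range(N)] for _ in range(m1)]
    b = -1.0                              # initial threshold b1 (placeholder value)
    for _ in range(generations):
        survivors = [x for x in population if p(x) >= b]
        if not survivors:                 # nothing met the threshold; stop early
            break
        # doubling by mutation: each survivor kept as itself and as a perturbed copy
        population = survivors + [[xi * random.uniform(0.9, 1.1) for xi in x]
                                  for x in survivors]
        b += 0.2                          # raise the threshold: b2 > b1, and so on
    return max(population, key=p)

print(evolve())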
3.3.2 BIOMIMETIC PRODUCTION OF HUMAN INSULIN
The peptide hormone insulin began to be used in 1922 to treat diabetic patients. In a normal
person, insulin is produced in the pancreas where it is stored well in excess of daily needs. A series
of biochemical reactions in response to elevated concentration of glucose in blood triggers the
release of insulin from the pancreas. Its half-life ranging between four and six minutes, it lasts
outside the pancreas for about an hour, and is eventually cleared by the liver and the kidneys.
The human insulin molecule has 51 amino acids,
its molecular formula being
C257H383N65O77S6. Insulin is produced and stored in the pancreas as a hexamer, i.e., an ag-
gregate of six molecules. The hexamer is very stable. Insulin is released from the pancreas as a
monomer, which acts very rapidly.
Until about three decades ago, virtually all insulin injected into patients was derived from
the glands of either cows or pigs obtained as waste products from abattoirs. Bovine insulin differs
from human insulin in only three amino acids, porcine insulin in just one. Fourteen cattle or 70
pigs had to be slaughtered to harvest enough insulin to last a patient for a year. However, the
responses of some patients were unpredictable and some patients had severe reactions.
Research began in the 1970s for a biomimetic route to synthesize human insulin itself [13].
That research has been wildly successful [14]. The sequence of biochemical reactions in mam-
malian pancreas is replicated in yeasts and bacteria. The reproduction of yeasts and bacteria can
be regulated fairly easily, which then eliminates the need for continually harvesting mammalian
pancreas. Moreover, as the production process is initiated with the human insulin gene, the biomanufac-
tured insulin is maximally compatible with human patients.
Pancreatic Production
The production of a molecule called preproinsulin is encoded in a gene found in chromosome
11 in the nuclei of human cells. A chromosome is a DNA molecule comprising nucleotides
of four different types arranged into two strands that are coupled to each other by hydrogen-
hydrogen bonds. There are also packing proteins in the chromosome to keep the DNA molecule
untangled.
Every nucleotide contains a nitrogenous base. There are four types of nitrogenous bases:
adenine, thymine, guanine, and cytosine. Whereas adenine can form hydrogen bonds only with thymine, guanine can form hydrogen bonds only with cytosine. Thus, ade-
nine and thymine are mutually complementary, and so are guanine and cytosine. The sequence
of bases in one strand of a DNA molecule is matched by the sequence of complementary bases
on the accompanying strand.
Three consecutive bases form a codon. A codon contains the instructions to produce a
protein-creating amino acid. There are 22 protein-creating amino acids. Of the 64 codons pos-
sible, 61 provide instructions for producing 20 of those amino acids. Some amino acids can be
produced by more than one codon. The final two protein-creating amino acids are synthesized
through complex reactions.
A short sequence of amino acids is called a peptide. A long sequence of amino acids is
called a polypeptide or a protein. Three codons are used to indicate the end of an amino-acid
sequence, the start of that sequence being signaled in a more complex way.
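The codon-to-amino-acid mapping can be pictured as a lookup table. The Python sketch below uses a small excerpt of the standard table (only eight codons are included) and translates a coding-strand sequence codon by codon until a stop codon is met; the example sequence is invented.

# Small excerpt of the standard codon table (coding-strand DNA codons).
codon_table = {
    "ATG": "Met", "TGG": "Trp", "AAA": "Lys",
    "GAA": "Glu", "TTT": "Phe",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna):
    """Read codons three bases at a time until a stop codon is reached."""
    peptide = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = codon_table.get(dna[i:i + 3], "?")   # "?" = codon not in this excerpt
        if amino_acid == "STOP":
            break
        peptide.append(amino_acid)
    return peptide

print(translate("ATGAAAGAATTTTGGTAA"))   # ['Met', 'Lys', 'Glu', 'Phe', 'Trp']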
Thus, the DNA molecule in a chromosome comprises two complementary chains of
codons. A gene is a sequence of codons that contains instructions to produce a molecule that
performs a function. Some genes contain instructions to produce proteins, others to produce
different types of RNA molecules. An RNA molecule is a single strand of nucleotides of four
types, each containing either adenine, uracil, guanine, or cytosine (uracil replacing the thymine found in DNA molecules).
The DNA molecule can then be considered as two chains of identical genes, but it also
contains codon sequences that may either have no purpose or whose purpose has yet not been
discovered.
The process of insulin production in pancreatic cells begins when an enzyme called RNA
polymerase, accompanied by molecules called transcription factors, attaches to a region in the
DNA molecule just before the start of the preproinsulin-producing gene. Then the two DNA
strands separate, and RNA nucleotides attach via hydrogen bonds to the nucleotides
in one of the two strands of the DNA molecule until the stop codon is encountered. At that
stage, the RNA molecule dissociates from the DNA strand, and the two strands of the DNA
molecule couple again.
The RNA molecule thus synthesized is called a messenger RNA (mRNA). It has the in-
structions to produce preproinsulin. That process begins when a transfer RNA (tRNA) molecule
and a ribosome attach themselves to the start codon of the mRNA molecule. Depending on the
next codon, the appropriate amino acid attaches itself to the end of the tRNA molecule. The
ribosome then translocates to the next codon, and the next appropriate amino acid attaches
itself to the previous amino acid. This elongation of the amino-acid chain continues until the
stop codon is reached. At that stage, the single-chain preproinsulin molecule is attached to the
original tRNA molecule. The two then dissociate.
A chemical reaction in the endoplasmic reticulum in the pancreas causes the removal
of 12 amino acids from the preproinsulin molecule, which then folds into two linear chains
connected by a peptide. The resulting molecule is called proinsulin. Removal of the connecting
peptide turns the proinsulin molecule into the insulin molecule.
Biomimetic Production of Insulin
This complex process had to be reproduced biomimetically. Researchers chose E. coli, a bac-
terium that contains a circular chromosome [13]. Some strains of E. coli also contain a circular
plasmid, which is a genetic structure that is not a chromosome. The gene INS is responsible for
producing preproinsulin in humans. This gene is inserted in the plasmids of some bacteria, as
shown schematically in Fig. 3.3. As the bacteria with the altered plasmids reproduce in a fermen-
tation chamber, the number of the altered plasmids increases. Biochemical reactions are then
used to harvest proinsulin molecules, which are then converted chemically to insulin molecules.
Some manufacturers use yeasts in place of E. coli.
The dominant mode of reproduction in both types of single-celled organisms is asexual.
In this process, a cell elongates and then divides once, by binary fission in bacteria and by mitotic division in yeasts, to form two cells. Both of these cells are genetically identical to the parent cell.
Figure 3.3: Schematic for biomimetic production of insulin.
The entire biomimetic process is initiated by some copies of INS, but no more are needed
after production begins. The proclivity of single-cell organisms to reproduce rapidly via such cell division
makes the biomimetic production of insulin economically viable.
Fast-acting insulins are produced by slight interchanges of codons in the initiating copies
of the human genes to minimize the tendency to form hexamers. The type of interchange se-
lected regulates the ratio of monomers to hexamers. Intermediate-acting insulins are produced
by adding chemicals that help maintain hexamers. Long-acting insulins are produced by slight
modifications of an amino acid. Thus, a therapeutically significant functionality is imparted to
biomanufactured insulin in comparison to insulin produced in the pancreas.
3.3.3 BIOREPLICATED VISUAL DECOYS OF INSECTS
An industrially scalable bioreplication process with nanoscale fidelity has been devised to pro-
duce visual decoys of females of the buprestid insect species Agrilus planipennis, commonly called
the emerald ash borer (EAB). The decoys are more successful than freshly sacrificed females in
luring males of the species toward attempted copulation followed by electrocution [30, 31],
thereby providing forestry managers a tool to limit the spread of the invasive species.
The emerald ash borer is a native of northeast Asia. Its shipborne arrival in North America
was detected in 2002. That very year, it was identified as devastating ash trees. EAB females
deposit eggs in the bark of ash trees; the EAB larvae chew long meandering tunnels in the
Figure 3.4: Top: Female of the species Agrilus planipennis. Middle: Three types of bioreplicated
decoys produced with an industrially scalable process [30]. Bottom: 3D-printed decoy [35].
trunks as they feed, thereby disrupting the transport of nutrients and water to the leaves; and
adults chew their way back to the bark and exit the trunk [32]. EAB are thriving in North
America in the absence of natural predators and parasitoids. Although their populations spread
about 20 km per year, long-distance transport of wood products allows them to colonize far-
flung areas. Ash wood being used for numerous purposes, the destruction of ash trees is having
a severe economic impact. Furthermore, as other invasive species move into the affected areas,
native species suffer from habitat reduction and the soil chemistry changes [32].
EAB do not have sex pheromones to attract mates, relying instead on visual communica-
tion. Adult EAB are conspicuous by their bright metallic green elytra (hardened forewings), as
shown in the top panel of Fig. 3.4. Adult EAB males patrol tree canopies for adult EAB females
resting and feeding on ash leaves. After seeing a female from as high as 100 cm, a male drops
like a paratrooper toward her and makes vigorous attempts to copulate [33].
A visual decoy looking very similar to an EAB female with its elytra folded over its body
would be necessary to lure EAB males. The decoy’s color must be iridescent green to contrast against the background of ash foliage. Additionally, ∼10-μm surface features present on the elytra must be reproduced on the decoy.
An industrially scalable bioreplication process was therefore devised [30]. This process
involved two major stages. In the first stage, a pair of matching positive epoxy and negative
nickel dies were bioreplicated from a euthanized female EAB. The negative die was made by the deposition of a ∼500-nm-thick conformal film of nickel on the upper surface of the euthanized female EAB in a low-pressure chamber. The nickel thin film was then thickened by electroforming to about 100 μm. The female EAB was then plucked out, leaving behind a negative die with fine-scale features, the conformal film comprising ∼22-nm-diameter nickel grains. A positive die of epoxy was made from the negative die of nickel using several casting steps and the deposition of a conformal thin film of chalcogenide glass.
In the second stage, a sheet of poly(ethylene terephthalate) (PET) was hot stamped be-
tween the pair of matching dies. The PET sheet had been previously coated on the upper side
with a quarter-wave-stack Bragg filter [34] made of two distinct polymers to reflect normally
incident green light and on the lower side by black paint to absorb visible light of all other
colors. Light stamping between the pair of matching dies kept the Bragg filter intact. How-
ever, heavy stamping for better reproduction of the fine-scale features of the elytra pulverized
the Bragg filter, for which reason the lower side of the decoy was spray-painted metallic green,
again to mimic the actual color of the EAB elytra. The middle panel of Fig. 3.4 is a photograph
of bioreplicated decoys of three different types.
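As a rough illustration of the quarter-wave-stack design principle mentioned above, the following sketch computes the layer thicknesses that center the reflection band on green light. The design wavelength and the two refractive indices are assumed values chosen only for illustration; they are not the properties of the polymers actually used for the decoys.

# Sketch: quarter-wave-stack Bragg filter for normally incident green light.
# The design wavelength and refractive indices are assumed values for
# illustration; they are not taken from Refs. [30, 34].

def quarter_wave_thicknesses(center_wavelength_nm, n_high, n_low):
    """Each layer is one quarter of the wavelength inside that layer."""
    return center_wavelength_nm / (4.0 * n_high), center_wavelength_nm / (4.0 * n_low)

lam0 = 540.0                  # nm, assumed center of the green reflection band
n_high, n_low = 1.65, 1.45    # assumed refractive indices of the two polymers
d_high, d_low = quarter_wave_thicknesses(lam0, n_high, n_low)
print(f"High-index layer: {d_high:.1f} nm, low-index layer: {d_low:.1f} nm")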
In a preliminary field experiment, males of the related species A. biguttatus were targeted,
the inter-species attraction having been previously recorded by entomologists. The bioreplicated
decoys were 40% more effective in luring males than dead EAB females [30]. The lower effective-
ness of the dead EAB females is indicative of the suboptimality of many biological phenomena,
as discussed in Section 4.5.
The effectiveness of the bioreplicated decoys was evaluated against that of 3D-printed
decoys, an example of which is shown in the bottom panel of Fig. 3.4. Although EAB males
were almost equally attracted to decoys of both types, they would fly toward and alight on the
bioreplicated decoys for a couple of seconds, but they would break off midway toward the 3D-
printed decoys and veer away. The absence of the ~10-μm surface features on the 3D-printed decoys rendered them insufficiently authentic on closer inspection by the EAB males.
In the third field experiment [31], the bioreplicated decoys were offered to EAB males.
These decoys evoked complete attraction, paratrooper flight, and attempted copulation from
EAB males. Some decoys were electrically wired for alighting males to be electrocuted. The
electrocuting decoys could assist forestry managers in slowing the spread of the pest species.
The bioreplication process for industrial-scale production of these decoys was sped up [36]
by making the negative nickel die from an array of several female EABs instead of only one.
Also, the positive die was eliminated by a decision to fill up the multiple cavities of the negative
die with the thermally curable liquid polymer poly(dimethyl siloxane). Multiple decoys made
simultaneously were painted metallic green.
The tale of EAB decoys is one in which a biological structure is directly replicated by
technoscientists in order to fulfill a societal goal: to eliminate a pest species, or at least reduce its
proliferation. Can this nanoscale bioreplication process also assist biologists in answering certain
questions that cannot be answered otherwise? The answer is a guarded “yes.” For instance, the
spectral ranges of buprestid vision systems could be determined by coloring the decoys red, blue,
or yellow, or even ultraviolet. Of course, humans cannot see ultraviolet, but many insect species
can [37]. The same bioreplication technique could be applied to determine the spectral ranges
of the vision systems of their predator species. Even evolutionary scenarios could be investigated
by determining the aversion or affinity of a predator species to color mutations in a prey species.
3.4 DESIGN TEAMS FOR BIOWORLD SOLUTIONS
The examples of engineered biomimicry presented in some detail in this chapter strongly in-
dicate that this topic transcends the boundary between science and engineering. Until perhaps
the middle of the 19th century, there was no distinction between engineers and scientists. The
explanation of natural phenomena, today considered the domain of scientists, and the commer-
cially viable exploitation of those phenomena for the betterment of the human condition, today
considered the domain of engineers, were conjoint goals of a person who functioned either as a
scientist or as an engineer in different phases of professional life. Sometimes, that person even
functioned concurrently as an engineer and a scientist.
The English word scientist was coined in 1834 for someone dedicated to the pursuit of
new knowledge in any branch of science [38, 39]. In the ensuing decades, scientists were dif-
ferentiated from engineers, the former as the discoverers of new facts in nature and formulators
of potentially verifiable theories to explain those facts, the latter as those who apply scientific
knowledge to solve practical problems at a cost that society can bear.
This differentiation is less pronounced nowadays, especially when multidisciplinary teams
are formed to undertake complex research projects, whether at universities or in industries or
in university-industry consortia. Teams comprise physicists, chemists, materials scientists, me-
chanical engineers, chemical engineers, electrical engineers, medical scientists, etc., as dictated
by project requirements.
The scope of biologically inspired design is the formulation of design strategies to re-
produce desirable outcomes, mechanisms, and structures from the bioworld. The practice of
biologically inspired design requires both scientists and engineers to work collaboratively, just
as for other types of complex research projects with an industrial focus. There is, however, a
crucial issue.
Such a team must have biologists, each of whom specializes in a particular species that exhibits a desirable outcome, mechanism, or structure. But the expertise of these biologists may
not be enough for optimal design. There may be other species—even in other genera, families, orders, and classes—that could be the sources of better designs than the species that inspired the formation of the biologically inspired design project. The independent evolution
of a similar feature in two very different species is called homoplasy, if that feature was not
inherited from a recent common ancestor [40]. The evolution of a feature that has similar form
or function in widely different species attests to the robustness of that feature. But not every
manifestation of a certain feature would be equally efficacious for the process or product to be
designed. A biologist who is focused on a desirable outcome, mechanism, or structure—instead
of a particular species—could therefore guide the other team members to better choices [41].
3.5 REFERENCES
[1] H. D. Wolpert, The world’s top olympians, Engineered Biomimicry, A. Lakhtakia and R. J.
Martín-Palma, Eds., pages xix–xxv, Elsevier, Waltham, MA, 2013. DOI: 10.1016/c2011-
0-06814-x. 21
[2] J. M. Harkness, A lifetime of connections: Otto Herbert Schmitt, 1913–1998, Physics in
Perspective, 4:456–490, 2002. DOI: 10.1007/s000160200005. 21
[3] E. O. Attinger, Biomedical engineering: From biomimesis to biosynthesis, Bioengineer-
ing: An Engineering View, G. Bugliarello, Ed., pages 213–246, San Francisco Press, San
Francisco, CA, 1968. 21
[4] O. H. Schmitt, Where are we now and where are we going? Proceedings of USAF Air Re-
search and Development Command Symposium on Bionics, pages 483–486, WADD Technical
Report 60-600, Dayton, OH, Sept. 1960. 21
[5] E. R. Johnson and J. Goldstein, Biomimetic futures: Life, death, and the enclosure of
a more-than-human intellect, Annals of the Association of American Geographers, 105:387–
396, 2015. DOI: 10.1080/00045608.2014.985625. 21
[6] C. Darwin, On the Origin of Species by Means of Natural Selection, 1st ed., John Murray,
London, UK, 1859. 21
[7] C. Darwin, On the Origin of Species by Means of Natural Selection, 6th ed., John Murray,
London, UK, 1882. 21
[8] C. Mora, D. P. Tittensor, S. Adl, A. G. B. Simpson, and B. Worm, How many species
are there on earth and in the ocean? PLoS Biology, 9:e1001127, 2011. DOI: 10.1371/jour-
nal.pbio.1001127. 22
[9] G. G. Simpson, How many species? Evolution, 6:342, 1952. DOI: 10.2307/2405419. 22
[10] M. S. Dodd, D. Papineau, T. Grenne, J. F. Slack, M. Rittner, F. Pirajno, J. O’Neil, and C.
T. S. Little, Evidence for early life in Earth’s oldest hydrothermal vent precipitates, Nature,
543:60–64, 2017. DOI: 10.1038/nature21377. 22
[11] V. Davidov, Biomimicry as a meta-resource and megaproject, a literature review, Environ-
ment and Society: Advances in Research, 10:29–47, 2019. DOI: 10.3167/ares.2019.100103.
22
[12] A. Lakhtakia and R. J. Martín-Palma (Eds.), Engineered Biomimicry, Elsevier, Waltham,
MA, 2013. 22
[13] J. A. Kehoe, The story of biosynthetic human insulin, Frontiers in Bioprocessing, S. K. Sikdar, M. Bier, and P. W. Todd, Eds., pages 45–49, CRC Press, Boca Raton, FL, 1990. 23, 27, 28
[14] N. A. Baeshen, M. N. Baeshen, A. Sheikh, R. S. Bora, M. M. M. Ahmed, H. A. I.
Ramadan, K. S. Saini, and E. M. Redwan, Cell factories for insulin production, Microbial
Cell Factories, 13:141, 2014. DOI: 10.1186/s12934-014-0141-0. 23, 27
[15] S. D. Strauss, The Big Idea: How Business Innovators Get Great Ideas to Market, pages 14–18,
Dearborn Trade Publishing, Chicago, IL, 2002. 23
[16] D. P. Pulsifer and A. Lakhtakia, Background and survey of bioreplication techniques,
Bioinspiration and Biomimetics, 6:031001, 2011. DOI: 10.1088/1748-3182/6/3/031001.
24
[17] R. J. Martín-Palma and A. Lakhtakia, Nanotechnology: A Crash Course, SPIE Press,
Bellingham, WA, 2010. DOI: 10.1117/3.853406. 24
[18] D. S. Jones, M. J. Plank, and B. D. Sleeman, Differential Equations and Mathematical Bi-
ology, 2nd ed., CRC Press, Boca Raton, FL, 2009. DOI: 10.1201/9781420083583. 24
[19] W. E. Schiesser, Differential Equation Analysis in Biomedical Science and Engineering, Wiley,
Hoboken, NJ, 2014. DOI: 10.1002/9781118705292. 24
[20] C. Xue, A. Friedman, and C. K. Sen, A mathematical model of ischemic cutaneous
wounds, Proceedings of U.S. National Academy of Sciences, 106:16782–16787, 2009. 24
[21] J. S. Lowengrub, H. B. Frieboes, F. Jin, Y.-L. Chuang, X. Li, P. Macklin, S. M. Wise, and
V. Cristini, Nonlinear modelling of cancer: Bridging the gap between cells and tumours,
Nonlinearity, 23:R1–R91, 2010. DOI: 10.1088/0951-7715/23/1/r01. 24
[22] K. A. Rejniak and A. R. A. Anderson, Hybrid models of tumor growth, Wiley Interdisci-
plinary Reviews: Systems Biology and Medicine, 3:115–125, 2011. DOI: 10.1002/wsbm.102.
24
[23] S. J. Schiff, K. Jerger, D. H. Duong, T. Chang, M. L. Spano, and W. L. Ditto, Controlling
chaos in the brain, Nature, 370:615–620, 1994. DOI: 10.1038/370615a0. 24
[24] W. A. Woodward, H. L. Gray, and A. C. Elliott, Applied Time Series Analysis with R, 2nd
ed., CRC Press, Boca Raton, FL, 2017. DOI: 10.1201/b11459. 24
[25] R. J. Brown, A Modern Introduction to Dynamical Systems, Oxford University Press, Oxford,
UK, 2018. 24
[26] W. Banzhaf, Evolutionary computation and genetic programming, Engineered Biomimicry,
A. Lakhtakia and R. J. Martín-Palma, Eds., pages 429–447, Elsevier, Waltham, MA,
2013. DOI: 10.1016/c2011-0-06814-x. 24
[27] W. S. McCulloch and W. Pitts, A logical calculus of the ideas immanent in nervous activity,
Bulletin of Mathematical Biophysics, 5:115–133, 1943. DOI: 10.1007/bf02478259. 24
[28] I. N. da Silva, D. H. Spatti, R. A. Flauzino, L. H. B. Liboni, and S. F. dos Reis Alves,
Artificial Neural Networks: A Practical Course, Springer, Cham, Switzerland, 2016. DOI:
10.1007/978-3-319-43162-8. 24
[29] D. Simon, Evolutionary Optimization Algorithms, Wiley, Hoboken, NJ, 2013. 26
[30] D. P. Pulsifer, A. Lakhtakia, M. S. Narkhede, M. J. Domingue, B. G. Post, J. Kumar,
R. J. Martín-Palma, and T. C. Baker, Fabrication of polymeric visual decoys for the male
emerald ash borer (Agrilus planipennis), Journal of Bionic Engineering, 10:129–138, 2013.
DOI: 10.1016/s1672-6529(13)60207-3. 29, 30, 31
[31] M. J. Domingue, A. Lakhtakia, D. P. Pulsifer, L. P. Hall, J. V. Badding, J. L. Bischof, R. J.
Martín-Palma, Z. Imrei, G. Janik, V. C. Mastro, M. Hazen, and T. C. Baker, Bioreplicated
visual features of nanofabricated buprestid beetle decoys evoke stereotypical male mating
flights, Proceedings of U.S. National Academy of Sciences, 111:14106–14111, 2014. DOI:
10.1073/pnas.1412810111. 29, 31
[32] D. A. Herms and D. G. McCullough, Emerald ash borer invasion of North America:
History, biology, ecology, impacts, and management, Annual Review of Entomology, 59:13–
30, 2013. DOI: 10.1146/annurev-ento-011613-162051. 30
[33] J. P. Lelito, I. Fraser, V. C. Mastro, J. H. Tumlinson, K. Böröczky, and T. C. Baker,
Visually mediated “paratrooper copulations” in the mating behavior of Agrilus planipennis
(Coleoptera: Buprestidae), a highly destructive invasive pest of North American ash trees,
Journal of Insect Behavior, 20:537–552, 2007. DOI: 10.1007/s10905-007-9097-9. 30
[34] N. Dushkina and A. Lakhtakia, Structural colors, Engineered Biomimicry, A. Lakhtakia
and R. J. Martín-Palma, Eds., pages 267–303, Elsevier, Waltham, MA, 2013. DOI:
10.1016/c2011-0-06814-x. 31
[35] M. J. Domingue, D. P. Pulsifer, A. Lakhtakia, J. Berkebile, K. C. Steiner, J. P. Lelito, L.
P. Hall, and T. C. Baker, Detecting emerald ash borers (Agrilus planipennis) using branch
traps baited with 3D-printed beetle decoys, Journal of Pest Science, 88:267–279, 2015. DOI:
10.1007/s10340-014-0598-y. 30
[36] T. Gupta, S. E. Swiontek, and A. Lakhtakia, Simpler mass production of polymeric visual
decoys for the male emerald ash borer (Agrilus planipennis), Journal of Bionic Engineering,
12:263–269, 2015. DOI: 10.1016/s1672-6529(14)60118-9. 31
[37] A. D. Briscoe and L. Chittka, The evolution of color vision in insects, Annual Review of
Entomology, 46:471–510, 2001. DOI: 10.1146/annurev.ento.46.1.471. 32
[38] W. Whewell, On the connexion of the physical sciences, Quarterly Review, 51:54–68, 1834. 32
[39] S. Ross, Scientist: The story of a word, Annals of Science, 18:65–85, 1962. DOI:
10.1080/00033796200202722. 32
[40] T. Pearce, Convergence and parallelism in evolution: A neo-Gouldian account, British
Journal for the Philosophy of Science, 63:429–448, 2012. DOI: 10.1093/bjps/axr046. 33
[41] E. Graeff, N. Maranzana, and A. Aoussat, Engineers’ and biologists’ roles during
biomimetic design processes, towards a methodological symbiosis, Proceedings of the 22nd
International Conference on Engineering Design (ICED19), pages 319–328, Delft, The
Netherlands, Aug. 5–8, 2019. DOI: 10.1017/dsi.2019.35. 33
CHAPTER 4

Rationale for Biologically Inspired Design
Wilful waste makes woeful want.
James Kelly, A Complete Collection of Scottish Proverbs (1721)
Since the laws of physics hold sway over every biological process just as completely as over
every technological operation, the bioworld should be considered as a repository of answers to
billions of technological questions [1]. Some of these answers have already been implemented
by humans. Some answers may not be optimal for our technological requirements but can still
illuminate possible research directions.
The fact is that the bioworld offers a palette of solutions that may be otherwise unavail-
able to humans. An example is furnished by three-dimensional photonic crystals with diamond
crystal structure, which reflect incident electromagnetic waves in a specific spectral regime, re-
gardless of the direction of incidence [2]. These photonic crystals have been made for operation
in the microwave and infrared spectral regimes, but no technique has succeeded in fabricating
them for operation in the visible spectral regime [3]. Yet the exocuticle of the Brazilian weevil
Lamprocyphus augustus displays the desired response characteristics in the yellow-green portion
of the visible spectral regime [4], as shown in Fig. 4.1. Clearly, then, a fabrication route exists in
the bioworld that is not known yet to humans.
A review of relevant characteristics of bioworld solutions is undertaken in this chapter to
offer the rationale for biologically inspired design.
4.1 ENERGY EFFICIENCY
Neither biological processes nor industrial processes can overcome the fundamental limitations
encoded in the laws of physics. Nevertheless, the contrast between the two types of processes is
remarkable. Chemical routes are commonplace for material transformations in biological pro-
cesses, whereas physical routes are routinely employed in industrial processes. This difference is
quite succinctly captured by just one environmental variable: temperature.
Consider the temperature differences between biological and industrial processes. Al-
though a few animals live in extreme conditions, the range of temperature in the majority of
the bioworld is quite restricted. Deep-sea creatures at 1000 m below sea level have to live at
about 5°C, polar bears have to contend with −70°C, and desert foxes with 50°C.

Figure 4.1: Exocuticle fragment from the Brazilian weevil Lamprocyphus augustus.

But the temperatures of internal tissue vary in a much smaller range, because biological cells are mostly
water. Accordingly, numerous biological processes occur between, say, 5°C and 45°C. In con-
trast, very high temperatures are routinely employed in industrial processes. Wood combusts at
about 300°C, clay bakes at about 760°C, and iron melts at higher than 1500°C.
The metal zirconium is produced on reducing zirconium chloride by liquid magnesium at
about 825°C [5]. The hardness of zirconium on the Mohs scale is 5, the scale ranging from 1
(talc) to 10 (diamond). Tooth enamel, which has the same hardness as zirconium, is formed at
a much lower temperature of about 37°C.
The production of high temperatures requires considerable expenditure of energy, imply-
ing that biological processes are energy efficient in comparison to industrial processes [6]. This
energy efficiency is a persuasive argument for mimicking biological processes when designing
an industrial production line, especially during the climate emergency in which we presently live [7]. Indeed, one can justifiably argue that an embrace of biologically inspired design is essential to the survival of the human species, as well as of numerous other species, on Earth in the 21st century.
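The scale of the difference can be illustrated with a back-of-the-envelope estimate of the sensible heat Q = mcΔT needed to bring 1 kg of material from room temperature to a process temperature. The sketch below uses rounded, assumed specific heat capacities and ignores latent heats and furnace losses; it is meant only to show how steeply the energy demand grows with process temperature.

# Back-of-the-envelope comparison of the sensible heat Q = m * c * dT needed to
# raise 1 kg of material from 20 °C to a process temperature. Specific heat
# capacities are rounded, assumed values; latent heats and losses are ignored.

def sensible_heat_kj(mass_kg, c_kj_per_kg_k, t_start_c, t_end_c):
    return mass_kg * c_kj_per_kg_k * (t_end_c - t_start_c)

processes = {
    # name: (assumed specific heat in kJ/(kg·K), process temperature in °C)
    "zirconium reduction (industrial)":    (0.3, 825.0),
    "iron melting (industrial)":           (0.45, 1500.0),
    "tooth-enamel formation (biological)": (4.2, 37.0),   # tissue is mostly water
}

for name, (c, t_end) in processes.items():
    q = sensible_heat_kj(1.0, c, 20.0, t_end)
    print(f"{name:38s} ~{q:6.0f} kJ per kg (sensible heat only)")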
4.2 CIRCULAR ECONOMY OF MATERIALS
About 40,000 metric tons of cosmic dust fall on our planet every year [8], but about 50,000
metric tons of hydrogen and helium escape every year too [9]. These changes to the mass of
Earth are so tiny that it can be regarded as a closed system wherein materials cycle between the
lithosphere, atmosphere, and hydrosphere.
The biosphere comprises parts of each of these three regions of Earth. Biomass, i.e., the
mass of living organisms, varies greatly with time [10]. For example, it reduces significantly
during the autumn season in the northern hemisphere. Despite such variations, the main out-
puts, byproducts, and wastes of every living organism become inputs to living organisms of other
species. Herbivores eat plants, carnivores eat herbivores, numerous organisms sustain themselves
on the excretions and secretions of other species, and the bodies of dead organisms return nutri-
ents to the ground in which plants grow. Leaving aside the sequestration of materials through
geological processes, materials thus circulate in the bioworld.
In other words, the bioworld exhibits circular economy [11] of materials, especially
when annually averaged over ecologically distinct parts of the biosphere. This circular economy
becomes easily evident when an island subspecies is compared to its continental counterpart
in average size. Adjustment to serious restrictions on the availability of edible matter on islands
is commonly shown by increases and decreases of average sizes of diverse species in relation to
their continental cousins [12].
The circular economy of materials evinced by various swathes of the bioworld is not ac-
companied by the circular economy of energy. This is because our planet is a closed but not an
isolated system thermodynamically. A closed system can exchange energy but not mass with
its surroundings. An isolated system can exchange neither energy nor mass with its surround-
ings. The sun supplies energy to Earth, which is in addition to the energy made available to the
biosphere by the planetary core.
In the bioworld, every organ is functional over a certain period of time that is, on average,
not less than the time needed to reproduce at least once. Many organs are repairable and some
organs are not totally necessary for the survival of the individual. The byproducts and waste
products of bioworld processes are used as inputs to other bioworld processes, not necessarily
in the same organism. Materials in a dead organism provide sustenance to other organisms,
either directly or indirectly. Biologically inspired design can influence the manufacture, use, and
disposal of specific products with minimal depletion of materials and with minimal impact on
the rest of the biosphere; furthermore, energy could be harvested from whatever remains that
cannot be cannibalized after use.
4.3 MULTIFUNCTIONALITY
Multifunctionality is commonplace in living organisms [13–15]. Thus, limbs are used for
moving, signaling, gathering and preparing food, wielding weapons, and initiating as well as
warding off physical assaults, among other things. Mouths are used for ingesting food and fluids,
releasing sounds, breathing, and kissing. As certain organs can perform two or more distinct
functions that are not highly related to each other, fewer organs need to be formed and housed
in the organism and fewer structures need to be coordinated by the organism’s brain.
This economy of multifunctionality is an attractive feature of biologically inspired de-
sign [16, 17]. A multifunctional module can be incorporated in a variety of products, thereby
reducing inventory costs, enhancing repairability and product lifetimes, and promoting stan-
dardization. A multifunctional product may be designed and fabricated as an assembly of mono-
functional components. A simple example is a Swiss Army knife. A multifunctional product
could also be made from multifunctional materials, whether natural or composite. The costs of
eventual disposal may be higher when composite materials are used, and designers will have to
make choices based on lifecycle audits [18].
4.4 MULTICONTROLLABILITY
The concept of multicontrollability [19] is closely allied to multifunctionality. Multicon-
trollability is also exhibited commonly in the bioworld. Thus, multiple modes of locomotion can
be used by an organism to propel itself from one location to another, and often the same sound
can be uttered using two or three different placements of the tongue in the buccal cavity. We
get alarmed by hearing the sound of an approaching car and/or by seeing it. Reliance on mul-
tiple mechanisms thus builds resilience via redundancy. That’s why multiple control modalities
are used to ensure specific actions in critical facilities such as nuclear power plants and missile
guidance centers.
4.5 SUBOPTIMALITY
When mimicking a bioworld product or process, it is important to remember that biological
phenomena are adapted to a specific context with a given set of constraints. This means that the
solutions derived from a biological phenomenon may not be suitable in contexts with different
constraints. For instance, the wings of an owl are silent but are unsuitable for rapid flight, the
wings of a swan are noisy but can lift a heavy body, and the wings of a swift allow for very high
speed but make it very difficult for the bird to take off from the ground.
A bioworld solution is also constrained by evolutionary history since it arises from succes-
sive mutations of several species [20]. Each mutation could be suboptimal, performing just well enough in a particular niche. A succession of such mutations will produce a solution that is likewise viable in its niche, but that solution could be suboptimal even in that niche.
Suboptimality in the bioworld has long been exemplified by the plethora of visual prob-
lems that plague humans [21], not to mention other mammals. Aberrations, astigmatism, and
blindspots are structural deficiencies that have kept generations of ophthalmologists gainfully
employed. Although all of their patients would like to keep using their eyes for as long as possible, the human eye can hardly be regarded as a well-designed instrument [22].
As a bioworld solution is not necessarily optimal even in the bioworld, it is likely to re-
quire some modification to optimize it for a specific technoscientific application. This should be
viewed as a welcome opportunity, all the more so as the need for modification may allow the
incorporation of functionalities not associated with the bioworld solution in the bioworld. Thus,
the rapidity of action of biomimetic insulin can be controlled by the alteration of the codon se-
quence, as mentioned in Section 3.3.2. Similarly, bioreplicated decoys can be colored differently
from the species being replicated, as discussed in Section 3.3.3.
Further opportunities may arise after realizing that several bioworld solutions can be com-
bined for a specific technoscientific application. This is exemplified by the tennis racquets in
the Dunlop Biomimetic 200™ series. The racquet beam is made of Dunlop HM6 carbon sand-
wiched between aerogel-enhanced carbon sheets. Dunlop HM6 carbon mimics the morphology
of honeycombs which are extraordinarily strong and lightweight structures [23]. The surface of
the racquet frame is covered by a fabric with overlapping scale-like protrusions to reduce aero-
dynamic drag. These protrusions mimic denticles that reduce hydrodynamic drag and prevent
fouling of shark skins [24, 25]. The surface of the racquet grip mimics the setae on the feet of a
gecko that enable it to walk upside down on smooth surfaces [26, 27].
4.6 CONTRAINDICATED PERFORMANCE
For over two millennia, humans have known that an object denser than water sinks in a bathtub
but an object of lesser density than water floats. Well, boats float in rivers and seas, but that is
because the volume-averaged density of a boat’s hull and superstructures as well as of air below
the waterline is the same as that of water.
Air is a fluid, and a rigorous scientific study [28] is not needed to prove that a bird is
definitely heavier than air on a unit-volume basis. Although avian flight is thus contraindicated,
birds of most species can fly well, some even at altitudes higher than 10 km [29]. The secret lies
in the flight feathers arranged on concave wings that can be flapped to raise the
underwing pressure and provide lift.
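To see why flight remains plausible despite the adverse density comparison, the steady-flight lift relation L = ½ρv²SC_L can be evaluated with plausible numbers. The sketch below uses assumed, order-of-magnitude values for a mid-sized bird; flapping flight is far richer than this steady approximation.

# Rough check of the steady-flight lift relation L = 0.5 * rho * v^2 * S * C_L.
# All inputs are assumed, order-of-magnitude values for a mid-sized bird.

RHO_AIR = 1.2   # kg/m^3 near sea level
G = 9.81        # m/s^2

def lift_newtons(speed_m_s, wing_area_m2, lift_coefficient):
    return 0.5 * RHO_AIR * speed_m_s**2 * wing_area_m2 * lift_coefficient

mass_kg = 1.0                         # assumed bird mass
weight_n = mass_kg * G
lift_n = lift_newtons(speed_m_s=12.0, wing_area_m2=0.12, lift_coefficient=1.0)

print(f"Weight to support: {weight_n:.1f} N")
print(f"Lift at 12 m/s:    {lift_n:.1f} N -> {'sufficient' if lift_n >= weight_n else 'insufficient'} in this crude estimate")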
Mushrooms and their mycelium roots are well known to be very fragile. But a fungus
growing in a fibrous material functions as a glue that provides the resulting composite material
with surprisingly high stiffness and strength. This can be seen on the forest floor, where the soil in
places with fungus can become harder and stiffer, provided the soil is a good mixture of organic
material of various sizes. The same phenomenon can be utilized for making building components
and plates from straw by letting a fungus grow in the humidified material. The mycelium roots
will bind the straw fibers together and form a stiff composite material, as depicted in Fig. 4.2.
Both foams and structural composites are being made of mushrooms [30, 31].
Mollusk shells are calcareous, created by the secretion of calcium carbonate mixed in
a broth of polysaccharides and glycoproteins which controls the position and elongation of
calcium-carbonate crystals [32]. Calcium carbonate is among the softer natural minerals: as aragonite, its hardness does not exceed 4 on the Mohs scale. Yet,
mollusk shells comprising interlaced plates of aragonite are extremely durable, with a modulus of
elasticity similar to wood’s, tensile strength similar to copper’s, and compressive strength higher
than porcelain’s [33]. The secret lies in the arrangement of aragonite plates that prevents crack
propagation and thereby provides the toughness needed to protect the enclosed body. The same
arrangement of plates of Norwegian slate has been used in the retaining walls constructed on the
undulating terrain of the Lyngby campus of Danmarks Tekniske Universitet (DTU) as shown
in Fig. 4.3.
Figure 4.2: Mycelium bio-composite made from straw and other agricultural byproducts.
Figure 4.3: (a) Retaining wall on the Lyngby campus of DTU. (b) The inter-plate regions of the
wall provide habitat for terrestrial mollusks of the species Cepaea nemoralis.
The examples of contraindicated performance in the bioworld offer unexpected
routes to the seemingly impossible satisfaction of mutually incompatible constraints. Thus, bio-
logically inspired design has the potential to engender innovative products and processes.
4.7 REFERENCES
[1] V. Davidov, Biomimicry as a meta-resource and megaproject, a literature review, Environ-
ment and Society: Advances in Research, 10:29–47, 2019. DOI: 10.3167/ares.2019.100103.
37
[2] M. Maldovan and E. L. Thomas, Diamond-structured photonic crystals, Nature Materials,
3:593–600, 2004. DOI: 10.1038/nmat1201. 37
[3] A. Risbud, A. Lakhtakia, and M. H. Bartl, Towards bioreplicated texturing of solar-
cell surfaces, Encyclopedia of Nanotechnology, Part 20, pages 2755–2762, B. Bhushan, Ed.,
Springer, Heidelberg, Germany, 2012. DOI: 10.1007/978-90-481-9751-4_18. 37
[4] J. W. Galusha, L. R. Richey, J. S. Gardner, J. N. Cha, and M. H. Bartl, Discovery of a
diamond-based photonic crystal structure in beetle scales, Physical Review E, 77:050904,
2008. DOI: 10.1103/physreve.77.050904. 37
[5] L. Xu, Y. Xiao, A. van Sandwijk, Q. Xu, and Y. Yang, Production of nuclear
grade zirconium: A review, Journal of Nuclear Materials, 466:21–28, 2015. DOI:
10.1016/j.jnucmat.2015.07.010. 38
[6] J. F. V. Vincent, O. A. Bogatyreva, N. R. Bogatyrev, A. Bowyer, and A.-K. Pahl,
Biomimetics: Its practice and theory, Journal of the Royal Society Interface, 3:471–482, 2006.
DOI: 10.1098/rsif.2006.0127. 38
[7] P. Gilding, Why I welcome a climate emergency, Nature, 573:311, 2019. DOI:
10.1038/d41586-019-02735-w. 38
[8] H. A. Zook, Spacecraft measurements of the cosmic dust flux, Accretion of Extraterres-
trial Material Throughout Earth’s History, B. Peucker-Ehrenbrink and B. Schmitz, Eds.,
pages 75–92, Springer, New York, 2001. DOI: 10.1007/978-1-4419-8694-8_5. 38
[9] D. C. Catling and K. J. Zahnle, The planetary air leak, Scientific American, 300(5):36–43,
2009. DOI: 10.1038/scientificamerican0509-36. 38
[10] R. A. Houghton, Biomass, Encyclopedia of Ecology, S. E. Jørgensen and B. D. Fath, Eds.,
pages 448–453, Elsevier, New York, 2008. 38
[11] W. R. Stahel, Circular Economy: A User’s Guide, Routledge, Abingdon, Oxford, UK, 2019.
39
[12] J. B. Foster, Evolution of mammals on islands, Nature, 202:234–235, 1964. DOI:
10.1038/202234a0. 39
[13] D. H. Evans, P. M. Piermarini, and K. P. Choe, The multifunctional fish gill: Dominant
site of gas exchange, osmoregulation, acid-base regulation, and excretion of nitrogenous
waste, Physiological Reviews, 85:97–177, 2005. DOI: 10.1152/physrev.00050.2003. 39
[14] S. N. Patek, J. E. Baio, B. L. Fisher, and A. V. Suraez, Multifunctionality and mechanical
origins: Ballistic jaw propulsion in trap-jaw ants, Proceedings of U.S. National Academy of
Sciences, 103:12787–12792, 2006. DOI: 10.1073/pnas.0604290103. 39
[15] D. M. Neustadter, R. L. Herman, R. F. Drushel, D. W. Chestek, and H. J. Chiel, The
kinematics of multifunctionality: Comparisons of biting and swallowing in Aplysia califor-
nica, Journal of Experimental Biology, 210:238–260, 2007. DOI: 10.1242/jeb.02654. 39
[16] A. Lakhtakia, From bioinspired multifunctionality to mimumes, Bioinspired, Biomimetic
and Nanobiomaterials, 4:168–173, 2015. DOI: 10.1680/jbibn.14.00034. 39
[17] A. Lakhtakia and W. Johari, Engineered multifunctionality and environmental sus-
tainability, Journal of Environmental Studies and Sciences, 5:732–734, 2015. DOI:
10.1007/s13412-015-0305-1. 39
[18] D. F. Ciambrone, Environmental Life Cycle Analysis, CRC Press, Boca Raton, FL, 1997.
DOI: 10.1201/9780203757031. 40
[19] A. Lakhtakia, D. E. Wolfe, M. W. Horn, J. Mazurowski, A. Burger, and P. P. Banerjee,
Bioinspired multicontrollable metasurfaces and metamaterials for terahertz applications,
Proceedings of SPIE, 10162:101620V, 2017. DOI: 10.1117/12.2258683. 40
[20] D. Adriaens, Evomimetics: The biomimetic design thinking 2.0, Proceedings of SPIE,
10965:1096509, 2019. DOI: 10.1117/12.2514049. 40
[21] H. Helmholtz, Popular Lectures on Scientific Subjects, Appleton, New York, 1885. DOI:
10.1037/12825-000. 40
[22] R. S. Fishman, Darwin and Helmholtz on imperfections of the eye, Archive of Ophthal-
mology, 128:1209–1211, 2010. DOI: 10.1001/archophthalmol.2010.189. 40
[23] T. Blitzer, Honeycomb Technology: Materials, Design, Manufacturing, Applications and Test-
ing, Chapman and Hall, London, UK, 1997. DOI: 10.1007/978-94-011-5856-5. 41
[24] G. D. Bixler and B. Bhushan, Biofouling: Lessons from nature, Philosophical Transactions
of the Royal Society of London A, 370:2381–2417, 2012. DOI: 10.1098/rsta.2011.0502. 41
[25] T. Sullivan and F. Regan, The characterization, replication and testing of dermal denticles
of Scyliorhinus canicula for physical mechanisms of biofouling prevention, Bioinspiration
and Biomimetics, 6:046001, 2011. DOI: 10.1088/1748-3182/6/4/046001. 41
[26] K. Autumn and A. M. Peattie, Mechanisms of adhesion in geckos, Integrative and Com-
parative Biology, 42:1081–1090, 2002. DOI: 10.1093/icb/42.6.1081. 41
[27] C. Majidi, R. E. Groff, Y. Maeno, B. Schubert, S. Baek, B. Bush, R. Maboudian, N.
Gravish, M. Wilkinson, K. Autumn, and R. S. Fearing, High friction from a stiff polymer
using microfiber arrays, Physical Review Letters, 97:076103, 2006. DOI: 10.1103/phys-
revlett.97.076103. 41
[28] T. W. Seamans, D. W. Harnershock, and G. E. Bernhardt, Determination of
body density for twelve bird species, Ibis, 137:424–428, 1995. DOI: 10.1111/j.1474-
919x.1995.tb08046.x. 41
[29] R. C. Laybourne, Collision between a vulture and an aircraft at an altitude of 37,000 feet,
The Wilson Bulletin, 86:461–462, 1974. https://www.jstor.org/stable/4160546 41
[30] E. Bayer and G. McIntyre, Method for making dehydrated mycelium elements and product
made thereby, US Patent 2012/0270302 A1, October 25, 2012. https://patents.google.com/
patent/US20120270302A1/en 41
[31] C. Bruscato, E. Malvessi, R. N. Brandalise, and M. Camassola, High performance of
macrofungi in the production of mycelium-based biofoams using sawdust—sustainable
technology for waste reduction, Journal of Cleaner Production, 234:225–232, 2019. DOI:
10.1016/j.jclepro.2019.06.150. 41
[32] F. Marin and G. Luquet, Molluscan biomineralization: The proteinaceous shell con-
stituents of Pinna nobilis L., Materials Science and Engineering C, 25:105–111, 2005. DOI:
10.1016/j.msec.2005.01.003. 41
[33] F. Barthelat, Nacre from mollusk shells: A model for high-performance structural materials, Bioinspiration and Biomimetics, 5:035001, 2010. DOI: 10.1088/1748-3182/5/3/035001. 41
CHAPTER 5

Problem-Driven Biologically Inspired Design
It is, that as existing human inventions have been anticipated
by Nature, so it will surely be found that in Nature lie the proto-
types of inventions not yet revealed to man. The great discoverers of
the future will, therefore, be those who will look to Nature for Art,
Science, or Mechanics, instead of taking pride in some new invention,
and then finding that it has existed in Nature for countless centuries.
Rev. John G. Wood, Nature’s Teachings, Human Invention
Anticipated by Nature ()
5.1 INTRODUCTION
Biologically inspired design (BID) can be approached from two different directions [1–3]. The
approach from the engineering side is referred to as problem-driven BID, whereas the ap-
proach from the biology side leads to solution-driven BID. The former is treated in this
chapter, the latter in Chapter 6.
As the name implies, problem-driven BID is initiated by an engineering problem whose
solutions are sought; hence, it is very similar to traditional engineering design. The major differ-
ence is that the solution principles are searched in the bioworld. As engineering designers will
be familiar with the design-oriented parts of the process but are likely to be less knowledgeable
and experienced in the tasks that relate to biology, problem-driven BID should be carried out
in a collaboration between engineers and biologists. However, there are strong limitations for
problem-driven BID in such a collaboration, as explained in Sections 5.2.2–5.2.4.
Problem-driven BID is the term used by researchers at the Georgia Institute of Tech-
nology [1], Arts et Métiers ParisTech [4, 5], and Danmarks Tekniske Universitet (DTU) [2].
The International Organization for Standardization calls it technology-pull biomimetics because a
technological need initiates it and drives the work [6]. The term top-down bionik has been
used by researchers at the Technische Universität München for many years [7]. It is also this
type of BID that is handled with the design spiral from the Biomimicry Institute [8].
There are other ways than problem-driven BID to generate new ideas for how to design
products and other artifacts. One can look at already existing products or even search patents. Or
Figure 5.1: The five phases of problem-driven BID implemented using the DTU biocard
method.
one could turn to a range of different creativity techniques such as brainstorming, the 6-3-5 method, or the SCAMPER method [9, 10]. Two questions naturally arise. First, how well does BID perform
as an idea-generation technique? Second, are its outcomes worth the effort?
Answers to these questions have been sought by comparing the BID methodology to
traditional brainstorming [11]. Several design students were given an assignment to generate
ideas for a given problem, with half of the students asked to use brainstorming and the other
half to use the BID methodology. The novelty of each resulting design proposal was identified
by comparing it with other solutions found on the internet. The comparison was made using
the SAPPhIRE model for causality [12] where the similarity between new and existing design
proposals was compared at seven levels of abstraction. The use of BID methodology resulted in
fewer design proposals, but the ones that were found were more novel (and, therefore, presum-
ably of higher quality). This is a key argument for using the BID methodology. Brainstorming is
easy to learn and requires little preparation or skills, thereby producing many design proposals.
In contrast, the BID methodology requires a stricter procedure to be followed as well as
some interest in and some knowledge of biology, but results in novel proposals.
5.2 PHASES OF PROBLEM-DRIVEN BID
Implementation of problem-driven BID is done in five phases, beginning with an initial analysis
of the design problem, followed by a search for biological analogies, then distilling an under-
standing of biological phenomena to extract key principles, followed by a reformulation of design
principles, and ending with the actual design of new objects after validating the principles in the
context of the design problem. The flow chart in Fig. 5.1 illustrates these five phases using the
biocard method developed at DTU.
Abstraction is done at least twice during the problem-driven BID process, as is clear
from Fig. 5.2. First, the technical problem is abstracted from the initial analysis of the design problem. Second, key biological principles or strategies are abstracted from the understanding of a biological phenomenon and brought into a form useful for design work.

Figure 5.2: Interaction of technology and biology in problem-driven BID [5].
5.2.1 FIRST PHASE: PROBLEM ANALYSIS
The first phase in problem-driven BID is no different from those in numerous other goal-
oriented projects, since a thorough understanding of the problem being tackled is required.
Asking the right question(s) is halfway to finding good solutions. The problem-analysis phase
can involve just a single person but is often better carried out as a collaboration of several peo-
ple. Discussions among team members force clarity in the description of the problem so that
every member can get a clear and complete picture. The following tasks have to be undertaken
in the problem-analysis phase: problem description, function analysis, and engineering-biology
translation.
Problem-Description Task
Describing the design problem adequately is among the most important activities in BID, just
as it is in design work in general. Adequate understanding of the core issues and the short-
comings of existing products determines the form and the substance of the remainder of the
design process.

Figure 5.3: A hand-drawn sketch describing the window problem. The window must allow an external view but prevent the solar infrared radiation from entering the room.

Understanding the design problem and describing it clearly for others can be difficult enough for a single person, but it becomes even more complicated when many persons collaborate in a design team. Therefore, the problem must be described and communicated in
a way that it is easily and uniformly understood by many persons. A sequence of illustrations,
whether drawn by hand or on computers, accompanied by bulleted points in text can docu-
ment the problem reasonably well. Illustrations can be rapidly made and transcend barriers of
language, terminology, and expertise. The technical problem can then be abstracted quite easily.
But care must be taken that the illustrations focus on the desired functionality and not
on the manner in which the problem is to be solved. As an example, consider the window
problem that architects often encounter when designing buildings in normally sunny locales.
People inside a building are interested both in having sunlight enter rooms through windows
and in being able to view the outside. However, solar radiation contains not only visible light but
also infrared waves that heat the room and may necessitate the increased use of air-conditioning
systems. The design problem is that of a window that allows the occupants of a room to enjoy the
external view but (partially) prevents solar infrared radiation from entering the room. This design
problem can be described by the simple sketch shown in Fig. 5.3. The window pane is represented
by two parallel lines, the external view is illustrated by a dashed straight line that begins at one
eye of a stickperson and crosses the window pane to the exterior, and the infrared restriction
is represented by bouncing arrows. Such an abstract description will stimulate an open-minded
approach to identify the core functionality and allow for a broader and goal-oriented search for
biological organisms displaying that functionality.
Figure 5.4: The four-box method for problem analysis and description [13].

Another method that is useful for problem analysis and description is the four-box method [13]. This method requires the design team to specify
(i) the operational environment for the product (i.e., the context),
(ii) the core functions delivered by the product,
(iii) the main specifications of the product, and
(iv) the performance criteria that the product must satisfy.
The responses are entered as bulleted lines of text in the table shown in Fig. 5.4. For the win-
dow example, the operational environment includes the type of room in which the window is
to be placed (e.g., office, schoolroom, bedroom, etc.) as well as the geographical location and cli-
matic conditions (e.g., dry/humid, sandy/salty, hot/cold, etc.). Functions could include “provide
transparency,” “prevent solar infrared radiation to pass through,” and “allow cleaning;” see also
Section 5.2.1. Specifications include linear and areal dimensions and orientation toward the sun
in summer. Performance criteria could include the fraction of visible light that is allowed to
pass through the window, the color tint that is acceptable, and the minimum acceptable viewing
angle.
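The four boxes lend themselves to a lightweight structured record. The sketch below captures the window example as a plain Python dictionary; entries such as the pane dimensions are placeholders added for illustration, not requirements from the text.

# Four-box description of the window problem as a plain dictionary. Entries
# restate the example in the text; specific values are illustrative placeholders.

four_box = {
    "operational environment": [
        "office room in a sunny, temperate locale",
        "dry, non-salty air",
    ],
    "functions": [
        "provide transparency",
        "prevent solar infrared radiation from passing through",
        "allow cleaning",
    ],
    "specifications": [
        "approx. 1.2 m x 1.5 m pane (placeholder)",
        "orientation toward the sun in summer",
    ],
    "performance criteria": [
        "fraction of visible light transmitted",
        "acceptable color tint",
        "minimum acceptable viewing angle",
    ],
}

for box, items in four_box.items():
    print(box.upper())
    for item in items:
        print("  -", item)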
Function-Analysis Task
A problem is typically specified using a terminology which is closely related to the context of the
problem. For instance, a car driver will explain a puncture in a tire as "having a flat tire." However,
as described in Section 2.4.1, it is important to provide an abstract functional description rather
than a concrete one, in order to prevent fixation. The puncture problem can, of course, be solved
by changing the tire; but if the goal is to prevent punctures, it is advantageous to describe the
function in more abstract terms.

Figure 5.5: Functions-means tree diagram for a window. Each trapezoidal block contains a function, each rectangular block a means.

The tire is a solution to the functions "provide road grip" and
“provide driving comfort.” By broadening the problem description using such abstract terms, it
is more likely that a completely different solution will be found. The road grip could be provided
by spiked solid wheels and the comfort could be supplied by a sturdy mechanism for wheel
suspension. Such a wheel solution will not suffer from punctures.
More generally, an engineering problem can be analyzed by describing an artifact that
solves the problem. The artifact can be decomposed into functional units each of which is de-
scribed in terms appropriate for the context. The next step is then to formulate the function(s)
of each artifact with a more abstract terminology that allows for a broader search for alternative
means to solve the problem. The overall problem is decomposed into sub-functions, each de-
scribing specific aspects of what the artifact does and defining a set of metrics for the required
performance.
Function analysis for the window problem of Fig. 5.3 can be performed as follows. The
main function of a window is to provide a view. This can be done with glass panes, but an open
hole in the wall will also deliver this function. A functions-means tree diagram, as described in
Section 2.4.2, helps to define which functionalities are required and thus support a search for
alternative solutions. Figure 5.5 shows a functions-means tree diagram for the window problem
with each trapezoidal box containing a function or sub-function and each square box containing
a means to provide the needed functionality. The search for solutions is thus broken down into
identification of various means, each of which solves a specific aspect of the overall problem.
The top-level functionality in the functions-means tree diagram is “to provide a view.” The main
function can be broken down into five sub-functions: “to allow light to enter,” “to prevent ex-
cess heat from entering,” “to prevent heat loss,” “to keep out insects,” and “to prevent sound
transport.” The sub-function involving insects can be solved by using a finely meshed net as an
alternative to a glass pane. The last sub-function rules out a hole as a window and also the finely
meshed net. The functions-means tree diagram therefore is a tool for qualifying the search for
solutions and it is also very helpful in the search for analogous solutions from the bioworld.
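For teams that prefer to keep such decompositions in a machine-readable form, the functions-means tree of Fig. 5.5 can be captured as a small nested data structure. The sketch below encodes only the window example from the text; the representation itself is an illustrative choice, not part of the method.

# Minimal sketch of the functions-means tree of Fig. 5.5 as nested dictionaries.
# Keys are functions (trapezoidal blocks); values list candidate means
# (rectangular blocks). Only the window example from the text is encoded.

functions_means_tree = {
    "to provide a view": {
        "means": ["window"],
        "sub_functions": {
            "to allow light to enter": ["glass pane"],
            "to prevent excess heat from entering": ["blinds"],
            "to prevent heat loss": ["double-layer glazing"],
            "to keep out insects": ["finely meshed net"],
            "to prevent transport of sound": ["glass pane"],
        },
    },
}

def print_tree(tree, indent=0):
    for function, node in tree.items():
        print(" " * indent + "[function] " + function)
        for means in node.get("means", []):
            print(" " * (indent + 2) + "[means] " + means)
        for sub_function, means_list in node.get("sub_functions", {}).items():
            print(" " * (indent + 2) + "[function] " + sub_function)
            for means in means_list:
                print(" " * (indent + 4) + "[means] " + means)

print_tree(functions_means_tree)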
A challenge in describing functionalities for the functions-means tree diagram is to select
the right phrases that will be helpful in the search phase. Assistance can be taken from on-line
thesauri wherein synonyms and antonyms can be found [14]. Another helpful resource is the
WordNet database from Princeton University [15].
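Assuming the NLTK package is installed and its WordNet corpus has been downloaded, synonym lookup of this kind can even be scripted; the verbs below are examples, not a prescribed vocabulary.

# Sketch: listing synonyms of candidate function verbs with WordNet via NLTK.
# Assumes `pip install nltk` and `nltk.download("wordnet")` have been run.

from nltk.corpus import wordnet as wn

def verb_synonyms(verb):
    """Collect lemma names from all verb synsets of the given word."""
    synonyms = set()
    for synset in wn.synsets(verb, pos=wn.VERB):
        for lemma in synset.lemmas():
            synonyms.add(lemma.name().replace("_", " "))
    return sorted(synonyms)

for verb in ["protect", "clean", "shade"]:
    print(verb, "->", ", ".join(verb_synonyms(verb)))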
Engineering-Biology Translation Task
After a designer (or a design team) has described the problem and identified the desired func-
tionalities, the formulation is often very technical. This is a good starting point, since the designer
should be familiar with the engineering terminology and therefore should be able to formulate
the problem precisely enough to find good technical terms for searching the literature.
In Fig. 5.1, this task is referred to as the context-specific formulation of the design prob-
lem. However, it is not very likely that exactly the same terms are used to describe similar func-
tions in the engineering and biology literatures. Before searching in the biology literature, it is
therefore beneficial to translate the engineering terms to biology terms. This task can be ap-
proached by looking at synonyms in a thesaurus as well as by looking in those segments of the
biology literature wherein similar phenomena are likely to be found. If, for instance, a new type
of cleaning mechanism is to be designed, then one could consult the literature on how domesti-
cated animals as well as animals housed in diverse research institutions (such as zoological parks)
keep themselves tidy. In that literature, terms such as “washing,” “licking,” and “removing hair
and dirt” are used instead of “cleaning.” The terms from biology literature could be more useful
in finding similar phenomena among other animals. The abstraction activity of finding good
biological search terms is referred to as the formulation of generalized challenges in Fig. 5.1.
For the window example, technical search terms could be “semi-transparent,” “sun block-
ing,” and “shield light.” These would only find a few biological analogies but that is a good
starting point. Once the first biological analogy has been identified, the biology literature could
be consulted to find out what terms are used to describe protection from high-intensity light.
As animal eyes are likely to possess features for such protection, literature on veterinary oph-
thalmology would be appropriate. Animals protect their eyes from high-intensity light by con-
tracting the iris, closing eye lids, moving the eyelashes, and using skin folds that shade. Another
search could be for plants growing in sunny deserts, because those plants somehow avoid being
overheated, e.g., how cacti utilize corrugated surfaces and spines for cooling by convection. The
insight gained could then be used to define search terms more likely to be found in the biology
literature. Examples of search terms could be “eye protection” and “temperature regulation.”
Another helpful approach is to translate the terms used for biological phenomena into
Latin. Latin words are universally used in the scientific literature, especially in the biology lit-
erature. Taxonomists use Latin terms for kingdoms, phyla, classes, orders, suborders, families,
genera, and species, each term usually referring to a specific biological attribute. After a Latin
term is found in taxonomy, it is straightforward to move up, down, or sideways in the hierarchy
to find other organisms and then explore other bioworld solutions to the design problem.
Researchers at the University of Toronto have developed a natural search approach for
BID and a method for identifying good biological search terms. They have proposed a set of
techniques for abstraction and identification of relevant search terms to be used for the bio-
logical search [3]. Verbs are recommended instead of nouns since it is more likely that nouns
will lead to pre-conceived analogies. Furthermore, verbs describe actions and hence are better
for finding a greater variety of biological forms. As an example, the verb “protect” helps find a
greater variety of phenomena than the noun “cuticle” does. If certain verbs that can be consid-
ered as biologically meaningful (significant or connotative) occur more commonly than others,
they can be considered as bridge words that are more likely to be helpful in the biological search.
5.2.2 SECOND PHASE: SEARCH
Searches can be done in many different ways. Most straightforwardly today, internet search en-
gines such as Yahoo, Google, and Bing should be used. The challenge is that, as no search engine
is restricted to biology, a large number of hits will result that can be difficult to navigate through.
It is therefore important to identify a good starting point when using an internet engine. One
way is to apply a bio-brainstorm where the person or groups of persons formulate a question of
the following kind: “How would this particular problem be overcome in the bioworld?” Based on the
biological knowledge already available in the design team, animals and plants can be identified.
For instance, many people would readily propose mammalian eyes as biological solutions to the
window problem. This first hit will be a good starting point for a wider search.
Another approach is to use dedicated biology databases that will be more likely to propose
relevant biological organisms. Among the better known databases is AskNature developed by
the Biomimicry Institute [16]. AskNature contains a large number of examples of biomimicry,
with biological organisms described alongside how a biological strategy has been transferred
into technical applications. AskNature provides at least two ways to initiate a search. One is a
simple free-text search very similar to the use of internet search engines. Another is to use a
biomimicry taxonomy [17] which describes functions on three hierarchical levels: group, sub-
group, and function. Relevant for the window example could be to focus on the group “protect
from physical harm,” the subgroup “protect from non-living threats,” or the function “(protect
from) light.”
The terms from the biomimicry taxonomy can be used not only for a focused search in
AskNature but also when searching more broadly in other databases or on the internet. As
additional search terms are needed to limit an internet search to biological phenomena, terms
such as “biology,” “animal,” and “plant” or other biology-related terms should be added. The
right terms must be found through an iterative approach where relevant hits can be used to
identify relevant biological terms that will guide the search in a fruitful direction.
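One mundane but useful aid in this iteration is to generate the candidate query strings systematically. The sketch below pairs functional search terms with biology-limiting terms; both term lists are illustrative examples, not a recommended vocabulary.

# Sketch: composing boolean search strings that pair functional terms with
# biology-limiting terms, as suggested in the text. Term lists are examples.

from itertools import product

functional_terms = ["sun blocking", "shield light", "eye protection"]
biology_terms = ["biology", "animal", "plant"]

def build_queries(functions, contexts):
    """One query per (function, context) pair, e.g. '"sun blocking" AND biology'."""
    return [f'"{f}" AND {c}' for f, c in product(functions, contexts)]

for query in build_queries(functional_terms, biology_terms):
    print(query)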
Yet another approach is to use library search engines to search scientific books and papers.
The library databases are prepared to offer goal-directed searches where the focus is on recognized
and quality-tested scientific knowledge. What is found using library search engines therefore has
a high degree of trustworthiness. The difficulty in using scientific literature is that it is written in
the language of a sub-culture, i.e., it can be difficult for a layperson to understand a paper written
for a specialist journal or book.
There are also other ways to search for biological organisms with relevant mechanisms and
functionalities. An obvious one is to consult a professional biologist. They have broad insights
about the bioworld, they know how many biological organisms function, and they can easily
peruse scientific literature to gather further information. However, as they may require payment
for their services, the value of conducting a biological search must be higher than the cost of
hiring a biologist. Besides, a limitation is the growing specialization within the broad discipline
of biology. Many biologists today have deep knowledge of only a narrow sub-discipline and
therefore are less suited for the broad search for biological phenomena that could help solve a
specific design problem.
Finally, there is the option to visit some parts of the bioworld. Once a mind is tuned to
looking for a functionality, it is natural to wonder about the things that we see in a forest, a
zoological park, a botanical garden, or a protected area set aside as a nature reserve [18]. For
instance, if one is searching for new strategies for bearing structural loads (e.g., columns for
holding motorway signs, large tents, or bridges), it is natural to wonder about how trees are
structured and anchored in the ground so they can resist high wind pressure in storms. Or, if
one is looking for self-cleaning strategies, one will find many plants that stay clean despite dirty
surroundings.
A possible pitfall when searching for biological phenomena is that only well-known ones
are explored. Experiences from teaching BID courses show that many students limit their
searches to the larger animals, i.e., mammals, birds, and insects [19]. By limiting the search
to the more familiar fauna and flora, the probability of finding really novel ideas decreases. If
the search is forced to be broader to cover items such as marine life, microbiology, and single-cell
organisms, more numerous and more novel ideas emerge [19].
5.2.3 THIRD PHASE: UNDERSTAND
Once a list of promising biological phenomena has been created, the next step is to understand
the underlying mechanisms. The mechanism is straightforward to understand in some cases,
but not for all. See, for example, Section 3.3.2 for the complexity of insulin production in the
human pancreas. It can also be that the overall functionality is easy to understand but becomes more complex when additional detail is required for implementation. For the window example, it
is easy to understand that the iris in a mammalian eye functions like a camera aperture with the
size of the hole determining how much light is allowed through, but the activation of muscles
causing the contraction and widening of the iris is more complicated for a non-specialist to
understand.
Better understanding normally requires access to trustworthy literature which can inform
about a particular biological phenomenon and explain the underlying mechanism(s) in adequate
but not overwhelming detail. Whereas internet searches will supply the needed insight in some
cases, a proper library search is necessary more often than not. Relevant keywords and descriptive
names of the biological phenomenon combined with boolean operators (and, or, not) will help
identify relevant books and journal papers that can be retrieved through the library facilities. Latin
terms will be especially useful in library searches since they precisely define the type of biological
phenomenon that is described, thereby offering the opportunity to select a more general level in
biological taxonomy and find literature for a wider group of organisms.
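As a purely illustrative sketch (not a tool described in this chapter), such a keyword strategy can be captured in a few lines of code; the function name and the example terms below are assumptions made only for the illustration.

# Hypothetical sketch: compose a boolean query for a library database or
# internet search engine from descriptive keywords, an optional Latin taxon
# name, and biology-related qualifier terms. All example terms are illustrative.
def build_query(keywords, latin_name=None, qualifiers=("biology", "animal", "plant")):
    keyword_part = " OR ".join(f'"{k}"' for k in keywords)
    qualifier_part = " OR ".join(qualifiers)
    parts = [f"({keyword_part})", f"({qualifier_part})"]
    if latin_name:
        parts.append(f'"{latin_name}"')
    return " AND ".join(parts)

# Example: searching for self-cleaning strategies in the lotus genus.
print(build_query(["self-cleaning", "water repellence"], latin_name="Nelumbo"))

Such a composed query can be refined iteratively, exactly as described above, by replacing keywords with terms found in the first relevant hits.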
It can be advantageous to use a dedicated database such as BIOSIS Previews [20] at the li-
brary. Another useful tool is the Encyclopedia of Life [21], a community-driven resource to which
many biologists worldwide supply information about animals and plants. A supplementary valuable resource is the biologists themselves. If approached correctly and politely, they will often
help with basic explanations and guide toward the relevant literature for deeper understanding.
5.2.4 FOURTH PHASE: TRANSFER
In the next phase of BID, the findings must be transferred to the design problem by describing
the underlying functional principle of each biological phenomenon found. This is important to
facilitate precise and accurate communication among the members of the design team. If the
findings are communicated too loosely, much is left to interpretation and the final design may
be inspired by something other than what was intended by the person(s) who found a relevant
biological phenomenon.
One way to document the findings is to use biocards [22], an example of which is presented
in Fig. 5.6. The figure shows two similar yet different biocards on the mechanism that keeps
equine eyes clean: a concrete description using biological terminology and graphics in the left
biocard but an abstract description using neutral non-biological terms and graphics in the right
biocard. The biocard on the left mentions a tear film to which dust particles adhere, which is periodically removed by the eyelid, and which is periodically replenished by tears. This
description is suitable for designers to generate ideas, but its scope is limited compared to the
abstract description in the biocard on the right. Terms such as “tears” and “liquid” will fixate the
designer in thinking only of solutions that rely on a liquid to collect and clean. The abstract
description replaces both of those terms by the more neutral “substance.” This will make it more
likely for the designer to think freely and consider both liquid and solid substances for collecting
dirt particles. The same argument applies to the graphics in the biocard. Drawings of the bioworld solution should be eschewed in favor of more symbolic drawings.
Figure 5.6: (left) Concrete and (right) abstract descriptions in a biocard. The biocard on the right
is better suited for problem-driven BID.
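To make the distinction concrete, the sketch below shows one possible way of capturing the two levels of description of a biocard as a small data record; the field names and wording are assumptions made for this illustration and are not part of the biocard method itself.

# Hypothetical sketch: a biocard record holding both the concrete (biological)
# and the abstract (neutral) description of one phenomenon.
from dataclasses import dataclass

@dataclass
class Biocard:
    phenomenon: str            # name of the biological phenomenon
    concrete_description: str  # biological terminology (for understanding)
    abstract_description: str  # neutral terminology (for ideation)

horse_eye = Biocard(
    phenomenon="Self-cleaning equine eye",
    concrete_description=("Dust adheres to a tear film that the eyelid "
                          "periodically removes; tears replenish the film."),
    abstract_description=("Particles adhere to a substance that is "
                          "periodically removed and replenished."),
)
print(horse_eye.abstract_description)

During ideation, only the abstract description would be shown to the design team, for the fixation reasons discussed above.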
5.2.5 FIFTH PHASE: DESIGN
The biocards can be used in different ways in the fifth phase of BID. One way is to make a
collection of biocards describing different functional principles based on different biological
phenomena. The designer or design team can then take one card at a time and sketch solutions
based on the functional principle in that biocard. It is important not to evaluate the quality of that principle but to wait until a design proposal utilizing it has emerged. In that way, it will be
the physical embodiment in a given context that will be evaluated.
For each promising design proposal, a physical model should be constructed for demon-
stration in order to convince decision makers to invest the resources needed to investigate the proposal further. Students taking a BID course at DTU routinely build such proof-of-principle
models [23]. Figure 5.7 shows an example. The design problem is that of reducing drag on a ship
and thereby lowering energy consumption. The model ship in the figure is inspired by emperor pen-
guins [24]. When threatened, an emperor penguin releases air from underneath its feathers. The
resulting air bubbles form a thin layer that encapsulates its body to drastically reduce friction. The penguin then increases its speed several-fold and escapes its enemies by rocketing out of the water onto the ice floes where it will be safer. To prove the air-bubble principle for drag reduction,
Figure 5.7: Inspired by the use of air bubbles by emperor penguins to reduce friction in water,
this toy ship as a physical model demonstrated that the same functional principle will reduce
drag on a full-size ship. Courtesy: David Maage, Enzo Hacquin, and Anders Lui Soerensen.
a student team made a toy ship and equipped it with two aquarium pumps. On pumping air into tubes with tiny holes underneath the toy ship, its bottom and sides were surrounded by a layer
of air bubbles. Measurements of the drag resistance confirmed that a reduced force was needed
to propel the toy ship.
5.3 ENGINEERS AND BIOLOGISTS
Since BID is basically about transferring biological knowledge to the engineering domain, it
seems obvious to carry out the design work as a collaboration of people with the two compe-
tences. There are good examples of successful and sustained collaborations. For instance, Julian
Vincent is a biologist who has worked for many years at an engineering college.
Biologists employed at agricultural universities are oriented toward developing more effi-
cient techniques for agriculture and forestry. Although endowed by their education with deep
insights into biology, they focus on development work in institutions where the re-
sults are solutions that keep the citizenry well fed. Furthermore, their orientation must be suf-
ficiently broad to encompass both botany and zoology.
In contrast, typical faculty members in a biology department are highly specialized, be-
cause they achieve professional rewards by focusing on narrow topics within sub-disciplines such
as entomology, mycology, and molecular biology. Although beneficial for conducting novel re-
search on narrow topics, that outlook poses a challenge to engineering-biology collaborations.
When an engineering designer searches the bioworld to find applicable biological strategies,
the search may have to span the length and breadth of available biological knowledge. If the biologist in the collaboration is a marine biologist, they will have little
knowledge about strategies involving insects or mountain plants.
A research group in Paris examined the role of biologists in biologically inspired de-
sign [25]. They found that mixed teams come up with more ideas and make fewer mistakes. They also found an increase in the diversity of biological
strategies identified as potentially useful in design work.
5.4 REFERENCES
[1] M. Helms, S. S. Vattam, and A. K. Goel, Biologically inspired design: Process and prod-
ucts, Design Studies, 30:606–622, 2009. DOI: 10.1016/j.destud.2009.04.003. 47
[2] T. A. Lenau, A.-L. Metze, and T. Hesselberg, Paradigms for biologically inspired design,
Proceedings of SPIE, 10593:1059302, 2018. DOI: 10.1117/12.2296560. 47
[3] L. H. Shu, K. Ueda, I. Chiu, and H. Cheong, Biologically inspired design, CIRP Annals—
Manufacturing Technology, 60:673–693, 2011. DOI: 10.1016/j.cirp.2011.06.001. 47, 54
[4] P.-E. Fayemi, N. Maranzana, A. Aoussat, and G. Bersano, Bio-inspired design character-
isation and its links with problem solving tools, Proceedings of DESIGN: 13th International
Design Conference, pages 173–182, Dubrovnik, Croatia, May 19–22, 2014. 47
[5] P. E. Fayemi, K. Wanieck, C. Zollfrank, N. Maranzana, and A. Aoussat, Biomimet-
ics: Process, tools and practice, Bioinspiration and Biomimetics, 12:011002, 2017. DOI:
10.1088/1748-3190/12/1/011002. 47, 49
[6] ISO 18458:2015, Biomimetics—Terminology, Concepts and Methodology, International
Standards Organization, Geneva, Switzerland, 2015. https://www.iso.org/standard/
62500.html DOI: 10.3403/30274979. 47
[7] T. Lenau, K. Helten, C. Hepperle, S. Schenkl, and U. Lindemann, Reducing consequences
of car collision using inspiration from nature, Proceedings of IASDR: 4th World Conference
on Design Research, Delft, The Netherlands, Oct. 31–Nov. 4, 2011. 47
[8] D. DeLuca, The Power of the Biomimicry Design Spiral, Biomimicry Institute, Missoula,
MT, 2017. https://biomimicry.org/biomimicry-design-spiral/ 47
[9] N. Cross, Engineering Design Methods—Strategies for Product Design, Wiley, Chichester,
UK, 2008. 48
[10] G. Pahl, W. Beitz, J. Feldhusen, and K.-H. Grote, Engineering Design: A Systematic Ap-
proach, 3rd ed., Springer, London, UK, 2007. DOI: 10.1007/978-1-84628-319-2. 48
[11] S. Keshwani, T. A. Lenau, S. Ahmed-Kristensen, and A. Chakrabarti, Comparing novelty
of designs from biological-inspiration with those from brainstorming, Journal of Engineer-
ing Design, 28:654–680, 2017. DOI: 10.1080/09544828.2017.1393504. 48
[12] V. Srinivasan and A. Chakrabarti, Investigating novelty–outcome relationships in engi-
neering design, Artificial Intelligence for Engineering Design, Analysis and Manufacturing,
24:161–178, 2010. DOI: 10.1017/s089006041000003x. 48
[13] M. Helms and A. K. Goel, The four-box method: Problem formulation and analogy evalu-
ation in biologically inspired design, Journal of Mechanical Design, 136:111106, 2014. DOI:
10.1115/1.4028172. 50, 51
[14] Merriam-Webster, Thesaurus, Springfield, MA. https://www.merriam-webster.com/
thesaurus 53
[15] Princeton University, WordNet: A Lexical Database for English, Princeton, NJ. https://
wordnet.princeton.edu/ 53
[16] The Biomimicry Institute, AskNature: Innovation Inspired by Nature, Missoula, MT. https:
//asknature.org/ 54
[17] The Biomimicry Institute, The Biomimicry Taxonomy, Missoula, MT. https://asknature.org/
resource/biomimicry-taxonomy/ 54
[18] Protected Planet, https://www.protectedplanet.net/en 55
[19] T. A. Lenau, Do biomimetic students think outside the box? Proceedings of the 21st In-
ternational Conference on Engineering Design (ICED17), Vol. 4: Design Methods and Tools,
4:543–551, Vancouver, Canada, Aug. 21–25, 2017. 55
[20] BIOSIS Previews®. https://www.ebsco.com/products/research-databases/biosis-previews
56
[21] Encyclopedia of Life. https://eol.org/ 56
[22] T. A. Lenau, S. Keshwani, A. Chakrabarti and S. Ahmed-Kristensen, Biocards and
level of abstraction, Proceedings of the 20th International Conference on Engineering Design
(ICED15), pages 177–186, Milan, Italy, July 27–30, 2015. 56
[23] Posters from DTU-BID course. http://polynet.dk/BID/ 57
[24] J. Davenport, R. N. Hughes, M. Shorten, and P. S. Larsen, Drag reduction by air release
promotes fast ascent in jumping emperor penguins—a novel hypothesis, Marine Ecology
Progress Series, 430:171–182, 2011. DOI: 10.3354/meps08868. 57
[25] E. Graeff, N. Maranzana, and A. Aoussat, Biomimetics, where are the biologists?, Journal
of Engineering Design, 30:289–310, 2019. DOI: 10.1080/09544828.2019.1642462. 59
C H A P T E R 6
Solution-Driven Biologically
Inspired Design
I think the biggest innovations of the 21st century will
be at the intersection of biology and technology.
A new era is beginning.
Steven P. Jobs (2011)1
6.1 INTRODUCTION
Biologically inspired design (BID) can be approached from two distinctly different direc-
tions [1–3], leading to problem-driven BID and solution-driven BID. Whereas the former
was described in Chapter 5, this chapter explains the latter approach which is called biology-
push biomimetics by the International Standards Organization because it is the experience
from biology that initiates and drives industrial application [4]. Although the term bottom-up
bionik was initially used by researchers at the Technische Universität München [5], solution-
driven BID is now referred to as solution-based biomimetics by them [6].
The challenge in solution-driven BID is to identify technical applications that will bene-
fit from a set of solution principles identified from the bioworld. Solution-driven BID is often
initiated by biologists with deep insights into biological functionalities but, typically, only lit-
tle knowledge of technical applications and design methodologies. The search for applications
followed by design work can therefore be quite arduous for many biologists. Nevertheless,
several examples of solution-driven BID exist in the literature, two of the most well-known ex-
amples originating from burdock seeds that inspired Velcro™ [7] and the self-cleaning leaves of
the lotus plant [8] that inspired superhydrophobic surfaces [9, 10]. A few examples are described
in this chapter to illustrate how observations of and inspirations from bioworld phenomena have
been transformed into technical applications, followed by a description of the eight steps of an
approach to implement solution-driven BID [11].
1Walter Isaacson, Steve Jobs, Simon & Schuster, New York, 2011.
6.2 EXAMPLES OF SOLUTION-DRIVEN BID
6.2.1 MYCELIUM BIO-COMPOSITES
Mycelium is the root system of mushrooms and other types of fungus. It is typically a fine mesh
of tiny white strands referred to as hyphae [12]. The root system grows very rapidly through soil
where it degrades dead lignocellulosic material such as straw and wood into nutrients used by
the fungus. Other organisms also benefit from this process, since many fungi form symbiotic
relationships with plants. The fungus lives at the base of many plants, the mycelium spreading
along the plant’s roots. In a symbiotic relationship, the plant supplies the fungus with carbon in
the form of sugars made via photosynthesis in exchange for water and minerals such as phospho-
rus [13]. The exchange is actually more complex since the mycelium also serves as a connector
between larger plants such as trees and small seedlings for exchange of water and nutrients.
Fungi, especially through their mycelium, act as important waste-treatment agents in the bioworld, first degrading organic material and then transforming it into other types of organic
material. This process can be technologically adapted for the production of mycelium bio-
composites that can be used for insulation, packaging material, and other lightweight struc-
tural products [12, 14, 15]. Agricultural waste streams comprise straw and husk which can be
transformed into porous solids using fungi [16].
The left panel of Fig. 6.1 illustrates a corrugated panel made of a mycelium bio-composite.
The surface is similar to that of plastics but is a bit rougher in texture and appearance. The
natural origin of the bio-composite is evident to both eyes and fingers, promoting its use as
a natural and biodegradable alternative to foamed plastics. The American company Ecovative
has commercialized the manufacturing process for a range of foamy products [17, 18]. The first
products were insulation and packaging items to replace foamed polystyrene. In these products,
sometimes referred to as mycocomposites, the mycelium functions as a self-assembling biological
binder for agricultural byproducts. Ecovative has also used the mycelium-based technology to
produce a refined material for clothing fabrics and foamy skincare products.
As the mycelium is edible, mycelium bio-composites can be consumed as food. It is pos-
sible to achieve a texture and flavor similar to meat and in that way offer a vegetarian alternative.
No animal products are used at all, which makes mycelium bio-composites attractive as food for
vegans.
A limitation of the currently available mycelium bio-composites is their relatively high
weight; hence, these materials cannot compete with the very lightweight foamed plastics. This
has to do with the manufacturing method in which the finely chopped agricultural wastes are
kept in shape by loading them into the cavity of a mold, thereby limiting the growth of the
hyphae to the void regions between the fibers of the agricultural material. This was the experi-
ence of a design team at Danmarks Tekniske Universitet (DTU) when making the foam core
of a 2-m-long surfboard of a mycelium bio-composite, shown in the right panel of Fig. 6.1.
Although such a large object could be made with the required strength, it was still too heavy for
the intended purpose.
Figure 6.1: (Left) Corrugated panel made of a mycelium bio-composite with a similar but more
natural appearance compared to foamed plastics. (Right) Foam core of a 2-m-long surfboard
made from hemp fibers bound together by mycelium. Courtesy: Dan Skovgaard Jensen, Kristian
Ullum Kristensen, and Lasse Koefoed Sudergaard.
To improve the mycelium bio-composite, DTU researchers are working to combine the
mycelium growing process with 3D printing [16]. One approach is to 3D print a porous matrix
material in which the fungus grows much the same way as it does in the bioworld when degrading
dead lignocellulosic material. Another approach is to use a 3D-printing technique in which the
printing nozzle is maneuvered by a robotic arm to place the matrix material in space in the same
way as spiders make their webs. After the hyphae spread in the 3D web, the resulting foamy
material is very light and highly suitable for high-performance sandwich composites.
6.2.2 BOMBARDIER-BEETLE SPRAY
Ground beetles of many species such as Stenaptinus insignis [19] and Brachinus crepitans [20]
are commonly called bombardier beetles because they exhibit an extraordinary self-protecting
behavior. When approached by a predator such as an ant, a bombardier beetle sprays a boiling
liquid toward the approaching predator, as depicted in Fig. 6.2. The liquid is ejected through
a nozzle at the abdomen which can be directed to point toward the desired target. The amaz-
ing feature is that it is possible for the beetle to generate and handle a boiling liquid without
harming itself. Another remarkable feature is the way in which the very hot aerosol is made.
A gland containing hydrogen peroxide and another gland containing hydroquinone shoot their
respective contents through the anus. When the two liquids mix with the enzymes catalase and
peroxidase, hydrogen peroxide decomposes into water and oxygen and hydroquinone oxidizes
Figure 6.2: Hot spray is used by Stenaptinus insignis as a defense against predators [19]. Copy-
right (1999) National Academy of Sciences, U.S.A.
into p-quinones. Both reactions are exothermic, bringing the mixture to the boiling point and
vaporizing it partially before expulsion along with free oxygen.
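Written out in standard textbook form (these stoichiometric equations are general chemistry, not reproduced from the cited beetle studies), the two exothermic reactions are approximately:

2 H2O2 → 2 H2O + O2 (decomposition of hydrogen peroxide, catalyzed by catalase)
C6H4(OH)2 + H2O2 → C6H4O2 + 2 H2O (oxidation of hydroquinone to p-quinone, catalyzed by peroxidase)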
At the University of Leeds, the entire defense mechanism of the bombardier beetle species
was found relevant to gas turbine igniters [20, 21]. The initial part of a research project under-
taken at Leeds can be considered to be problem driven, as the desire to improve the combustion
process in a gas turbine led to interest in a biological phenomenon. After studying the spray
mechanism in the beetle, researchers constructed a scaled-up replica of the combustion chamber
to demonstrate a similar spray formation. It was soon realized that the fascinating and remark-
able properties of the bombardier spray mechanism could be useful for pharmaceutical sprays,
fire extinguishers, and fuel injectors in combustion engines. That realization moved the work
from problem-driven BID toward solution-driven BID. This can be seen as a definition of the
attractive characteristics of a biological phenomenon (which is the first step in solution-driven
Figure 6.3: Tubercles on the leading edges of the flippers of a humpback whale improve lift and
reduce drag as well as the risk of stalling. Courtesy: Whit Welles (Wwelles14) https://commons.
wikimedia.org/w/index.php?curid=2821387.
BID, as explained in Section 6.3.2) for a spray technology that can reduce the environmental
impact typical of existing spray technologies [21]. That understanding led to the identification
of spray applications, such as pharmaceutical sprays and fire extinguishers, that release polluting
gases such as propane into the atmosphere.
The biomimetic spray technology is being applied in other scenarios too. For example,
exhaust from internal combustion engines contains nitrogen oxides (NOx), which contribute to
smog and acid rain [22]. The release of NOx is normally regulated by flow-restricting mixers.
However, the principles for vapor formation in the bombardier beetles can be exploited to inject
small droplets of a solution of urea into the exhaust and thereby inhibit NOx release [23].
Swedish Biomimetics 3000 is commercializing the bombardier-beetle spray technol-
ogy [24, 25]. This industrial company realized the potential of this biomimetic technology and
began to explore applications in diverse industrial sectors, e.g., for air humidifiers in supermar-
kets.
6.2.3 TUBERCLES FOR FLOW CONTROL
Serendipity can play a big role in solution-driven BID. Frank Fish, a biology professor at West
Chester University, happened to notice a curious feature in a figurine of three humpback whales
(Megaptera novaeangliae) displayed at an antiques store [26]. The leading edges of the large
flippers of the whales in the figurine were not straight but had tubercles. A little research showed
that the artist had not made an error; indeed, the flippers of humpback whales have tubercles,
as illustrated in Fig. 6.3.
Why do whale flippers have this strange geometry? Flippers are adaptations of hands.
The biologist Fish found that the tubercles closely follow the joints in the phalanges of the
“fingers” in each flipper. The flippers of a fully grown whale are about 3.60 m long, relatively
long for an animal whose body is only about four times that length. The natural assumption for the biologist was that
the tubercles serve a special purpose. This turned out to be true because humpback whales are
very agile swimmers and can quickly change direction when swimming at a high speed. Unlike
whales of other species, a group of humpback whales forms a circle when hunting a shoal of
fish. The whales release bubbles from their blowholes to collectively form a cylindrical curtain
to confine the shoal. The curtain is tightened as the radius of the circular formation decreases,
delivering the prey in densely packed mouthfuls to the predators.
Computer models using the equations of fluid dynamics as well as experiments confirmed
that the tubercles affect motion in a fluid significantly: lift is increased and drag is reduced [27].
These features explain the agility of humpback whales. The tubercles also reduce the risk of
stalling which can happen when the lift of the flipper suddenly drops.
Having uncovered the physical principles underlying the fascinating capability of an ani-
mal arising from its anatomy, Fish wondered which technical applications could benefit from the
enhanced flow characteristics when using a flipper or a fin with tubercles on the leading edge.
This is the classic initiation of solution-driven BID, which justifies the appellation biology-
push biomimetics. Many applications were investigated [28]. One is on the fin of a short
surfboard which would enable a surfer to make a more sudden cutback, i.e., change direction
when riding a wave. Another is on the keel of a sailing boat which can allow the boat to make
tighter turns.
Like water, air is a fluid. Could applications in air also benefit from tubercles? Truck mir-
rors can be fitted with tubercles to reduce the drag and thereby improve fuel economy. The same
effect can be achieved by adding tubercles to fins on racing cars. Helicopter rotors can deliver
more lift with a reduced risk of stalling. Fans in stables can save energy while also becoming
quieter. Windmills can generate more energy because of reduced drag.
Many good proposals for possible applications of the tubercles on the flippers of humpback whales emerged. Which of those proposals becomes commercial depends on the trade-off
between achievable technical benefits and possible drawbacks as well as on the ease of produc-
tion.
6.2.4 ABALONE-SHELL ARMOR
Abalone is the common name for marine mollusks belonging to the family Haliotidae. An
abalone shell is shown in Fig. 6.4. The inner layer of the shell is made of nacre which is extremely
tough, considering that most of it is aragonite which is very brittle because it is essentially chalk.
The toughness can be measured as the specific work of fracture, which is 0.2 J m⁻² for aragonite but 400 J m⁻² for nacre [29]. The explanation for nacre being 2,000 times tougher than aragonite is found in the layered “brick-and-mortar” micromorphology also shown in Fig. 6.4 [30, 31].
The bricks are plates of aragonite and the mortar is a ductile proteinaceous material [32].
Figure 6.4: (Left) Nacre in an abalone shell. (Right) Schematic of the crack-resistant brick-and-
mortar micromorphology of the abalone shell.
Toughness is often explained as prevention of crack propagation. A propagating crack is
arrested when it encounters the proteinaceous mortar. When the shell experiences an impact
from a crab or another predator, the impact energy causes the formation of microcracks in the
aragonite plates. In a more homogeneous material, the microcracks would propagate and cause
a failure, but the proteinaceous mortar absorbs the impact energy by deforming elastically and
distributing part of the energy for microcrack formation in many other aragonite plates. Shell
failure is thereby averted, the abalone shell thus providing an example of counterintuitive performance.
The abalone shell happens to provide a documented example of a misapplication of
solution-driven BID [1]. Fascinated by the impact resistance of the abalone shell, a group of
engineering students at the Georgia Institute of Technology exploited the brick-and-mortar
micromorphology for a bullet-proof vest. It was clearly a solution-driven approach where the
inspiration came from a biological solution and was applied to a technical problem. However,
the design team did not approach the exercise with sufficient rigor, going directly from a de-
scription of a fascinating functionality of a biological structure to a detailed specification of a
solution to an appealing technical problem. They did not spend time on a closer analysis of what
the properties of the abalone shell actually are and what type of impact it is best suited for. The
abalone shell is very good at resisting the force from the jaws of a predator which typically applies
the force at a slow speed. This is very different from the very sudden impact of a bullet flying at a
high speed. Furthermore, the team designed the vest mimicking not only the micromorphology
but also the chemical constituents (small flakes of chalk and elastic matrix) of the shell. Not only
was the vest incapable of resisting bullets, it was much too heavy as well.
6.3 STEPS FOR SOLUTION-DRIVEN BID
Section 5.2 provides a five-phase implementation scheme along with several approaches that can be adopted in each phase. In contrast, the literature contains much less information on the formal implementation
of solution-driven BID. Researchers at the Georgia Institute of Technology have formulated a
seven-step implementation plan as follows [1]:
(i) become aware of a biological phenomenon,
(ii) define the functionalities that brought attention to that biological phenomenon,
(iii) extract the key principles underlying the attractive functionalities,
(iv) specify the usefulness of the biological functionalities for human activities,
(v) search for technical problems that can be solved using the identified functionalities,
(vi) select a technical problem from the ones identified, and
(vii) apply the key principles to that technical problem.
However, the instructions available in the design literature for some of these steps are scant.
DTU researchers therefore developed an eight-step procedure to implement solution-driven
BID, with inspiration from the way application search is done for conventional technology [11].
6.3.1 APPLICATION SEARCH
Application search is routinely carried out in any company that is focused on using a specific
production technology. In order to ensure future sales, the company will regularly evaluate its
present portfolio of products and search for new areas that will benefit from its production tech-
nology. The company will encounter challenges when seeking expansion into industrial sectors
that it has no experience in. Due to this limitation, the company will serve as a subcontractor to
companies that have both the required experience and contacts with end users.
Another limiting factor for such a company is that its principals are not trained in design
thinking and are less experienced in working with open problems and large spaces of solutions.
Instead, their forte is a deep knowledge of the specific production technology which enables their
company to mass produce at a competitive price. The company will also be good at improving
the technology to incorporate new features. But unlike companies with end-user contact (such
as manufacturers of furniture or household appliances), it does not have a well-defined user
group that can be explored to identify expansion potential. Identification of industrial sectors
for expansion can therefore be a challenge. Application search is a way to meet this challenge.
As an example, consider application search carried out by a company that specializes in re-
action molding of polyurethane, which is used to make toilet seats, panels for interior decoration,
and dashboards of cars. A design-oriented approach to application search for this company is to
Figure 6.5: Pinart toy for children.
first identify the attractive characteristics of the reaction-molding technology and then search
for end-user applications in order to identify candidate companies that will benefit from its tech-
nology. The low tooling price for manufacturing polyurethane objects enables: (i) the production
of small batches of custom-designed objects, (ii) a high degree of freedom for free-form geom-
etry, and (iii) the production of lightweight components with foamed core that can be inserted
in metal, wood, and textile items. For each of these three enabling attributes, an open search for
applications can be made, in brainstorming sessions and/or on internet search engines.
Another example is a project carried out by two engineering students to develop a new
type of production technology based on the pinart toy shown in Fig. 6.5 [11]. The production
technology is based on a mold that can change shape on demand and hence be useful for casting
individually shaped items. An application search to justify the development of the mold iden-
tified 136 quite different applications encompassing prosthetics, contact lenses, hearing aids,
chocolates, compact-disk covers, jewelry, propellers for sailing boats, and concrete bridges. A
specific application must be selected in the development phase, since many parameters for the
production tool (in this case, the mold), such as dimensions, accuracy, resolution, and through-
put rate depend on the application. Based on an analysis of the applications and dialogue with
possible collaborators for each of the application areas, two applications were selected: (i) a tool
to fabricate individually shaped curved concrete facade elements and (ii) a tool for inscribing
marks on casts to enable subsequent traceability during manufacture. The two resulting tools
are shown in Fig. 6.6. The two applications are very different and address very different business areas.
Figure 6.6: (Left) A tool for the fabrication of individually shaped curved concrete facade ele-
ments [33] and (right) a tool for inscribing marks on casts [34], both developed based on the
pinart toy shown in Fig. 6.5.
Figure 6.7: Lotus leaves repel water and thereby stay clean.
6.3.2 EIGHT-STEP PROCEDURE
With the knowledge that application searches are routinely carried out in some industrial sectors,
DTU researchers devised an eight-step procedure that has some overlap with the seven-step
implementation plan from the Georgia Institute of Technology. The eight steps of the DTU
approach for solution-driven BID are provided in Table 6.1.
The eight-step procedure is exemplified in Table 6.1 by the leaves of the lotus (Nelumbo
nucifera), a plant native to many tropical countries. Considered sacred by Hindus, Buddhists,
and Jains, the lotus grows in wetlands, ponds, and lakes. The remarkable characteristic of this plant is
that the ventral surfaces of its leaves stay clean even in dirty surroundings because those surfaces
are superhydrophobic [9], as may be noticed in Fig. 6.7.
Table 6.1: The eight steps of the DTU approach for solution-driven BID along with the lotus-
leaf example of self-cleaning surfaces in the bioworld
No. | Step | Lotus-leaf example
1 | Become aware of a biological phenomenon and define its attractive characteristic | Characteristic: water repellence
2 | Make an open search for applications | Self-cleaning vehicles
3 | Formulate constraints to limit the scope of the search | Constraint: only applications in which particle accumulation is undesirable and the particles are difficult to remove
4 | Apply constraints one by one to eliminate some results of Step 2 | Inside the shield of a lawn mower
5 | Create a concept for each result of Step 4 | Coat the inside of the lawn-mower shield so that it can be cleaned with a garden water hose
6 | Consult selected stakeholders | Talk to a few gardeners
7 | Repeat Step 5 for new application proposals from stakeholders | Wheelbarrows
8 | Assess every concept against predefined criteria | Criteria: (i) longer lifetime for the lawn mower and (ii) lower risk of spreading pests
Step 1. Solution-driven BID begins with the awareness of a biological phenomenon that
could either constitute or provide a solution to a technical problem that has not yet been identified. Thus, solution-driven BID can be initiated merely by an interest in an animal or a plant with a
fascinating behavior or capability. It can also be initiated by a biologist who has studied biological
organisms of a certain species or genus for many years and begins to wonder which engineering
applications could benefit from the biological insight. Defining the biological solution then
requires a description of its characteristics that may be relevant to some applications. Biological
organs are typically multifunctional, so it may be arduous to describe all of their characteristics.
Fortunately, a complete description is not called for, since it was a specific characteristic that drew
attention. In the first step, that attractive characteristic of the biological phenomenon must be
defined.
The persistently clean condition of lotus leaves can be explained by their water-repellence characteristic, which prevents dust particles and other detritus from attaching to their ventral surfaces.
The superhydrophobicity is responsible for the formation of water beads that roll off the surface,
thereby removing foreign matter. In turn, this superhydrophobicity arises from surface topology at the 10-μm length scale [35]. However, as the matte appearance of lotus leaves is quite
different from the glossy appearance of clean and hygienic surfaces, the superhydrophobicity due
to surface topology may not be attractive enough for certain applications.
Step 2. Next, an open search is made for applications that will benefit from the attractive
characteristic defined in the first step. This can be done in different ways, but a simple one is
for the design team to brainstorm in order to answer the following question: “In what situations
can the described characteristic be advantageous?”
The question for the lotus-leaf example is: “Where can self-cleaning be advantageous?” A
more general question is: “In which situations do surfaces become dirty?” The unwanted con-
sequence of having a matte surface could lead to the following question: “Where are clean but
non-glossy surfaces required?”
Step 3. The characteristic defined in the first step will most likely result in finding a large
number of possible applications in the second step. Therefore, the third step requires the for-
mulation of constraints that will not only limit the scope of the search but also force deeper
explorations of the fewer possible applications.
A constraint can require focus on items of specific types—e.g., household items, leisure
and sports equipment, hospital articles, professional tools, etc. Another constraint can be on the
type of materials deemed acceptable. A third way to approach setting up constraints could be to
analyze daily or professional routines while looking for activities that benefit from the defined
characteristic. Such a routine could be what a person does while working in an office or while
traveling every week to meet clients on site. Professional routines can also be incorporated by
choosing a professional activity such as gardening, hospital sanitation, painting houses, and graf-
fiti removal. The framing of a context makes it easier to imagine where the defined characteristic
of the biological solution may be beneficial.
A simple constraint for the lotus-leaf example is to focus on situations in which particle
accumulation is undesirable and the particles are difficult to remove.
Step 4. Application of the constraints formulated in the third step will eliminate many
of the possible applications identified in the second step. The constraints can be applied either
sequentially or concurrently. Brainstorming by the design team will deliver context-specific ap-
plications.
For the lotus-leaf example, an application may be sought for lawn mowers in which the oper-
ator is protected from the cutting blade by a shield. The cut grass often sticks to the inside surface
of the shield and is not easy to remove. Another possible application is for a house painter’s tools
to have non-stick surfaces. Likewise, exterior walls of office buildings require treatment to prevent them from becoming canvases for graffiti artists.
Step 5. For each of the results of the constrained search undertaken in the fourth step, a
concept has to be created. As explained in Section 2.4.4, whereas an idea is merely a principle for
how to solve a problem, the application of that principle in a specific context leads to a concept
because it satisfies the context-specific constraints. The intended performance of each concept
must be described in concrete terms in the fifth step.
For the lotus-leaf example, a concept for the lawn mower is to endow the internal surface of the shield with topology at the 10-μm length scale to prevent wet cut grass from attaching to that surface. Likewise, a concept for the house painter's tools is to endow the exposed surface of every tool with a similar topology to prevent paint from adhering to the exposed surface. Finally,
providing the surfaces of walls with a similar topology will deter graffiti artists.
Step 6. Each concept for every application has to be discussed with knowledgeable stake-
holders in the sixth step. The stakeholders should be presented with the relevant concept(s)
instead of being asked about possible applications. Some stakeholders are very likely to have
reservations about why a concept may not work well in the real world, but the main point is to
stimulate their creativity so they may come up with their own application proposals. Often it is
easier to be creative when criticizing a concept.
For the lotus-leaf example, the stakeholders to be consulted should be gardeners for the
lawn-mower concept, house painters for the painting-tools concept, and janitors for the graffiti-
prevention concept.
In the case of the production technology based on the pinart toy shown in Fig. 6.5 [11], a
concept was of a flexible mold for use by sandcasting companies. When sandcasting personnel
were consulted on this concept, they informed the design team that the need for flexible molds
is insignificant but a major need exists for traceability during the manufacturing process. If
an individual code or number could be inscribed on each cast by the mold, then it would be
possible to trace each cast subsequently. The quality of representative casts from a batch could be
assessed and related to the personnel who produced that batch as well as to the specific material
composition used. The company could in this way get a better quality-assurance system. The
design team had not been aware of the need for traceability, but consultation with knowledgeable
stakeholders led to a new application of their technology.
Step 7. The penultimate step is a repetition of the fifth step for the new applications iden-
tified by knowledgeable stakeholders during the sixth step.
For the lotus-leaf example, gardeners could suggest superhydrophobic surfaces for wheel-
barrows, house painters could suggest similar surfaces for lunchboxes, and janitors for walls in
children’s bedrooms and school rooms.
Step 8. The final step of the DTU approach for solution-driven BID is to assess every
concept with respect to a set of predefined criteria which could include the expected market
capacity and societal impact.
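Read purely as a thought aid, the eight steps can also be seen as a filter-and-enrich pipeline. The sketch below illustrates that reading with the lotus-leaf example; every name and data item in it is an assumption made for this illustration, and the DTU approach itself remains a manual design procedure rather than software.

# Hypothetical sketch: the eight steps read as a filter-and-enrich pipeline,
# populated with the lotus-leaf example. All data are illustrative; the DTU
# approach is carried out by designers, not by a program.

characteristic = "water repellence (self-cleaning surface)"             # Step 1

applications = ["vehicle bodies", "inside of a lawn-mower shield",      # Step 2
                "house painter's tools", "building facades"]

def meets_constraint(application):                                      # Step 3
    # Constraint: particle accumulation is undesirable and hard to remove.
    hard_to_clean = {"inside of a lawn-mower shield",
                     "house painter's tools", "building facades"}
    return application in hard_to_clean

shortlist = [a for a in applications if meets_constraint(a)]            # Step 4

concepts = {a: f"Give the {a} a surface topology at the 10-micrometre "  # Step 5
               f"scale so that particles do not adhere"
            for a in shortlist}

stakeholder_proposals = ["wheelbarrows", "lunchboxes"]                   # Steps 6-7
concepts.update({a: f"Apply the same micro-scale topology to {a}"
                 for a in stakeholder_proposals})

def assess(concept):                                                     # Step 8
    # Placeholder assessment against predefined criteria.
    return {"market capacity": "to be estimated",
            "societal impact": "to be estimated"}

for application, concept in concepts.items():
    print(application, "->", concept, assess(concept))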
6.4 REFERENCES
[1] M. Helms, S. S. Vattam, and A. K. Goel, Biologically inspired design: Process and prod-
ucts, Design Studies, 30:606–622, 2009. DOI: 10.1016/j.destud.2009.04.003. 61, 67, 68
[2] T. A. Lenau, A.-L. Metze, and T. Hesselberg, Paradigms for biologically inspired design,
Proceedings of SPIE, 10593:1059302, 2018. DOI: 10.1117/12.2296560. 61
[3] L. H. Shu, K. Ueda, I. Chiu, and H. Cheong, Biologically inspired design, CIRP Annals—
Manufacturing Technology, 60:673–693, 2011. DOI: 10.1016/j.cirp.2011.06.001. 61
[4] ISO 18458:2015, Biomimetics—Terminology, Concepts and Methodology, International
Standards Organization, Geneva, Switzerland, 2015. https://www.iso.org/standard/
62500.html DOI: 10.3403/30274979. 61
[5] T. Lenau, K. Helten, C. Hepperle, S. Schenkl, and U. Lindemann, Reducing consequences
of car collision using inspiration from nature, Proceedings of IASDR: 4th World Conference
on Design Research, Delft, The Netherlands, Oct. 31–Nov. 4, 2011. 61
[6] K. Wanieck, P.-E. Fayemi, N. Maranzana, C. Zollfrank, and S. Jacobs, Biomimet-
ics and its tools, Bioinspired, Biomimetic and Nanobiomaterials, 6:53–66, 2017. DOI:
10.1680/jbibn.16.00010. 61
[7] S. D. Strauss, The Big Idea: How Business Innovators Get Great Ideas to Market, pages 14–18,
Dearborn Trade Publishing, Chicago, IL, 2002. 61
[8] C. Neinhuis
and W. Barthlott, Characterization and distribution of water-
repellent, self-cleaning plant surfaces, Annals of Botany, 79:667–677, 1997. DOI:
10.1006/anbo.1997.0400. 61
[9] X.-M. Li, D. Reinhoudt, and M. Crego-Calama, What do we need for a superhydrophobic
surface? A review on the recent progress in the preparation of superhydrophobic surfaces,
Chemical Society Reviews, 36:1350–1368, 2007. DOI: 10.1039/b602486f. 61, 70
[10] J. Wang, L. Wang, N. Sun, R. Tierney, H. Li, M. Corsetti, L. Williams, P. K. Wong,
and T.-S. Wong, Viscoelastic solid-repellent coatings for extreme water saving and global
sanitation, Nature Sustainability, 2:1097–1105, 2019. DOI: 10.1038/s41893-019-0421-0.
61
[11] T. A. Lenau, Application search in solution-driven biologically inspired design, Proceed-
ings of the 22nd International Conference on Engineering Design (ICED19), pages 269–278,
Delft, The Netherlands, Aug. 5–8, 2019. DOI: 10.1017/dsi.2019.30. 61, 68, 69, 73
[12] F. V. W. Appels, S. Camere, M. Montalti, E. Karana, K. M. B. Jansen, J. Dijksterhuis, P.
Krijgsheld, and H. A. B. Wösten, Fabrication factors influencing mechanical, moisture-
and water-related properties of mycelium-based composites, Materials and Design, 161:64–
71, 2019. DOI: 10.1016/j.matdes.2018.11.027. 62
[13] J. D. Birch, S. W. Simard, K. J. Beiler, and J. Karst, Beyond seedlings: Ectomycorrhizal
fungal networks and growth of mature, Pseudotsuga menziesii, Journal of Ecology, 2020.
DOI: 10.1111/1365-2745.13507. 62
[14] F. V. W. Appels, The use of fungal mycelium for the production of bio-based materials,
Ph.D. Dissertation, Utrecht University, The Netherlands, 2020. https://dspace.library.uu.
nl/bitstream/handle/1874/390884/5e1c62cd1b0f1.pdf 62
[15] C. Bruscato, E. Malvessi, R. N. Brandalise, and M. Camassola, High performance of
macrofungi in the production of mycelium-based biofoams using sawdust—sustainable
technology for waste reduction, Journal of Cleaner Production, 234:225–232, 2019. DOI:
10.1016/j.jclepro.2019.06.150. 62
[16] O. Robertson, F. Høgdal, L. Mckay, and T. Lenau, Fungal future: A review of mycelium
biocomposites as an ecological alternative insulation material, Proceedings of NordDesign,
Kongens Lyngby, Denmark, Aug. 11–14, 2020. 62, 63
[17] E. Bayer and G. McIntyre, Method for making dehydrated mycelium elements and product
made thereby, US Patent 2012/0270302 A1, October 25, 2012. https://patents.google.com/
patent/US20120270302A1/en 62
[18] Ecovative Design, We Grow Materials. https://ecovativedesign.com/ 62
[19] T. Eisner and D. J. Aneshansley, Spray aiming in the bombardier beetle: Photographic
evidence, Proceedings of U.S. National Academy of Sciences, 96:9705–9709, 1999. DOI:
10.1073/pnas.96.17.9705. 63, 64
[20] N. Beheshti and A. C. McIntosh, A biomimetic study of the explosive discharge of
the bombardier beetle, International Journal of Design and Nature, 1:61–69, 2007. DOI:
10.2495/d&n-v1-n1-61-69. 63, 64
[21] A. C. McIntosh, Biomimetic inspiration from fire and combustion in nature including the
bombardier beetle, Proceedings of SPIE, 7401:74010F, 2009. DOI: 10.1117/12.825477. 64,
65
[22] D. T. Allen and D. R. Shonnard, Sustainable Engineering: Concepts, Design, and Case Stud-
ies, Prentice Hall, Upper Saddle River, NJ, 2012. 65
[23] P. Larsson, W. Lennard, O. Andersson, and P. Tunestal, A droplet size investigation and
comparison using a novel biomimetic flash-boiling injector for AdBlue injections, SAE
Technical Paper 2016-01-2211, 2016. DOI: 10.4271/2016-01-2211. 65
[24] Swedish Biomimetics 3000, μLOT® technology. https://sb3000.tech/ulot-process/ 65
[25] L.-U. Larsson, Swedish Biomimetics 3000®, Bioinspired!, 6(1):6–7, 2008. https://
bioinspired.sinet.ca/content/february-2008-newsletter-issue-61 65
[26] A. S. Brown, From whales to fans, Mechanical Engineering, 133(5):24–29, 2011. DOI:
10.1115/1.2011-may-1. 65
[27] D. S. Miklosovic, M. M. Murray, L. E. Howle, and F. E. Fish, Leading-edge tubercles
delay stall on humpback whale (Megaptera novaeangliae) flippers, Physics of Fluids, 16:L39–
L42, 2004. DOI: 10.1063/1.1688341. 66
[28] F. E. Fish, P. W. Weber, M. M. Murray, and L. E. Howle, The tubercles on humpback
whales’ flippers: Application of bio-inspired technology. Integrative and Comparative Bi-
ology, 51:203–213, 2011. DOI: 10.1093/icb/icr016. 66
[29] A. P. Jackson, J. F. V. Vincent, and R. M. Turner, Comparison of nacre with other ceramic
composites, Journal of Materials Science, 25:3173–3178, 1990. DOI: 10.1007/bf00587670.
66
[30] T. A. Lenau and T. Hesselberg, Biomimetic self-organization and self-healing, Engi-
neered Biomimicry, A. Lakhtakia and R. J. Martín-Palma, Eds., pages 333–358, Elsevier,
Waltham, MA, 2013. DOI: 10.1016/c2011-0-06814-x. 66
[31] M. Mirkhalaf, D. Zhu, and F. Barthelat, Biomimetic hard materials, Engineered
Biomimicry, A. Lakhtakia and R. J. Martín-Palma, Eds., pages 59–79, Elsevier, Waltham,
MA, 2013. DOI: 10.1016/c2011-0-06814-x. 66
[32] L. Addadi, D. Joester, F. Nudelman, and S. Weiner, Mollusk shell formation: A source
of new concepts for understanding biomineralization processes, Chemistry: A. European
Journal, 12:980–987, 2006. DOI: 10.1002/chem.200500980. 66
[33] T. H. Pedersen and T. A. Lenau, Variable geometry casting of concrete elements using pin-
type tooling, Journal of Manufacturing Science and Engineering, 132:061015, 2010. DOI:
10.1115/1.4003122. 70
[34] N. K. Vedel-Smith and T. A. Lenau, Casting traceability with direct part marking using
reconfigurable pin-type tooling based on paraffin-graphite actuators, Journal of Manufac-
turing Systems, 31:113–120, 2012. DOI: 10.1016/j.jmsy.2011.12.001. 70
[35] L. Gao and T. J. McCarthy, Wetting 101, Langmuir, 25:14105–14115, 2009. DOI:
10.1021/la902206c. 71
C H A P T E R 7
Biologically Inspired Design
for the Environment
The earth, the air, the land and the water are not an inheritance
from our forefathers but on loan from our children. So we have
to handover to them at least as it was handed over to us.
Mohandas K. Gandhi1
7.1 SUSTAINABILITY AND THE ENVIRONMENT
Concern about sustainable development is mounting as the number of people on our planet in-
creases. In 1987 the Brundtland Commission of the United Nations [1, 2] defined sustainable
development as “development that meets the needs of the present without compromising the
ability of future generations to meet their own needs.” The commission considered three ar-
eas of concern for sustainable development: (i) the environment, (ii) social organization, and
(iii) economy. Technological development as well as the global organization of human society currently require the imposition of serious curbs on consumption, especially considering the limited
ability of the biosphere to absorb diverse types of waste excreted by human activities. However,
both technological development and social organization can be managed and improved to make
way for enhanced economic growth and poverty removal. Sustainable development requires that
the more affluent humans adopt lifestyles that are consistent with the planet's ecological well-being—for instance, in their consumption of energy. Also, population increase needs to be in
harmony with the changing productive potential of the ecosystem.
The quest for sustainable development was taken further in the 2030 Agenda for Sustain-
able Development which, in 2015, resulted in the United Nations General Assembly adopting
17 sustainable development goals (SDGs) [3]. The SDGs are operational goals focused on con-
crete actions. Figure 7.1 classifies all 17 SDGs in relation to the previously mentioned three
areas of concern: the environment (also referred to as the biosphere), social organization, and
economy [4].
1https://bestquotes.wordpress.com/2007/03/24/hello-world/
Figure 7.1: The 17 sustainable development goals classified for relevance to the biosphere, social
organization, and economy. Credit: Azote Images for Stockholm Resilience Center, Stockholm
University.
Biomimicry can help in addressing current actions and proposing new actions within all
three areas, with focus on using inspiration from the bioworld to solve problems relating to the
biosphere. The following SDGs can be impacted by biomimicry:
SDG 6: clean water and sanitation,
SDG 7: affordable and clean energy,
SDG 13: climate action,
SDG 14: life below water, and
SDG 15: life on land.
Both economy and social organization are human constructs and, even though inspirations for
their improvement can be found in the bioworld, the dominant application of biomimicry is
for technological solutions in line with SDGs 6, 7, 13, 14, and 15. The bioworld presents many
avenues that can be adapted for circular economy, resource efficiency, and ecosystem balances.
7.2 MATTER OF SCALE
A crucial challenge to sustainable development is posed by the growing number of people on the
planet. More people share the limited resources available, more people produce waste, and more
people pollute. Equal opportunities for everyone being a human right, countries should aim at
providing the highest standard of living consistent with the overall health of the biosphere that
includes not only all humans but all other living organisms too.
An activity that seems to be only a small problem when carried out by a few people can
turn out to be a huge problem when carried out by many people. Numerous examples show how
small problems grow out of proportion when scaled to larger populations. In Jakarta, Indonesia,
it is common practice for landowners to pump water from aquifers deep underground since piped
water is not reliably available. However, with 10 million inhabitants this practice has caused land
subsidence of as much as 4 m in the coastal areas of the city, thereby making it highly vulnerable
to flooding [5]. Another example is eutrophication of lakes and rivers [6]. The use of fertilizers is
desirable to increase agricultural yields, but the right amounts may not be applied at the correct
times. Excess fertilizer will run off during rain and/or irrigation to cause increased growth of
algae in rivers and lakes, leading to oxygen depletion and fish deaths on a large scale. This would
not be a problem if confined to a few locations. However, widespread use of excess fertilizers
impacts not only water bodies on land but also creates dead zones in seas and oceans. Thus,
a supposedly harmless action when undertaken by an individual can have grave repercussions on
a large population in a large area when that same action is simultaneously implemented by more
than a few individuals.
The global environmental impact (GEI) can be quantified as the product of three factors [7]:
the number N of people on our planet, the per-capita economic activity E, and the eco-efficiency F
defined as the environmental impact per economic activity. That is,
GEI = N × E × F.
The global population N in 2019 is around 7.5 billion, rising from 4 billion in 1974 and projected
to rise to 10 billion in 2057 [8]. Concurrently, living standards (i.e., E) have improved for many
people. In 1990, 36% of the global population was living in extreme poverty [9], defined by the World Bank as living on less than US$ 1.90 a day [10]. Extreme poverty was reduced to 8% of the world population by 2018, which illustrates the fast pace at which the standard of living is being enhanced globally. To maintain an unchanged GEI, the eco-efficiency F must be decreased, i.e., the environmental impact per unit of economic activity must be lowered.
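As a back-of-the-envelope illustration of this relationship (the population figures come from the text above; the assumed doubling of E is made up for the example), the required change in F follows directly from GEI = N × E × F:

# Hypothetical sketch: if the population N grows from 7.5 to 10 billion and
# per-capita economic activity E doubles (an assumption for this example),
# how much must the eco-efficiency F fall to keep GEI = N * E * F unchanged?
N_now, N_future = 7.5e9, 10e9
E_growth = 2.0                                     # assumed growth factor for E

F_ratio = N_now / (N_future * E_growth)            # F_future / F_now
print(f"F must shrink to {F_ratio:.3f} of its present value, i.e., the impact "
      f"per unit of economic activity must drop by about {(1 - F_ratio):.0%}.")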
The U.S., Canada, most European countries, Japan, Taiwan, South Korea, Saudi Arabia, Qatar, Bahrain, and, increasingly, parts of China and India have an opportunity to serve as role models for sustainable lifestyles of desirable quality. These regions can demonstrate that it is possible to maintain a high standard of living that is consistent with sustainable development—a win-win situation. Sustainable development does not overburden planetary resources, and a high quality of life allows citizens to reap the benefits of technoscientific advances.
A major requirement to engender this win-win situation is the low-cost production of
energy from nonpolluting sources for millions of years to come. These sources include the sun,
winds, tides, and reservoirs. Another major requirement is the minimization of the extraction
of ores, minerals, and petroleum from the planetary crust by industrywide recycling of ma-
terials that have already been extracted. Improved quality of life for a growing population is
possible only if both resource consumption and waste production are greatly reduced, resulting
in improved eco-efficiency. Highly efficient systems in the bioworld can inspire technological
developments for an effective transition toward sustainable development.
7.3 SUSTAINABLE PRACTICES FROM NATURE
The biosphere and the sun together can act as an ecosystem to sustain our species and countless
others for very long periods of time compared to the human lifespan. Photosynthesis transforms
solar energy into organic matter that is the basis of the food chain for other living organisms.
The organic matter consists mainly of a few elements—carbon, oxygen, hydrogen, nitrogen,
phosphorus, and sulfur—which can be combined into a large number of materials that later
can be decomposed to form new materials. All materials are synthesized by self-assembly into living organisms at ambient temperatures, and those organisms either excrete or decompose into materials that can be re-synthesized into living organisms. There is no external agent following some master plan that determines the sequence and method of assembly. Instead, the entire process is embedded within the organisms so that they can self-replicate.
A good example of how nature produces materials with remarkable properties at ambient
temperatures is the iridescent nacre found in mollusks. The material is very stiff and its hardness
equals that of manufactured materials such as ceramics, which require very high production tem-
peratures. Located on the inside of the shell, nacre is a composite layered structure consisting
of aragonite crystals (calcium carbonate) separated by very thin layers of a cross-linked protein.
The structure is often referred to as brick-and-mortar structure due to its visual similarity to
building materials [11].
The material-shaping mechanism is not completely understood but a good model has
been described by Addadi et al. [12]. The layered structure is made through a long sequence of
steps. The epithelial cells in the mollusk’s mantle secrete the highly cross-linked periostracum
protein layer under which a gel-like matrix is formed. The matrix includes both hydrophobic and
hydrophilic proteins as well as colloidal particles of the chemically unstable amorphous calcium
carbonate. Aragonite crystals form at nucleation sites and grow until the periostracum layer
is reached. In between the crystals, chitin molecules are trapped so that the brick-and-mortar
structure is formed.
An example from nature, but not the bioworld, is the way sediments are transported by
sea currents along coasts. Soil is removed from places along coastlines, thereby causing land
to disappear while cliffs are formed. The soil is moved by the sea currents and deposited at
other places, typically forming headlands. Thus, amazingly, water breaks down solid material
and carries it over long distances simply through the persistent application of fairly small forces
over long periods of time.
A similar mechanism is applied by humans to convert seabed into agricultural land. In
Denmark, Germany, and the Netherlands, the tides are fairly large and the coastal areas are
usually wide and flat. By deliberately placing obstacles to delay water brought over by the tides,
the deposition of sediments is promoted, causing new land to form above sea level. This
approach of using many repeated actions to create a shape abounds in nature. Each individual
action is in itself not very powerful. In contrast, humans typically apply a lot of force for a short
while to create similar formations—e.g., when an excavator is used to move soil.
The bioworld thus presents a very different approach for manufacturing materials com-
pared to the approaches humans take. When considering length scales of up to 1 m, humans
manufacture objects primarily by using high levels of energy and through a planned selection
of materials [13]. For instance, polymers are typically processed by pressing the melted poly-
mer into a mould while applying large forces. The bonding energies for polymerization are quite
similar in magnitude for a large variety of polymers, whether manufactured in a factory or in the
bioworld. But, biological systems do not use elevated temperatures and rely instead on chemical
reactions when building blocks of the right basic materials are brought into position. Biological
polymers are mainly proteins and polysaccharides in fibrous form, found in collagen, silk, mus-
cles, and arthropod exoskeletons. Hard tissue in biology is mostly made from calcium and silicon
with smaller fractions of iron, zinc, and manganese—all processed at ambient temperature [14].
7.4 CIRCULAR ECONOMY OF MATERIALS
Most biological materials can be used directly or indirectly by other organisms. Many mammals,
for instance, eat the placenta after the birth of an offspring. All spiders produce silk but not all
spiders spin webs. Webmaking saves a spider the energy-consuming effort of hunting by rapid
locomotion, but it requires a sizable investment of proteins that the web is made of [15]. Many
spiders eat their old webs so that the proteins are recycled to make new webs [16, 17].
Less directly, biological materials are broken down to simpler molecules by bacteria, mak-
ing them useful for other organisms. Biodegradation is a well-known process whereby bacteria
in the ground decompose dead organic material into carbon dioxide, nitrogenous compounds,
and other materials [18]. Mycelia from fungi break down lignin and cellulose from plants. Fungi
grow on dead trees on the forest floor after the wood has been moisturized, and the same can be
seen in buildings with wooden structures. Thus, moisturized wood provides a good environment
for fungi to grow and eventually break down the lignin and cellulose in the wood [18].
Colors in plants are usually produced using pigments [19] though sometimes structural
colors are also found [20]. A structural color arises due to spectrally selective scattering of visible
light in response to the morphology of a physical structure [21, 22]. Usually, the morphology
has a repeat pattern that is tuned to a certain color. Whether dull or brilliant, a structural color
is not produced by pigments, which is immensely important for biologically inspired design for
environment in that material diversity is not enhanced by incorporating a structurally colored
object.
Multifunctionality is commonplace in living organisms [23, 24], because fewer organs
need to be formed, housed, and coordinated if those organs are multifunctional. As an example,
a mouth is used for ingesting nutrients, releasing sounds, breathing, and showing affection. A
multifunctional module can be incorporated in a variety of products, thereby reducing inven-
tory costs, enhancing repairability, extending product lifetimes, and promoting standardization.
Lifetime extension slows down the depletion of raw materials, reduces the consumption of en-
ergy for manufacturing, and reduces the volume of waste for disposal.
7.5 MUTUALLY BENEFICIAL COEXISTENCE
No organism in the bioworld exists on its own but is dependent on interactions with other
organisms, whether of the same species or not. Within a species, wolves and dingoes hunt in
coordinated groups for greater success, starlings fly in coordinated murmurations to confuse
predators such as falcons, and fish similarly form schools (not to be confused with the much less
coordinated shoals) to elude predators. Mammal mothers rely on kin to bring food and even
look after infant offspring.
Mammals rely heavily on symbiosis with microorganisms in their digestive tract. On aver-
age, a human has 0.2 kg of bacteria primarily in the intestines [25], not only to help break down
food into substances that can be absorbed through the intestinal wall but also to supply signaling
compounds essential to the mental health of the person [26]. Transplants of fecal matter can
improve the health of humans suffering from a range of diseases [27].
Plants can produce carbohydrate building blocks through photosynthesis by extracting
carbon from the air and water from the ground. However, they cannot extract minerals such
as phosphorus from the soil and therefore benefit from a symbiotic relationship called mycor-
rhiza between their roots and fungi [18]. In exchange, the fungi get carbohydrates.
Similarly, some bacteria extract nitrogen from air and supply it to plants as ammonia [18]. Ni-
trogen fixation is essential for the biosynthesis of amino acids, proteins, nucleic acids, and other
nitrogenous compounds.
Many animal species rely on social relationships to thrive and even exist. These relation-
ships are very pronounced in social insects such as bees, ants, and termites. They are characterized
by a division of labor whereby some individuals provide food, others nurture the eggs and lar-
vae, and still others build and maintain the physical living facility. The individuals communicate
using a range of signals including visual (e.g., color in flowers and waggle dance among bees),
olfactory (e.g., pheromone trails made by ants), and acoustic (e.g., the buzzing of bees' wings).
The role of the individual appears to be centrally controlled only to a limited degree, with guards
allowing entry only to the inhabitants of the hive or pit. So, how do individuals know their roles
and how to perform tasks without feedback from a central authority?
A very subtle type of communication to control flock behavior involves pheromones. A
pheromone is an olfactory agent that, unlike many fragrances that animals consciously recognize,
takes a shortcut to the brain and produces almost instantaneous recognition. Pheromones assist
in a range of different activities such as initiating alarms, attracting mates, and marking trails to
be followed by others [28].
Inter-species communication is also commonplace. The approach of a fearsome predator
leads to a single alarm signal that warns birds and mammals of diverse species to take evasive
action [29]. Not only animals but plants also communicate. The roots of grasses and cereals
of many types excrete chemical compounds that are processed by other plants to determine if
the secreting plants belong to their family. This phenomenon has been deduced from the way in
which the growth of roots of a certain plant is influenced by the roots of neighboring plants [30].
Human society too can benefit from symbiosis whereby the residual energy and materials
from one company become resources for another company. Industrial symbiosis is an element
in the circular economy which, apart from better utilization of resources, benefits society by
increasing the number of jobs and boosting the Gross Domestic Product [31]. In the city of
Kalundborg in Denmark, 11 public and private companies have formed a partnership facilitating
a circular approach for the refinement of crude oil; production of insulin, fertilizers, and gypsum
wallboards; and heating residences and office buildings [32]. The symbiotic activities direct waste
energy, water, and materials from one company to another. For example, the insulin factory uses fermented sugars, and the residual yeast biomass is directed to a factory for producing fertilizers and biogas; the biogas is used in the gypsum factory for heating; and the residual
thermal energy is transferred to a central heating plant. The result is better utilization of resources
and materials combined with enhanced economy and employment.
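One way to reason about such a network is to treat the exchanges as a directed graph in which a waste stream leaving one partner is an input for another. The sketch below is purely illustrative; the partner names and streams are simplified assumptions loosely following the examples above, not an inventory of the actual Kalundborg partnership.

# Hypothetical industrial-symbiosis network represented as a list of directed flows.
# Partner names and streams are simplified assumptions, not actual Kalundborg data.

symbiosis_flows = [
    # (source, waste stream, destination)
    ("insulin_factory", "residual yeast biomass", "fertilizer_and_biogas_plant"),
    ("fertilizer_and_biogas_plant", "biogas", "gypsum_wallboard_factory"),
    ("gypsum_wallboard_factory", "residual thermal energy", "district_heating_plant"),
]

def inputs_for(partner, flows):
    # List the waste streams a partner receives, together with their sources.
    return [(stream, source) for source, stream, destination in flows
            if destination == partner]

for stream, source in inputs_for("gypsum_wallboard_factory", symbiosis_flows):
    print(f"gypsum_wallboard_factory receives {stream} from {source}")

Representing the exchanges explicitly in this way makes it easy to ask which partners depend on which residual streams, and where an additional partner could close a loop that is currently open.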
Another example is the Danish Pig-City project that aims at combining different types
of agri-businesses [33]. The project combines the husbandry of pigs and production of tomatoes
with a slaughter house, an energy generation plant, and a bio-refinery. Heat from the piggery
on the ground level is used for growing tomatoes in a greenhouse on the floor over the piggery.
Organic waste from both the piggery and the greenhouse is treated in the bio-refinery to produce
biogas for heating and fertilizers for the greenhouse.
7.6 ENERGY EFFICIENCY
Access to enough energy is a limiting factor for all physical and chemical processes in the
bioworld. Just as for aeroplanes and helicopters, the range and duration of avian flight depend
on how much energy a bird has when it takes off into the air. Avian bodies have therefore
evolved to have lightweight structures. Many large birds such as albatrosses, condors, and eagles
exploit the warmer air currents for lift and thus minimize energy consumption by their pec-
toral and supracoracoideus muscles [34, 35]. Mammals regain energy cyclically when running.
On the downstroke of a leg, the tendons, ligaments, and muscles stretch to store energy that is
released at push-off. This is true for most animals, but a surprising phenomenon is seen for
kangaroos which are very efficient energy regainers. At moderate speeds they are more energy
efficient in terms of oxygen consumption compared to running bipeds and quadrupeds of similar
size [36].
The force that impedes forward motion in a fluid is called drag. Several species have intri-
cate mechanisms for reducing drag. Sharks are covered with tiny corrugated scales which intro-
duce microturbulence close to the body surface. The microturbulence allows for a more laminar
flow of seawater, thereby reducing the overall drag. The phenomenon has been mimicked in
polymer films applied on aircraft to reduce drag [37]. The sharkskin scales are multifunctional
since their corrugated shape also prevents fouling [38], because barnacles are not able to get a
good grip and therefore fail to attach. Penguins reduce drag by releasing microbubbles of air
trapped under their feathers. If necessary, a penguin can thus increase its speed several times
over short distances, e.g., when chased by a predator [39].
7.7 DESIGN APPROACHES
Several approaches have been devised to support the designer toward the goal of sustainability
enhancement. Formal guidelines help keep a tight focus toward that goal. The system-oriented
approach of circular design orients the designer not solely toward the manufacture of a specific product but toward its entire lifecycle, encompassing raw materials, the use phase, and the
utilization of waste products. A third approach is to assess the environmental footprint of the
product.
7.7.1 ENVIRONMENTAL GUIDELINES
An approach suitable for the early-design stages when many product details are yet unknown
is to use Green Design Guidelines (GDG) [40]. The widely used GDGs may either take very concrete forms, such as the specification of acceptable materials, or be abstract, for example by exhorting the embrace of techniques that produce less waste in preference to techniques that require remedial cleanup of the waste produced.
Another approach involves a systematic methodology to aim for efficiency in the use of
energy and materials [41]. This approach comprises different types of efficiency (such as me-
chanical, material, and thermal efficiencies) and a framework to use bioinspiration. Once a type
of efficiency is selected, analogies from the bioworld can help the designer by providing insight
into functioning and efficient solutions.
The International Standards Organization defines biomimicry as “philosophy and inter-
disciplinary design approaches taking nature as a model to meet the challenges of sustainable
development (social, environmental, and economic)” [42]. A distinction has been made in Chap-
ter 1 between biomimicry and engineered biomimicry, the former being contained in the
latter. Whereas engineered biomimicry does not need to be focused on reaching for sustainabil-
ity goals, the term biomimicry—often associated with the Biomimicry Institute, an American
non-profit organization—is focused on using inspiration from the bioworld to design solutions
that contribute to sustainable development.
The Biomimicry Institute has a sister organization called Biomimicry 3.8 which is a con-
sultancy working together with companies to solve design problems. One of the founding mem-
bers of both organizations is Janine Benyus. The two organizations have developed a basic frame-
work for design work [43] and the database AskNature [44], which allows searches for biological
strategies to solve specific functional challenges. To support sustainable development, the orga-
nizations have formulated the following six lessons from the bioworld:
• evolve to survive,
• adapt to changing conditions,
• be locally attuned and responsive,
• use life-friendly chemistry,
• be resource efficient, and
• integrate development with growth.
Each lesson leads to specific guidelines, such as "incorporate diversity" and "use low-energy processes," that are mainly concerned with the environmental part of sustainable development. These guidelines function in the same way as the criteria for evaluation of design proposals described in Chapter 2. When two proposed solutions are compared, the preferred one is the one that satisfies more of the guidelines, and satisfies them better. Thus, these guidelines are not absolute but indicate desirable outcomes.
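A minimal way to operationalize such a comparison is sketched below; the guideline names follow the lessons above, but the scoring scale and the scores themselves are assumptions made only for illustration.

# Sketch of comparing two design proposals against bioworld-derived guidelines.
# Scores are assumed for illustration: 0 = not satisfied, 1 = partly, 2 = well.

guidelines = ["use low-energy processes", "incorporate diversity",
              "use life-friendly chemistry", "be resource efficient"]

scores = {
    "proposal_A": {"use low-energy processes": 2, "incorporate diversity": 1,
                   "use life-friendly chemistry": 0, "be resource efficient": 2},
    "proposal_B": {"use low-energy processes": 1, "incorporate diversity": 2,
                   "use life-friendly chemistry": 2, "be resource efficient": 1},
}

totals = {name: sum(proposal[g] for g in guidelines) for name, proposal in scores.items()}
preferred = max(totals, key=totals.get)
print(totals, "-> preferred:", preferred)  # the guidelines indicate, not dictate, the choice

Such a tally is only an aid to judgment: a low score on a single critical guideline may outweigh a higher total, which is why the guidelines indicate desirable outcomes rather than prescribe the choice.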
In a study of biomimicry practices in the Nordic countries, it was found that only a
few companies have combined biologically inspired design and environmentally conscious de-
sign [45]. But there are many examples of companies adopting either biologically inspired design
or environmentally conscious design, so that their amalgamation is a realistic goal. In another
study, designers were found to use several different sustainability frameworks when working
with bio-inspiration, but without an established system of accountability [46].
7.7.2 CIRCULAR DESIGN
Designing a product with circularity in mind entails ensuring that recycled materials are used for production and that, at the end of its life, the product can be reused or recycled.
Circular economy is an approach to promote sustainable development with parallels to
how resources are circulated in the bioworld. The Ellen MacArthur Foundation defines circular
economy as an industrial economy that is restorative by intention [47]. Motivated by lessons
learned from studies of living nonlinear systems, circularity is premised on the use of renewable
energy, minimum consumption of chemicals, and eradication of waste. Circularity aims to op-
timize systems rather than their components. This is done by managing the flows of materials of
two types: (i) biological nutrients that re-enter the biosphere safely and (ii) technical nutrients
designed to circulate without diminishing in quality and without entering the biosphere.
Consequently, circular economy distinguishes between consumption and use of materi-
als. It promotes a functional service model whereby the ownership of a product is retained by the manufacturer, who acts as a service provider rather than as a product seller. The manufac-
turer therefore does not promote one-way consumption but ensures that the product will be
reabsorbed in the economy after the end of its life.
Circularity can be applied to all types of industrial production. An example is the cloth-
ing industry. The current system is regarded as extremely wasteful and polluting from the initial
production of textile fibers, through the production of fabrics and a wearable followed by re-
peated washes during use to the final after-use destiny of the wearable [48]. Typically, an item
of clothing is discarded after the wearer is no longer interested in wearing it, although sometimes
it can be passed on to another person. A cotton wearable may be collected by rag pickers as a raw material for producing paper and industrial wiping rags; there is hope that blended polymer/cotton wearables could be reprocessed after the fibers of the different materials are recovered and separated; wool extracted from woolen wearables can be used for insulation panels for housing; acrylics and nylons can be reprocessed into blankets; but polyester wearables are mostly incinerated [49]. Circular economy in the clothing industry would be greatly facilitated by fiber-to-fiber
recycling.
Cradle-to-cradle is an approach to maximize the positive effect of human activities on
the environment as opposed to eco-efficiency that focuses on reducing damage to the environ-
ment [50, 51]. It is based on three key principles:
• waste equals food,
• use only energy provided currently by the sun, and
• celebrate diversity.
The first principle is inspired by the nutrient cycles seen in the bioworld. Instead of reducing
waste, only that waste should be produced which another process can use as an input. The sec-
ond principle dictates that all energy should come from the sun, i.e., from photovoltaic solar
cells, solar thermal heaters, wind turbines, hydroelectric generators, and biomass incinerators.
The third principle encourages design that respects local cultures and environments and also
recognizes that nonhuman species have the right to thrive in their own ecosystems. A criticism
of the cradle-to-cradle approach is that it does not address trade-offs between energy use and
resource conservation, because even healthy emissions can adversely affect the ecosystem [50].
7.7.3 IMPACT ASSESSMENT
Life-cycle analysis is an approach to assess the eco-efficiency of a design. A comprehensive
inventory is made of materials, energy, and chemicals used to make, distribute, use, and dispose
of the product. The impacts of the materials, energy, and chemicals on the environment are also
cataloged. In order to compare the eco-efficiencies of two different designs, a functional unit
is defined to represent the desired functional performance. As an example, the functional unit
can be used to facilitate the comparison of the eco-efficiencies of different ways of maintaining
a golf course. A functional unit could be defined as the acreage of a certain terrain in which the
height of grass must be maintained, which makes it possible to compare different methods to
maintain grass height—e.g., using lawn mowers or letting a ruminant species such as goats or
sheep graze.
Assessing the environmental impact is a fairly complex task since a design can have envi-
ronmental effects through several mechanisms such as the emission of greenhouse gases leading
to global warming, the emission of chlorofluorocarbons and halons leading to ozone-hole for-
mation, and the acidification of lakes and rivers. When designing products, a simpler and less
precise method is often used—namely, the use of indicators such as CO2-equivalents. The in-
dicators make it possible to compare quite different designs. For example, they can be used to
compare the production of vegetables in heated greenhouses in a cold region with vegetables grown in a warm region and then transported to the same cold region.
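A minimal sketch of such an indicator-based comparison follows; the functional unit mirrors the example above, but every emission factor and quantity is an assumption chosen only to make the calculation concrete, not data from any life-cycle inventory.

# Illustrative comparison of two ways of supplying vegetables to a cold region,
# expressed per functional unit in kg CO2-equivalents. All factors are assumed.

FUNCTIONAL_UNIT = "1 tonne of vegetables delivered to the cold region"

def co2_equivalents(activities):
    # Sum kg CO2-equivalents over (quantity, emission_factor) pairs.
    return sum(quantity * factor for quantity, factor in activities)

# Option 1: grown locally in a heated greenhouse (assumed heating demand and factor).
local_greenhouse = co2_equivalents([(9000, 0.2)])  # 9000 kWh of heat at 0.2 kg CO2e/kWh

# Option 2: grown in a warm region and transported (assumed distance and factors).
warm_region_plus_transport = co2_equivalents([
    (2500, 0.1),  # 2500 km of transport at 0.1 kg CO2e per tonne-km
    (500, 0.2),   # 500 kWh of other energy at 0.2 kg CO2e/kWh
])

print(FUNCTIONAL_UNIT)
print("local greenhouse:", local_greenhouse, "kg CO2e")                   # 1800.0
print("warm region + transport:", warm_region_plus_transport, "kg CO2e")  # 350.0

Because both options are expressed per identical functional unit, the CO2-equivalent totals can be compared directly, even though the underlying activities differ completely.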
The use of life-cycle analyses has been criticized for not including the full potential of
approaches such as biomimicry and cradle-to-cradle [50, 52]. Instead, a life-cycle analysis can so easily become focused on the function of a specific product that its goal is best characterized as the reduction of unsustainability. Formulation of the functional unit can in some cases lead to ignoring ancillary issues whose consideration could have enhanced sustainability. Thus, a life-cycle analysis can lessen the use of energy and materials in a factory, but it will not address the improvement of air quality, which could be very important for public health.
The life-cycle analysis of a product can be supplemented with clear criteria for when a
product can be considered sustainable and when not. This is not an easy task, but attempts are
in progress to define green products as having zero waste, producing zero emissions, and being
environmentally safe.
7.8 GRAFTING "BIOLOGICALLY INSPIRED DESIGN" ONTO "DESIGN FOR ENVIRONMENT"
Design for environment aims at developing products to enhance sustainability without com-
promising functionality, cost, quality, etc. The bioworld presents many approaches that can be
adapted for circular economy, resource efficiency, and ecosystem balance. As an example, microscopic scales on sharkskin-inspired swimsuits indeed reduce drag; likewise, sharkskin-inspired polymer films on aircraft and ships lower energy consumption [37]. But care must be exercised when transferring solution principles from the bioworld to industrial activities [53]. A bioworld phenomenon may appear simple at first glance, but it may actually involve many intricate mechanisms to assure a desirable outcome. Its complexity may be inimical to adoption by designers. Additionally, a
bioinspired solution may not comply with our ethics; for instance, the predator-prey relation-
ship [54] is highly undesirable as a model for controlling the human population. Finally, an
attractive solution principle may simply be impractical for adoption. As an example, a penguin
can increase its speed several times over short distances underwater by releasing microbubbles
of air trapped under its feathers [39], but the application of the same mechanism to reduce drag
on a regular ship appears practically unimplementable.
The grafting of biologically inspired design onto design for environment requires a careful
delineation of the design object. Design for environment is often focused on reducing the overall
environmental impact of a specific product. An automobile engine that consumes less gasoline
than its competitors delivering the same performance and driver satisfaction will comply with
the objectives of design for environment. In other cases, a system involving many products and
processes has to be considered. An example is the introduction of electric vehicles or hydrogen-
powered vehicles that will necessitate the development of a comprehensive new infrastructure.
In the bioworld, any organism relies on being part of a larger system comprising organisms of
the same and different species. Environmental sustainability must therefore be addressed at both
the product level and the system level when a bioinspired solution principle is being considered
for adoption. The mutualistic relationship between plants, rhizobial bacteria, and mycorrhizal
fungi which benefit from an exchange of nutrients and energy [18] illustrates how it can be
insufficient only to consider an isolated object as the design object.
The design of a product or system typically involves the following four phases [55] de-
scribed in detail in Chapter 2:
• definition and clarification of the need for the product or system (Sections 2.4.1–2.4.3),
• conceptualization of the product or system and the production/realization process (Sections 2.4.4–2.4.5),
• preparation of its embodiment to focus the attentions of all stakeholders (Section 2.4.6), and
• creation of the necessary detail for production and realization (Section 2.4.6).
Of these four phases, the conceptualization phase offers the most opportunities for implement-
ing strategies associated with design for environment. These strategies include: reduction of
material diversity, ease of disassembly and repairability for longer useful life, use of recyclable
and recycled components, reduced use of toxic materials and nonrenewable resources, and ease
of disassembly for circularity and recyclability.
An ever-growing compendium of bioinspired solution principles needs to be established
for each of these strategies. This compendium could lead to the identification of new generic
design principles for disruptive innovation. For example, egg shells and sea shells illustrate how
chalk, a soft material, can be microstructured to bear huge static and dynamic loads. Thus, in-
ferior materials can be biomimetically reformulated to deliver superior performance. The com-
pendium would also promote multifunctionality, as exemplified by avian plumage, which provides flight capability, water repellency, and conservation of body heat without a significant increase in weight.
Design for environment brings additional constraints for biologically inspired design,
which may considerably shrink the solution space. However, a clear environmental goal will facilitate a more focused search in the compendium and will stimulate creativity in finding
new solutions. As an example, the nests of most birds are made from waste materials held to-
gether with friction and thus exemplify temporary structures that require very low investment
but fulfill short-term needs for temporary housing.
The grafting of biologically inspired design onto design for environment will bring certain
challenges. The evaluation of a radical solution from the bioworld may be difficult not only due
to lack of data but also because of uncertainty in how it will affect use patterns and impact
associated products. For example: inspired by the way spiders eat their own web every second
day in order to regenerate the proteins [16, 17], a solution could be the local reuse of building
materials. However, this will impact the business system for building materials and the working
procedures of the construction industry. The uncertainty may be especially high when the context
and the expected-use scenario for a product or system are not yet defined.
In summary, well-established theories and tools exist to analyze environmental impact and design to enhance sustainability. Still, design for environment can benefit from biologically inspired design to create novel solutions. For their integration into Biologically Inspired Design for Environment, successful case studies and an ever-growing compendium of solution
principles from the bioworld are needed.
Hopefully, dear reader, you will contribute.
7.9 REFERENCES
[1] J. Richardson, T. Irwin, and C. Sherwin, Design and Sustainability: A Scoping Report for the
Sustainable Design Forum, Design Council, London, UK, 2005. www.designcouncil.org.
uk/red 77
[2] World Commission on Environment and Development, Our Common Future (The Brundt-
land Report). http://www.un-documents.net/wced-ocf.htm 77
[3] United Nations, Sustainable Development Goals. https://sustainabledevelopment.un.org/
?menu=1300 77
[4] J. Rockström and P. Sukhdev, How Food Connects All the SDGs, Stockholm Resilience Cen-
tre, Stockholm, Sweden, 2019. https://www.stockholmresilience.org/research/research-
news/2016-06-14-how-food-connects-all-the-sdgs.html 77
[5] S. Rahman, U. Sumotarto, and H. Pramudito, Influence the condition land subsidence and
groundwater impact of Jakarta coastal area, IOP Conference Series: Earth and Environmental
Science, 106:012006, 2018. DOI: 10.1088/1755-1315/106/1/012006. 79
[6] E. D. Ongley, Control of water pollution from agriculture, Food and Agriculture Organi-
zation of the United Nations, Rome, Italy, 1996. http://www.fao.org/3/w2598e/w2598e00.
htm 79
[7] M. Z. Hauschild, J. Jeswiet, and L. Alting, Design for environment—Do we get the focus
right? CIRP Annals, 53:1–4, 2004. DOI: 10.1016/s0007-8506(07)60631-3. 79
[8] Worldometers, Live Counters of the World Population. https://www.worldometers.info 79
[9] United Nations, Sustainable Development Goal 1—End Poverty in All its Forms Everywhere.
https://sustainabledevelopment.un.org/sdg1 79
[10] World Bank, Poverty—Overview. https://www.worldbank.org/en/topic/poverty/overview
79
[11] T. A. Lenau and T. Hesselberg, Biomimetic self-organization and self-healing, Engi-
neered Biomimicry, A. Lakhtakia and R. J. Martín-Palma, Eds., pages 333–358, Elsevier,
Waltham, MA, 2013. DOI: 10.1016/b978-0-12-415995-2.00013-1 80
[12] L. Addadi, D. Joester, F. Nudelman, and S. Weiner, Mollusk shell formation: A source
of new concepts for understanding biomineralization processes, Chemistry: A European
Journal, 12:980–987, 2006. DOI: 10.1002/chem.200500980. 80
[13] J. F. V. Vincent, O. A. Bogatyreva, N. R. Bogatyrev, A. Bowyer, and A.-K. Pahl,
Biomimetics: Its practice and theory, Journal of the Royal Society Interface, 3:471–482, 2006.
DOI: 10.1098/rsif.2006.0127. 81
[14] J. F. V. Vincent and U. G. K. Wegst, Design and mechanical properties of insect cuticle,
Arthropod Structure and Development, 33:187–199, 2004. DOI: 10.1016/j.asd.2004.05.006.
81
[15] C. L. Craig, Evolution of arthropod silks, Annual Review of Entomology, 42:231–267,
1997. DOI: 10.1146/annurev.ento.42.1.231. 81
[16] D. B. Peakall, Conservation of web proteins in the spider, Araneus diadematus, Journal of
Experimental Zoology, 176:257–264, 1997. DOI: 10.1002/jez.1401760302. 81, 89
[17] B. D. Opell, Economics of spider orb-webs: The benefits of producing adhesive capture
thread and of recycling silk, Functional Ecology, 12:613–624, 1998. DOI: 10.1046/j.1365-
2435.1998.00222.x. 81, 89
[18] R. F. Evert and S. E. Eichhorn, Raven: Biology of Plants, 8th ed., W. H. Freeman, New
York, 2013. DOI: 10.1007/978-1-319-15626-8. 81, 82, 88
[19] D. W. Lee, Nature’s Palette: The Science of Plant Color, University of Chicago Press, Chicago,
IL, 2007. DOI: 10.7208/chicago/9780226471051.001.0001. 82
[20] G. Strout, S. D. Russell, D. P. Pulsifer, S. Erten, A. Lakhtakia, and D. W. Lee,
Silica nanoparticles aid in structural leaf coloration in the Malaysian tropical rainfor-
est understorey herb Mapania caudata, Annals of Botany, 112:1141–1148, 2013. DOI:
10.1093/aob/mct172. 82
[21] S. Kinoshita, Structural Colors in the Realm of Nature, World Scientific, Singapore, 2008.
DOI: 10.1142/6496. 82
[22] N. Dushkina and A. Lakhtakia, Structural colors, Engineered Biomimicry, A. Lakhtakia
and R. J. Martín-Palma, Eds., pages 267–303, Elsevier, Waltham, MA, 2013. DOI:
10.1016/c2011-0-06814-x. 82
[23] D. H. Evans, P. M. Piermarini, and K. P. Choe, The multifunctional fish gill: Dominant
site of gas exchange, osmoregulation, acid-base regulation, and excretion of nitrogenous
waste, Physiological Reviews, 85:97–177, 2005. DOI: 10.1152/physrev.00050.2003. 82
[24] A. Lakhtakia, From bioinspired multifunctionality to mimumes, Bioinspired, Biomimetic
and Nanobiomaterials, 4:168–173, 2015. DOI: 10.1680/jbibn.14.00034. 82
[25] R. Sender, S. Fuchs, and R. Milo, Revised estimates for the number of human and bacteria
cells in the body, PLoS Biology, 14:e1002533, 2016. DOI: 10.1371/journal.pbio.1002533.
82
[26] Y. E. Borre, R. D. Moloney, G. Clarke, T. G. Dinan, and J. F. Cryan, The impact of
microbiota on brain and behavior: Mechanisms and therapeutic potential, Microbial En-
docrinology: The Microbiota-Gut-Brain Axis in Health and Disease, M. Lyte and J. F. Cryan,
Eds., pages 373–403, Springer, New York, 2014. DOI: 10.1007/978-1-4939-0897-4. 82
[27] E. van Nood, A. Vrieze, M. Nieuwdorp, S. Fuentes, E. G. Zoetendal, W. M. de Vos, C. E.
Visser, E. J. Kuijper, J. F. W. M. Bartelsman, J. G. P. Tijssen, P. Speelman, M. G. W. Di-
jkgraaf, and J. J. Keller, Duodenal infusion of donor feces for recurrent Clostridium difficile,
New England Journal of Medicine, 368:407–415, 2013. DOI: 10.1056/NEJMoa1205037.
82
[28] G. R. Jones and J. E. Parker, Pheromones, Encyclopedia of Analytical Science, 2nd ed., P.
J. Worsfold, A. Townshend, and C. F. Poole, Eds., pages 140–149, Elsevier, Amsterdam,
The Netherlands, 2005. 83
[29] P. M. Fallow, B. J. Pitcher, and R. D. Magrath, Alarming features: Birds use specific acous-
tic properties to identify heterospecific alarm calls, Proceedings of the Royal Society of London
B, 280:20122539, 2013. DOI: 10.1098/rspb.2012.2539. 83
[30] I. Dahlin, L. P. Kiær, G. Bergkvist, M. Weih, and V. Ninkovic, Plasticity of barley in
response to plant neighbors in cultivar mixtures, Plant and Soil, 447:537–551, 2020. DOI:
10.1007/s11104-019-04406-1. 83
[31] Symbiosis Center Denmark, Dansk Symbiosecenter: The Potential is 10,000 Jobs. https:
//symbiosecenter.dk 83
[32] Kalundborg Symbiose, Partnership Between Public and Private Companies in Kalundborg.
http://symbiosis.dk 83
[33] S. Wittrup, Pig City: The piggery of the future will have a nursery on the roof, Ingeniøren,
Danish Society of Engineers, Copenhagen, Denmark, 2010. https://ing.dk/artikel/pig-
city-fremtidens-grisestald-far-gartneri-pa-forste-sal-105643 83
[34] J. M. Rayner, Avian flight dynamics, Annual Review of Physiology, 44:109–119, 1982. 84
[35] T. Alerstam, M. Rosén, J. Bäckman, P. G. P. Ericson, and O. Hellgren, Flight speeds
among bird species: Allometric and phylogenetic effects, PLoS Biology, 5:e197, 2007. DOI:
10.1371/journal.pbio.0050197. 84
[36] T. J. Dawson, Kangaroos, Scientific American, 237(2):78–89, 1977. https://www.jstor.org/
stable/24954004 84
[37] P. Ball, Shark skin and other solutions, Nature, 400:507–509, 1999. DOI: 10.1038/22883.
84, 88
[38] T. Sullivan and F. Regan, The characterization, replication and testing of dermal denticles
of Scyliorhinus canicula for physical mechanisms of biofouling prevention, Bioinspiration
and Biomimetics, 6:046001, 2011. DOI: 10.1088/1748-3182/6/4/046001. 84
[39] J. Davenport, R. N. Hughes, M. Shorten, and P. S. Larsen, Drag reduction by air release
promotes fast ascent in jumping emperor penguins—a novel hypothesis, Marine Ecology
Progress Series, 430:171–182, 2011. DOI: 10.3354/meps08868. 84, 88
[40] C. Telenko, Developing green design guidelines: A formal method and case study, Ph.D. Dis-
sertation, University of Texas at Austin, Austin, TX, 2009. https://repositories.lib.utexas.
edu/handle/2152/ETD-UT-2009-12-591 84
[41] J. M. O’Rourke and C. C. Seepersad, Toward a methodology for systematically generating
energy- and materials-efficient concepts using biological analogies, Journal of Mechanical
Design, 137:091101, 2015. DOI: 10.1115/1.4030877. 84
[42] ISO 18458:2015, Biomimetics—Terminology, Concepts and Methodology, International
Standards Organization, Geneva, Switzerland, 2015. https://www.iso.org/standard/
62500.html DOI: 10.3403/30274979. 85
[43] The Biomimicry Institute, Biomimicry DesignLens: Life’s Principles, Missoula, MT. http:
//biomimicry.net/about/biomimicry/biomimicry-designlens/ 85
[44] The Biomimicry Institute, AskNature: Innovation Inspired by Nature, Missoula, MT. https:
//asknature.org/ 85
[45] T. A. Lenau, A. M. Orrú, and L. Linkola, Biomimicry in the Nordic Countries, Nordisk
Ministerråd, Copenhagen, Denmark, 2018. DOI: 10.6027/NA2018-906. 85
[46] T. Mead and S. Jeanrenaud, The elephant in the room: Biomimetics and sustainability?, Bioinspired, Biomimetic and Nanobiomaterials, 6:113–121, 2017. DOI: 10.1680/jbibn.16.00012. 85
[47] Ellen MacArthur Foundation, Towards the Circular Economy, London, UK, 2013.
https://www.ellenmacarthurfoundation.org/assets/downloads/publications/Ellen-
MacArthur-Foundation-Towards-the-Circular-Economy-vol.1.pdf 86
[48] Ellen MacArthur Foundation, A New Textiles Economy: Redesigning Fashion’s Future, Lon-
don, UK, 2017. https://www.ellenmacarthurfoundation.org/publications/a-new-textiles-
economy-redesigning-fashions-future 86
[49] S. Baughan, What Happens to the Clothes that you Dispose of?. https://www.loveyourclothes.
org.uk/blogs/what-happens-clothes-you-dispose 86
[50] A. Bjørn and M. Z. Hauschild, Absolute versus relative environmental sustainability:
What can the cradle-to-cradle and eco-efficiency concepts learn from each other?, Journal
of Industrial Ecology, 17:321–332, 2013. DOI: 10.1111/j.1530-9290.2012.00520.x. 86, 87
[51] W. McDonough and M. Braungart, Cradle to Cradle: Remaking the Way We Make Things,
North Point Press, New York, 2002. 86
[52] I. C. de Pauw, P. Kandachar, and E. Karana, Assessing sustainability in nature-
inspired design, International Journal of Sustainable Engineering, 8:5–13, 2015. DOI:
10.1080/19397038.2014.977373. 87
[53] T. A. Lenau, D. C. A. Pigosso, T. C. McAloone, and A. Lakhtakia, Biologically
inspired design for environment, Proceedings of SPIE, 11374:113740E, 2020. DOI:
10.1117/12.2558498. 88
[54] A. A. Berryman, The origins and evolution of predator-prey theory, Ecology, 73:1530–
1535, 1992. DOI: 10.2307/1940005. 88
[55] G. Pahl, W. Beitz, J. Feldhusen, and K.-H. Grote, Engineering Design: A Systematic Ap-
proach, 3rd ed., Springer, London, UK, 2007. DOI: 10.1007/978-1-84628-319-2. 88
Authors’ Biographies
TORBEN A. LENAU
Torben A. Lenau is an Associate Professor in design method-
ology, material selection, and biomimetics at the Department
of Mechanical Engineering, Danmarks Tekniske Universitet.
His research interests are creative methods in product design
with focus on materials, manufacturing, and biomimetics (in-
spiration from nature). He has conducted a number of indus-
trial case studies on how to integrate biomimetics in product
development and has developed the biocards used to commu-
nicate design principles found in nature. Furthermore, he stud-
ies naturally occurring photonic structures in order to develop
new surface coatings based on structural colors.
AKHLESH LAKHTAKIA
Akhlesh Lakhtakia is Evan Pugh University Professor and the
Charles Godfrey Binder (Endowed) Professor of Engineering
Science and Mechanics at The Pennsylvania State University.
He received his B.Tech. (1979) and D.Sc. (2006) degrees in
Electronics Engineering from the Institute of Technology, Ba-
naras Hindu University, and his M.S. (1981) and Ph.D. (1983)
degrees in Electrical Engineering from the University of Utah.
He was the Editor-in-Chief of the Journal of Nanophotonics
from its inception in 2007 until 2013. He has been elected a Fellow of the American Association for the Advancement of Science, American Physical Society, Institute of Physics (UK),
Optical Society of America, SPIE–The International Society for Optics and Photonics, Insti-
tute of Electrical and Electronics Engineers, Royal Society of Chemistry, and Royal Society of
Arts. His current research interests include: electromagnetic fields in complex mediums, sculp-
tured thin films, mimumes, surface multiplasmonics and electromagnetic surface waves, forensic
science, and engineered biomimicry.
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=6812753.pdf&bkn=6812752&pdfType=book
|
Style and Ethics of
Communication in
Science and Engineering
Copyright © 2009 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations in
printed reviews, without the prior permission of the publisher.
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey and Jeffrey W. Holmes
www.morganclaypool.com
ISBN: 9781598292985 paperback
ISBN: 9781598292992 ebook
DOI: 10.2200/S00128ED1V01Y200809ENG009
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING #9
Lecture #9
Series ISSN: 1939-5221 print
Series ISSN: 1939-523X electronic
Style and Ethics of
Communication in
Science and Engineering
Jay D. Humphrey
Texas A&M University
Jeffrey W. Holmes
University of Virginia
SYNTHESIS LECTURES ON ENGINEERING #9
ABSTRACT
Scientists and engineers seek to discover and disseminate knowledge so that it can be used to im-
prove the human condition. Style and Ethics of Communication in Science and Engineering serves as a
valuable aid in this pursuit—it can be used as a textbook for undergraduate or graduate courses on
technical communication and ethics, a reference book for senior design courses, or a handbook for
young investigators and beginning faculty members. In addition to presenting methods for writing
clearly and concisely and improving oral presentations, this compact book provides practical guide-
lines for preparing theses, dissertations, journal papers for publication, and proposals for research
funding. Issues of authorship, peer review, plagiarism, recordkeeping, and copyright are addressed
in detail, and case studies of research misconduct are presented to highlight the need for proactive
attention to scientific integrity. Ample exercises cause the reader to stop and think. Style and Ethics
of Communication in Science and Engineering thus motivates the reader to develop an effective, indi-
vidual style of communication and a personal commitment to integrity, each of which is essential
to success in the workplace.
KEYWORDS
journal publication, theses, grant writing, peer review, oral presentations, authorship, record
keeping, research misconduct
Preface
How to write well.
How to publish your results.
How to secure funding.
How to speak well.
How to ensure integrity.
This book was written to help address these important aspects of beginning a career in science and
engineering. In essence, scientists and engineers seek to discover and disseminate knowledge so that
it can be used to improve the human condition. Effective communication thus plays an essential
role in promoting technical advances. Simply put, communication is the ability to express oneself
in a way that is understood readily and clearly. There will be no impact of scientific or engineering
discoveries without effective written and oral communication.
In sections on writing well, we focus primarily on style — that is, rules of usage as well as
principles of composition and form — and draw heavily from Strunk and White (1979), Berry
(1971), and Brogan (1973). Indeed, many illustrative phrases and sentences were inspired by or
modified from these works. We thus note here our indebtedness to these outstanding works and the
examples therein. We encourage the reader to consult these excellent resources as well.
Although written communication, particularly the archival journal article, is most important
to the widespread and long-term advancement of science and engineering, oral communication
plays a vital role. From didactic lectures by an instructor to entertaining presentations for a lay
audience, oral communication has the potential to capture the imagination and promote the ad-
vancement of science and its applications. Similar to theater, oral communication requires one to
“tell a story” well, that is, to package information in a way that is assimilated quickly and retained.
Technical advances in audiovisual capability can aid tremendously in stimulating and capturing an
audience and thus should be integrated thoughtfully within the oral presentation.
It takes a lifetime to establish a good reputation, but only a moment to destroy it. Integrity
in the workplace is just as important as understanding well the basic principles of science and en-
gineering or the basic operation of a scientific instrument. Yet, even within the narrow context of
technical communication, it is impossible to articulate a prescriptive set of rules or procedures for
acting ethically. Despite the increasing prevalence of courses in research ethics, surveys suggest that
most students learn the ethics of research and communication primarily “on the job,” principally
from their research mentor. Good training in the ethics of research thus begins with selecting a
mentor who values such training and seeks to develop integrity through regular discussion and
introspection. One goal herein is to stimulate this process of interaction around major issues most
likely to face scientists and engineers in documenting and reporting their research.
The overall goal of this short book is not to be a standalone source on matters of style (which
is left to professors of English or communication) or ethics (which is left for professors of philoso-
phy or law). Rather, it is meant to motivate the reader to develop an effective, individual style of
communicating and a personal commitment to integrity simply because it matters. Hence, this book
is written based on personal experiences of the authors in research and training in the biomedical
sciences and engineering, including the development and delivery of related graduate courses at
Texas A&M University, Columbia University, and University of Virginia. Nevertheless, one of the
best ways to learn to write well is to read extensively the works of good writers; one of the best ways
to learn to speak well is to listen carefully to good speakers; one of the best ways to ensure integrity
in the workplace is to learn from reputable professionals. The reader is thus strongly encouraged in
this regard and, indeed, to keep a notebook wherein personal experiences and helpful observations
can be recorded and reviewed periodically. Best wishes.
Acknowledgments
We thank Jodi Eyhorn, from the Department of Communication of Texas A&M University, for
expert assistance in correcting early portions of the manuscript. We also thank Joel Claypool, of
Morgan & Claypool, for patience and support during the writing process.
Portions of this work began via a Special Opportunity Award from the Whitaker Founda-
tion. Finally, J.D.H. thanks Carolyn S. and Tommie E. Lohman for their continued support of
many different educational initiatives at Texas A&M University, including composition of portions
of this work.
Contents
1. Motivation .........................................................................................................1
2. Writing Well ......................................................................................................5
2.1 Overall Approach ............................................................................................... 5
2.1.1 Outline ................................................................................................... 5
2.1.2 Write Freely ............................................................................................ 7
2.1.3 Edit Critically ......................................................................................... 8
2.1.4 Read Out Loud ...................................................................................... 8
2.1.5 Have a Colleague Proofread ................................................................... 9
2.2 Removing Redundancies and Unnecessary Words ........................................... 10
2.3 Active Voice, First Person, and Different Tenses .............................................. 15
2.3.1 Voice ..................................................................................................... 15
2.3.2 Person ................................................................................................... 19
2.3.3 Tense .................................................................................................... 21
2.4 Infinitives and Modifiers .................................................................................. 22
2.4.1 Infinitive ............................................................................................... 22
2.4.2 Modifiers .............................................................................................. 23
2.5 Additional Issues of Word Choice .................................................................... 26
2.6 Punctuation, Abbreviations, and Foreign Languages ....................................... 30
2.6.1 Exploit Methods of Punctuation .......................................................... 30
2.6.2 Abbreviations ........................................................................................ 32
2.6.3 Foreign Languages ................................................................................ 33
2.7 Footnotes, Quotations, and Proper Citation ..................................................... 35
2.7.1 Footnotes .............................................................................................. 35
2.7.2 Quotations ............................................................................................ 35
2.7.3 Proper Citation ..................................................................................... 36
2.8 Vocabulary ........................................................................................................ 36
2.9 Closure ............................................................................................................. 40
3. Scientific Publications ...................................................................................... 41
3.1 Basic Content ................................................................................................... 41
3.1.1 Cover Page and Letter to Editor .......................................................... 42
3.1.2 Results .................................................................................................. 44
3.1.3 Methods (or Materials and Methods) .................................................. 45
3.1.4 Discussion and Conclusion ................................................................... 47
3.1.5 Introduction .......................................................................................... 48
3.1.6 Abstract ................................................................................................ 49
3.1.7 Acknowledgments ................................................................................ 49
3.1.8 Appendices ........................................................................................... 50
3.1.9 References ............................................................................................. 50
3.1.10 Figures and Tables ................................................................................ 52
3.2 Publishing an Archival Journal Paper ............................................................... 53
3.2.1 Origin ................................................................................................... 53
3.2.2 Composition and Authorship ............................................................... 54
3.2.3 Submission and Review ........................................................................ 54
3.2.4 Revision ................................................................................................ 56
3.2.5 Typesetting, Galley Proofs, and Proofreader Marks ............................. 57
3.2.6 Copyright, Permissions, and Page Charges .......................................... 58
3.3 Thesis or Dissertation ....................................................................................... 59
3.4 Technical Reports ............................................................................................. 61
4. Proposals and Grant Applications ..................................................................... 63
4.1 Introduction ...................................................................................................... 63
4.2 Types of Grants ................................................................................................ 63
4.3 The Review Process .......................................................................................... 64
4.4 The NIH R01 Grant ........................................................................................ 67
4.4.1 Specific Aims ........................................................................................ 68
4.4.2 Background and Significance ............................................................... 68
4.4.3 Preliminary Results ............................................................................... 69
4.4.4 Research Plan ....................................................................................... 70
4.4.5 References ............................................................................................. 72
4.5 The Preproposal ............................................................................................... 73
4.6 Summary .......................................................................................................... 74
Appendix .................................................................................................................... 76
5. Oral Communication ....................................................................................... 77
5.1 Effective Styles ................................................................................................. 77
5.2 The 15-Minute Presentation ............................................................................ 82
5.3 Summary .......................................................................................................... 84
6. Authorship ....................................................................................................... 85
6.1 The Slutsky Case .............................................................................................. 85
6.2 Basic Conventions ............................................................................................ 86
6.2.1 Order of Authors .................................................................................. 86
6.2.2 Submission Agreement ......................................................................... 87
6.2.3 Publication Impact ............................................................................... 87
6.3 Common Problems ........................................................................................... 88
6.3.1 Expectations ......................................................................................... 88
6.3.2 Gift, Guest, and Ghost Authorship ...................................................... 89
6.3.3 Financial Support ................................................................................. 91
6.3.4 Quid Pro Quo ...................................................................................... 91
6.3.5 Students and Technicians ..................................................................... 92
6.4 Current Standards and Emerging Ideas ........................................................... 93
6.4.1 International Committee of Medical Journal
Editors Standards ................................................................................. 93
6.4.2 Author Notification .............................................................................. 94
6.4.3 Specifying Contributions ...................................................................... 95
6.4.4 Quantifying Contributions ................................................................... 96
6.5 Our Approach .................................................................................................. 96
6.5.1 Authorship Criteria .............................................................................. 97
6.5.2 Predraft Group Meeting ....................................................................... 97
6.5.3 Final Review and Approval .................................................................. 97
6.5.4 Default Position for Abstracts .............................................................. 98
7. Recordkeeping ................................................................................................. 99
7.1 The Slutsky Case Revisited .............................................................................. 99
7.2 Why Keep Records? ....................................................................................... 102
7.2.1 Medical Records ................................................................................. 103
7.2.2 Industry Research Records ................................................................. 104
7.2.3 Academic Research Records ............................................................... 104
7.3 Electronic Data ............................................................................................... 105
7.3.1 Date-Stamps, Time-Stamps, and Backup Systems ............................ 106
7.3.2 Images ................................................................................................ 106
7.3.3 Software Development ....................................................................... 106
7.4 Fraud: Fabrication and Falsification ............................................................... 107
7.4.1 Retaining or Discarding Data ............................................................. 108
7.4.2 Image Manipulation ........................................................................... 109
7.4.3 Statistical and Image Forensics ........................................................... 110
8. Ownership of Ideas, Data, and Publications ..................................................... 111
8.1 Data and Resource Sharing ............................................................................ 112
8.1.1 Research Data ..................................................................................... 112
8.1.2 Model Organisms ............................................................................... 113
8.1.3 Other Research Products .................................................................... 113
8.2 Copyright ....................................................................................................... 114
8.2.1 Online Publishing .............................................................................. 115
8.2.2 Public Access to NIH-Funded Journal Articles ................................. 115
8.3 Patents ............................................................................................................ 118
8.3.1 Patents and Publicly Funded Research ............................................... 119
8.3.2 Patents and Publication ...................................................................... 120
8.4 Plagiarism ....................................................................................................... 121
8.4.1 Attribution Within a Research Group ............................................... 122
8.4.2 Citation .............................................................................................. 123
8.5 Peer Review .................................................................................................... 124
8.5.1 Archival Journal Articles .................................................................... 124
8.5.2 Grants ................................................................................................. 126
References .............................................................................................................. 129
Author Biography .................................................................................................... 131
Index ...................................................................................................................... 133
C H A P T E R 1
Motivation
The first part of this book is about communicating well, which is just as important to success in the
workplace in science and engineering as it is in professions such as business, law, politics, and theol-
ogy. Although there are useful guidelines for communicating well, there are no unique formulas. In-
deed, individual differences can bring a freshness and vitality to a field; individual personalities can
generate excitement and interest. Each person should thus develop a style that is most effective for him or
her. The second part of this book addresses the need for professional responsibility, that is, integrity
in the workplace. It has been correctly said that it takes a lifetime to establish a good reputation,
but only a moment to destroy it. There is, therefore, a need for consistent and concerted atten-
tion to ethical conduct and the appropriate communication of research findings. This, too, requires
thoughtful, personal commitment — it is more than simply trying to follow the rules, for rules may
change, it is doing the right things for the right reasons. Consequently, this book is different from
most textbooks in science and engineering. We seek to cause one to stop, contemplate, and adopt a
personal course of action. In some sections, therefore, the style is more like a workbook. Indeed, as
a point of departure, let us consider the following.
Exercise 1.1
List five of the most important inventions of all time.
1.
2.
3.
4.
5.
Exercise 1.2
List five of the most important scientists, mathematicians, or natural philosophers of all time.
1.
2.
3.
4.
5.
If you are like many of our previous students, you listed among your selections of the most impor-
tant inventions of all time the airplane, automobile, computer, electric motor, refrigeration, steam
engine, telephone, and so forth. Each of these inventions is truly great, and so too many others, but
did you consider the invention of the printing press with movable type? If not, you are not alone. Yet,
take a few minutes and imagine what the world would be like without books or scientific periodicals.
Indeed, think about how the development of science, medicine, and engineering may have differed
over the past five centuries had the printing press not been invented. For this reason, Time-Life
magazine selected the printing press as the most important invention of the second millennium.
The rapid growth of printing with movable type reveals its overall importance. Invention of
movable type is generally attributed to Johann Gutenberg (ca. 1397–1468), and the first book so
printed is the famous Gutenberg Bible, which was completed in 1454/1455 at Mainz. By 1480, 111
towns throughout Europe boasted printing presses, and by 1500, this number grew to more than
238 (Boorstin, 1983, p. 270). In addition to the printing of the Bible, which had a significant influ-
ence on the development of science and Western culture (Dillenberger, 1961; van Doren, 1991),
these presses allowed the printing and widespread distribution of classics by Aristotle, Cicero,
Euclid, Plutarch, Ptolemy, and many others. It is thus thought that Gutenberg’s invention played
a singularly important role in the European Renaissance.
Recalling Exercise 1.2, many scientists, mathematicians, and philosophers deserve recogni-
tion as great. Among them, you may have listed Socrates, Plato, Aristotle, Archimedes, Copernicus,
Galileo, Newton, Euler, Lavoisier, Gauss, Darwin, Maxwell, Planck, Einstein, or Pauling. How is
it that we know about these great investigators? How is it that we know what they accomplished?
Consider Sir Isaac Newton (1642–1727), for example, who is universally listed as one of the greatest
natural philosophers of all time. Many know of Newton based on comments in courses on physics
related to his law of gravitation, his laws of motion, or perhaps his experiment with a glass prism
that revealed a spectrum of colors in sunlight. Fewer people know about Newton through in-depth
study, for example, by reading books such as The Life of Isaac Newton by Westfall (1993). Still fewer
yet know of Newton because they have read his great works, his Principia of 1687 or his Opticks of
1704. Regardless of the particular path, we all know of Newton primarily through the written word,
not oral tradition and certainly not first-hand interaction.
When reading about Newton, it is interesting to learn that he abhorred criticism of any kind
and, in particular, interpersonal conflicts. Indeed, it appears that he was so concerned about criti-
cism, especially from R. Hooke (1635–1703), then secretary of the Royal Society of London, that
for many years he had little interest in publishing his greatest ideas. Apparently, the Principia was
published only because of the persistent encouragement and personal financial sacrifice by Edmund
Halley (1656–1742). This is remarkable! Similarly, it seems that Newton withheld publication of
his Opticks until just after the death of Hooke. What if Newton had died first? Can we imagine
how the development of science may have differed had Newton not revealed his brilliant thoughts
through these two books?
Likewise, it is interesting to contemplate the development of Western society, which de-
pended so strongly on Greek thought, had it not been for Plato (ca. 427–347 bc). It seems that
Plato’s mentor, the great Socrates, was content to lecture or discuss rather than to write or dictate.
Although it is not clear how much of Plato’s writings truly reflect Socrates, the importance of works
such as The Republic is without question. The written word and its widespread distribution has im-
pacted the world like few other things — it is fundamental to communication among scientists and
engineers as well as the general public.
Communication is defined as follows: To make known; impart. To transmit; have an
interchange, as of ideas. To express oneself in such a way that one is readily and clearly
understood.
American Heritage Dictionary
Whether to have a long-lasting impact on human history or simply to contribute to success in the
workplace, communicating well is a vitally important skill for the scientist or engineer. Indeed, not
only must one communicate well with colleagues or a technical boss, there is often a need to com-
municate with diverse scientists, engineers, clinicians, venture capitalists, or the general public. For
example, the National Institutes of Health (NIH) is currently promoting the importance of “team
science” in biomedical research, which depends strongly on effective communication among indi-
viduals having diverse backgrounds, and many universities in the United States are promoting the
importance of translational research, which requires interactions among clinicians, scientists, engi-
neers, and those in business. Hence, we cannot overemphasize the importance of effective written
and oral communication in research and development.
Because our focus is on technical communication, note that March 6, 1665, marks a begin-
ning of the scientific periodical, based on the proceedings of the Royal Society of London entitled
Philosophical Transactions. In the preface to the first issue, Henry Oldenburg (ca. 1617–1677) wrote
(see Boorstin, 1983, p. 393):
Whereas there is nothing more necessary for promoting the improvement of philo-
sophical Matters, than the communicating of such, . . . ; It is therefore thought fit to
employ the Press, as the most proper way to gratifie those, whose engagement in such
Studies, and delight in the advancement of Learning and profitable Discoveries, doth
entitle them to the knowledge of what this Kingdom, or other parts of the World, so,
from time to time afford. . . . All for the Glory of God, the Honor and Advantage of
these Kingdoms, and the Universal Good of Mankind.
A careful editor would likely revise this Preface in the interest of conciseness and clarity, yet the
message would remain apparent and the motivation to improve our communication skills certain.
Indeed, from its beginning, the Royal Society of London sought clear communication in both writ-
ten proceedings and oral presentations, not an “Artifice of Words.” We are well advised to pursue
the same today.
In conclusion, recall from the Preface that a good way to learn to write well is to read widely.
The student having an interest in engineering, mathematics, medicine, philosophy, or science can
learn much from the many books on the historical development of these fields. See Shamos (1959),
for example, who provides brief background information on great physicists from Galileo to Ein-
stein and includes excerpts from their original publications. Lightman (2005) provides a similar
resource for scientists of the 20th century, and Clendening (1960) reprints portions of the early
great publications in medicine. Other books of interest include Bell (1986), Mason (1962), Motz
and Weaver (1989), Tarnas (1991), and van Doren (1991). It is interesting to conclude, consistent
with aforementioned comments by Boorstin (1983), that Shamos (1959) observed: “The exchange
of knowledge, facilitated by the publication of scientific journals, became — and remains — one of
the most significant factors in the growth of physical science.”
Exercise 1.3
Write and submit a three-page (double-spaced, 1-inch margins, 12-point font) essay on the role of printed books and scientific periodicals during the Age of Enlightenment.
Exercise 1.4
Write and submit a three-page (double-spaced, 1-inch margins, 12-point font) essay on the development of the Royal Society of London and its role in the advancement of science.
Exercise 1.5
Write and submit a three-page (double-spaced, 1-inch margins, 12-point font) report on the origins of the university and how it differed from the scientific academies of the 17th and 18th centuries.
• • • •
C H A P T E R 2
Writing Well
Vigorous writing is concise. A sentence should contain no unnecessary words, a para-
graph no unnecessary sentences, for the same reason that a drawing should have no
unnecessary lines and a machine no unnecessary parts. This requires not that the writer
make all his sentences short, or that he avoid all detail and treat his subject only in out-
line, but that every word tell.
W. Strunk Jr. and E.B. White, (1979, p. xiv)
2.1 OVERALL APPROACH
Two of the most difficult aspects of writing are “getting started” and “finishing well.” In other words,
once we truly get started, it is usually easy to continue our line of thinking and to produce a first
draft. Revising and polishing the first draft often takes longer than the initial writing, yet this is
time well spent. Consider, therefore, some general guidelines for writing well, including a simple
five-step recipe for completing a technical document:
1. Outline in detail.
2. Write freely.
3. Edit critically.
4. Read out loud.
5. Have a colleague proofread.
Although these steps may seem obvious, even trivial, they serve as important reminders and aid
greatly in the composition of each new work no matter the level of one’s experience.
2.1.1 Outline
Most writers agree that it is useful to begin with a detailed outline. Such an outline should contain
the major headings that guide the flow of the work, but perhaps more importantly, it should also
contain subheadings with bullets that highlight and order the major points within each section.1
Each document is different, thus we should not feel compelled to force our presentation to fit within
the confines of a particular outline. Nonetheless, most technical works adhere to the following basic
outlines:
The technical proposal
• Project summary
• Specific aims
• Background and significance
• Preliminary results
• Research plan
• References

The technical paper or report
• Abstract
• Introduction
• Methods
• Results
• Discussion
• References

The M.S. thesis or Ph.D. dissertation2
• Abstract
• Introduction
• Background
• Methods
• Results
• Discussion
• Conclusions and recommendations
• References
• Appendices

1 Some encourage full sentences rather than bullets to document the main ideas in each subsection or paragraph, sentences that may later be used directly in the document. This, however, is a matter of personal style.
2 Many European dissertations are very different. They consist of a sequence of individual chapters, each similar to a technical paper, all of which are tied together via short introductory and concluding chapters.
Because of their importance, we discuss these different forms of technical communication in detail
in Chapters 3 and 4. Here, we simply emphasize that a basic outline is the first step toward success-
ful writing; it organizes the flow of the presentation and reminds us to address particularly impor-
tant issues. Additional bulleting within each subsection further directs the writing. Indeed, with the
use of word processors, one may easily use the final outline as a beginning document.
In summary, as in any activity, we increase our chances of “reaching the destination” when we
have a map or detailed instructions to show the way. Note, therefore, that an outline will tend to be
most useful and focused when it is constructed against the background of two questions (Gibaldi,
1995):
What is the overall goal that you wish to achieve with the document?
Who is the intended audience?
Toward this end, it is useful to critique the final outline with regard to both consistency and con-
ciseness. For example, do points made in the introduction set up well the key points made in the
discussion? Moreover, we tend to allow space in our outline for all of the information that we have
collected during our research; we should, however, delete information that unnecessarily duplicates
other documents or simply is irrelevant or unnecessary. Once done, it is then time to begin the
actual writing.
2.1.2 Write Freely
One of the biggest impediments to writing efficiently and effectively is untimely self-criticism.
How many of us have labored over that first sentence or first paragraph, rewriting and editing to the
point of fatigue or frustration? Such editing is essential, but it is productive only if addressed at the
right time and in the right way. By writing freely, we mean the unencumbered recording of a logical
thought process. Indeed, it is often useful to disable the spell- and grammar-checking capabilities of
word processors during the initial writing, for they contribute to the distractions of worrying about
the initial spelling of words, ordering of phrases, and even punctuation. These and similar issues are
addressed easily once we complete the initial draft. Indeed, we likewise should not initially worry
about emphasizing active voice, ensuring sufficient variety in our word choice, focusing on concise-
ness, and so forth. Rather, at this early stage of composition, the most important thing is to get the
major ideas onto paper (or the screen) and organized roughly in the right order.
2.1.3 Edit Critically
Once the first draft is finished, it is usually best (if time permits) to put it aside for at least a few
days before beginning to edit critically. The reason for this is that we often see “what should be
there” rather than “what is there” when we proofread our documents. Most of the remainder of this
chapter addresses specific aspects of editing critically, which typically includes adding, deleting, and
rearranging text.
The fundamental components of any technical document are sentences and paragraphs. A
sentence is a grammatical unit typically consisting of a subject and a predicate (which tells something
about the subject). A simple example is — I am. R. Descartes (1596–1650) expanded this to read —
I think, therefore I am. Clearly, important sentences need not be complex. A paragraph is a gram-
matical unit typically consisting of multiple sentences that together express a complete thought.
Many suggest that the lead sentence of each paragraph should introduce the main idea of that para-
graph and the final sentence of each paragraph should summarize the main thought. This simple
guideline helps to minimize unnecessary generality, that is, it helps to keep the writer focused.
A stepwise approach to editing critically exploits these two fundamental units of compo-
sition. For example, many suggest that the first step should consist of reading the first and last
paragraphs of the document to ensure a consistent introduction and conclusion. The second step
should consist of reading the first sentence of each successive paragraph to ensure that the work
flows logically. Indeed, some go further to suggest that one should be able to glean the salient points
of a document by reading only the first sentence of each paragraph. Although we do not wish to
suggest such a dogmatic approach, casual guidance can certainly come from such an exercise. The
third step of critical editing is a careful evaluation sentence by sentence. In other words, while read-
ing each sentence within context, we should ask if it is necessary, if it is consistent in tense, and if
it is as concise and clear as possible. This brings us to the fourth and last step of critical editing, an
evaluation word by word. We should ask, for example, if we have avoided the use of jargon as well
as redundant or unnecessary words and if the intended meaning of each word actually reflects its
definition. Word choice is critical. From a pragmatic perspective, we can simultaneously evaluate
sentence by sentence and word by word.
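When the draft lives in a plain-text file, the second of these steps can even be approximated with a short script. The following Python sketch is offered only as an illustration, not as part of any standard editing tool; the file name draft.txt and the convention that paragraphs are separated by blank lines are assumptions the reader should adapt.

# Illustrative sketch only: print the first sentence of each paragraph so that
# the writer can judge whether the salient points of the document flow logically.
# Assumes paragraphs are separated by blank lines; abbreviations such as "e.g."
# will occasionally cause a sentence to be cut short.
import re

def first_sentences(path):
    """Yield the first sentence of each blank-line-separated paragraph."""
    with open(path, encoding="utf-8") as draft:
        paragraphs = re.split(r"\n\s*\n", draft.read())
    for paragraph in paragraphs:
        text = " ".join(paragraph.split())
        if not text:
            continue
        match = re.match(r".+?[.!?](?:\s|$)", text)
        yield match.group(0).strip() if match else text

if __name__ == "__main__":
    for sentence in first_sentences("draft.txt"):  # hypothetical file name
        print(sentence)

Such a listing is a prompt for judgment, not a substitute for the careful sentence-by-sentence and word-by-word reading described above.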
As noted previously, the importance of critical editing cannot be overemphasized, hence we
return to this issue in detail in Sections 2.2 to 2.8. Here, however, let us finish our discussion of an
overall approach to writing well. After we have outlined our work, written freely, and edited care-
fully, our next step should be to read the document out loud.
2.1.4 Read Out Loud
Although this step may seem trivial or perhaps uncomfortable, it is amazing how sensitive the ear
is to effective writing — different tenses, logical sequencing, unintentional rhymes, the overuse of
certain words, and so forth. We strongly recommend, therefore, that one read the document out
loud before going to the final step, asking a colleague to provide constructive criticism.
2.1.5 Have a Colleague Proofread
Technical advances in science and engineering have been spectacular, and continued promises of
important discoveries make these professions intellectually attractive. Because of the trend toward
multidisciplinary teams, one of the most enjoyable aspects of these professions is the opportunity
to work with colleagues from many allied fields. Consequently, there is not only a need to write
concisely, but also to write clearly. Although it is common to have colleagues from these allied fields
coauthor many of our works, it is essential to have others proofread our work. That is, even though
we may know best what needs to be said, the definition of an effective paper, proposal, report, or
book is one that is understood and valued by others — this is the goal of effective communication.
Classmates and colleagues tend to be busy, thus we sometimes hesitate to ask them to proof-
read our work. Yet, they too would appreciate having someone provide feedback on their work and
consequently will many times agree to do so for you. Consider establishing reciprocal agreements,
whereby you exchange documents to be proofread. This will not only help the author by providing
specific feedback; chances are it will also help the reader, both directly and indirectly. One not only
learns by reading; oftentimes, going through a document carefully and looking for ways to improve
its clarity and conciseness teaches us much more. This is similar to the adage, “the best way to learn
something is to teach it to others.”
If you ask someone to proofread your work, make sure to tell them that you want them to
be “brutally honest” rather than overly concerned about being critical. Moreover, once you receive
the feedback, be careful to avoid the two most common responses: either to ignore the comments
because you “know” that you were correct in the first place or to incorporate the suggested changes
without questioning. These two responses are equally inappropriate. If a colleague questions the way
something is stated, particularly if based on deep knowledge of the technical area, this at least sug-
gests that the text could be written more clearly. In other words, if they do not understand what you
are trying to say, chances are others will likewise not understand. Consider revising the text along
the lines they suggest or in another way; the important thing is to give the suggested revision careful
consideration. Conversely, incorporating suggested changes without questioning is dangerous. The
primary goal is to communicate most effectively that which you are trying to say. If your col-
league’s suggestion does that, great; if not, work to improve clarity and conciseness and perhaps have
your colleague read it again. Often, it is helpful to ask them what was confusing or what they thought
you meant to say. Sometimes an explanation reveals how best to say it in the written word.
Finally, remember that it is good to keep marked manuscripts to evaluate them for possible
consistent errors or patterns. In this way, we can become proactive in avoiding problems, whether
it is an overuse of passive voice or an inappropriate use of modifiers. Being conscious of potential
errors is the first step to avoiding them.
Exercise 2.1
Recover from your files at least three documents you have written and that were evaluated/graded on writing style. Compile a list of errors that occurred repeatedly and write brief examples of how to correct these problems.
Exercise 2.2
Read a journal paper in your area of expertise and record at least 10 sentences that could be improved for conciseness or clarity. Next, reread the paper out loud and record another five sentences that you could improve; note the types of concerns that are identified more easily when heard. Finally, suggest possible improvements for each of the 15 sentences.
2.2 REMOVING REDUNDANCIES AND UNNECESSARY WORDS
Now that we have a feel for an overall approach to writing well, let us begin to address specific
aspects of “critical editing.” Recall that effective technical writing is first and foremost clear and con-
cise, which for obvious reasons is better written “Recall that effective technical writing is clear and
concise.” One way to ensure such characteristics in our writing is to remove redundancies and unneces-
sary words, sentence by sentence. Let us consider a few specific examples below (note: the original
version is on the left and the corrected version is on the right, hence it is best to cover the right side
first and consider how you might improve each example before looking at the suggested change):
The cells were cultured for a period of three weeks. → The cells were cultured for three weeks.
The temperature of the chamber remained between 35 and 39°C. → The chamber remained between 35 and 39°C.
The associated mechanisms are not known at this time. → The associated mechanisms are not known. (or, . . . remain unknown.)
The experiments were performed over a period of 10 hours. → The experiments were performed for 10 hours.
The new transducer is much smaller in size, which simplifies the design. → The new transducer is smaller, which simplifies the design.
The temperature increased at a rate of 3°/min. → The temperature increased at 3°/min.
The signal is lost below a threshold level of 10 Hz. → The signal is lost below a threshold of 10 Hz.
This thesis reports work done during the period from January 1998 to December 2000. → This thesis reports work accomplished from January 1998 to December 2000.
The algorithm searches outward from the center location. → The algorithm searches outward from the center.
The A/D converter allows a maximum of eight input signals. → The A/D converter allows eight input signals.
The range of the output signal was from a minimum of 2 to a maximum of 5 volts. → The output signal ranged from 2 to 5 volts.
The results of our experiments support the established theory. → Our experiments support the established theory.
Turn the potentiometer in the clockwise direction to increase the gain. → Turn the potentiometer clockwise to increase the gain.
Use the lenses that are convex in shape. → Use lenses that are convex.
The problem should first be formulated and then solved. → The problem should be formulated, then solved.
The amount of noise will be excessive if the signal is not filtered. → The noise will be excessive if the signal is not filtered.
There is no known analytical solution to this equation at this time. → No analytical solution exists for this equation.
The biopsy should be redesigned in the future to minimize the amount of tissue needed. → The biopsy should be redesigned to minimize the tissue needed.
The reason for this difference can be attributed to. . . → This difference can be attributed to. . .
Remember to remove the specimen during the calibration procedure. → Remember to remove the specimen during calibration (or, when calibrating).
There is a growing body of evidence that the hypothesis is indeed true. → There is growing evidence that the hypothesis is true.
After one test, there should be a sufficient quantity of culture media for a second test. → After one test, there should be sufficient culture media for a second test.
In the first example, the use of the word weeks implies a period or duration, which is therefore
not needed. In the second example, the use of the unit °C specifies that the numerical value refers
to a temperature, which thereby becomes redundant. Similarly, in the third example, we see that if
something is not known, it is implied that it is not known at this time, which is thereby unneces-
sary. In hindsight, the other examples are likewise clear. Indeed, because these specific examples
highlight a redundancy or unnecessary words, they may seem so obvious that we would be surprised
if we ever wrote such sentences. Upon close examination of our previous works, however, we often
find similar or even more flagrant examples. It is for this reason that we must be conditioned to
look for redundancies and unnecessary words, which is often best learned via examples; see Brogan
(1973) for additional examples.
Exercise 2.3
Find a technical research paper that you have written and scan it specifically for examples of redundancies or unnecessary words. Record five examples below with possible revisions.
If we read a number of published technical papers for style, we quickly realize that we could make
many commonly used phrases more concise or even omit them without a loss of clarity. For example,
how many times have we read the phrase “The purpose of this paper is to present. . . ,” which we
could write more concisely as “This paper presents. . . .” Consider the following phrases (left side)
that occur frequently but can often be rendered better or omitted (right side) as follows (cf. Brogan,
1973; Valiela, 2001):
is used to develop → develops
is dependent on → depends on
We propose to use the combination of → We propose to combine
results in the simplification of → simplifies
It is interesting to note that → Note that (or, omit)
due to the fact that → because
in order to → to
in spite of the fact that → despite
as a result of → Omit
appears to be → seems
experienced a peak at → peaked at
in the event that → if
was found to be → was
a number of → many (or, various)
may be a mechanism responsible for → may cause
It is well-known that → Omit
for a long period of time → for a long period
is described in detail in → is detailed in
in the absence of → without
It is not uncommon that → It is common that
The finding is not inconsistent with → The finding is consistent with
Note, in particular, that the last two entries in this table emphasize that double negatives should
be avoided. Moreover, see Appendix 2 in Day and Gastel (2006) for an expanded list of words and
expressions to avoid.
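Readers who keep drafts as plain-text files can partly automate this search. The Python sketch below is merely an illustration, not an established tool; the file name draft.txt is hypothetical, and the phrase list is a small sample drawn from the table above that each writer should extend with his or her own habitual offenders.

# Illustrative sketch only: flag lines of a plain-text draft that contain a few
# of the wordy phrases discussed above, so they can be revised by hand.
WORDY_PHRASES = [
    "a period of",
    "due to the fact that",
    "in order to",
    "in spite of the fact that",
    "it is interesting to note that",
    "it is well-known that",
    "a number of",
]

def flag_wordy_phrases(path):
    """Print each line of the draft that contains a listed phrase."""
    with open(path, encoding="utf-8") as draft:
        for number, line in enumerate(draft, start=1):
            lowered = line.lower()
            for phrase in WORDY_PHRASES:
                if phrase in lowered:
                    print(f"line {number}: contains '{phrase}'")

if __name__ == "__main__":
    flag_wordy_phrases("draft.txt")  # hypothetical file name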
In addition to removing redundancies and unnecessary words, there are many opportunities
to introduce conciseness via word choice and sentence structure. For example, consider the following
examples:
Because the structure is assumed to remain circular, . . . → Assuming the structure remains circular, . . .
This will enable us to develop a better understanding of. . . → This will enable us to understand better. . .
This finding is the opposite of that reported by. . . → This finding is opposite that reported by. . .
The model is capable of describing. . . → The model can describe. . .
Table 1 is a list of all findings. . . → Table 1 lists all findings. . .
The next section is a brief description of the experimental methods. → The next section briefly describes the experimental methods.
The faculty advisor was the supervisor of both the undergraduate and the graduate students. → The faculty advisor supervised both the undergraduate and the graduate students.
The temperature readings will be dependent upon the contact stress. . . → The temperature readings will depend upon the contact stress. . .
Our laboratory technician also serves as the budget manager. → Our laboratory technician also manages the budgets.
The following example is an illustration of the basic concepts of. . . → The following example illustrates the basic concepts of. . .
Before continuing, note that some of the suggested changes in the right-hand column assume a
particular style not accepted by all technical writers. Some suggest that a table cannot list, a figure
cannot show, a model cannot describe, a paper cannot present, and so forth. That is, some argue that
only people can list, show, describe, or present; results are simply listed in a table, shown in a figure,
described by a model, or presented in a paper. Because this is a matter of style, one must decide what
approach to take, then be consistent.
As we shall see in Chapters 3 and 4, removing redundancies and unnecessary words not only
results in writing that is clearer and more concise, it enables us to meet stringent limitations on
words or pages in published works or proposals. Consider two instructive exercises:
Exercise 2.4
Write a three-page (double-spaced, 1-inch margins, 12-point font) biography of a leading scientist. Work hard to write clearly and concisely. Two days after finishing your essay, edit it further to reduce it to a single page without losing significant content. You will be surprised how easy and yet how powerful this exercise is. Finally, note that a one-page “white paper” is often all that is used to render important decisions in many professions; thus, it is important to be able to write an effective short report.
Exercise 2.5
Use the “Word Count” tool in your word processor to determine the number of words in a short document (e.g., abstract) that you recently composed. Once done, set out to reduce the length of the document by 50% without compromising the content.
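For a draft kept as a plain-text file, the same check can be scripted rather than done with the word processor’s Word Count tool. The short Python sketch below is only an illustration; the two file names are hypothetical stand-ins for the original and shortened versions of the document.

# Illustrative sketch only: count whitespace-separated words in the original and
# revised versions of a short document and report the percentage reduction.
def count_words(path):
    """Return the number of whitespace-separated words in a text file."""
    with open(path, encoding="utf-8") as text:
        return sum(len(line.split()) for line in text)

original = count_words("abstract_original.txt")  # hypothetical file names
revised = count_words("abstract_revised.txt")
reduction = 100 * (original - revised) / original

print(f"Original: {original} words; revised: {revised} words")
print(f"Reduction: {reduction:.0f}% (target: 50%)")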
2.3 ACTIVE VOICE, FIRST PERSON, AND DIFFERENT TENSES
2.3.1 Voice
In active voice, the subject of the sentence performs the action indicated by the verb. Conversely, in
passive voice, the subject of the sentence receives the action of the verb. The simple example below
distinguishes between passive and active voice:
Passive voice: The data were analyzed by him using an ANOVA.3
Active voice: He analyzed the data using an ANOVA.
Although passive voice is acceptable, indeed sometimes more appropriate, most writers agree to prefer
active voice, for it engenders conciseness and directness. In the example given, we see that seven words
suffice rather than nine — this reduction represents a savings of approximately 20%. Given a 10-page
paper, a 20% reduction in the number of words would yield an eight-page paper or else would provide an
extra two pages to include more information; such savings can be significant. Moreover, comparing the
two sentences in this example reveals the increased directness of the active voice, which promotes clarity
and conciseness.
Albeit preferred, active voice is less common than passive voice in scientific writing. A simple
change in the preceding example illustrates one reason for this:
3 ANOVA is a common acronym for analysis of variance in statistics.
Passive voice: The data were analyzed using an ANOVA.
Active voice: We analyzed the data using an ANOVA.
In this case, each sentence contains seven words, thus the active voice does not increase conciseness.
Moreover, the context can imply the “we” in the case of the passive voice, thus there need not be a
difference in clarity (e.g., who did what). In many cases, authors prefer not to write in the first or
the third person and revert to passive voice. The issue of person is addressed in the next section (or
should we say, we address the issue of person in the next section). Here, however, consider examples
of passive voice (left) and easy ways of changing them to active voice (right). First, changes primar-
ily in verb form can be effective:
The specimen is connected to the device through a custom cannula. → The specimen connects to the device through a custom cannula.
The output signal is fed into a signal conditioner. → The output signal feeds into a signal conditioner.
In the next section, the underlying theory is given. → The next section gives the underlying theory.
In our current research, attention is directed to finding the mechanism. → Our current research directs attention to finding the mechanism.
The theory is dependent on five basic postulates. → The theory depends on five basic postulates.
X was used to create a surface-confined computational mesh. → X created a surface-confined computational mesh.
Increasing evidence has implicated the importance of. . . → Increasing evidence implicates the importance of. . .
Three different sectioning planes were used to form. . . → Three different sectioning planes formed. . .
Experimental noise is increased when unshielded cables are used. → Experimental noise increases with the use of unshielded cables.
A reader’s attention is increased by the liberal use of figures and schematic drawings. → A reader’s attention increases with the liberal use of figures and schematic drawings.
Second, changing the subject, which often necessitates changing the order of the words in the sen-
tence, is often equally effective:
The specimen is connected to the device through a custom cannula. → A custom cannula connects the specimen to the device.
The results of the study are listed in Table 1. → Table 1 lists the results of the study.
The control is simplified by using commercial software. → Commercial software simplifies the control.
An improved result is obtained by refining the computational grid. → A refined computational grid improves the result.
The proper use of the equipment is described in Chapter 2 of the manual. → Chapter 2 of the manual describes the proper use of the equipment.
Ensure that all specimens are tested under the same conditions. → Use the same conditions to test all specimens.
The temperature is measured by a thermocouple. → A thermocouple measures the temperature.
These empirical findings are used as inputs into the theoretical model. → The theoretical model uses the empirical findings as inputs.
A detailed derivation of this equation is given in the appendix. → The appendix details the derivation of this equation.
The culture system is optimized by maintaining body temperature. → Maintaining body temperature optimizes the culture system.
A common excuse for (over)using passive voice is that it is natural because we often discuss past
events, as, for example, “it was reported that” or “it was found that.” Tense need not be the deciding
factor, however, as revealed by the following simple example:
The pressure was measured by a mercury manometer. → A mercury manometer measured the pressure.
The pressure is measured by a mercury manometer. → A mercury manometer measures the pressure.
Finally, as noted previously, writing in the first or third person often allows us to avoid passive voice.
Whereas the next section discusses the issue of person, as appropriate in scientific writing, consider
the following, which revisit previous examples with alternate changes:
The specimen is connected to the device through a custom cannula. → We connected the specimen to the device using a custom cannula.
The results of the study are listed in Table 1. → We list the results of the study in Table 1.
These empirical findings are used as inputs into the theoretical model. → We used the empirical findings as inputs into our theoretical model.
The temperature is measured by a thermocouple. → We measured the temperature using a thermocouple.
A detailed derivation of this equation is given in the Appendix. → I derive this equation in detail in the Appendix.
Experiments were performed in triplicate for each set of. . . → We performed three experiments for each set of. . .
The culture system is optimized by maintaining body temperature. → We optimized the culture system by maintaining body temperature.
In summary, we do not have to avoid passive voice at all costs; indeed, passive voice is preferred
in many cases. We also do not need to invoke first person to avoid passive voice. Nevertheless, our
general guideline is to prefer active voice when editing critically.
Exercise 2.6
Select a journal paper in your field that interests you and scan it specifically for examples of passive voice. Record five examples below with possible revisions.
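A rough first pass for this exercise can also be scripted. The Python sketch below is an illustration only, built on the simplifying assumption that a form of “to be” followed by a word ending in -ed or -en signals a possible passive construction; it misses irregular participles and flags some false positives, so its output is a prompt for rereading, not a verdict.

# Illustrative sketch only: flag sentences that may be written in passive voice.
# The pattern is deliberately crude; treat matches as candidates for review.
import re

PASSIVE_HINT = re.compile(
    r"\b(?:is|are|was|were|be|been|being)\s+\w+(?:ed|en)\b", re.IGNORECASE
)

def flag_possible_passives(text):
    """Return sentences that contain a possible passive construction."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if PASSIVE_HINT.search(s)]

sample = ("The data were analyzed by him using an ANOVA. "
          "He analyzed the data using an ANOVA.")
for sentence in flag_possible_passives(sample):
    print(sentence)  # prints only the first, passive sentence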
2.3.2 Person
Students of the history of science know that scientific writing used to be much more personal. As a
simple example, consider the following excerpt from one of the works of W. Harvey (1578–1657)
on the motion of the heart, (Clendening, 1960, p. 159):
Besides the motions already spoken of, we have still to consider those that appertain to
the auricles. Casper Bauhin and John Riolan, most learned men and skillful anatomists,
inform us from their observations, that if we carefully watch the movements of the heart
in the vivisection of an animal, we shall perceive four motions distinct in time and in
place, two of which are proper to the auricles, two to the ventricles. With all deference to
authority I say, that there are four motions distinct in point of place, but not of time. . . .
If written today, we may well have read (with little other editing):
Besides the motions already noted, there is a need to consider those concerning the
auricles. Bauhin (16xx) and Riolan (16xx) report that careful monitoring of the heart in
an open-chest animal reveals four motions distinct in time and place, two of the auricles
and two of the ventricles. Nevertheless, it is suggested here that these four motions are
distinct in place but not time. . . .
Why has scientific writing become so impersonal today? Certainly, there has been an appropriate
move away from the verbose, from patronizing prose, and from self-aggrandizement. Nevertheless,
science and engineering are personal — they are advanced by people, usually for the good of people —
and it is not only acceptable, in many cases it is more honest, direct, and effective to write in first
person. For example, in Chapter 4 on writing research proposals, we will see that an important part
of the NIH-R01 grant is a section called Preliminary Results. Imagine that you review such a sec-
tion and read, “It has recently been shown that . . . (12).” Noting that (12) denotes reference number
12 in the list of references, the reviewer would not know if it was the applicant or another investiga-
tor who showed this important finding unless he/she looked at the reference list. Conversely, there
would be no ambiguity if the applicant wrote “We recently showed that . . . (12).” In the case of a
research proposal, clearly demonstrating one’s previous work may increase tremendously the chances
of funding, thus employing first person may be both effective and advantageous.
As a reminder that first person can yield effective and memorable prose, recall the following
sentences from the seminal paper by James Watson and Francis Crick on the structure of DNA:
We wish to suggest a structure for the salt of deoxyribose nucleic acid (D.N.A.). . . . It
has not escaped our notice that the specific pairing we have postulated immediately sug-
gests a possible copying mechanism for the genetic material.
There are three persons in typical grammatical structure: first person refers to the person or persons
who are speaking or writing; second person refers to the person or persons spoken or written to; and
third person refers to person(s) spoken or written about. For example, consider the common pronouns,
singular and plural, in the three persons and three common cases (Vivian and Jackson, 1961):
SINGULAR
              FIRST PERSON    SECOND PERSON    THIRD PERSON
Nominative    I               You              He, She, It
Possessive    My, Mine        Your, Yours      His, Hers, Its
Objective     Me              You              Him, Her, It

PLURAL
              FIRST PERSON    SECOND PERSON    THIRD PERSON
Nominative    We              You              They
Possessive    Our, Ours       Your, Yours      Their, Theirs
Objective     Us              You              Them
Whereas the words he or him were used generically in the past to denote males or females, modern
writers tend to be much more sensitive to issues of gender. Thus, there has been a move to use
neutral pronouns. For example, the famous imperative from Star Trek fame, “To boldly go where
no man has gone before,” can be written as “To boldly go where no one has gone before.” It is also
acceptable to write he/she or him/her when desired, but we should prefer neutral constructions.
Finally, numerous terms such as department chairman or layman can be written as department chair
or layperson to avoid this issue.
Albeit largely a matter of style, we suggest that it is acceptable and many times preferable to
use a personal style in scientific writing. For example, it is acceptable to write: “Although they were
the first to exploit their novel empirical observations by identifying quantitative correlations, we
were the first to develop a theoretical basis to explain the observations.”
As food for thought, consider the following simple examples as you decide on a particular
style:
The authors recommend, therefore, that. . . → We recommend, therefore, that. . .
Hence, it is suggested that. . . → Hence, I suggest that. . .
It will be seen that. . . → You will see that. . .
Based on these results, it was decided that. . . → Based on these results, we decided that. . .
It has been shown previously that. . . → We previously showed that. . .
One important warning, however: when using “I,” be careful not to give the impression that it serves an egotistical end.
2.3.3 Tense
Tense is a property of time; it signifies when events occur or when conditions exist (Vivian and
Jackson, 1961). There are six tenses: past, present, future, past perfect, present perfect, and future perfect.
The perfect tenses typically involve the use of the words “have” or “had.” Consider the following
simple examples:
Past: I completed the experiment.
Present: I am completing the experiment.
Future: I will complete the experiment.
Past perfect: I had completed the experiment.
Present perfect: I have completed the experiment.
Future perfect: I will have completed the experiment.
Two of the key questions in scientific writing are, “What tense should I use when reviewing
what others reported previously?” and “What tense should I use when reporting what I did?” Al-
though it is of little comfort, the answer to these questions is that there is no set answer.
Some authors suggest, however, that if concepts or findings reported in a previous peer-
reviewed work remain true, one should refer to them in the present tense. As a simple example,
consider Newton’s second law of motion, which was put forth in the 17th century. One could write
“As Newton showed in the Principia, force equals mass times acceleration.” Alternatively, one could
write “As Newton showed in the Principia, force equaled mass times acceleration.” All should agree
that if it is still believed that force equals mass times acceleration, then present tense should be used.
A more modern example could be, “Smith et al. (1999) show that . . .” versus “Smith et al. (1999)
showed that. . . .” Again, the choice is largely a matter of personal style; the most important thing is
to be consistent within a given paper.
Most authors agree that we should use past tense when reporting our own new findings, for
they have not yet been verified or accepted widely. Hence, when writing the results section of a
paper, it is appropriate to use “we measured” and “we found” or similar constructs.
2.4 INFINITIVES AND MODIFIERS
2.4.1 Infinitive
An infinitive is a verb form, a characteristic sign of which is the word to, for example, “to measure,”
“to quantify,” or “to report” (Vivian and Jackson, 1961). A split infinitive occurs when a word or
phrase separates the “to” and its complement. A famous split infinitive in recent years comes from
the aforementioned quote from Star Trek: “To boldly go where no man has gone before,” which
we could rewrite as “to go boldly where no man has gone before.” The issue is how we wish to go,
boldly or fearfully. Although it is best not to split infinitives, grammarians are now less dogmatic
with regard to this rule. Indeed, a purposefully split infinitive may be preferred in some cases. For
example, consider the phrase “to promote exercise vigorously” (Iverson et al., 1998). There could
be confusion by some as to whether vigorously relates to promote or exercise, hence writing “to
vigorously promote exercise” could be clearer, unless of course the intent was “to promote vigorous
exercise.” Strunk and White (1979) also note that the sentence “I cannot bring myself to really like
the fellow” is clear, concise, and relaxed. Nevertheless, the general rule should be: Do not split infini-
tives unless the sentence is less awkward when doing so.
Let us consider a few examples of split infinitives and how to correct them easily.
The goal of this project is to better understand. . . → The goal of this project is to understand better. . .
We plan to quickly initiate the funded study. → We plan to initiate the funded study quickly.
It is difficult to separately control X and Y. . . → It is difficult to control X and Y separately.
. . . , they failed to correctly diagnose → . . . , they failed to diagnose
It is bad practice in the laboratory to arbitrarily stop an experiment. → It is bad practice in the laboratory to stop an experiment arbitrarily.
To effectively study the source of the error, . . . → To study the source of the error effectively, . . .
The sponsor requested us to, with all possible haste, complete the final report. → The sponsor requested us to complete the final report with all possible haste.
The last example in this table is a particularly flagrant abuse of the infinitive.
Other examples of split infinitives occur when a single “to” serves multiple infinitives.
Whereas it is generally acceptable to write, “There is a need to assemble and test the device,” rather
than “There is a need to assemble and to test the device,” it is also better to write “There is a need to
assemble the device according to the sponsor’s specification, then to test it . . .” rather than “There is a
need to assemble the device according to the sponsor’s specification, then test it. . . .”
Finally, note that infinitives can occur in active or passive voice and in past or present tense.
In these cases, the infinitives may take different forms, such as:
Present active: to tell
Present passive: to be told
Past active: to have told
Past passive: to have been told
Hence, that a word or phrase appears between the “to” and its complement need not signal that an
infinitive has been split.
2.4.2 Modifiers
Another mistake common in technical writing is the use of nouns as modifiers. A modifier is a word,
phrase, or clause that renders another word or group of words more specific; two common kinds of
modifiers are adjectives and adverbs. In contrast, a noun is a person, place, or thing. Perhaps it has
been in the spirit of trying to write concisely that nouns have been misused frequently as modi-
fiers. In a syndicated column, J.J. Kilpatrick noted a few examples from the New York Times: “their
court victory,” which is better written “their victory in court,” and “close-knit classical music world,”
which is better written “close-knit world of classical music.” Common examples in the technical
literature include “material science” rather than “the science of materials” and “fluid mechanics”
rather than “the mechanics of fluids.” Yet, such constructions need not be considered problematic,
which reminds us that certain cases are acceptable. More flagrant examples of noun modifiers exist
in many scientific papers and should be minimized. Tabulated below are a few examples found in
recently published works (which we do not cite so as not to criticize particular authors, for many,
including us, are equally guilty):
The primary extracellular matrix components include. . . → The primary components of the extracellular matrix include. . .
When tissue temperature reached. . . → When the temperature of the tissue reached. . .
Force and length data were used to compute stresses. → Stresses were computed from data on forces and lengths.
An increased wall stiffness of the aorta. . . → An increased stiffness of the wall of the aorta. . .
Minimum residual microfibrillar function. . . → Minimum residual function of the microfibrils. . .
Ultrastructural analysis has begun to. . . → Analysis of ultrastructure has begun to. . .
. . .could detect molecular level changes. . . → . . .could detect changes at the molecular level. . .
. . .will use gene expression measurements to. . . → . . .will use measurements of gene expression to. . .
Changes in cell structure and function reveal. . . → Changes in the structure and function of cells reveal. . .
The resulting surface stress appears. . . → The resulting stress at the surface appears. . .
. . .the ability of the cells to move into the wound area. . . → . . .the ability of the cells to move into the area of the wound. . .
. . .to undergo changes in contractile protein expression. . . → . . .to undergo changes in the expression of contractile proteins. . .
. . .organ development becomes highly sensitive to. . . → . . .development of the organ becomes highly sensitive to. . .
. . .of the neonatal fibroblast. . . → . . .of the fibroblast in neonates to. . .
In contrast to previous tables of examples on redundancies, the “corrected” right-hand side here
often resulted in a longer sentence or phrase. Again, it may have been in the interest of concise-
ness that nouns have come to be misused (left side). Nevertheless, one is well advised to use nouns
properly.
Next, consider a few simple suggestions to promote the proper use of appropriate modifiers
(adjectives and adverbs). Recall that adjectives modify nouns, whereas adverbs can modify a verb,
an adjective, or another adverb. Adverbs may come before, after, or between the words that they
modify. When possible, a sequence of modifiers should be listed according to length or logical order.
For example, Berry (1971) suggests that “tired, bored, and exhausted” is written better as “bored,
tired, and exhausted” because it is likely that one becomes bored before tired. He likewise suggests
that the modifiers “dry, withered, and flaky” should be ordered in the sequence in which they occur:
“withered, dry, and flaky.”
Finally, note that “a,” “an,” and “the” are called articles. The definite article “the” refers to
something or someone in particular. Hence, when we read “A significant finding was…” versus
“The significant finding was . . . ,” we see that the former refers to one of many significant find-
ings, whereas the latter refers to one finding that was significant. This simple distinction must be
respected.
Although many modifiers are effective in different forms of writing, their overuse in scientific
writing may suggest that one’s results are not quantitative, that they need embellishing. For example,
instead of saying that data “are very noisy,” we need only say they “are noisy,” then provide specific
measures such as a signal-to-noise ratio to quantify the degree of the noise. Similarly, a numerical
method may be “very robust,” but if it is robust, that is all that needs to be said. In other words, the
modifier “very” often adds very little (as in this case). Similarly, “quite” is quite unnecessary in most
cases, including this one, and although the word “rather” is considered by some to be rather impor-
tant, it often is not. A general rule, therefore, is: Do not use modifiers unless the meaning is clarified by
doing so.
Exercise 2.7: Review a technical paper that you wrote previously and eschew all unnecessary or
inappropriate modifiers. If you are somewhat puzzled why you used words such as “somewhat,” take
comfort that you are not alone. As examples, record five illustrative sentences below and possible
corrections.
2.5 ADDITIONAL ISSUES OF WORD CHOICE
The best advice related to word choice is to keep a dictionary within easy reach and to consult it
frequently. With regard to the five-step recipe for composition given in Section 2.1, however, we
should remember that this should be done during the phase, “edit critically.” Indeed, if you are
struggling for just the right word while “writing freely,” it is often best to put an “xx” in the text so
that you are reminded to search for an appropriate word later and not interrupt the flow of your
thoughts and composition. Here, however, we briefly identify and discuss some words that are often
used interchangeably but should not be so used. Consider, for example:
Alternative/alternation: An alternative is a choice between two mutually exclusive possibili-
ties. An alternation is a successive change from one thing to another and back again.
Amount/number: Amount refers to a quantity that is not countable, whereas number is used
when it is possible to count. It is thus correct to say “The amount of information available
was not sufficient for . . .” or “The data suggest a number of conclusions.”
Because/since: Strictly speaking, because refers to a cause–effect relationship and since refers
to a past event. It is appropriate to write “Because the results suggested . . .” and “Since the
last conference . . .”
Between/among: In general, use the word between when considering two things and use the
words among or amongst when dealing with more than two things.
Can/may: The word can has to do with ability, whereas the word may has to do with having
permission.
Compare with/compare to: Use compare with when examining or discussing similarities or
differences. In general, only use compare to when representing a metaphorical similarity.
Complement/compliment: A complement is something that completes or brings to a whole. A
compliment is an expression of congratulations or praise.
Comprise/compose: Comprise means to consist of or to include. Compose means to make up
the constituent parts of, to constitute or form. Good examples are “The Union comprises
50 states” and “Fifty states compose (or constitute) the Union.”
Continual/continuous: Continual means with occasional interruption, whereas continuous
means without interruption.
Data/datum: Data are plural, typically representing facts or information. Datum is the sin-
gular form of data, often used in the context of a point from which to measure.
Due to/because of: Due to means attributable to. Because of relates to a cause or reason for
occurring. A helpful hint is that a sentence should not start with due to.
Effect/affect: An effect is a noun; it implies a result, something that is caused. Affect is a verb;
it brings about a change. To affect is thus to influence or impress.
Either/neither: It is correct to write “either A or B” and likewise “neither A nor B,” but we
do not use “neither A or B.” Moreover, in each case, these words imply only two options,
hence we cannot say “either A, B, or C.”
Essential/important: Essential implies indispensable, fundamental, or absolute. Important
merely implies significant or noteworthy.
Farther/further: Farther should be used when the context is distance. Further implies some-
thing in addition, such as the need for further experiments. Hence, one does not move a
fixture further toward the center.
Good/well: In most cases, good is used to modify a noun (e.g., she is a good writer), whereas
well is used to modify a verb (e.g., she writes well).
However/nevertheless: Strunk and White (1979) suggest that we should avoid beginning
a sentence with the word however when the meaning is nevertheless or yet. This is easily
corrected via replacement with these more acceptable beginning words or by moving the
however to the middle or end of the sentence. When used at the beginning of a sentence,
however should be thought of as “in whatever way” or “to whatever extent.” A good example
is given by Strunk and White (1979, p. 49): “However discouraging the prospect, he never
lost heart.” In contrast, it would be better to write “Nevertheless, he never lost heart despite
the discouraging prospects” rather than to write “However, he never lost heart despite the
discouraging prospects.”
Imply/infer: Imply means to suggest or indicate by logical necessity, whereas infer means to
deduce based on available evidence.
Precede/proceed: Precede means to come before in time, to occur prior to. Proceed means to go
forward, especially after an interruption, or to move on in an orderly fashion.
Principal/principle: A researcher may be the principal investigator on the project but not
the principle investigator. One may use a scientific principle but not a scientific principal.
Another usage that is often confused is that the solution to an eigenvalue problem yields a
principal value and in mechanics one may compute a principal stress or strain.
Shall/will: It is suggested by some that shall should be used for future expectations in first
person and will should be used in second and third person. This distinction between shall
and will occurs only in formal writing, however, and the word will often suffices. A good
example is that will is appropriate in grant proposals, for example, “We will test the hy-
pothesis that. . . .”
That/which: In general, use that to lead into a defining or essential clause and use which to
lead into an inessential or nonrestrictive clause. Kilpatrick (1984) suggests an easy way to
decide usage in most cases: use which whenever the clause is set apart by commas and use
that otherwise. The key point, however, is that the word that is used with essential clauses. For
example, note the difference between the following sentences.
“The transducer that is broken is on the shelf.”
“The transducer, which is broken, is on the shelf.”
In the first case, only the transducer that is broken is on the shelf. In the second case, the
transducer is on the shelf and it happens to be broken.
That/who: That refers to things and who refers to people.
While/whereas: Strictly speaking, while should be used to convey a sense of time, for ex-
ample, “The computer acquired data while the device subjected the cells to increasing me-
chanical loads.” Nonetheless, many accept while as a substitute for although. In contrast,
whereas means “it being the fact that” or “inasmuch as.”
Next, consider a few words that are useful in technical writing but sometimes misused.
Aforementioned: This word is an adjective; it must be combined with a previously used
noun. For example, it is correct to write “The aforementioned finding suggests. . . ,” but it
is incorrect to write, “As aforementioned, . . . .”
And/or: This is a construction used by some, but often best avoided. Use either the word
and or the word or as appropriate.
Correlate: To put into a complementary, parallel, or reciprocal relationship, not implying
causality.
Dilemma: Either a situation that requires one to choose between two equally viable alterna-
tives or a predicament that seemingly defies a satisfactory solution.
Former: The first mentioned of two things.
Latter: Like former, this word implies two choices. If one has a list of three or more items,
then to refer to the last one in the list, simply say “the last one,” not “the latter one.”
Per: “Pursuant to.”
This: For clarity, follow the word this with a noun. For example, do not write, “This is to
be expected,” but rather write, “This nonlinearity is to be expected” or “This finding is to
be expected.”
Finally, some words have specific meanings in science and mathematics even though they are often
used loosely in everyday speech. Because our interest is scientific writing, however, we must respect
the specific meanings. Three prime examples of such words are significant, necessary, and sufficient. It
would be natural, for example, to write: “The response of Group A differed significantly from that of
Group B.” Yet, we must ask whether this is what we really mean. The word significant in science usually
carries a statistical meaning, that is, it usually implies that based on a standard statistical test, there is
a significant difference between two metrics (e.g., as indicated by a p < 0.05 associated with a specific
statistical analysis). If such a test was performed and passed, then we could write our illustrative sen-
tence as given; if not, it would be better to use a different word or to delete the modifier altogether. In
the absence of a statistical test, it would be better to write: “The response of Group A differed mark-
edly from that of Group B” or simply “The response of Group A differed from that of Group B.” The
words necessary and sufficient similarly have precise meaning in mathematics and they are often used
together. In this context, necessary means required and sufficient means adequate. For example, a neces-
sary and sufficient condition for a solution to hold is much stronger than a sufficient condition alone.
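To make the statistical sense of significant concrete, consider a minimal sketch in Python (the data and the common 0.05 threshold are hypothetical, and the SciPy library is assumed to be available). It merely illustrates the kind of test that should precede the word significant; the appropriate test depends on the data and study design.

from scipy import stats

# Hypothetical responses for two groups (illustration only).
group_a = [4.1, 3.9, 4.4, 4.0, 4.2, 3.8]
group_b = [4.9, 5.1, 4.7, 5.3, 5.0, 4.8]

# Unpaired two-sample t-test; p < 0.05 is the conventional threshold.
t_statistic, p_value = stats.ttest_ind(group_a, group_b)

if p_value < 0.05:
    print(f"The groups differed significantly (p = {p_value:.3f}).")
else:
    print(f"The groups differed, but not significantly (p = {p_value:.3f}).")

Only after such a test has been performed and passed should the sentence claim a significant difference; otherwise, a neutral modifier (or none at all) is the honest choice.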
In concluding this section, it is interesting to consider a comment ascribed to the famous
ancient philosopher Socrates:
The wise man knows that he knows not; the fool knows not that he knows not.
Similarly, consider a comment by the famous modern philosopher Bertrand Russell:
Although this may seem a paradox, all exact science is dominated by the idea of approxi-
mation….When a man tells you he knows the exact truth about anything, you are safe
in inferring that he is an inexact man.
If we accept that science represents relative, not absolute, truth, then should we be careful not to
use strong phrases such as “the data demonstrate” or “the data prove”? For example, should we use
phrases such as “the data suggest” or “the data imply”? Similarly, should we avoid saying that some-
thing “is,” and instead say that “it appears that”? Here, again, we simply suggest that one should
think carefully about this issue, make a purposeful decision within the context of common usage,
and be consistent.
Exercise 2.8: Read three technical papers in your field and generate a list of phrases that reflect
either a “certainty” or a “possibility” with regard to important findings or conclusions. Write a one-
page summary and indicate the approach to communicating such results that you find to be the
best.
2.6 PUNCTUATION, ABBREVIATIONS, AND FOREIGN LANGUAGES
2.6.1 Exploit Methods of Punctuation
Punctuation is a system of devices or marks (e.g., commas, semicolons, colons, dashes, and paren-
theses) that clarify relationships between words and groups of words (Vivian and Jackson, 1961).
Aside from the standard use of the period, many writers of science and engineering tend to use
commas sparingly and to avoid using semicolons, colons, and dashes. Although we should not over-
use such devices, variety in punctuation can be as effective in written communication as variety in
tone can be in oral communication. We list here a few rules of punctuation, but we encourage the
reader to give particularly careful thought to the effective use of semicolons, dashes, and parenthe-
ses. As a start, consider Rules 2 to 4 of Strunk and White (1979):
One should use a comma after each entry, except the last, in a list of three or more entries
that share a common conjunction such as and or or. For example, we should write “this finding was
unexpected, repeatable, and important.” To appreciate this usage, recall from Section 2.1 that the
fourth step in writing well is “read out loud.” Doing so here, the ear reveals a difference between
“this finding was unexpected, repeatable, and important” (with a verbal pause after each comma)
and “this finding was unexpected, repeatable and important.” In other words, the latter case sounds
like the finding was “unexpected” as well as “repeatable and important.”
When paired, commas are useful devices to set off a nonessential clause, for example, “The
transducer, which is broken, is on the shelf.” When used with a conjunction to introduce an indepen-
dent clause, the comma should be omitted before the and when the clauses relate closely. In contrast,
the comma should almost always precede conjunctions such as but, for, and or. Commas are also useful
to set off an introductory phrase, such as “In this paper, we show. . . .” Finally, a comma can be used to
separate three or more modifiers, such as in the case of a “randomized, double-blind, clinical trial.”
Use the semicolon instead of a period when independent clauses relate closely and it is ef-
fective to highlight this similarity. The only exception to this rule is the case of short independent
clauses. Consider, therefore, the following examples from Strunk and White (1979):
Stevenson’s romances are entertaining. They are full of exciting adventures. → Stevenson’s romances are entertaining; they are full of exciting adventures.
It is nearly half past five. We cannot reach town before dark. → It is nearly half past five; we cannot reach town before dark.
Man proposes, God disposes. → (no revision needed; the clauses are short)
Here today, gone tomorrow. → (no revision needed; the clauses are short)
The semicolon is also useful to separate main clauses that are joined by conjunctive adverbs such as
the following: indeed, yet, however, moreover, or hence. For example, we might write (Iverson et al.,
1998): “This consideration is important in any research; yet it is often overlooked, if not denied.”
Use the colon before a long in-line quotation (see Section 2.7), to introduce a list, or to sepa-
rate independent clauses when the first clause introduces the second one. For example, if we wish
to specify the composition of a physiological solution used in an experiment, we might write the
following.
The specimens were immersed in a physiological solution consisting of, in mM: 116.5
NaCl, 22.5 NaHCO3, 1.2 NaH2PO4, 2.4 Na2SO4, 4.5 KCl, 1.2 MgCl2, 2.5 CaCl2, and
5.6 dextrose.
Like the comma, the em dash and parentheses can be used to set off nonessential, but clarifying,
clauses or entries. The decision to use the em dash or parenthesis (more common)
is again a matter of style, with the em dash typically reserved for the longer, sometimes tangential,
breaks in thought. Consider the following two cases:
Of the many risk factors for coronary artery disease — high cholesterol, high salt intake,
cigarette smoking, lack of exercise, diabetes, and hypertension — some can be avoided
by simple changes in lifestyle.
Many risk factors for coronary artery disease can be controlled by simple changes in
lifestyle (e.g., cholesterol, high salt, and smoking).
Parenthetical setoffs are also useful in providing supplementary information or identifications. For
example, it is common to read: “of the 10 tests, only 5 (50%) were successful,” “the differences were
not significant ( p > 0.05),” or “a consistent volume of fluid (10 ml) was injected.” As noted below,
it is common to include clarifiers within parenthetical setoffs such as (for example, . . .) or (that
is, . . .), abbreviations for which are given below.
The hyphen has many uses as well; see Brogan (1973) for a good discussion of this device.
Commonly misunderstood uses are numerical or multiword modifiers. For example, we should
write “The diameter of the device is 5 mm,” but we should use the hyphen to write “The 5-mm-
diameter device. . . .” We should also use the hyphen to write out numbers such as thirty-seven or
two-thirds. Another use of a hyphen is in the pairing of words that, via the natural evolution of
grammar, often become single words. A simple example is mechanical transduction, which became
mechano-transduction and now is mechanotransduction. Finally, multiword modifiers are often hy-
phenated, for example, “the signal-to-noise ratio” or “one-way Student t-test.” Uses such as “pre-
and post-surgical” are also common.
With regard to numbers, it is common to write out in words those numbers less than 10 (e.g.,
zero, one, two) but to use numerals for those numbers 10 or greater (e.g., 11, 100, 1000). There
are exceptions, however (Blake and Bly, 1993). For example, if data are collected at days 0, 3, 7, and
14, we would not write “at days zero, three, seven, and 14.” In other words, one of the best rules of
thumb is consistency and clarity. Moreover, always write large numbers in a way that is most easily
understood. For example, the number 30,000,000 may be best understood as 30 million if referring
to dollars or population; in contrast, it may be best understood as 3 × 10⁷ if referring to a quantity in
physics or chemistry or 30 MPa if referring to stress in mechanics. The example of 30 MPa reminds
us to use, when appropriate, accepted prefixes: giga (G), mega (M), kilo (k), milli (m), nano (n), and
so forth. The best rule of thumb, therefore, is always write for clarity. Finally, it should be noted that
decimal values less than unity should be written with a leading zero, for example, 0.15 rather than
.15. Whether one uses decimal values or not, also remember to include only significant digits, that is,
information that is reliable. For example, although a calculator or computer may provide an answer of
4.1248432, if only three of the digits are reliable then we should write this as 4.12. Refer to elementary
textbooks on physics or chemistry for good discussions on the appropriate use of significant digits.
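For authors who prepare numerical results with a script, the following Python sketch (our own illustration; the helper name is invented) shows one way to honor these conventions for significant digits and large numbers:

def to_significant_digits(value, digits):
    # Round a value to a given number of significant digits using the "g" format.
    return float(f"{value:.{digits}g}")

print(to_significant_digits(4.1248432, 3))   # prints 4.12, keeping only the reliable digits
print(f"{30_000_000:.0e}")                   # prints 3e+07, i.e., 3 × 10⁷
print(f"{30_000_000 / 1e6:g} million")       # prints 30 million
print(f"{0.15:.2f}")                         # prints 0.15, with the leading zero retained

Whichever form is chosen, the guiding rule remains the same: report only reliable digits and write the number in the form most easily understood by the reader.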
As last reminders, do not use the apostrophe in special cases of decades or centuries; rather,
one should write 1970s or 1700s. Words such as its and it’s and whose and who’s are often confused.
This is simple to remember: it’s and who’s are contractions of “it is” and “who is,” whereas its and
whose show possession. Contractions should be avoided in formal writing, however. Finally, it is im-
portant to emphasize that most publishers use a single space after a period, not two spaces. Not only is
the single space typically more pleasing to the eye, it is also an effective means to reduce the number
of pages and hence cost of publication because those extra spaces add up.
2.6.2 Abbreviations
Many writers suggest that abbreviations should be avoided in formal writing. In technical writing,
however, we should merely minimize the use of abbreviations, using them only when they improve
conciseness or are common within the intended context. One of the easiest ways to decide whether
to use a particular abbreviation is to ask if it will improve or impede the reader’s understanding. For
example, many readers of technical papers go directly to the figures or results to see what was found,
or they go directly to the discussion to see what was deemed to be important. They can be frus-
trated, therefore, if the figure legends, results, or discussion contain uncommon abbreviations that
require them to search the introduction or methods to find the associated meanings. This should be
avoided. Nevertheless, many abbreviations are so common that it would be surprising if they were
used with explanation. Examples include ANOVA (analysis of variance), DNA (deoxyribonucleic
acid), ECG (electrocardiogram), MRI (magnetic resonance imaging), and mRNA (messenger ri-
bonucleic acid). There are, of course, many similarly common abbreviations. Scientific units should
also be abbreviated without definition, for example, kPa (kilopascal), MHz (megahertz),
ml (milliliters), mmHg (millimeters of mercury), and mM (millimolar).
Many other abbreviations, such as LV (left ventricle) or MAP (mean arterial pressure), are
used widely and so too for abbreviations of many biologically important molecules and chemical
compounds. For example, one would be expected to use the following abbreviations: NO (nitric
oxide), TGF (transforming growth factor), and PMMA (poly(methyl methacrylate)). In these cases,
however, common practice is to introduce the abbreviation only if it is used three or more times in
subsequent text and to define the abbreviation at its first occurrence in the body of the paper, not
the abstract. It is best not to construct new abbreviations, however, just because a descriptor is used
repeatedly. For example, we would not introduce NM for noun modifier even if used extensively.
Similarly, we would not use SI for split infinitive. Indeed, this example reveals that one should be
careful not to define new abbreviations that are identical to commonly accepted abbreviations (e.g.,
SI is French for Système International, the common units of measurement in most of science and
engineering). Iverson et al. (1998) provide an extensive list of accepted abbreviations in medicine.
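Because the “three or more occurrences” rule is easy to overlook while editing, a short Python sketch such as the following (the manuscript file name and the candidate abbreviations are hypothetical, chosen only for illustration) can flag abbreviations that are not yet justified:

import re

# Read the draft manuscript as plain text (hypothetical file name).
with open("manuscript.txt", encoding="utf-8") as handle:
    manuscript = handle.read()

# Candidate abbreviations and the phrases they would replace.
candidates = {"MAP": "mean arterial pressure", "LV": "left ventricle"}

for abbreviation, phrase in candidates.items():
    count = len(re.findall(re.escape(phrase), manuscript, flags=re.IGNORECASE))
    verdict = "abbreviation justified" if count >= 3 else "probably unnecessary"
    print(f"{abbreviation} ({phrase}): {count} occurrence(s); {verdict}")

Such a check is only an aid; the final judgment about whether an abbreviation helps or hinders the reader remains the author’s.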
2.6.3 Foreign Languages
Many publishers seek to reduce the length of a publication because additional pages translate into
additional costs. For this reason, some well-accepted abbreviations are encouraged and thus are
common. Four of the most common abbreviations come from Latin, namely:
1. et al., which means “and others,” is commonly used when referring to a publication by three or more authors. In such cases, it is customary to cite the last name of the first author followed by “and others,” for example, Smith et al. (1999) or (Smith et al., 1999). Whether one italicizes the Latin et al. depends on the publisher, but in most cases, a period should follow the al.
2. e.g., which means “for example,” is often used in parenthetical situations (e.g., in this way). Remember, too, that an example is just that, one of many possible illustrations; it is not a unique clarifier.
3. i.e., which means “that is,” is also often used in parenthetical situations (i.e., it often appears in this type of context). In contrast to e.g., using i.e. is similar to using the phrase “in other words” and thus is meant to clarify a meaning, not to provide an illustrative example.
4. cf., which means “compare with,” is often used to draw attention to a similar or related illustration, equation, or other scholarly work, for example, (cf. Figure 1).
Another common abbreviation from the Latin is:
5. etc., which means “and other unspecified things of the same class” or simply “and so forth.”
Although it is commonly used, most grammarians agree that etc. should not be used in formal
writing or, if so, only sparingly and for good purpose. If one does not wish to provide an exhaus-
tive list, using “for example” is an appropriate way to indicate the listing of some, but not
all, of the members of that class. One would thus never use e.g. and etc. within the same
parenthetical statement.
Although scientific and engineering documents should be scholarly, they should not be pretentious.
An attempt to impress the reader by using phrases or words from Latin, Greek, or other “foreign”
languages is not advised in general unless their meaning is well-known and they engender concise-
ness or clarity. For example, some words and phrases are common in the biomedical literature and
should be used, for they are well understood. In addition to the aforementioned et al., i.e., e.g., and
cf., consider for example:
de novo: anew
in situ: in its natural place
in vitro: “in glass” or generally in an artificial environment
in vivo: within a living organism
ex vivo: outside of a living organism, but still living
Other acceptable, but less common, examples are:
ad infinitum: without end or limit
in toto: totally, altogether, entirely
reductio ad absurdum: reduction to the absurd
status quo: as it is now
2.7 FOOTNOTES, QUOTATIONS, AND PROPER CITATION
2.7.1 Footnotes
Footnotes are brief notes placed at the bottom of a page that provide a citation (older use) or a
comment on a specific part of the main text. Although scientists and engineers used footnotes
extensively in the past, such usage is generally discouraged today. We do not advocate eliminating
footnotes, but we do encourage sparse usage. For example, footnotes can provide brief examples or
clarifications that do not otherwise fit within the text using parenthetical devices such as commas,
parentheses, or em dashes. Footnotes should not be used to solve problems in organizing material
or sentence structure, however.
2.7.2 Quotations
Quotations must be denoted in one of two ways: if integrated within the text, they must be enclosed
within quotation (“ ”) marks; if longer, and singled out, they should be indented but appear without
quotation marks. Some publishers also use a smaller font for indented quotations although we do
not advocate this policy. For example, let us recall the quotation from W. Strunk Jr. that is given at
the beginning of this chapter:
Vigorous writing is concise. A sentence should contain no unnecessary words, a para-
graph no unnecessary sentences, for the same reason that a drawing should have no
unnecessary lines and a machine no unnecessary parts. This requires not that the writer
make all his sentences short, or that he avoid all detail and treat his subject only in out-
line, but that every word tell.
In general, use longer block quotations sparingly, if at all, in a technical document. Many times, the
reader will skip such quotations to get to the meaning or importance of the quotation that follows.
Another rule of thumb is to ask whether the quotation is necessary or if it is simply an easier alterna-
tive. If the latter, a brief reference to the original source with original commentary would be better.
Ellipses, that is, three dots in sequence (. . .), indicate that words are omitted, usually from a
quotation. Four such dots in sequence usually indicate that words are omitted at the end of a sen-
tence, hence the last dot can be thought of as the period at the end of that sentence. Beginning a
quotation with a lowercase letter indicates that the author has omitted the initial part of the quote;
beginning with a capital letter indicates that one is beginning the quotation at the beginning of the
sentence. Thus, ellipses are not needed at the beginning of a quotation.
When information is missing or incorrect in a quotation, it is acceptable to provide complete
and accurate information. The information that is added should be enclosed by brackets [ ]. For
example, given the quote “Newton postulated…,” one may write “[Isaac] Newton postulated. . . .”
It is acceptable, however, to provide a quotation exactly as it appears without correcting obvious
or subtle errors provided this does not mislead the reader. Finally, one may find or insert [sic] in a
quotation. The form [sic] comes from the Latin and indicates that a seemingly paradoxical word,
phrase, or fact is not a mistake; it should be read as given.
2.7.3 Proper Citation
Although we discuss issues of ethics in Chapters 6 to 8, including plagiarism, we briefly mention it
here for convenience. Simply put, plagiarism is the passing off as one’s own the ideas and words of
another. Actually, “pass off” is too soft a phrase; plagiarism is intellectual theft. Most universities
have writing centers and associated Web pages, hence one can find formal definitions and excellent
examples of plagiarism.
The best way to avoid plagiarism is through proper citation. Although we tend to learn in
English classes that there is a particular way to cite works in a bibliography, in scientific writing, the
citation format varies considerably from publisher to publisher. Hence, the best advice is to read the
“instructions to authors,” which can be found on the Web page for the journal or publisher or often
within the journal itself. We give examples of different styles of citation in Section 3.1.9.
Exercise 2.9: “The Double Helix” by J.D. Watson is a wonderful account of the events that
surrounded the discovery of the double-helix structure of DNA. Read this book and write a three-page
summary that highlights issues of ethical interest.
2.8 VOCABULARY
Vigorous writing should be clear and concise; nevertheless, it should also be provocative and en-
gaging. The reader is thus encouraged to read Chapter 5 in Strunk and White’s The Elements of
Style. There is a need to employ words of power (i.e., having strong meaning) without becoming
verbose or haughty. One way to accomplish this is to expand our vocabulary, which is perhaps best
accomplished by keeping a diary of words as we read technical papers and books. When we come
upon a forceful, precise, or attractive word, we should take note of it. Knowing that the author may
have misused the word, however, we should always consult a reliable dictionary when recording the
associated definition. A good dictionary can be found online at www.m-w.com. Here, we list a few
words that one can use advantageously in technical writing, which may or may not be used on a
daily basis by the reader.4
4 These definitions are taken largely from the American Heritage Dictionary.
Abate: To reduce in amount, degree, or intensity; lessen.
Adverse: Antagonistic in design or effect; hostile; opposed.
Ancillary: Subordinate.
Assume: To take for granted; suppose.
Attenuate: To make slender, fine, or small.
Augment: To make greater, as in size, extent, or quantity; enlarge.
Causal: Pertaining to or involving a cause.
Caveat: A warning or caution.
Cogent: Forcibly convincing.
Copious: Yielding or containing plenty; affording ample supply.
Corroborate: To strengthen or support; attest the truth or accuracy of.
Cull: To pick out from others; select.
Cursory: Hasty and superficial; not thorough.
Delve: To search deeply and laboriously.
Didactic: Intended to instruct; expository.
Disparate: Completely distinct or different in kind; entirely dissimilar.
Dubious: Fraught with uncertainty or doubt; uncertain.
Egregious: Outstandingly bad; blatant; outrageous.
Eminent: Towering above others; projecting; prominent.
Enigma: An obscure riddle; puzzling, ambiguous, or inexplicable.
Equivocal: Capable of two interpretations; cryptic; evasive; ambiguous.
Erudite: Deeply learned.
Exacerbate: To increase the severity of; aggravate.
Extant: Still in existence; not destroyed, lost, or extinct.
Extenuate: To lessen or attempt to lessen the magnitude or strength of.
Fortuitous: Happening by accident or chance; unplanned.
Fraught: Attended; accompanied.
Glean: To collect bit by bit.
Hypothesize: To assert a hypothesis (i.e., an assertion subject to proof).
Inadvertent: Accidental; unintentional.
Inchoate: In an initial or early stage; just beginning; incipient.
Inexplicable: Not possible to explain.
Inordinate: Exceeding reasonable limits; immoderate; unrestrained.
Integral: Essential for completion; necessary to the whole.
Intrinsic: Pertaining to the essential nature of a thing; inherent.
Lucid: Easily understood; clear.
Manifold: Of many kinds; varied; multiple.
Marked: Having a noticeable character or trait; distinctive; clearly defined.
Myriad: A vast number; a great multitude.
Nadir: The place or time of deepest depression; lowest point.
Nullify: To make ineffective or useless.
Obviate: To prevent or dispose of effectively; to render unnecessary.
Ostensible: Given or appearing as such; seeming; professed.
Ought: Indicates obligation or duty, prudence, or desirability. Use with to.
Paucity: Smallness of number; fewness.
Permeate: To spread or flow throughout; pervade.
Peruse: To read or examine, especially with great care.
Posit: To put forward as a fact or truth; to postulate.
Postulate: Something assumed without proof as being self-evident or generally accepted,
especially when used as a basis for an argument.
Premise: A proposition upon which an argument is based or from which a conclusion is
drawn.
Proliferate: To reproduce or produce new growth rapidly and repeatedly.
Promulgate: To make known by public declaration; announce officially.
Propensity: An innate inclination; tendency; bent.
Purview: The extent or range of function, power, or competence; scope.
Quiescent: Inactive or still; dormant.
Recant: To retract formally a previously held belief or statement.
Recondite: Not easily understood by the average person.
Reiterate: To say over again.
Replete: Plentifully supplied; abounding.
Requisite: Required; absolutely needed; essential.
Retrospect: A review, survey, or contemplation of things in the past.
Salient: Striking; conspicuous.
Spurious: Lacking authenticity or validity; counterfeit; false.
Substantiate: To support with proof or evidence; verify.
Succinct: Clearly expressed in few words; concise; terse.
Sundry: Various; several; miscellaneous.
Surmise: To infer without sufficiently conclusive evidence.
Tacit: Not spoken.
Tantamount: Equivalent in effect or value. Used with to.
Tractable: Easily managed or controlled; governable.
Ubiquitous: Seeming to be everywhere at the same time; omnipresent.
The space below allows you to record additional words, with their definitions, that you would like
to add to your technical vocabulary.
2.9 CLOSURE
Recalling that one of the best ways to improve one’s writing is to read widely, we should not only
read for pure enjoyment or the gaining of new technical information but also read with the
intent of learning how to write well. In other words, take note of the infinitives, the effective use of
punctuation marks, and so forth; record and use particularly forceful words and phrases. Aside from
scientific publications, works of history, philosophy, and theology (one of the four original academic
disciplines, with law, medicine, and natural philosophy) often represent good examples of writing
well.
As a specific example, consider the book On Growth and Form by D’Arcy Thompson (1917).
In the foreword of the 1961 abridged edition, it is noted that P.B. Medawar wrote that Thompson’s
work was “beyond comparison the finest work of literature in all the annals of science that have been
recorded in the English tongue.” What gave rise to such a claim? Thompson was a true scholar, with
expertise in the classics, mathematics, and zoology; moreover, he purposed both to document good
science and to write well. Although we should not expect to achieve such success in writing well, we
should remain committed to producing the best work possible.
• • • •
CHAPTER 3
Scientific Publications
3.1 BASIC CONTENT
There are many different types of publications in science and engineering, including abstracts, con-
ference proceedings, journal articles, books, theses, dissertations, and technical reports. We focus,
however, on that which is generally regarded as most important: the archival journal article. There
are also different types of journal articles, including original articles, technical notes (sometimes
called brief communications), and review articles. We focus on the original article, which is both
most common and most important to the advancement of science and engineering, for it documents
significant, novel findings. Some journals impose stringent guidelines on the organization of such
an article, including particular subheadings, yet considerable flexibility often allows the author(s)
to present the material in the best way possible. For purposes of illustration, however, we follow an
outline recommended by the majority of scientific journals, namely
Abstract
Introduction
Methods (or Materials and Methods)
Results
Discussion
Acknowledgments
References
Indeed, because most papers have the same basic structure, it is expedient to use a custom, generic file
(e.g., called PaperTemplate.doc) to begin writing each paper. This file not only can remind us of the
basic outline, it can ensure the proper formatting (often 1-inch margins, double-spaced, and 12-point
font unless the particular journal states otherwise or provides its own template), including the place-
ment of tables and then figures at the end of the manuscript. Having such an electronic outline in place
can be a brief time saver, but perhaps, more importantly, it serves as a mental aid to begin writing.
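As a minimal sketch of this idea (our own illustration; it writes a plain-text outline rather than the Word file mentioned above, since generating .doc files requires additional tooling), a few lines of Python can create such a starting file with the standard headings:

# Write a bare-bones manuscript outline from which to start each new paper.
sections = [
    "Abstract", "Introduction", "Methods (or Materials and Methods)",
    "Results", "Discussion", "Acknowledgments", "References",
]

with open("PaperTemplate.txt", "w", encoding="utf-8") as template:
    for section in sections:
        template.write(section + "\n\n")

However the template is produced, the point is the same: having the outline already in place lowers the barrier to writing freely.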
Recall from Chapter 2 that there are five basic steps of writing well: formulating a detailed
outline, writing freely, editing critically, reading out loud, and having a colleague critique the final
draft. Moreover, a detailed outline not only includes major headings, as noted above for the original
journal article, it also includes potential subheadings and either bullets that highlight key ideas or in
some cases leading sentences. Although different authors employ different approaches in crafting
the original detailed outline, a good place to start is to call all authors together and to lay out on a
table the primary findings: figures, images, tables, equations, and so forth. These key findings can
then be discussed and ordered logically, which will define the key bullets in the results and serve to
remind us what methods were essential in obtaining the results. Once we have outlined the methods
and results sections, it is then easy to outline the introduction and discussion. We discuss each of
these key sections in detail below. First, however, let us consider a few items.
Submission of a paper for consideration for publication in a technical journal usually requires
a cover letter to the editor, a list of potential reviewers, and a cover page. Let us begin with the cover
page, an important part of the submitted paper.
3.1.1 Cover Page and Letter to Editor
The cover page serves to communicate to the editor and publisher a number of important pieces of
information: the title (and thus subject) of the work, those who performed the work (i.e., the author
list in a specific order) and their professional affiliations, keywords that classify further the area of
study, and finally the full address of the corresponding author. The title is extremely important; it
will determine to a large extent who reads the paper. A good title captures the essence of a paper
without being overly long. Indeed, general rules of thumb are that the title should not exceed 120
characters and it typically should not contain verbs. Consider, for example, well-known titles from
two of the most important and widely cited papers from the 20th century:
On the Electrodynamics of Moving Bodies
Molecular Structure for Nucleic Acids: A Structure for Deoxyribose Nucleic Acid
The first example is from Einstein’s famous paper of 1905 that introduced his special theory of rela-
tivity; the second example is from Watson and Crick’s famous paper of 1953 that introduced their
concept of the double-helix structure of DNA. The American Medical Association Manual of Style
(1998) recommends further that titles should not contain phrases such as “The Role of . . .” or “The
Effects of . . .” or “The Treatment of . . .” and so forth. Although there is no need to be dogmatic
when crafting a title, simple guidelines are useful reminders nonetheless.
Select keywords that are distinct from words used in the title and based on general, but not
generic, aspects of the paper to ensure broader distribution. It is both much easier and more impor-
tant today to identify appropriate keywords. One can log onto a standard computer-based search
engine, such as PubMed, and compare results for different keywords to identify those that highlight
papers most closely related to your work; such words would be good candidates for keywords. Be-
cause most investigators now search for technical papers using computer-based search engines, we
cannot overemphasize the importance of appropriate keywords. In other words, writing well is not
enough — if the work does not reach the intended audience, it will not have an impact. Significant
attention must be given to the title, keywords, and, as noted below, the abstract.
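For those who wish to automate this comparison, a brief Python sketch (assuming the public NCBI E-utilities interface and the requests library; the candidate keywords are hypothetical) can tally PubMed hits for each candidate keyword:

import requests

# Public PubMed search endpoint (NCBI E-utilities).
ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

for keyword in ["mechanotransduction", "aortic stiffness", "vascular smooth muscle"]:
    params = {"db": "pubmed", "term": keyword, "retmode": "json"}
    result = requests.get(ESEARCH_URL, params=params, timeout=30).json()
    count = result["esearchresult"]["count"]
    print(f"{keyword}: {count} PubMed records")

Keywords that return a moderate number of closely related records are often better choices than either generic terms with millions of hits or terms so narrow that they retrieve almost nothing.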
Most journals require a cover letter even if the submission is completed online. This letter
serves as the official “intent to submit” the paper and thereby to agree to all policies and procedures
of review adopted by the selected journal. Among the many points that can be addressed in this let-
ter, it is customary to confirm that the work is original, that the paper is not simultaneously under
consideration for publication elsewhere, and that all authors contributed to the work and agree to
its submission. One may also wish to note why the work will be of interest to the readership of the
journal or to identify potential reviewers who either should or should not be selected and why. Like
the paper itself, however, the cover letter to the editor should be concise. Here, we provide a simple
example; letters will vary depending on individual circumstances.
Date
Dr. J. Smith
Editor, Journal Name
Address
Dear Dr. Smith:
Enclosed please find the manuscript entitled, “Title,” which is submitted for consideration
for publication in Journal Name. This paper represents original research that has not been, nor
will it be, considered for publication elsewhere until a decision is reached by you or your staff.
All authors contributed to the work and its preparation and agree to its submission.
With very best wishes, I am
Sincerely,
Name
Title
In some cases, the letter to the editor should also contain statements that, if applicable, all re-
search involving human subjects or animals conformed to accepted standards and was approved by
the appropriate institutional committee. Similarly, if applicable, this letter can communicate that
permissions have been obtained from the appropriate parties to republish previously published work
and it can offer suggestions of possible reviewers. In all cases, however, it is important, indeed often
required, to include a statement that the paper has been submitted to only one journal for evalu-
ation. That is, just as with submissions of grant applications to most agencies, there is a standing
agreement among scientists and engineers that it is improper to submit the same work for simul-
taneous evaluation by two or more journals because of the significant effort required of others to
provide proper and timely reviews.
Let us now turn our attention to the technical sections of a journal article. Rather than dis-
cussing them in the order in which they appear in a paper, however, we discuss them in the usual
order of composition. Hence, we end with a discussion of the abstract rather than beginning with
such a discussion.
3.1.2 Results
The section on results is the heart of the technical paper; it reports the primary findings, which
often represent the most important information contained in the paper. The results should be easy
to write, thus many authors prefer to write this section first. Indeed, in cases of multiple authors
working on a single document, the first author often drafts the results and methods first, then the
senior author drafts the discussion and introduction. All authors then revise the completed first
draft. Regardless of approach, one of the best ways to write the section on results is to lay out all of
the figures, tables, equations, or other major findings that you may include, then to prioritize and
order them in the most logical fashion. It is important to emphasize in this regard that we need not
order the results chronologically; in some cases, authors order results by importance. Once done, it
is then easy to write freely. This approach is particularly effective when a paper is coauthored by two
or more investigators, for laying the results out on a table facilitates discussion of the relative merits
of each finding. Note, too, that although some journals require subheadings within the results, they
are often omitted, and the lead sentence of each paragraph serves to introduce the different key
findings. Indeed, some recommend that the lead sentence of each paragraph in the results should
state the most notable finding in that paragraph.
One of the most frequently asked, and often most difficult to answer, questions is: How much
interpretation of the findings should be in the results versus discussion? The reason that this is dif-
ficult to address is that it depends in part on the style of the author and recommendations by the
specific journal. In general, however, most technical writers agree that the results section should be
objective; it should focus solely on presenting the findings. Hence, although it is common to point
out within results any interesting or important features within specific figures, images, equations, or
tables, it is best to reserve for the discussion any interpretation of the significance of the finding as
well as any comparison to findings by others.
Another question that arises often is how best to refer to a figure or table. For example, should
we write
A and B were found to be related linearly (Figure 1),
or is it better to write
Figure 1 reveals a linear relationship between A and B.
In other words, is it best to state the key finding and refer parenthetically to the associated figure,
image, equation, or table, or is it best to cite directly the particular evidence that reveals (shows, il-
lustrates, or so forth) the key finding? Notwithstanding some exceptions (e.g., specific instructions
to authors for some journals), the answer to this question is often that it is a matter of personal style.
Note from our illustrative example, however, that the first approach involves passive voice, whereas
the second approach involves active voice but further requires the figure, image, equation, or table
to “do something” — reveal, show, illustrate, confirm, or so forth. Some editors suggest that inani-
mate devices such as figures cannot “reveal” or “show” such things, thus they prefer parenthetical
references over the direct approach. Conversely, others prefer the crisp, active voice in the second
example, which helps to minimize the use of passive voice as desired in general. We encourage the
reader to consider these and similar options carefully and to adopt a consistent, but not rigid, per-
sonal style. Such a decision should affect other aspects of writing a technical paper, for example, the
introduction wherein one often reads “This paper presents.” Again, some would argue that only the
investigators can present, not the paper, yet many prefer this crisp, active style of introduction.
3.1.3 Methods (or Materials and Methods)
The methods section is usually the easiest to write. Indeed, if one has trouble getting started in the
“write freely” phase, it is often best to go to the methods. Simply put, the methods section is where
we document how we accomplished the work. In principle, this section should contain enough
detail to allow the reader to repeat the study in the same way — this alone allows one to test and
confirm the basic tenet of science, reproducibility. Given the increasingly sophisticated methods
and procedures used in science and engineering, however, writing an effective section on methods
demands significant planning. Moreover, given that one can use many commercially available kits
or software packages, there is a need for balance between detail and proper citation.
Two effective devices in writing the methods section are to use ample subheadings and paral-
lelism. Here we emphasize that we should not write scientific papers to be read; we should write
them to be studied. Subheadings thus aid the student in organizing information or locating quickly
particular aspects of the methods when needed. Typical subheadings in a paper on cell biology could
be:
Immunohistochemistry
In Situ Hybridization
Statistics
Subheadings in a paper on mathematical modeling could include:
Theoretical Framework
Constitutive Models
Numerical Methods
In either case, subheadings should proceed logically and thereby reveal to the reader the thought
process followed by the investigators. The format for the subheadings, for example, numbering or
italicizing, is dictated by the particular journal and thus is provided in the specific instructions to
authors.
Because many scientific findings result from or imply a mathematical statement, it is impor-
tant to address the treatment of equations within a paper, often within methods or results. Simply
put, write an equation as a normal part of a sentence. For example, Newton’s famous second law of
motion is usually written simply as f = ma, where f denotes force, m denotes mass, and a denotes ac-
celeration. Consistent with the presentation here, mathematical symbols usually appear in a distinct
font, which may include italics (e.g., scalars) or boldface (e.g., vectors). In many cases, however,
either for emphasis or simply because of complexity, equations are set off as a separate line within
the text. In this case, the equation is still part of the sentence and thus should include commas and
periods as appropriate. For example, we could write the following. It is important to remember that
Newton’s second law of motion, namely
f = ma,
holds only with respect to an inertial reference frame. Similarly, we could write the following. The
governing equation in this case is Newton’s second law of motion, which can be written as
f = ma.
An easy way to remember that equations are part of the normal grammatical structure is to recall
step 4 from Chapter 2 on how to write well — reading out loud forces us to include equations as
natural parts of the text. A final reminder is that most journals do not allow a nomenclature for
symbols used within equations. Hence, one should always state the meaning of a symbol just before
or just after the equation in which it is introduced, just as we did above for Newton’s second law
(e.g., noting that f denotes force).
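For manuscripts prepared in LaTeX, a minimal sketch of this convention (our own illustration, not a requirement of any particular journal) shows how the comma remains part of the displayed equation:

% The equation is part of the sentence, so the comma appears inside it.
It is important to remember that Newton's second law of motion, namely
\begin{equation}
  \mathbf{f} = m \mathbf{a},
\end{equation}
holds only with respect to an inertial reference frame, where $\mathbf{f}$
denotes force, $m$ denotes mass, and $\mathbf{a}$ denotes acceleration.

Reading this source out loud, equation included, confirms that the sentence remains grammatically complete.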
For those symbols that are universally accepted or familiar to readers of a particular journal,
there is no need to define them explicitly. Examples of well-recognized symbols are +, −, =, and also
those for summation, derivatives, integrals, and so forth. Given the increasing complexity of sci-
ence and engineering, commonly used symbols may represent multiple quantities within the same
paper, hence there is a need for care. For example, R is often used for radius, but it is also used for
the universal gas constant [R = 8.314 J/(g mol K)]. The most important suggestion in this regard is
to be clear and self-consistent.
Finally, a frequently asked question relates to the level of detail needed in cases where one re-
ports results obtained using methods reported previously in other journal articles. Although there is
no rigid answer, the best practice is to document the essential, new methods and to refer the reader
to previously established methods, where appropriate. For example, if your group established the
previous methods, simply state that the details can be found in a previous publication, then briefly
outline the methods; if others established the previous methods, cite the key paper(s) and provide a
brief, but slightly more detailed summary of the methods. Conversely, one must provide significant
detail when reporting a new method or procedure. Such detail can include specific instruments and
vendors, chemicals and their sources and concentrations, specific versions of software, and so forth.
In cases of human or animal research, one must first note that the appropriate institutional oversight
committee (e.g., the Institutional Review Board, or IRB, for human research and the Institutional
Animal Care and Use Committee, or IACUC, for animal research) approved the work.
3.1.4 Discussion and Conclusion
Most journals recommend against using a separate section for conclusions, which would typically be
brief; hence, the discussion often serves a dual role. One should address at least three specific points
in the discussion:
Interpret the specific results and emphasize the significance.
Compare the current with past results.
Identify limitations and potential needs for further research.
Whereas the introduction normally addresses the significance of the overall research topic or area,
the discussion should address the potential significance of the particular findings. For example,
an introduction may note the importance of cardiovascular consequences of hypertension, which
affects more than 50 million Americans, but the discussion may note the significance of the new
finding that blocking a particular receptor in vascular smooth muscle cells reduces hypertension
in an experimental cohort. Like the introduction, therefore, the discussion should cite appropriate
references, albeit often with greater discussion of the relevant details. It is important in this regard
to cite only the most relevant literature. In other words, the goal is to place the current findings
within the most appropriate context, not to provide an exhaustive collection of all previous work
that is remotely related to the overall topic or specific findings. Because of the explosion of scientific
and engineering knowledge, citing good review papers can often serve to cover general information
without concern that some important papers may be missed. Related to issues of ethics, of course,
one should not purposely fail to cite a relevant paper for personal gain.
A frequent question with regard to the discussion is how much information should be in-
cluded on the inherent limitations or future needs. In some ways, this addresses both the style and
the ethics of written communication. It is both prudent and useful to others to point out many of
the limitations of the study, with justifications, for this will both put the current study in perspective
and guide future work. Nevertheless, one must be careful not to focus on the negatives in a way that
distracts from the significant accomplishments or advancements of the study. The key, therefore,
is to maintain an appropriate and candid balance. Similarly, it is useful to point the reader toward
important, useful directions for future research. Yet, many investigators do this in a guarded fashion
to allow themselves the opportunity to exploit their present findings and achieve further advances.
The key point, therefore, is to maintain a proper balance — provide guidance so that others can
advance the field while protecting intellectual property.
In summary, the primary goals of the discussion section of a paper are to reemphasize the
significance or innovation of the study, interpret and discuss implications of the specific findings,
compare the current findings with similar work by others, discuss limitations of the methods or
findings, and summarize the major finding(s) while giving direction for future work.
3.1.5 Introduction
As with any introduction, the primary goal of this section is to capture the reader’s interest and “set
the stage.” Toward this end, it is generally recommended that the introduction answer three basic
questions:
Why is the general topic or particular study important?
What is currently known and what remains unknown?
What does the current paper address or accomplish?
One should be able to answer these questions easily after having written the results. Consistent
with answering these questions, the typical introduction consists of three to four paragraphs even
though there is considerable variety in the number and especially the lengths of these paragraphs.
Experienced writers may write the introduction first, but most writers write the introduction after
completing the methods and results and sometimes even the discussion. Regardless, it is important
to provide sufficient references in the introduction to justify both the need for the study and the
general approach adopted.
An important issue with regard to writing a journal article, including the introduction, is the
appropriate use of abbreviations. Good rules of thumb are to use only commonly known abbrevia-
tions (e.g., DNA), to use them only if the word or phrase is repeated three or more times throughout
the document, and to introduce them at the first occurrence in the body of the paper (cf. Section
2.6.2). Some journals require the author to collect abbreviations together, for example, in a footnote
on the first page of the paper or in a table. Regardless, it is best to use abbreviations sparingly.
3.1.6 Abstract
The technical abstract has always served an important role — it provides a brief summary of a paper
and thereby helps a reader decide whether to read or study the paper. With the advent of computer-
based search engines, however, the abstract has become a particularly important means of captur-
ing the attention of the intended audience. Hence, albeit a short section, often not more than 250
words, the abstract deserves great attention.
Most writers compose the abstract last. It must reflect briefly the overall paper, including
the basic motivation, significance, general approach, and key discoveries or final solution; it must
be written clearly, without jargon, acronyms, or uncommon abbreviations, and must stand alone,
generally without references. As with the introduction, the first sentence of the abstract should be
engaging. In contrast with the introduction, the last sentence of the abstract generally summarizes
the most important finding or points to pressing needs for future research. Whereas most journals
allow the authors to write the abstract as they see fit, a few journals require the authors to follow a
uniform outline, including specific subheadings. As in all cases of technical writing, it is thus im-
portant to read the “instructions to authors” for the particular journal.
3.1.7 Acknowledgments
It is customary, indeed required in most circumstances, to acknowledge the financial support that
enabled the work. As an example, one might read:
This research was supported, in part, by grants from the National Institutes of Health (R01
HL-10000 and R21 HL-01000).
Because of the increasing move in science and engineering toward translational research, investiga-
tors cite industrial support more frequently. In such cases, the journal may require the authors to
disclose conflicts of interest arising from potential financial gain tied to the results. If there are no
disclosures, this should be noted.
In addition to financial support, it is often appropriate to acknowledge technical support as
well as intellectual or editorial contributions by individuals who were not involved extensively enough
to merit coauthorship but who contributed nonetheless. Such acknowledgment should be merited,
however, and those noted must be informed. Indeed, some journals now require individuals ac-
knowledged in this section to stipulate in writing that they are both aware of and deserving of such
recognition.
3.1.8 Appendices
Appendices are not found in all scientific papers; indeed, they appear in the minority of papers.
Nevertheless, when used well, appendices can serve a very important role. In general, appendices
contain important information that either does not fit well within the flow of the body of the pa-
per or is simply best stated separately for those few readers who will be interested in such details.
A good example of appendix material would be the step-by-step derivation of key equations, the
final result of which can be found in the body of the paper. In this way, the author fulfills his/her
responsibility of providing methods that are sufficiently detailed to enable the reader to reproduce a
result while not distracting the reader from the key points presented in methods. Similarly, detailed
“recipes” for molecular or cellular assays may fit well in an appendix.
3.1.9 References
It is interesting that we are often taught “proper methods of citation” in courses and books despite
different journals and publishers requiring very different citation formats. In some cases, references
must be arranged according to the order of appearance within the work and numbered sequentially
beginning at 1; in other cases, references must be arranged alphabetically by the last name of the first
author, then numbered beginning at 1; in yet other cases, references must be arranged alphabetically
and not numbered. This basic scheme dictates how to cite any reference within the text — by num-
ber or by author. Similarly, the format for the references that details the authors, year of publication,
title, volume, and inclusive pages also varies from journal to journal. The best advice, therefore, is to
follow the specific instructions to authors. As specific examples, however, consider multiple ways to
cite the same article within the text:
Watson and Crick (1953) proposed the double helix . . .
Watson and Crick [20] proposed the double helix . . .
Watson and Crick20 proposed the double helix . . .
or similarly,
The double-helix structure of DNA was . . .(Watson and Crick, 1953).
The double-helix structure of DNA was . . .[20].
The double-helix structure of DNA was . . . .20
As seen, the third approach in both cases results in some savings with regard to printing,
which is important to some publishers given that most journal articles cite ~35 papers and most re-
view articles cite over 100 papers. When these simple savings are multiplied 30-fold or more, one
can appreciate the potential savings in page costs. It is noted, however, that numerical citation has
the disadvantage that the reader must constantly refer to the reference list to determine who was
responsible for the cited finding. Informed readers often know who has done what in a field, which
is to say who has produced reliable or important findings. Citation by author names (e.g., Smith et
al., 1999) thus has the advantage of increasing the flow of the paper. Nevertheless, one must follow
the format prescribed by the journals and publishers. Usually the only case wherein one can pick a
format is while writing proposals, which is discussed in Chapter 4.
Citation is similar to that discussed above when there is but a single author. For example, we
might find the following: Einstein (1905) proposed . . ., Einstein [15] proposed . . ., or Einstein15
proposed . . ., and similarly we might find . . .the special theory of relativity (Einstein, 1905), . . .the
special theory of relativity [15], or . . .the special theory of relativity15. In the case of three or more
authors, however, the format differs slightly. Recall from Chapter 2 that “and colleagues” is ab-
breviated in the Latin as “et al.” Hence, we might find the following: Smith et al. (2008) proposed
. . .or . . .(Smith et al., 2008). Whereas some journals use (Smith et al. 2008), that is, they omit the
comma after the et al., it is a mistake to add a comma after the first author’s last name (i.e., Smith,
et al., 2008 is not an accepted format). Again, the best advice is to refer to the specific instructions to
authors for the journal of interest.
Finally, note that the citation within the reference list can also appear in various forms. For
example, we can cite the same paper as:
Watson JD, Crick FHC (1953) Molecular structure of nucleic acids: A structure for deoxyri-
bose nucleic acid. Nature 171: 737–738.
Watson JD, Crick FHC. Molecular structure of nucleic acids: A structure for deoxyribose
nucleic acid. Nature. 1953;171:737–738.
Watson, J.D., and Crick, F.H.C., 1953, “Molecular Structure of Nucleic Acids: A Structure
for Deoxyribose Nucleic Acid,” Nature, 171 pp. 737–738.
Other formats exist, which is why one must consult the instructions to authors for each
journal.
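To illustrate how such format differences can be handled mechanically, the following minimal Python sketch (not part of the original text; the field names and the two style functions are illustrative assumptions) renders a single reference record in two of the styles shown above. In practice, reference-management software or journal-specific style files perform this task.

# Illustrative only: one reference record rendered in two hypothetical journal styles.
ref = {
    "authors": ["Watson JD", "Crick FHC"],
    "year": 1953,
    "title": "Molecular structure of nucleic acids: A structure for deoxyribose nucleic acid",
    "journal": "Nature",
    "volume": 171,
    "pages": "737-738",
}

def name_year_style(r):
    # e.g., Watson JD, Crick FHC (1953) Title. Journal 171: 737-738.
    return (f"{', '.join(r['authors'])} ({r['year']}) {r['title']}. "
            f"{r['journal']} {r['volume']}: {r['pages']}.")

def vancouver_like_style(r):
    # e.g., Watson JD, Crick FHC. Title. Journal. 1953;171:737-738.
    return (f"{', '.join(r['authors'])}. {r['title']}. "
            f"{r['journal']}. {r['year']};{r['volume']}:{r['pages']}.")

print(name_year_style(ref))
print(vancouver_like_style(ref))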
A final, and important, reminder is to cite only those papers that you have actually read. Per-
haps surprisingly, many investigators will cite papers that someone else has cited simply because it
is easier. Such a practice can be dangerous. In science and engineering, one should always check and
double-check everything, including interpretations of other work used in citations.
3.1.10 Figures and Tables
It has been said that a picture is worth a thousand words. Actually, a well-prepared and appropriately
selected picture (e.g., figure or image) can be worth a thousand words if done well. As an example,
consider the following figure, a standard x–y scatter plot, which is the most common type of figure
found in a technical paper. Although this example contains the basic ingredients of an effective fig-
ure (e.g., clear data points and labeled axes, with the unit of measurement denoted parenthetically
on the x-axis), we can improve it considerably with little effort.
Compare the following version of this figure (reprinted with permission, CISM) to the pre-
vious one. It is easy to see that a reduced number of tick marks along each axis as well as larger
numbers and lettering improve the readability considerably. Indeed, one of the most important
considerations is that typesetters will reduce the size of many submitted figures before placement
within the final version of the paper. This is particularly important when placing a figure within a
single column in a dual-column layout, which most technical journals use.
Albeit a small point, note that the solid curve in these figures represents a best-fit to data
obtained using a formal regression method. Whereas solid lines are appropriate for showing such
“theoretical” or “model” fits, it is best to use lightly dashed lines when the goal is to connect the data
points for emphasis. It should be remembered, however, that simply connecting the dots may be
misleading if data sampling missed key points.
Finally, realizing that many readers go to the results after having only read the title and per-
haps the abstract, it is important to write complete legends so that the reader can understand the
importance of the figure easily.
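Although the original figures are not reproduced here, the same guidance can be illustrated with a short plotting script. The following Python/matplotlib sketch is an assumption-laden illustration (the data, dimensions, and file name are invented for demonstration), not the figure from the text: it uses clear data points, labeled axes with units, a reduced number of tick marks, larger lettering that survives reduction to a single column, and a solid line only for a formal regression fit.

# Minimal sketch of the figure guidance above; data and sizes are illustrative only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
strain = np.linspace(0.0, 0.10, 11)                          # hypothetical measured strains
stress = 120.0 * strain + rng.normal(0.0, 0.4, strain.size)  # hypothetical stresses (kPa)

coeffs = np.polyfit(strain, stress, 1)                       # formal linear regression
strain_fit = np.linspace(0.0, 0.10, 100)

fig, ax = plt.subplots(figsize=(3.3, 2.6))                   # sized near a single journal column
ax.plot(strain, stress, "o", color="k", label="data")        # clear, distinct data points
ax.plot(strain_fit, np.polyval(coeffs, strain_fit), "-", color="k", label="linear fit")
ax.set_xlabel("Strain (dimensionless)", fontsize=12)         # labeled axes, units noted
ax.set_ylabel("Stress (kPa)", fontsize=12)
ax.set_xticks([0.0, 0.05, 0.10])                             # fewer tick marks improve readability
ax.set_yticks([0, 5, 10, 15])
ax.tick_params(labelsize=12)                                 # larger numbers survive reduction
ax.legend(frameon=False, fontsize=10)
fig.tight_layout()
fig.savefig("figure1.png", dpi=600)                          # high resolution for submission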
In summary, the original journal article should be both well written and well illustrated; it
should address the following primary questions:
Introduction: What was done and why?
Methods: How was the work accomplished?
Results: What was found?
Discussion: Why are the results important, how do they compare to previous work, and what
remains to be investigated?
Seeing one’s name in print for the first time on an archival paper can bring a sense of excite-
ment and pride. Seeing one’s name on a paper that contains errors or fundamental flaws can bring
a sense of regret. There is, therefore, a need to give such work our most careful attention from start
to finish.
Exercise 3.1: Interview someone who serves on an editorial board of a technical journal and ask
how reviewers are selected, what are good reasons for excluding reviewers in particular cases, and
what is done when different reviewers have diametrically opposing views. Write a one-page sum-
mary of the results of the interview.
Exercise 3.2: All journals limit the number of words or pages allowed for papers within par-
ticular categories. For example, it is common for an original article to be limited to 6000 words
inclusive. Noting that an abstract typically is ~250 words, each one-column table and single-panel
figure is equivalent to ~250 words, and a standard full-citation reference is typically equivalent to
20 to 30 words (some up to 40 words), estimate the number of words consumed by an abstract, 6
figures, 1 table, and 35 references, which is typical for a standard original paper. Next, calculate the
number of words available and estimate reasonable lengths for the remaining sections: introduction,
methods, results, and discussion. A worked sketch of this arithmetic follows the exercise.
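As a hedge against arithmetic slips, the estimate in Exercise 3.2 can be tallied with a few lines of Python. This is only a sketch of the bookkeeping; the word limit and per-item equivalents below are the approximations stated in the exercise, not fixed journal rules.

# Word-budget sketch for Exercise 3.2; all equivalents are the approximations stated above.
WORD_LIMIT = 6000          # assumed overall limit, inclusive
ABSTRACT = 250             # ~250 words
PER_FIGURE = 250           # each single-panel figure ~250 words
PER_TABLE = 250            # each one-column table ~250 words
PER_REFERENCE = 25         # roughly 20 to 30 words per full citation

n_figures, n_tables, n_references = 6, 1, 35

committed = (ABSTRACT
             + n_figures * PER_FIGURE
             + n_tables * PER_TABLE
             + n_references * PER_REFERENCE)
remaining = WORD_LIMIT - committed

print(f"Words committed to abstract, figures, tables, and references: {committed}")
print(f"Words remaining for introduction, methods, results, and discussion: {remaining}")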
3.2 Publishing an Archival Journal Paper
3.2.1 Origin
According to Boorstin (1983, pp. 390–394),
The printed scientific ‘paper’ or ‘article’, which was simply a later version of the letter,
would be the typical format in which modern science was accumulated and communi-
cated. . . .The letter was an ideal vehicle for the increasing numbers of men dispersed
over Europe who no longer expected to storm the citadel of truth, but hoped to advance
knowledge piece by piece. . . . A letter had obvious advantages over a book. While works
of science were often large tomes easy to stop for censorship, the novel observations in a
letter could slip unnoticed or be delivered with the ‘ordinary post’.
In contrast to early European investigators such as Galileo (1564–1642), few modern investigators
need be concerned about potential censorship of their work. Rather, the primary concern today is to
ensure that a paper receives broad distribution to the intended audience. Toward this end, electronic
publishing and the Internet have revolutionized the availability of scientific papers, yet the methods
of composition, presentation, and submission have not changed.
3.2.2 Composition and Authorship
It is difficult, if not impossible, to write by committee. Indeed, one of the most important documents in
American history, the Declaration of Independence, was assigned to a committee of five for com-
position but was drafted in seclusion by a single author, Thomas Jefferson. As it should have done,
the committee evaluated and revised the final draft penned by Jefferson before forwarding the final
version for consideration by the Continental Congress of 1776.
Although there has been a significant increase in the number of authors on scientific papers,
particularly in biomedical science and engineering, the primary responsibility of writing a paper
must similarly fall to one author or in some cases two authors (e.g., the first and senior authors). As
noted above, however, the best way to ensure that the first draft represents the ideas and expecta-
tions of all authors is to meet together to define the initial outline and to discuss what findings to
report in the results. We address the issue of joint authorship further in Chapter 6, hence we merely
note here that it is essential that all authors agree on the contents and presentation of a paper before
submission for consideration of publication.
3.2.3 Submission and Review
As noted earlier, the essential first step when preparing a paper for consideration for publication
is to read the instructions to authors for the intended journal. Only in this way will one be able to
fulfill the requirements of each journal. In general, however, the two primary items needed for sub-
mission are the aforementioned cover letter to the editor (cf. Section 3.1) and the complete paper,
including the cover page, body of the paper, tables, and figures.
Exercise 3.3: A few journals allow authors to submit a paper for consideration directly to a
member of the editorial board or the sponsoring society. In these cases, that member can assume the
sole responsibility for review and may then “communicate” the paper to the editor for publication.
Identify two different journals that allow such a procedure and write a two-page summary discuss-
ing the history of this approach and the associated advantages and disadvantages.
An editor or associate editor will usually solicit two or three experts to provide a recommen-
dation on the potential suitability of a paper submitted for publication. These reviewers are asked to
provide objective assessments and thus to decline to review a paper if there is either a real or a possi-
bly perceived conflict of interest. The period allowed for review varies considerably among different
journals, with some biological and clinical journals allowing only 2 weeks and some mathematical
journals allowing up to 3 months for review. Differences also exist between journals with regard to
the possible categories of recommendations available to the reviewer, but general categories are:
Accept (i.e., accept as submitted)
Accept pending minor revision (not requiring rereview)
Major revision (with required rereview for further consideration)
Reject (i.e., not suitable for publication)
Inappropriate for this journal.
In cases where the topic, type, or length of a paper is deemed to be inappropriate for a journal,
an editor can communicate this to the author(s) without a formal review, even though reviewers are
also allowed to make such a recommendation independently. A recommendation to reject a paper
may be rendered for any of a number of reasons: the paper does not contain original or novel find-
ings, it contains serious flaws in design or analysis of the data, it does not address a relevant prob-
lem, results contradict previously published findings without addressing adequately the associated
reasons, there is insufficient new information (i.e., it is only an incremental advance at best), and so
forth. In the case of a recommendation to reject, the editor should ask the reviewer(s) to state this
case diplomatically, although this does not always happen. It is common for a journal to reject 50%
or more of all submissions.
The recommendation to request major revisions generally implies that there is a need for ad-
ditional experiments, analysis of data, or computations. Such a recommendation can also reflect the
need to correct a major, but not fatal, flaw; to reduce the length of the paper if it is overly long; or to
improve significantly the presentation, including improved figures, tables, and writing.
In contrast, minor revisions (not requiring further review) can reflect a need to expand the
methods or discussion or conversely to eliminate information that is available in other publications.
There may also be a need to add some key references or reduce the overall number of references.
Minor improvements in writing or the need to eliminate results that are duplicated in tables and
figures may also lead to a recommendation of the need for minor revisions.
Finally, albeit very uncommon, the recommendation to accept (as is) suggests that the study
is important, novel, and well presented. All authors should strive to submit such work, but should
be prepared to revise a paper as needed.
As one might expect, unanimous recommendations by two to three reviewers are uncommon,
hence the editor or associate editor usually must make a decision based on the information received.
For example, if two reviewers recommend major revisions and a third reviewer recommends rejec-
tion, the editor is justified in recommending either major revision or rejection. In cases where the
editor rejects a paper, the authors can try to rebut the reviews and request either the privilege of sub-
mitting a revised paper or that additional reviewers be asked to review the paper further. In most cases,
however, editors tend to stand by the initial, carefully weighed decision, and it is best to consider
other options. For example, some authors will simply resubmit the same paper to another journal for
consideration for publication; in such cases, they are usually not required to reveal that the paper has
been rejected previously, which enables the second assessment to be objective. Authors must realize,
however, that reviewers are often picked carefully for their expertise, and it is possible that different
editors from different journals will select the same reviewers. For this reason, and simply because
one should always use any opportunity to improve a paper based on any feedback obtained, it is best
to revise a paper that has been rejected before submitting it to another journal.
Finally, it is useful to know how editors select reviewers and what instructions are given to
the reviewers. Ideally, the editor, associate editors, or editorial consultants are familiar enough with
the topic of the submitted paper that they know the experts personally or at least know of them. In
cases wherein such experts either decline to review a paper (because of conflicts or simply because
they are not able to provide a timely review) or cannot be identified easily, editors will often peruse
the references cited in the paper. In other words, frequently cited authors are good potential review-
ers because their work is related closely to that which was submitted and has passed previous peer
review. Alternatively, editors may also use computerized search engines to identify potential review-
ers based on the publication of similar work in reputable journals.
3.2.4 Revision
As noted earlier, only a small percentage of technical papers are accepted upon the first submission,
hence authors should expect to revise a submitted paper. Indeed, in most cases, revision along the
lines suggested by the reviewers improves the paper significantly, thus revision should be seen as an
opportunity, not a failure. Nevertheless, it is human nature to be disappointed or, in some cases, upset
by a negative review. Toward this end, we recommend two things. First, read the review carefully but
do not take any action until at least a few days later. In other words, neither a rash response to the
editor nor a hasty attempt to revise a paper is likely to be productive. Second, avoid the two
most natural responses to a negative review — to adopt all of the reviewer's recommendations be-
cause “they must be the expert” or to ignore all of the comments because you “know better.” Rather,
it is best to take comments and concerns by reviewers at face value. For example, if a reviewer states
that a particular section is hard to understand even though you think it is clear, chances are that
at least some other readers will also find the section hard to understand. The best response in this
case, therefore, is to take advantage of the opportunity to improve its clarity and to ask a colleague
to assess the changes. In other words, because it is your name on the paper, take every opportunity
to make the paper the best it can be.
When a journal allows or requests a revision, the author(s) usually must submit the revision
within a certain period (often 3 to 6 months, but highly variable) and provide evidence of the revi-
sions. In some cases, the author(s) can meet this requirement simply by summarizing the revisions
on a separate page or in a letter to the editor. In other cases, the journal may either require separate
detailed responses to each of the reviewers' concerns or marks within the submitted manuscript
that identify the revisions. The latter requirement is now met easily using features such as “track
changes” in MS Word.
Once the authors decide to revise a paper, they should ask how to do this most efficiently. For
major revisions, it is best to identify the requisite experiments, calculations, analyses, and so forth
and to generate the additional results. Next, one can follow the same approach used in writing an
initial draft — lay out all results, new and old, on a table and determine which to include and in
which order. Once done, revise the results, methods, discussion, introduction, and abstract accord-
ingly and document the revisions appropriately. For minor revisions, it can be efficient to begin by
writing the required “response to reviewers” or “summary of revisions.” After knowing how one
wants to address the concerns, the paper can then be revised accordingly.
Although policies differ among journals, it is uncommon to allow authors to submit more
than one revision because of the time invested by both the editors and the reviewers. In other
words, one should work very hard to satisfy reviewers' concerns and to make a revised manuscript
acceptable.
3.2.5 Typesetting, Galley Proofs, and Proofreader Marks
Years ago, a typewritten copy of a manuscript was copyedited, then typeset from scratch. Today,
nearly all journals require electronic versions of accepted papers, which are then copyedited and
formatted for publication. Briefly, copyediting is an important step wherein a manuscript is checked
carefully by the editorial staff of the journal, or a third-party associate, for format, style, spelling,
complete and consistent citations, and so forth. In some but not all cases, the copyedited version is
sent to the authors along with the galley proof to show the changes that were made.
A galley proof is the final draft of the paper formatted exactly as it will appear in print. The
authors are asked to check the galley proof carefully to ensure accuracy, yet it is expected that only
minor changes or corrections will be made at this stage of publication. In the case of major changes
due to author errors, the publisher may charge extra to make the requested changes. For this rea-
son, authors should ensure that the final version of a manuscript is correct as submitted. Although
the advent of electronic publishing decreased tremendously the number of necessary corrections,
authors should be diligent to check the galley proof carefully, including the layout of tables, figures,
and equations. If errors are discovered after approval of the galley proof and publication of the paper,
the authors’ only recourse is to publish an erratum (or errata, which is the plural form of erratum and
used in the sense of correcting simultaneously multiple errors).
Because of the importance of the galley-proof step, publishers developed a nearly universal set
of symbols and directives to communicate changes that were needed in the paper. Again, however,
because of the increasing capabilities of word processors, it is now common for such corrections to
be made electronically — for example, using the “comment” function in PDFs or the track-changes
function in word files. If needed, however, one can find proofreader marks in standard dictionaries
(e.g., American Heritage Dictionary) or online.
Exercise 3.4: Find a 10- to 20-page paper that you have written previously and scan it for minor
errors. Use standard proofreader marks to note the appropriate changes just as you would do for a
galley proof.
3.2.6 Copyright, Permissions, and Page Charges
Copyright is a legal procedure that grants exclusive rights to the production, publication, sale, or
distribution of a work by the owner of the copyright. Because this deals with the ownership of ideas,
it is addressed in Chapter 8. Here, therefore, we simply note that upon acceptance of a paper for
publication, the publisher will usually request that the authors transfer copyright to the publisher.
Although copyright agreements tend to be standard for the publishing of scientific papers, one
should read such agreements carefully before signing. If there are questions regarding anything
within the agreement, one should either consult a more experienced author or contact the copy-
rights division of the publisher.
Transfer of copyright requires that the material to be transferred is original and, when ap-
propriate, that special permissions have been obtained to republish any previously published ma-
terial. In the latter case, the most common situation is the desire to republish a figure or image
from another work. One may obtain permissions to do so by writing the copyrights division of the
publisher that holds the copyright and requesting permission to republish the work in a specific way.
In most cases, a publisher will grant such permission provided that a simple statement is associated
with the republished material [e.g., “From Smith (2001), with permission from Publisher Name.”]. In
some cases, however, such permission is granted only after receipt of a fee, which could be hundreds
of dollars for a single figure. In cases of financial constraints, it is best to contact the publisher early
to identify potential fees. Although one only needs to contact the holder of the copyright, which is
often the publisher, not the original author(s), requesting permission from the author, if he/she can be
located easily, is a good gesture.
In some cases, one step remains before the publication of your paper — payment of “page
charges” or fees to cover the publication of color figures. In many cases, journals charge standard
fees, according to the length of a paper, to offset portions of the cost of publication. Such page
charges are mandatory in some cases but voluntary in other cases. The rationale behind voluntary
page charges is that many agencies that financially support research also desire that the associated
findings be published, and consequently, they provide researchers with funds to support publication.
Payment of page charges often entitles the author(s) to free hardcopy reprints or PDF versions of
the paper. Independent of page charges, some journals assess mandatory fees for the publication of
color figures in print (but not electronic) versions. Because fees for color figures can be hundreds to
thousands of dollars, it is prudent to minimize the use of color figures or images to those that are
best understood in color. With the continued growth of online publishing, however, one can also
consider using color figures in the online version that are equally clear if printed in black and white
by the reader or by the publisher of the print version. Regardless, it is also good to determine before
submission if such fees will be charged. Like all submitted figures and images, color versions must
have the required resolution and must be submitted in the proper file format. Again, the author is
referred to the instructions for authors for each journal, for requirements vary from journal to journal.
Exercise 3.5: Read the article “Structural Outline of an Archival Paper for the Journal of Biome-
chanics” [Brand RA, Huiskes R (2001) J Biomech 34: 1371–1374]. Construct a two-column table
to record hints therein that reinforce or contradict that which was presented in this section (3.2).
Provide a summary, not to exceed one page, that articulates your preferences in cases where there is
disagreement.
3.3 Thesis or Dissertation
Most universities require the completion of a thesis as part of the requirements for a master of
science (M.S.) degree or a dissertation for the completion of a doctor of philosophy (Ph.D.) or a
doctor of science (D.Sc.) degree. It is common for an M.S. thesis to range from 50 to 150 pages
and for a Ph.D. or D.Sc. dissertation to range from 100 to 250 pages (each double spaced with
ample margins). Although it may seem a formidable task to write such a long document, it is actu-
ally not difficult if one formulates a good outline and simply writes chapter by chapter. One should
obtain specific guidelines for formatting such documents from the local Office of Graduate Studies,
however, for requirements differ from institution to institution. Here we note briefly the two most
common styles for organizing a thesis or dissertation.
First, a thesis or dissertation can be organized along traditional lines and thus consist of the
following:
Abstract
Chapter 1: Introduction
Chapter 2: Background
Chapter 3: Methods
Chapter 4: Results
Chapter 5: Discussion
Chapter 6: Conclusions and Recommendations
References
Appendices
Note the strong similarity between this outline and that for the archival journal paper, with
two notable exceptions. To encourage students to understand the literature well, most institutions
require a separate chapter, entitled background, which highlights past work on the topic of graduate
study and identifies areas of need for further study. Because a dissertation must represent original
work, review of the background literature is a particularly important part of the doctoral student’s
early work. To encourage students to recognize the limitations of their own work and thereby to
identify areas of further study, most theses and dissertations end with a chapter entitled “conclusions
and recommendations.” A simple way of thinking about recommendations is to ask the question,
“What should the next graduate student in the laboratory do to push the current work forward?”
In contrast to the traditional outline, it is becoming increasingly common to organize theses
and particularly dissertations around individual papers that are based on the student’s work and
either have been or will be submitted for publication in archival journals. For example, consider the
following outline:
Abstract
Chapter 1: Introduction
Chapter 2: Paper 1
Chapter 3: Paper 2
Chapter 4: Paper 3
Chapter 5: Conclusions
Appendices
Packaging the multiple papers (often one for a thesis and three to seven for a dissertation) between
general introductory and concluding chapters allows the student to focus on writing the individual
journal papers. In this case, each chapter contains its own introduction, methods, results, discus-
sion, and references. It is clear that the individual papers should be written first, as described above,
and the introduction and conclusions written last. In the traditional case, it is common to write the
background first, then methods and results, and finally discussion, introduction, and conclusions
and recommendations. If one adopts this second style of organization and papers are published be-
fore submitting the final dissertation, it is important to consider issues of copyright; the best advice
in this case is to check with your local office of graduate studies.
3.4 Technical Reports
Whether a technical report will be published or not, it is a formal document and thus requires
careful attention. In contrast with the aforementioned types of technical documents, however, the
technical report does not demand a particular outline. Indeed, such reports can range from a one-
page summary to a thousand-page document. The best advice, therefore, is to discuss in detail the
expectations before beginning the outline, then use, as appropriate, the aforementioned guidelines
for writing an archival journal paper.
Exercise 3.6: Find and read carefully the instructions for a thesis or dissertation at your institu-
tion. Write a one-page bulleted summary of the key stylistic requirements.
Chapter 4
Proposals and Grant Applications
4.1 Introduction
Whether it be a proposal to undertake a particular senior design project, a graduate student’s pro-
posal to a committee to pursue a particular area of research for his/her thesis or dissertation, an
employee’s request of management to support a new area of R&D, an organization’s application for
state or federal support, or a professor’s request for support of basic research, the technical proposal
is fundamental to securing the resources needed to advance science and engineering. Notwithstand-
ing the many different types of proposals, most are similar in basic structure and preparation. Hence,
for illustrative purposes, we focus on the NIH individual investigator grant application known as
the R01, the primary mechanism for funding health-related research in the United States.
4.2 Types of Grants
The R01 mechanism supports two general types of grant applications: investigator-initiated and those
in response to a request for proposals (RFP) or program announcement (PAR). By investigator-
initiated, we mean a scientist or engineer identifies a fundamental question or important problem,
conceives of an approach to address this issue, and takes the initiative to request the funding needed
to complete the project. In contrast, an RFP or PAR is a “call for proposals” that addresses a particu-
lar need or area of investigation that a working group, committee, or administrator has deemed im-
portant. In the latter case, the scientist or engineer must first learn of the opportunity, then respond
according to the stated instructions. Indeed, one of the most important aspects of a successful grant
submitted in response to a call for proposals is that it is responsive to the announcement.
Exercise 4.1: Go to the NIH Web site, www.nih.gov, and search for current PARs. Pick a PAR
of interest and read the instructions carefully. Write a three-page summary of the PAR that would
be sufficient as an overview of the motivation, scope, and requirements for submission.
Exercise 4.2: The National Science Foundation (NSF) funds basic research and training in the
sciences and engineering. Go to the NSF Web site, www.nsf.gov, and search for current PARs.
Pick an announcement of interest and read the instructions carefully. Write a three-page summary
that would be sufficient as an overview of the motivation, scope, and requirements for submission.
There are three general classes of motivations for any proposal: hypothesis-driven, curios-
ity-driven, and technology-driven. No one motivation is more important or more scholarly than
another; they are simply different.
Hypothesis-driven: The NIH defines a hypothesis as “an educated prediction about the
outcome of your study.” Under some programs, the omission of a hypothesis is a major
oversight, one that can result in the reviewers suggesting that the proposal is merely a “fish-
ing expedition,” that is, a project without clear direction.
Curiosity-driven: We have all heard the saying that when asked why he/she climbed the
mountain, the climber simply stated “Because it was there.” Curiosity-driven research is the
desire to answer a new and intriguing question. Curiosity has been, and will remain, the
primary driver of scientific inquiry.
Technology-driven: In our increasingly technology-based society, there are many cases
wherein the ability to design and build a new instrument is motivation enough to pursue
such research. Indeed, in many cases there are excellent opportunities to modify previous
designs to address new applications.
Regardless of the motivation — hypothesis, curiosity, or technical need — the approach to apply for
and secure funding for research is similar.
4.3 The Review Process
Proposals undergo a two-step review process at the NIH. First, proposals are evaluated for technical
merit and feasibility. Second, they are evaluated administratively for funding potential. A committee
called a study section, under the direction of a scientific review administrator (SRA), accomplishes
the first step; a council accomplishes the second step while attempting to balance the desire to fund
the best science and the need to accomplish the fundamental mission of the NIH — to improve the
health of people in the United States.
Just as we should identify the intended audience before writing a journal article or giving
an oral presentation, we should also understand the audience that will read and review a particular
proposal. In contrast to many funding agencies, the NIH publishes the names of those constituting
most study sections, which enables the applicant to know the audience. In this case, it is prudent
to read recent publications by members of the study section to get a feel for their scientific interests
and basic perspectives. Although each member of the committee will be asked to score all grant ap-
plications for which they do not have a conflict of interest, only three to five individuals generally
read each application in detail. These individuals are referred to as the primary reviewer, secondary
reviewer, and discussant(s); they are selected based on the closeness of their technical expertise to
that of the proposal, as revealed primarily by its title and project summary. Recalling that we use the
NIH R01 herein mainly to illustrate issues that are important in preparing a proposal, a graduate
student would similarly be well advised to know the composition of his/her committee and to read
recent papers by these individuals to anticipate what types of questions might arise.
Whether or not one knows the composition of a review panel, perhaps the most important
things to know are the criteria that the panel will use to evaluate the proposal. Again, we use the
NIH criteria, but the need to be familiar with such criteria would even apply to a master’s or doc-
toral research proposal, which a faculty committee would evaluate during a proposal defense. At the
NIH, the evaluation criteria include¹:
Significance: Does the study address an important health problem?
Approach: Are the design and methods appropriate to address the aims?
Innovation: Does the project employ novel concepts or methods?
Investigators: Are the investigators appropriately trained?
Environment: Will the scientific environment contribute to success?
Overall evaluation: Will the study advance health care or medical science?
Additional considerations of importance during a review include whether a proposed human or
animal research protocol respects current guidelines; before beginning any such study, a local insti-
tutional committee (e.g., IRB or IACUC) must evaluate and approve such protocols, but this need
not be completed before submission. Similarly, reviewers will address whether the proposed budget
is justified. We focus here, however, on the scientific aspects of the review enumerated above. In-
deed, because reviewers are asked to comment specifically on how well an applicant addresses these
criteria, it can be helpful to the reviewer, and thus the applicant, to include succinct, highlighted
statements that address significance and innovation in particular.
Because each reviewer can interpret what is meant by significant, innovative, or appropriate,
it should not be surprising that different reviewers often have very different opinions with regard
to the value of the same proposed research. Hence, the applicant should try to write in a way that
engages different people, noting in particular that not all reviewers are experts in the specific area
proposed. Rather, it is expected that good scientists and engineers can recognize good work when
they see it; a key challenge, therefore, is to help the reviewer appreciate the overall objective, sig-
nificance, logic of the methodology, and innovation of the proposed work from both a general and
a problem-specific perspective. This may be even more important in cases where the reviewer has
expertise in the proposed area but a different scientific opinion. In this case, the need is even greater
for convincing, objective arguments.

¹ Portions of this discussion are based on Proposal Writing: The Business of Science by Wendy Sanders, then at NIH (www.wm.edu/grants/PROP/sanders.pdf).
There are, however, a number of things that all reviewers appreciate. For example, because
reviewers are often asked to review multiple proposals (e.g., 8 to 12), it is important to make the
proposal easy to read and understand. Rather than using a small font or small figures to fit more infor-
mation into the same space, use larger fonts and ample figures to aid the reader and write clearly and
concisely to increase the value of what is provided. Indeed, this brings us back to the importance of
Chapter 2. Note, therefore, that in a 1987 publication of the NIH, entitled Helpful Hints on Prepar-
ing a Research Grant Application to the National Institutes of Health, one reads
Try to develop a clear, concise, coherent scientific writing style. A few guidelines that
may prove useful include: (1) using the active voice, which is more direct, less wordy,
and less confusing than the passive voice; (2) keeping together related ideas and in-
formation, for example putting clauses and phrases in sentences as close as possible to
the words that they modify; (3) simplifying and shortening overly long and involved
sentences and paragraphs; and (4) eliminating redundant and awkward words, phrases,
and sentences.
Additional, subtle things can likewise make it easier for the reviewer. For example, although citing
the many references numerically (often 100 or more) can save space and thus enable the applicant
to include additional information, this forces an educated reviewer to refer back continually to the
references to determine what work has been cited. It can be constructive, therefore, to cite refer-
ences by name, for example, Smith et al. (1999) or (Smith et al., 1999) rather than [20], for the
reviewer may know of and respect the work of Smith and colleagues. Similarly, recall that schematic
drawings, images, figures, and tables can each be worth a thousand words if done well. Indeed,
many reviewers will first skim through an application by paying particular attention to these visual
aids. For this reason, it is useful to provide descriptive legends so that the reviewer does not have to
search the text to find the meaning and importance. Because of the importance of the accompany-
ing text, however, it is also good practice to cite the schematic drawings, images, figures, and tables
using boldface type to enable the reviewer to locate easily that part of the text that discusses each
figure (e.g., a boldfaced Figure 1 or Table 1 is much easier to identify quickly in the text than the
same reference set in plain type). Although they should be kept to a minimum, introducing key abbreviations
in boldface also enables a reviewer to find them much more quickly in the text, for example, a
boldfaced NO for nitric oxide versus the same abbreviation in plain type. The key, therefore, is to keep the reviewer's perspective in
mind at all times and not to compromise the use of effective devices and strategy because of stated
limitations on the number of pages allowed – concise writing will generally provide the extra space
needed to include all the necessary information.
Exercise 4.3: The NIH Web site, www.nih.gov, has specific instructions for writing a K-series
grant application. Write a three-page summary that would be sufficient as an overview of the moti-
vation, scope, and requirements for submission of such a grant application to the NIH.
4.4 The NIH R01 Grant
As noted previously, the R01 grant is but one of many funding mechanisms administered by the
NIH. One can obtain information about the other types of grants from the NIH Web site
(www.nih.gov), but we consider here the format for the R01 because it represents well how to design an
effective application.
The R01, or single investigator grant, consists of a cover page, brief description (project sum-
mary) and list of primary personnel, table of contents, budget and budget justification, a biosketch
for each of the primary personnel, information on resources (i.e., the research infrastructure) that
are available to the investigator(s), the main body of the application, and further administrative
information. With the exception of the main body of the application, all other information must be
provided within appropriate NIH-supplied form pages. Again, see the NIH Web site for instruc-
tions and details on the overall grant application package, including form pages.
Here, we focus on the main body of the application, that is, the five basic sections that detail
the scientific need and proposed method of approach. These sections are: specific aims, background
and significance, preliminary results, research plan, and references. Whereas the R01 application
currently allows 25 pages, single-spaced, for the main body of the application, other types of NIH
applications have different requirements. One of the most important aspects of successful grant
writing is to follow the instructions, which includes respecting page limitations and using approved
fonts and margins. For example, the current NIH R21 mechanism allows 15 pages, single-spaced,
for the main body of the application, whereas the NIH BRP (Bioengineering Research Partnership)
mechanism allows 40 pages, single-spaced. Whether 15, 25, or 40 pages, a key to successful grant
writing is to write with clarity and conciseness. Indeed, after having written numerous 25-page ap-
plications, we have found that trying to provide the same level of detail in a 15-page application is
a very good exercise — it forces one to write more concisely.
Before discussing in detail each of the five primary sections of the main body of the grant,
note that each section should answer a specific question:
Specific Aims: What are you going to do?
Background and Significance: Why is it important?
Preliminary Results: Are you capable of being successful?
Research Plan: How are you going to accomplish the work?
References: What are the key findings on which you will build?
As in any good technical document, the writing should flow logically from section to sec-
tion and the applicant should reinforce the main ideas throughout. Similar to the situation of
multiple authors writing a paper, when multiple investigators write different parts of a proposal,
it is important for the principal investigator to ensure a consistent style throughout, including
tense.
4.4.1 Specific Aims (1 Page Required)
The specific aims section is particularly important; it must capture the reader’s interest, show the
need for the proposed research, and detail specific results that will be sought — all in one page.
Different applicants use different formats, but a general approach to constructing the specific aims
is to begin with two or three short paragraphs that identify the overall problem or long-term goal
of the applicant(s) as well as the specific problem that needs to be addressed and why, then list the
individual specific aims, and conclude with a brief paragraph that highlights the innovation and
overall significance of the work.
There is no limitation on the number of specific aims that one can propose, but most R01
applications focus on three to four aims, which sometimes include multiple sub-aims. Just as it can
be efficient to begin writing a technical paper by identifying the primary findings, so too it can be
efficient to begin writing a proposal by first identifying the specific aims. Indeed, it is often useful to
draft the specific aims page and have multiple colleagues provide feedback on the overall plan before
beginning to write the remainder of the proposal.
Specific aims should be just that, specific. Moreover, construct these aims in a forceful way
— to quantify, to determine, to design, to prove, to develop, and so forth. Many applicants con-
struct each specific aim to test a specific hypothesis; the key objective, therefore, is that the aim is
testable. Although one can provide some indication in this first section as to how the aims will be
accomplished (e.g., using a particular animal model or data from a particular clinical trial), it is best
to focus on methods and approaches in the section on Research Plan.
4.4.2 Background and Significance (3 Pages Recommended)
In some ways, the background and significance section can be the hardest section to write well.
Whereas one may think of this section simply as a brief literature review and statement of the obvi-
ous (e.g., that the problem is significant because a particular number of Americans experience the
highlighted disease or problem), it actually must be much more. Within the context of answering
the question, “Why is this research important?,” the applicant should critically assess the literature
to show convincingly what is unknown and why this lack of understanding is impeding scientific
advances, improvements in health care delivery, the development of better medical devices, and so
forth. In the words of the NIH, there is “a need to identify the gaps” in our knowledge. For example,
we may know that a genetic mutation is responsible for a particular disease, but we may not under-
stand how this mutation affects the activity of a particular type of cell. Similarly, we may know that
hemodynamic factors give rise to a particular vascular pathology, but we may not know how the
associated forces induce the changes in gene expression that ultimately cause the disease. Although
identifying gaps will often require one to point out shortcomings in previous investigations of oth-
ers, we should do this diplomatically.
When identifying key gaps in the literature within the background section, one should show
convincingly the need for the proposed specific aims. In other words, background should “set the
stage” for the research plan. In addition, however, we must remember that not all reviewers will be
intimately familiar with the specific area of research, hence also use this section to educate the re-
viewer so that he or she can appreciate better the importance of the identified gaps and the proposed
method for research. For example, if the study seeks to identify the relative effects of particular
cytokines in a disease process, some background on cytokines — their discovery, general activity,
half-lives, receptor affinity, and so forth — may help the reviewer appreciate the motivation for the
underlying hypothesis. Well-illustrated schematic drawings, flowcharts, figures, and images often
add considerably to this section.
Finally, remember that this section is entitled background and significance. If one commits
three pages to this section, only half of one page will typically be devoted to significance. Never-
theless, significance is one of the criteria that the reviewers must address, and the council increas-
ingly bases funding decisions on significance. It is useful, therefore, to address in this section the
potential impact of the overall project, that is, the importance of filling the identified gaps. To aid
the reviewers, the applicant should highlight key points in this section, for example, by italicizing,
underlining, or boldfacing the text. Given the overall importance of significance, but limited space
in this section, successful applicants are generally very good at weaving the significance throughout
the proposal: the last paragraph of specific aims, the significance portion of background and signifi-
cance, and the rationale sections in research plan. The challenge, therefore, is to reinforce key points
throughout without redundancy.
4.4.3 Preliminary Results (6 Pages Recommended, But Not More Than 9)
The primary goal of the preliminary results section is to demonstrate the capability of the principal
investigator(s) and assembled team to accomplish that which is proposed in research plan. In many
ways, this is the easiest section to write; indeed, if you are having difficulty getting started, it is often
good to focus first on the preliminary results.
This section is best written in the style of a results section in an archival journal paper, but
with abundant subheadings. Proceed logically by documenting your previous successes on closely
related problems (with citations to previous journal publications) or your new results that show
explicitly that you can successfully complete the proposed aims. It is beneficial, therefore, to remind
the reviewers when such results demonstrate that a specific aim can be achieved or that hypotheses
on which they are built are tenable. Like a good results section in a journal paper, this section should
contain copious figures, images, equations, and tables that highlight key findings. One of the key
challenges can be the decision on how much information to include from past publications because
the reviewers have access to such information if so directed. The best advice is to provide sufficient
detail on critical methods or findings — that is, make it easy on the reviewer by not forcing him or
her to find and read the previous paper — but to refer the reader to original papers for nonessential
information that nevertheless may strengthen the argument. Indeed, whereas the NIH used to al-
low the applicant to deposit up to 10 key previous papers in an appendix, this is no longer possible
if the paper can be found on the Web. Just because a paper can be found on the Web does not mean
that a busy reviewer will take the extra time to do so, however; again, it is critical to make the most
important information readily accessible within the grant application itself.
Because of the importance of team science in biomedical research, most grant applications to
the NIH rely on a team of collaborators. Hence, it is also important to demonstrate the capability
of the different investigators to work well together just as they will need to do during the proposed
project. The best way to demonstrate this is via joint publications, or at least joint abstracts for pa-
pers presented at technical meetings. In the absence of such evidence, it is important to show that
materials, data, and so forth have already been shared as will be required by the proposed research.
4.4.4 Research Plan (15 Pages Recommended, But Not Fewer Than 12)
Recall that the primary question that needs to be answered in this section is, “How are you going
to accomplish the proposed work?” In conjunction with the specific aims, this section is the most
important and thus demands careful attention. Perhaps the best word to remember when writing
this section is “detail.”
There is no required format for research plan, yet an effective strategy has evolved over the
years. Many applicants begin this section with a paragraph that highlights the overall research
plan and its importance, sometimes including a schematic drawing to show how the different aims
complement one another. Next, they describe the rationale, methods, and expected results/potential
limitations for each aim in sequence. Finally, they conclude with a brief summary of the overall proj-
ect and an expected timeline to accomplish the project. One small variation on this strategy has also
arisen in recent years, due in large part to the extensive but common procedures used in molecular
and cell biology. Similar to the format of some technical journals, one can collect detailed methods
(often common to multiple aims) at the end of this section, almost like an appendix, so as not to
interrupt the flow of the main portion of the section; this allows the interested reader to evaluate
the appropriateness of the details nonetheless. Indeed, in some cases, these detailed methods are set
apart by the use of a smaller font, which saves some space while emphasizing the importance of the
preceding text.
If one adopts the most common strategy, then the basic outline for the main portion of this
section becomes²:
Aim 1. Restate the specific aim exactly from the first page.
    Rationale.
    Methods.
    Expected Results and Limitations.
Aim 2. Restate the specific aim exactly from the first page.
    Rationale.
    Methods.
    Expected Results and Limitations.
Aim 3. Restate the specific aim exactly from the first page.
    Rationale.
    Methods.
    Expected Results and Limitations.
² There are many slight variations, however. For example, one could replace the generic methods section with separate sections on experimental design and data analysis, or one could separate the expected results and limitations section and rename the latter potential difficulties and alternate approaches.
Restating each aim in its entirety reminds the reviewer of the specific goal — to quantify,
to determine, to design, to prove, and so forth. Restating the aim in boldface serves as a natural
and effective visual cue for organizing this long section; this approach is much less distracting than
needless section numbers such as 4.1, 4.1.1, and so forth. Whereas the significance section discussed
previously should focus primarily on the importance of accomplishing the overall project, the ra-
tionale should focus on the fundamental reason(s) for each specific aim. For example, the applicant
should note what important gap in our understanding this specific aim will address and why the
adopted approach is innovative.
The methods section for each aim is similar to a methods section in an archival paper — it
should provide methodological details sufficient to enable the reviewer to repeat the study. For
example, one should not write "the cells will be cultured in an appropriate media." Rather, one
should document the specific media to be used (including vendor), any supplements with appro-
priate concentrations, and the temperature and CO2 level. Similarly, one should not write “the
governing differential equation will be solved numerically” or “the data will be analyzed for possible
statistical significance.” Rather, one should provide details on the specific numerical method and
why it is appropriate for the expected class of differential equation, and similarly one should provide
details on the specific statistical tests, including post hoc testing, why they are appropriate, and the
levels of desired significance. Again, the operative word in this section is detail, assuming of course
that the methods are both appropriate and proven.
Although NIH-funded research does not need to be hypothesis-driven, one should always
anticipate the results. It is thus prudent to discuss why you expect such results, which actually al-
lows one to justify further the importance of the aim. Although results should be new, it is always
good to cite related studies that provide further confidence that the aim will prove successful and
important. Similarly, although methods should be chosen and justified carefully, it is always possible
in science and engineering for difficulties to arise that prevent one from conducting the experiments
or analyses as originally planned. There is also a need, therefore, to anticipate such difficulties and
to have reasonable contingency plans. Just as in the discussion of an archival journal paper, however,
one must achieve the proper balance in identifying potential pitfalls while not implying that the
aim will be very difficult to achieve as planned. It is wise to discuss the presentation of this balance
with a valued colleague.
Finally, it is useful to conclude the research plan with a detailed timeline showing the antici-
pated duration of each part of the project and how the different parts will progress together, per-
haps in different laboratories. It is also good to provide a single paragraph that concludes the grant
— remind the reviewer what the key gaps are in the literature and how the present study will fill these gaps
using innovative approaches that promise significant findings.
4.4.5 References
The reference section was limited to four pages in the past, but there is currently no such page
limitation. Nevertheless, one should not seek to compile an exhaustive list of references; it is more
important to be selective, focusing on the key papers that support the need for the research and
the methods used to address this need. Similarly, there is no required format for references except
that each must include the list of authors, year of publication, title of the work, the publisher or
journal title, volume, and inclusive pages. Because reviewers are typically familiar with the proposed
research area, and thus the key authors in the field, it can be helpful to list the references alphabeti-
cally. Indeed, this is consistent with the aforementioned recommendation to cite by author (e.g.,
Smith et al., 1999) rather than by number (e.g., [20]), for this eliminates the frustration felt by
knowledgeable reviewers who do not want to go back and forth to the references to see who did
what. Because of the availability of research papers through the Web, some applicants also provide
links to enable the reviewers to download key references easily.
4.5 THE PREPROPOSAL
Perhaps the best example of the need to write concisely with clarity is the preproposal. Because
of the greater numbers of applicants applying for limited financial resources, many agencies have
instituted a two-stage review process. The applicant must first submit a brief preproposal, which an
expert panel will review. Based on the findings by this panel, only a subset of full proposals is invited
for consideration for funding.
The state of Texas, for example, has a competition called the Advanced Research Program
(ARP) that is open to any full-time member of the faculty of a Texas institution of higher learning.
Preproposals for ARP grants have been limited to 4000 characters (use the word/character counting
feature of your word processor to count); this is essentially 1⅓ pages, double-spaced, in 12-point
font — not a lot of information. Yet, a panel will decide whether to invite the applicant to submit
a full proposal, the next important step toward possible funding, based solely on these 4000 char-
acters. Again, the need for clarity and conciseness is clear. The format for the ARP preproposal is
simple:
• Project goals and methods
• Staff
• Facilities and resources
• Education and training
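As an aside, one need not rely only on a word processor to check the 4000-character limit noted above. A minimal sketch in Python (assuming the draft is saved as a plain-text file with the hypothetical name preproposal.txt) is:

    # Check a draft ARP preproposal against the 4000-character limit.
    LIMIT = 4000
    with open("preproposal.txt", encoding="utf-8") as f:
        text = f.read()
    n = len(text)
    if n <= LIMIT:
        print(f"{n} characters used; {LIMIT - n} remaining")
    else:
        print(f"{n} characters used; {n - LIMIT} over the limit")

Agencies may count characters slightly differently (for example, with or without spaces), so the official instructions govern.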
Although such preproposals are very short, one clearly wants to communicate information
similar to that contained in the much longer R01 application: What are you going to do? Why is
it important? Are you capable of being successful? How are you going to accomplish the work?
Recalling these simple questions, and noting the four sections required for the preproposal, it is prudent to
think carefully about how to partition the essential information within the required format. For example,
whereas one uses the preliminary results in an NIH application to demonstrate capability, it may be
better to use the section entitled staff in the ARP application. Similarly, whereas it may be appro-
priate to highlight the available equipment in preliminary results or research plan in an NIH ap-
plication, it may be more appropriate to list these in facilities and resources in the ARP application.
Again, the key thing to consider when beginning a grant application is what information you feel
will best represent you and your ideas. Only then will you be able to decide best how to package this
information within the format for the particular agency.
4.6 SUMMARY
The Whitaker Foundation recently closed, but it provided millions of dollars of funding over de-
cades to support new investigators in biomedical engineering and to develop new academic pro-
grams. They provided reviewers of individual investigator grants with a checklist to ensure that
applicants covered a number of critical aspects of research in biomedical engineering. Reasons for
scoring a Whitaker application poorly included:
No clear hypothesis
Mundane/uninteresting
Little engineering
Little biology
Not enough detail
Unrealistic/faulty approach
Needed collaboration missing
Other reasons commonly cited for scoring NIH applications poorly include:
Not significant; not innovative; not exciting
Unjustified hypotheses
Unaware of previous related work
Insufficient pilot data
Poorly designed research plan; unorganized
Overly ambitious
One or more aims are poor
The success of one or more aims depends on the success of a previous aim
Although we should focus on the positives, it is prudent to appreciate causes for failure. In
summary, some of the most important reminders for grant writing are:
Know the mission of the agency and target the proposal accordingly. For example, you would
not think of sending a proposal on cancer research to the American Heart Association.
Read a recently funded proposal to the agency to which you are applying.
Read the instructions and follow them carefully when preparing your application.
Ensure that the proposal addresses an important issue and offers the potential for significant
advancement.
Remember that your proposal must generally address simultaneously two technical audi-
ences: those who are very familiar with the field and those who are less so.
Finally, finish early so that colleagues can review the application and provide constructive
criticisms that you have time to employ. Only in this way can we avoid the common pitfalls
that plague so many proposals.
Exercise 4.4 Write a 4000-character preproposal using the Texas ARP format. Select a topic of interest to you, assume you are the only investigator, and describe resources available in your laboratory or department that would be sufficient to conduct the work.
Exercise 4.5 The NIH Web site (www.nih.gov) provides useful guidelines on "How to Write a NIH Grant." Go to the site, review the material, and prepare a 25-slide PowerPoint presentation that could be used as an introduction to writing and submitting NIH grants.
APPENDIX (Copy and use this as a quick reference)
Specific Aims (What are you going to do?)
• The first sentence or two should engage the reader and motivate the need for the work.
• Briefly note long-term goals/overall hypotheses, then draw focus to the work.
• State your specific aims (three to four) and how you will achieve/test them.
• Conclude by emphasizing the novelty and innovation of the proposed work.
Background and Significance (Why is it important?)
• Review the literature critically, that is, identify foundations and gaps. Do not simply state that A did this, B did that, and C did that. Gaps are important areas that your work will address and fill.
• Being unaware of important findings in the field does not engender confidence; conversely, citing work from recent meetings or personal communication with leaders in the field suggests that you are on the cutting edge (do not overdo it, though).
• Only a few of the reviewers will have expertise in the specific area, yet many will read the proposal. Background should educate the general reader.
• Significance refers to overall importance and long-term potential rather than the significance of each of the aims — address the latter as rationale in research plan.
Preliminary Results (Are you capable of being successful?)
• This section should do two things: demonstrate capability in the area and demonstrate feasibility with respect to the specific aims. In other words, convince the reader that you are capable of successfully completing the aims as stated.
• A picture can be worth a thousand words, and so too a table, flowchart, figure, or equation — illustrate the proposal well, taking note that an aesthetically pleasing document that is easy on the eyes is much appreciated. Use many subheadings while avoiding numbered sections (e.g., C.1.1, C.1.2) that simply force the reader to think about a nonessential.
Research Plan (How are you going to accomplish the work?)
• One of the most effective strategies is to address each aim separately, but to do so in a consistent, well-ordered manner. For example, for each aim, cover in subsections (a) rationale, (b) methods, (c) expected results and limitations.
• The rationale of each aim should address the importance of this part of the project and how it fits into the overall/long-term goal. This is also a good time to remind the reader of novelty or innovation. One short paragraph should suffice.
• Methods for each aim may include materials, equipment, theoretical frameworks, assays, statistical methods, and so forth, all given in sufficient detail. For example, do not merely say that a physiological solution will be used — give the specific composition. Similarly, do not just say that a particular device will be used — give the resolution of the device and any unique capabilities.
• Whether hypothesis- or curiosity-driven, one should know what to expect with regard to findings. Discuss this and note its potential importance. Likewise, one should know what limitations or pitfalls may arise. Noting and addressing them is much better than hoping a reviewer will not think of them; someone always does and this could relegate an otherwise outstanding proposal to a lower score.
• Finally, remember that detail is the operative word in research plan and that the aims should form a logical, supporting sequence. Tell them what you are going to do, how you are going to do it, and briefly why it is important.
• • • •
C H A P T E R 5
Oral Communication
Just as we must write well, so too we must speak well — a belief that is not new to modern science
or engineering. According to Boorstin (1983, p. 395), Bishop Sprat suggested that the goal of the
Royal Society of London (founded ~1660) was “not the Artifice of Words, but a bare knowledge of
things.” Hence, they
extracted from all their members, a close, naked, natural way of speaking; positive
expressions; clear senses; a native easiness: bringing all things as near the Mathemati-
cal plainness, as they can: and preferring the language of Artizans, Countryman, and
Merchants, before that, of Wits, or Scholars.
In other words, as Boorstin concluded, “It was not enough that the language of science be simple.
It had to be precise — and, if possible, international.” Although audiovisual aids available today are
very different from those of the 17th century, the need for simple, clear, and informative presenta-
tions remains.
Written documents and oral presentations both reflect one’s professional reputation. Yet, the
oral presentation is unique in that it can serve as the all-important “first impression.” If a talk is
lucid and enjoyable, those in the audience will likely seek out the speaker again; if a talk is poorly
organized and boring, it may be the last time that they seek to hear the speaker.
Exercise 5.1 The need for excellence in oral presentations is not unique to science and engineering. Hence, find a good book on public speaking and read two chapters that are particularly appealing. Write and submit a three-page summary of the main points. Among the many books available, consider the timeless work, How to Develop Self-Confidence and Influence People by Public Speaking, by Dale Carnegie.
5.1 EFFECTIVE STYLES
Carnegie (1956) suggests that four things are essential in one’s pursuit of becoming an effective
public speaker:
1. Start with a strong and persistent drive.
2. Know thoroughly what you are going to talk about.
3. Act confident.
4. Practice, practice, practice.
Although these four essentials should not surprise anyone, they should cause some reflection.
In particular, just as in writing well and ensuring integrity in the workplace, effective oral presenta-
tions do not just occur, even with experience — one must resolve to learn to present well and to continue
to improve. Moreover, because of the importance of self-confidence when speaking to either small
or large audiences, it is essential to know the subject so well that you could give the talk even if the
audiovisual equipment failed or if you forgot your typewritten notes. Finally, the old adage “practice
makes perfect” is certainly true, but there is one caveat. One can practice a bad talk over and over,
but it need not improve. Rather, one’s practice should include peers constructively criticizing both
the technical material and the method of presentation; it is better to make mistakes among friends
and to receive helpful suggestions or corrections before the actual presentation.
Exercise 5.2 Attend three professional seminars and record seven specific personal habits and seven audiovisual techniques used by the speakers that were particularly effective (four each) or ineffective (three each). Summarize your findings in a table and submit a two-page report.
Many suggest that much of communication is nonverbal during discussions between individ-
uals. Do we look the other person in the eye and reveal our interest or do we look at other people or
things while they are talking? Do we change our facial expressions appropriately to reveal sympathy
or understanding or do we remain stoic? So too in public speaking, nonverbal communication can
help make a talk engaging or it can render the attempt boring or, even worse, annoying. By defini-
tion, habits are natural and repetitive; they usually arise unconsciously and can manifest nonverbally
or verbally. For this reason, it is essential to have peers provide feedback on potentially distracting
habits that arise while we speak. For example, if one tends to jingle keys in his pocket when ner-
vous, recognizing this problem allows him to remove the keys before speaking, thus removing the
potential distraction. Similarly, if one uses a lot of aaahs or uumhs, there is a need to identify these
problems and remove them from both formal and informal speech, for we develop new habits
through consistency. Indeed, note that we have found that paying careful attention to composing
well-written documents also serves to help us speak well. Finally, if one’s hands shake badly during
a talk, it is best not to use a laser pointer, which will project exaggerated motions onto the screen.
Instead, one whose hands always shake should practice using verbal cues such as “as seen in the first
term of Equation 1” or “as illustrated well in the top curve in the left panel.” A laser pointer can be
an effective aid if used well, but it can also be very distracting. Indeed, even if held by a steady hand,
a rapidly moving or constantly circling laser pointer can be a significant distraction. Finally, be care-
ful not to keep the laser on if you “talk with your hands,” for the audience gets both distracted and
concerned when the laser shines across someone’s face or constantly goes from floor to ceiling.
Valiela (2001) correctly suggests that effective technical presentations share some common-
alities with successful theater. Two prerequisites for good theater are a good story and actors who
“connect with” or “relate well to” the audience. A good story in science or engineering requires an in-
teresting or important problem to be formulated, then solved in a novel and logical manner. Below,
however, we focus on relating the story well to an audience, first by tabulating reminders related to
basic techniques and habits of effective presentations. Indeed, although it is essential in science and
engineering to have something important to say, as you compare the suggestions below, consider the
suggestion of Carnegie (1956) that “It is not so much what you say as how you say it.”
Do                                      Do Not
Be confident, appear confident          Be arrogant or prideful
Be enthusiastic — it is contagious      Pace too much
Speak loudly, clearly, slowly           Speak in a monotone voice
Be respectful of questions              Ask rhetorical questions
Finish early enough for questions       Go over the allotted time
Maintain balanced eye contact           Look only at screen or at a distance
Dress appropriately                     Apologize for dress
Use (laser) pointer effectively         Circle everything with laser pointer
Know your audience                      Discourage interactions
Define terms, use analogies             Use jargon, try to impress
Minimize nervous habits                 Assume every talk begins with a joke
Exercise 5.3 Why is it important to be or at least appear confident? Why is it important not to be prideful or arrogant? What message will we convey to an audience if we finish early and allow questions? What message will we convey if we go over the allotted time and ignore calls to stop? What is the appropriate dress for different audiences? Ask yourself these and similar questions regarding this tabulated list of things to "do" and "not do," and write a two-page summary. If possible, conclude the summary with a few overarching statements.
Experienced actors tend to be nervous on opening night, and so too experienced speakers
tend to be nervous before walking up to the podium. Yet, recognizing that nervousness is natural,
indeed expected, allows us to identify ways to minimize its effects and to settle into a comfortable
rhythm quickly. For example, an early visit to the room where you will speak will help you to feel
more at ease — the environment will not be foreign. If you need to use a microphone and have not
done so before, ask the A/V technician if you can test the system before your talk. If you expect to be
nervous nonetheless, eat sparingly before the talk to avoid further complications of the nervousness.
Having complete command of the technical material will also engender self-confidence, which is
the best way to negate nervousness. Remembering the first sentence or two will ensure a good start,
which is essential in transitioning from nervousness to confidence. Beginning with an engaging
slide will capture the audience’s attention, which will reinforce your confidence (provided you make
eye contact with the now engaged audience). Conversely, memorizing a talk word for word can pro-
mote nervousness; you may become concerned that you will forget something and lose track of your
message. Below we tabulate some reminders related to basic techniques that promote confidence as
well as contribute to telling the story well.
Do                                      Do Not
Visit the room before speaking          Show up late or just before your talk
Remember the first sentence             Memorize the talk or read it directly
Use an engaging first slide             Start with a bulleted outline
Use slides as your reminders            Require audience to read a lot on own
Maximize good figures/images            Use lots of words and small fonts
Be consistent in slide format           Mix slides with different backgrounds
Use slides to capture attention         Use slides to communicate most info
Remember the concluding remarks         End by saying, "Well that is all I have"
Reflecting on these suggestions, it should be clear that we recommend that one use audio-
visual aids to support a talk, not to carry it. In other words, the speaker should strive to capture the
audience’s attention so that they look at him/her and only look to the slides when so directed for
clarification. Hence, the speaker should use comments like “and thus x is important, as illustrated
well in this figure” or “x . . . as can be seen in this image,” noting that a well-used pointer can remind
the audience when and where to look. Conversely, detailed text on a slide will usually entice the
audience to read on their own and not to look at or listen to the speaker; this situation should be
avoided. Use slides primarily to show clear black and white or color images and figures, schematic
drawings and flowcharts, equations, and to a lesser degree, tables, each of which should support
what is said. Providing a short heading on each slide can indicate the focus of that slide; beginners
may also put bullets on the slides as further reminders to themselves, particularly to prompt appro-
priate transitions. When referring to figures, start by defining the variables of interest and the axes;
when referring to equations, start by defining the meaning of important variables or terms; when
using color images, use the different colors as indicators of important features or points. Remember,
too, that less information explained well is always better than more information explained poorly.
Software programs such as PowerPoint can be tremendous tools when used well. Resist the
temptation, however, to use all the “bells and whistles.” For example, having figures fly in from
the edges of a slide or animating molecules that come to screeching stops generally distract from the
technical content. Similarly, using complex backgrounds, particularly ones with gradients in color,
can be less effective overall — some words show up well, while others do not. Remember, too, that
some members of the audience may be color blind; appropriate choice of color, particularly when
delineating curves in figures, must be given careful consideration. Depending on the fixed lighting
in the room, slides having dark backgrounds can excessively darken a room and thereby create a
more conducive sleeping environment. For these and other reasons, black print on a white back-
ground and color images on a white background continue to be effective for they generally project
well, maintain modest lighting in the room, do not discriminate unnecessarily against color blind-
ness, and even allow one to use information directly from print versions of abstracts, proceedings, or
papers that often appear primarily in black and white because of considerations of cost.
The first slide is traditionally a title slide — it should give a brief (60 to 120 characters) but
informative title and list the authors and their affiliations. Many try to add a touch of color by
showing the university or business logo or perhaps a picture of a building or scenic area in which
the group works. The last slide is traditionally an acknowledgment slide — it should list others who
contributed to the work, financial support, and relevant disclosures. Some prefer to read the names
and the funding agencies, but it is sufficient simply to list them in most cases. The last slide often
remains projected the longest, that is, during the question and answer period, thus it is also a good
place to list key references to your work and to provide contact information (e.g., an e-mail address).
Consider adopting a common format/master slide, which enables you to use these slides for different
talks with minor modifications; having a common format (including font sizes for headings versus
text) enables you to insert any slide from a different talk into the present talk with no modification.
Finally, the next to the last slide usually provides a summary of the work or the “take home” mes-
sage. It is best to end on a high note, emphasizing the major findings, rather than listing all of the
limitations or future needs. Address such needs in response to appropriate questions.
Exercise 5.4 Prepare a 15-minute PowerPoint presentation on effective grant writing. Practice the talk, paying particular attention to the time limit. Have two or three peers critique the presentation, then make corrections and repeat the presentation.
Exercise 5.5 Prepare a 15-minute PowerPoint presentation on a technical topic of your choice, but do so in a way that highlights bad presentation skills and personal habits. For example, use different backgrounds from slide to slide, use small fonts, use long detailed quotes, read directly from the slides, and so forth. Exaggeration often provides an important reminder of what not to do.
Exercise 5.6 Prepare a 15-minute PowerPoint presentation on a topic of your choice that addresses an issue having potential ethical consequences. For example, previous students in our classes have discussed embryonic stem cell research, cloning, the use of human subjects in clinical trials, animal research, issues of science and religion, patents, and copyright.
5.2 THE 15-MINUTE PRESENTATION
Seeing your name appear in print on a journal article generally produces a sense of accomplishment
and pride. So too, learning that your abstract or paper has been accepted for a podium presentation
at a national meeting produces a sense of excitement. After the initial euphoria, however, you realize
that you have to find a way to describe in a short period, often 12 to 20 minutes, a project that you
may have worked on for months or years. One is tempted, therefore, to pack as much information
into the talk as possible. Surely the audience will be impressed by how much you did, right? As
noted above, however, you will generally make a much more positive impression if you present less
information well. There is, therefore, a critical need to identify the most important information and
to ensure a logical sequence from identifying the problem to interpreting the results and appreciat-
ing the significance. Similar to writing a technical paper, a good way to start this process is to collect
together all of the figures, images, equations, tables, or other major findings that you may include,
then to prioritize and order them in the most logical fashion. This ordering need not be chronologi-
cal; in many cases it is best to order the talk in the way that makes the most sense in hindsight.
A good rule of thumb is to prepare approximately one slide per allowed minute of presentation, in-
cluding the first (title) and last (acknowledgment) slides, which need not be discussed. Moreover,
each slide should generally highlight one main idea. Again, we emphasize that the first slide after
the title slide should capture the audience’s attention. It is much more effective, for example, to
show a picture or image that motivates the work than to show a bulleted outline noting that you
will introduce the overall problem, describe some of the methods, discuss the results, then draw
conclusions — one expects such an approach. Carnegie (1956) suggests multiple ways to capture
the audience’s attention immediately: “arousing curiosity, relating a human interest story, begin-
ning with a specific illustration, using an exhibit, asking a question, opening with a striking quota-
tion, showing how the topic affects the vital interest of the audience, or starting with a shocking
fact.”
A brief anecdote highlights the importance of the second slide (or first slide when one does
not use a title slide) in a PowerPoint presentation. One of the authors was asked to give the second
technical talk at an anniversary celebration for the college of engineering. The first technical talk
followed directly some brief comments by the president of the university. Out of courtesy, the presi-
dent remained for the first talk because it began immediately following his comments. During the
subsequent question and answer period, however, the president discreetly moved toward the rear of
the auditorium. Yet, as he approached the door, it was evident from the podium that the first slide
of the second talk had captured his attention — the talk began with “This electron micrograph
shows the fine structure of the heart and in particular. . . .” The president remained standing at the
door and listened to the entire 10-minute talk. It is very important to capture the audience’s atten-
tion quickly.
Finishing well is equally important to effective presentations. The conclusion is often that
which the audience remembers best. Although Carnegie (1956) wrote on public speaking in gen-
eral, not technical communication, it is interesting nonetheless to consider his suggestions for end-
ing a talk: “summarizing, restating, outlining briefly the main points you have covered; appealing for
action; paying the audience a sincere compliment; raising a laugh; quoting a fitting verse of poetry;
using a biblical quotation; building up to a climax.” Regardless of approach, ensure consistency be-
tween the opening and closing and try to memorize the ending so that it is thoughtful and forceful.
Remember, too, that two of the best words to end with are “thank you.”
It is important to embrace the question and answer period. Although many speakers tend to
abhor criticism and do not want to be questioned, one can obtain valuable suggestions and guidance
during this exchange. Indeed, many times, one will learn something that will improve the quality
of a subsequent paper that will be written on the topic of the presentation. Three useful guidelines
are: first, repeat the question both to ensure that you address what was really asked and to help the
audience hear both question and answer; second, be respectful even if the questioner is antagonistic
or if the question is truly a “dumb” question; and third, if you do not know the answer to the ques-
tion, say that you do not know. It is best, however, not to answer all questions by stating that you
do not know, hence the need for complete command of your subject. Finally, if a questioner tends
to be unrelenting, suggest that you would enjoy discussing the issue at the next break. Remember,
too, that because the question and answer period can be illuminating to both the speaker and the
audience, finish the presentation early to allow sufficient time for this important exchange.
Although we addressed only the typical 15-minute talk, presentations of other durations
should be treated similarly. Indeed, if you become proficient at “telling your story” concisely, it is
easy to do so for any specified duration. The one caveat, however, is to remember that you should
always present concisely — a longer duration simply means that you should communicate more
information, not that you should communicate the same information less well. Remember, too, that
it is always good to think ahead about which slides to skip if time is running out or if there is an
unavoidable delay or slowdown. In other words, be prepared and be flexible.
Exercise 5.7 Prepare a 30-minute technical presentation, on a topic of your choice, using approximately 30 slides. After having given the presentation to your peers, reduce the presentation to 15 minutes without losing any significant technical content.
5.3 SUMMARY
It has been said that “Everyone has but one story to tell, they merely tell it in different ways to dif-
ferent people.” We reemphasize, therefore, that one must know the intended audience. Rather than
trying to sound scholarly, it is most important to be clear and effective. Avoid jargon; define terms
carefully; read faces in the audience to obtain a sense of their understanding and engagement.
Recall from Chapter 1 that individual differences can bring a freshness and vitality to a field;
individual personalities can generate excitement and interest. Each person should develop a style
that is most effective and natural for him or her. The guidelines presented in this chapter are sim-
ply that, guidelines. We encourage the reader to consider or try the ideas presented here, but more
importantly, to pay close attention to styles and techniques used by different speakers in different
settings. You will be well served to take note of what is most effective and what is most ineffective,
and to adjust your style accordingly.
• • • •
C H A P T E R 6
Authorship
Seeing your name on your first published paper may be one of the most exciting moments in your
career. Many students thus enter a discussion on authorship focusing on the question, “How do I get
my name on a paper?” In our experience, the more important and difficult questions include when
and how to keep your name off a particular paper and how to negotiate questions of authorship
among your collaborators in a multi-investigator project.
Exercise 6.1 If you have authored a journal article, answer the following questions about your experience before proceeding. If not, interview someone who has authored a journal article and report their answers.
1. How did you become an author on your first paper?
2. What was your contribution to that paper?
3. Who decided whose names would appear and in what order?
4. At what point during the research did you first discuss authorship?
5. Did you sign a legal agreement as an author, and if so, to what did you agree?
6.1 THE SLUTSKY CASE
Many widely publicized cases of research fraud, plagiarism, and other forms of misconduct exist in
science and engineering. Discussing these cases often sheds light on important aspects of ethics in
science and engineering. We will take as an example the case of Dr. Robert Slutsky, a member of
the faculty at the University of California–San Diego School of Medicine in the 1980s. While in
many ways similar to other cases of plagiarism or data fabrication, the Slutsky case is unusual for
two reasons: the university committee, formed to investigate allegations of research fraud against
Dr. Slutsky, included a philosopher as well as medical school faculty, and the committee attempted
to draw broader conclusions about this type of fraud. The committee ultimately published its find-
ings in an article in The New England Journal of Medicine (Engler et al., 1987). We briefly review
details of the case below, but this excellent article is so integral to our discussion that it should be
read before proceeding.
Exercise 6.2 Read the journal article regarding the Slutsky case [Engler RL, Covell JW, Friedman PJ, Kitcher PS, Peters RM (1987) Misrepresentation and responsibility in medical research. N Engl J Med 317: 1383–1389] and list the five aspects of the case you find most surprising:
1.
2.
3.
4.
5.
Dr. Slutsky, then associate professor of radiology at the University of California-San Diego,
was being evaluated for tenure when a member of the tenure committee noticed an apparent dupli-
cation of data in two published research papers. The ensuing investigation by a faculty committee
revealed a number of striking facts of interest for our discussion. First, the committee found clear
evidence that Dr. Slutsky reported fictitious experiments and statistical analyses, reported incorrect
procedures and statistical analyses, and listed colleagues as coauthors who did not contribute to the
work and in some cases did not know about the publications. Second, the normal peer review pro-
cess did not detect any of these concerns in the fraudulent papers, and some of the journals refused
to retract the fraudulent papers upon notification of the committee’s findings unless Dr. Slutsky
agreed to the retractions. Third, at one point during the period under investigation, Dr. Slutsky
was publishing a paper every 10 days, including many in prestigious journals. Fourth, much of this
work was apparently sound; the committee established the validity of 77 of the 137 publications
they reviewed and classified only 12 publications as fraudulent. Finally, the investigation revealed
missed warning signs over the course of Dr. Slutsky’s early career: several of his colleagues and at
least one journal editor questioned the validity of manuscripts they read, and some recommendation
letters for his original appointment to the faculty expressed concerns about the validity or quality
of his research.
6.2 BASIC CONVENTIONS
Before discussing common problems regarding authorship, it is helpful to review current conventions.
These conventions will be familiar to practicing scientists and engineers but not necessarily to under-
graduate and graduate students, particularly those who have not yet authored a journal paper.
6.2.1 Order of Authors
The order of authors on an archival journal paper usually has special significance, but conventions
vary by field and occasionally by journal. In most biomedical science and engineering journals, the
first author is usually the one who performed most of the work; this person is often a graduate stu-
dent or postdoctoral fellow who worked on the project described in the publication. Designation as
first author is so important that footnotes are sometimes used to indicate equal contribution by two
or more “first” authors. The last author is typically the senior investigator who conceived, guided,
and financially supported the project. The ordering of all other authors is generally of less signifi-
cance, as we assume that their contributions were less but otherwise important. In stark contrast,
some fields encourage an alphabetical listing of authors. Notwithstanding customary variations by
field and journal, surveys of scientists and engineers reveal widespread disagreement and confusion
regarding conventions for authorship (Bhopal et al., 1997; Tarnow, 1999).
6.2.2 Submission Agreement
Most journals ask the author(s) to sign a submission agreement. Typically, this agreement transfers
copyright to the publisher and asserts that the author(s) will pay any page charges levied by the
journal as part of the publication process. In addition, this agreement usually asks the author(s) to
verify the accuracy of the submitted manuscript and that it has not been published by or submitted
to another journal. Much more variable are policies regarding who must sign the agreement. In many
cases, the corresponding author (i.e., the person who submits the manuscript and lists his/her con-
tact information in the final version) can sign on behalf of all coauthors. This explains, in part, how
Dr. Slutsky submitted some papers without the knowledge of some people he listed as coauthors.
Conversely, some journals require all coauthors to sign the submission agreement; it appears that Dr.
Slutsky subverted this requirement by forging the signature of some coauthors (Engler et al., 1987).
6.2.3 Publication Impact
One’s record of publication is critically important when applying for jobs, grants, awards, or pro-
motion and tenure. In any discussion of authorship, it helps to understand how reviewers evaluate
your published works. Obviously, one important factor is the number of publications, but this is far
from the only consideration. For example, some journals are more selective and more widely read
than others; publications in these journals are typically valued more in an evaluation. Such assess-
ments are subjective, however, because investigators in the same field may have different opinions
on the relative quality of the relevant journals or their ability to assess the quality of a particular
work. For example, a complex mathematical model of a biological process will likely receive a more
rigorous review by a mathematics journal than by a biology journal even though the latter may have
a larger readership. In an attempt to weigh the quality of a journal more objectively, one can define
quantitative metrics. One such metric is the “impact factor,” a measure based on the idea that more
frequent citations of a journal's articles imply a greater impact by that journal on its field. Scientific
information service companies, such as Thomson Scientific, compute and report impact factors
for a wide range of journals (to obtain links, search for “Journal Citation Reports” or “ISI Web of
Knowledge”). These services also track the number of citations of particular publications, and some
reviewers use the number of citations as a surrogate measure of the impact of a publication.
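For orientation only (this is the commonly reported two-year definition, not a formula given in this chapter), the impact factor of a journal for year \(Y\) is computed roughly as

\[
\mathrm{IF}_{Y} = \frac{C_{Y}}{N_{Y-1} + N_{Y-2}},
\]

where \(C_{Y}\) is the number of citations received in year \(Y\) to items the journal published in years \(Y-1\) and \(Y-2\), and \(N_{Y-1}\) and \(N_{Y-2}\) are the numbers of citable items the journal published in those two years. Thus, a journal whose articles from the previous two years are cited, on average, about three times each during the current year has an impact factor near 3.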
Even if one relies on a metric such as impact factor to value a publication, most publications
have multiple authors. The question then becomes, “How much credit should each author receive
for a given publication?” For example, consistent with conventions discussed above regarding the
order of authors, the first and last authors typically receive most of the credit for any biomedical
publication. When someone evaluates your publication record, they will notice not only the number
of publications and the quality of the journals but also how often you appeared first or last in the au-
thor list. Note, however, that even if your name appears last on a publication, implying that you were
the senior author most responsible for the ideas, if a well-recognized senior colleague also appears
on the paper, other scientists may assume that your senior colleague deserves much of the credit
for the ideas. This issue becomes especially relevant in multi-investigator collaborations, which are
more common in today’s research.
6.3 COMMON PROBLEMS
In June 2005, an article in the journal Nature, titled “Scientists Behaving Badly,” reported results
from a survey of more than 3000 NIH-funded scientists regarding the frequency with which they
engaged in a range of questionable research practices (Martinson et al., 2005). While only 0.3%
admitted to falsifying research data within the previous 3 years, many more admitted to some of the
other problems highlighted by the Slutsky case, such as publishing the same data in two or more
publications (4.7%). Particularly relevant to our discussion is that 10% admitted inappropriate as-
signment of authorship within the past 3 years. Strikingly, such misbehavior was more common
among mid-career scientists (12.3%) than early-career scientists (7.4%).
Exercise 6.3 Read the article "Scientists Behaving Badly" [Martinson BC, Anderson MS, de Vries R (2005) Nature 435(9): 737–738] and formulate three hypotheses why mid-career scientists are more likely to engage in admittedly inappropriate behavior than early-career scientists. Compare and discuss your hypotheses with a colleague.
6.3.1 Expectations
The importance placed on publications as a measure of career progress can create substantial pres-
sure to publish, particularly for tenure-track junior faculty. Managing this pressure begins by devel-
oping clear and reasonable expectations.
Exercise 6.4 For Ph.D. students and postdoctoral fellows. Answer the following questions regarding the number of publications you expect a junior faculty member to produce in your field. First, estimate the number of publications one might produce (or you did produce) during doctoral study. Next, estimate the number of publications one might produce (or you did produce) during 3 years of postdoctoral research. Finally, estimate the number of publications a successful junior faculty member in your field should produce during the first 5 years of his/her career. Now, perform two different "reality checks" on your estimates:
1. First translate your estimates of productivity into rates (number of papers per year, which may be < 1), noting that most papers tend to be produced near the end, not beginning, of one's study. Then, use these rates to compute how many graduate students and/or postdoctoral fellows your model junior faculty member would need to employ if each paper was coauthored by only one student or fellow (a small sketch of this calculation follows the exercise). Do you think these numbers are reasonable? Which of your estimates would you adjust based on this check?
2. As a second check, ask a senior faculty member in your department to give the same three estimates. How do they compare to your estimates? If possible, discuss any discrepancies with a senior colleague.
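A minimal sketch of the first reality check, in Python; every number below is an illustrative assumption to be replaced with your own estimates, not data from this chapter:

    # Illustrative estimates (assumptions only; substitute your own numbers).
    papers_phd, years_phd = 3, 5           # papers produced during doctoral study
    papers_postdoc, years_postdoc = 4, 3   # papers produced during postdoctoral research
    papers_goal, years_faculty = 15, 5     # papers expected in the first 5 faculty years

    rate_phd = papers_phd / years_phd              # papers per student-year
    rate_postdoc = papers_postdoc / years_postdoc  # papers per fellow-year
    required_rate = papers_goal / years_faculty    # papers per year to meet the goal

    # If each paper is coauthored by only one trainee, how many trainees are needed?
    print(f"Required output: {required_rate:.1f} papers/year")
    print(f"Roughly {required_rate / rate_phd:.1f} Ph.D. students, "
          f"or {required_rate / rate_postdoc:.1f} postdoctoral fellows")

With these particular assumptions, the model junior faculty member would need about five Ph.D. students or about two postdoctoral fellows working simultaneously, which is one concrete way to judge whether the original estimates are reasonable.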
One thing that seems apparent in the case of Dr. Slutsky is an unrealistic expectation (or
perception of external expectations) regarding productivity. No reasonable person expects a junior
faculty member in any field to produce a paper every 10 days. Yet, Dr. Slutsky apparently felt pres-
sure to improve upon the number of valid publications (at least 77 in 7 years according to the au-
thors of the report in the New England Journal of Medicine) through various types of research fraud.
Clearly, there is a need for open discussion of authorship and productivity with everyone involved,
from students to advisers to department chairs. Only in this way can we develop and clearly express
realistic expectations regarding the number and quality of publications.
6.3.2 Gift, Guest, and Ghost Authorship
Gift authorship entails granting authorship to a person who did not contribute directly to the work
(Davidoff, 2000). As an example, a new trainee discovered upon her arrival in Dr. Slutsky’s labora-
tory that she was an author on a paper she knew nothing about (Engler et al., 1987). Why would
someone do this? Misplaced generosity could be one motivation — colleagues may believe they are
doing you a favor by listing you as an author on a publication. Another possible reason could be the
pressure to show productivity by trainees who are supported by certain types of grants. Regardless,
gift authorship could associate your name with a fraudulent paper, as in the Slutsky case. Cases of
research fraud are rare, however; embarrassment is a more likely concern if you consider the paper
to be of poor quality, you disagree with its conclusions, or you are forced to admit (e.g., during the
question and answer period after a scientific talk or during a discussion with a colleague you respect)
that you did not contribute to the work.
One of the most difficult situations related to authorship is receiving an unwanted gift au-
thorship, especially if you are a junior colleague of the person conferring it. What choices did the
trainee in Dr. Slutsky’s laboratory have when she was told about the gift authorship? The paper
was already published, thus any change of authorship would have involved admitting the situation
to the journal. Asking your new boss to admit publicly to conferring gift authorship may not be a
good way to begin your research career, nor is contacting the journal directly and triggering an in-
vestigation. Few would suggest driving a scientist from his faculty position for an isolated incident
of gift authorship. Yet, some early action might avert subsequent, more serious problems.
Two points can be made here. First, there exists an anonymous procedure at most universi-
ties and companies for seeking advice if you encounter a situation such as gift authorship; typically,
an officially designated ombudsman will help you resolve conflicts and difficult situations. Second,
an adviser who puts you in a precarious situation is likely not the right adviser for you. The conse-
quences of confronting a situation like this early are not likely as bad as they seem, while the conse-
quences of avoiding confrontation are likely much worse.
Studies of scientific authorship often define guest authorship separately as listing a colleague
who did not contribute directly to a paper in the hope that his or her reputation will enhance the
odds of acceptance for publication (Davidoff, 2000). Combining gift and guest authorship into a
single category, termed honorary authorship, Flanagin et al. (1998) surveyed the authors of papers in
six major biomedical research journals (including Annals of Internal Medicine, JAMA, and the New
England Journal of Medicine) and found that 19% of those publications had evidence of honorary
authorship. They also found that 11% had evidence of ghost authorship, defined as the omission of
an author who contributed significantly to the publication. Ghost authors may be junior colleagues
who simply did not receive the credit they deserved, but they may also be professional medical
writers hired to write articles anonymously or even representatives from companies with a financial
interest in the findings who wish to hide their involvement.
A 1993 survey of postdoctoral research fellows at the University of California–San Francisco
suggests even higher rates of inappropriate practices: 38% of the respondents thought that at least
one coauthor on their papers was undeserving, while 20% thought they were excluded on at least
one paper for which they deserved authorship (Eastwood et al., 1996). One of the most telling
results of this particular survey was evidence that trainees who have unfavorable initial experiences
with authorship lose faith or interest in the integrity of the system. Overall, 32% of the fellows
surveyed said they would be willing to list an undeserving author on a paper if it would enhance
the probability of publication or otherwise benefit their career; that number jumped to 72% among
those who reported a previous adverse experience with authorship.
6.3.3 Financial Support
Accepted practice regarding authorship varies by field and by culture. While relatively rare, some
strongly hierarchical departments expect that the chair of the department should be listed as an au-
thor (possibly even senior author) on every paper, regardless of contribution. Such an environment
may pose a challenge for younger investigators who disagree with the policy, especially if following
the expected procedure weakens their own publication records by preventing them from assuming
senior authorship on their own work. It is important to recognize and discuss cultural variations
when working in a group composed of colleagues who trained under different systems and when
collaborating internationally.
6.3.4 Quid Pro Quo
Nearly everyone agrees that gift authorship is wrong, yet there are many related cases where a col-
league who has contributed to a study in some way requests or expects authorship in return. The
most common situation involves valuable resources such as antibodies or transgenic mice. Consider
a situation where an investigator devotes significant time and energy developing such a resource,
then publishes a paper describing it. Colleagues then ask for access to the resource for studies they
wish to conduct. It is not uncommon for the investigator to offer to provide access in exchange for
authorship on the resulting paper(s).
This basic situation has unlimited variations. At one extreme, a request for a resource can
lead to a genuine collaboration on a new study that is reflected accurately in coauthored papers de-
scribing the results. At the other extreme, however, the situation can approach scientific extortion,
with the original developer of the resource demanding authorship in exchange for access, knowing
few colleagues will deny the request due to the substantial time and effort required to replicate the
resource. While many who disagree with such arrangements accept them as a fact of life, some de-
fend the practice, regarding authorship on related papers as appropriate reward for developing the
resource.
Most believe that the appropriate reward for any innovation, whether a new equation, method,
antibody, or transgenic mouse, is citation, not authorship. Colleagues who employ the innovation
cite the original publication, giving appropriate credit to its originator. In the case of an equation
or its solution, the original paper contains everything colleagues need. In the case of a transgenic
mouse, the original paper contains only a description of how to generate such a mouse. Is it reason-
able to expect the scientist who first generates the mouse to send mice to any colleague who requests
them? Does the answer change if federal or state resources funded the original development of the
mouse, as with most biomedical research? These and related questions about access to resources and
data from publicly funded science are currently a topic of vigorous discussion in the scientific com-
munity; they are explored again in Chapter 8.
6.3.5 Students and Technicians
We have highlighted some common problems related to authorship beginning with the simplest
and least controversial and proceeding to the more complex and controversial (and therefore inter-
esting). Next, we consider the key question of who should or should not be an author on a particular
paper or, to generalize the problem, of exactly what qualifies someone to be an author. Before pro-
ceeding, use the following exercise to define better what you think should be considered in making
decisions about authorship.
Exercise 6.5   List up to five minimum criteria needed to justify authorship on a scientific or engineering paper. According to your criteria, would a laboratory technician or an undergraduate
student who orders supplies and prepares samples qualify as an author on papers produced by the
laboratory? What about a technician or student who runs tests according to instructions and turns
over the data for analysis? What about one who runs tests, analyzes data, and makes a figure for the
paper but does not write any of the text?
1.
2.
3.
4.
5.
As this exercise illustrates, it is remarkably difficult to articulate general guidelines for author-
ship that provide practical guidance. Common responses to this exercise are that each author should
make a “significant” contribution to the work, that each author should make an “essential” contribu-
tion to the work, or that each author should make an “intellectual” contribution to the work. This
last point illustrates general agreement that a student or technician who simply prepares samples or
collects data without a true understanding of the project should be acknowledged, not listed as an
author. Nevertheless, none of these statements provides practical guidance.
To increase our appreciation of this situation, it is useful to consider the contribution of a
potential author against the backdrop of what is required to produce a paper. First, one must gener-
ate an idea or identify a problem, then plan an approach to address the problem. Next, one must
perform the study and collect the data or solve the equations. Analysis and interpretation of the
data or results then precedes writing the paper, which typically requires a comparison to previous
related findings. The example of a student or technician who only collects data or runs a computer
code as instructed suggests that an author should be involved in more than one aspect of the study;
if that person also analyzes data and summarizes the results for the paper, the claim to authorship
would be stronger. Requiring involvement in multiple aspects of a study would limit the quid pro
quo arrangements discussed above to cases where involvement went beyond providing a particular
resource. It seems reasonable to stop short of requiring every author to participate in every phase,
however. For example, most investigators would support authorship for a person who joined a group
after the study was conceived and planned but otherwise was involved deeply in all aspects of a
study.
The concept that all authors should be involved in multiple aspects of a study (e.g., design,
experiment, analysis, interpretation, or writing) seems reasonable. Nevertheless, your list from Ex-
ercise 6.5 likely includes additional criteria. Must every author understand everything in the paper?
Must every author read the final version before submission? Recalling that some of Dr. Slutsky’s
coauthors experienced the stigma of being authors on fraudulent papers, should every author review
the original data that form the basis for the conclusions? Each investigator must wrestle with these
questions over the course of a career; your answers to these questions may well evolve with experi-
ence. It is important to think carefully about these issues early in your career so that you can develop
practices consistent with the ethical standards you set for yourself.
6.4 CURRENT STANDARDS AND EMERGING IDEAS
Many people have thought about ways to improve upon practices used to define authorship in the
archival literature. In particular, some professional societies and journals have introduced simple
practices that reflect more accurately the contributions of those involved in a publication. These
practices also have the beneficial effect of forcing increased discussion among coauthors on issues
related to authorship.
6.4.1 International Committee of Medical Journal Editors Standards
The International Committee of Medical Journal Editors (ICMJE) evolved from meetings that be-
gan in 1978 to establish guidelines for the format of manuscripts submitted to medical journals. This
group regularly revises and disseminates the document “Uniform Requirements for Manuscripts
Submitted to Biomedical Journals: Writing and Editing for Biomedical Publication,” available at
their Web site, www.ICMJE.org. This Web site also lists journals that adopted these standards. The
uniform requirements continue to provide guidelines on style and format for articles in biomedical
journals and also guidelines on ethical aspects of writing and reviewing journal articles.
The uniform guidelines provide criteria for deciding who should be an author on an archival
paper. Although these guidelines are similar to principles discussed in the previous section, they
have provoked objections from many scientists who believe them to be too strict (Bhopal et al.,
1997); they are also rarely enforced, even by journals that claim to have adopted them (Davidoff,
2000). Under the ICMJE guidelines, all authors must meet the following three criteria:
1. Substantial contributions to conception and design OR acquisition of data OR analysis and interpretation of data.
2. Drafting the article OR revising it critically for important intellectual content.
3. Final approval of the version to be published.
In general, it is difficult to assess whether authors follow (or are even aware of ) these criteria when
submitting articles to journals that have formally adopted them. As discussed in Section 6.4.3, one
study that attempted to test compliance with these guidelines found that only 56% of authors of
articles in a prestigious journal that subscribes to the guidelines actually fulfilled them (Yank and
Rennie, 1999).
Finally, as another example, the American Heart Association (AHA) publishes multiple out-
standing scientific journals dealing with cardiovascular health and disease. Among other things, the
AHA form entitled “Authorship Responsibility and Copyright Transfer Agreement” stipulates that
to qualify for authorship, one must have participated in one or more of the following:
• conceived and designed the research
• acquired the data
• analyzed and interpreted the data
• performed statistical analyses
• handled funding and supervision
• drafted the manuscript
• made critical revisions of the manuscript for important intellectual content
Other journals continue to require a simple statement that all authors contributed to the work and
agree to its submission for consideration for publication (recall Section 3.1.1).
6.4.2 Author Notification
One of the simplest recent innovations is that many conferences and journals now require the
submitting author to provide e-mail addresses for all authors, who are notified electronically of the
submission of an abstract or manuscript. While no coauthor should ever learn of a submission for
the first time through such an e-mail, this is not an infrequent occurrence. Notification allows an
investigator who was unaware of a submission to raise objections while the abstract or manuscript is
under review, rather than being forced into the much more difficult position of addressing the issue
after acceptance or publication. Notification may also increase the odds that the submitting author
will discuss the submission with all coauthors in advance to avoid surprising colleagues. Electronic
notification is not a foolproof defense against those who are willing to forge the names of coauthors
on a submission agreement. Those intending to deceive could easily construct false e-mail accounts
for coauthors, but at least this would require more effort than simply forging a signature.
6.4.3 Specifying Contributions
A more radical approach is to discard the traditional premise that all authors bear equal responsi-
bility for the content of an archival paper. Instead, some journals now ask authors to specify their
contributions to an article at the time of submission. In theory, responsibility for integrity of the
research partitions accordingly, with authors only responsible for ensuring the validity of their work.
In addition, most journals require at least one author to declare responsibility for oversight of the
entire article.
Specifying individual contributions simplifies attribution of responsibility or blame. It could
also allow societies or journals to impose more uniform standards for authorship. For example,
a journal could refuse authorship to anyone unwilling to take responsibility for more than one
aspect of a publication. Partitioning responsibility may prove the only practical solution for large
multi-investigator projects. Nevertheless, this approach changes the traditional understanding of
an archival publication and meaning of authorship. It could have the disadvantage of weakening
scientific collaborations, as papers increasingly become a compendium of individual miniprojects.
Such a weakening is certainly contrary to what most of us envision when we discuss the need to
foster more and better multidisciplinary collaboration on today’s increasingly complex scientific and
engineering problems.
One of the first journals to ask authors to specify their contributions as part of the submis-
sion process was the medical journal The Lancet, a signatory to the ICMJE Uniform Requirements
for Manuscripts Submitted to Biomedical Journals discussed in Section 6.4.1. During the first 6
months after authors began specifying contributions, Yank and Rennie (1999) studied the reported
contributions with three goals: to determine how author contributions related to position in the
author list; to determine whether self-reported author contributions fulfilled the ICMJE guidelines;
and to determine the degree of overlap between the contributions of those listed as authors and
those listed in acknowledgments. They made the generous assumption that all authors read and ap-
proved the final version (ICMJE criterion 3), but they found that only 56% of authors fulfilled the
other two criteria. Specifically, 78% of authors reported participating in conception, design, analysis,
or interpretation (ICMJE criterion 1), a finding that was consistent for those who were listed first,
second, third, or last in the author list. By contrast, 65% reported participating in writing or revis-
ing the paper (ICMJE criterion 2), with a range from 84% for the first author to 54% for the third
author. The Yank and Rennie study contains other interesting findings, hence we recommend this
study as a basis for a journal club or group discussion on authorship.
6.4.4 Quantifying Contributions
A natural response to uncertainty, especially among scientists and engineers, is to introduce quan-
titative measures. In addition to specifying what each author did, some have advocated specifying
each author’s percent contribution to the overall work. This is probably most common during tenure
evaluation, when a junior professor under consideration for tenure estimates his/her percent con-
tribution to each published paper. This is a difficult question to answer, especially during a tenure
evaluation, because the desire to report strong contributions for yourself may tempt you to devalue
the contributions by your coauthors. Typically, a group of collaborators who estimate the contribu-
tion of each group member produce percentages greater than 100% unless some mechanism (such
as an interactive form or pie chart) constrains the total.
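As a simple illustration of such a constraint, consider the following sketch (ours, not any journal's actual form; the author names and percentages are invented), which rescales self-reported contributions so that they total 100%:

def normalize_contributions(reported):
    """Rescale self-reported percent contributions so that they sum to 100."""
    total = sum(reported.values())
    if total == 0:
        raise ValueError("at least one contribution must be nonzero")
    return {name: 100.0 * value / total for name, value in reported.items()}

# Three coauthors whose independent estimates sum to 130%.
claims = {"Author A": 60, "Author B": 40, "Author C": 30}
print(normalize_contributions(claims))  # roughly 46%, 31%, and 23%

Forcing the estimates onto a common scale makes the negotiation explicit: one author can claim a larger share only by reducing someone else's.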
Another quantitative approach appeared in the biostatistics literature, reflecting the unique
role played by many statisticians in research. Statisticians may be involved in design and analysis
for many different studies but not directly involved in collecting data, performing experiments,
or writing papers for any of those studies. This “specialist” role makes it difficult to apply typical
criteria for authorship. As a possible solution to this problem, Parker and Berman (1998) proposed
a scoring system to help decide when statisticians should or should not be listed as authors. Their
system divides the statistician’s role into three phases of a research project (design, implementa-
tion, and analysis) and requires for authorship either a deep involvement in two of the phases or a
deep involvement in one and moderate in the other two. They also propose that it is unreasonable
to hold a statistician who is listed as an author responsible for the integrity of the entire published
article.
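To make the flavor of such a scoring system concrete, the following sketch encodes a decision rule of the kind Parker and Berman describe; the three phases and two levels of involvement follow the text above, but the exact encoding is our illustration, not their published scoring sheet.

def statistician_merits_authorship(involvement):
    """Apply a Parker-and-Berman-style rule to a statistician's involvement.

    involvement maps each phase ("design", "implementation", "analysis")
    to a level: "none", "moderate", or "deep".  Authorship requires deep
    involvement in two phases, or deep involvement in one phase plus at
    least moderate involvement in the other two.
    """
    phases = ("design", "implementation", "analysis")
    levels = [involvement.get(phase, "none") for phase in phases]
    deep = levels.count("deep")
    moderate_or_better = sum(level in ("moderate", "deep") for level in levels)
    return deep >= 2 or (deep == 1 and moderate_or_better == 3)

# Deep in analysis, moderate in design and implementation: qualifies.
print(statistician_merits_authorship(
    {"design": "moderate", "implementation": "moderate", "analysis": "deep"}))  # True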
6.5 OUR APPROACH
As is common when discussing interesting ethical issues, we raised many more questions than we
answered in this chapter. What is most important is that each person utilizes cases and questions
such as those presented herein, as well as discussions with advisers and senior colleagues, to establish
individual principles about authorship early in a career. It is impossible to do what you think is right
if you do not know what you think is right. Once you establish your principles, the question remains
of how best to put them into practice. In this section, we offer some of our own experiences as ex-
amples of how to apply a set of principles to the everyday practice of science and engineering.
6.5.1 Authorship Criteria
In our own groups, we expect that all authors on a paper should be involved in more than one aspect
of a study, should agree to be listed as an author, and should be given a chance to contribute directly
to the final version of the manuscript before submission. Ideally, each new group member and each
new collaborator should discuss these criteria at the outset. At the very least, all members of the
group working on a particular project must discuss issues of authorship before submitting the first
abstract or publication related to that work. This is easiest to accomplish when all authors work at a
single location and most difficult when the publication involves collaborators from different depart-
ments or institutions. Fortunately, the Internet and track-changes features in most word process-
ing applications enable all coauthors to contribute directly to developing manuscripts regardless of
physical location.
6.5.2 Predraft Group Meeting
In our experience, one of the simplest and most useful ideas is to convene a meeting of all poten-
tial authors to review findings and interpretations as well as to agree on authorship before writing
an abstract or manuscript. The senior investigator who is funding or driving the project calls the
meeting, inviting all contributors who potentially satisfy the criteria for authorship. In cases of
coauthors from multiple locations, Web conferencing or teleconferencing becomes a vital resource.
At the meeting, each contributor presents results to the group and answers questions. Then, the
group discusses proposed figures, the proposed author list, and the choice of journal for submission.
Notwithstanding the effort required to bring everyone together for an hour or two, this approach
allows all potential authors to gain confidence in the validity of the studies, to ask questions and
comment on the results and their importance, and to voice any concerns about the content of the
paper, interpretation of the results, or author list before the bulk of the writing begins. This ap-
proach also helps improve the paper by subjecting the results to a round of “internal review,” helps
graduate students and fellows practice oral presentation skills, and helps strengthen relationships
among collaborators.
6.5.3 Final Review and Approval
Once a manuscript has become a final draft, it is essential for all authors to review and approve
the draft before submission. This is also an appropriate time to settle final questions of authorship,
especially if no previous discussion has taken place. One reasonable approach is to list as authors on
the draft those colleagues you believe merit authorship, but to include in the distribution list other
people who have made some contribution and may feel they should be authors. Ask each recipient
whether they feel they deserve to be an author on the paper (or whether they agree with the pro-
posed author list) and whether they have any comments or suggestions for the manuscript before
submission; follow up with those who do not reply. Like the predraft group meetings, this step en-
sures that all authors are aware of the content of any publication bearing their names and provides a
round of internal review to improve the manuscript before external peer review.
Most investigators basically agree on the rules of authorship and are willing to follow them.
Inappropriate attribution of authorship usually reflects someone succumbing to real or perceived
external pressures or simply not giving the matter sufficient attention, rather than attempting to
deceive. In general, our experience with regard to questions of authorship has been heartening. In
most cases where a claim to authorship appeared marginal to us, our colleagues have responded
to our question of whether they want to be an author, as we would have hoped, by stating that
their contribution merits an acknowledgment rather than authorship. Many have provided help-
ful comments on a draft even after stating that they did not wish to be listed as authors. Perhaps
surprisingly, our most difficult experiences have typically involved refusing authorship offered by a
colleague rather than denying authorship to a colleague.
6.5.4 Default Position for Abstracts
The process described above is time-intensive. A confounding situation that can arise, therefore,
is the last-minute abstract submitted for presentation at a technical meeting. Such abstracts
are short and typically have a fixed deadline for submission, thus they are often written just before
the deadline. On such short notice, the collaborators involved in a particular study may not be
available to meet to discuss the abstract or even to read, revise, and approve the final submission.
In such cases, it is best to agree ahead of time on a “default” position for last-minute abstracts — if
contributors cannot be reached to review an abstract on short notice, do they prefer to be listed as
an author and review the abstract after submission or do they prefer to be left off the author list?
We recommend the latter approach, for it is dangerous practice to include authors who have not
read, revised, and approved the abstract before submission. We also note that it is appropriate to
delay submission when a coauthor cannot be reached; there will always be other meetings and thus
other opportunities.
• • • •
CHAPTER 7
Recordkeeping
Scientists and engineers must keep records of their work, using a combination of laboratory note-
books, images, file folders, and electronic data. Similarly, clinicians must record each step of di-
agnosis and treatment in a patient’s medical records. Although keeping precise records may seem
mundane, those records are central to many important decisions in science, engineering, medicine,
and public policy.
Exercise 7.1   Based on your experience in a research laboratory, or a laboratory course if you have not yet worked in a research laboratory, answer the following questions before continuing:
1. Did you maintain a laboratory notebook?
2. If yes, what instructions were you given about what to record?
3. If no, where did you record information related to the work?
4. Did your supervisor review your notebook or records? If yes, how often?
5. If someone tried to reconstruct your work from these records, what percentage could they reconstruct without your help?
If possible, compare your answers to those of a colleague who has worked in the pharmaceutical or
medical device industry. It is likely that your answers will differ substantially; discuss the most likely
reasons for this.
7.1 THE SLUTSKY CASE REVISITED
In Chapter 6, we considered the case of research fraud by Dr. Robert Slutsky, as described in a 1987
article in the New England Journal of Medicine (Engler et al., 1987), and we asked what aspects of
this case were most surprising. In response to this question, many cite the following paragraph from
the section entitled “What is Fraud?”
After due consideration of what requirements and standards applied, the . . . commit-
tee adopted the position that the ethos of scientific research requires that hypotheses
be validated before they can be accepted and that claims to observation be open to
scrutiny by peers. The legal principle of “innocent until proved guilty,” which might
be rephrased as “assume correct until proved wrong,” does not apply to scientific work;
the burden of proof remains with those claiming new findings. Thus, the authors of a
scientific publication that is reasonably alleged to be fraudulent bear the responsibility
for establishing the accuracy of their results.
Engler et al. (1987)
This excerpt should be sobering to anyone involved in research or development. Most of us
have lost records to a computer crash, accidentally overwritten a file, lost a notebook, discarded old
data, or at times kept less than complete records. If the supporting data are missing and the burden
of proof against an allegation of fraud lies with the researcher, an anonymous accusation of fraud
from a disgruntled colleague, employee, or student could be enough to support a finding of research
fraud and end a career.
The proposition that the burden of proof lies with the researcher raises two questions. First,
do you agree with the argument that the nature of science and engineering should place the burden
of proof on the investigator, or should it rest (as in criminal law) with the accuser? Second, is the
burden of proof actually placed on the researcher in current practice? While the first question pro-
vokes interesting discussions in any room of scientists or engineers, most are surprised to learn that
the answer to the second question is a resounding yes.
The Office of Research Integrity (ORI) of the U.S. Department of Health and Human Ser-
vices (http://ori.hhs.gov/) performs a range of functions designed to maintain the scientific integrity
of biomedical and behavioral research funded by the U.S. Public Health Service (PHS). One of these
functions is investigating and issuing reports on scientific fraud or misconduct involving PHS grants.
Moreover, to heighten awareness of the importance of scientific integrity, the ORI publishes Findings
of Scientific Misconduct (i.e., brief reports summarizing each case and its outcome) on their Web site
and within weekly electronic mailings on funding opportunities distributed by the NIH. A review of
past cases demonstrates that the burden of proof against an allegation of research fraud does indeed
rest with the researcher. As an example, we reproduce below, in its entirety, a Finding of Scientific
Misconduct issued in 2000. Particularly relevant to our discussion is that the ORI found that “Dr.
Duan . . . engaged in scientific misconduct by reporting research that was inconsistent with original
data or could not be supported because original data were not retained,” even though “Dr. Duan de-
nies all allegations of scientific misconduct and contends that some of his original data is missing.”
Exercise 7.2   Read and discuss with a colleague the following Finding of Scientific Misconduct.
What aspects of this report do you find surprising? What impact did the sanctions likely have on Dr.
Duan’s career? Do you agree with the practice of publicly distributing these findings and naming the
researcher involved? What impact might the fraud in this case have had on other researchers, doctors,
or patients? Given the impact of the fraud, was the severity of the imposed sanctions appropriate?
FINDINGS OF SCIENTIFIC MISCONDUCT
Release Date: June 27, 2000
NOTICE: OD-00-043 Department of Health and Human Services
Notice is hereby given that based on oversight by the Office of Research Integrity
(ORI) and decision by the Assistant Secretary for Health, the U.S. Public Health Ser-
vice has taken final action in the following case: Lingxun Duan, M.D., Thomas Jeffer-
son University: The U.S. Public Health Service (PHS) alleges that Dr. Duan, former
Research Assistant Professor of Medicine, Division of Infectious Diseases, Depart-
ment of Medicine, Jefferson Medical College, Thomas Jefferson University, engaged
in scientific misconduct by reporting research that was inconsistent with original data
or could not be supported because original data were not retained.
The research in question was supported by a National Institute of Allergy and Infec-
tious Diseases (NIAID), National Institutes of Health (NIH), grant, R01 AI36552,
entitled “Intracellular antibodies and HIV 1.” Specifically, the research in question was
reported in an NIAID, NIH, grant application; in an FDA-approved phase I gene
therapy investigational new drug (IND) application entitled “Intracellular immuniza-
tion against HIV-1 infection using an anti-rev single chain variable fragment (SFV);”
and in two publications: (1) Duan, L., Bagasra, O., Laughlin, M.A., Oakes, J.W., &
Pomerantz, R.J., Potent inhibition of human immunodeficiency virus type I replica-
tion by an intracellular anti-Rev single chain antibody, Proc. Natl. Acad. Sci. USA
91:5075–5079, 1994; and (2) Levy-Mintz, P., Duan, L., Zhang, H., Hu, B., Dorna-
dula, G., Zhu, M., Kulkosky, J., Bizub-Bender, D., Skalka, A.M., and Pomerantz, R.J.,
Intracellular expression of single-chain variable fragments to inhibit early stages of the
viral life cycle by targeting human immunodeficiency virus type 1 integrase, J. Virol.
70:8821–8823, 1996.
Dr. Duan denies all allegations of scientific misconduct and contends that some of his
original data is missing. Both Dr. Duan and PHS are desirous of concluding this mat-
ter without further expense of time and other resources. Thus, Dr. Duan has entered
into a Voluntary Exclusion Agreement (Agreement) with PHS, in which Dr. Duan
has voluntarily agreed:
(1) to exclude himself from any contracting or subcontracting with any agency of the
United States government and from eligibility for, or involvement in, nonprocurement
transactions (e.g., grants and cooperative agreements) of the United States Government
as defined in 45 C.F.R. Part 76 for a period of two (2) years, beginning on June 7,
2000;
(2) that for a period of one (1) year after the conclusion of the voluntary exclusion pe-
riod, any institution that submits an application for PHS support for a research project
on which his participation is proposed or that uses him in any capacity on PHS sup-
ported research, or that submits a report of PHS funded research in which Dr. Duan is
involved, must concurrently submit a plan for supervision of his duties to the funding
agency for approval; the supervisory plan must be designed to ensure the scientific in-
tegrity of Dr. Duan’s research contribution, and the institution must also submit a copy
of the supervisory plan to ORI;
(3) to exclude himself from serving in any advisory capacity to PHS, including, but not
limited to, service on any PHS advisory committee, board, and/or peer review commit-
tee, or as a consultant for a period of two (2) years, beginning on June 7, 2000;
(4) that he will not oppose the submission to journals of a statement summarizing the
current state of the science with respect to the scientific matters at issue relating to
grant R01 AI36552, which has been jointly agreed to by Thomas Jefferson University
and the United States of America.
FOR FURTHER INFORMATION CONTACT: Acting Director, Division of
Investigative Oversight Office of Research Integrity 5515 Security Lane, Suite 700
Rockville, MD 20852 (301) 443-5330
7.2 WHY KEEP RECORDS?
Accurate records are central to any investigation of scientific misconduct, yet such investigations are
rare. Not surprisingly then, defense against an accusation of misconduct is not the primary reason
researchers keep records, and this potential concern should not dominate our discussion of record-
keeping. A discussion of what records to keep and how best to do so begins with a consideration of
what information will be needed in the future and why.
Exercise 7.3   First, list reasons why physicians write information in a medical chart. Compare your list with one or more colleagues and add to your list as needed until you believe it is complete.
Second, make a similar list of reasons that researchers at a medical device company record infor-
mation in laboratory notebooks. Compare this list with your list for medical charts; how many of
the reasons for keeping records appear on both lists? Third, list reasons why a researcher work-
ing in academia records research methods or findings. Are there any reasons unique to this third
list?
7.2.1 Medical Records
Although your list may differ, commonly cited reasons for writing in a medical chart are immediate
transfer of information, long-term transfer of information, training medical students and residents,
and legal documentation. Examples of immediate information transfer include a physician writing
an order in a chart that another member of the hospital staff must execute later in the day, or a resi-
dent who is called in the middle of the night to examine a patient deciding an appropriate course of
action based in part on his or her review of the patient’s chart. Because many different people come
in contact with each patient during a typical day in a hospital, a smooth transfer of information can
literally be the difference between life and death.
Availability of an accurate longer-term medical history can be equally important to a patient’s
health. Diagnosing and treating a patient often depends critically on details of that person’s medical
history: previous illnesses and surgeries, current medical problems and medications, allergies, and so
forth. Few patients will remember, or even know, all the details of their own medical history, and few
physicians can remember the complete histories of patients under their care. Consequently, a written
record of each patient’s medical history is not only essential to the accurate exchange of information
between physicians, it is also critical as an accurate, detailed substitute for each physician’s memory.
Perhaps less obvious, good recordkeeping can be useful in training medical students, nurses,
and other health care professionals. Most entries in medical records have very specific formats.
Learning and using these specific formats is integral to learning the thought process associated with
medicine. One usually records a detailed medical history and results from a physical examination on
a form that lists standard questions and aspects of the examination. Recording the same informa-
tion for each patient helps students learn the essential components of a good examination; they soon
begin to ask the questions and perform the examination in the same order each time, which helps
ensure that they do not miss anything. Another common entry in hospital charts is the SOAP note,
an acronym for “subjective, objective, assessment, and plan.” Organizing daily updates under these
four headings encourages a particular thought process: gather the information, think about what it
means, then decide what to do.
Finally, it is no surprise that medical charts serve as an important legal record of what hap-
pened to a particular patient and why. In fact, most respondents to Exercise 7.3 place this first in
their list. Unfortunately, many increasingly view this legal function as conflicting with the training
function discussed above. Many hospitals no longer allow medical students to write in a patient’s
chart for fear that an erroneous assessment or plan, even if corrected later by the supervising physi-
cian, could increase vulnerability in a lawsuit.
7.2.2 Industry Research Records
Your second list from Exercise 7.3 may not differ much from the first list. Laboratory notebooks
kept by employees of a medical device company serve many of the same functions as a medical chart.
Multiple technicians might record results from a series of tests for review and compilation by their
supervisor the next day (short-term transfer of information); technicians may consult their records
when performing the same tests a month later to make sure they set them up exactly the same way
(long-term transfer of information). Asking new employees to follow a specific structure for re-
cording data from a particular test can help them learn how to perform that test (training). Finally,
approval of new drugs or devices by the U.S. Food and Drug Administration requires stringent
recordkeeping (legal documentation); such records are thus essential for the survival of pharmaceu-
tical and device companies.
If you or a colleague with whom you discussed Exercise 7.3 has worked in industry, the topic
of cosigning likely surfaced in your list. Most industrial research facilities require a supervisor to re-
view and cosign laboratory notebooks at the end of each day. This requirement can assist many of the
functions of recordkeeping discussed above. If a test result is surprising, a supervisor can learn about
the result and take appropriate action immediately: check the equipment, schedule a repeat test, or
discuss the findings with his or her boss (information transfer, quality control). Daily review also
provides an excellent opportunity for feedback on how best to perform the test or record the results.
Finally, for companies that depend critically on regulatory approval of their products, cosigning not
only helps ensure proper performance, it also properly documents all tests and procedures (legal).
7.2.3 Academic Research Records
By now, the pattern should be apparent; recordkeeping serves similar functions in diverse disciplines
and settings. Academic researchers use records to transfer information between members of a group,
as a long-term record of what was done and how it was done, to help train students to perform and
record their work, and as a legal record. Because most academic researchers are more interested in
publishing journal articles than protecting themselves against product liability suits, the need for
long-term documentation tends to dominate in academic practice. Nevertheless, answers to Exer-
cise 7.1 usually reveal that many research groups do not keep adequate records, even to meet the
basic goal of documenting what research was performed, when, and by whom. One of many inter-
esting findings that emerged from a 1993 survey of postdoctoral research fellows at the University
of California–San Francisco was that fellows with an M.D. degree were significantly more likely to
keep laboratory records in ink in a permanently bound research notebook than were those with a
Ph.D. degree (Eastwood et al., 1996).
Exercise 7.4   Design a recordkeeping policy for your research group. What should be recorded and where? Should cosignatures be required? If yes, who should cosign and how often? Should cosign-
ing or other rules of recordkeeping differ for different members of the group (e.g., undergraduates,
graduate students, postdoctoral fellows)? How should new members who join the group be instructed
in keeping records? Who should be responsible for ensuring that rules are followed? What should be
the consequences for a group member who fails to keep appropriate records? What should happen to
a member’s records when they graduate or leave the group? Finally, are any special rules needed for
electronic data? How does your policy compare to your own current practices in your research?
7.3 ELECTRONIC DATA
Consider a spreadsheet or data file containing results from a dozen experiments performed over
a 6-month period. If you had produced this data file and were asked to verify that you actually
performed the experiments, what proof could you offer? The data file itself is of little use; one can
numerically generate data and save the results in the desired format. The operating system will
likely display the date you created the file and the date you last modified it, but these dates can
be manipulated by changing the computer’s clock. If you had kept every version of the file as you
entered data from each new experiment, paging through these versions would be more convincing,
but this trail of different versions of the file could also be fabricated. Having different versions of
the file plus a laboratory notebook showing data acquisition on dates that match those for the files
would be better, especially if the notebook had been cosigned by a supervisor at intervals over the
period in question.
That most research today relies either entirely or in large part on storing results electronically
presents enormous challenges for ensuring integrity of the science and engineering. While research-
ers rarely need to prove that they performed specific experiments, they often need to revert to an
older version of software or refer to computer files that have been deleted or altered. Consider this
fairly common dilemma from the perspective of the director of a research laboratory at a university.
A new student begins working for you, taking over a project from a previous student who graduated.
Her project involves imaging cells and analyzing the images with custom-written software that is
still under development in your group, and her early results appear to contradict findings by her pre-
decessor. To distinguish among possible changes in the experimental protocol, changes in how the images were collected, misuse of the current version of the analysis software, and consequences of recent changes to the software itself, you will need a trail of previous images, versions of the software, and
well-annotated analyses. Few laboratories proactively anticipate such situations in their recordkeep-
ing or established procedures for data backup, but most will experience them.
7.3.1 Date-Stamps, Time-Stamps, and Backup Systems
The most rudimentary form of information about a digital file is its date- and time-stamp. These
stamps have little weight in an investigation of potential fraud because of their ease of manipulation,
but they can be very helpful in typical situations wherein everyone acts in good faith. In the example
just discussed, trying to reconstruct the version of a particular program used for a particular analysis
could be aided greatly by comparing the date-stamp of the file containing the results with dates as-
sociated with successive versions of the software. The major problem with date- and time-stamps is
that they can be changed easily, even inadvertently. Simply opening, printing, or resaving an old file
may result in a new date- and time-stamp, depending on the setup for autosaves. A cumbersome
solution is to lock each data file when creating it, thus forcing subsequent changes to be made on
copies of the original. A more common approach is to create periodic tape or hard-drive backups
of data files. This approach stores both the current versions of the files and their current date- and
time-stamps, thus enabling one to recover deleted or altered information and to reconstruct the his-
tory of modifications to a particular file. Given the low cost of disk space, systematic routine backup
of all electronic data is a prudent and affordable safeguard.
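As a minimal sketch of such a routine backup (the directory names are hypothetical and the approach assumes a Python environment), the script below copies a data directory into a date-stamped archive folder, preserving modification times, and records a checksum for every file:

import hashlib
import shutil
from datetime import date
from pathlib import Path

def backup_with_checksums(data_dir, archive_root):
    """Copy data_dir into a date-stamped folder and record SHA-256 checksums.

    shutil.copytree copies files with copy2, which preserves modification
    times, so the backup also captures the current date- and time-stamps.
    """
    destination = Path(archive_root) / date.today().isoformat()
    shutil.copytree(data_dir, destination)  # fails if today's backup already exists
    manifest = destination / "CHECKSUMS.txt"
    with open(manifest, "w") as out:
        for path in sorted(destination.rglob("*")):
            if path.is_file() and path != manifest:
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                out.write(f"{digest}  {path.relative_to(destination)}\n")
    return destination

# Example with hypothetical paths:
# backup_with_checksums("/lab/project_data", "/backup/project_data")

Because each dated folder retains the files and their time-stamps as they existed on that day, the series of backups is what later allows one to reconstruct the history of modifications to a particular file.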
7.3.2 Images
Digital images raise many of the same questions as other electronic data files. Because image files
are often large, there is a temptation to delete an image following analysis, thus retaining only final
results that occupy less storage space. In general, however, it is more effective to do exactly the op-
posite: store copies of the original images on recordable media such as CD-R and force group mem-
bers to manipulate only copies of the original image. Indeed, in cases where an analysis subsequently
becomes suspect, one can always return to the original image and reinterpret the findings.
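File permissions provide an inexpensive way to enforce the rule that only copies are manipulated. The sketch below (hypothetical paths; our own convention rather than a standard) archives a read-only copy of each original image and returns a separate working copy for analysis:

import os
import shutil
import stat
from pathlib import Path

def archive_original(image_path, archive_dir, working_dir):
    """Store a read-only copy of an original image and return a working copy."""
    image_path = Path(image_path)
    Path(archive_dir).mkdir(parents=True, exist_ok=True)
    Path(working_dir).mkdir(parents=True, exist_ok=True)
    archived = Path(archive_dir) / image_path.name
    working = Path(working_dir) / image_path.name
    shutil.copy2(image_path, archived)
    os.chmod(archived, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)  # read-only archive copy
    shutil.copy2(image_path, working)
    return working

# Example with a hypothetical file name:
# copy = archive_original("gel_scan_01.tif", "archive/originals", "scratch/analysis")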
Most medical images contain information that identifies the patient, which raises another
important concern with regard to image handling. Recent efforts to protect a patient’s privacy have
led to important new regulations and training for everyone who views, analyzes, or handles medical
images. Similarly, concerns about potential attacks on researchers and facilities have led many univer-
sities to formulate policies for handling and storing images that show research with animals. Given
the imperfect nature of computer security, many policies prohibit the storage of sensitive images
involving patients or animals on any computer connected to the Internet.
7.3.3 Software Development
When seeking solutions to a problem, it is often useful to consider who has the best incentive to solve
that problem; they will often have the best solution. Developing software often requires many people
to work on different aspects of a single code, integrating changes made by different group members
in an orderly way while tracking previous versions so that problematic changes can be undone. It is
not surprising, therefore, that the software industry has developed excellent systems for controlling
revisions. A number of these systems are now available as open-source software, making such solu-
tions accessible even to research groups whose main business is not software development.
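A lightweight complement to revision control, assuming for illustration that a group keeps its analysis code in a Git repository (the text does not prescribe any particular system), is to record the code revision used to produce each results file:

import subprocess
from datetime import datetime

def current_git_revision(repo_dir="."):
    """Return the Git commit hash of the analysis code, or "unknown"."""
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"], cwd=repo_dir, text=True).strip()
    except (OSError, subprocess.CalledProcessError):
        return "unknown"

def write_provenance_note(results_path, repo_dir="."):
    """Write a sidecar file recording when, and with which code version,
    a results file was produced."""
    with open(str(results_path) + ".provenance.txt", "w") as note:
        note.write(f"results file: {results_path}\n")
        note.write(f"generated:    {datetime.now().isoformat()}\n")
        note.write(f"code version: {current_git_revision(repo_dir)}\n")

# Example with a hypothetical file name:
# write_provenance_note("cell_counts_experiment12.csv")

A note like this, kept beside each output file, answers precisely the question posed above: which version of the still-evolving software produced a given analysis.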
Exercise 7.5   Now that we have discussed electronic data in more detail, return to the recordkeeping policy you designed for your research group in Exercise 7.4. Provide more specific instructions
for the primary types of electronic data in use within your group. What would your group need to
change to implement your plan? If you would need additional servers or individual external hard
drives for routine backups, estimate the amount of storage space you would need and determine
how much this would cost. If you believe your group should implement software revision control,
identify open-source and commercial products that might meet your needs. If you feel strongly that
changes are needed, consider discussing potential changes and options with your colleagues at a group
meeting.
7.4 FRAUD: FABRICATION AND FALSIFICATION
Even in a book dealing with the ethics of communication in science and engineering, it seems
futile to admonish readers not to falsify or fabricate data. Very few would willingly do so, and the
remaining few will not be stopped by our disapproval. We focus, therefore, on the need to clarify the
boundary between ethical and unethical behavior, not on why one should behave ethically — we
assume that the latter is clear.
Exercise 7.6   Most universities provide extensive information related to issues of academic dishonesty and misconduct. For example, undergraduate students at Texas A&M University are
referred to http://ugr.tamu.edu/resources/. Search three university Web sites and record their defi-
nitions of academic dishonesty as well as specific actions that constitute such dishonesty. Compare
results and submit a three-page summary.
Before continuing, consider some widely accepted definitions related to academic or research
misconduct, which in this case are found on the aforementioned Web site at Texas A&M University:
Misconduct in research or scholarship includes fabrication, falsification, or plagiarism in proposing,
performing, reviewing or reporting research. It does not include honest error or honest differences
in interpretations of data.
Fabrication: Making up data or results and recording or reporting them.
Falsification: Manipulating materials, equipment, or processes, or changing or omitting data
or results such that the findings are not represented accurately in the research record.
Plagiarism: The appropriation of another person’s ideas, processes, results, or words without
giving appropriate credit.
Recall, too, the following definition:
Fraud: A deception deliberately practiced in order to secure unfair or unlawful gain; a misrepre-
sentation of material fact consisting of a false representation, concealment, or nondisclosure.
7.4.1 Retaining or Discarding Data
Consider a student who has carefully collected and plotted a dozen data points on an x–y graph, then
fitted a line through the data. Fabricating extra data that lie near the best-fit line would improve the
r2 value typically reported to indicate how well the line fits the data; classifying a few points far from
the best-fit line as outliers and discarding them would similarly improve the reported fit. Why is it
that every scientist or engineer would consider fabricating extra data points to be fraudulent, while
many would at least consider discarding outliers?
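A small numerical sketch makes the temptation concrete. The values below are invented purely for illustration: a least-squares line is fit to twelve points, and the reported r2 improves after the two points farthest from the line are discarded, even though nothing about the underlying measurements has changed.

import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a least-squares linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    predicted = slope * x + intercept
    residual = np.sum((y - predicted) ** 2)
    total = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - residual / total

# Twelve synthetic data points with scatter (illustrative values only).
x = np.arange(12, dtype=float)
noise = np.array([0.3, -0.5, 0.2, 2.5, -0.1, 0.4, -2.8, 0.2, -0.3, 0.5, -0.4, 0.1])
y = 2.0 * x + 1.0 + noise

print(f"r^2 with all twelve points: {r_squared(x, y):.3f}")

# Discard the two points with the largest residuals ("outliers") and refit.
slope, intercept = np.polyfit(x, y, 1)
residuals = np.abs(y - (slope * x + intercept))
keep = np.argsort(residuals)[:-2]
print(f"r^2 after dropping two:     {r_squared(x[keep], y[keep]):.3f}")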
The Nature article titled “Scientists Behaving Badly” reported findings from a survey of more
than 3000 NIH-funded scientists regarding the self-reported frequency of behaviors identified as
concerning by focus groups of researchers (Martinson et al., 2005). While only 0.3% admitted to
falsifying research data within the previous 3 years, 15.3% admitted to “dropping observations or
data points from analyses based on a gut feeling that they were inaccurate.” The only more frequent
offense (27.5%) was “inadequate record keeping related to research projects.”
Unfortunately, the high incidence of self-reported manipulation of data in the 2005 report in
Nature agrees well with earlier surveys that asked research trainees about their willingness to commit
various types of research fraud. In a 1993 survey of postdoctoral fellows at the University of Cali-
fornia–San Francisco, 12% of respondents reported first-hand knowledge of a scientist intentionally
altering data for a presentation, while 4% reported first-hand knowledge of data fabrication. The
numbers were slightly lower, but substantial, for grant applications (8% alteration, 2.5% fabrication)
and publications (8% alteration, 2.5% fabrication). Despite the respondents constituting a self-
selected group of fellows interested enough in research ethics to return the survey, 15.4% indicated
they would be willing to select or omit data to improve their results “if it would make publication of
[their] work more likely or benefit [their] career,” rising to 27.2% “if it would increase the chances
of [their] grant application being funded.”
Exercise 7.7   Formulate criteria for appropriately discarding a data point, observation, or study. Compare your draft policy with that of one or more colleagues and revise until you believe that your
policy could be implemented in your group. Next, find a data set collected in your group that you know
contains noise or errors and test your policy by applying it to the data set. How many points would you
discard, if any, and what justification would you give? As a final step, discuss your analysis with col-
leagues, then with a senior colleague or mentor. Submit a three-page summary of your conclusions.
An Internet search will return many different criteria and algorithms for identifying “outliers”
in a data set, yet such statistical analyses are rarely sufficient to justify excluding data. Decisions about exclusions must combine good experimental and statistical practice with an understanding of the overall study and the potential impact of those exclusions. In general, exclusion criteria should be established before
performing the study and should depend on something other than the values obtained. Excluding
data points collected on a single day that appear to be outliers, based on the rationalization that the
equipment must have been miscalibrated, is a dangerous practice. In contrast, performing a cali-
bration at the end of each day and excluding from analysis all measurements taken on days where
the calibration fell outside a preset tolerance is good technique. Discussing criteria for excluding
data before performing a study can help ensure that you have enough information to make good
decisions later. Other good practices include disclosing exclusions and the associated justifications
within publications, discussing exclusions with your mentor before making them, and remembering
that “if something does not feel right it probably is not.”
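The sketch below contrasts the two practices just described: measurements are excluded only when the day's calibration check falls outside a tolerance fixed before the study began, never because the measured values themselves look inconvenient. The data structure, numbers, and tolerance are illustrative assumptions.

# Exclusion rule fixed before the study begins (pre-registered).
CALIBRATION_TOLERANCE = 0.02  # acceptable deviation of the daily calibration check

def filter_by_calibration(daily_records, expected_calibration=1.00):
    """Keep measurements only from days whose calibration passed the preset check.

    daily_records is a list of dicts such as
        {"day": "day 1", "calibration": 1.01, "measurements": [9.8, 10.1]}.
    The decision depends on the calibration value, not on the measurements,
    and every exclusion is reported so that it can be disclosed later.
    """
    kept, excluded_days = [], []
    for record in daily_records:
        deviation = abs(record["calibration"] - expected_calibration)
        if deviation <= CALIBRATION_TOLERANCE:
            kept.extend(record["measurements"])
        else:
            excluded_days.append((record["day"], record["calibration"]))
    return kept, excluded_days

# Invented example: the second day fails its calibration check.
records = [
    {"day": "day 1", "calibration": 1.01, "measurements": [9.8, 10.1, 10.0]},
    {"day": "day 2", "calibration": 1.07, "measurements": [12.4, 12.9, 13.1]},
]
data, dropped = filter_by_calibration(records)
print(data)     # [9.8, 10.1, 10.0]
print(dropped)  # [('day 2', 1.07)]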
7.4.2 Image Manipulation
The importance of digital images in many types of research raises additional questions about the
degree of processing or manipulation that is appropriate in a given situation. Consider a gel such as
a Western blot, where a series of stained bands indicate relative amounts of certain proteins (rows)
in different samples (columns or lanes). Is there anything unethical about cutting and pasting from
several images of gels run on different days to create a composite image showing results from certain
samples side by side? What if one or more of those gels was underexposed? The resulting image
would then be lighter than expected and this problem could be corrected in one of two ways: reimage
the gel with a longer exposure time or adjust the image contrast and brightness using an image pro-
cessing program. Are these two options really the same, or is manipulating the image fraudulent?
It turns out that your answer to this question may change if you learn more about how im-
age processing programs adjust contrast and brightness and how densitometry programs quantify
bands on a gel. Commercial densitometers often output optical density, which relates directly to
concentration. Other devices (CCD cameras, scanners) commonly used to image gels produce im-
ages in which pixel intensity has a nonlinear (logarithmic) relationship to optical density; in such
cases, manipulating images before analysis may have important consequences. When deciding how
to interpret findings, it is not enough to want to make good decisions; it is critical to gather enough
information to make good decisions.
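A short calculation shows why such adjustments matter. Assume, purely for illustration, that pixel value is proportional to transmitted light, so that optical density is -log10(pixel/background); a uniform brightness shift then changes the apparent ratio between two bands even though the gel itself never changed.

import math

def optical_density(pixel_value, background_value):
    """Optical density inferred from a transmitted-light pixel intensity."""
    return -math.log10(pixel_value / background_value)

background = 250.0            # pixel value of clear gel (no protein)
band_a, band_b = 125.0, 62.5  # two bands on the same image

ratio = optical_density(band_b, background) / optical_density(band_a, background)
print(f"OD ratio before adjustment: {ratio:.2f}")   # 2.00

# A seemingly harmless brightness increase of 40 counts on every pixel.
shift = 40.0
shifted_ratio = (optical_density(band_b + shift, background + shift)
                 / optical_density(band_a + shift, background + shift))
print(f"OD ratio after adjustment:  {shifted_ratio:.2f}")  # no longer 2.00

Reimaging an underexposed gel changes how much light reaches the detector; shifting brightness in software changes the numbers after the fact, and the two are not equivalent once the intensity-to-density relationship is nonlinear.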
7.4.3 Statistical and Image Forensics
After reading case summaries on the Web site of the ORI (http://ori.hhs.gov/misconduct/cases/),
it is easy to become concerned. There are cases of undergraduates, graduate students, postdoctoral
fellows, and principal investigators fabricating data, scientists manipulating images in applications
for funding, and even a laboratory member altering another’s experiments to ensure that attempts
to repeat his earlier (fraudulent) experiments would not expose him. As you move forward in your
career, you will begin to get more of your data secondhand from employees or students whom you
supervise. You will also spend more time reviewing technical reports, manuscripts, or grant applica-
tions from other researchers. At some point, it is natural to wonder how best to check the validity
of the data and images that you encounter.
Within your own group, good training, good recordkeeping, and replication of randomly se-
lected experiments remain important. Even in an environment that emphasizes integrity, replication
of selected experiments can help guard against error and ensure continuity of methods as members
join or leave the group. When you have less information about the source of data in a figure or table,
however, or when you have reasons to suspect that data or images have been altered, emerging tools
for statistical and image forensics may help. The ORI Web site is a good resource for such tools.
Statistical forensics generally relies on the observation that those who fabricate data rarely do
so with much statistical sophistication. Dr. Slutsky (discussed in Chapter 6) published two papers
that contained data sets having different sample sizes but identical means and standard deviations.
Other cases have identified fabrication via nonrandom distributions of the rightmost digit in a series
of numbers, including a case wherein the rightmost digit was either 5 or 0, thus suggesting that
the sample size had been inflated by averaging pairs of samples to fabricate points for additional,
nonexistent samples.
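As a sketch of this kind of terminal-digit analysis (our illustration, not the procedure used in any particular investigation), the following code tallies the rightmost digits of a set of recorded values and computes a chi-square statistic against the uniform distribution expected of genuine measurement noise:

from collections import Counter

def terminal_digit_chi_square(recorded_values):
    """Chi-square statistic for rightmost digits versus a uniform distribution.

    recorded_values are the numbers exactly as written, e.g. "12.5", so that
    trailing zeros are not silently dropped.
    """
    last_digits = [value[-1] for value in recorded_values if value[-1].isdigit()]
    counts = Counter(last_digits)
    n = len(last_digits)
    expected = n / 10.0
    return sum((counts.get(str(d), 0) - expected) ** 2 / expected for d in range(10))

# Suspicious sample (invented): every value ends in 0 or 5, as would result
# from averaging pairs of whole-number readings, the pattern described above.
suspicious = ["12.5", "13.0", "11.5", "12.0", "14.5", "13.5",
              "12.0", "11.0", "13.5", "12.5", "14.0", "12.5"]
print(f"chi-square = {terminal_digit_chi_square(suspicious):.1f}")
# With 9 degrees of freedom the 5% critical value is about 16.9; the statistic
# here is far larger.  (With so few values the chi-square approximation is
# rough; real forensic analyses rely on much larger samples.)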
Image forensics relies on software tools that detect the manipulation of images. One fre-
quently used set of tools is called Forensic Droplets; it is available on the ORI Web site (http://ori.hhs.gov/tools/droplets.shtml). These droplets run in Photoshop and help the user detect common
methods of manipulation. The Web site also provides illustrative uses of these droplets in actual
cases of misconduct (http://ori.hhs.gov/tools/principles.shtml).
Exercise 7.8   Download and read the article, “What’s in a Picture? The temptation of image manipulation” [Rossner M, Yamada KM (2004) J Cell Biol 166: 11–15]. This article summarizes
conventions adopted by several top journals regarding accepted types of image manipulation and
how they should be identified. This article is also interesting because it presents several images that
were altered intentionally. Download the appropriate Forensic Droplets from the ORI (http://ori.
hhs.gov/tools/droplets.shtml) and use them to see whether you can detect the manipulation of the
images presented in Rossner and Yamada’s paper.
• • • •
CHAPTER 8
Ownership of Ideas, Data, and Publications
Imagine sitting down 20 years into a successful career to write a book on your area of expertise.
Everyone would agree that it would be wrong to copy verbatim a paragraph from another scientist’s
paper without attribution; we call that plagiarism. On the other hand, most would find it reasonable
to include in your book a figure from one of your earlier journal papers. You may be surprised that
to do so would be illegal in most cases. The publisher of the journal probably holds the copyright on
that figure, and either you or the publisher of your book must secure permission from the journal to
reprint the figure; such permission may involve paying a substantial fee.
One of the many interesting questions in science is who owns the results. Recent trends, such
as the drive to commercialize products of university research and to increase public access to data
from federally funded research, highlight this question of ownership. Debates about ownership of
and access to the results of scientific and engineering research have important consequences for
individuals, universities, companies, publishers, the government, and the public at large.
In Chapter 7, we compared medical charts to laboratory notebooks when considering why we
should keep records. This comparison also provides an interesting entry into the question of owner-
ship of information. Who owns your medical record and what conceptual tests for ownership does
your answer suggest? The information is about you and you can request a copy of the records, which
suggests some level of ownership. Nevertheless, you typically cannot alter or destroy the original
medical record, which suggests that someone else shares ownership. The physicians, nurses, and
other medical personnel who produced the record can write in it and read it, but they cannot obtain
a personal copy to take home. What about the insurance company that paid for your care? Does
paying for the associated diagnostic tests and office visits give the company any stake in ownership
of the information?
Exercise 8.1    Based on your experience in research and the analogy to a medical chart outlined above, discuss why each of the following parties should or should not be considered owners of scientific data (not the resulting publications or patents) that result from a research project funded by the federal government using tax dollars and performed at a private university:
1. The principal investigator of the study
2. The students, technicians, and fellows who performed the experiments
3. The university
4. The federal government
5. The public/taxpayers
Now, for each of the people, groups, or institutions listed above, specify appropriate levels of access
to the original data. In other words, who should be able to acquire copies of all the data, to alter
those data, or to use them to write papers or submit patents? Are there people who should not be
considered co-owners but who should still be granted some access? Explain.
8.1 DATA AND RESOURCE SHARING
In 1999, Congress amended the FY 1999 Omnibus Spending Bill to require federal agencies that
fund research to ensure that all resulting data be made available to the public under provisions of the
Freedom of Information Act. This new requirement prompted an outcry from the scientific com-
munity (Frankel, 1999). Concerns ranged from how researchers could disclose data from clinical
studies without violating the privacy of human subjects to whether colleagues, companies, or even
political activists might use data obtained under the new policy to compete with or disrupt the work
of individual scientists. Such concerns are not unique to biomedical science, but because the NIH
funds so much science in the United States, we focus below on NIH policies regarding the sharing
of data, model organisms, and publications resulting from federally funded research.
8.1.1 Research Data
Ultimately, the NIH adopted limited requirements for sharing data. The Final Statement on Shar-
ing Research Data reads, in part:
NIH reaffirms its support for the concept of data sharing. We believe that data sharing
is essential for expedited translation of research results into knowledge, products, and
procedures to improve human health. The NIH endorses the sharing of final research
data to support these and other important scientific goals. The NIH expects and sup-
ports the timely release and sharing of final research data from NIH-supported studies
for use by other researchers.
See http://grants.nih.gov/grants/policy/data_sharing/index.htm for more on the NIH state-
ment. Note, however, that provisions outlined later in the statement exempt most researchers from
this policy. Most importantly, only applicants requesting over $500,000 of direct costs in any year
must file a data sharing plan. Because the most common type of NIH grant (the R01, which is
discussed in Chapter 4) typically has annual direct costs of $250,000 or less, this provision exempts
most NIH-funded researchers. In addition, the NIH defines “timely release and sharing” to be “no
later than the acceptance for publication of the main findings from the final data set.” It is not un-
common for a successful researcher to renew the same grant repeatedly over 20 or more years while
studying a particular disease; in such situations, it is unclear what constitutes “the final data set.”
Other critical issues, such as how long an investigator must continue to share data after completing
a study, are not addressed by the NIH policy. Finally, note that the wording refers to sharing of data
for use by other researchers, possibly circumventing scientists’ original objections to sharing data
with companies and the general public.
Community practice in some fields has overtaken the debate on data sharing. For example,
researchers studying gene expression using DNA microarrays have established open databases and
standards for submitting data (http://www.mged.org/). Top journals in this field also typically re-
quire authors to deposit their microarray data in a database as a condition of publication. Interest-
ingly, although arguments for data sharing typically focus on benefits to the scientific community or
public, Piwowar et al. (2007) recently reported a direct benefit to researchers: papers associated with
publicly available microarray data are cited more frequently.
8.1.2 Model Organisms
In contrast to its data sharing policy, the NIH’s policy on model organisms is clear, demanding,
and relatively uncontroversial. All grant applicants who plan to develop a model organism such as a
transgenic mouse must provide a plan for sharing that model organism with other researchers. Peer
reviewers evaluate sharing plans as part of the grant review process and NIH staff may require “ad-
equate progress in model organism sharing as well as a demonstrated willingness to make research
resources developed during the project widely available to the research community . . .” to continue
funding an existing grant (see http://grants.nih.gov/grants/policy/model_organism/index.htm).
8.1.3 Other Research Products
Any researcher can quickly and easily share data in a computer spreadsheet by posting the file to a
public Web site. Even those who do not wish to maintain a Web site can usually deposit tables of
supplemental data with a journal at the time of publication. Annotating the data with clear head-
ings, comments, and notes about exclusions, then answering occasional questions from colleagues
about the posted data, requires a little more time, but not much. By contrast, maintaining a colony
of transgenic mice and providing breeding pairs to any interested colleague can be time consuming
and expensive. Should we require researchers to continue to breed and supply mice to colleagues
even after the NIH grant that funded development of that mouse expires? What alternatives are
available to the scientist who wishes to share a transgenic mouse without the expense of maintaining
a mouse colony indefinitely?
Sharing of computational models and custom software presents similar difficulties. Devel-
opers of computational models typically publish a description of the model and its results, not the
actual code. As with transgenic mice, another researcher who attempts to reproduce the model from
the published description could easily invest significant time and effort and still generate a slightly
different model. Requiring researchers to post raw code for models developed with public funding
would help, but this raises questions about how much support the developer should provide to those
who download and attempt to use the code and for how long. Software companies have a financial
incentive to provide user-friendly interfaces and technical support, but individual researchers do
not. Research-grade models and software may be difficult for someone other than the developer to
understand and operate, but as long as the software performs its intended task, the developer may
have little interest in making it more user-friendly or in writing a supporting manual.
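A small, invented example illustrates how easily two faithful-looking implementations of the same published description can disagree. The sketch below (in Python; the model and numbers are hypothetical) integrates a simple first-order decay model in two ways that differ only in the integration scheme, a detail a brief methods section might never mention.

```python
import math

def simulate(decay_rate=0.5, dt=0.1, t_end=5.0, scheme="euler"):
    """Integrate dx/dt = -decay_rate * x with x(0) = 1 using two schemes.

    A published description that omits the integration scheme or step size
    leaves these choices to whoever re-implements the model.
    """
    x = 1.0
    for _ in range(int(t_end / dt)):
        if scheme == "euler":
            x += dt * (-decay_rate * x)      # simple forward Euler step
        else:
            x *= math.exp(-decay_rate * dt)  # exact update over one step
    return x

print(simulate(scheme="euler"))  # ~0.077
print(simulate(scheme="exact"))  # ~0.082
```

The two results already differ by several percent for this trivial model; for stiff or nonlinear models the discrepancy can be far larger, which is one reason access to the original code matters.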
The incomplete nature of published descriptions of models, uneven practices regarding shar-
ing, and the variety of operating systems and programming languages worldwide all combine to
limit the effective sharing of computational models. Fortunately, several organizations are focused
on improved sharing and interfacing of computational models across groups. One such effort is
the Physiome Project (http://www.physiome.org/), organized by the International Union of Physi-
ological Sciences. One component of this effort is to use markup languages to standardize coding
of models and handshaking between them. A visit to the CellML Web site (http://www.cellml
.org/models) provides more information on this effort as well as an idea of the difficulty of imple-
menting, documenting, and debugging computational models of biological systems based only on
their published descriptions.
8.2 COPYRIGHT
We noted in the introduction to this chapter that technical journals typically hold the copyright to
any articles they publish. As an author, you transfer copyright to the journal as a condition of publi-
cation, even if the journal assesses page charges and labels your paper an advertisement.
Exercise 8.2    You are writing a review article and wish to include several figures from your earlier papers. Most publishers of those articles agree that you can use the figures as long as you state that
they are used with permission of the publisher and include the appropriate citation. One publisher
demands several thousand dollars in fees to reprint your figures, however. List all the options you
can think of, including paying the fees, and discuss the relative merits and ethics of each approach.
Based on that discussion, what would you do?
8.2.1 Online Publishing
The past few years have brought forth the most dramatic shift in the history of scientific publishing
since the printing press. Like newspapers, scientific journals have experienced a drop in individual
print subscriptions and a shift to online delivery of content. The critical question for journals, as for
newspapers, is how to remain solvent selling content to an online audience that is used to getting in-
formation free. Not surprisingly, different journals have taken different approaches. Most publishers
continue to sell print copies of their journals to libraries, at least for now; some restrict online access
to print subscribers to encourage print subscriptions; others offer online-only access at a lower price
than print subscriptions; still others provide full, free access to anyone. Regardless, most journals
partially defray costs of publication by assessing substantial fees for publication. Increasingly, pub-
lishers and third-party services sell universities and other large institutions bundled access to groups
of journals. Students and faculty at a typical research university can now access most major scientific
and engineering journals through the university library’s portfolio of electronic subscriptions. These
subscriptions often include extensive collections of scanned back issues, virtually obviating the need
to visit the library when studying the literature.
One of the consequences of the shift of journal content to electronic format is that it is often
unclear to the typical academic user in the United States which journals charge for content and
which distribute it freely. If you access a paper through your library’s electronic journals portal, it is
equally easy to obtain a PDF of an article whether or not the journal normally charges an individual
for the download. If you access the same paper from outside your library’s portal, however, the
download fees are often $20 to $30 per article. This charge explains why you, as an author, are likely
to receive occasional e-mails requesting a PDF reprint of one of your articles; the request usually
comes from someone who does not have free access to the article. As an author, should you send
a PDF when you receive such an e-mail? Sending the requested PDF deprives the journal of the
fee it would normally charge for a download; on the other hand, what if the request comes from a
colleague in a country where the download fee represents an exorbitant sum? What about the fairly
common practice of posting PDFs of recently published articles on your laboratory or group Web
site? Is this unethical? Illegal?
8.2.2 Public Access to NIH-funded Journal Articles
If the NIH funds your research, recent policy changes rendered moot the problem of whether to
share PDFs of your article. Nevertheless, the NIH may have also placed you in an uncomfortable
position between the people who fund your studies and those (often your main professional societ-
ies) who publish them. The NIH began by asking grantees to deposit a copy of any accepted manu-
script resulting from NIH funding into a publicly accessible database, PubMed Central. The policy
was initially voluntary, although many NIH investigators felt pressured to deposit manuscripts
because they thought that applications for renewing grants would be judged in part on productivity
and that the number of deposited papers would be used as a measure of productivity.
Many journals considered the NIH database a violation of their copyright on the accepted
articles, and some that depended heavily on selling access to their content claimed the NIH require-
ment would put them out of business. The American Physiological Society (APS), which publishes
a portfolio of journals on physiology, genomics, and physiology education, was one of the most vo-
cal critics. Among other actions, the chair of the APS Publications Committee sent the following
e-mail in November 2005:
Dear APS Author,
On February 3, 2005, the NIH announced a new policy (NOT-OD-05-022) to en-
hance public access to publications resulting from NIH-funded research. The policy
itself and information about it are available at http://www.nih.gov/about/publicaccess/
index.htm. As the Publications Committee Chair of the American Physiological Soci-
ety (APS), I am writing because some confusion has arisen about the NIH Policy.
The NIH Public Access Policy is a voluntary program that applies to NIH-funded
investigators. Such authors are asked to submit electronic copies of articles accepted by
peer-reviewed journals that report research funded by the NIH to the National Library
of Medicine’s PubMed Central (PMC). The NIH will then make the journal-accepted
manuscript free to the public at an interval (ranging from immediately to 12 months)
after publication that is chosen by the author. This Policy goes into effect on May 2,
2005.
Since the policy was announced, questions have arisen about whether or not participa-
tion is truly voluntary. On the day the policy was published, NIH Director Elias Zer-
houni sent a letter to all extramural scientists and their research institutions describing
the policy and urging them to participate. Although Dr. Zerhouni stated that the
policy is a request, many researchers, university officials, and even some NIH program
officers have interpreted it as a mandate for grantees. However, in public statements,
Zerhouni and other NIH officials have repeatedly underscored that it is voluntary and
there will be no repercussions for those who choose not to participate. Funded inves-
tigators can still fulfill their progress report requirements by providing print copies of
their publications with their annual progress reports.
While the APS does not support the NIH Plan, we do recognize that it does put you,
our authors, in a difficult position. Do you abide by a request issued by the granting
agency or do you abide by the copyright statement that you signed when you submit-
ted your manuscript to the journal? The APS does not want to see you placed in that
position. Therefore, we are modifying our copyright statement to help you fulfill the
voluntary request of the NIH Plan.
In doing so, we ask that you recognize that the Society has been at the forefront of
online publishing, putting content online as early as 1994, providing authors with one
of the first online manuscript submission and review systems, and underwriting the
scanning of APS journal content back to 1898. We were one of the first publishers to
change our access policies so that all content is free to all 12 months after publication.
These efforts have cost the Society millions of dollars and subscriptions are one of the
few ways available for us to recover those costs.
With over 50% of articles published in our journals funded by NIH, free release of
manuscripts by PMC sooner than the Society’s access policies allow could lead to
losses of subscription revenues that would interfere with the journals’ ability to meet
the needs of the Society and its members. Moreover, NIH is seen as a leader among
biomedical funding agencies. If others including NSF, NASA, or funding agencies in
other countries such as the Wellcome Trust follow suit, we may end up in a situation
where the vast majority of content is subject to mandates requiring public release be-
fore the journal release date. Should this occur, the APS and other scholarly publishers
may be forced to increase author fees to compensate. Ultimately it would be detrimen-
tal to science if the APS had to charge authors the full cost of publication, which is
currently about $3,000 an article.
Given the importance of subscription revenue to the Society’s ability to provide our
members with high quality and innovative publications, the APS asks that if you choose
to deposit your manuscript into PMC, you will specify that it should not be made avail-
able to the public until 12 months after publication in the Society’s journals. The Soci-
ety intends to modify its copyright agreement so that NIH-funded authors are granted
permission to deposit their accepted manuscript into PMC for release to the public 12
months after publication. By abiding by the Society’s modified copyright agreement,
you will be able to participate in the NIH public access program while still protecting
the ability of the APS to recover the costs associated with its publication program.
Thank you for your past and future support of the Society’s journals. We will be able
to continue to publish these respected journals with your recognition that the NIH
Public Access Plan is a voluntary plan that seeks release at 12 months, a time consis-
tent with the Society’s current access period. Please do not hesitate to contact me, or
[the] APS Director of Publications, if you have any questions about this important
issue.
Recently, the NIH made it mandatory to deposit accepted manuscripts to PubMed Central
for all NIH-funded investigators. The APS has maintained its stance that it will accept deposit as
long as public access is delayed for 12 months; in fact, APS journals now automatically deposit ar-
ticles and specify the 12-month delay rather than leaving this decision to authors. As of this writing,
however, many journals and societies have not adjusted their copyright statements to account for the
NIH policy, and some universities now urge authors to amend, or refuse to sign, journal copyright
transfer agreements to avoid placing themselves in a legally untenable situation.
8.3 PATENTS
In 1970, only three major research universities devoted at least one half-time staff position to tech-
nology transfer and research universities as a group secured only ~150 patents (Sampat, 2006). Most
universities consciously confined their activities to the generation and free exchange of knowledge;
they avoided the business aspects of translating that knowledge into profitable products for fear that
it would taint their academic missions. Among others, Columbia University, Harvard University,
Johns Hopkins University, The University of Chicago, and Yale University specifically prohibited
the patenting of results from biomedical research. At that time, private companies performed most
federally funded research and development, and patents derived from that research belonged to the
federal government. University-owned patents typically resulted from industry-funded research and
were designed primarily to protect against misuse of the technology. Licensing was handled through
independent foundations or corporations (Sampat, 2006).
Today’s landscape for technology transfer differs dramatically. Most research universities op-
erate substantial technology transfer offices, securing and licensing patents covering a range of ideas
and products derived largely from federally funded research. A network of consulting, research,
and technology transfer agreements links universities to companies, and ownership of intellectual
property is frequently the critical concern during the negotiation of such agreements. Professors
routinely form “spin-off” companies to translate their discoveries into products. Every practicing
scientist and engineer, whether in industry or academia, must now learn the basics of intellectual
property law and technology transfer. In fact, if you accept a faculty position at a research university
tomorrow, it is likely that the second piece of paper you will sign (after your offer letter) will be a
patent agreement.
Exercise 8.3    Economists consider a strong university system to be a major driver of innovation and economic growth. Make a list of the ways that universities transfer information and technology to companies. Next, order your list starting from the mechanism you consider most important.
Finally, compare your list to one based on surveys of managers of industrial research and develop-
ment [see Table 4 in Cohen WM, Nelson RR, Walsh JP (2002) Links and impacts: The influence
of public research on industrial R&D. Management Sci 48: 1–23]. Discuss with a colleague possible
reasons for the main discrepancies between your list and the one compiled by Cohen et al. (2002).
Maintaining our focus on communication in science and engineering, we restrict the remain-
der of our discussion of patents and technology transfer to two small aspects of this very broad topic:
private ownership of patents derived from publicly funded research and the impact of university
technology transfer efforts on scientific communication.
8.3.1 Patents and Publicly Funded Research
Before 1980, the federal government owned patent rights to any discovery made with federal fund-
ing. The simple rationale for this policy was that inventions generated with public funds should
belong to the public. As we discussed for data and models in Section 8.1 and for publications in Sec-
tion 8.2, however, the general question of ownership is not simple. As with data and publications,
generating patents requires not only funding but also knowledge, ingenuity, hard work, equipment,
space, and other resources. Hence, faculty, students, the university, and the government might all
credibly claim at least partial ownership of a patent based on publicly funded research performed in
a university laboratory.
In 1980, the U.S. Congress passed the Bayh–Dole Act, which granted universities and small
businesses the rights to patents arising from their federally funded research. From an ownership
perspective, this policy was balanced better than the one it replaced — it granted patent rights to
universities, required universities to share royalties with the inventors, and retained limited rights
for the government. Yet, proponents of the change cited the need to stimulate innovation rather
than the need to attribute ownership properly. At that time, federal funding agencies often trans-
ferred their patent rights to universities and companies, but each agency had a different policy. The
Bayh–Dole Act aimed to replace the array of existing policies with a single, uniform policy. Sup-
porters argued that the government failed to promote licensing and use of the patents it owned,
and transferring ownership to companies and universities would promote greater dissemination and
utilization of innovations generated from federal funding.
Exercise 8.4    Read “Patenting and U.S. Academic Research in the 20th Century: The World Before and After Bayh–Dole” [Sampat BN (2006) Res Policy 35: 772–789], then research the impact of the Bayh–Dole Act. Write a one-page position paper arguing that Bayh–Dole either has or has not enhanced commercialization and utilization of the results of federally funded research in
your field.
8.3.2 Patents and Publication
We discuss here a typical process for technology transfer at a research university, but please note that
each university and company has its own patent policies. If you have a new idea that you think might
merit a patent, you begin by filing an invention report with your university’s office of technology
transfer. A member of that office will review your report, discuss the idea with you, help research
whether existing patents already cover your idea, and make a decision about whether to proceed. If
the decision is positive, the next step is usually to file a provisional patent application. Filing a provisional patent is
relatively inexpensive and protects your idea for 12 months while the university decides whether to
proceed with a full patent application. Subsequent actions differ widely depending on the university
and the nature of the invention. Because it can be expensive to file a full patent application, some
universities spend the 12 months shopping your idea to potential corporate partners and proceed
only if a partner is willing to license the patent once awarded. This approach works well when your
invention is developed fully, a prototype has been tested, or the new idea has such obvious value
that an investor will commit based on the idea alone. University-based research commonly produces
ideas at a much earlier stage of development, however. In these cases, it is often difficult to decide
whether to invest in a full patent application based on the limited information at hand.
After reviewing your invention report, the office of technology transfer may recommend that
you delay initiating the patent process until you have developed and tested your idea further. This
is the stage at which the patent process can begin to conflict with the normal practice of academic
research. Because any public disclosure of the idea impacts the patent process, it is critical that you
discuss your ideas with your office of technology transfer before submitting them for publication or
presentation at professional meetings. Without publications, however, it may be difficult to obtain
additional funding to mature your idea, and even a grant application might constitute a public dis-
closure in some circumstances. There is also the risk that someone else will advance a similar idea
while you wait.
No one, including those in your office of technology transfer, can tell you how best to bal-
ance such concerns. In general, we recommend entering any discussion of technology transfer with
a clear vision of your research and career goals, communicating those goals to your technology
transfer officer, and doing your best to make decisions consistent with those goals. If, for example,
your ultimate goal is to make a significant impact on the treatment of cancer, patenting a new drug
or method of drug delivery and marketing it to companies may be an integral part of achieving your
goal. In contrast, if your goal is to design a simple and effective water filter that can be assembled
cheaply and easily in the developing world, posting your design on a Web site and publicizing it
through the press or nonprofit agencies might be a more effective approach.
Concerns of intellectual property also impact scientific communication when industry funds
academic research. Universities negotiate the terms of research contracts and agreements with spon-
soring agencies and companies. Large federal agencies such as the NIH can essentially dictate terms
to universities, but individual research agreements with companies vary widely. Intellectual property
rights are often the major focus of negotiations, and restrictions on publication and other dissemi-
nation of the results of the research are common. Companies often demand the right to prereview
and block any planned public disclosure, including abstracts, conference presentations, journal pub-
lications, and grant applications. Few universities agree to such stringent limits, but many agree to
a waiting period to give the sponsor adequate time to review any planned disclosure and file appro-
priate patent applications. As a principal investigator of an industry-sponsored project, it is critical
to work closely with the contract negotiating team at your university to make sure you understand
and are willing to accept any proposed limits on publication. As a student or postdoctoral fellow
considering whether to work on an industry-sponsored project, it is essential to ask whether the
project includes restrictions on publication, for your ability to publish is critical to building your
track record and thus your career.
8.4 PLAGIARISM
Most scientists and engineers would tell you that they know what constitutes plagiarism, that they
consider it a serious offense, and that they would never do it, suggesting that plagiarism is not a
major problem among working scientists and engineers. Many faculty members would admit that
there is more of a problem with plagiarism among university students, but they would attribute
this primarily to two factors: the Internet, which provides easy access to text written by others, and
students from cultures that have different conventions regarding how and when to incorporate or
cite ideas from published work. Yet, data from recent surveys contradict these common percep-
tions. Undergraduates in the United States are frequently confused over what constitutes plagia-
rism, they do not consider it a serious form of cheating, and they do it with shocking frequency.
Although scientists self-report much lower rates of plagiarism, they report frequent observations
of plagiarism by colleagues, suggesting that plagiarism is a significant problem, and not just among
students.
Studies on undergraduate cheating by McCabe and colleagues, in association with the Center
for Academic Integrity, provide an interesting introduction to student attitudes about plagiarism
(McCabe et al., 2001; McCabe, 2005). This group conducted a series of surveys of undergraduates
at universities within the United States and Canada and reported that more than 75% of students
admit to some type of cheating. In particular, one study revealed that 26% of students admitted to
committing “plagiarism” in the past year, while twice as many (54%) admitted that they “copied one
or two sentences without footnoting.” A more recent survey found no evidence that the Internet is
to blame: 38% admitted to “paraphrasing/copying a few sentences . . . without footnoting” from a
written source, whereas 36% admitted to copying from an Internet source. Apparently many under-
graduates simply do not feel that this is a serious offense; only 56% rated plagiarism as moderate or
serious cheating (versus trivial or not cheating; McCabe, 2005). It may be surprising that graduate
students took plagiarism only slightly more seriously: 32% considered paraphrasing or copying a
few sentences without footnoting to be trivial or not cheating and 25% admitted to doing it in the
past year. Even more concerning than the high rates of self-reported plagiarism in these studies is
that the students surveyed appeared to have a narrow understanding of plagiarism as direct word-
for-word copying, believing that it was acceptable to use someone else’s ideas without attribution as
long as they expressed those ideas in their own words (McCabe et al., 2001).
Whether it is called plagiarism or simply misconduct, “Using another’s ideas without obtain-
ing permission or giving due credit” is a pretty good working definition of a key problem that arises
often in science and engineering. De Vries et al. (2006) found that although only 1.4% of NIH-
funded scientists admitted to using another’s ideas within the past 3 years without giving credit,
45.7% reported observing this behavior among their colleagues over the same period.
Exercise 8.5    “Using another’s ideas without obtaining permission or giving due credit” covers a wide range of potential behaviors beyond direct word-for-word copying of published text. Perhaps a
colleague suggested an interesting experiment during a conversation at a conference, prompting you
to perform that experiment and publish the results without further discussion with your colleague.
Is this misconduct? What about testing a hypothesis suggested in the discussion section of a paper
you read, then publishing your findings without citing the paper that suggested the hypothesis? To-
gether with a colleague, list 10 examples of using another’s ideas without permission or credit, then
decide which you consider to be appropriate or inappropriate. Compare your list with colleagues to
determine the most frequently listed examples. Include within your discussion the concept of com-
mon knowledge versus personal intellectual property.
Problems that arise commonly in a discussion of using another’s ideas involve interactions
with group members, citation, and peer review. We discuss the first two categories briefly below and
peer review in the following section.
8.4.1 Attribution Within a Research Group
Many issues regarding attribution within a group are addressed by the discussion of authorship in
Chapter 6. Ideally, every group member should receive proper credit on publications through ap-
propriate authorship or acknowledgment. Yet, ideas conceived within a group are often presented in
other venues. Must a professor giving an academic seminar explicitly list the names of all students
and fellows who gathered the data presented? Is it sufficient to acknowledge the entire group on a
slide at the end, as is common, or should the advisor specifically attribute each graph and figure, as
with figures taken from published work by other groups? Such questions become particularly deli-
cate as postdoctoral fellows approach the transition to an independent academic career. Fellows may
view the use of their ideas in their adviser’s grant applications as appropriation, while their adviser
may see these ideas as belonging to him or her as principal investigator of the group.
8.4.2 Citation
It is clearly unethical to omit citations of relevant work intentionally, whether to claim undue credit
for previously published ideas or to slight the work of a rival. Omission of an important reference
is more commonly an honest mistake — the author simply misses an important paper in the expo-
nentially expanding sea of archival literature. That a mistake is honest does not lessen its impact,
however. Omission of a key reference deprives deserving colleagues of credit for their ideas and
misleads readers interested in the topic. In medical malpractice cases, actions are judged accord-
ing to the “standard of care,” that is, what most physicians would do in a given situation. If most
researchers diligently review the relevant literature and find key references before writing an article,
should missing an important reference be considered research misconduct?
All of us have experienced another problem related to citation: while reading an article, we
encounter an interesting statement referenced to an earlier publication, retrieve the original refer-
ence, and find that it says something very different from what was claimed. In some cases, this may be an
honest difference of opinion; two researchers reading the same article may interpret its key findings
differently. Frequently, however, a discrepancy between attributed content and actual content arises
through a scientific version of the “telephone game” — a chain of citation, where each author de-
pends on a previous citation rather than retrieving and reading the original paper, propagates an
error in describing the content of that paper. As with omitted references, inaccurate citation mis-
leads the reader and misrepresents the work of colleagues. Such errors are not considered actionable
misconduct by universities or funding agencies, yet their impact on the archival literature and on
your reputation can be significant.
Exercise 8.6    High-profile plagiarism cases occur regularly in science and engineering as well as in history, literature, and other fields. Find and evaluate one recent case in science or engineering
and one case outside science and engineering. For each case, write a one-page summary, including
what the author plagiarized, how the plagiarism was detected, any explanations offered by the au-
thor, and the impact of the plagiarism on the author’s career.
8.5 PEER REVIEW
Science and engineering rely heavily on peer review, the evaluation of your work by your peers. Peer
review is central to deciding which papers a journal will publish, which grants agencies will fund,
which patents will be issued, and which drugs and devices will be approved. Any system of peer
review must balance the fact that colleagues who work in your field are best qualified to review your
work against the possibility that those colleagues include former students or mentors as well as cur-
rent collaborators or competitors. Evaluating and avoiding conflict of interest is integral to effective
peer review.
Exercise 8.7    Imagine you have been named editor of a new journal in your field. Formulate a policy stating how your journal will handle potential conflicts of interest when assigning review-
ers. What constitutes a conflict of interest? Are all conflicts equal? How will conflicts be handled?
Must any reviewer with a potential conflict decline to review, or are there some situations where
declaring the conflict will suffice? Will you hire staff to search for potential conflicts or will you rely
on reviewers to disclose conflicts? Will you allow authors to name reviewers they consider to be in
conflict? What other measures will you take? Summarize your thoughts in a two-page paper.
8.5.1 Archival Journal Articles
A journal editor who receives a manuscript for review usually begins by reading the abstract and
scanning the paper to verify that it is appropriate for the journal. The next step is to assign review-
ers. Editors typically identify reviewers from a variety of sources, including personal knowledge of
investigators in the field, databases maintained by the journal, literature searches using keywords
or title words from the manuscript, and the references cited within the manuscript. Some journals
allow authors to suggest reviewers at the time of submission.
Selection of appropriate reviewers helps to ensure a fair and thorough review. Typically, edi-
tors try to assign several reviewers who are experts in the field but not associated closely with the
authors or one another. Achieving this goal is not as easy as it sounds; it assumes that the editor
knows professional relationships among those working in the field — who trained in which labora-
tories, who has collaborated with whom, who has published with whom. Most of this information
can be found with enough research, but such research would be too time-consuming for every sub-
mitted manuscript. Hence, journals rely on multiple safeguards against conflict of interest. Securing
multiple reviewers provides one such safeguard, for it is less likely that multiple reviewers will have a
conflict of interest or the same personal bias for or against a particular author. Some journals divide
the task of assigning reviewers among multiple associate editors or members of an editorial board
to help ensure that assignments are based on detailed scientific and professional knowledge about
subfields within a broader discipline. Many journals ask authors to list those who they feel have a
conflict of interest and will often exclude those reviewers.
Allowing authors to name potential reviewers for exclusion raises many interesting ques-
tions. As an author, you may have genuine concerns about whether a particular competitor will judge your
work more harshly than normal. Yet, are you willing to assert to the editor, often a senior colleague
in your field, that this rival is incapable of judging your work fairly? Does having the potential to
exclude particular reviewers tempt authors to try to avoid valid criticism that ultimately could help
improve the research and the paper? Ultimately, much relies on the judgment of the editor. Ideally,
an editor who receives two very positive reviews and one negative, but unfair, review will read the
reviews, recognize the lack of substance in the negative comments, and discount the unfair review
when making a decision. If a journal receives a large number of submissions and is highly selective,
however, an editor may simply average scores from all reviewers without resolving potential discrep-
ancies; in such cases, one negative review may be enough to prevent acceptance. Fortunately, there
are many journals in each field, so there are always other opportunities to publish a high-quality
paper. Recall from Chapter 3, however, that one should always take advantage of any opportunity
to improve a paper via revision.
Assigning reviewers and integrating their feedback can be demanding, but it is usually
straightforward from an ethical point of view. In contrast, reviewing an unpublished manuscript
frequently raises difficult ethical questions. First, you must decide whether to accept the request
to review. Generally, you should not agree to review the work of colleagues from your institution,
of former students or mentors, or of current or recent collaborators. What if your former student
graduated 20 years ago, however, and you have not collaborated since? What if one of the authors
is a former collaborator whom you have not published with or spoken to in 5 years? Few journals
provide potential reviewers with specific instructions on exactly what constitutes a conflict of inter-
est. Most rely on the judgment of the reviewers and ask them to err on the side of caution. Main-
taining the integrity of peer review thus requires individuals to avoid not only actual conflicts of interest
but also apparent conflicts of interest. In other words, if the author of a manuscript might believe you
are biased, you should not review the paper, even if you are confident you can provide an objective
review.
You may find it difficult to evaluate objectively the work of a colleague you dislike; most
would agree that this constitutes a bias. A more interesting question is whether you should review a
manuscript written by a colleague whose work you consider to be generally of poor quality. On one
hand, your responsibility as a reviewer is to help the editor evaluate the quality of work submitted
for publication and to ensure the publication of only high quality research; if you know the work of
a particular group well, you may be uniquely qualified to explain why a particular manuscript does
not merit publication. Yet, if you consider a particular group’s work to be poor, it is likely that the
authors will consider you misinformed or even biased against them. Fortunately, this is another
situation where multiple reviewers provide a safeguard. If you explain carefully why you believe the
work is flawed, the editor can balance your evaluation against those of other reviewers in making a
final decision.
This discussion raises an issue that is critical to consider, especially by novice reviewers. The
primary responsibility of the reviewer is to recommend whether to accept or reject a particular
manuscript; in most cases, however, an intermediate step is to recommend potential revisions that
would improve the work. By explaining carefully the strengths and weaknesses of a manuscript, you
not only help the authors improve the manuscript, you also help advance their research and the field
in general. Few things are more frustrating to an author than receiving a review that rates the paper
poorly without providing specific criticisms. By contrast, few things are more helpful than a critique
that suggests a critical new experiment or an alternative interpretation of your data that you had not
considered previously. Again, however, we see that the reviewer is faced with an ethical decision.
What if you, the reviewer, realize that the authors have missed the most important experiment or
calculation? You could argue that the paper should be rejected because the authors have not proved
their hypothesis or provided convincing results, then pursue the correct approach on your own and
publish it later. Alternatively, you could provide significant guidance to the authors so that they
can pursue the correct approach and publish what would be an important result, one for which they
would be congratulated while your anonymous contribution goes unrecognized. Whom would you ask for
advice in this situation?
Exercise 8.8    Imagine that you are a third-year Ph.D. student and your faculty advisor asks you to review a paper that he or she was asked to review by a top journal. If you have never reviewed a
paper before and have not yet published a paper, how should you proceed? What information would
you expect your advisor to provide? If you complete the review and it is to be submitted to the jour-
nal based solely on your evaluation, should the editor be so notified? Should you get the “credit” for
the review? How do you think the authors would feel if they disagreed with the conclusions of the
review, which were very negative, and they learned that it was conducted by a student?
8.5.2 Grants
De Vries et al. (2006) met with a number of focus groups consisting of researchers and formulated a
survey that they eventually conducted and reported in the Nature article “Scientists Behaving Badly”
(Martinson et al., 2005), which was discussed earlier in this book. In the focus groups, they found
researchers to be less worried about frank plagiarism or fabrication of data than about handling the
“fuzzier” situations that arise in science and engineering, such as excluding data or properly appor-
tioning credit for ideas and discoveries. We have focused much of our discussion in this chapter on
these gray areas, where each of us must rely on our values and judgment in the absence of universally
accepted rules. Regarding peer review, the primary concern of the researchers interviewed by De
Vries and colleagues was potential theft of their ideas during grant review. One commented (De
Vries et al., 2006):
I’m always wary of submitting grants to [NIH] study sections, because those people
who sit on the study sections, it’s not unknown for them to take your ideas, kill your
grant, and then take and do it. And I think all of us have either had that happen to
them or know somebody who had that happen to them.
Determining whether to excuse yourself from reviewing a particular grant is often simpler
than for a particular journal article because funding agencies usually issue more specific guidelines
than journals. At the NIH, for example, you may not participate in the review of any grant applica-
tion from your institution; if you are on the panel, you must leave the room during the discussion
of these applications. You must also excuse yourself from the review of applications involving any
colleague with whom you have published during the past 3 years. Furthermore, the review panel on
which you sit may not review any application involving you; these grants must be sent to a different
panel for consideration. NIH reviewers and staff also work to identify and avoid other conflicts,
whether real or apparent, not governed by specific guidelines.
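As a toy illustration of how the explicit part of such rules can be expressed, the sketch below (in Python; the function and thresholds are ours, patterned loosely on the NIH rules described above) screens a reviewer-applicant pair for two obvious conflicts. Real panels, of course, also weigh conflicts that no simple rule captures.

```python
def has_conflict(reviewer_institution, applicant_institution,
                 joint_publication_years, current_year, window=3):
    """Flag an obvious reviewer-applicant conflict of interest.

    Returns True if the reviewer is at the applicant's institution or has
    published with the applicant within the past `window` years. This is a
    screening heuristic only; apparent conflicts still require judgment.
    """
    if reviewer_institution == applicant_institution:
        return True
    return any(current_year - year <= window for year in joint_publication_years)

# Hypothetical example: a joint paper two years ago triggers a conflict.
print(has_conflict("State University", "Institute of Technology",
                   joint_publication_years=[2006], current_year=2008))  # True
```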
The more challenging issue with regard to reviewing grants is deciding how to handle infor-
mation contained within the applications. Although each application is confidential and must be
destroyed following review, you will remember many of the things that you read (and heard during
a panel review). Indeed, in addition to the importance of fulfilling a professional responsibility,
reviewing grant applications often benefits a reviewer in three ways: you see the difference between
well-prepared and poorly prepared applications, which can help you prepare more competitive
applications; you learn more about the overall process and what other reviewers value; and you are
exposed to things that you may not have been aware of, including important references, new
instrumentation or materials, useful experimental methods or computational tools, and so forth. In addition,
you may even learn of individuals who would be good potential collaborators. Yet, because applica-
tions are confidential, what information can you use and when? A good rule of thumb is that any
information available in the public domain can be used in good conscience, including published papers,
commercially available instruments, materials, software, and contacts listed on the Web for indi-
viduals in academia and industry. In other words, if you can obtain the same information elsewhere,
it can be used.
In contrast, novel ideas (e.g., new experimental protocols or methods to solve a complex
equation) that are not available in the public domain should not be used. Some might ask in this
regard if it would be acceptable to contact the investigators and ask for permission to use their ideas.
Although perhaps surprising, the answer is no: you are not supposed to discuss any aspect of the grant
application or its review with the applicant(s); hence, it is not appropriate to seek such permission.
Rather, one should wait for the ideas to be made public by the investigators, often via a conference
presentation, published paper, or posting on the Web. Once such disclosure has been made, it is
then acceptable to use the available information or to contact the investigators for further clarifica-
tion or possible collaboration. If the disclosure appears via a patent rather than via free information,
however, one must then respect the conditions of the patent.
That it is appropriate to use information in the public domain should be obvious, but this
does not resolve all potential issues. What if the applicants did not get funded and they were unable
to pursue their ideas? What if they were funded but decided to pursue a different approach? In other
words, is it good for science and engineering in general to let an excellent idea die simply because
those who conceived the idea could not bring it to fruition? Is there a statute of limitations on such
ideas or should there be? What if you wish to apply their idea to a completely different problem, one
they are most likely not interested in and would never pursue? What if you were already planning
to do the same experiment or one closely related — do you need to forgo your experiment to avoid
what may appear to be misconduct? Whom should you approach to find answers to such questions?
Exercise 8.9    We identified some situations regarding “intellectual property” that may arise when a person reviews, in confidence, a grant application. Another situation could arise if you submitted a
grant application to the NIH that needed to be revised twice (which generally means a delay of two
or more years in starting the research). What could you do if you suspected that someone on the
review panel was purposely trying to delay your research so that they could pursue the same idea?
List five other potential situations that could arise in grant reviews and discuss potential ethical
issues.
In conclusion, recall from the Preface that it was not the goal of this book to be a standalone
source on matters of style or ethics in communication. Rather, our goal was to motivate the reader
to develop an effective, individual style of communicating and a personal commitment to integrity
because it matters. We sought to raise questions, not answer them. Our best advice is simply to
seek advice from good role models whose writing and ethics you respect — learn from others, so-
licit constructive feedback on oral presentations as well as drafts of manuscripts and proposals, and
discuss potential concerns on ethical matters with peers, advisors, supervisors, and administrators.
Work hard to ensure that, when you look back over your career upon retirement, you are proud of
a job well done.
• • • •
References
Bell ET (1986) Men of Mathematics. Simon & Schuster, New York, NY.
Berry TE (1971) The Most Common Mistakes in English Usage. McGraw-Hill, New York, NY.
Bhopal R, Rankin J, McColl E, Thomas L, Kaner E, Stacy R, Pearson P, Vernon B, Rodgers H
(1997) The vexed question of authorship: Views of researchers in a British medical faculty.
BMJ 314: 1009–1012.
Blake G, Bly RW (1993) The Elements of Technical Writing. Macmillan, New York, NY.
Boorstin DJ (1983) The Discoverers. Vintage Books, New York, NY.
Brogan JA (1973) Clear Technical Writing. McGraw-Hill, New York, NY.
Carnegie D (1956) How to Develop Self-Confidence and Influence People by Public Speaking. Simon &
Schuster, New York, NY.
Clendening L (1960) Source Book of Medical History. Dover Publications, New York, NY.
Cohen WM, Nelson RR, Walsh JP (2002) Links and impacts: The influence of public research on
industrial R&D. Management Sci 48: 1–23. doi:10.1287/mnsc.48.1.1.14273
Davidoff F (2000) For the CSE Task Force on Authorship. Who’s the author? Problems with bio-
medical authorship, and some possible solutions. Sci Ed 23: 111–119.
Day RA, Gastel B (2006) How to Write and Publish a Scientific Paper. 6th Edition. Greenwood Press,
Westport, CT.
De Vries R, Anderson MS, Martinson BC (2006) Normal misbehavior: Scientists talk about the
ethics of research. J Empir Res Hum Res Ethics 1: 43–50. doi:10.1525/jer.2006.1.1.43
Dillenberger J (1961) Protestant Thought and Natural Science. University of Notre Dame Press, Notre
Dame, IN.
Eastwood S, Derish P, Leash E, Ordway S (1996) Ethical issues in biomedical research: percep-
tions and practices of postdoctoral research fellows responding to a survey. Sci Eng Ethics 2:
89–114. doi:10.1007/BF02639320
Engler RL, Covell JW, Friedman PJ, Kitcher PS, Peters RM (1987) Misrepresentation and respon-
sibility in medical research. N Engl J Med 317: 1383–1389.
Flanagin A, Carey LA, Fontanarosa PB, Phillips SG, Pace BP, Lundberg GD, Rennie D (1998)
Prevalence of articles with honorary authors and ghost authors in peer-reviewed medical
journals. JAMA 280: 222–224. doi:10.1001/jama.280.3.222
Frankel MS (1999) Public access to data. Science 283: 1114. doi:10.1126/science.283.5405.1114
Gibaldi J (1995) MLA Handbook for Writers of Research Papers. 4th Edition. The Modern Language
Association of America, New York, NY.
Iverson C, et al. (1998) AMA Manual of Style: A Guide for Authors and Editors. 9th Edition. Lippin-
cott Williams & Wilkins, Philadelphia, PA.
Kilpatrick JJ (1984) The Writer’s Art. Andrews and McMeel, Kansas City, MO.
Lightman A (2005) The Discoveries: Great Breakthroughs in 20th Century Science. Vintage Books,
New York, NY.
Martinson BC, Anderson MS, de Vries R (2005) Scientists behaving badly. Nature 435: 737–738.
doi:10.1038/435737a
Mason SF (1962) A History of the Sciences. Collier Books, New York, NY.
McCabe D (2005) Cheating among college and university students: A North American perspec-
tive. Int J Educ Integrity 1(1) [Epub].
McCabe DL, Trevino LK, Butterfield KD (2001) Cheating in academic institutions: A decade of
research. Ethics Behav 11: 219–232. doi:10.1207/S15327019EB1103_2
Motz L, Weaver JH (1989) The Story of Physics. Avon Books, New York, NY.
Parker RA, Berman NG (1998) Criteria for authorship for statisticians in medical papers. Stat Med
17: 2289–2299. doi:10.1002/(SICI)1097-0258(19981030)17:20<2289::AID-SIM931>3.0
.CO;2-L
Piwowar HA, Day RS, Fridsma DB (2007) Sharing detailed research data is associated with in-
creased citation rate. PloS ONE 2: e308. doi:10.1371/journal.pone.0000308
Sampat BN (2006) Patenting and U.S. academic research in the 20th century: The world before
and after Bayh–Dole. Res Policy 35: 772–789.
Shamos MH (1959) Great Experiments in Physics: Firsthand Accounts from Galileo to Einstein. Dover
Publications, New York, NY.
Strunk W, White EB (1979) The Elements of Style. 3rd Edition. Allyn and Bacon, Boston, MA.
Tarnas R (1991) The Passion of the Western Mind. Ballantine Books, New York, NY.
Tarnow E (1999) The authorship list in science: Junior physicists’ perceptions of who appears and
why. Sci Eng Ethics 5: 73–88. doi:10.1007/s11948-999-0061-2
Valiela I (2001) Doing Science: Design, Analysis, and Communication of Scientific Research. Oxford
University Press, New York, NY.
Van Doren C (1991) A History of Knowledge. Ballantine Books, New York, NY.
Vivian CH, Jackson BM (1961) English Composition. Barnes & Noble, New York, NY.
Walters R, Kern TH (1991) How to eschew weasel words. Johns Hopkins Magazine, December 1991.
Westfall RS (1993) The Life of Isaac Newton. Cambridge University Press, Cambridge, UK.
Yank V, Rennie D (1999) Disclosure of researcher contributions: A study of original research ar-
ticles in The Lancet. Ann Intern Med 130: 661–670.
Author Biography
Jay D. Humphrey is Regents Professor and Carolyn S. and Tommie E. Lohman Professor of bio-
medical engineering at Texas A&M University. He has authored a graduate textbook (Cardiovascu-
lar Solid Mechanics), coauthored an undergraduate textbook with a former student (An Introduction
to Biomechanics), published more than 150 archival papers and chapters in other books and ency-
clopedias, and serves as coeditor in chief for the international journal Biomechanics and Modeling in
Mechanobiology. He has served as a reviewer for 50 technical journals and 20 funding agencies in
the United States and abroad. He is a fellow of the American Institute for Medical and Biological
Engineering.
Jeffrey W. Holmes is Associate Professor of biomedical engineering and medicine at the University
of Virginia. He has published more than 40 archival papers and book chapters and has reviewed for
15 technical journals and several funding agencies, including the American Heart Association and
the National Institutes of Health. Before moving to Virginia, he developed and taught the course
“Ethics for Biomedical Engineers” at Columbia University, where he won the Distinguished Fac-
ulty Teaching Award. Other awards include an Alexander von Humboldt Research Fellowship, the
Y.C. Fung Young Investigator Award, and an Established Investigator Award from the American
Heart Association.
Index
A
Abbreviations, 32-33, 49
Abstract, 49, 98
Academic research records, 104-105
Acknowledgments, 49-50
Active voice, 15-17
Adjectives, 25
Advanced Research Program, 73
Adverbs, 25
Affect/effect, 27
Aforementioned, 28
Alternative/alternation, 26
American Heart Association, 94
American Physiological Society, 116-118
Among/between, 26
Amount/number, 26
And/or, 28
Apostrophe, 32
Appendices, 50
Archival journal paper
authorship of, 54, 86-87
composition of, 54
copyright, 58
galley proofs, 57-58
order of authors in, 86-87
origin of, 53-54
page charges, 59
peer reviews, 124-126
permissions, 58-59
revisions, 55-57
submission and review of, 54-56, 124-125
typesetting of, 57
Attribution within a research group, 122-123
Audience
for grant, 64
for oral presentation, 81, 83
Audiovisual aids, 78-80
Authors/authorship
abstracts, 98
attribution within a research group,
122-123
citation vs., 91-92
copyright transfer to journal, 114
criteria for, 97
expectations for, 88-89
final review and approval by, 97-98
financial support issues, 91
ghost, 90
gift, 89-90
guest, 90
guidelines for, 92-93
honorary, 90
impact of, 87-88
inappropriate practices involving, 90-91, 98
International Committee of Medical
Journal Editors standards, 93-95
notification of, 94-95
order of, in publication, 86-87
predraft group meeting with, 97
problems associated with, 88-93
quantifying of contributions, 96
quid pro quo issues, 91-92
requesting reviewers by, 125
Slutsky case, 85-86
specifying of contributions, 95-96
standards for, 93-96
students, 92-93
submission agreement, 87
technicians, 92-93
B
Background and significance section, of grants,
68-69, 75-76
Bayh–Dole Act, 119
Because of/due to, 27
Because/since, 26
Between/among, 26
Brackets, 35
Brief communications, 41
C
Can/may, 26
cf., 33
Cheating, 122-123
Citations, 36, 50-51, 91, 123
Colleagues. See also Authorship
communicating with, 3
proofreading of writing by, 9-10
Colon, 31
Comma, 30
Communication
with colleagues, 3
definition of, 3
individual differences in, 1
nonverbal, 78
oral, 77-84
Compare with/compare to, 26
Complement/compliment, 27
Comprise/compose, 27
Computational models, 114
Conciseness, 14
Conclusion section, 47-48
Confidence, 79
Continual/continuous, 27
Copyediting, 57
Copyright, 58, 114-118
Correlate, 28
Cover page, 42-44
Critical editing, 8
Curiosity-driven research, 64
D
Dash, 31
Data
annotating of, 113-114
datum vs., 27
electronic, 105-107
fabrication and falsification of, 108,
110
manipulation of, 108
retaining or discarding of, 108-109
sharing of, 112-114
Date-stamps, 106
Digital images, 106, 109
Dilemma, 28
Discussion section, 47-48
Dissertation, 59-61
DNA microarrays, 113
Due to/because of, 27
E
Editing, 8
Editors, 54-56
Effect/affect, 27
e.g., 33
Either/neither, 27
Electronic data, 105-107
Electronic publishing, 58
Ellipses, 35
Em dash, 31
Essential/important, 27
et al., 33
etc., 33
Expectations, for publication, 88-89
Expert reviewers, 54-55
F
Fabrication of data, 108, 110
Falsification, 108
Farther/further, 27
15-minute presentation, 82-84
Figures, 52-53, 66, 81
Financial support, 91
Financial support disclosures, 49
First person, 20
Footnotes, 35
Foreign languages, 33-34
Former, 28
Fraud
authorship, 99, 101-102
data, 108-109
images, 109
recordkeeping, 107-110
Free writing, 7
Future perfect tense, 21
Future tense, 21
G
Galley proofs, 57-58
Gender, 20
Ghost authorship, 90
Gift authorship, 89-90
Good/well, 27
Grants
background and significance section of,
68-69
instructions for, 67
methods section of, 71-72
NIH R01. See NIH R01 grant
peer review of, 126-128
preliminary results section of, 69-70
references section of, 72-73
renewing of, 113
review of, 126-128
revising of, 128
specific aims section of, 68
summary of, 74-75
types of, 63-64
Guest authorship, 90
Gutenberg, Johann, 2
H
Habits, 78
Halley, Edmund, 2
Harvey, W., 19
Honorary authorship, 90
Hooke, R., 2
However/nevertheless, 27
Hyphen, 32
Hypothesis-driven research, 64, 72
I
i.e., 33
Image(s), 106, 109
Image forensics, 110
Impact factor, 87-88
Imply/infer, 27
Important/essential, 27
Industry research records, 104
Infinitives, 22-23
Intellectual property, 121, 128
International Committee of Medical Journal
Editors, 93-95
Interpretation of findings, 44
Introduction section, 48-49
L
Laboratory notebooks, 104, 111
Laser pointer, 78-79
Latter, 28
Letter to editor, 42-44
M
Manipulation of data, 108
Markup language, 114
May/can, 26
Medical history, 103
Medical images, 106
Medical records, 103-104, 111
Methods section
of grants, 71-72
of scientific publications, 45-47
Microarrays, 113
Misconduct, 101-102
Model organisms, 113
Models, 114
Modifiers, 23-25
Motivations, for proposals, 64
M.S. thesis, 6-7, 59-61
N
National Institutes of Health. See NIH
National Science Foundation, 63
Neither/either, 27
Nervousness, 80
Neutral pronouns, 20
Nevertheless/however, 27
Newton, Isaac, 2-3
NIH
data sharing limitations, 112-113
description of, 3
model organisms, 113
proposal review at, 64-67
public access to journal articles funded by,
115-118
R01 grant. See NIH R01 grant
NIH R01 grant
annual costs of, 113
background and significance section of,
68-69, 75-76
methods section of, 71-72
overview of, 67-68
preliminary results section of, 69-70, 76
references section of, 72-73
specific aims section of, 68, 75
Nonverbal communication, 78
Noun, 23
Number/amount, 26
Numbers, 32
O
Office of Research Integrity, 100
Oldenburg, Henry, 3-4
Omitted citations, 123
Omnibus Spending Bill, 112
Online publishing, 59, 115
Oral communication, 77-84
Outliers, 109
Outline, 5-7
Ownership
copyright, 114-118
issues involving, 111
patents, 118-121
P
Page charges, 59
Paragraph, 8
Parentheses, 31
Passive voice, 15-19
Past perfect tense, 21
Past tense, 21-22
Patents, 118-121
Peer review. See also Reviewers
archival journal articles, 124-126
description of, 124
grants, 126-128
Per, 29
Permissions, 58-59
Person, 19-21
Ph.D. dissertation, 6-7, 59-61
Physiome Project, 114
Plagiarism, 36, 108, 111, 121-123
Plato, 3
Precede/proceed, 27
Preliminary results section, of grants, 69-70,
76
Preproposal, 73-74
Presentation, oral, 77-84
Present perfect tense, 21
Present tense, 21
Principal/principle, 28
Principia, 2-3, 22
Printing press, 2
Professional responsibility, 1
Program announcement, 63
Pronouns, 20
Proofreader marks, 57-58
Proofreading, 9-10
Proposals
motivations for, 64
preproposal, 73-74
review process for, 64-67
Provisional patent, 120
Publications. See Scientific publications
Public Health Service, 100
Publicly funded research
NIH-funded journal articles, 115-118
patent issues regarding, 119-120
Public speaking, 77-84
PubMed Central, 116, 118
Punctuation, 30-32
Q
Question and answer period, of oral
presentation, 83
Quid pro quo, 91-92
Quotations, 35-36
R
Reading, 4
Reading aloud, 8-9
Recommendations, 54-55
Records
academic research, 104-105
backup systems for, 106
electronic data, 105-107
fraud involving, 107-110
industry research, 104
loss of, 100
medical, 103-104, 111
reasons for keeping, 102-105
Slutsky case, 99-102
training purposes of, 103
Redundancies, 10-15
References, 50-51
Reprint requests, 115
Request for proposals, 63
Research plan, 70-72, 76
Resource sharing, 112-114
Results section, 44-45
Reviewers, 54-55, 65-66, 124-126. See also Peer
review
Reviewing
of archival journal paper, 54-56, 124-125
of grants, 126-128
Revisions, 55-57
R01 grant. See NIH R01 grant
Russell, Bertrand, 29
S
Scientific information service companies, 87-88
Scientific publications
abbreviations in, 49
abstract, 49
acknowledgments, 49-50
appendices, 50
citations, 50-51
conclusion section of, 47-48
content of, 41-53
cover page, 42-44
discussion section of, 47-48
expectations on, 88-89
figures, 52-53
financial support disclosures, 49
findings, 44, 46
format of, 41
impact factor for, 87-88
interpretation of findings, 44
introduction section of, 48-49
keywords, 42
letter to editor, 42-44
methods section of, 45-47
patent issues, 120-121
references, 50-51
results section of, 44-45
subheadings in, 46
symbols used in, 46-47
tables, 52-53
types of, 41
writing of, 41-42
Second person, 20
Self-confidence, 79
Self-criticism, 7
Semicolon, 30-31
Sentence, 8
Shall/will, 28
[sic], 36
Since/because, 26
Slides, 81-83
Slutsky case
authorship issues, 85-86, 89-90
recordkeeping issues, 99-102
SOAP note, 103
Socrates, 3
Software, 81-82, 107
Specific aims section, of grants, 68, 75
Statistical forensics, 110
Statisticians, 96
Students
authorship by, 92-93
plagiarism by, 121-122
Submission, of archival journal paper, 54-56
Submission agreement, 87
Symbols, 46-47
T
Tables, 52-53, 66
Team science, 3
Technical paper, 6
Technical presentations, 79
Technical proposal, 6
Technical reports, 61
Technical writing. See Writing
Technician authorship, 92-93
Technology-driven research, 64
Technology transfer, 118, 120
Teleconferencing, 97
Tense, 21-22
That/which, 28
That/who, 28
Thesis, 6-7, 59-61
Third person, 20
This, 29
Thompson, D’Arcy, 40
Time-stamps, 106
Transgenic mice, 114
U
Universities, 120-121
Unnecessary words, 10-15
U.S. Public Health Service, 100
V
Verbs, 25
Vocabulary, 36-39
Voice, 15-19
W
Web conferencing, 97
Well/good, 27
Which/that, 28
While/whereas, 28
Whitaker Foundation, 74
Who/that, 28
Will/shall, 28
Word choice, 26-30
Workplace integrity, 1
Writing
abbreviations, 32-33
approach to, 5-10
editing, 8
footnotes, 35
free, 7
infinitives, 22-23
modifiers, 23-25
outline for, 5-7
person, 19-21
punctuation, 30-32
quotations, 35-36
reading aloud during, 8-9
redundancies, 10-15
tense, 21-22
unnecessary words, 10-15
vocabulary, 36-39
voice, 15-19
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=6813532.pdf&bkn=6813531&pdfType=book
|
Series ISSN: 1946-7680
SYNTHESIS LECTURES ON ENGINEERING
The Engineering Design Challenge: A Creative Process
Charles W. Dolan, P.E., University of Wyoming
The Engineering Design Challenge addresses teaching engineering design and presents design
projects for first-year students and interdisciplinary design ventures. A short philosophy and back-
ground of engineering design is discussed. The organization of the University of Wyoming first-year
Introduction to Engineering program is presented with an emphasis on the first-year design chal-
lenges. These challenges are presented in a format readily incorporated in other first-year programs.
The interdisciplinary design courses address the institutional constraints and present organizational
approaches that resolve these issues. Student results are summarized and briefly assessed. A series
of short intellectual problems are included to initiate discussion and understanding of design issues.
Sample syllabi, research paper requirements, and oral presentation evaluation sheets are included.
“The H. T. Person Endowment at the University of Wyoming was established in 1990 to focus
on undergraduate education. Introducing and teaching design to undergraduate students
has been the focus of the H. T. Person Chair for over a decade. It is my pleasure to share
some of the chair’s experiences with you in the hope that they may be of assistance to your
program. This book focuses on both first-year projects and interdisciplinary senior design
project. In addition to the description of the projects, the methodology for organizing and
executing the projects is presented.”
— Robert Ettema, Dean, College of Engineering and Applied Science,
University of Wyoming.
About SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis Digital
Library of Engineering and Computer Science. Synthesis Lectures provide
concise, original presentations of important research and development topics,
published quickly, in digital and print formats. For more information visit
www.morganclaypool.com
ISBN: 978-1-62705-176-7
The Engineering Design Challenge: A Unique Opportunity
Synthesis Lectures on Engineering
The Engineering Design Challenge: A Unique Opportunity
Charles W. Dolan
2013
The Making of Green Engineers: Sustainable Development and the Hybrid Imagination
Andrew Jamison
2013
Crafting Your Research Future: A Guide to Successful Master’s and Ph.D. Degrees in Science
& Engineering
Charles X. Ling and Qiang Yang
2012
Fundamentals of Engineering Economics and Decision Analysis
David L. Whitman and Ronald E. Terry
2012
A Little Book on Teaching: A Beginner’s Guide for Educators of Engineering and Applied
Science
Steven F. Barrett
2012
Engineering Thermodynamics and 21st Century Energy Problems: A Textbook Companion
for Student Engagement
Donna Riley
2011
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo
2011
Systems Engineering: Building Successful Systems
Howard Eisner
2011
Fin Shape Thermal Optimization Using Bejan’s Constructal Theory
Giulio Lorenzini, Simone Moretti, and Alessandra Conti
2011
Geometric Programming for Design and Cost Optimization (with illustrative case study
problems and solutions), Second Edition
Robert C. Creese
2010
Survive and Thrive: A Guide for Untenured Faculty
Wendy C. Crone
2010
Geometric Programming for Design and Cost Optimization (with Illustrative Case Study
Problems and Solutions)
Robert C. Creese
2009
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey and Jeffrey W. Holmes
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Analog Multimedia
Explorations
Lina J. Karam and Naji Mounsef
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Digital Multimedia and
Robotics Explorations
Lina J. Karam and Naji Mounsef
2008
CAD/CAM of Sculptured Surfaces on Multi-Axis NC Machine: The DG/K-Based
Approach
Stephen P. Radzevich
2008
Tensor Properties of Solids, Part Two: Transport Properties of Solids
Richard F. Tinder
2007
Tensor Properties of Solids, Part One: Equilibrium Tensor Properties of Solids
Richard F. Tinder
2007
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
2007
Project Management for Engineering Design
Charles Lessard and Joseph Lessard
2007
Relativistic Flight Mechanics and Space Travel
Richard F. Tinder
2006
Copyright © 2013 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations in
printed reviews, without the prior permission of the publisher.
The Engineering Design Challenge: A Unique Opportunity
Charles W. Dolan
www.morganclaypool.com
ISBN: 9781627051767 (paperback)
ISBN: 9781627051774 (ebook)
DOI 10.2200/S00487ED1V01Y201303ENG021
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING
Lecture #21
Series ISSN
Synthesis Lectures on Engineering
Print 1939-5221 Electronic 1939-523X
The Engineering Design Challenge: A Unique Opportunity
Charles W. Dolan
University of Wyoming
SYNTHESIS LECTURES ON ENGINEERING #21
Morgan & Claypool Publishers
ABSTRACT
The Engineering Design Challenge addresses teaching engineering design and presents design projects
for first-year students and interdisciplinary design ventures. A short philosophy and background of
engineering design is discussed. The organization of the University of Wyoming first-year Intro-
duction to Engineering program is presented with an emphasis on the first-year design challenges.
These challenges are presented in a format readily incorporated in other first-year programs. The
interdisciplinary design courses address the institutional constraints and present organizational ap-
proaches that resolve these issues. Student results are summarized and briefly assessed. A series of
short intellectual problems are included to initiate discussion and understanding of design issues.
Sample syllabi, research paper requirements, and oral presentation evaluation sheets are included.
KEYWORDS
engineering, design, challenges, first-year, interdisciplinary, multidisciplinary, assess-
ment, outcomes, organization, evaluation
To M for years of understanding, help, and support
Contents
Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
1 The World of Engineering Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Design Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 What is Engineering Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Why Teaching Design is Difficult . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 The Engineering Design Challenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Introduction to Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1 ES 1000 Introduction to Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.1 Introduction to University Life . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.2 Intellectual Community . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1.3 Information Literacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Role of H. T. Person Chair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Closing Comments on ES 1000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3.1 Student Engagement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3.2 The Course Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3 First-year Design Challenge Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.1 Overall Challenge Philosophy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2 Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.3 Cost and Time Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.4 Awards and Prizes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.5 Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4 The First-year Design Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.2 Fleet Efficiency and the Auto Design Dilemma . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.2.1 Facilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.2.2 The Challenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.2.3 The Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.2.4 Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4.2.5 Challenge Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.2.6 Concluding Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.3 Dunebuggy Dash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.3.1 Facilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.3.2 Challenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.3.3 The Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.3.4 Challenge Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.3.5 Concluding Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.4 First Flight: Fly a Foam Airplane 100 Yards . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.4.1 Facilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.4.2 Challenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.4.3 The Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.4.4 Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.4.5 Challenge Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.4.6 Concluding Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.5 Hackysack Flip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.5.1 Facilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.5.2 Challenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.5.3 The Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.5.4 Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.5.5 Challenge Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.5.6 Concluding Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.6 Mousetrap Powered Car Slalom . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.6.1 Facilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.6.2 Challenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.6.3 The Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.6.4 Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.6.5 Challenge Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.6.6 Concluding Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.7 The Great Wall of Carpet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.7.1 Facilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.7.2 Challenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.7.3 The Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.7.4 Scoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.7.5 Some References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.7.6 Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.7.7 Challenge Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.7.8 Concluding Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.8 Underwater Recovery Vehicle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.8.1 Facilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.8.2 Challenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.8.3 The Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.8.4 Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.8.5 Challenge Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.8.6 Concluding Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.9 Wind Turbines and Wind Power Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.9.1 Facilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.9.2 Challenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.9.3 The Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.9.4 Developing a Test Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.9.5 Calculations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.9.6 Challenge Day Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.9.7 Scoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.9.8 Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.9.9 Challenge Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.9.10 Concluding Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5 Interdisciplinary Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.1 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.2 Administrative Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.3 Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.3.1 NASA Zero Gravity Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.3.2 Automated Transit System for Campus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.3.3 Disappearing Roads and Gas Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.3.4 University Energy Plant Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.3.5 Medieval Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.4 Student Recruitment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.5 Project Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.6 Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
6 Interdisciplinary Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.1 Interdisciplinary Design Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.2 NASA Zero Gravity I: Construction in Space . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.2.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.2.2 Class Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.2.3 Class Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.2.4 Student Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
6.2.5 Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
6.3 NASA Zero Gravity II: Exercise Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
6.3.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
6.3.2 Class Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
6.3.3 Class Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
6.3.4 Student Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
6.3.5 Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
6.4 Design of an Automated Transit System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
6.4.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
6.4.2 Class Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
6.4.3 Class Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
6.4.4 Student Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
6.4.5 Assessment Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.5 Disappearing Roads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.5.1 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.5.2 Class Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.5.3 Class Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.5.4 Student Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.5.5 Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
6.5.6 Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.5.7 Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.6 Beetle Kill and Biomass Energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.6.1 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.6.2 Class Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.6.3 Class Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.6.4 Student Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.6.5 Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
6.7 Gothic Cathedrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
6.7.1 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
6.7.2 Class Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
6.7.3 Student Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
6.7.4 Assessment Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
7 Getting Started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
7.1 H. T. Person Design Challenge for Primary and Secondary Schools in Wyoming 103
7.1.1 Engineering Background and History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
7.1.2 What an Equation Tells you About Design . . . . . . . . . . . . . . . . . . . . . . . . . 104
7.2 Meteor Collision Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
7.3 Tire Particle Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
7.4 What Happened? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
7.4.1 Windmill Collapse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
7.4.2 Bridge Accident . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
7.4.3 Development of Stress and Strain Curves . . . . . . . . . . . . . . . . . . . . . . . . . . 105
7.5 Notes for Chapter 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
7.5.1 Paper Column Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
7.5.2 Meteor Collision Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
7.5.3 Tire Particle Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.5.4 What Happened . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.5.5 Stress-Strain Curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
A H.T. Person Lectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
B
Sample Course Syllabus and Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
C Information Literacy Paper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
D Sample Oral Presentation Evaluation Sheets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Author’s Biography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Foreword
If you were to teach engineering design to first-year undergraduate students, how should you proceed?
This question is at the heart of the present book by Professor Charles Dolan, an accomplished
designer and adept educator.
The question may seem daunting, as engineering design entails the interplay of creativity
and specialist expertise gained from a rigorous path of engineering study. First-year undergraduate
students may be naturally creative, but they are just starting down the path and thus unlikely to have
the know-how needed for meaningful engineering design. Yet as Professor Dolan shows, the design
process essentially entails a series of basic considerations that first-year students can readily grasp.
Professor Dolan’s design challenge requires that students organize themselves into teams to
pursue a given design objective subject to a set of prescribed design constraints (rules, as he calls
them). To attain the design objective, each team must prepare a simple plan, coordinate and record its
activities, performance test the resulting design, and write a concise technical report. The challenge
provides students with a substantial foretaste of the design process, and enables them to better
anticipate the knowledge needed for successfully undertaking more complex engineering designs.
Not satisfied with tackling one major question about teaching design, Professor Dolan then
shifts gears to take on another: How to teach interdisciplinary design to a team of undergraduate
students drawn from different engineering disciplines?
Addressing this question can be complicated by non-trivial curricular and administrative
issues, as he explains. Additionally, it can be complicated by the issue of conceiving design projects
that meaningfully involve students from different engineering disciplines over a single semester.
Nevertheless, Professor Dolan successfully addresses all of these issues, to the extent that one of his
interdisciplinary projects, Disappearing Roads, received national recognition and was implemented
at sites requiring a minimal construction footprint.
Professor Dolan’s design expertise draws on his vast experience as a consulting engineer work-
ing with teams to design a broad range of concrete structures, including a remarkable series of concrete
monorail structures where design requirements were subject to comprehensive sets of constraints
pertaining to sound structural performance, transportation efficiency, convenience, economics, and
aesthetics. He has coauthored a leading book on the design of concrete buildings, engaged in nu-
merous research projects, and for many years has been a member of the country’s (and arguably the
world’s) principal committee prescribing the design rules for concrete buildings.
He teaches in an engaging and anecdote-rich manner, and occasionally enjoys pointing out
less common factors that sometimes enliven the design process. For example, I recall him recounting
with some humor the challenge he once faced as a member of an international design team whose
members spoke four languages including Norwegian and Japanese. Great care was needed to ensure
clarity of communication, a point he often emphasizes to students.
Professor Dolan is a “designer par excellence.” The University of Wyoming has been fortunate
to have him as its founding H. T. Person Chair of Engineering Education, which was established
for the express purpose of attracting world-class engineers and engineering educators to help teach
the design process to undergraduate engineering students. Professor H. T. Person himself was an ac-
complished teacher extensively recognized for his contributions to engineering practice in Wyoming
and surrounding regions.
This book is Professor Dolan’s leave-taking contribution to engineering education. It offers
a useful, brief digest of his experience teaching design and introducing engineering to first-year
students. In late 2012 he retired from the H. T. Person Chair, and now works at his own pace on a
variety of very interesting engineering design and research activities.
Robert Ettema
Dean and Professor
College of Engineering and Applied Science
Laramie, Wyoming
March 2013
Preface
Hjalmar Thorval (H. T.) Person was Professor, Department Head, Dean, and President of the
University of Wyoming in the period from 1929 to 1968. Known as “Prof,” he was an accomplished
teacher, sometimes teaching 20 credit hours per semester. As dean he was instrumental in moving
the College of Engineering and Applied Science from its infancy to a nationally recognized program.
As president, he is credited with quelling faculty dissention on campus and returning the university
to a sense of progress. A Fellow of the American Society of Civil Engineers, he served as a director,
on the executive committee, and on technical committees including Drainage and Irrigation, and
Registration of Engineers.
H. T. worked summers as a practicing engineer for American Bridge Company and the Missouri
Highway Department in order to bring state-of-the-art design to his classes. In Wyoming he served
variously as the director for the U.S. Coast and Geodetic Survey and the State Control Survey. Person
was the state’s chief negotiator on the Upper Colorado, Yellowstone, Cheyenne, Snake, Niobrara, and
Columbia River Compact Commissions. He was appointed to the President’s Missouri River Basin
Survey Commission and in 1965 was named to the Upper Colorado River Commission. In a region
of the country where “whiskey is for drinking and water is for fighting,” the
commitments to and importance of these commissions were both time consuming and essential to
the State of Wyoming. He was recognized for his service by the National Council of State Board
of Engineering Examiners, and the Four-State Irrigation Council. H. T. Person received the first
Wyoming Society of Professional Engineers’ “Engineer of the Year” award and the University of
Wyoming’s “Medallion of Service.”
In the early 1990s, several alumni joined to create an endowment to establish a chair to
honor the vision of H. T. Person and the “Prof ’s” dedication to undergraduate education. Over
200 individuals, groups, and foundations supported the endowment. I had the distinct pleasure to
discuss H. T. Person’s legacy with several of the leaders of this effort including Gus Albert, “Tut”
Ellis, Harold Kester, Albert “Boots” Nelson, Frosty Kepler, and Ken Kennedy. To a person, they all
expressed admiration for the contribution that H. T. made to their careers and the support he offered
to the multitude of students studying at the university.
As the endowment was being initiated, a series of lectures in H. T. Person’s honor was estab-
lished. Each fall the college invites a noted individual to be the H. T. Person Distinguished Lecturer
to address our alumni, students, and faculty as part of the university’s homecoming weekend. The
selection of the speakers is an opportunity to present timely topics and, in many cases, to high-
light the accomplishments of our alumni. For example, when the movie Apollo 13 was released,
Mr. David Reid, a graduate of the University of Wyoming Mechanical Engineering program and
flight controller for the Apollo 10, 11, 12, 13, and 14 missions, spoke on the real issues of bringing
Apollo 13 back to earth. Mr. Larry Novak of the design firm Skidmore, Owings and Merrill spoke
on the rescue efforts at the World Trade Center in 2002. In 2010 Mr. Joe Leimkuhler, Manager
of Offshore Well Delivery for Shell Oil, spoke on oil drilling operations following the Deepwater
Horizon oil spill. Complementing these lectures, speakers meet with classes to discuss the projects
in detail and to provide insight on the value of engineering education. Even though H. T. Person
was a civil engineer, a conscious effort is made to select speakers from all engineering disciplines.
Thus, students and faculty have an opportunity to explore new ideas and concepts. A complete list
of the H. T. Person distinguished speakers is found in Appendix A.
In 2000, Mr. John Clark, noted bridge engineer, spent a semester on campus as the H. T.
Person professor in residence. He supplemented the H. T. Person Homecoming Lecture with a
presentation on the collapse of the Quebec Bridge. Mr. Clark brought the background of the
bridge design and construction and the Order of the Engineers ceremony together for the students.
He challenged and exposed the students to the high level of responsibility that is expected of them
as practicing engineers. The successful completion of Mr. Clark’s professorship suggested that a
permanent chair would best suit the vision for improved undergraduate education.
In 2002 a national search was conducted for the first permanent H. T. Person Chair. I received
the appointment and have had the privilege to focus on undergraduate education and engineering
design. It has been a true pleasure to be able to share this endeavor for others to use. My years of
professional practice have been essential to my ability to provide meaningful and relevant experiences.
The challenges presented here are the culmination of over a decade of development. When
they began, the college had an embryonic first-year engineering experience focusing on the student
transition from high school to the university. Therefore, initial efforts of the H. T. Person Chair
focused on the first-year Engineering Design Challenge. After three years, the Introduction to En-
gineering course was well established and was being managed by Dr. Thomas V. Edgar. The design
and preparation of the annual first-year Design Challenge remained in the purview of the H. T.
Person Chair. Dr. Edgar prepared many of the common lectures for the course and his collaboration
on this course provided me the opportunity to initiate interdisciplinary design courses. His keen insight
and ability to involve students was a particular asset.
The Interdisciplinary Senior Design program courses were created to be truly broad in scope.
The completed projects are discussed in detail as is the philosophy and organization to make them
work. In 2010, Dean Robert Ettema and I discussed the possibility of creating this volume to transfer
the experience of developing undergraduate design programs to others who are interested in similar
ventures. While this effort focuses on the work of the undergraduate students at the University of
Wyoming, the concepts are transferrable. As is the case in most educational situations, motivated
students and faculty rise to the challenge. More than anything, this book is a testament to the ability
of the students to accept the challenges.
As John Donne’s poem reads, “No man is an island entire of itself …” such is also true of this
volume. As with any venture, there are many colleagues contributing to these projects. Dean O. A.
Plumb and Dean Robert Ettema have supported the sometimes non-conventional approaches to
developing design efforts. Associate Deans for undergraduate education, David Whitman, Richard
J. Schmidt, and Steven F. Barrett were always available for consultation on methods to assess progress
and to issue the semester-end student evaluation surveys. Dr. David Mukai and Dr. Jennifer Tanner
have been valuable colleagues and co-PIs for research endeavors.
Most important is the concept of a chair devoted to undergraduate teaching. The financial
and moral support of the original donors to the H. T. Person Endowment is deeply appreciated.
Four people in particular have served on the H. T. Person Advisory Board and have been sounding
boards for some of the ideas developed in this program. Albert “Boots” Nelson has been a constant
source of inspiration and support of these efforts. Tom Lockhart, Floyd Bishop, and Bill Bellamy
have also provided ideas and insight for these efforts. In addition to their support of undergraduate
education, it is a tribute to their professional careers and foresight that several of the original donors
are members of the College of Engineering and Applied Science Hall of Fame. I thank them all for
their active interest in engineering education even as they pursued other interests in their careers
and retirement.
Charles W. Dolan
H.T. Person Chair of Engineering
Laramie, Wyoming
March 2013
CHAPTER 1
The World of Engineering Design
When Daniel Boorstin, former Librarian of Congress, released his book The Discoverers,1
there was no mention of any engineering feat. The Discoverers examines the intense and often
individual pursuits of people driven to understand the world around them. Beginning with the
ancient Greeks and progressing through Newton to Watson and Crick, The Discoverers presents a
personal quest to understand science and the natural world. As such, science requires the intellectual
capacity to examine a multitude of data and assimilate that information into a coherent theory.
When successful, each theory can be replicated and validated by others.
It wasn’t until Boorstin’s publication of The Creators2 that engineering was acknowledged. In
The Creators, art, music and sculpture appear alongside works in stone, concrete, and steel as testament
to human creativity. Michelangelo, Monet, and Mozart share space with DaVinci, Brunelleschi,
and Eads. While engineers do not typically see themselves in the same venue as the great artists,
composers, and playwrights, they share a common trait. They create something where nothing
existed.
Engineering design is a unique activity with a single problem statement and multiple solutions.
Often only a small number of the solutions come close to meeting all of the project requirements.
In Path Between the Seas,3 David McCullough describes the extraordinary difficulties of working in
the mud and landslides during excavation of the Panama Canal. When John Stevens, J. J. Hill’s chief
engineer of the Great Northern Railroad, was put in charge of the construction, he immediately saw
a railroad problem, not an excavation problem. By employing steel rails instead of working in
the constantly changing mud, Stevens was able to stabilize the construction and move the project
toward a successful conclusion. Path Between the Seas additionally places engineering design into the
larger context of societal needs. Less than 10 percent of the book deals exclusively with the technical
problems while the remainder deals with the politics, organization, and personalities engaged in this
world class project.
1.1 DESIGN RECOGNITION
If engineering design is such a creative process, why are so few engineers recognized for their
endeavors? This question evokes two different and somewhat diametric responses. The first argument
is that engineering practice has not engendered the cult of personality evident in the arts, architecture,
and scientific communities. The second argument is far more germane to the development of the craft.
Engineers work in teams with more emphasis on the end result than on the individual contribution.
Simply put, individuals are easier to recognize than teams.
The word Science derives from the Latin scientia, meaning “knowledge.” Using the Wikipedia
description, science “is a systematic enterprise that builds and organizes knowledge in the form of
testable explanations and predictions about the universe.” Scientific theories and discoveries, unlike
design projects, are often named for the discoverer, not the principle involved. Newton's laws of
motion and Boyle's law for gas pressure are examples of named scientific principles. An examination
of the development of the atomic bomb at Los Alamos, New Mexico, illustrates the counter practice.
Everything from the uranium separation to the development of the explosive detonation devices was
an engineering endeavor. That is, scientific principles are used to develop practical applications to
meet project criteria. Yet, by definition, there were no engineers at Los Alamos, only scientists and
"research scientists."4
While Henry Petroski, in Remaking the World: Adventures in Engineering,5 decries the lack
of engineering recognition, the profession identifies the larger issue as the need to be able to com-
municate with each other and with the public at large. Engineering education places a premium
on teamwork and oral and written communication. Articulation of complex issues, solutions, and
the impact on society falls to the engineer. History has proven there is no place for hubris in the
engineering design profession. One needs only to read of the impact on William Mulholland after the
collapse of the St. Francis Dam in California to fully appreciate the consequences to the individual and
society resulting from neglect or oversight in design.6
1.2 WHAT IS ENGINEERING DESIGN
Design is a creative process. As such, the definition of design changes with its associated profession.
Art, engineering, music and all creative enterprises have design components. For example, civil
engineering design may include creation of contract documents, calculations, or studies that describe
how to assemble materials in new or novel ways. Similarly, computer designers may create a new
generation core chip just as a musician drafts a score or an artist sketches the start of a painting. An
example of the creative symbiosis of technology, design, and the arts is Wikipedia, which is why it is
used in this context. The platform allows all to assemble information and at the same time contains
a series of checks and balances to assure validity of the work.
More formally, Wikipedia defines design as follows:
(noun) a specification of an object, manifested by an agent, intended to accomplish goals, in a
particular environment, using a set of primitive components, satisfying a set of requirements,
subject to constraints;
(verb, transitive) to create a design, in an environment (where the designer operates)
This formal definition provides a framework for the development of engineering design chal-
lenges presented in this book. Specifically, the design must satisfy a set of criteria or requirements
and is constrained by external factors such as available materials or cost.
1.3 WHY TEACHING DESIGN IS DIFFICULT
The idea that design creates something where nothing previously existed is a source of anxiety when
teaching design. In the university environment, courses are offered by individual topic and subject
area, which are often aligned with the professor’s personal and professional interest. In most cases,
the subject material is further structured to be narrow and analytic in nature. “Analytic” implies the
student assesses the information provided, prepares the necessary calculations, and presents a unique
solution. This is especially true in the developmental years of engineering education. In these courses,
the student is assembling the building blocks needed for the rest of his or her career. At the same time,
coordination between these core courses is minimal or non-existent. In consequence, students have
several years of experience and practice generating a single solution to a well constrained problem,
which is then graded by the faculty member as right or wrong. Throughout this process, thinking
that integrates coursework from multiple subjects is generally absent or not developed.
Somewhere late in the junior year, or certainly in the senior year, the engineering or computer
science student runs into the “design project.” Depending on the resources available, faculty members
often simplify the design practice questions. Simplification is for self-survival. In a class of 30 to
100 students there will be a wide array of acceptable solutions. Grading this multitude of solutions
engages a substantial amount of the faculty member’s time and is often not suitable for delegation
to a “teaching assistant.” Even if a teaching assistant is available to aid with the grading, the faculty
member is responsible for defining the range of acceptable solutions and needs to provide insight
on the “correctness” of the solution domain.
Developing engineering design problems that require substantial interaction with a student
group is even more challenging. These challenges range from introductory first-year courses to
interdisciplinary senior design courses. In each case the logic, organization, and assessment of the
design courses are presented.
1.4 THE ENGINEERING DESIGN CHALLENGE
The “Engineering Design Challenge” is then a paradox. To the educator, the challenge is to impart
concepts of design to students steeped in a tradition of solving analysis problems. To the student,
design is an uncomfortable step from known to unknown conditions, often with apparently sketchy
definitions of the problem requirements. The following is an examination of both sides of the paradox.
Each section discusses the design challenge presented to the students and the objectives and trials
of managing the design challenge effort.
Complementing the Engineering Design Challenge is an overview of the University of
Wyoming first-year Introduction to Engineering course. This course is required of all first-year engi-
neering students and includes the first-year design challenge. The success, failure, and lessons learned
from this course are discussed.
REFERENCES
[1] Boorstin, D. J., The Discoverers, First Vintage Books, New York, 1983.
[2] Boorstin, D. J., The Creators: A History of Heroes of the Imagination, First Vintage Books, New York, 1993.
[3] McCullough, D. G., Path Between the Seas: The Creation of the Panama Canal, 1870–1914, Simon and Schuster, New York, 1977.
[4] Rhodes, R., The Making of the Atomic Bomb, Simon & Schuster, New York, 1986.
[5] Petroski, H., Remaking the World: Adventures in Engineering, First Vintage Books, New York, 1997.
[6] Reisner, M., Cadillac Desert: The American West and Its Disappearing Water, Penguin Books, New York, 1993.
CHAPTER 2
Introduction to Engineering
2.1 ES 1000 INTRODUCTION TO ENGINEERING
Each fall semester, the University of Wyoming College of Engineering and Applied Science offers
13 sections of ES 1000 - Introduction to Engineering. With 22 to 30 students per section, this is the
largest common course in the university. Instructors are typically tenured faculty or extended term
lecturers who volunteer to lead the course. The professor is assisted by an undergraduate engineering
peer assistant. A common course syllabus is developed and a set of prepared lecture notes accompanies
the syllabus to assist each professor. A sample course syllabus is included in Appendix II. Each fall
a design challenge is developed and presented to the first-year students.
The University of Wyoming Introduction to Engineering course is one credit hour and is
intended to cover a variety of topics that are required by the University Studies Program (USP)
as well as introducing the students to a design exercise. Embedded within the ES 1000 course
are the university requirements for information literacy and intellectual community. Both of these
requirements are somewhat loosely defined. For example, the University Studies Program defines
an intellectual community:
Intellectual Community Definition:
Courses that fulfill the Intellectual Community requirement of University Studies provide stu-
dents with an introduction to the purpose and philosophy of higher education. These academic,
content-based courses, designed for first-year students, focus on the critical-thinking skills nec-
essary to understand, analyze, and produce knowledge within the framework of the discipline
or area of inquiry in which the course is offered.
In attempting to address all areas of study within the university, this definition is more of a
description than a definition and offers only marginal guidance for the organization of the course.
Similarly, the University Studies Program defined information literacy from the American Library
Association.
Information Literacy Definition:
Information Literacy, as defined by the American Library Association, is the ability to “rec-
ognize when information is needed and to locate, evaluate and use effectively the needed
information.”
These two requirements are complementary to the design process. Even so, combining all
of these actions into a single course, let alone a single credit hour, is challenging. Considering the
above, the university catalog description of the ES 1000 course is as follows:
ES 1000. Orientation to Engineering Study. 1. Skills and professional development related
to engineering. Involves problem solving, critical thinking and ethics, as well as activities
to help transition to university environment. Required of all freshmen entering engineering
curricula. Students with credit in UNST (University Studies) 1000 may not receive credit for
this course. (Normally offered both semesters)
The I and L designations in the catalog description indicate that the course meets the intel-
lectual community and information literacy requirements and is approved for such by the USP
Committee. With only a single credit hour to accomplish the objectives of this myriad set of re-
quirements, the course content is relatively packed. Over the years, the course has evolved from an
effort to assist engineering students transitioning from high school to college life to a course with
substantial engineering and design content. The current content primarily addresses the philosophy
of engineering while engaging the student in the engineering profession. Approximately 10 years
ago the course was modified to meet the 2003 USP criteria listed above and to add a design
challenge to the curriculum. In addition to the course professor, each section is assigned a peer assis-
tant, who receives a small stipend. The peer assistant assists the faculty member in the presentation
of the course and becomes a contact for the first-year students. The peer assistant concept has proven
to be exceptionally valuable. The peer assistant is able to answer a number of transitional questions
that faculty members are, by and large, unaware of or unable to address. These questions deal with
issues such as campus food, dating, roommate problems, selection of faculty members for courses
during advising, and related topics.
When broken down to its fundamentals, ES 1000 has three important components. The
first component introduces the students to campus life. The second component requires students to
become active in collaborative work, primarily by establishing groups engaged in the design challenge.
The third component, information literacy, is an individual effort that requires the students to prepare
a paper on a topic related to the design challenge and a second paper assessing the sources used in
the research paper.
2.1.1 INTRODUCTION TO UNIVERSITY LIFE
The introduction to university life component engages students in the College of Engineering and
Applied Science and the campus at large. There are a required number of activities that each student
must attend. These include professional society meetings within the college and events external to
the college. Each student is asked to attend the senior design symposium presentations at the end of
the semester. Participation is recorded by a self-reporting system structured to foster responsibility
and communication.
A critical piece of this component is encouraging students to participate in professional society
activities. An underlying philosophy for this requirement is to retain students. Retention is improved
if students are engaged in their areas of interest. Undeclared students visit professional societies
in areas of interest. In addition to the regular professional societies, the students may also select
from the Society of Women Engineers, the Minority Engineering Program, and Engineers without
Borders. Through this effort students meet upper-class students and become comfortable both in
the college and their area of study.
2.1.2 INTELLECTUAL COMMUNITY
The intellectual community requirement engages the students through the design challenge. While
the students work in small groups, the challenge is organized to reward collective efforts. In the
detailed challenges that follow, the entire section is taken as a team. This structure supports the
exchange of ideas and concepts within the class rather than developing an attitude of secrecy and
exclusion.
To encourage cooperative endeavors, students are required to present oral summaries on the
progress of their design efforts. Typically, two presentations occur during the semester. The first
presentation is during the preliminary design phase. This presentation opens the array of potential
solutions to the design challenge or focuses on the development of one aspect of the design. The
second presentation follows either the research paper or the design challenge. If the presentation is
on the design challenge, then the students must explain why their designs worked and what aspects of
the design could be improved. If the oral presentation is on the research paper, the students correlate
their research to the solution they developed for the design challenge. The selection of the design
challenge or the research topic for the challenge is at the discretion of the section professor. The
selection of an option in any given semester depends upon the scheduling of the design challenge.
Because the design challenge often requires the use of particular buildings or facilities on campus,
the presentation date often has to be coordinated with other academic units. When the challenge
occurs at the very end of the term, the research topic is selected for oral presentation.
To foster the building of an intellectual community, each group is required to develop a design
notebook. As motivation for developing and maintaining a notebook, the work of College Hall of
Fame member Thomas Osborne is used as a case study. Osborne worked for Hewlett Packard and
was involved in the design of handheld calculators. Osborne’s notebooks at HP were instrumental
in HP winning a lawsuit and receiving the patent for the first fully functional engineering calculator.
A thought-provoking interview with Osborne can be found on the web.1,2,3
The Design Notebook
To assess student engagement, each group is required to maintain a design notebook. Bound note-
books, e.g., spiral or similar volumes, are preferred; however, loose-leaf notebooks are allowed to
accommodate the students’ conflicting schedules. The notebook is started early in the semester to
record ideas, thoughts, and designs. The notebook must contain the following information:
• A title page containing the design group name and the name and email of each group member.
• A summary page, following the title page, with the following critical information:
– Page numbers for the testing and evaluation program.
– A brief summary of the test data results.
– The total expenditure to construct the project including a statement that the budget
restrictions were met.
• Pages must be numbered and dated.
• Pages should include components of the design work including: sketches, references, notes,
calculations, list of materials, costs, alternate materials considered, summaries of group dis-
cussions, conclusions and decisions, and any other activities relevant to the design challenge.
Comments on ideas that work, things that didn't work, and changes to initial design concepts
are appropriate.
• The design notebook is turned in at the design challenge.
• The design notebook is source material for the oral presentation.
• The final entries should include comments on how to improve the design.
Notebooks are reviewed by the peer assistants at the registration for the design challenge. They
are graded on a simple scale: zero points are assigned for no notebook, 5 points for a fair to poor
compilation, 7 points for an average submittal, and 10 points for a thorough effort. All members of
the group receive the same grade.
2.1.3 INFORMATION LITERACY
The information literacy component of ES 1000 requires the students to prepare a brief paper and
to acquire the skills to critically assess information sources. Students examine a number of different
information sources and evaluate the quality of those sources. The paper requires the students to
evaluate relevant sources and limits the number of references allowed. The limitation of references
forces students to concentrate on those references most valuable to presenting their topic. Each
student is asked to locate one reference from a peer-reviewed journal, one reference from popular
literature, one reference from the Internet, and one reference in opposition to the position they are
presenting. A critical part of the information literacy paper is explaining to the students how to
differentiate between Internet sources and peer-reviewed articles since both appear on the Internet.
With the availability of search engines in the University Library, and the accessibility of Google
Scholar, the students must be able to differentiate between general Internet-based sources and peer
reviewed material.
A second paper asks the students to assess their sources. Restricted to two to three pages in
length, the exercise forces students to critically examine the material they are presenting and create
a rationale for the validity of their source material. The students submit a portfolio of the articles that
they used for their paper. The portfolio accomplishes two objectives. The first objective demonstrates
to the student that resource material should be archived for their own use. The second objective allows
the professor to verify that the source requirements are satisfied. A full description of the information
literacy requirements and the assessment paper requirements is provided in Appendix III.
Perhaps one of the more interesting outcomes of this exercise occurs when students comment
on how difficult the journal papers are to read. Comments like this provide an opportunity to
reinforce the value of the education, explain why the curriculum is structured to build the fundamentals
of an engineering education, and promote an awareness of the value of professional papers compared
to news sources.
2.2 ROLE OF H. T. PERSON CHAIR
The H. T. Person Chair of Engineering position was made possible by University of Wyoming
Alumni who recognized the value of a strong emphasis on engineering fundamentals and desired
to make a serious contribution to undergraduate education. An annual task of the H. T. Person
Chair is to establish the first-year design challenge. The endowment also provides a small amount
of discretionary funds to support these challenges.
The H. T. Person Chair job description is 60 percent teaching, 5 percent advising, 5 percent
service, and 30 percent research and creative endeavors. Historically, two three-credit courses and
one section of ES 1000 are taught in the fall, and two three-credit courses are taught in the spring.
Honors courses are often taught as a voluntary overload. Two to four graduate students are directed
each year.
In retrospect, managing the interdisciplinary senior design courses requires considerable effort.
In addition to the course organization, these courses require coordination with faculty in related
fields so expertise is available when needed. A reasonable work load would be to teach only the
interdisciplinary course in a given semester.
2.3 CLOSING COMMENTS ON ES 1000
2.3.1 STUDENT ENGAGEMENT
Prior to 2003 the student response to ES 1000 had been relatively lackluster. To reinforce the
importance of the course and to emphasize its place in the University Studies Program, in 2003 the
college instituted a Freshman Convocation. The Convocation is held on the Monday of the first
day of class with ES 1000 classes beginning on Tuesday. At the Convocation, the Dean welcomes
the students and presents the importance of the class, the faculty members and peer assistants are
introduced, and the design challenge for the semester is unveiled. The convocation improved student
engagement significantly.
2.3.2 THE COURSE STRUCTURE
After the first cycle of teaching ES 1000, it was apparent that the typical once-a-week, one-hour schedule had to be
restructured to meet the demands of the program. The course dragged on and students lost interest
toward the end of the semester. Consequently, the course was modified to meet twice a week for half
a semester. The modified schedule enhanced student engagement and eliminated conflicts between
the design challenge and final term projects in other classes.
2.4 ASSESSMENT
ES 1000 is successfully engaging students in the college. The course provides the students with
critical tools needed to advance their education and a myriad of topics and activities to explore.
The expanse and complexity of an engineering education is presented and enables students to assess
early in their career if this course of study is appropriate for them. The grading structure of ES 1000
balances class attendance, the design challenge, and outside activities. Programs adopting a similar
approach can adjust where the emphasis is placed in the grading system.
REFERENCES
[1] www.hp9825.com/html/osborne_s_story.html
[2] www.viddler.com/explore/sleibson/videos/4/
[3] www.edn.com/electronics-blogs/4306814/how-hp-got-its-first-calculators-video-interview-with-tom-osborne
CHAPTER 3
First-year Design Challenge Development
3.1 OVERALL CHALLENGE PHILOSOPHY
An overriding concern in engineering education is that engineering and applied science students do not
see design until their junior or senior years. This is a long time to wait and an opportunity for students
to lose interest. The first-year design experience is intended to motivate students to realize that engineering
can be fun as well as complex. The details of first-year engineering challenges are provided in the following
chapter. The philosophy and underlying assumptions of the challenges are presented here.
A review of existing design challenges showed the difficulty of designing challenges for
large groups of students. The MIT “King of the Hill” challenge and the University of Oklahoma
“Pumpkin Toss” demonstrate excitement and student engagement. At the same time, they are a
competition, complex for first-year students, and involve a substantial financial outlay. While life
itself is a competition, engineering generally requires collaboration. Therefore, challenges, not com-
petitions, are developed to promote a cooperative endeavor. The challenges are sensitive to the fact
that college and students' budgets are severely constrained. This leads to a second goal of these design
challenges: limit the budget for the project.
In several of the following challenges, parts are identified and supplied to the students. Funding
for these parts comes from the H. T. Person endowment. These parts were supplied to balance the
playing field by requiring all groups to work with a common set of components. The motors are
typically underpowered or have a high RPM in their native mode. Thus, each group has to adjust
their design to the materials available.
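Because the kit motors trade torque for speed, a quick back-of-the-envelope calculation helps a group pick a gear reduction before building anything. The following Python sketch is illustrative only; the motor speed, torque, wheel diameter, and drivetrain efficiency values are assumptions, not kit specifications.

MOTOR_RPM_NO_LOAD = 12000.0   # assumed free-running speed of the 4.5 V kit motor
MOTOR_TORQUE_NM = 0.002       # assumed available motor torque, in newton-meters

def wheel_performance(gear_ratio, wheel_diameter_m=0.04, efficiency=0.8):
    """Estimate wheel speed and torque for a simple reduction gear train."""
    wheel_rpm = MOTOR_RPM_NO_LOAD / gear_ratio                 # reduction slows the wheel
    wheel_torque = MOTOR_TORQUE_NM * gear_ratio * efficiency   # ...and multiplies torque
    ground_speed = wheel_rpm / 60.0 * 3.14159 * wheel_diameter_m  # meters per second
    return wheel_rpm, wheel_torque, ground_speed

for ratio in (5, 10, 20, 40):
    rpm, torque, speed = wheel_performance(ratio)
    print(f"{ratio:>2}:1 reduction -> {rpm:6.0f} wheel rpm, {torque:.4f} N-m, {speed:4.1f} m/s")

Even with assumed numbers, the sketch makes the trade-off visible: a larger reduction gives the torque needed to move on carpet or sand at the cost of speed and total distance per battery charge.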
One global objective of the challenge is to minimize the rules. This makes the challenge a true
outcome-based activity where creativity and innovation are rewarded. For each challenge, questions
arise that require clarification. These questions are classified as "proprietary" or "public" by the faculty
member in charge or at the student's request. Public questions request general clarification of the
rules and are answered on a FAQ site set up on the ES 1000 class website. For example, a public
question might be: “Can I change the batteries after each run?” Proprietary questions deal with
groups wanting to know if an aspect of the design is allowed. An example of a proprietary question
that is used in class is the keel design of the Australian America’s Cup yacht several years ago. The
winged keel design was approved but kept secret until the race and the Australian yacht ended up
with a significant competitive advantage. Students identify their request as public or proprietary and,
if proprietary, they are answered individually. An example of a proprietary question might be: "Can
we use CO2 cartridges for power?” In the case of the CO2 cartridge, the response may also ask for
a safety plan on the cartridge use.
The challenge envisions designs that can be completed in the student dorm room. The college
machine shop has space available for the first-year students. The area contains a number of hand
tools and drill presses. Students must view a shop safety video before gaining permission to use
shop facilities. Shop times are scheduled when shop staff supervision is available.
3.2 SAFETY
The first challenge assumed that the students would behave in a safe, responsible manner, especially
following the lectures on ethics and holding public safety paramount in professional engineering
efforts. On the day of the challenge, one group launched a vehicle powered by 16 bottle rockets on
the ballroom floor of the Student Union! Therefore, the following safety clause became part of every
challenge.
The objective of the challenge is to foster engineering creativity and cooperation. The design
group is responsible for ensuring safety of participants and spectators during the challenge.
Groups using any feature deemed dangerous by the judges may be asked at any time to prepare
a safety plan or suitably modify the design before continuing in the challenge. Offending designs
may be disqualified at the discretion of the faculty member or peer assistants. Use of pyrotechnic
or similar devices is strictly prohibited. Any questions regarding safety may be directed to your
professor.
The safety statement is general and intended to remind the students of their obligations.
Occasionally, designs come forward that are on the margin of safe operation. In these cases, the
group is required to prepare a safety program that must be approved before their design is allowed to
participate. Requiring a safety plan has two outcomes. Either the design is modified so a plan is not
needed or the students prepare a plan and learn another important aspect of design development.
3.3 COST AND TIME CONSTRAINTS
Engineering design is driven by cost effectiveness. To maintain this philosophy in the challenge,
students’ budgets were initially set at $10–$15 per group. “Free” materials could be used; however,
the definition of “free” needed explanation.
The definition of “free” is that it has no commercial value. In short, the professor can elect to
keep it or throw it in the trash following the competition. “Rented” or “borrowed” materials
are not allowed.
The “free” aspect of the challenge is to encourage the students to use materials that may be
commonly available or otherwise scrap. Plastic drink containers, cardboard, and similar materials
are readily available and have been successfully incorporated into many student designs.
The “borrowed or rented” clause is added because students are very skilled in “gaming the
system.” In one design challenge, a group showed up with a $20,000 piece of robotic equipment
“borrowed” from a parent’s company. While the design was certainly creative, it was not in the spirit
of economical design or the design challenge.
The design challenge has students working in groups of two or three. Experience with these
challenges suggests that providing kits is not needed when parts are readily available locally. Challenge
budgets increased to $30–$50 per group when no parts were provided. This is incorporated into the
class structure as appropriate.
The class schedule imposes a further constraint. With only seven to eight weeks from start to
finish, there is not sufficient time for students to execute a full set of design plans, prototypes, and
finished products. The quality of the student designs improves when trial runs are required. Time
to repair or upgrade the designs is built into the schedule. The ideal schedule has one to two weeks
between trial testing and the final challenge. In cases where trial runs can be completed and initial
data recorded in a continuous manner, less time is needed between the trial and the challenge.
An important parameter in determining the schedule for trial runs and the challenge is the
possibility of damage to the design from the challenge testing. Thus, the car side impact and the
Styrofoam airplane flight require a longer period between testing and the challenge as the design
may be damaged in testing. Awareness of these limitations allows the schedule to provide ample
time for modification or replacement.
3.4 AWARDS AND PRIZES
To minimize the competition aspect of the challenge and to reinforce collaboration, individual prizes
are not generally offered. Pizza parties for the section with the best overall performance are often
provided. The latter is consistent with encouraging cooperation among the groups in each section.
To assess whether prizes were an effective inducement to improve the design, on two occasions,
the author provided cash prizes for the best design performance. In one case, the carpet climb, this
was an extremely effective inducement. In another case, the underwater recovery vehicle, it led to a
chaotic situation with less than professional behavior. Keeping awards at the section level was most
effective.
3.5 ASSESSMENT
The design challenge is very popular and is often cited as the most interesting and memorable part
of the first-year experience. At the end of each semester, the professors and peer assistants gather and
critique the challenge, and a semester-end survey is issued to each student. These closing comments
are based on student reviews and faculty and peer assistant debriefings.
• The low-cost, open-ended design challenges are very popular.
• The design challenges are an effective introduction to the engineering program.
• On a typical challenge, approximately 20 to 30 percent of the groups will successfully complete
the challenge. The toughest challenge, the carpet climb, had only 2 of 134 groups succeed.
• Requiring trial runs as part of the challenge improves the overall design by effectively requiring
the groups to complete their design prior to the actual challenge.
• The budgets listed on each challenge are typically sufficient.
• Access to the shop or to basic hand tools is a benefit but not absolutely necessary.
• Challenges can be used by individual sections or by an entire class.
The one minor deficiency in the ES 1000 program is the lack of time spent working with the
students on formal design development. There are two reasons for this. First, the total number of
topics to be covered diminishes the time available for design. Second, with 13 sections and typically
10 different professors, a lack of consistency on the design effort between sections is possible. A pre-
semester briefing helps assure all faculty and peer assistants understand the objectives of the challenge.
Faculty members have considerable freedom to then adjust the course, but not the challenge rules,
to suit their own goals. The syllabus in Appendix II illustrates the overall course and design-directed
activities.
CHAPTER 4
The First-year Design Challenges
4.1 INTRODUCTION
This chapter presents the design challenges developed at the University of Wyoming. The chal-
lenges are updated to describe the facilities needed for the design challenge, to incorporate relevant
“Frequently Asked Questions,” and to incorporate results from running the challenges.
The issue of facilities is not trivial. The University of Wyoming is fortunate to have widespread
cooperation among various facilities within the university community. At the same time, gaining
access to the indoor football practice field requires coordination between the College of Engineering
and Applied Science and the Athletic Department. One consequence of this cooperation is that the
design challenge is typically held in October during an away football game. This timing provides the
challenge with wider access to campus facilities and less competition for attention but also means
that the syllabus is modified each year to synchronize with the facilities needed for the challenge.
Consideration of the facilities is additionally tied to visibility of the engineering program.
When possible, the challenges are held in highly public areas. The atrium of the campus library was
an ideal setting for the carpet climb. The Student Union ballroom served for the slalom challenge.
Having an audience augments the experience. If the challenge is held in a public area, a poster or
other description of the challenge assists the public in understanding the activity.
Each challenge describes the challenge objective, the rules for the challenge, the safety state-
ment, the challenge organization, and concluding comments. The challenge and rules statements
vary depending on the constraints placed on the challenge and available facilities. The organiza-
tional discussion addresses preparation, evaluation, and peer assistant activities required prior to the
semester, during the semester, and on the challenge day.
If the challenge is subsidized, the subsidy is discussed in the challenge description. Subsidies
often take the format of providing common components that must be used in the challenge and are
funded by the H. T. Person Endowment. The source of these components is identified, although
some caution is offered because the individual parts are not uniformly available from year to year.
Subsidies are typically less than $1,000 spread over the 13 sections of first-year students.
The challenge is typically completed on Saturday morning, although one challenge was suc-
cessful on a Thursday evening. The challenge is conducted outside of class hours so some allowance
is needed for students who cannot attend. The rules require at least one person from each group to
be present. This has been successful. In addition, all groups are allowed to request a preferred time
if there is a personal conflict. This option has been rarely exercised.
The challenge requires about 15–20 minutes per section and is dependent on the number
of test sites available. With 13 sections, the University of Wyoming typically schedules an entire
morning to complete the challenge.
Faculty and peer assistant participation during the challenge is needed. Typically, the course
coordinator and the H. T. Person Chair are present for the entire challenge and serve as the adjudi-
cators for safety and rule violations. The peer assistants handle the administrative tasks. Professors
are encouraged to attend when their sections participate. Each challenge describes the peer assistant
assignments. The peer assistant assignments are provided assuming 10–15 sections are participating
in the challenge. A single section can be conducted with a faculty member and peer assistant.
4.2 FLEET EFFICIENCY AND THE AUTO DESIGN DILEMMA
Figure 4.1: Side impact safety test.
4.2.1 FACILITIES
This challenge requires a test track. Buildings with a concourse around the perimeter of the building
work, as does an indoor running track. The challenge is designed for a smooth floor. If the challenge
is conducted on a running track with a composite surface, the qualifying distances may have to be
adjusted to account for the higher ground friction.
A side impact test area needs to be established. Typically, this is an area about 12 feet square
with a heavy plastic sheet placed on the floor. The impact hammer is placed in the center and
the plastic collects the egg splatter. The challenge has been run in a smaller area with the impact
hammer in a cardboard enclosure to capture the egg splash, but the visual impact of the test is greatly
diminished.
4.2.2 THE CHALLENGE
This challenge calls for each section to form its own automobile company and to manufacture a fleet
of electric motor powered automobiles that are both efficient and safe.
The EPA CAFE (Corporate Average Fuel Economy) rules require an increase in the average
fuel efficiency for automobiles and trucks. Meeting these economy goals generally leads to a design
solution favoring smaller engines and lighter weight cars. This creates a conflict between vehicle fuel
efficiency and safety. Many lighter cars perform less well in collisions than heavier cars. Safety rules
require side impact resistance and side airbags to help mitigate this problem.
In order to gain an insight into the design tradeoff between fuel efficiency and safety, each
section will: a) construct a fleet of vehicles to meet fuel efficiency requirements and b) conduct side
impact tests on vehicles that the team constructs. Fuel efficiency will be determined by a test of the
distance traveled by your vehicles. The distance test begins with fresh batteries and runs until the
vehicle stops. The impact test consists of a sledgehammer mounted on a frame that will deliver the
side impact. The sledgehammer will be raised through a 90-degree angle. The vehicle will be placed
against the side impact frame. An egg is placed in the driver/passenger compartment. The hammer
is released to strike the car. The raw egg must survive.
The fleet consists of three sizes of vehicles: economy, standard, and SUV. All are powered
by a 4.5 volt electric motor. Economy cars will use one AAA battery, standard cars use two AAA
batteries, and SUVs use three AAA batteries, provided by the students. At least one vehicle of each
size must be built and tested as described below. In that regard, the company (section) must decide
what constitutes its best overall fleet composition.
The objective of each company is to earn the most points. The section with the highest point
total gets the admiration of the entire freshman class and a pizza party on the last day of class.
Bonuses and Penalties will be assigned to each company as follows:
(cid:129) Each vehicle meeting the minimum distance standard: 5 points.
(cid:129) Bonus for fleet vehicles exceeding distance standards: 2 times the qualifying standard—10
points, 4 times—15 points, 8 times—20 points, etc. Each additional doubling of the qualifying
standard gains 5 points.
(cid:129) Companies not having at least one vehicle from each category will be penalized 50 points.
• Vehicles that fail to meet the minimum distance target: zero points.
• Each vehicle with an intact egg: 20 points.
• Each vehicle with a cracked egg: 5 points.
• Failure to pass the side impact test: zero points.
• All vehicles must be operated in a safe manner. Explosive, pyrotechnic, or similar devices are
disqualified, and a 40-point deduction will be assessed to the team if they are brought to the
challenge.
On the challenge day, vehicles will be weighed, measured, classified, and logged in prior to
the test program. Heat times will be assigned in advance so each group knows when their fleet will
run.
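The point structure above reduces to a short scoring routine. The Python sketch below is not part of the challenge handout; the point values come from the bonus and penalty list, but how in-between distances are scored (for example, three times the standard) and whether a vehicle that fails the distance trial can still earn egg points are assumptions noted in the comments.

import math

QUALIFYING_YARDS = 100   # minimum distance a vehicle must travel

def vehicle_points(distance_yards, egg):
    """Score one vehicle; egg is 'intact', 'cracked', or 'broken'."""
    if distance_yards < QUALIFYING_YARDS:
        return 0  # assumption: a vehicle failing the distance trial earns nothing
    doublings = int(math.log2(distance_yards / QUALIFYING_YARDS))
    points = 5 + 5 * doublings   # 1x -> 5, 2x -> 10, 4x -> 15, 8x -> 20, ...
    return points + {"intact": 20, "cracked": 5, "broken": 0}[egg]

def company_score(fleet):
    """fleet is a list of (category, distance_yards, egg) tuples for one section."""
    score = sum(vehicle_points(d, egg) for _, d, egg in fleet)
    if not {"economy", "standard", "SUV"} <= {cat for cat, _, _ in fleet}:
        score -= 50              # penalty for missing a vehicle category
    return score

# Example fleet: a 4x-distance economy car with an intact egg, a standard car
# with a cracked egg, and an SUV that failed to qualify.
print(company_score([("economy", 400, "intact"),
                     ("standard", 120, "cracked"),
                     ("SUV", 80, "intact")]))   # prints 45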
4.2.3 THE RULES
Companies consist of a complete section. Individual corporate divisions (groups) consist of two
or three students. Single-person entries and teams greater than three students are not allowed.
Remember, you are working as a company so internal communication is encouraged.
The sole source of power for the vehicle is a 4.5 volt electric motor provided in a kit to your
group. Each kit includes the motor, gear set, and battery box (Figure 4.2).
Figure 4.2: Motor, battery box, and gears provided.
(Note: Jameco http://www.Jameco.com has motors and gears, pictured above, and provides
torque/speed curves for the motors. Electric motors and gears are available from a number of web sites
including Edmunds Scientific.)
Each team must maintain a design notebook (see Chapter 2).
No team shall spend more than $15.00 for supplies and equipment to manufacture the vehicle.
It is OK to use free stuff. The definition of “free” is that it has no commercial value. In short,
the instructor can elect to keep it or throw it in the trash following the competition. Modified
prefabricated cars are automatically disqualified. “Rented” or “borrowed” materials are not allowed.
This budget can be adjusted if motors and gears are not provided.
Vehicles will be weighed, measured, and logged in prior to the test program.
At your appointed time, take your vehicle to the distance trial station. Upon completion of
the distance trials, remove the batteries and place them in the recycling box. Take your vehicle to
the Safety Test station.
Vehicle fabrication:
• The vehicle is to be constructed using only 1/4-inch-thick foamcore board, engineering cal-
culation paper, and white (Elmer's) or thermoplastic (hot-melt) glue. The vehicle must fit
within an envelope that is 10 inches long, 4 inches wide, and 3 inches tall. Place a mark on
the centerline of the driver location on your vehicle. This corresponds to approximately 1:18
scale of an actual car.
• A box having these inside dimensions will serve as a template to verify compliance of the vehicle
size. To meet the impact criteria, the floor of the vehicle must be at least 1/2 inch above the
ground level (Figure 4.3). A 1/2-inch-thick block must pass freely beneath the vehicle.
• Wheels, motor mounts, axles, and gear shafts may be metal or plastic as appropriate. The
vehicle must be hollow; however, you can consider the use of transverse elements for the
firewall and floor bracing.
• You may consider stiffening elements at the floor, doors, and doorposts, and strategic placement
of the battery box. No seats or other accoutrements are required; however, the "sheet metal"
parts of the cars should not be so flimsy that the aesthetics of the car suffer.
• The car body should be shaped to include the passenger compartment, hood, and trunk area.
Front and rear windshield areas must be open. Doors need not open, but there must be a way
to replace batteries without damaging the vehicle.
• There must be a hole 1 3/4 inches in diameter in one side or the roof for the insertion of the
egg. You must decide what portions of the car may be fabricated from sheet paper and where
the foamcore may be used as strengthening elements.
Vehicle weights with batteries but without the egg are as follows:
• Compact car (1 battery): less than 225 grams
• Standard car (2 batteries): 225 to 300 grams
• SUV (3 batteries): greater than 300 grams
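A small helper can be used to check that a prototype's mass matches its intended battery count. The sketch below is ours, not part of the handout; it assumes the 225-gram boundary belongs to the standard class (the list gives "less than 225" for the compact car and "225 to 300" for the standard car) and uses the earlier "economy, standard, SUV" names, where the weight list calls the smallest class a compact car.

def vehicle_class(mass_grams):
    """Classify a vehicle (batteries included, egg excluded) by its mass in grams."""
    if mass_grams < 225:
        return "economy"    # the "compact" class: 1 AAA battery
    if mass_grams <= 300:
        return "standard"   # 2 AAA batteries
    return "SUV"            # 3 AAA batteries

def batteries_allowed(mass_grams):
    return {"economy": 1, "standard": 2, "SUV": 3}[vehicle_class(mass_grams)]

print(vehicle_class(210), batteries_allowed(210))   # economy 1
print(vehicle_class(320), batteries_allowed(320))   # SUV 3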
The distance criteria:
• The vehicle and driver (the egg) will be placed at the starting line. The vehicle is started by
turning on the power. No push starts are allowed. The total distance traveled around the track
will be measured. Teams may redirect the cars to keep them in their lanes.
• To qualify, a vehicle must travel a minimum of 100 yards.
• Masking tape lines on the floor will indicate the qualifying distance and bonus point locations.
The vehicle safety criteria test:
• Side impact safety tests are conducted with the apparatus shown in Figure 4.3.
Figure 4.3: Side impact safety test apparatus and vehicle envelope.
Once each car has completed its distance trial, the batteries are removed, placed in the recycle
bin, and the vehicle is taken to the impact test machine.
• The egg is reinstalled and the car placed in the impact testing machine with the centerline
aligned with the hammer. The egg is placed loosely in the vehicle. "Seatbelts" and "airbags"
are permitted if they are in the car during the distance trial. The hammer handle is moved to
the horizontal position and then released.
• An egg is deemed to have survived if there are no cracks in the shell. Therefore, careful removal
of the egg after the impact test is important.
4.2.4 SAFETY
The objective of the challenge is to foster engineering creativity and cooperation. The design group
is responsible for ensuring safety of participants and spectators during the challenge. Groups using
any feature deemed dangerous by the judges may be asked at any time to prepare a safety plan or
suitably modify the design before continuing in the challenge. Offending designs may be disqualified
at the discretion of the faculty member or peer assistants. Use of pyrotechnic or similar devices is
strictly prohibited. Any questions regarding safety may be directed to your instructor.
Figure 4.4: Side impact test execution. (a) Test setup; (b) Results.
4.2.5 CHALLENGE ORGANIZATION
Early organization:
• Identify a test facility track and schedule the design challenge day.
• Identify a practice area and setup.
Preparation for challenge day:
• Prepare an overall summary score spreadsheet.
• Prepare data sheets and coordinate individual score sheets with the summary spreadsheet.
• Prepare compliance box.
• Build the impact test hammer.
• Prepare a recycling box for used batteries.
• Draw lots for the section challenge times.
• Prepare a press release if appropriate.
• Prepare a descriptive poster if the challenge is in a public area.
Challenge day: Peer assistant assignments:
• Set up a registration table: log in each group, weigh and measure the vehicle, score the design
notebooks, and provide the individual data sheet: Typically, four assistants.
• Distance qualifications: Typically, two assistants. The assistants lay out the qualification and
bonus lines and confirm the distance traveled.
• Side impact test: Typically, two assistants. The assistants confirm the outcome, sign off on the
data sheet, and assure each group completes its cleanup.
• Data recording: Students turn in data sheets and one or two assistants enter the data into the
summary spreadsheet. One person may have to go back to the other activities to confirm data,
so two people are preferred.
• Oversee the site clean-up: all.
Following the challenge:
• Provide the summary data sheet to all instructors.
• Arrange any awards, e.g., pizza party for the best section.
4.2.6 CONCLUDING COMMENTS
This challenge has been very successful. About 80 percent of the vehicles make the distance require-
ment and approximately 25 percent of the vehicles pass the entire challenge. One vehicle traveled
nearly 1,000 yards. As seen in the photos, the side impact test is extremely popular. Aligning the
hammer to the level position, so there is no excess energy and no “cutting corners,” is a valuable
experience, especially with the competitors watching to keep everything fair.
4.3 DUNEBUGGY DASH
4.3.1 FACILITIES
This challenge requires an obstacle course. As seen below, the challenge was completed in the Civil
Engineering Sediment Transport facility lab. A dirt road, grassy strip, gravel pile, or similar terrain
is acceptable. The challenge should require the students to consider the effects of foreign materials
getting into critical parts of the vehicle and affecting performance. The challenge is modeled after
the Mars Rover, so lightweight motors are emphasized.
4.3.2 CHALLENGE
This challenge requires construction of an electric motor powered dune buggy that can drive across
the sediment transportation laboratory “sandbox” shown below in Figure 4.5.
Figure 4.5: Challenge course. (a) Spring; (b) Fall.
4.3.3 THE RULES
• Teams consist of three or four students. Single and two-person entries are not allowed. This
size limitation is partially based on the size of the lab where the challenge will take place. If
space is available, two- and three-person teams are preferred.
• The sole source of power for the vehicle is a 4.5 volt electric motor available from a kit in the
Dean's office. The price is $5.00 and includes the motor, gear set, and battery box (Figure 4.2).
(Note: The motors provided in the kit are underpowered and require the students to adjust
the gear ratios for successful results. The kits cost more than $5.00 so the project is partially
subsidized. The motors are required; otherwise students will use commercial toy dune buggy
motors, which are far more effective.)
• Each team must maintain a design notebook (see Chapter 2).
• No team shall spend more than $20.00 for supplies and equipment to manufacture the dune
buggy. This includes the $5.00 for the motor so each group has a $15.00 operating budget for
other materials. It is OK to use free stuff. The definition of "free" is that it has no commercial
value. In short, the instructor can elect to keep it or throw it in the trash following the
competition. "Rented" or "borrowed" materials are not allowed.
• The vehicle will be placed on the sand with the back of the buggy touching the south wall.
The vehicle is started by turning on the power. No push starts are allowed.
• No one is allowed on the sand. A peer assistant is on a movable walkway to collect stalled or
stranded vehicles.
• All vehicles must be operated in a safe manner.
Recognition will be given for the following categories:
• Rube Goldberg Award–most complicated design that actually works.
• Students' Choice Award.
• Instructor and Peer Assistant Award in each section.
• The section with the highest average distance traveled by all vehicles in the section receives a
pizza party.
4.3.4 CHALLENGE ORGANIZATION
Early Organization:
• Identify a test facility and schedule the design challenge day.
• Identify a practice area and setup.
Preparation for challenge day:
• Prepare summary score spreadsheet.
• Prepare data sheets and coordinate individual score sheets with the summary spreadsheet.
• Prepare a recycling box for used batteries.
• Draw lots for the section challenge times.
• Prepare a press release if appropriate.
• Prepare a descriptive poster if the challenge is in a public area.
Challenge day: Peer assistant assignments
• Set up a registration table: log in each group, verify the motors, and provide the individual
data sheet: Typically, two assistants.
• Distance qualifications: Typically, two assistants. The assistants confirm the distance traversed.
• Recovery team: Typically, two assistants. The assistants recover stalled or damaged vehicles.
• Data recording: One or two assistants enter the data into the summary spreadsheet. One
person may have to go back to the other activities to confirm data, so two people are preferred.
• Oversee the site clean-up: all.
Following the challenge:
• Provide the summary data sheet to all instructors.
• Arrange any awards, e.g., pizza party or gift certificates for best categories.
4.3.5 CONCLUDING COMMENTS
An underlying assumption in this challenge was for the students to experience the difficulty of
working in “dirty” environments. The sand in the sediment basin was ideal for this. Background for
this challenge included discussion of the Mars Rovers.
About half the vehicles bogged down before moving five feet. Gearing was also critical since
the motors were high rpm and low torque. Some groups elected to use balloons to lift the vehicle
and the motor to drive a propeller to move it across the test area. Thus, the challenge was changed from
“traverse” the facility to “drive across” the facility. One group built a catapult to propel the vehicle
across. After almost hitting a student on the opposite side, the design was judged “unsafe.” Lastly,
recognition took the form of in-class awards for the different categories. The instructors and peer
assistants made comments on the evaluation sheets and these formed the basis of the selections.
4.4 FIRST FLIGHT: FLY A FOAM AIRPLANE 100 YARDS
Figure 4.6: Foam airplane flight preparation.
4.4.1 FACILITIES
This project requires a large open field. We selected the indoor football practice facility for two
reasons. First, by October, Wyoming weather conditions can be snowy and an outside activity may
be compromised. Second, the average wind in Laramie in October is in the 10–30 mph range and is
highly variable. This creates a potentially unfair comparison condition. By moving inside we provided
a stable environment that comes with a very nice graduated scale on the floor.
4.4.2 CHALLENGE
This challenge is to modify a foam plane to fly 100 yards—the length of a football field. The challenge
will be held in the football indoor practice facility.
From Wikipedia, the definition of an airplane: a fixed-wing aircraft is an aircraft capable of
flight using wings that generate lift due to the vehicle’s forward airspeed and the shape of the
wings.
Each section will function as a team. A team consists of groups of two to three students. Each
group will receive one foam model plane. The basic plane is to be redesigned, modified, and tested
to maximize the number of planes in the team that make the required distance. In that regard, the
team must decide what constitutes its best selection of power and flight strategies. Individual groups
present their ideas in class and receive input from the team.
• Each group works to design a plane to optimize the team response in the challenge.
• A successful plane will fly 100 yards from end zone to end zone.
• Each group must maintain a design notebook (see Chapter 2).
• One plane is supplied to each group and is classified as “free.”
• No group shall spend more than $30.00 for supplies and equipment to modify the plane. It is
OK to use “free stuff.” The definition of “free stuff” is that it has no commercial value. In
short, the instructor can elect to keep it or throw it away following the challenge. “Rented”
or “borrowed” materials are not allowed. Only the cost of the final plane need be included in
the budget. If a plane is destroyed in testing, the cost of a new plane must be included in the
budget.
• All planes must be designed, fabricated, and tested prior to submittal.
4.4.3 THE RULES
• The plane may be powered by any safe device including elastic bands, launchers, electric motors,
or other mechanical contraptions. For safety reasons, no gasoline or model airplane engines or rocket
engines (e.g., Estes) are allowed. Part of the challenge is for the team to optimize flight selection
and design.
• The plane must be launched behind the goal line and attempt to cross the opposite goal line.
As in football, if just the nose crosses the line, it is a success. Length is measured to the final
position of the nose of the aircraft. (Bounces on the ground count.)
• All components of the original plane must be used in the challenge; however, decals and stickers
are optional.
• Following initial testing, the group may elect to construct a new plane based on the performance
of the trial runs.
• By definition, planes fly by lift. They are not dragged, towed, pulled, fastened to a wire, or
suspended from balloons.
• On the challenge day, the team will have ten minutes to fly the length of the field.
Scoring
Each team may fly as many planes as there are groups. The team score is the average distance traversed
by the best flight of each group. At least three groups must fly to qualify. Thus, if six planes fly the
full length of the field and two fly 20 yards, the score is 80 points: (6 flights x 100 yards + 2 x 20 yards)/8.
In addition, each team receives additional points based on the design notebook grade.
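The team-score arithmetic above is easy to automate when the summary spreadsheet is filled in. The
following is a minimal Python sketch; the function name and the numbers simply restate the worked
example in the text and are not part of the challenge handout.

def team_score(best_flights_yd):
    """Average of each group's best flight distance, in yards.

    best_flights_yd: one entry per group, the best distance (yards) that group flew.
    Returns None if fewer than three groups flew, since the team would not qualify.
    """
    if len(best_flights_yd) < 3:
        return None
    return sum(best_flights_yd) / len(best_flights_yd)

# Worked example from the text: six full-field flights and two 20-yard flights.
print(team_score([100] * 6 + [20] * 2))  # -> 80.0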
Awards
The team with the highest overall score will receive a pizza party during the final class session.
4.4.4 SAFETY
The objective of the challenge is to foster engineering creativity and cooperation. The design group
is responsible for ensuring safety of participants and spectators during the challenge. Groups using
any feature deemed dangerous by the judges may be asked at any time to prepare a safety plan or
suitably modify the design before continuing in the challenge. Offending designs may be disqualified
at the discretion of the faculty member or peer assistants. Use of pyrotechnic or similar devices is
strictly prohibited. Any questions regarding safety may be directed to your instructor.
4.4.5 CHALLENGE ORGANIZATION
Early Organization:
• Identify a test facility and schedule the design challenge day.
• Identify a practice area and setup. This is often the same facility, and scheduling the facility
for two activities is a critical endeavor.
Preparation for challenge day:
• Prepare summary score spreadsheet.
• Prepare data sheets and coordinate individual score sheets with the summary spreadsheet.
• Prepare a recycling box for used batteries.
• Draw lots for the section challenge times.
• Prepare a press release if appropriate.
• Prepare a descriptive poster if the challenge is in a public area.
Challenge day: Peer assistant assignments:
• Set up a registration table: log in each group, inspect for safety issues, grade notebooks, and
provide the individual data sheet: Typically, two assistants.
• Distance qualifications: Typically, two assistants. The assistants confirm the distance flown.
• Data recording: Two or three assistants verify the data on the summary spreadsheet. One
person may have to go back to the other activities to confirm data, so two people are preferred.
• Oversee site clean-up: all.
Following the challenge:
• Provide the summary data sheet to all instructors.
• Arrange awards, e.g., pizza party or gift certificates for best categories.
4.4.6 CONCLUDING COMMENTS
Figure 4.7: Plane launch.
Class lectures addressed power, lift, speed, drag, and launch options. The longest flight was
just over 60 yards. The principal difficulty was that a high-speed launch raised the nose of the plane to a
stall position, followed by the plane crashing.
There were three weeks between the trial and the final design. Most students used this time to
improve their designs. Some of the planes crashed during testing and students did not rebuild them
for the final challenge. This again is a function of the scoring and its emphasis on participation, not
success. At least one group used CO2 cartridges for propulsion. The group was required to present
a safety plan to assure that the cartridges were secured and controlled when fired.
Two groups tried to tow their planes across the field. They were disqualified. Several groups
tried using balloons to lift the plane. The drag from the balloons generally resulted in the plane
flying in circles. The definition of an airplane and the requirement for the plane to fly by its own lift
should limit balloons in the challenge.
This challenge included the value of the design notebooks in the overall scoring. The inclusion
was to place additional emphasis on the value of recording progress.
4.5 HACKYSACK FLIP
4.5.1 FACILITIES
The number of available tracks is a limiting feature for planning the amount of time needed to
complete the challenge. Three tracks were constructed for this challenge. The challenge was held
on the stage of one of the auditoriums on campus. This provided a waiting and viewing area for
students and guests. This challenge is well suited for public viewing.
4.5.2 CHALLENGE
This challenge requires each section to form its own company to manufacture a fleet of electric motor
powered vehicles that run on a prefabricated track and can perform the tasks listed below. Each
section will enter 8–10 vehicles. Scoring will be completed by averaging the total number of points
from each vehicle’s best run.
Each section will function as a team. A team consists of groups of two to three students.
Vehicles are to be designed, built, and tested to optimize the team’s point score. In that regard, the
team must decide what constitutes its best composition of vehicles. Individual groups present their
ideas and receive input from the team.
Each group designs and constructs a vehicle capable of performing the following tasks:
1. Pass through the start gate (S),
2. ascend a 20-degree hill,
3. propel a “hackysack” bean bag through the hole in a vertical wall (W) at the top of the hill,
4. knock down a flag or flags located at the top of the hill, and
Figure 4.8: Challenge day setup.
5. descend the hill and round the exit run-out to the finish line.
The points scored on the best run will be recorded.
The Hackysacks
Details of the Hackysacks are given in Figure 4.9. They weigh 37 to 39.8 grams and are approximately
2 1/2 in. in diameter.
The Track
Figure 4.10 shows the approximate dimensions of the track. The drawing is not to scale. The track
width dimension may vary by ± 0.5 inches at any point. The side rails are made from 2.5-inch tall
pressboard. The carpet is a standard, commercial grade. Contestants will approach the right-hand
side of the ramp as seen from the side view (see S below).
The “top of the hill” zone is defined by the two lines (T). A dowel extends 1” into the track
opposite the “Hackysack” wall (W). The Hackysack must be launched through the hole in the
wall. The diameter of the hackysack hole is 5 in. The flags, each consisting of a dowel extending
approximately 1.4 inches above the track wall (see illustration), are mounted on either side of the
track at the centerline. You may lower either or both flags as you exit the top of the hill. A vehicle’s
flag will pivot only in the direction of the forward motion of the vehicle. Design should include
consideration of “high centering” at the start and top of the hill climb. One track is available for
testing cars prior to the challenge.
Figure 4.9: Hackysack details.
4.5.3 THE RULES
Vehicle Design Specifications
1. The complete vehicle must be designed to fit inside a 6-inch cube. The complete vehicle
is defined by all its parts. Appendages, such as an arm, may extend beyond this limit once
activated by passing through the vehicle portal, P, but cannot be activated before the start of
the run.
2. The vehicle must remain intact throughout the competition, that is, it may not jettison any
unattached part, and may not divide into two or more separate sections or pieces. All parts
must remain attached to the vehicle. For the purpose of this rule, the definition of “attached”
is meant to exclude attachment by string, wire, or other flexible tether.
3. The weight of the vehicle, including batteries, must not exceed 1.0 kg (2.2 pounds).
4. Peer assistants will supply competition “Hackysacks” at each ramp. “Hackysack” specifications
are given in Figure 4.9.
5. The vehicle must be stationary prior to the start, and it cannot be pushed by a group member
as part of the start. After the start signal, the vehicle’s propulsion system may be activated
using the switch on the battery box, but cannot be activated prior to the start.
Figure 4.10: Schematic test track.
6. In the execution of its tasks, the vehicle may not damage the track, its walls, or the roadway
carpet.
7. Onboard computing devices, such as microcontrollers, are not permitted.
Power
Power to propel the vehicle and to run any onboard activation or electronic devices is derived using
two AAA 1.5 V batteries and one 1.5–4.5 volt DC motor supplied in the parts kit (Figure 4.2).
Batteries may be connected to the vehicle in any configuration. Supplemental mechanical power
may be derived from such devices as springs, mousetraps, balloons, and rubber bands. Compressed
gas cylinders, chemical reactions, or combustion of any type are not allowed. Mercury switches of
any type are not allowed.
Challenge Day Procedures
Team members will have two minutes to reach the starting position after being called for a round. At
the judges’ discretion, any vehicle not ready after the two minute countdown will forfeit the round
and may be allowed to compete after all other teams have completed their rounds.
A timer will count down the five minutes for each round. Groups can have as many runs as
possible within the five minutes. The run will last until the vehicle crosses the finish line. Contestants
may maintain contact with vehicles prior to “go” but may not touch vehicles during the run interval.
If a group member touches its vehicle before the run has been completed, the run will be considered
incomplete, and the accumulated score to that point will be reduced by two points.
All vehicle wheels must remain within the side rails of the track. Deployed appendages may
extend beyond the side rails after the start, but the tops of the side rails may not be used to support
the vehicle.
Groups may provide a fresh set of batteries at the initiation of their round. The round must
be completed on the fresh batteries. If the batteries die, points accumulated up to the stopping point of the vehicle
will be counted.
Groups may modify vehicles between runs; however, the time limit for the round will be
maintained.
Only the called groups may enter the stage competition area and the vehicle repair area.
Similarly, only team members may request verification of opposing vehicles for compliance with
contest rules and design limits. Spectators are not permitted to make such requests.
Any vehicle compliance challenge must be made to the judges prior to the awarding of points
for a particular round. If a vehicle is challenged and found to violate contest requirements, it may
be disqualified.
Scoring challenges for a particular round must be made by the end of that round of the
competition and may only be made by team members. Resolution of point challenges will be made
at the sole discretion of the judges. Vehicle modification and rerun following disqualification is at
the sole discretion of the professor.
Scoring
A maximum of 9 points will be awarded for each run of the round, as follows:
Ascending Points: One point will be awarded to each vehicle that successfully ascends to the
“top of the hill” of the ramp. To earn “top of the hill” points, the vehicle, and all its parts, must reside
between points (T).
Hackysack Points: One point will be awarded to each vehicle that successfully propels its
hackysack through the hole in the wall. An additional point will be awarded to each vehicle that
propels the entire diameter of its hackysack beyond the (L1) line and two points for the hackysack
completely passing the L2 line.
Descending Points: One point will be awarded to each vehicle that passes, including all its
parts, the bottom of the descending ramp (D). To receive descending points, a vehicle must first earn
ascending points.
Flag Points: One point will be awarded to each flag knocked down during the run.
Run-out Points: Two points will be awarded to any vehicle that completely exits the run-out
curve. Completely exit means that a ruler may be placed between the end of the track and the end
of the vehicle.
Penalty Deductions: Two points will be deducted from the run for a vehicle not completing
the course. The minimum score for a run is zero; penalty points cannot yield negative scores.
Vehicle Mishap: If a vehicle falls off the ramp for any reason or the batteries die, it will retain
points earned for the run prior to the mishap.
Points for the best run are recorded.
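The point rules above can be collapsed into a small bookkeeping helper for the ramp judges. The
Python sketch below is illustrative only; it assumes the L1 and L2 hackysack bonuses are tiers rather
than cumulative, which is the reading that makes the stated 9-point maximum work out.

def run_score(ascended, hole, sack_zone, descended, flags_down, exited_runout, completed_course):
    """Score one run under the Hackysack Flip rules (maximum 9 points).

    sack_zone: None, "L1", or "L2", the farthest line the hackysack completely passed.
    """
    pts = 0
    if ascended:
        pts += 1                          # ascending points
    if hole:
        pts += 1                          # hackysack through the hole
        pts += {None: 0, "L1": 1, "L2": 2}[sack_zone]
    if descended and ascended:            # descending points require ascending points
        pts += 1
    pts += min(flags_down, 2)             # one point per flag, two flags on the track
    if exited_runout:
        pts += 2                          # run-out points
    if not completed_course:
        pts -= 2                          # penalty deduction
    return max(pts, 0)                    # a run cannot score below zero

# A perfect run:
print(run_score(True, True, "L2", True, 2, True, True))  # -> 9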
General Rules
• Each group must maintain a design notebook (see Chapter 2).
• No group shall spend more than $15.00 for supplies and equipment to manufacture its vehicle.
It is OK to use “free stuff.” The definition of “free stuff” is that it has no commercial value.
In short, the instructor can elect to keep it or throw it in the trash following the competition.
Modified prefabricated cars are automatically disqualified. “Rented” or “borrowed” materials
are not allowed.
• Vehicles will be weighed, measured, and logged in prior to the test program.
• Vehicle fabrication: The vehicles must be constructed from scratch, that is, no premade plastic
bodies can be used. While the choice of materials is left to each group, 1/2 inch thick
foamcore board and white (Elmer’s) or thermoplastic (hot-melt) glue have proven to be suitable
construction materials, except for axles and wheels, and are available at the campus store.
Prior to the start of the challenge, groups must conduct a “calibration round” on the test track
and record the results in the design notebook.
No tools will be supplied on the competition day. Teams are expected to bring all necessary
items to repair or modify vehicles during the competition, including spare parts.
After the official start of the competition, only registered student contestants will be allowed
in the competition and work areas. Spectators are welcome to view the competition from the seating
area in the auditorium.
After each run, and prior to leaving the ramp area, groups are responsible for verifying that
point totals have been correctly recorded by the ramp judge and that the challenge area is clean.
Judges are instructed to oversee these checks. One group member will sign each score sheet.
4.5.4 SAFETY
The objective of the challenge is to foster engineering creativity and cooperation. The design group
is responsible for ensuring safety of participants and spectators during the challenge. Groups using
any feature deemed dangerous by the judges may be asked at any time to prepare a safety plan or
suitably modify the design before continuing in the challenge. Offending designs may be disqualified
at the discretion of the faculty member or peer assistants. Use of pyrotechnic or similar devices is
strictly prohibited. Any questions regarding safety may be directed to your instructor.
Students have access to the College of Engineering and Applied Science shops. Sign-up for
shop time is required, and students must have watched the safety practices video presented in class.
4.5.5 CHALLENGE ORGANIZATION
Early Organization:
• Identify a challenge facility area and schedule the design challenge day.
• Fabricate the test tracks.
• Identify a practice area and set up one track for trial runs.
Preparation for challenge day:
• Prepare summary score spreadsheet.
• Prepare data sheets and coordinate individual score sheets with the summary spreadsheet.
• Prepare a recycling box for used batteries.
• Draw lots for the section challenge times.
• Prepare a press release if appropriate.
• Prepare a descriptive poster if the challenge is in a public area.
Challenge day: Peer assistant assignments:
• Move the challenge tracks into place before students arrive.
• Set up a registration table: log in each group, measure vehicles, grade notebooks, and provide
the individual data sheet: Typically, two assistants.
• Run qualifications: Typically, three assistants, one per track. The assistants confirm the points
earned.
• Data recording: One or two assistants enter the data into the summary spreadsheet. One
person may have to go back to the other activities to confirm data, so two people are preferred.
• Oversee site clean-up: all.
Following the challenge:
• Provide the summary data sheet to all instructors.
• Arrange any awards, e.g., pizza party.
4.5.6 CONCLUDING COMMENTS
Figure 4.11: Vehicle testing.
This was the most complex challenge we attempted because of limits imposed by the test
track. The college shop fabricated the tracks. One track was set up in a lab space for students to test
their designs. In the few days prior to the challenge, the room was full.
About 10% of the vehicles completed the challenge. The original challenge had a 30° ramp
and many vehicles had great difficulty making the climb. The ramp has been reduced to 20°
to improve the opportunity for success. Several cars rolled over on the descending ramp. The lower
grade should alleviate this problem. The end runout also proved to be a stumbling block as cars were
getting stuck. An 18 in. radius runout would resolve this issue.
4.6 MOUSETRAP POWERED CAR SLALOM
4.6.1 FACILITIES
A hard floor 25 feet x 25 feet works best; it allows the slalom course to be set up, and the width
allows 6 to 8 lanes to be laid out side by side. Tight carpet also works but the decision should be
made prior to initiation of the challenge as it will affect both distance and steering. A meeting room
in the student union or equivalent provides high visibility and public access.
4.6.2 CHALLENGE
This challenge requires each group to construct a mousetrap-powered car that can negotiate a slalom
course consisting of four 3 3/4-in. square pylons placed in a straight line 4 feet on centers.
4.6.3 THE RULES
Each section will function as a team. A team consists of groups of two to three students. Cars are
to be designed, built, and tested to optimize the team’s point score. In that regard, the team must
decide what constitutes its best composition of cars.
Individual groups present their ideas in class and receive input from the team.
• The team works to design vehicles to optimize the team response in the challenge.
• Each group must design and fabricate at least one car.
• Mousetraps will be supplied in class.
• In addition to the two mousetraps supplied, no group shall spend more than $20.00 for supplies
and equipment to manufacture its car. It is OK to use “free stuff.” The definition of “free stuff”
is that it has no commercial value. In short, the instructor can elect to keep it or throw it in
the trash following the challenge. “Rented” or “borrowed” materials are not allowed.
• All cars must be designed, fabricated, and tested prior to submittal.
• Groups will consist of two or three students. Single-person entries are not allowed.
• The sole source of power for the vehicle is a mousetrap; rattraps are not allowed.
• The vehicle may use one or two mousetraps in any combination for power and/or steering.
• Each group must maintain a design notebook (see Chapter 2).
• The vehicle will be placed at the start line and released by the group members. Each vehicle
must be placed with its longitudinal axis parallel to the course. Wheels may be aligned at the
group’s discretion (see layout in Figure 4.12). Once released, the car cannot be touched by
any group member. The vehicle must cross the centerline before passing the first pylon (see
Figure 4.12).
• Prior to the actual challenge, each car should complete a successful trial run and the distances of
each run should be recorded in the design notebook.
• On the challenge day, the group will have five minutes to make a successful run.
• Vehicle fabrication: The car must be constructed from scratch, that is, no premade units (e.g.,
premade steering mechanisms) can be used. The choice of materials is left to each group.
Figure 4.12: Challenge course and starting configuration.
Scoring
Each group receives 0 to 4 points based on the number of pylons successfully passed. The group
earns an additional 2 points for completing the run and crossing the finish line. The team score is
the average of the group scores.
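The scoring is simple enough to tabulate by hand, but a short illustration may help when building the
summary spreadsheet. The groups and runs in the Python sketch below are made up for illustration.

def group_score(pylons_passed, crossed_finish):
    """0-4 points for pylons successfully passed, plus 2 points for crossing the finish line."""
    return min(pylons_passed, 4) + (2 if crossed_finish else 0)

# Illustrative team of four groups: (pylons passed, crossed the finish line?)
runs = [(4, True), (3, False), (4, True), (2, False)]
scores = [group_score(p, f) for p, f in runs]
print(scores, sum(scores) / len(scores))  # [6, 3, 6, 2] and a team score of 4.25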
Developing a Test Program
Engineering design and development requires validation and testing. Your notebook must clearly
identify a development schedule, test dates, and test results. Modification to the design after each test
must be documented. The best run and points must be clearly recorded and dated in the notebook.
4.6.4 SAFETY
The objective of the challenge is to foster engineering creativity and cooperation. The design groups
are responsible for ensuring safety of participants and spectators during the challenge. Groups using
any feature deemed dangerous by the judges may be asked at any time to suitably modify the design
before continuing in the challenge. Offending vehicles may be disqualified at the discretion of the
professor. Use of pyrotechnic or similar devices is strictly prohibited.
4.6.5 CHALLENGE ORGANIZATION
Early Organization:
• Identify a challenge facility track and schedule the design challenge day.
• Identify a practice area and setup. The initial practice area can be a single track in a classroom.
Preparation for challenge day:
• Prepare summary score spreadsheet.
• Prepare data sheets and coordinate individual score sheets with the summary spreadsheet.
• Fabricate pylons.
• Draw lots for the section challenge times.
• Prepare a press release if appropriate.
• Prepare a descriptive poster if the challenge is in a public area.
Challenge day: Peer assistant assignments:
• Set up pylons, start and finish lines prior to the students arriving.
• Set up a registration table: log in each group, verify the power source, grade notebooks, and
provide the individual data sheet: Typically, two assistants.
• Run qualifications: Typically, four assistants. The assistants confirm the course was successfully
traversed.
• Data recording: One or two assistants enter the number of points into the summary spreadsheet.
One person may have to go back to the other activities to confirm data, so two people
are preferred.
• Oversee site clean-up: all.
Following the challenge:
• Provide the summary data sheet to all instructors.
• Arrange any awards, e.g., pizza party or gift certificates for best categories.
4.6.6 CONCLUDING COMMENTS
This challenge is deceptive. Many students have participated in mousetrap cars in high school.
The autonomous steering, however, requires closer coordination of forward progress and sinusoidal
action. At least two groups in the challenge used mousetrap-powered cars and radio-controlled
steering. One student group brought in an RC controller with the comment that sabotage is not
disallowed! The rule eliminating prefabricated steering is intended to exclude the most egregious
use of RC controls.
This challenge needed a gross of mousetraps. No one in town stocked that many, so they were
special ordered and provided to the teams. This can be a professional student group fundraiser.
4.7 THE GREAT WALL OF CARPET
Figure 4.13: Carpet climb challenge.
4.7.1 FACILITIES
This challenge works best with a two-story atrium and carpet hung from the upper floor. The
particular challenge was held in the atrium of the campus library. Two similar types of carpet were
used. The carpet strips were approximately 2 feet wide. The carpets are hung so only the fabric side
is available for the challenge.
4.7.2 CHALLENGE
This challenge requires construction of an autonomous robotic device that can climb a carpet draped
in the atrium of a two-story building. Each group will design and construct a robot according to the
rules laid out below. To qualify, a robot must climb at least two vertical feet.
4.7.3 THE RULES
Each section will function as a team. A team consists of groups of two to three students. Robots are
to be designed, built, and tested to optimize the team’s point score. In that regard, the team must
decide what constitutes its best composition of robots.
Individual groups present their ideas in class and receive input from the team.
• The team works to design vehicles to optimize the team response in the challenge.
• Each group must design and fabricate one robot.
• Each group must maintain a design notebook (see Chapter 2).
• No group shall spend more than $40.00 for supplies and equipment to manufacture its robot.
It is OK to use “free stuff.” The definition of “free” is that it has no commercial value. In short,
the instructor can elect to keep it or throw it in the trash following the competition. “Rented”
or “borrowed” materials are not allowed.
• All robots must be designed, fabricated, and tested prior to submittal.
• The robot must be constructed from scratch, that is, no premade units can be used. The choice
of materials is left to each group.
• Groups will consist of two or three students. Single-person entries and groups of more than
three students are not allowed.
• The robot may be powered by any device: elastic bands, electric motors, or other mechanical
contraptions. Part of the challenge is for the team to balance very small, lightweight climbers
against heavier and more powerful alternatives.
• The entire robot must make the climb; however, the robot may work in discrete elements if
needed, and the robot must grip the carpet to climb.
• The robot will be placed at the start line and released by the group members. Each robot must
function autonomously, that is, it cannot be touched by a group member while operating. If
touched, it must be restarted.
• Prior to the actual challenge, each robot should complete a successful qualifying run. A survey
rod will be provided and the height for each run recorded.
• On the challenge day, the robot will have five minutes to make a successful run.
• Special note: there may be two different carpet types. You are not assured which one you
will get on the challenge day.
Figure 4.14: Approximate challenge course and starting configuration.
Developing a Test Program
Engineering design and development requires validation and testing. Your notebook must clearly
identify a development schedule, test dates, and test results. Modification to the design after each
test must be documented. The runs and climb height must be clearly recorded and dated in the
design notebook.
4.7.4 SCORING
The best height of the challenge run will be recorded. If the robot does not climb in the challenge,
a challenge height of zero will be recorded. The team with the highest average climb height divided
by the average cost will be recognized as the highest climbing/most efficient team on campus!
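A quick way to tabulate the height-per-cost figure of merit described above is sketched below in
Python; the climb heights and robot costs are assumed numbers for illustration, not challenge data.

def team_figure_of_merit(climb_heights_ft, robot_costs_usd):
    """Average climb height divided by average robot cost (feet per dollar).

    Robots that did not climb contribute a height of zero, as the rules specify.
    """
    avg_height = sum(climb_heights_ft) / len(climb_heights_ft)
    avg_cost = sum(robot_costs_usd) / len(robot_costs_usd)
    return avg_height / avg_cost

# Illustrative numbers only: four robots, one of which failed to climb.
print(round(team_figure_of_merit([8.0, 2.5, 0.0, 5.0], [35.0, 18.0, 40.0, 22.0]), 3))  # -> 0.135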
To start each run, the judge will indicate start, and the timing clock will begin. The test will
last until the run is complete or time expires. Groups may maintain contact with robots prior to
the robot start but may not touch the robot during the run. A group member may touch the robot
to prevent damage but the run must then be restarted. The run record must be completed without
manual assistance or the run will be disqualified.
Only team members may request verification of opposing robots for compliance with contest
rules and design limits. Spectators are not permitted to make such requests.
Any challenge must be made to the judges prior to the awarding of points for a particular
test. If a robot is challenged and found to violate contest requirements, it will be disqualified and a
height of zero recorded.
4.7.5 SOME REFERENCES
If you have never seen a climbing robot, consider looking on Bing or Google. There are a number
of good sites, many of which are far more complex than needed for this challenge. The description
of these robots will give you an idea of some of the design considerations to be included in your
project.
4.7.6 SAFETY
The objective of the challenge is to foster engineering creativity and cooperation. The design group
is responsible for ensuring safety of participants and spectators during the challenge. Groups using
any feature deemed dangerous by the judges may be asked at any time to prepare a safety plan or
suitably modify the design before continuing in the challenge. Offending designs may be disqualified
at the discretion of the faculty member or peer assistants. Use of pyrotechnic or similar devices is
strictly prohibited. Any questions regarding safety may be directed to your instructor.
4.7.7 CHALLENGE ORGANIZATION
Early Organization:
• Identify a challenge facility and schedule the design challenge day.
• Identify a practice area and set up one of each type of carpet.
Preparation for challenge day:
• Procure the challenge carpet. As the rules note, we had two different types of carpet so the
students did not know which carpet they would climb until the challenge.
• Prepare summary score spreadsheet.
• Prepare data sheets and coordinate individual score sheets with the summary spreadsheet.
• Draw lots for the section challenge times.
• Prepare a press release if appropriate.
• Prepare a descriptive poster if the challenge is in a public area.
Challenge day: Peer assistant assignments:
• Hang carpet and arrange survey rods.
• Set up a registration table: log in each group, grade notebooks, and provide the individual data
sheet: Typically, two assistants.
• Distance qualifications: four assistants. The assistants confirm the distance was climbed. We
set up four climbing stations and brought over survey rods to assist with measuring the height
climbed. Place masking tape at the challenge finish.
• Data recording: One or two assistants enter the data into the summary spreadsheet. One
person may have to go back to the other activities to confirm data, so two people are preferred.
• Oversee site clean-up: all.
Following the challenge:
• Provide the summary data sheet to all instructors.
• Arrange any awards, e.g., pizza party.
4.7.8 CONCLUDING COMMENTS
Only two of 131 climbers made it to the top, and one design used helium-filled balloons to lift the
robot. These results made this the most difficult challenge in terms of success. Nothing in the rules
precluded using the edge of the carpet, and several groups used this option. One group made a
catapult with a three-pronged fishing hook. The design launched the hook then used a winch to pull
itself up. It performed poorly and required a safety plan for throwing the hook. About 30 years ago
the University of Utah had a similar challenge using small elastic band-powered paperclip climbers.
Figure 4.15: Edge climber.
4.8 UNDERWATER RECOVERY VEHICLE
4.8.1 FACILITIES
The challenge requires use of the university swimming pool. In addition to gaining access to the
pool, lifeguards are hired to assure safety. No students are allowed in the pool, hence the requirement
for the robots to be tethered. Local pool rules and depths must be incorporated in the rules.
4.8.2 CHALLENGE
This challenge is to construct a tethered robotic device that can retrieve a marker from the bottom of
the swimming pool according to the rules laid out below. The final challenge will be in the shallow
end of the pool, and the pool will not be available for practice prior to the challenge. A stock watering
tank will be set up and available for practice.
Figure 4.16: The beginning of the underwater challenge.
4.8.3 THE RULES
Each section will function as a team. A team consists of groups of two to three students. Robots
are to be designed, built, and tested to optimize the number of retrieved weights. In that regard, the
team must decide what constitutes its best selection of robots; however, each team must have at least
two underwater robots (submarines) and two surface robots.
Individual groups present their ideas in class and receive input from the team.
• The team works to design vehicles to optimize the team response in the challenge.
• Each group must design and fabricate one robot.
• A successful robot will retrieve a designated small weight from the bottom of the campus pool
by picking up the hook on the weight (Figure 4.17). A steel washer will be placed on the
magnet for the challenge. (Note: the washer prevents the magnetic base from attaching to a
pool drain.)
Figure 4.17: Marker to be retrieved from pool (weight approximately 30 grams).
• Each group must maintain a design notebook (see Chapter 2).
• No group shall spend more than $50.00 for supplies and equipment to manufacture its robot.
It is OK to use “free stuff.” The definition of “free stuff” is that it has no commercial value.
In short, the instructor can elect to keep it or throw it in the trash following the challenge.
“Rented” or “borrowed” materials are not allowed.
• All robots must be designed, fabricated, and tested prior to submittal.
• Robot fabrication: The robot must be constructed from scratch, that is, no premade boats may
be used, but premade components are allowed. A surface vehicle with a winch is permitted.
The choice of materials is left to each group.
• The robot may be powered by any device: elastic bands, electric motors, or other mechanical
contraptions. Part of the challenge is for the team to optimize robot selection and design.
• The robot must travel to the designated weight, connect to the hook on the weight, and bring
the weight back to the surface at the pool edge.
• The robot must be tethered to the shore. A land controller, which may contain the power
supply, navigation, and ballast equipment, can serve as the tether; however, the tether may not
be used to pull or drag the robot or the marker string. Any power supply must be internal to
the robot or the controller, i.e., it may not plug into a wall outlet, and no large external batteries are
allowed.
• The robot will be placed at the edge of the pool and released by the group members. Each
robot must function remotely, that is, it cannot be touched by a group member while operating
other than from the control panel. If touched, pulled, or dragged, it must be recovered and
restarted.
• Prior to the actual challenge, each robot must complete a successful run in the test tank. Each
group should document the robotic performance: e.g., ballast and steering control, power,
ability to lift the weight.
• On the challenge day, the robot will have ten minutes to make a successful recovery.
Developing a Test Program
Engineering design and development requires validation and testing. Your notebook must clearly
identify a development schedule, test dates, and test results. Modification to the design after each
test must be documented. The final runs must be clearly recorded and dated in the notebook and
the final results reported on the summary page.
For example, consider buoyancy design. This can be done with a flotation device or by using
downward propulsion. Several trials may be required to select and test a suitable design.
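As a rough first pass on the buoyancy trade-off, Archimedes' principle gives the net vertical force on
a fully submerged hull before any tank testing is done. The mass and displaced volume in the Python
sketch below are assumed values, purely for illustration.

WATER_DENSITY = 1000.0  # kg/m^3; pool water is close to fresh water
G = 9.81                # m/s^2

def net_vertical_force(robot_mass_kg, displaced_volume_m3):
    """Buoyant force minus weight for a fully submerged hull; positive means the robot floats."""
    buoyancy = WATER_DENSITY * displaced_volume_m3 * G
    weight = robot_mass_kg * G
    return buoyancy - weight

# Illustrative numbers: a 1.2 kg robot displacing 1.5 liters of water.
print(round(net_vertical_force(1.2, 1.5e-3), 2))  # -> 2.94 N upward, so it floats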
4.8.4 SAFETY
The objective of the challenge is to foster engineering creativity and cooperation. The design group
is responsible for ensuring safety of participants and spectators during the challenge. Groups using
any feature deemed dangerous by the judges may be asked at any time to prepare a safety plan or
suitably modify the design before continuing in the challenge. Offending designs may be disqualified
at the discretion of the faculty member or peer assistants. Use of pyrotechnic or similar devices is
strictly prohibited. Any questions regarding safety may be directed to your instructor.
The pool deck is wet and slippery. No shoes are allowed to be worn on the deck. Because the
deck is wet, plug-in power is prohibited.
4.8.5 CHALLENGE ORGANIZATION
Early Organization:
• Identify the pool and schedule the design challenge day.
• Arrange for lifeguards.
• Identify a practice area and setup.
Preparation for challenge day:
• Prepare summary score spreadsheet.
• Prepare data sheets and coordinate individual score sheets with the summary spreadsheet.
• Draw lots for the section challenge times.
• Prepare a press release if appropriate.
• Prepare a descriptive poster if the challenge is in a public area.
Challenge day: Peer assistant assignments
• Set up a registration table: log in each group, verify the power supply rules, assign a target
number, and provide the individual data sheet: Typically, two assistants.
• Check design notebooks: Typically, one or two assistants.
• Recovery targets: Typically, two assistants. The assistants confirm the target numbers are visible
and replace them in the pool after recovery.
• Data recording: One or two assistants enter the data into the summary spreadsheet. One
person may have to go back to the other activities to confirm data, so two people are preferred.
• Oversee site clean-up: all.
Following the challenge:
• Provide the summary data sheet to all instructors.
• Arrange any awards, e.g., pizza party.
4.8.6 CONCLUDING COMMENTS
There is nothing like water to compromise the best design, and this challenge amply demonstrates
that fact. Three strategies emerged. Submarines trawled the bottom. Surface ships trawled for the
weights then tried to bring them to the surface, and, finally, surface ships with winches attempted
to grapple the weights, pull them to the surface, and return them to the edge of the pool. Each
weight had a string and a ping pong ball hot glued to the string. Ten weights were spread around
the pool, and each group in a section was assigned one of the numbers 1–10 for retrieval. The distance from
the edge of the pool to the weight was similar for all groups. The pool has a current. That current
was sufficient to keep underpowered boats from reaching the weights. Our fall guest speaker was
the Manager for Gulf well delivery for Shell Oil, and this project was tied into the oil spill recovery
efforts.
Figure 4.18: Redesign consultation time.
4.9 WIND TURBINES AND WIND POWER GENERATION
4.9.1 FACILITIES
This challenge requires a fairly large open space. Three test facilities were set up with large fans
providing the wind. We used an auditorium for the challenge although a large conference room
would work equally well. The fan was set up on a table and the distances marked on the table with
masking tape. The Electrical Engineering Department fabricated a power meter to record the wind
turbine output. This challenge is suitable for a public location.
4.9.2 CHALLENGE
This challenge requires each section to form its own company to manufacture and test a series of
prototype wind turbines to generate electrical energy.
The national energy policy has a goal that 20 percent of all electric energy produced in the
US should come from renewable sources by 2020. This has led to considerable development of wind
farms across the country. Wyoming is one of the premier wind locations in the US and a substantial
Figure 4.19: Wind turbine testing.
amount of research into wind energy is conducted at UW. Just how efficient is wind energy? This
design challenge is intended to let you examine the design issues associated with efficient wind
generation.
Each group will design and construct a wind turbine to maximize the energy output and to
determine the efficiency of the design. To complete the challenge, each group must complete the
following tasks:
1. Design and fabricate the wind turbine,
2. Set up the wind turbine 4 feet in front of the industrial fan,
3. Measure the wind speed immediately in front of the wind turbine (instrumentation will be
provided),
4. Compute the theoretical energy input of wind,
5. Attach the power meter and measure the energy output of the turbine,
6. Repeat steps 3–5 with the wind turbine 6 and 10 feet in front of the fan,
7. Compute the efficiency of the wind turbine for each wind velocity and plot the resulting data,
8. Enter all data and test results in your design notebook, and
9. At the design challenge, demonstrate that your turbine can generate the energy output reported
from your development program at 6 ft. from the face of the fan.
4.9.3 THE RULES
Each freshman Section will function as a team. A team consists of eight groups of two to three
students. Wind turbines are to be designed, built, and tested to optimize the team’s point score. In
that regard, the team must decide what constitutes its best composition of turbines.
Scoring will compare the instantaneous energy output of all wind generators in the team. The
highest average energy output receives the outstanding team award.
Individual groups present their ideas in class and receive input from the team.
• Each team must design and fabricate at least one vertical axis and one horizontal axis turbine.
The remaining turbines are at the discretion of the team (Figure 4.20).
• Each group will receive an electric generator and a gear set. This generator must be used; the
gears are optional. Gears may be traded within a team to optimize performance.
• Each group must maintain a design notebook (see Chapter 2).
• In addition to the parts supplied, no group shall spend more than $20.00 for supplies and
equipment to manufacture its wind turbine. It is OK to use “free stuff.” The definition of “free”
is that it has no commercial value. In short, the instructor can elect to keep it or throw it in
the trash following the competition. “Rented” or “borrowed” materials are not allowed.
• All wind turbines must be designed, fabricated, and tested prior to submittal.
• The turbine must be constructed from scratch, that is, no premade units can be used. The
choice of materials is left to each group.
Wind Turbine Design
1. The cross-sectional area of the turbine blades (perpendicular to the wind direction) must be
less than 324 square inches (18 in. square or a 10.2 in. radius).
2. The center of the wind turbine should be about 27 in. above the table—roughly at the center
of the fan axis.
3. Bricks will be available to anchor the wind turbine base. The bricks should not extend more
than 3 in. above the table.
4. You may construct a gearbox out of plastic sheet available in the shop.
5. The College of Engineering and Applied Science shop is available for fabrication of your
turbine.
Figure 4.20: Sample wind turbines. (a) Horizontal axes; (b) vertical and horizontal axes.
Generator and Transmission
The generator is a 6 volt DC motor supplied in the parts kit. The optimum drive speed is 2,180
rpm, so a step up from the turbine blade speed may be needed. Each generator must have a 4 ft.
long 2-wire conductor (available in the shop) attached to the generator for connection to the power
meter.
Data for the generator is given in Table 4.1.
Table 4.1: Generator Technical Data: Nichibo DC Motor—FE-260-18130, Available from Jameco Inc.
Nominal Voltage (VDC): 6
Voltage Range (VDC): 1.5–12
Current @ Max. Efficiency (A): 0.08
Efficiency (%): 50.6
Speed @ Max. Efficiency (RPM): 2,180
Torque @ Max. Efficiency (g-cm): 11.6
Shaft Diameter (inch): 0.078
Shaft Length (inch): 0.385
Size (Dia): 1.259 x 0.767
Terminal Type: Solder
Each kit has a set of gears similar to those in Figure 4.2. The design may include assembling
the gears into a gearbox with one axle to the turbine blade and an output axle to the generator.
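Because the generator's best output is near 2,180 rpm and the turbine blades will turn far more
slowly, a quick estimate of the required step-up ratio helps when selecting gears from the kit. The
blade speed in the Python sketch below is an assumed, illustrative value.

def step_up_ratio(blade_rpm, generator_rpm=2180):
    """Overall gear ratio needed so the generator runs near its optimum speed."""
    return generator_rpm / blade_rpm

# Illustrative only: blades expected to turn at roughly 500 rpm in the fan's wind
# would need the gearbox to step the speed up by about 4.4:1.
print(round(step_up_ratio(500), 1))  # -> 4.4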
4.9.4 DEVELOPING A TEST PROGRAM
Figure 4.21 shows a schematic of the turbine test setup. One test setup is available for testing and
measurement prior to the challenge. A schedule of lab availability times for testing will be posted.
Figure 4.21: Schematic test setup.
Each group must establish a test program for their turbine. The wind turbines are to be set
and tested at 4, 6, and 10 feet from the fan. The vertical axis turbine is set with the axis at the 4, 6,
and 10 ft. marks. The horizontal axis turbine is set with the face of the blades positioned at 4, 6, and
10 feet from the face of the fan.
Turn on the fan and measure the wind speed and compute the theoretical wind energy. The
fan will be positioned on a 12 in. high pedestal to make the centerline of the fan axis at an elevation
of 27 inches and to reduce ground effects.
A wind velocity meter and the power meter will be available for testing.
4.9.5 CALCULATIONS
The average density of air at 7,000 ft. (2130 m) is 0.0620 lbm/ft³ (0.9920 kg/m³). The total kinetic
energy from the wind is 1/2 mv², where the mass of the air is that swept by the turbine. The output
energy of the wind turbine is measured with a power meter. Electrical power, P, is computed by

P = V I

where P is power in watts, V is voltage, and I is current. Efficiency is computed as

η = 100% × (Energy-out / Energy-in)

where the energy-in is the theoretical wind energy and the energy-out is from the power meter data.
Be sure your units for the kinetic energy and electrical energy are compatible when you complete
the efficiency calculation. Compute the energy-in and -out for the three turbine locations and then
plot the efficiency versus wind velocity.
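As a worked illustration of the calculation chain (theoretical wind power in, electrical power out,
efficiency), the Python sketch below treats the 1/2 mv² expression on a per-second basis so that power
is compared with power. The swept area, wind speed, and meter readings are assumed values used
only for illustration.

AIR_DENSITY = 0.9920  # kg/m^3 at about 7,000 ft elevation (value given in the text)

def wind_power_in(swept_area_m2, wind_speed_mps):
    """Theoretical wind power through the swept area, in watts.

    The mass of air swept per second is rho * A * v, so power = 1/2 * rho * A * v**3.
    """
    return 0.5 * AIR_DENSITY * swept_area_m2 * wind_speed_mps ** 3

def electrical_power_out(volts, amps):
    """Electrical power P = V * I, in watts."""
    return volts * amps

def efficiency_pct(power_out_w, power_in_w):
    return 100.0 * power_out_w / power_in_w

# Assumed numbers: a turbine near the 324 in^2 limit (about 0.209 m^2), a 5 m/s fan wind,
# and a meter reading of 2.0 V at 0.05 A.
p_in = wind_power_in(0.209, 5.0)              # about 13 W available in the wind
p_out = electrical_power_out(2.0, 0.05)       # 0.1 W measured
print(round(efficiency_pct(p_out, p_in), 2))  # about 0.77 percent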
4.9.6 CHALLENGE DAY SETUP
Prior to the start of the official challenge, groups must complete “calibration tests.” Design notebooks
must be submitted on the challenge day.
Each group will have five minutes to set up the turbine and measure the output. On the
challenge day, all tests will be run at 6 ft. from the face of the fan. At the judges’ discretion, any
turbine not ready after the five minute countdown will forfeit and an energy output of zero will be
recorded. The group may be allowed to retest after all other teams have completed their test.
At the start of each test, the judge will indicate start and the clock will begin. The test will last
until the energy is recorded or time expires. Groups may maintain contact with turbines prior to
“go” but may not touch the turbine during the test interval. Turbine blades must be stationary when
the test begins and the fan is turned on. A group member may touch the turbine to adjust parts but
the energy record must be completed without manual assistance or the test will be disqualified.
Only the called group may enter the stage challenge area and the turbine repair area. Only
team members may request verification of opposing turbines for compliance with contest rules and
design limits. Spectators are not permitted to make such requests.
Any challenge must be made to the judges prior to the awarding of points for a particular test.
If a turbine is challenged and found to violate contest requirements, it will be disqualified and an
energy output of zero recorded.
Scoring challenges for a particular round must be made by the end of that round of the
challenge, and may only be made by team members. Resolution of point challenges will be made at
the discretion of the judges.
No tools will be supplied on the challenge day. Teams are expected to bring all necessary items
to repair or modify turbines during the challenge, including spare parts.
After the official start of the challenge, only registered groups will be allowed in the challenge
and work areas. Spectators are welcome to view the challenge from the seating area in the auditorium.
After each test and prior to leaving the test area, groups are responsible for verifying that point
totals have been correctly recorded by the challenge judge. Judges will be instructed to oversee these
checks. One group member will sign each score sheet.
4.9.7 SCORING
Scoring for the project will be based on completing tasks on time. Table 4.2 indicates the task and
timing. The challenge requires the model to be completed, calibrated, and operational. The maximum
energy output will be recorded for each group. The team score is the average of the maximum energy
output from all groups in the team.
Table 4.2: Challenge scoring
Task                                                              Date                  Points   Total Points
Design notebook progress                                          Design Day – Week 4   10       10
Have a model prototype that is described in the design notebook  Design Day – Week 4   10       20
Model functioning                                                 Challenge Day         20       40
Energy generation curve which corresponds to design notebook     Challenge Day         10       50
Energy prediction within 25%                                      Challenge Day         10       60
4.9.8 SAFETY
The objective of the challenge is to foster engineering creativity and cooperation. The design group
is responsible for ensuring safety of participants and spectators during the challenge. Groups using
any feature deemed dangerous by the judges may be asked at any time to prepare a safety plan or
suitably modify the design before continuing in the challenge. Offending designs may be disqualified
at the discretion of the faculty member or peer assistants. Use of pyrotechnic or similar devices is
strictly prohibited. Any questions regarding safety may be directed to your instructor.
4.9.9 CHALLENGE ORGANIZATION
Early Organization:
• Identify a challenge facility and schedule the design challenge day.
• Identify a practice area and setup.
Preparation for challenge day:
• Prepare summary score spreadsheet.
• Prepare data sheets and coordinate individual score sheets with the summary spreadsheet.
• Procure a handheld anemometer.
• Procure a power meter and make sure it is calibrated for the generator specified in the challenge.
• Draw lots for the section challenge times.
• Prepare a press release if appropriate.
• Prepare a descriptive poster if the challenge is in a public area.
Challenge day: Peer assistant assignments:
• Set up tables, fans, and distance marks prior to students arriving.
• Set up a registration table: log in each group, grade notebooks, and provide the individual data
sheet: Typically, two assistants.
• Power output: Typically, two assistants. The assistants confirm the power output of each group.
• Data recording: One or two assistants enter the data into the summary spreadsheet. One
person may have to go back to the other activities to confirm data, so two people are preferred.
• Oversee site clean-up: all.
Following the challenge:
• Provide the summary data sheet to all instructors.
• Arrange any awards, e.g., pizza party.
4.9.10 CONCLUDING COMMENTS
The fans provided a reasonable wind velocity for the test; however, the power meter was somewhat
overdesigned so we were always reading the low end of the scale. For the final challenge, we placed
foamcore panels around the edge of the fan to better direct the wind toward the turbines. Virtually
all the turbines measured some output. The gearbox is essential to the success of this challenge.
Scrap Plexiglas and Lexan sheet pieces, adhesive, and axle materials were available in the shop for
“free.”
CHAPTER 5
Interdisciplinary Design
5.1 OBJECTIVES
A goal of both ABET and the College of Engineering and Applied Science is to offer and evaluate
comprehensive senior interdisciplinary design projects. Interdisciplinary in this instance means stu-
dents from different departments are recruited to design a project as opposed to teams of students
from within a single department. The following addresses how such projects are organized and
executed.
5.2 ADMINISTRATIVE ISSUES
The first major administrative issue of an interdisciplinary project is addressing the individual de-
partmental requirements for senior design. The University of Wyoming senior design requirements
are summarized in Table 5.1. Initiation of an interdisciplinary team requires negotiation with each
department to assure that the students receive proper credit, that workloads are commensurate with
the credit hours, and that each department is satisfied that the project is commensurate with existing
departmental protocols.
Table 5.1: Departmental Senior Design Requirements
Department                          Fall Term Credit Hours   Spring Term Credit Hours
Civil Engineering                   3 hours                  1 hour
Civil Engineering-Transportation    2 hours                  2 hours
Electrical Engineering              2 hours                  3 hours
Energy Systems                      3 hours                  3 hours
Mechanical Engineering              3 hours                  4 hours
Chemical Engineering                2 hours                  –
Further compounding the credit hour requirements are the departmental deliverable require-
ments. Civil and Chemical Engineering students prepare project plans and require oral presentations
of their projects. Electrical and mechanical students must go through a product development process,
fabricate their project, and prepare poster presentations.
Establishment of a separate section within each department for the interdisciplinary project
resolved the departmental credit hours issue. The entire class meets as a group even though the
students are all in “different” classes. While straightforward, the solution generated a second set of
administrative issues. The university has a policy that undergraduate classes must have at least 10
students enrolled for the class to proceed. No department had that many students in one section.
The college successfully petitioned for an exemption on the basis that the entire project had sufficient student population.
5.3 PROJECTS
Six different interdisciplinary projects are presented. Two projects were tied to the NASA Zero
Gravity research initiative, and the remaining four were local projects. The local projects were 1) design of an automated transit system for the campus, 2) development of more environmentally sustainable solutions for accessing gas fields in the state, 3) conversion of the university energy plant from coal to wood, and 4) a non-engineering interdisciplinary course on medieval construction in conjunction with the History Department. The projects are summarized below and presented in detail in the following chapter.
5.3.1 NASA ZERO GRAVITY PROJECTS
The NASA Zero Gravity program allows students to propose projects that would assist NASA in
space endeavors. Students prepare proposals, submit them to NASA, and if accepted, construct the
prototypes. Students then take their projects to the NASA center in Houston, Texas. After review
and training by the NASA staff, the projects are loaded into the NASA KC-135A, and the students
conduct their experiments in a weightless environment.
The first Zero Gravity project challenged students to develop truss elements and connections
for the construction of three-dimensional truss structures in space. The project consisted of design-
ing carbon fiber truss elements and plastic connectors. The second Zero Gravity project involved
developing a zero gravity exercise machine for use on the international space station.
5.3.2 AUTOMATED TRANSIT SYSTEM FOR CAMPUS
The students were asked to design an automated transit system for campus that linked the remote
parking lots with the main campus. Completion of the project involved traffic studies for both parking
and walking distances from stations, development of the vehicle concepts, vehicle design, guideway
design, power supply, and operations. The project was conducted over a two-year timeframe.
5.3.3 DISAPPEARING ROADS AND GAS EXTRACTION
Jonah Field in western Wyoming was one of the first major gas fields developed by fracking processes.
Aerial views show a spider web pattern of roads in a region that is highly sensitive to pronghorn
migration and sage grouse habitat. The design project required the students to interact with energy
companies and the Bureau of Land Management to develop alternative designs to reduce the site
impact.
5.3.4 UNIVERSITY ENERGY PLANT CONVERSION
The University of Wyoming Energy Plant provides steam heat to the campus. Originally constructed
to burn anything from garbage to coal, it has functioned as a coal plant since its inception. Recent
years have seen an extraordinary amount of beetle kill in the nearby national forest. The student
project evaluated if the plant could be converted to wood chips fuel source, established that sufficient
wood was available to support the conversion, designed handling and transportation of the wood,
developed plans for modification of the energy plant, and participated in the first pilot burn.
5.3.5 MEDIEVAL CONSTRUCTION
This was a joint project with the History Department. Dr. Kristine Utterback of the History De-
partment presented the life and times of Medieval Europe. The engineering aspect of this project
focused on the design and construction of the first Gothic Cathedral in St. Denis, France.
5.4 STUDENT RECRUITMENT
The general concept of each design project is developed to the summary level indicated above. The
project is open to all seniors needing to fulfill a senior design requirement. The projects run for an academic year beginning in the fall term. To suit the departmental requirements, Civil Engineering students complete their work in the fall and a new group of civil engineering students joins in the spring. Other students enroll for the full year.
Critical to recruiting students is the selection of the project. Student engagement was highest
when the project either directly addressed an issue the students recognized or had a clear envi-
ronmental benefit. One year several national competitions were suggested in addition to a local
project. The local project was overwhelmingly the student choice. The professor in charge of an
interdisciplinary project must be able and willing to adapt to the projects that will attract students.
Consideration was given to using an Engineers without Borders project. Engineers without
Borders was not selected because it was not clear that all of the interdisciplinary design criteria could
be satisfied. In addition, Engineers without Borders already had projects underway, and the schedules did not mesh.
Each spring, in the week prior to advising and two weeks prior to students signing up for
fall courses, an announcement of the project is emailed to all seniors, and posters are placed in the
college. An open house for the project is held. A general overview of the project is presented
at the open house and the students are invited to ask questions to explore their interest. It is only
after the students sign up for fall courses that the composition of the class is known.
Supplementing the general sign up for the course, certain “specialists” are recruited directly.
For example, one junior student in the chemical engineering department was mobility impaired and
served on the university committee for campus mobility. After discussions with the department head,
she was allowed to take this class early and had the responsibility for addressing all ADA requirements
for the campus transit system design in addition to her technical responsibilities. Similarly, for the
disappearing roads project, an Environmental and Natural Resources student was recruited to be the
in-house environmental specialist. In both cases, these students added immeasurably to the quality
of the project.
To further expand the class horizons, external classes are recruited to assist the design team.
For example, on one project a senior class in Computer Graphics assisted in conducting focus group
studies and preparing graphic imagery for the transit system alternatives.
5.5 PROJECT ORGANIZATION
Two key elements determined how the interdisciplinary design class was organized. First, the com-
position of the class had to be established. Because the class was open enrollment, the number of civil,
mechanical, electrical, and chemical engineers is not known until the end of the spring term. Once
the composition of the class is known, the actual project is modified to fit the class composition.
Second, during the first week of class in the fall, each student is interviewed to establish the goals
and aspirations the student has for the course. Prior to the interview, each student fills out a data
card with their name, major, specialized skills like specific computer programming experience, GPA,
and a statement on what they expect to gain from the course. From these interviews, the project
is further adjusted to meet the skills and majors of the available students. A project manager and
project engineers for each of the key components are selected.
The class is structured like a design office. The professor becomes the “principal in
charge,” and the day-to-day work responsibilities are given to the students. The project manager is
immediately charged with establishing a schedule and works with the professor to assign students
to the individual tasks. During class periods, the professor meets with the project manager, each
lead engineer, and individual students to review progress. Each team member is expected to provide
written or oral weekly updates on progress, difficulties, and resources needed to complete the task.
The transit system description in Chapter 6 contains a representative organization chart. All senior
design projects require written final reports and public presentations.
The professor’s job is to assure that the project stays on schedule, redirect the project if it
is going off track or the students propose solutions well outside their capabilities, and to arrange
for supporting materials. The supporting materials include anything needed to design and fabricate
the project, meetings with sponsors, field trips, or access to key personnel on or off campus. These
resources vary with the project. Classes have guest presenters ranging from U.S. senators, to the
university president, to maintenance staff. Every project involved at least one field trip.
Each project required some level of funding. The funding came from corporate support,
the dean’s office, or the H. T. Person Endowment. Individual funding is discussed in the project
descriptions. Securing funding support is the responsibility of the professor.
5.6 ASSESSMENT
Multiple assessments of the projects were conducted. The NASA Zero Gravity projects were as-
sessed by whether NASA accepted the project, and the NASA critique of their final report. Each
project required the student teams to make a public presentation of their project. Members of the
university, industry, professional engineers, and the press are invited. Everyone in the audience is
given an assessment sheet and asked to critique the presentation. Professionals associated with the
development of the project or the field trips are invited to the presentation and are given copies
of the written report in advance of the oral presentation and asked to critique the presentation. A
sample critique summary is provided in Appendix IV.
The work demands of these classes are among the biggest challenges of the students' academic careers. The interdisciplinary designs have been well received by the students. They became the best advocates
for the next year’s class and many used their final design report as an example of their work in job
interviews.
CHAPTER 6

Interdisciplinary Projects
6.1 INTERDISCIPLINARY DESIGN PROJECTS
Each of the following interdisciplinary projects follows a parallel format. The objective of the design
project is presented, the student composition is given, the course organization is presented, the
student results are summarized, and an assessment in the form of closing comments is provided.
6.2 NASA ZERO GRAVITY I: CONSTRUCTION IN SPACE
6.2.1 OBJECTIVE
This Zero Gravity project challenges the students to develop techniques and connections for the
construction of a truss structure for space applications. The project consists of designing carbon fiber truss elements and plastic connectors, practicing assembly of the truss, developing safety plans, and executing the design in the NASA KC-135 aircraft.
6.2.2 CLASS COMPOSITION
This class consisted of seven undergraduate students ranging from sophomores to seniors. Two were civil engineers, four were mechanical engineers, and one was a senior journalism major. The two
engineering seniors promoted the project as an independent study program to meet their senior
design requirements. Dr. David Walrath, of the Mechanical Engineering department, provided
technical assistance and coordination with the ME department requirements.
6.2.3 CLASS ORGANIZATION
The class was structured in a discussion format and given a single course number. The original concept
of space construction came from the professor. The first quarter of the semester explored the range
of possible solutions. The project was organized into tasks commensurate with the NASA project
requirements. These included design of the experiment, detailed component design and fabrication,
safety plans, and storage and material retention devices for on board the aircraft. Connection of the
truss elements at the nodes emerged as the critical element of the project.
6.2.4 STUDENT WORK
The student design consisted of hollow carbon fiber truss elements and polyethylene connector units
(Figure 6.1). The truss was to be assembled in the zero gravity environment onto base elements
fastened to the top of a container box. The container box was fastened to the aircraft floor in accordance with NASA guidelines.

Figure 6.1: Snap together truss design.
For comparative evaluation, two different truss concepts were developed. The first truss con-
cept consisted of snap together elements. The second concept had a cam mechanism on the connector
so that the node could be opened, a ball from the truss end inserted, and the cam twisted shut to
complete the connection.
The zero gravity flight consists of a series of parabolic curves. Weightlessness occurs while
the plane is in the parabolic arc. The gravity free time to work is on the order of 20 seconds. In
order to prepare for the flight, the students practiced the truss assembly underwater in the campus
swimming pool (Figure 6.2). This trial program worked out especially well because the amount of
time available for holding their breath and working underwater was again approximately 20 seconds.
The underwater fabrication occurred in a semi-weightless environment. The truss elements were
close to neutral buoyancy and the students had no firm surface to grip.
The project was accepted by NASA and scheduled to fly in the spring semester. Each truss
element and connection element had a Velcro strap attached to it and a corresponding Velcro tie
down on the container box. The Velcro prevented the pieces from floating free during the zero gravity portion of the flight. Both truss concepts were capable of being constructed within the time constraints of the flight. The final report to NASA concluded that the cam device was superior to the snap together design. While the snap together design initially could be constructed faster, the cam system was much easier to disassemble or reconfigure.

Figure 6.2: Underwater trial fabrication.
All components were designed and fabricated by the students. The aluminum storage box and
the foundation of the truss were designed by the students and were common to both truss systems.
The design included the safety padding along the edge of the truss container box (Figure 6.3).
6.2.5 ASSESSMENT
The project required two semesters due to the NASA review and acceptance process. Students
presented their results to both NASA and to their respective student professional societies on campus.
There was sufficient interest in the project that a second project was undertaken the following year.
NASA was critical of the student final report for not being sufficiently detailed for deployment.
In addition to the construction write-up, NASA indicated they wanted a full weight and strength
analysis.
6.3 NASA ZERO GRAVITY II: EXERCISE MACHINE
6.3.1 OBJECTIVE
In zero gravity, astronauts need self-contained, self-reacting, lightweight exercise equipment for transport to space, and it needs to be compact enough for easy storage on the space station. The
NASA flight must demonstrate the suitability of the equipment in zero gravity. Students designed
a “Bowflex ®” based exercise platform that allowed upper and lower body exercises. The final device
was designed to be collapsible and fit in a minimum volume for both transport to and storage on board the space station. The project organization and execution were similar to the truss construction project.

Figure 6.3: Snap together truss construction in zero gravity. (Photo courtesy of NASA)
6.3.2 CLASS COMPOSITION
The class consisted of ten students: four civil engineers and six mechanical engineers. Four of the team
were female students. Dr. David Walrath assisted with the mechanical engineering components.
6.3.3 CLASS ORGANIZATION
The class was structured in a discussion format and given a single course number. The original
concept of space construction came from the professor. The class worked in a colloquium setting
and was responsible for fabrication of all components and preparation of reports.
6.3.4 STUDENT RESULTS
The students examined a range of exercise equipment and eventually settled on modifying Bowflex®
system components to provide a force resistance system. The Bowflex® rods were lightweight, and
could be procured in varying stiffness. They could be configured to provide upper and lower body
strength exercises (Figure 6.4). Elastic band solutions provided similar exercise options, but the
rigidity of the student design allowed easier mounting and dismounting of exercise positions.
6.3.5 ASSESSMENT
The project ended up being only partially successful. While the student design functioned as planned,
one of the project sponsors did not want to continue due to potential liabilities of elements breaking
in space. Breakage was never an issue in the trials, but the potential inability to replace a part remained a concern to the team. Complementing the student effort, the college supported the travel
expenses for a reporter from the local TV station to accompany the students to Houston. This
resulted in a five-day TV documentary featuring the students, their project, and their flight.
6.4 DESIGN OF AN AUTOMATED TRANSIT SYSTEM
6.4.1 OBJECTIVE
The main campus of the University of Wyoming is growing, and parking is being relocated to the
campus perimeter. An interdisciplinary senior design class was recruited to design an automated
transit system for the campus. This was a two-year project. The first year involved planning and
preliminary design of the system. The second year involved constructing a prototype model transit
system and alternative guideway designs.
(a) Demonstrating leg strength
(b) Changing exercise setup
Figure 6.4: Flight team on board the NASA KC-135A test flight (Photos courtesy of NASA).
6.4.2 CLASS COMPOSITION
The class consisted of five to eight civil engineers, two chemical engineers, one electrical
engineer and four mechanical engineers. One of the mechanical engineering students was a dual
major in ME and EE. The number of students varied from semester to semester due to the civil
engineering departmental requirements.
6.4.3 CLASS ORGANIZATION
This was the first fully interdisciplinary senior design project. The course was divided into five
components. The first component required about half a semester and dealt with project planning. The
second component refined initial concepts and selected the overall transit system. This component
included field trips to assist in understanding the magnitude and complexity of the undertaking.
The third component began in the second semester and included detailed design of the system
elements including vehicle, guideway, geometric layout, stations, maintenance facility, and control
center. This activity additionally incorporated graphic art consultants. The last two components
occurred in the second year and included design of an alternative guideway and fabrication of an
operational prototype.
The class was organized like a design office. Figure 6.5 provides the second semester
organization chart for the class including the various project assignments. The graphics design
portion of the project was provided by the senior Computer Graphics II class.
Figure 6.5: Class organization chart.
6.4.4 STUDENT RESULTS
The first semester established the design parameters for the project. Five tasks were completed. First,
the students reviewed material on automated transit systems found in the literature and lectures
prepared by the professor. Second, they conducted traffic studies to determine the demand to be
placed on the system, stations, and vehicles. Third, they examined the Americans with Disabilities Act requirements. Fourth, they developed the technical design guidelines. Fifth, they developed the
preliminary design concept. Augmenting the literature review was a field trip to Denver, Colorado,
where the team visited Rocky Mountain Prestress, Six Flags–Elitch Gardens, and the Denver In-
ternational Airport Transit system. Rocky Mountain Prestress provided the team with insight into how construction could be prefabricated to minimize on-site construction time and dis-
ruption. Six Flags–Elitch Gardens engineers discussed switching, safety, and operation of rides with
small vehicles (Figure 6.6).
Figure 6.6: Discussing switches at Six Flags–Elitch Gardens and conducting bus traffic studies.
The Denver International Airport automated transit system requires very high reliability
and has a sophisticated maintenance area. Students were introduced to transit operation reliability
concepts.
The second task developed the overall load criteria and transit layout. This study included the
size of the transit vehicles, frequency they would run, and the route they would take. To complete
this task the students conducted extensive surveys of the shuttle bus. The university runs buses
on 10–15 minute headways from about 7:30 in the morning until 8 in the evening. The student
findings were enlightening. First, the only times the buses were heavily used were in the 20-minute
period prior to 8 AM and 9 AM morning classes. Many times during the day the buses ran empty.
Supplementing the traffic study, the students met with the president of the university to review the
long range planning and capital construction plans. In addition to determining where campus traffic would likely concentrate, they also examined athletic events on campus to evaluate whether the transit
system could assist access to football and basketball games. The study determined that the transit
system would be elevated to eliminate conflict with street traffic.
As part of the traffic study, the students conducted an assessment of how far patrons would
walk. From a series of transit studies in Canadian cities, they determined that transit users would
walk about 1,000 feet before considering alternative mobility options (Figure 6.7). The present and
future station locations provide access to 100 percent of the present and future campus.
Figure 6.7: Aerial plan showing station locations and walking distances.
The third task determined how the Americans with Disabilities Act (ADA) would impact
their design. In addition to reviewing the ADA requirements, the students laid out a mockup of the
interior of the vehicle, then used wheelchairs to enter, exit, and position themselves in the vehicle.
This study led to four major conclusions. First, all stations would require elevator service. Second, the
vehicle must be able to accommodate one and preferably two wheelchairs. Third, wheelchair access
must not require supplemental assistance or restraints. Fourth, the team must develop appropriate
emergency egress solutions.
The fourth task established the design guidelines for the system. These were divided into
two parts. The first part examined ASCE-7 Loads on Structures1 for external loads on the structure
and stations. ASCE-7 does not provide wind loads for guideways or vehicles, so the students had
to extrapolate the specification data to suit their conditions. The second part addressed the vehicle
requirements and consisted of two components. The first component was the overall frequency of
vehicles, travel times, and interface requirements with the guideway to assure fatigue performance
and ride comfort. The latter item leads to maximum horizontal accelerations and corresponding
minimum curve radii. The second component was the size of the vehicle and its orientation on the
guideway.
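As an illustration of the kind of extrapolation involved, a minimal sketch of an ASCE 7-style velocity-pressure check is shown below; the exposure and directionality coefficients and the design wind speed are placeholder values, not the figures the students actually adopted.

    # Velocity pressure in the general ASCE 7 form q = 0.00256*Kz*Kzt*Kd*V^2 (psf, V in mph).
    # The coefficient and wind-speed values below are illustrative only.
    def velocity_pressure_psf(v_mph, kz=0.85, kzt=1.0, kd=0.85):
        return 0.00256 * kz * kzt * kd * v_mph ** 2

    # Hypothetical 3-second-gust design speed for a quick guideway check
    print(f"q = {velocity_pressure_psf(115):.1f} psf")  # about 24.5 psf with these assumptions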
The conclusion resulting from the first semester was that the vehicle would be suspended
under the guideway to mitigate the effects of weather in the Laramie area. An overhead support
system assures that the bogie supporting the vehicle is within an enclosed area and out of the
weather. While Laramie is semi-arid, the winter snowstorms and associated winds raised the concern that an exposed guideway surface could ice over and leave a vehicle stuck on inclined areas. To further assure all-weather operations, a linear induction motor drive was selected. A linear induction motor not only provides the power for climbing hills but also works to control downhill speed
without having to resort to a mechanical braking system. The mechanical braking system for the
vehicle served as a backup. Safety and emergency egress solutions were developed and included in
the recommendations.
The traffic analysis suggested that vehicles with a capacity of six seats were adequate for
the majority of the travel. The floor space was then designed to accommodate two wheelchairs.
This provided a vehicle with a total capacity of approximately 20 students if wheelchairs were not
present. That would be six students sitting and 14 standing. While this would be a relatively tight
configuration, it was satisfactory to carry the peak load occurring just before the 8 AM and 9 AM
classes. The students concluded that a small vehicle operating at a four-minute headway would be
optimal for the peak hours. Vehicles would automatically be removed from the system and headways increased to 5 to 10 minutes off peak. The students further decided that the transit system
should offer two-way operation. That is, one side of the track would carry vehicles in a clockwise
direction while the opposite side would carry them in a counterclockwise direction. The guideway
would split at a station so the station would be between the two tracks and therefore only require
one set of stairs and elevators. The two-directional operation assured that the minimum transit time
would result and provide redundant operation. A rider could go counter flow to get to a station
immediately across campus instead of having to ride the entire route to get to the same location.
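A minimal sketch, using the capacity and headway figures above and an assumed round-trip travel time (a placeholder, since the actual route timing is in the student report), shows how these choices translate into line throughput and fleet size:

    import math

    seated, standing = 6, 14          # vehicle capacity without wheelchairs
    capacity = seated + standing      # about 20 riders per vehicle
    peak_headway_min = 4              # peak-hour headway, minutes
    round_trip_min = 20               # assumed round-trip time around the loop (hypothetical)

    vehicles_per_hour = 60 / peak_headway_min              # departures per hour, one direction
    throughput = vehicles_per_hour * capacity              # riders per hour, one direction
    fleet = math.ceil(round_trip_min / peak_headway_min)   # vehicles needed to hold the headway

    print(f"{throughput:.0f} riders/hour per direction with {fleet} vehicles in service")

With these assumed numbers, the peak service would move roughly 300 riders per hour in each direction.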
During the second semester, the engineers “hired” the ART 4110 Computer Graphics II design
class to assist in developing the system graphics. Three teams from this group presented graphic
concepts to the engineers. Stagecoach emerged as the system theme. The graphics class presented marketing concepts for advertising on the side of the vehicles to assist in defraying operational costs.
To publicly assess the overall concept, the two classes conducted a focus group study. The study was
conducted in the lobby of the student union (Figure 6.8). Students visiting the booth were requested to vote on a name and final graphic themes and to provide opinions on travel times and station locations.
Figure 6.8: Focus group booth and one schematic of a vehicle.
The guideway design was a steel truss spanning between precast concrete columns. The
columns were designed to have a sandstone finish to match the buildings on campus. The final
guideway design layout was selected to minimize the number of parking spaces taken and trees
impacted. The suspended vehicle system had a secondary benefit: should one of the large cottonwood trees surrounding the campus lose a branch, the branch might strike the guideway but could not land across it in a manner that would disrupt or dislodge a vehicle.
The stations were designed in precast concrete and galvanized steel. The architectural finish
was selected to match the buildings on campus and to be in accordance with the university trustees’
guidance for overall campus architecture. The stations were intended to be modular to facilitate ease
of construction. Where possible, it was also anticipated that the stations might be integrated into
any new building construction to provide even more efficient access to campus facilities.
The mechanical engineering component of the project was satisfied by using the rapid pro-
totype modeling equipment at the university. Students designed the overall cab for the vehicle and
the bogie system. Each of the components was “printed” on the rapid prototype machine. They
were manufactured to approximately 1/10 scale and were available for inspection during the public
presentations.
Chemical engineering students were charged with developing a fuel cell component for each
car to allow a vehicle to return to a station in the event of a power outage. Following their initial
research, the students concluded that such power supplies were available commercially. They un-
dertook a study for an alternative power supply for the project. The students developed a concept
for using a solid oxide fuel cell currently under development by Siemens in Germany. The fuel cell
operated on natural gas at a temperature of approximately 900 °F. The fuel cell generated sufficient
electric energy to completely operate the transit system and have a 20 to 30% excess capacity to
provide base load and off peak transit power to the university. The student analysis concluded that
the heat from the fuel cell would be sufficient to replace the heat generated by the coal burning
furnaces at the university energy plant and thereby reduce the university carbon footprint.
The projected construction schedule for the project was 700 working days from bidding until
final construction. The project budget was estimated to be $64.3 million, with a 15% contingency for future design cost increases.
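As a quick check on the cost figures, and assuming the contingency is applied to the $64.3 million base estimate, the total budget envelope works out as follows:

    base_estimate = 64.3e6   # estimated construction cost, dollars
    contingency = 0.15       # 15% contingency for future design cost increases

    total = base_estimate * (1 + contingency)
    print(f"Total budget envelope: ${total / 1e6:.1f} million")   # about $73.9 million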
The design was presented at a public meeting and included invited judges. Adjudicators in-
cluded the president of the university, the vice presidents of Research and Facilities, the dean of
Engineering, five faculty members, and three professional engineers. In preparation for the presen-
tation, one of the students suggested that an animated graphic of the systems would be impressive.
The class provided aerial views of the campus and graphics of the transit system to Mr. Brendan
Dolan, who in turn generated a complete campus model in the computer game Roller Coaster Tycoon.
The model was used in the presentation and included aerial views of the system and a comprehensive
passenger’s view from inside the vehicle as it circumnavigated the campus.
The third semester broke the project into two parts. The first part occurred in the civil engi-
neering course for design of prestressed concrete. That class used the design guidelines developed the previous year to design an alternative guideway in precast-prestressed concrete. In the execution of the precast concrete design, the students were introduced to significant geometric design constraints due to the centrifugal force on the vehicles and the corresponding torsional effects in an inverted C-shaped structure.
Mechanical engineers undertook the task of constructing a prototype transit system. During
the field trip to the Denver International Airport, Logplan LLC offered to provide the university
with a small section of the original Denver baggage handling system. The students used the bogie
and guide rails from the baggage handling system as the basis for the demonstration system. They
modified the bogie to support a Plexiglas cabin complete with operational doors. The students
modified the LIM rail and LIM motor to correspond to the overall design criteria. The final project
demonstrated the operation of the automated vehicle system. The vehicle pulled into the first station
and cycled the doors automatically. Optical sensors checked for obstructions and recycled the doors
if an obstacle was present. The vehicle traversed to the next station, cycled the doors, then shut down
(Figure 6.9).
6.4.5 ASSESSMENT COMMENTS
The project was complex, difficult, and highly engaging for the students. It was assessed in two
different environments. The first assessment was a public presentation of the project at the end of
the first year. The presentation team included the engineering students and the Computer Graphics
students. Evaluation sheets were given to everyone in the audience and specific sheets were given to
individuals asked to adjudicate the project. Each critique sheet asked the reviewers to evaluate the project on the basis of the written report, the technical merits, and the oral presentation. On a scale of 1 to 4, with 4 being outstanding, the technical review team score was 3.6. Non-engineers rated the team somewhat higher, at 3.8, than the technical reviewers. A sample evaluation sheet is provided in Appendix IV.

Figure 6.9: Students programming the model transit system.
A press announcement was compiled by the graphics class, and press releases were prepared.
The project received coverage in most newspapers in the state. Copies of the final report were sent
to the board of trustees and several state legislators.
UW TV conducted the second semester interview and presentation, and the tapes were released within the state. One of the interesting facets of the mechanical engineers' design was that the students were able to make the Denver Airport baggage handling system work.
Figure 6.10: Final project graphics (Courtesy ART 4110).
The second level of review occurred when the final report was distributed to the various field
trip sponsors. Several comments were received; however, a most interesting critique came from a
firm that conducts planning of specialty transit systems. They had acquired a copy of the report from
the Denver International Airport. Their comment was that the students had not followed all of the
relevant specifications for transit design. At the same time, the firm understood that working from first principles, not just following specifications, was a class objective. They then requested the names
and contact information for every member in the class as they wanted to hire as many as they could.
6.5 DISAPPEARING ROADS
6.5.1 OBJECTIVE
Jonah Field in western Wyoming was one of the first major gas fields developed by fracking. The
tight sandstone led to wells being placed in close proximity to each other. Consequently, a spider
web pattern of access roads developed. The class was charged with designing methods to reduce the
surface impact of drilling. In the process of the course, the class also elected to enter the “Disappearing
Roads” competition sponsored by Halliburton Corporation and run by Texas A&M University.
6.5.2 CLASS COMPOSITION
The class consisted of seven civil engineers, 12 mechanical engineers, and one Environmental and
Natural Resources student. The second semester introduced a new group of civil engineers. Two civil
engineers elected to take the second semester as an elective credit to complete the project design.
6.5.3 CLASS ORGANIZATION
The Disappearing Roads competition was presented to the class as a model but not a requirement for
the project. The first two weeks were spent discussing the environmental and engineering issues to be addressed.
The class then traveled to Pinedale, WY, for a three-day field trip. The trip included stops at the
Halliburton facility in Green River, WY, and a briefing of environmental and regulatory constraints
by the Bureau of Land Management in Pinedale, WY. EnCana Corporation arranged a full day tour
of Jonah Field including briefings on fracking operations, gas recovery, disposal of drilling materials,
and overall operations.
Following the field trip the class elected to enter the Disappearing Roads competition. The
class was interviewed by Mr. Richard Haut of the Houston Area Research Consortium and, based on the interview, was allowed to compete.
The second semester included a field trip to the Questar Production virtual drilling facility
in Denver, CO. The tour included a 3D visualization of the Pinedale Anticline Production Area and
the difficulties of hitting the small gas formations.
Near the beginning of the class, a request came from U.S. Senator John Barrasso’s office for
a student panel discussion of energy policy based on the book Beyond Oil.1 Six students from the
class participated with one student serving as moderator. At the conclusion of the course the group
that was representing the university at the Disappearing Roads competition presented their findings
to Senator Barrasso. The senator met with the students for well over an hour and quizzed them
closely on their work. This briefing was exceptionally helpful in preparing the students for their
Disappearing Roads presentation.
6.5.4 STUDENT RESULTS
The following student results focus on the environmental considerations leading to development of
a mat road system. The work included research, development, and testing of concepts, and concluded with the class participating in the Disappearing Roads competition.
The Pinedale Anticline Production Area (PAPA) and Jonah Field are in west central Wyoming, west of the town of Pinedale. Though these two fields share many of the same charac-
teristics, there are a few key differences. First and foremost is the size of the drilling field. PAPA
encompasses 198,000 acres. This is over eight-and-a-half times the size of Jonah Field. The PAPA
is a long narrow swath of land that stretches from Pinedale to 70 miles north of Rock Springs. The
terrain at the PAPA is generally not as level as the terrain at Jonah Field. The PAPA does have a very similar dry climate to Jonah Field. The fields contain over 3 billion cubic feet of natural gas.

Figure 6.11: Class briefing U.S. Senator John Barrasso.
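The stated ratio implies a back-of-the-envelope figure for the size of Jonah Field (an estimate only, not a surveyed area):

    papa_acres = 198_000
    ratio = 8.5              # PAPA is "over eight-and-a-half times" the size of Jonah Field

    jonah_acres_implied = papa_acres / ratio
    print(f"Implied Jonah Field area: ~{jonah_acres_implied:,.0f} acres")  # roughly 23,000 acres, an upper bound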
Common environmental concerns in PAPA and Jonah Fields include: impacts on sage grouse,
pronghorn antelope, big game animals, top soil disturbance, air pollution, preserving view sheds, soil
chemical composition, and addressing archaeological issues.
While the two sites share many of the same environmental concerns, the PAPA has far more
habitat concerns. The PAPA is a vast area, and it is broken into nine separate management areas
that are based on land ownership and environmental concerns. Each management area has a set
number of wells that can be developed and its own set of environmental concerns. These concerns
range from preserving historic wagon trails to protecting big game winter habitat. Because of these additional concerns, different strategies will have to be applied to the two natural gas fields.
Figure 6.12: Aerial view of Jonah Field (Photo copyright Jeff Vanuga, used with permission).
The Pinedale Anticline Production Area and Jonah Field have many shared geological traits. In both fields the natural gas being recovered is contained in over-pressurized pockets in the Lance Formation. These pockets in the Lance Formation are often described as a bowl of potato chips, with each chip containing a gas-bearing formation. These pockets require hydraulic fracturing in order to recover
the gas. Hydraulic fracturing sites require a larger and heavier footprint than a conventional natural
gas well site.
A key to reducing reclamation time is to reduce the disturbance of the topsoil. When the
topsoil is torn up and stored in piles for several years, the soil loses vital microbes and nutrients. The root structure of the sagebrush is also heavily damaged in this process. Sagebrush can take 10 to 30 years to reestablish, but if the root structure is preserved, the recovery can take as little as two years. Implementing strategies that lessen the disturbance of the topsoil and sagebrush would reduce recovery time, which would benefit both the environment and the energy companies, as the regulations allow more drilling only when an equal area has been recovered.
Both Jonah Field and the PAPA are on land once inhabited by indigenous cultures and still
maintain very important archeological sites scattered throughout the fields. Damage to these sites
should be avoided, and strategies to account for these sites implemented.
Additional regulations that affect PAPA and Jonah Field include:
• In order to preserve air quality by limiting NOx emissions, all drill rigs shall use natural gas-powered engines with low NOx emissions.
• Mat roads and pads cannot be in the same location for two years or more. If a mat is in place for two years at one location, it must be removed and cannot be placed in that location for another year.
• All compression and condensate facilities must produce no more than 49 decibels of noise pollution.
• The owner of the land reserves the right to have any main road removed. If it is desired to have the main road removed, it shall be done by the energy company, which must remove all foreign soil from the road system. A two-track access road must remain for maintenance purposes throughout the service life of the well.
• If heavy equipment is required to access any site after development, a mat road must be deployed for access.
• All reclamation criteria will be according to the current BLM-mandated reclamation criteria.
• In management areas with big game winter range, there will be no development occurring from November 15 through April 30.
The Pinedale Anticline and Jonah Field have enough differences that they warrant two dif-
ferent strategies for drilling. This summary of the student work addresses the PAPA field. The
recommendations for the Jonah Field are in their full report. The PAPA is slated to have about
2,000 wells drilled in the next 20 years and will be operating through the foreseeable future, whereas
production at Jonah Field is on the decline, and suggested strategies may not be as applicable.
The PAPA has to account for additional time constraints due to winter range of big game
and breeding season of the sage grouse population. Therefore, the solution in the PAPA is to limit
the development footprint using temporary roads for field delineation. This research suggests a
temporary mat road system would improve access and reduce recovery time.
The timeline for an individual well pad at the Pinedale Anticline is constrained as follows: no
surface activity is allowed from November 15 to April 30 in management areas with big game winter
range concerns. This leaves 198 days for development to take place. It is estimated that a well will
be completed in approximately 72 days; at this pace five wells can be drilled in areas where operation
can be year-round and three wells in areas where drilling operations must shut down in the winter.
In order to complete one well pad with 32 wells, it would take 11 years in areas with winter range
concerns and seven years in other areas.
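These durations follow directly from the figures above, as the short sketch below reproduces:

    import math

    days_per_well = 72
    season_days = 198        # May 1 through November 15 working window
    wells_per_pad = 32

    wells_winter_constrained = round(season_days / days_per_well)   # about 3 wells per season
    wells_year_round = round(365 / days_per_well)                   # about 5 wells per year

    years_constrained = math.ceil(wells_per_pad / wells_winter_constrained)   # 11 years
    years_unconstrained = math.ceil(wells_per_pad / wells_year_round)         # 7 years

    print(years_constrained, years_unconstrained)   # -> 11 7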
Due to these constraints, the timeline for a pad in an area with big game winter range concerns
is as follows: on May 1, the mat road would be deployed. A temporary mobile modular frame would then be set up, with all equipment and material needed to complete three wells. Once all the
equipment is in place, the mat road would be picked up, and a two-track road would serve as access
for the workers. Once the final well is completed for the season, the mat road is redeployed and all
the equipment that is required to leave the pad site would leave at that time. The mat road is then
picked up and the site is vacated until the next year.
A main paved road would be designed for the spine of the PAPA field and should meet the
following criteria: provide an adequate base, subgrade, and pavement type with a thickness capable
of withstanding a repeated 80,000 lb truck load. A preliminary design indicates that the paved road
would reduce dust, noise, and maintenance. It would be 6 in. thick consisting of a 3 3/4 in. nominal
hot plant mix. The paved road would be at least 24 feet wide to allow for the larger turning radius
of trucks and equipment. This scenario requires mat roads of up to one mile and would therefore
need a complex mat road system. The mat roads would be at least 12 feet wide and as much as 24
feet where turns are required. Advantages of this scenario include not having to spray chemicals on a dirt/gravel road for dust suppression, along with smoother, faster, dust-free access to well and hydraulic fracturing sites. The disadvantages of this system include a higher initial cost and the requirement
of a more complex mat system and maintenance access to the site. The gravel roads that spur from
the paved road would be placed when a mat road is unable to connect a desired well pad site to the
paved road because of safety or terrain reasons. Due to topsoil concerns, the maximum deployment
for a mat road would be two years with a minimum of one year before redeployment in the same
location.
A roll-out road concept is suggested for short access roads. The roll-out road incorporates
hinged board segments linked with cables that can be rolled out into 50-foot road sections. These
sections can be rolled out to construct a temporary road and then rolled up when finished. This
concept will reduce the time required for setup and removal and enhance the ability to conform to
uneven ground surfaces (Figure 6.13).
Figure 6.13: Roll-out road segment.
The key benefit of the roll-out road concept, compared to other alternatives, is the ease of
placement and removal on site by incorporating a continuous roll rather than individual mat segments
in a grid/matrix format. Parallel 10-foot-wide lanes allow a complete 20-foot-wide two-lane road to be rolled out. The four main components in the initial design included board selection, hinge
design, segment connection design, and road dimensions.
During the class field trip to Jonah Field, representatives stated that the main problem with
currently available mat designs was the longevity of the oak. The design team found a solution with
Heartland Bio-composites, a Wyoming-based company that specializes in manufacturing natural
fiber-reinforced/polymer-based lumber products (bio-composite). Some advantages of bio-composite lumber are long-term durability, enhanced weather resistance, and recyclability. In following the “low impact/environmentally friendly” theme of the project, the design
team felt that bio-composite lumber was an ideal solution. To enhance strength and minimize the
number of segments required for the roll-out road, the team decided to base the initial design on a
2 x 8 in. board cross section.
With board selection completed, the next task was to maximize the ability of the boards
to conform to uneven ground surfaces. The plan for the roll-out road called for individual board
segments to be chained together in the longitudinal direction. Transverse hinges, centered in each
board, hold the connection together in the direction of travel while still allowing individual segments
to conform to changing terrain (Figure 6.14).
with hinge
without hinge
Figure 6.14: Lateral flexibility with and without hinge assembly.
Two conceptual hinge designs were developed for the roll-out road. The first hinge design
incorporates a flexible elastomer/rubber hinge. The elastomer/rubber hinge is affixed to slotted board segments and held together with lag screws. The second hinge concept utilizes U-bolt fasteners and is fabricated with ASTM A1018 steel endplates.
Before finalizing the roll-out road design, testing was performed on both an individual com-
ponent basis and as a scaled prototype in the field. Testing can be broken down into “sandbox” board
tests, hinge tests, and field tests.
Preliminary tests were run to assess the durability of the bio-composite boards by subjecting
them to cyclic loading, which represents a continuous series of heavy-duty vehicles driving over the road. To replicate field conditions, a “sandbox” was constructed and filled with a sand/soil mixture and placed under the hydraulic ram of an MTS machine. Oak and bio-composite test boards were continuously loaded with 4,500 lbs at a cyclic frequency of 1 Hz. Each material was tested for
one hour (3,600 cycles) and the corresponding maximum deflections were recorded as 3.78 in. for
bio-composite, and 3.28 in. for oak.
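A small sketch summarizing the test parameters and the reported deflections makes the comparison explicit:

    load_lb = 4500           # applied cyclic load
    frequency_hz = 1.0       # loading frequency
    duration_s = 3600        # one hour of loading per material

    cycles = frequency_hz * duration_s                   # 3,600 cycles per specimen
    deflection_in = {"bio-composite": 3.78, "oak": 3.28} # reported maximum deflections

    extra = (deflection_in["bio-composite"] - deflection_in["oak"]) / deflection_in["oak"]
    print(f"{cycles:.0f} cycles; bio-composite deflected {extra:.0%} more than oak")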
The second test administered in the laboratory was designed to test the tensile strength of the
reinforced rubber and U-bolt hinge connections. Two types of reinforced rubber (Capralon® and
Masticord®) provided by JVI Industries were used to assemble two separate hinges, both of which
underwent tensile loading until failure. The U-bolt hinge was tested using the same procedure. The
tensile loads at the point of failure for the Capralon®, Masticord®, and U-bolt hinge assemblies
were recorded as 2,200 lbs, 1,450 lbs, and 4,800 lbs respectively. With a predicted maximum tensile
load for the hinges of 780 lbs under field conditions, all three hinge designs performed reasonably
well (Figure 6.15).
Figure 6.15: Hinge assembly testing.
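Relative to the predicted 780 lb field load, the implied factors of safety for the three hinge assemblies can be tabulated directly from the reported failure loads:

    predicted_load_lb = 780   # maximum tensile load expected in the field

    failure_loads_lb = {"Capralon": 2200, "Masticord": 1450, "U-bolt": 4800}

    for hinge, load in failure_loads_lb.items():
        print(f"{hinge}: factor of safety ~{load / predicted_load_lb:.1f}")
    # Capralon ~2.8, Masticord ~1.9, U-bolt ~6.2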
The final testing application involved placing the prototype road section utilizing the
Capralon® elastomer hinges in the field. The prototype was taken to Mountain Cement Com-
pany in Laramie, WY, where 80,000 lb twin side-dump trucks were continually driven over the road
system on their way from the gravel/limestone quarry to the cement plant. The results of the field
testing revealed some serious problems with the elastomer hinge design. After approximately four
days and 153 truck passes in the field, two of the boards failed at the middle hinge connection. The
failure was determined to be caused by stress concentrations in the notched cuts on the board ends,
which encase the rubber hinge components. The design team feels that cold temperatures (nearing 0 °F) also contributed to the brittle fracture that resulted in failure. Even with the elastomer/rubber
hinge failure, the design received praise from the cement company as well as several drivers who
thought the concept would be excellent if a better hinge connection could be implemented. The
initial rubber hinge connection was abandoned and the U-bolt hinge design was chosen as a final
design.
For the roll-out road system to function as intended, the following guidelines need to be fol-
lowed. First, all large obstacles should be removed from the path of travel and the route brush-hogged.
While the road should be able to conform to most terrain, not following the above recommendations
will lead to premature failure of the road. For the placement and removal process of the road, a simple solution of rolling and unrolling the road around a forklift attachment was selected.
The process will allow the road to be rolled up and rolled out without ever having to drive
directly on the terrain. When rolling out the road, the forklift will drive directly over the road
section as it is unrolling. When rolling up the road, the forklift will be driven in reverse down the
road section, allowing the forklift to remain on the roll-out road at all times. The weight of the forks, beam attachment, and beam is thought to be enough to compel the road to roll up. This
process will become easier once an initial wrap is completed.
A replicate mat, similar to the wood mats currently in use but made of a bio-composite material, was also designed. The bio-composites were attractive for their longer potential lifetime compared to the oak mats used today and for their ability to be recycled. The extra cost of using bio-composites is a concern; therefore, these mats are designed to have a substantially lower life-cycle cost and a longer lifetime than the wood mats currently in use.
The complete layout of these bio-composite mats incorporated some ideas from the current
wood mats with a few layout changes. The mats are 8 x 8 foot squares so they can be used both on drilling pad sites and on the roads leading into these sites. The 8-foot width of the mat allows three mats to form a 24-foot road leading into the well sites, which is compatible with any size vehicle in Jonah Field.
6.5.5 TESTING
Various tests were completed for the bio-composite material and the prototype mat. The goal was to
determine whether the bio-composite material would be better suited to the environmental constraints and more cost-effective for the consumer. The tests were completed to assess how well the bio-composite material performed. The standard of comparison for the tests was oak, the material currently used in the field. The tests performed covered friction, fatigue, abrasion, shear, and deflection under loading, plus a field test.
Concurrent with these lab tests, a field test was performed. Four quarter-scale mat prototypes
(4 foot x 4 foot x 4 1/2 in.) were built and placed on a rock quarry road owned by Mountain Cement Company outside of Laramie, Wyoming, for field testing. Two mats replicated the 0˚/90˚/0˚ layout of the oak mats currently in use and two were built with a 0˚/45˚/0˚ configuration, to test the strength and durability of either configuration. These mats withstood an average of 2,400 tons per day for 18 days (Figure 6.16). After testing, both configurations came out looking exactly the same. There were no broken boards, the mats did not warp, and they showed very little wear. The only problem arose when the mats were being removed: the interlocking boards ended up breaking because they were frozen to the ground and improper removal techniques were used. Because all mats were removed the same way, it is felt that the 0˚/90˚/0˚ composite prototypes should be pursued over the 0˚/45˚/0˚ configuration simply because they are easier and cheaper to manufacture.
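A rough sketch, assuming each pass of an 80,000 lb truck represents about 40 tons (an assumption, since the reported figure is daily tonnage rather than a pass count), converts the field exposure into total traffic:

    tons_per_day = 2400
    days = 18
    tons_per_pass = 80_000 / 2000    # assumed load per truck pass, tons

    total_tons = tons_per_day * days            # 43,200 tons of traffic over the test
    passes = total_tons / tons_per_pass         # roughly 1,080 loaded truck passes

    print(f"{total_tons:,.0f} tons over {days} days (~{passes:,.0f} truck passes)")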
Students recommended additional tests including an Izod test at extreme temperatures, a creep
test on the prototype, repetition of the tests already performed over a wider range of temperatures, longer field testing, properties testing to compare the theoretical properties to the actual properties, a screw withdrawal test with various screws, and a shear test with various screws. They also recommended that
a demonstration installation should be performed on Jonah Field to further test the bio-composite mats in the field.

Figure 6.16: Field test.
The bio-composite mats are beneficial to use in the field over the oak mats. The bio-composite material is more environmentally friendly because it does not absorb significant amounts of moisture, it does not leach into the soil, and it can withstand varying temperatures. The bio-composite mats are also more cost effective because, even though there is a higher initial cost, the life of the bio-composite mat exceeds that of the oak mats, and the composite materials can be recycled by melting and reforming.
After the mat/roll-up road is removed, emergency field situations may still arise. Efficient ways of responding to these situations were addressed. Finding an efficient solution to an emergency situation requires weighing the severity of the emergency against the environmental effects of the response.
In the situation where a worker is hurt, the first option is to drive out to the site of the accident.
When there is a serious injury to a worker, there is always the option of driving off-road because
at that point, the risk of injury or death greatly outweighs the environmental effects of driving off
road. The second option is to use a helicopter from Flight for Life.
In fire or other such emergencies at one of the well sites, there are multiple options for handling
the situation. If large equipment, such as a crane, is needed at the site, tracked vehicles are available. These tracked vehicles are large enough to haul the necessary equipment to the site of the accident.
An advantage of these tracked vehicles is that they will produce a minimal footprint, even while
hauling other large equipment. When the fire needs to be contained very quickly, the only option is
to drive fire equipment to the site, even if this means driving off-road.
6.5.6 ASSESSMENT
The above description addresses the design, fabrication, and testing of the roll-out and mat road portion of the design. The students also designed portable drilling pad structures to reduce the footprint. Presentations at the University of Wyoming were provocative, with varying opinions being offered by the students, the oil and gas industry, and the BLM. The students defended their design well. The learning experience was enhanced by the differing opinions and positions of the reviewers.
External assessment of this project included taking first place at the Disappearing Roads
competition. The same year, Grand Teton National Park requested information on the project.
The National Park incorporated wooden mats into their park improvement project using some of
the recommendations in the student report. The mat road concept, using composite mats, has been
incorporated into projects in Texas in part because of the Disappearing Roads report.
6.5.7 ACKNOWLEDGMENTS
The research team acknowledges the materials supplied by Heartland Biocomposites, Torrington,
WY, and JVI Inc., Lincolnwood, IL. Technical support was provided by EnCana U.S.A. Inc., Pinedale,
WY; Questar Production Inc., Denver, CO; Bureau of Land Management, Pinedale, WY; Mountain
Cement, Laramie, WY; the University of Wyoming School of Energy Resources; the H.T. Person
Endowment; and the College of Engineering and Applied Science shop and staff.
6.6 BEETLE KILL AND BIOMASS ENERGY
6.6.1 OBJECTIVES
The massive beetle kill of Lodgepole Pine in the nearby National Forest creates a hazard for fire,
traffic, hikers, and power lines running through the forest. Roads and power line rights of way were to be cleared to a 75-foot setback to prevent trees from falling on the roads or the lines. This design project assessed whether the timber being removed was sufficient to serve as an alternative energy supply for the University of Wyoming Central Energy Plant and what modifications to the Energy Plant would be needed to accommodate wood fuel.
6.6.2 CLASS COMPOSITION
The class consisted of nine students: six civil engineers, two energy system engineers, and one mechanical engineer. In the second semester only the energy system and mechanical engineers
continued.
6.6.3 CLASS ORGANIZATION
The project was refined to match the available student skills. The University of Wyoming Central
Energy Plant uses coal as the main fuel source for providing the energy needed to heat the University
of Wyoming campus. The stoker-grate boiler employed at the Central Energy Plant has the ability
to burn a variety of fuels to produce the required energy. This project explores options for
biomass use at the Central Energy Plant, reduction in emissions in relation to the University of
Wyoming’s emission goals, acquisition of beetle kill wood as a biomass energy source, the facilities
required to store the wood, site design and layout for the energy plant modifications and additions,
the environmental advantages of the project, a risk assessment, and the cost analysis of implementing
such a project. The project focuses on the implementation of the cofiring solutions at the central
energy plant including the following areas of interest: verification of the fuel mixture ratio entering
the boiler, energy and economic forecasting for cofiring at the Central Energy Plant, and biomass
cofiring combustion effects.
6.6.4 STUDENT RESULTS
This project was viewed as an economic opportunity for the University of Wyoming as well as an
environmental opportunity. Currently the University of Wyoming campus is heated by a network of pipes that deliver steam to individual buildings. The steam is produced at the Central Energy Plant, where coal is burned in stoker boilers to produce the delivered steam. The University receives roughly 25,000 tons of coal annually from the Grass Creek Mine of Thermopolis, Wyoming (Table 6.1). The University of Wyoming set a goal to reduce CO2 emissions by 15% by the year 2015 and by 25% by 2025. The University of Wyoming is striving to be carbon neutral by 2050.
The Medicine Bow and Routt National Forests are particularly susceptible to beetle infestation
due to the morphology of the forest and mature trees that have had to withstand years of drought.
These forests combined consist of approximately 2.7 million acres of which over 1 million acres, or
nearly 40% of the forests, have been affected by the beetle epidemic.
Table 6.1: University of Wyoming Annual Coal Consumption at the Central Energy Plant from FY04-FY08

Fiscal Year   Amount of Coal Burned (tons)   CO2 Emissions from On-Campus Coal (tons)
2004          24,097                         57,926
2005          24,059                         57,446
2006          24,297                         58,221
2007          25,864                         61,248
2008          24,510                         58,165
Coal that is burned to heat the University of Wyoming campus contributes significantly to
the overall carbon dioxide, CO2, output of the University of Wyoming. The total carbon emissions
of the university are 147,452 tons of CO2, and on-campus coal use makes up 39% or 58,165 tons
CO2. These emissions have remained relatively constant from 1997-2009. By adding biomass as a
fuel source at the energy plant there is a potential to move the university well on its way to meeting
or even exceeding its emissions goals (Figure 6.17).
[Figure 6.17 pie chart: Purchased Electricity 46%; On-Campus Stationary 31%; Transportation & Other 15%; Biomass/CO2 Offset 8%. Total UW CO2 Emissions: 147,452 tons; Offset CO2 Emissions: 11,796 tons.]
Figure 6.17: Estimated FY11 CO2 Emissions Contributions by Source for the University of Wyoming
with Cofiring 20% Biomass Replacement.
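To put these numbers in perspective, the short Python sketch below estimates the cofiring offset from the figures in Table 6.1 and Figure 6.17. The emission factor derived from Table 6.1, the assumption that the replacement fraction applies directly to coal-derived CO2, and the treatment of biomass combustion as carbon neutral are simplifications introduced here for illustration, not calculations taken from the student report.

    # Rough estimate of the CO2 offset from cofiring biomass at the Central Energy Plant.
    # Assumptions (not from the report): the replacement fraction applies to coal-derived
    # CO2 directly, and biomass CO2 is counted as carbon neutral.

    ANNUAL_COAL_TONS = 24_510            # FY08 coal burned (Table 6.1)
    CO2_PER_TON_COAL = 58_165 / 24_510   # ~2.37 tons CO2 per ton of coal, from Table 6.1
    TOTAL_UW_CO2 = 147_452               # total campus emissions (tons CO2)

    def cofiring_offset(replacement_fraction: float) -> float:
        """CO2 offset (tons/yr) if this fraction of coal heat input is displaced by biomass."""
        coal_co2 = ANNUAL_COAL_TONS * CO2_PER_TON_COAL
        return replacement_fraction * coal_co2

    offset = cofiring_offset(0.20)
    print(f"Estimated offset: {offset:,.0f} tons CO2/yr "
          f"({offset / TOTAL_UW_CO2:.1%} of total campus emissions)")
    # Prints roughly 11,600 tons/yr, in line with the 11,796 tons shown in Figure 6.17.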
There are several environmental benefits to burning biomass in place of coal. In the case of
emissions there are three clear benefits. They include: reduced sulfur dioxide emissions, reduced
nitrogen oxides emissions, and reduced net carbon dioxide emissions. In this case, the sulfur and
nitrogen oxide emission reductions are minimal due to the low-sulfur coal and the relatively small
volume of coal being consumed. Therefore, the focus is on carbon dioxide emission reductions.
Burning biomass still produces CO2 just as with any combustion reaction; in fact burning
biomass produces almost the same amount of CO2 as burning fossil fuels. Therefore, it is not
intuitive that biomass energy production is carbon neutral. To understand the emissions tradeoff between fossil fuels (coal) and biomass (wood), it is necessary to understand the source of coal. Coal is formed by
plant matter from swamps that existed hundreds of millions of years ago that became buried; the
plant remains became coal due to long-term exposure to heat and pressure. During its life, the plant
absorbs CO2, and CO2 is stored in the plant matter as carbon. Therefore, when coal is burned it is
releasing carbon dioxide captured by photosynthesis millions of years ago and is considered “new”
carbon emissions. Biomass, on the other hand, releases CO2 that was stored over the life of the plant,
in our case several recent decades.
The trees killed by the pine beetle are no longer growing and thus no longer acting as a carbon
sink through the process of photosynthesis. Over time these dead trees will naturally release the
carbon that they have stored through the decay process or from a forest fire. If this carbon can be
released during energy production and reduce the amount of coal that is burned, any CO2 that is
offset by the burning of biomass is considered a reduction.
A variety of solutions were considered for the use of biomass at the Central Energy Plant
including: multiple offsite storage locations, four possible on-site storage areas, site modification
options, and boiler modifications for biomass-only fuel versus a biomass and coal cofiring solution.
The team’s final recommendations included the development of two offsite storage sites for long
term biomass storage and processing at Centennial, WY, and Foxpark, WY; modifications to the
north side of the current Central Energy Plant site to allow for biomass transportation and storage;
and implementation of cofiring rather than boiler modification.
The decision to implement cofiring rather than converting a boiler to burn only biomass
affected a variety of other aspects of the project and was therefore one of the first decisions required. A
cofire solution evolved after reviewing the following decision matrix (Table 6.2). Boiler modification
allows for the displacement of a larger volume of coal; however, this option has significantly more
risk. By implementing cofiring the Central Energy Plant maintains flexibility to adjust for biomass
fuel supply disruptions or price inflations, and a lower capital investment is required to begin the
process. Based on GIS studies, two remote storage sites were selected in the Centennial and Fox
Park locations. The sites total 18 acres: 14.5 acres dedicated to storage of wood and wood chips and 3.5
acres for contractor use to sort and load wood. The two-site solution provides flexibility for forest
access and reduces transportation requirements as compared to a single site solution.
One of the concerns with introducing biomass to the current operations at the Central Energy
Plant is the plant's ability to store and handle the additional fuel while still maintaining the current coal storage and handling capacity. Maintaining the current coal capacity is important because it allows the facility to retain flexible operations and to hold enough coal reserve should a disruption
in fuel supply occur during a period of high demand. The addition of a separate biomass storage and
handling system allows the plant to maintain current coal and biomass capacity for greater flexibility
to adjust fuel mixture ratios.
Given the need to mix fuels, several modifications to the Central Energy Plant site were
proposed. The requirements for the biomass storage and handling system included: the ability to
store up to 100 tons of biomass, an unloading area for biomass that did not interfere with the current
coal delivery system or other components of the site, and a tie-in with the current plant design.
Option 4 on the north side of the site was selected as it provided the least amount of disruption to
the existing operations while providing significant storage space (Figures 6.18 and 6.19).
The recommendation for the Central Energy Plant to implement cofiring with mountain pine beetle killed trees takes advantage of the environmental crisis and turns it into an opportunity.
Table 6.2: Risk Assessment Decision Matrix Comparing Boiler Modification to Cofiring at the University of Wyoming Central Energy Plant

Legend: • = minimal risk, √ = moderate risk, O = high risk

Risk                                 Option 1: Wood Boiler   Option 2: Cofiring
Wood availability or delay           O                       •
Not enough wood supply               O                       √
Must have carbon offset              •                       •
Coal price highly fluctuates         O                       •
Wood price fluctuates                O                       √
Regular boiler maintenance           √                       •
Boiler out of service                √                       •
Storage area                         √                       •
Transport double handle              O                       √
Variation in wood moisture content   •                       •
Ash removal                          √                       •
Environmental disadvantages          •                       •
Sustainability of project            √                       •
Total                                5 O, 5 √, 3 •           0 O, 3 √, 10 •
Detailed design issues include: the ability to verify the fuel mixture ratio entering the boiler; the ability to predict the energy input into the boiler to account for any necessary control modifications; and an understanding of how the addition of biomass will affect the burn chemistry within the boiler and how it relates to ash content and grate integrity as well as potential deposits on heat exchangers.
There are alternative ways to mix biomass with coal for the purpose of cofiring. At some
plants coal and biomass fuels have been injected separately, while at smaller utilities they were mixed
prior to injection. Premixing is done either on site or prior to delivery. At the Central Energy Plant,
the pilot study determined that the fuel mixing solution would store the woodchips in one of the three
existing storage silos and then deposit them onto the conveyor flow with the coal. This wood/coal
Figure 6.18: Aerial view of the central energy plant with the four proposed site modifications to accom-
modate biomass storage and delivery.
mix will then be loaded into one of the three bunkers, which feeds directly into the boiler. The
mixture ratio can be modulated by means of a guillotine valve at the base of the wood chip silo.
Moisture content is a limiting characteristic in determining whether a particular biomass is a viable fuel source. Three different samples of Lodgepole Pine wood
chips were obtained from the Medicine Bow forest and tested for water content in the UW Civil
Engineering soils lab. These samples included wood chips from a tree with no needles, a tree with
red needles (dying), and one with green needles. The results from these tests are shown in Table 6.3.
These results are significant because the water content of beetle kill trees is sufficiently low
that an expensive and cumbersome drying process would not be required. Low water content also
contributes to the recoverable heating value and greater boiler efficiency. If green trees are harvested,
they must be allowed to dry to reduce the moisture content below 20 percent.
The heat value of wood varies between species due to different chemical makeup. Most often the heating value is reported in units of energy per oven dry weight. When water is present, it contributes a significant amount of weight to the sample, but not to its heat content. Also, when water is present in a sample that is to be combusted, there is a decrease in boiler efficiency because vaporizing water uses some of the heat that is liberated in the boiler.

Figure 6.19: Rendering of biomass delivery and storage system.

Table 6.3: Water content of different samples of Lodgepole Pine

Sample          Water Content (by mass)
No Needles       8.83%
Red Needles     10.01%
Green Needles   57.91%
It was found that the water content of wood is the most significant factor in determining its heat content. In fact, the per-unit-weight heat content of the wood decreases in proportion to the amount of water that is present. Although the heat content varies from species to species, 8,500 Btu per oven-dry pound is an average for local wood fuels. The reported heat value for beetle kill trees was found by averaging the water content of the red needle and no needle trees and is approximately 7,600 Btu/lb.
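The moisture correction can be illustrated with a short Python sketch. The 8,500 Btu per oven-dry pound figure and the Table 6.3 water contents come from the text; the assumption that those water contents are reported on a dry-mass basis, and the simple linear correction, are illustrative choices rather than the students' exact method.

    # Approximate as-received heating value of beetle kill wood chips from moisture content.
    # Assumption: Table 6.3 water contents are dry-basis (mass of water / mass of dry wood),
    # and heat content per unit as-received weight scales with the dry-wood fraction.

    DRY_HEATING_VALUE = 8_500  # Btu per oven-dry pound (average for local wood fuels)

    def as_received_heating_value(dry_basis_moisture: float) -> float:
        """Heating value in Btu/lb of as-received (wet) wood."""
        wet_basis_water_fraction = dry_basis_moisture / (1.0 + dry_basis_moisture)
        return DRY_HEATING_VALUE * (1.0 - wet_basis_water_fraction)

    # Average of the "no needles" (8.83%) and "red needles" (10.01%) samples in Table 6.3.
    beetle_kill_moisture = (0.0883 + 0.1001) / 2
    print(f"{as_received_heating_value(beetle_kill_moisture):,.0f} Btu/lb")
    # Prints about 7,770 Btu/lb, in the neighborhood of the 7,600 Btu/lb reported in the text.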
Completion of a successful test burn of biomass and coal cofiring at the Central Energy Plant
required accurate determination of the mixture ratio of biomass and coal. An experimental program
was developed to verify the mixture ratio of biomass and coal entering the boiler. The verification
method needed to be simple and repeatable so that it could be performed by personnel at the Central
Energy Plant at the time of the test burn and when the project goes forward. The procedure had to
first be calibrated for the coal and wood chips used at the plant.
A “Mix Calculator” is a tool developed to process the basic information collected at the Central
Energy Plant during the test burn and provide information about the mixture ratio (by volume and by
mass), the estimated energy density of the fuel mixture, fuel cost estimates, estimated annual savings,
and estimated annual carbon dioxide (CO2) emission reductions. The embedded assumptions for
the “Mix Calculator” can be found in Table 6.4.
Table 6.4: Assumptions embedded in the "Mix Calculator"

Assumption               Value    Units      Source
Bulk Density of coal      9.55    lbs/gal    Experimental data
Bulk Density of wood      2.35    lbs/gal    Experimental data
Energy density of coal   10,500   btu/lbs    Central Energy Plant
Energy density of wood    7,600   btu/lbs    Theoretical results verified by an independent lab
Cost of coal              56.00   $/ton      Central Energy Plant
In order to calibrate the “mix calculator” the following supplies are needed: a four-gallon
sampling bucket, a scale capable of measuring weights up to 50 lbs, 300 lbs of coal from the Central
Energy Plant, 35 lbs of dry wood chips from beetle kill trees, and two measuring buckets that can
accurately measure samples as small as one half gallon. The procedure is:
1. Measure four gallons of coal, weigh the sample, and record the results.
2. Measure four gallons of wood, weigh the sample, and record the results.
3. Measure two gallons of coal and two gallons of wood; mix the components in the sampling bucket. Be sure that the mixture is homogeneous. Weigh the sample and record the results.
4. Measure three gallons of coal and one gallon of wood; mix the components in the sampling bucket. Be sure that the mixture is homogeneous. Weigh the sample and record the results.
5. Measure three and one-half gallons of coal and one-half gallon of wood; mix the components in the sampling bucket. Be sure that the mixture is homogeneous. Weigh the sample and record the results.
6. Repeat steps 1-5 two more times to gather additional data points.
7. Calculate the bulk density of coal and wood for the different mix ratios.
This actual mix validation requires a four-gallon bucket and a scale capable of weighing up
to 50 lbs. The following process should be repeated every three minutes upon changing the fuel
mixture ratio until steady state is achieved.
1. Identify the boiler that will be burning the biomass-coal mixture and locate the sampling port.
2. Measure and record tare weight of the sampling bucket.
3. Gather a four-gallon sample from the sampling port into the sampling bucket.
4. Lightly shake the sample to allow for moderate settling and refill the sample to the four-gallon
level if necessary.
5. Weigh the sample.
6. Subtract the tare weight of the sampling bucket.
7. Enter the result into the “Mix Calculator.”
8. The output of the “Mix Calculator” will provide: volumetric mixture ratio, mass mixture ratio,
the expected energy density of the fuel mixture entering the boiler as well as cost estimates for
cofiring.
In order to use the “Mix Calculator” only three inputs are needed:
1. Cost of the biomass [$/ton] (Mountain Pine Beetle killed wood).
2. Current annual coal usage [tons/year].
3. Weight of fuel sample taken at the Central Energy Plant.
The above calibration and sampling method provides the Central Energy Plant a simple,
quick, and effective method of measuring the mixing ratio and predicting the energy density of the
fuel entering the boiler.
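The report does not reproduce the internal equations of the "Mix Calculator," but a plausible version can be sketched from the Table 6.4 assumptions. The Python sketch below backs out the coal/wood ratio of a four-gallon sample from its weight; the assumption that bulk densities combine linearly by volume fraction, and all of the function names, are illustrative rather than the plant's actual spreadsheet.

    # Illustrative "Mix Calculator": estimate the coal/wood mixture ratio of a 4-gallon
    # sample from its weight, assuming bulk densities add linearly by volume fraction.
    # Constants are taken from Table 6.4; the equations themselves are an assumption.

    COAL_DENSITY = 9.55    # lbs/gal
    WOOD_DENSITY = 2.35    # lbs/gal
    COAL_ENERGY = 10_500   # Btu/lb
    WOOD_ENERGY = 7_600    # Btu/lb
    SAMPLE_VOLUME = 4.0    # gallons

    def mix_from_sample_weight(sample_weight_lbs: float) -> dict:
        """Return volumetric and mass coal fractions plus the mixture energy density."""
        mix_density = sample_weight_lbs / SAMPLE_VOLUME              # lbs/gal of the blend
        # Solve mix_density = x*COAL_DENSITY + (1 - x)*WOOD_DENSITY for coal volume fraction x.
        coal_vol_frac = (mix_density - WOOD_DENSITY) / (COAL_DENSITY - WOOD_DENSITY)
        coal_vol_frac = min(max(coal_vol_frac, 0.0), 1.0)            # clamp to a physical range
        coal_mass = coal_vol_frac * SAMPLE_VOLUME * COAL_DENSITY
        wood_mass = (1 - coal_vol_frac) * SAMPLE_VOLUME * WOOD_DENSITY
        coal_mass_frac = coal_mass / (coal_mass + wood_mass)
        energy_density = coal_mass_frac * COAL_ENERGY + (1 - coal_mass_frac) * WOOD_ENERGY
        return {"coal_volume_fraction": coal_vol_frac,
                "coal_mass_fraction": coal_mass_frac,
                "mix_energy_density_btu_per_lb": energy_density}

    # Example: a 4-gallon sample weighing 31 lbs (tare already subtracted).
    print(mix_from_sample_weight(31.0))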
A review of the literature concerning cofiring coal with biomass revealed the technical issues
related to the combustion process of these fuels. Table 6.5 is a summary of key differences of biomass
compared with coal that should be considered when implementing a cofiring project.
Coal and biomass ash differ in terms of both their chemical and physical properties, as well as the relative amounts of ash produced during combustion. Further complicating matters is that the interaction of these two fuels inside the boiler will have different effects on the predicted ash formation than if the two fuels were fired separately.
Typically, biomass has a lower ash content than coal, with a different chemical composition.
Coal ash is mainly composed of aluminum and silica, with clay and quartz, while biomass contains
a high level of calcium and alkali metals such as sodium and potassium. It is the increased quantities
of alkali metals that cause problems inside the boiler. These volatile compounds can act as fluxing
agents, which, when combined with other mineral elements, cause them to melt. One example of
this reaction is when potassium from the biomass combines with silicon from the coal and forms
low melting silicates. These low melting compounds can bind fly ash materials onto the boiler tubes
and fuel grates in a process known as slagging, the consequences of which include efficiency losses due to decreased heat transfer and air flow.

Table 6.5: Comparison of Key Differences between Biomass and Coal

Characteristic     How Biomass Compares with Coal
Carbon Content     Lower carbon content than coal
Moisture           Higher moisture content than coal
Volatile Matter    Contains a much higher percent of volatile matter and will de-volatilize independently of coal
Reactivity         Due to higher volatile content, biomass is more reactive and has a lower ignition temperature
Heating Value      Lower heating value
Ash Content        Lower ash content
Ash Composition    Higher volatile alkali metal content, especially in the form of potassium (K)
Alkali metals are much more prevalent in “rapidly growing” biomass such as wheat straw
and switchgrass, while “old-growth” biomass, such as wood from pine trees, tends to have much
lower alkali quantities. Accordingly, the use of Lodgepole Pine as a cofiring fuel will tend to cause
fewer deposition problems than other biomass materials. Interestingly, the literature notes that one
solution to the ash-related issues in biomass boilers is to cofire coal with the biomass. Some studies
further suggest that the mineral elements in coal might have a “buffering effect” on the volatilization
of alkali metals, which could further reduce the problem of fly ash deposition.
One other concern related to cofiring combustion is the chlorine (Cl) content of the biomass.
This concern is due in part to the potential for Cl in the fly ash to corrode metal components
of the boiler. Again, this is typically of greater concern for high Cl content herbaceous materials
such as switchgrass and straw. Mitigation techniques for problems related to slagging and corrosion
include increased soot blowing, the use of commercial chemical additives, ash deposition models,
and attentive monitoring of the internal boiler conditions. A thorough chemical analysis should
be performed on any type of biomass that will be combusted in order to understand the mineral
composition of the resulting fly ash.
The final design included development of the off-loading facilities, screw drives for wood chips, storage bins, and fuel mixing details. The calibration of the cofiring operation was completed. Following the conclusion of this project, the Central Energy Plant continued pilot studies on including wood fuel and dedicated one of the three coal storage bins to wood.
6.6.5 ASSESSMENT
The small class size limited the scope of the project. Energy systems engineers defined the quantity
of wood needed annually for a 20-year cofiring conversion and designed the boiler modifications.
Civil engineers conducted the timber availability study and designed the staging areas and plant site modifications. Managers of the Central Energy Plant served as judges for the final presentations.
The student design work was used by the Central Energy Plant, and the plant has continued
to convert one boiler to a cofiring operation. The project served as the culminating senior design
project for the first two students to graduate from the college’s new Energy Systems Engineering
program.
6.7 GOTHIC CATHEDRALS
6.7.1 OBJECTIVES
The course was jointly developed with Dr. Kristine Utterback of the History Department. Dr.
Utterback focused on the life, times, and society of medieval Europe at the time of the first gothic
cathedrals. The engineering effort focused on the state of engineering knowledge at the time of gothic
cathedral construction. From an engineering perspective, this project presented engineering history
to non-engineering students and introduced many engineering concepts used today.
6.7.2 CLASS COMPOSITION
The class was advertised on the university website, and a notice was sent to the local papers. The
class consisted of 15 students, none of whom were engineering majors. Four students were non-traditional students.
One criterion for the class was that only the math known in the Middle Ages was required. The lack
of math requirements was a draw for non-technical students to any class dealing with engineering.
The class composition included regular undergraduate students and a number of non-traditional
students interested in the topic.
6.7.3 STUDENT RESULTS
A special summer course explored the development of medieval gothic cathedrals, which were built
between about 1150 and 1400 in Europe. Bringing the vastly dissimilar areas of expertise, medieval
studies and civil engineering, to bear on the subject gave the students very different perspectives on
the life and times of the turn of the last millennium. The students explored the history of cathedrals
as they developed in medieval society, examining the social, ecclesiastical, artistic, economic, and
political elements. At the same time students planned and built a 1/8 scale model of a portion of
a gothic cathedral, based on many of the same techniques medieval builders used. They began by
conducting experiments on how arch and truss structures functioned. They continued by constructing
their measurement tools, particularly the square and the level, using only a straight edge, a string, and
a piece of chalk. The floor of the Kester Structural Research Laboratory became a “tracing room” as
the students laid out the arches for the cathedral walls and vaulted ceiling.
The engineering instruction began by asking the students to build an arch. Wooden blocks
were precut and the students had to assemble the arch. Many hands substituted for falsework. The
frustration was high until one group “discovered” that an abutment was needed to support the
horizontal thrust. Suddenly, construction progressed at a rapid pace (Figure 6.20).
(a) Attempted arch construction
(b) “Discovery” of abutments
Figure 6.20: Discovery of arch statics.
The next task was to fabricate tools. A line was struck on the floor of the structures lab. Using a piece
of chalk and a string, the line was bisected. The perpendicular lines were used to lay out a square.
A triangular element was laid out on the floor with the perpendicular locations marked. Addition
of a plumb weight through the perpendicular line provided a level. Division of a circle around the
original intersection provided a rough protractor (Figure 6.21).
These tools were used to lay out the model apse of the cathedral. The string and chalk exercise
continued to develop the layout of the stones that would be used to create the pointed Gothic arch.
The width of the cathedral would be laid out and circular segments drawn to intersect at the apex of
the arch. This layout procedure demonstrated how blocks could be rough cut at the quarry. Rough
cutting at the quarry reduced handling and shipping costs while fine finishing was completed at the
cathedral site. The base of one buttress was fabricated out of foam blocks. The flying buttresses were
laid out on foam boards and were representative of six of the major Gothic cathedrals. The project
culminated with a “dedication” of the cathedral. The model and descriptive placards remained in
place for approximately two months.
6.7.4 ASSESSMENT COMMENTS
The course was successful in generating interest in medieval construction. The coordination between
the History Department and Engineering was effective, and the two elements of the course meshed well. Leading the students to “discover” how arches work and how tools could be made accurately with nothing more than a straightedge, string, and chalk was revealing to many. The project attracted a good deal of interest due to its location on the main campus quadrangle and remained up for the entire summer tour and freshman orientation period.

Figure 6.21: Design and fabrication of a working level.

Figure 6.22: Cathedral dedication.
CHAPTER 7
Getting Started
The previous design challenges require a substantial time commitment. Motivating the students to
think about design problems often requires a small effort to initiate their thinking. The following are
physical and thought problems to lead into thinking about design and design issues. The Column
Design Challenge can be used for all ages. The maximum load recorded, by a freshman engineering student, exceeded 12,500 pounds using these rules with no limit on the amount of glue. The
Column Design Challenge is coordinated with the state science fair and is an opportunity for the
students throughout the state to be introduced to the College of Engineering and Applied Science.
The satellite and rubber tire problems introduce students to problems requiring very large and very
small numbers. They are ideal to instigate discussion on a problem with no conventional solution
and to assign follow-up discussions or papers to explore the consequences of their findings. It is not
unusual to have solutions to these two problems varying by many orders of magnitude.
7.1 H. T. PERSON DESIGN CHALLENGE FOR PRIMARY AND
SECONDARY SCHOOLS IN WYOMING
The Challenge: Using a single sheet of copier paper and up to 4
oz. of white (Elmer’s) glue, construct a column that has a minimum
height of four inches. The column carrying the largest load will be
declared the winner.
When: Testing will be conducted during the State Science Fair.
[Students can submit entries in person, by their teachers, or by mail
with their name, address, and school. Students need not be present
to win.]
Testing: Column load tests will be conducted in a structural testing
machine or a frame similar to the photo to the right. Students add
one brick at a time up to 15 bricks. Any column carrying 10 bricks will be unloaded and retested in
a structural testing machine.
7.1.1 ENGINEERING BACKGROUND AND HISTORY
Leonhard Euler (15 April 1707 – 18 September 1783) was a pioneering Swiss mathematician
and physicist. He made important discoveries in fields as diverse as infinitesimal calculus and graph
theory. He also introduced much of the modern mathematical terminology and notation, particularly
for mathematical analysis, such as the notion of a mathematical function. He is also renowned for
his work in mechanics, fluid dynamics, optics, and astronomy. He is credited with the Euler buckling
equation for the strength of columns (adapted from Wikipedia).
7.1.2 WHAT AN EQUATION TELLS YOU ABOUT DESIGN
The Euler buckling equation, Pcr = π²EI/L², indicates the load a column can carry before buckling. Pcr is the Euler buckling load and is proportional to the modulus of elasticity, E, and to the placement of the material away from the center of the column (the moment of inertia, I), and it is inversely proportional to the square of the length, L.
The modulus of elasticity is like a spring constant. Since everyone is using copier paper, the modulus
is pretty much the same for everyone. So the challenge is to limit the length to just 4 inches and
spread the paper out to improve performance. If the paper is spread too far, then it will buckle locally
and not be as strong as a more compact design. Hint: Paper is made by a rolling process, so one
direction of the paper has a higher modulus of elasticity than the other direction.
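For readers who want to experiment with the equation, a minimal Python sketch is shown below. The relation Pcr = π²EI/L² is the Euler equation quoted above; the paper modulus, tube radius, and wall thickness are assumed values chosen only to illustrate the calculation, and real paper columns usually fail by local buckling at much lower loads.

    # Euler buckling estimate for a paper column rolled into a thin-walled tube.
    # The equation is from the text; the paper modulus, wall thickness, and tube
    # radius below are illustrative assumptions only.
    import math

    E_PAPER = 3.0e9      # Pa, assumed modulus of elasticity for copier paper (machine direction)
    THICKNESS = 0.0004   # m, roughly four thicknesses of paper in the rolled tube wall (assumed)
    RADIUS = 0.02        # m, assumed mean radius of the rolled tube
    LENGTH = 0.1016      # m, the required 4-inch column height

    def euler_buckling_load(E: float, I: float, L: float) -> float:
        """Critical axial load (newtons) for a pinned-pinned column."""
        return math.pi**2 * E * I / L**2

    # Moment of inertia of a thin-walled circular tube: I ~ pi * r^3 * t
    I_tube = math.pi * RADIUS**3 * THICKNESS
    P_cr = euler_buckling_load(E_PAPER, I_tube, LENGTH)
    print(f"Pcr ~ {P_cr:,.0f} N ({P_cr / 4.448:,.0f} lbf)")
    # Local buckling of the thin paper wall, not Euler buckling, usually governs in practice,
    # which is why spreading the paper too far weakens the column.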
7.2 METEOR COLLISION PROBLEM
This is an in-class problem that may be extended to homework with the solution due in the next
class period. Students work in three- or four-person teams to develop an answer to the following
problem.
Each year the earth passes through the Perseid meteor shower. At the peak of the shower, meteorites hit the earth’s atmosphere at the rate of 100 per hour. The shower lasts approximately
four days.
You are working on a new geostationary satellite design team. The satellite is four feet in
diameter and located 22,500 miles above the earth. Your supervisor is concerned that one of
these pieces of space debris may damage the $2 billion satellite.
You are asked to evaluate two questions. First, what is the approximate probability of the
satellite being hit by a particle from the Perseid meteor group? Second, is this a problem? For
this problem, probability is not a formal calculation but an assessment, like one chance in a
million, that there will be a hit.
You will orally present your answer on the probability of being hit in class. Your presentation
will include the approximate probability and your assessment of whether this is a problem.
You should include a discussion of your logic, assumptions, and any additional information
needed to complete your assessment.
7.3 TIRE PARTICLE PROBLEM
This is an in-class problem that leads to a short research follow-up activity. Students work in three-
or four-person teams to develop an answer. The first part is to be answered in class the day the
problem is given. The second part is given in the next class period to allow the students to look up
the impacts.
Firestone recently recalled over six million tires. The obvious ramifications of defective tires
are the loss of control of the vehicle followed by the debris generated when the tread flies off.
In considering these impacts, the Environmental Protection Agency began to wonder about
the health hazard of the particulate matter generated by normal tire wear. If you lived close to
a freeway, would this material cause respiratory or other problems?
Your team is asked to investigate this problem. The first question is: “What is the size of the
particles generated by normal tire wear?”
As a follow-up question, your team is asked to recommend whether the EPA should issue a
warning about tire particulate matter or mandate new criteria for tire design. How does the
size compare to other particles? How might these particles compare to known problems such
as asbestos?
Your team will orally present your answer on the size of the particles in class today with
a discussion of your logic and your assumptions. For the follow-up question, identify other
information that you may need to complete your recommendations. A one-page summary of
your findings and a list of your team members will be handed in at the beginning of the next
class period.
7.4 WHAT HAPPENED?
7.4.1 WINDMILL COLLAPSE
An interesting discussion problem provides the class with the picture below and asks students to work in groups of two to determine what happened. After about five minutes, ask each group to give one possible cause of the collapse and write it on the board. Continue around the class with each
group adding one new possibility. When no further ideas come forward, ask each group to compare
the total number of possible reasons they had developed with the total number of possibilities on
the board. This leads to a discussion of why teamwork is better than individual effort.
7.4.2 BRIDGE ACCIDENT
Using the four photographs, determine the sequence of the failure when the excavator hit the I-70
bridge.
7.4.3 DEVELOPMENT OF STRESS AND STRAIN CURVES
Demonstrations in Mechanics of Materials classes typically test a steel, aluminum, or brass specimen
to develop a stress-strain relationship. While instructive, the test requires specialty equipment and
Figure 7.1: Wind Turbine collapse (Photo Courtesy of Jason Shogren, University of Wyoming).
“black boxes” that isolate the student from the mechanics. The following experiment requires some
weights and a ruler to accomplish similar results. The experiment is set up in a room with the weights,
platforms, and measuring devices available. There is no reason the experiment cannot be conducted
by hanging the specimen from the ceiling. The specimen can extend several times its initial length,
so a short specimen is preferred.
Objective: In this experiment, you will determine the stress-strain characteristics of an un-
defined material and calculate the initial modulus of elasticity.
Background: Read the entire memo before beginning.
[All the necessary equipment for this experiment is available on the first floor of the engineering
building. The specimen materials, mass units, measuring tapes, and safety glasses are in a parts box
on the shelf on the exterior wall. The test can be adapted to any lab that has weights and a tape
measure.]
Figure 7.2: I-70 Bridge collision, panels (a)-(d).
You may work individually or in groups of no more than two. If you work in a group, only a
single report is required but both names must be on it.
Safety: Wear safety glasses (included in parts box) when conducting this experiment. Keep
feet out from under weights.
Prediction:
Pick one of the spools to use for your experiment. On the graph below, qualitatively predict what
the stress-strain curve for the material will look like.
Specimen Color: ______________
Diameter: ___________________
[Blank axes for the predicted curve: stress on the vertical axis, strain on the horizontal axis]
Experiment:
1. Select a specimen (smaller diameters provide a greater range of response for the mass units
provided). Record the color and diameter of the specimen labeled on the spool.
2. Cut off about 2.5 feet of the specimen from the spool. Tie an overhand loop around one end
as shown below.
(a) Grab one end and pull it against itself so that you form a loop. Take the loop you just
formed and make an overhand loop back through, as you would if you were tying a knot.
Keep the formed loop large enough to fit over the post (Figure 7.3).
Figure 7.3: Specimen termination.
(b) Fix this loop securely around the post attached to the base, making sure that the strand
is just underneath the washer.
(c) Tie a small loop in the opposite end the same way. This end drapes over the pulley.
Place the mass hanger through the loop you created. A picture of the set-up is shown in
Figure 7.4.
3. Place one piece of tape around the material about an inch from the post. Place another piece
of tape about five inches from the first. Mark a line on each piece of tape, or on the specimen,
to provide a consistent measuring location. Measure and record this initial distance between
marks with only the mass hanger in place.
Figure 7.4: Test materials and test setup.
4. Add a mass to the mass hanger and measure the distance between the two pieces of tape.
Record both the cumulative mass and distance.
5. Repeat step four, increasing the total mass until the hanger reaches the ground, you run out
of weights, or the specimen breaks.
6. Repeat the experiment with a different sequence of mass placement and record the data.
[Blank data recording table: paired Mass and Distance columns for each loading sequence, rows numbered 1-20]
7. Calculate the stresses for each mass applied to the specimen and the corresponding strain.
(a) Enter your data into Excel and perform the necessary calculations to obtain the stresses
and corresponding strains.
(b) Create stress-strain curves for the material using the Excel plot function. Stress should
be plotted on the vertical axis and strain on the horizontal axis. Use a scatter plot function
with straight lines between data points unless you trust Microsoft to properly interpret
the curves between points.
(c) Use the stress-strain plot to determine the initial modulus of elasticity, E.
(Note: σ = P/A, ε = ΔL/L, E = σ/ε, and mass is not a force)
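For instructors who prefer a scripted calculation to the Excel workflow, a minimal Python sketch of the same computation is shown below; the specimen diameter, gauge length, and mass/distance readings are placeholder values, not data from the experiment.

    # Stress-strain calculation from recorded mass/distance data.
    # All numbers below are placeholder readings, not real experimental results.
    import math

    G = 9.81              # m/s^2, converts mass to force (mass is not a force)
    DIAMETER = 0.002      # m, specimen diameter read from the spool label (assumed)
    GAUGE_LENGTH = 0.127  # m, initial distance between the tape marks (~5 in)
    AREA = math.pi * DIAMETER**2 / 4

    masses_kg = [0.0, 0.1, 0.2, 0.3, 0.4]               # cumulative mass on the hanger
    distances_m = [0.127, 0.155, 0.190, 0.231, 0.278]   # distance between tape marks

    stresses = [m * G / AREA for m in masses_kg]                         # sigma = P / A
    strains = [(d - GAUGE_LENGTH) / GAUGE_LENGTH for d in distances_m]   # eps = dL / L

    # Initial modulus of elasticity from the first loading increment: E = sigma / eps
    E_initial = (stresses[1] - stresses[0]) / (strains[1] - strains[0])
    for s, e in zip(stresses, strains):
        print(f"stress = {s/1e6:6.3f} MPa, strain = {e:5.3f}")
    print(f"Initial modulus of elasticity E ~ {E_initial/1e6:.2f} MPa")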
Furthering the Experiment:
Explain how your prediction compares to the actual stress-strain curve you created. How long did
it take to complete the experiment and is this a concern? Does the sequence of loading make a
difference? How would you further this experiment to collect more information about the unknown
properties of this material? Format your response as a typed memo (no more than two pages plus
the plot) to your professor and attach your worksheet along with your final Excel sheet and plot,
which may be embedded in the memo. This should be a professional quality report.
7.5 NOTES FOR CHAPTER 7
These notes are provided in a separate section so the problem statements may be copied directly.
They are not “solutions” but are provided based on the classroom use of the problems.
7.5.1 PAPER COLUMN EXPERIMENT
The paper column experiment typically ends up with loads less than 300 pounds. A simple test
frame with about 15 bricks is both visually effective and exciting when the column crushes. We use a
small hand-operated universal testing machine for field testing. The hint about the orientation of the
paper is to have students think about the consequences of paper alignment, but alignment has little
effect on the final load carried. This project has worked well in the ES 1000 class as homework with
the students bringing the completed columns to class for testing. The explanation of the moment of
inertia I is necessarily vague as it is meant to be used by students without a mechanics of materials
course. Euler originally defined EI as a combination of material properties and geometry. Young’s
modulus, E, was not defined until after Euler’s death.
7.5.2 METEOR COLLISION PROBLEM
The meteor collision solution requires a change in perspective. Since the meteor showers occur at
the same time each year, it is not the meteors that are moving, rather the earth moving through the
debris field. One solution to this problem is taking the ratio of the earth’s diameter to the diameter
of the satellite and multiplying by the number of hits and the duration of the exposure. Because the satellite is in the earth’s shadow nearly half the time, a correction can be made.
This problem is an education in itself when the student estimates are written on the board, followed
by each group explaining their methodology. Expect several orders of magnitude difference in the
student solutions. Ask which solution is “right.” Sometimes the following hints are provided with
the problem.
Hints:
What moves, the meteors or the satellite?
What is the density of the meteor particles and what is the volume of the satellite orbit?
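One back-of-the-envelope interpretation can be sketched in a few lines of Python. The cross-sectional area ratio approach, the earth diameter, and the shadow correction below are assumptions made here for illustration; other defensible estimates will differ by orders of magnitude, which is the point of the exercise.

    # Back-of-the-envelope estimate of the chance a geostationary satellite is hit
    # during the meteor shower. The area-ratio approach and the numbers below are
    # one of many defensible interpretations, not an authoritative solution.

    EARTH_DIAMETER_MI = 7_918       # mean diameter of the earth (assumed value)
    SAT_DIAMETER_FT = 4.0
    SAT_DIAMETER_MI = SAT_DIAMETER_FT / 5_280

    HITS_PER_HOUR = 100             # meteors reaching the earth's atmosphere at the peak
    DURATION_HOURS = 4 * 24         # the shower lasts about four days
    SHADOW_FACTOR = 0.5             # satellite shielded by the earth roughly half the time (assumed)

    # Probability that any one earth-bound meteor would instead pass through the
    # satellite's cross-section, treated as an area ratio.
    area_ratio = (SAT_DIAMETER_MI / EARTH_DIAMETER_MI) ** 2
    expected_hits = HITS_PER_HOUR * DURATION_HOURS * area_ratio * SHADOW_FACTOR
    print(f"Area ratio: {area_ratio:.2e}")
    print(f"Expected hits over the shower: {expected_hits:.2e} "
          f"(about one chance in {1/expected_hits:,.0f})")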
7.5.3 TIRE PARTICLE PROBLEM
A solution to this problem requires a large number of assumptions, including the original and final thickness of the tread, the number of miles of tire life, and the outside diameter of the tire. From these
assumptions an estimated thickness lost in each rotation can be calculated. Assuming each particle
is a cube, the size of each piece of dust can be determined. This problem is a good introduction to
nano-particles.
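A short Python sketch of one such estimate is shown below; the tread depths, tire life, and tire diameter are illustrative assumptions, not measured values.

    # Order-of-magnitude estimate of the size of tire wear particles.
    # All input values are illustrative assumptions, not measured data.
    import math

    TREAD_WEAR_IN = 8 / 32 - 2 / 32   # tread worn away over the tire's life (inches, assumed)
    TIRE_LIFE_MILES = 50_000          # assumed tread life
    TIRE_DIAMETER_IN = 26             # assumed outside diameter

    circumference_miles = math.pi * TIRE_DIAMETER_IN / (12 * 5_280)
    rotations = TIRE_LIFE_MILES / circumference_miles
    thickness_per_rotation_in = TREAD_WEAR_IN / rotations

    # Treat each particle as a cube with sides equal to the thickness lost per rotation.
    particle_size_nm = thickness_per_rotation_in * 25.4e6   # inches -> nanometers
    print(f"Rotations over tire life: {rotations:.2e}")
    print(f"Estimated particle size: {particle_size_nm:.0f} nm")
    # Roughly 100 nm under these assumptions, which is why the problem leads naturally
    # into a discussion of nano-particles.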
The follow-up research requires students to look into the effects of nano-particles. The EPA
website has guides for particulate matter size. Students should consider both the size of the particle
and the chemical reactivity.
7.5.4 WHAT HAPPENED
Wind Turbine Collapse
The best estimate of the reason for the collapse is that the wind turbine over-sped and one of the
blades fractured and hit the tower. No detailed failure mechanism was given. Be prepared for some
pretty unusual possible causes. Possible causes ranging from mosquito swarm strikes to UFOs have
been suggested.
Bridge Accident
One solution: From the lower left photo, the boom hit the bridge girder just below the parapet wall.
Note that the parapet wall is intact. The force of the impact tore the trailer from the tractor, top left,
forcing the cab of the equipment to rotate upward until the cab hit the bridge soffit, upper right.
The impact bent the boom and extended the hydraulic actuators, top right. When the cab settled,
the boom extended well above the deck, lower right.
7.5.5 STRESS-STRAIN CURVES
This experiment is intended to remove all “black boxes” from the determination of a stress-strain
curve. The thread in the experiment is a polyurethane string used for kids’ snap-bracelets. It is available at Hobby Lobby, Michael’s, or other craft stores. Larger quantities can be found online. In
several years of running this experiment, no student has broken the specimen because the weights
hit the floor before the strand ruptures. Even at that, no student has gone back and shortened
the string to determine the breaking capacity.
The snap-bracelet material is interesting because of its extremely high strain to failure. The
same experiment can be done with a simple elastic band.
APPENDIX A
H.T. Person Lectures
Year | Speaker | Title | Topic
1998 | Daniel P. Welsh | Project Manager for Main Street Parade at Walt Disney World | Engineering at Walt Disney World – Making the Magic Happen
1999 | Dr. Edward Anderson | Professor, Texas Tech University | Learning in the Digital Age
2000 | Dr. Carl Mitchem | Professor, University of Florida | Technology and Ethics: From Expertise to Public Participation
2001 | J. Howard Van Boerum | Principal, Van Boerum and Frank Associates | The Design and Construction of the Utah Olympic Park
2002 | Lawrence C. Novak | Senior Engineer, Skidmore, Owings and Merrill | Perspective from Ground Zero
2003 | R. Paul Humberson | Western Area Power Administration | Anatomy of a Blackout
2004 | Michael K. Zyskowski | Program Manager, Microsoft Flight Simulator project | Microsoft Flight Simulator: The Engineering behind the Game
2005 | Joseph A. Anselmi | Mechanical Engineer, Aerospace Corporation | GPS: How does it Work?
2006 | Neil Kelly | National Renewable Energy Laboratory (NREL) in Golden, Colorado | Engineering Challenges for Future Wind Energy Development
2007 | Patrick Tyrell | State Engineer, Cheyenne, Wyoming | Defending our Borders – Protecting Wyoming’s Water
2008 | Dr. Richard K. Miller | President, Olin College, Needham, Mass. | Education of Engineering Leaders for the 21st Century: Lessons Learned for Re-Invention at Olin College
2009 | Lawrence C. Novak | Portland Cement Association | Philosophy of Engineering for the Burj Dubai, the World’s Tallest Building
2010 | Joseph Leimkuhler | Offshore Well Delivery Manager, Shell Exploration and Production – Americas | Deepwater Well Design and Operations – Going Forward Post Moratorium
2011 | Dr. Daniel Pack | Professor, United States Air Force Academy | Developing Cooperative, Autonomous, and Heterogeneous Unmanned Aerial Vehicles
2012 | Governor Mike Sullivan | Former Governor of Wyoming and Ambassador to Ireland | Observations and Reflections on the Benefits and Importance of an Engineering Education
APPENDIX B
Sample Course Syllabus and Policy
ES 1000-1 Introduction to Engineering Study
Course Syllabus and Policy – Fall 2012

Professor: Charles W. Dolan
Office: En 2082
e-mail: [email protected]
Office Hours: M-W 2:30-4:00 PM

Peer Assistant: Jane Doe
e-mail: [email protected]

Catalog Description of Course:
ES1000. Orientation to Engineering Study. 1. [F1<>I, L] Skills and professional
development related to engineering. Involves problem solving, critical thinking and
ethics, as well as activities to help transition to university environment. Required of
all freshmen entering engineering curricula. Students with credit in UNST 1000 may
not receive credit for this course. (Normally offered fall semester)
Course Objectives:
- To acquaint students with resources available at the University for their success
- To get to know some of the faculty and students
- To help students understand the fields of engineering and the engineering thought process
- Fulfills USP Requirements for I, L and a portion of the O components
- To help students make a successful transition to the University and the College
Weekly Schedule (Week, Topic-WED., Topic-FRI., Assignments)

Weeks 1-4:
- ES-1000 Information; Introduction/Project; Professional courtesy and conduct
- International Engineering Information session
- Complete introductions and critique of presentations
- Information Literacy, Research Topics (meet in Library 216)
- Introduce classmate using Biographical Sketch; first thoughts on research topic
- MON – Biographical sketch via email to Dolan and Peer Assistant
- Design Project: Teams and Design Process; "Deep Dive" video
- Provide feedback on oral presentations
- Critical Design Issues and schedule; design teams assigned over weekend
- Project Brainstorming; Library Research; begin Advisor Interviews
- Shop Safety Video
- Critical Issue 1; Critical Issue 2; Critical Issue 3
- Design Project: Methodology of Design; Design Day/Shop time
- FRI – Research topic due
- Sat: Challenge trial run, 9 AM-1 PM, Indoor Practice Facility, 22nd and Willett Drive

Weeks 5-7:
- Student Success; Brainstorming fixes
- Academic Policies; Professional licensure; Advising
- Critical fix 1; Critical fix 2
- Preparing and Giving Oral Presentations
- Ethics for Engineers
- Research Project Oral Presentations; Shop time
- Advisor Interview
- FRI – complete TIP tutorial by midnight, http://tip.uwyo.edu
- MON – Research Report due by midnight; portfolio due in class Wednesday
- Friday: College Open House (counts as an activity)
- Shop time – complete ES1000

Weeks 8-9:
- Research Project Oral Presentations
- Career Services, Knight Hall 222
- Honors Advising; Class evaluations
- Sat: ES-1000 Flight Competition, 9 AM-1 PM, Indoor Practice Facility, 22nd and Willett Drive
INSTRUCTIONAL MATERIALS
Additional Reference material on the web:
http://wwweng.uwyo.edu/classes/es1000ref/home
This is the “official” home page of the ES1000 class. It will have much of the information we cover.
The “On-Line Textbook”—Changes, Challenges & Choices, Andrea Reeve & Diane LeBlanc (eds.)—is available at http://www.uwyo.edu/bettergrades/ in the right-hand column. This webpage, though “old,” has a lot of great information with just a few broken links.
Library Assignment: http://tip.uwyo.edu (Note: there is no www in the URL)
Studying Engineering, 3rd edition, Landis, R.B., Discovery Press, L.A., 2007. If you would
like a good, all around introduction to the first year and more in engineering, Landis is a good start,
cost is about $25.
CLASS REQUIREMENTS
- Prepare a Biographical sketch – FRI./Week 1 (Required for O component)
- Introduce Research topic – FRI./Week 1 (Required for O component)
- Achieve a 70 percent or better on the TIP exam by the end of week 5
  Note: Failure to complete the TIP exam with a 70% or better will result in an F for a term grade. (Required for L component.)
- Complete research topic and assessment papers by week 7
  Note: Failure to submit a research paper, an assessment paper, and a portfolio will result in an F for a term grade. (Required for L component.)
- Participate in a Team in the Design Challenge
- Participate in Final Oral Presentation with group.
  Note: Failure to participate in Final Oral Presentation will result in an F for a term grade. (Required for O component.)
- Complete six outside activities and report by email
  Required List - four activities
    1. Advisor interview (by Sept. 18)
    2. Two professional society meetings (one in Sept, one in Oct)
    3. Senior Design Presentations (in December)
  Elective List - two activities
    1. One cultural activity (theater, symphony concert, lecture [not a rock concert]), one sports event (football, soccer, swimming, wrestling, etc.; must attend the entire event), career fair, departmental presentation, resource fair, or one club activity (in addition to the required meetings, like Habitat, field trip, etc.). Class meetings and Friday Night Fever don't count.
CLASS POLICY
1. Assignments are due when specified, Late = 0.
2. Class attendance is MANDATORY. University Regulation 6-713 explains how authorized
class excuses may be obtained. Missing more than three classes will lower grade by one letter.
GRADING CRITERIA
The course grade will consist of the following points:

Class attendance and Participation (15 @ 5 pts ea.)                75
Activities (6 @ 10 pts) (Sr. Des., Advisor, 2 Soc., 2 Elect.)      60
Biographical sketch                                                10
Design Challenge                                                   60
Library (TIP)                                                      10
Research Outline                                                    5
Research Report                                                    40
Research Source Assessment                                         20
Research portfolio                                                 10
Research oral outline                                               5
Research oral presentation                                         25
Total Possible Points                                             320

Grades: A = 90-100% (e.g., >= 288 points), B = 80-89%, C = 70-79%, D = 60-69%
ASSIGNMENT FORMAT FOR REPORTING ACTIVITIES
1. Send a report of each activity you participated in by email to both the PA and to the instructor.
The “Send To:” line should read
“[email protected]; [email protected]”
2. The “Subject:” line must start with:
ES1000-XX and then indicate the purpose of the email, i.e., “ES1000-11 - Advisor Interview”
3. The report must be sent within three days of the event (i.e., Friday event, send by Monday).
No late reports accepted. Only the Senior Design report will be accepted after Oct 19.
4. The report must be at least one coherent paragraph, using correct spelling and grammar (one
or two lines do not make a coherent paragraph).
5. Content should contain: What you attended, who, when and where the event was held, and
what you found to be of interest. Not knowing the name of the speaker or organization is not
acceptable.
Examples of two society meeting reports which would also be typical of an elective report:
I attended the American Society of Civil Engineers meeting on Wednesday, February 4. The speaker was
James Johnson, a civil engineer from Laramie. The topic was the development of a neighborhood and the
various aspects that go into land development. He provided a detailed presentation on the project, addressing
such issues as the layout of the neighborhood, plumbing, landscaping, and various legislative aspects of civil
engineering.
On Wednesday, February 25th at 5:00 p.m. in engineering building room number 3044, I attended an ITE
(Institute of Transportation Engineers) meeting.The speaker was Tammy Reed from Trihedral Corporation
which is an environmental and engineering firm located here in Laramie. Ms. Reed talked about a street
project for the City of Laramie that will include a new sewer system and reconstruction of the street with
medians. The most interesting part about this meeting was that they provided food, which was obviously a
tactic to trigger people to come.
Senior Design and advisor reports should be appropriately longer.
• “Disability Statement: If you have a physical, learning, or psychological disability and require
accommodations, please let the instructor know as soon as possible. You must register with, and
provide documentation of your disability to University Disability Support Services (UDSS) in
SEO, room 330 Knight Hall.” (University Statement) Appropriate protocols will be developed
after that time.
• “Academic Honesty: The University of Wyoming is built upon a strong foundation of integrity,
respect and trust. All members of the university community have a responsibility to be honest
and the right to expect honesty from others. Any form of academic dishonesty is unacceptable
to our community and will not be tolerated” [from the UW General Bulletin]. Teachers and
students should report suspected violations of standards of academic honesty to the instructor,
department head, or dean. Other University regulations can be found at: http://www.uwyo.edu/generalcounsel/info.asp?p=3051 (University Statement)
• Academic Dishonesty is any use of work other than your own or use of the same work in two classes. Any infraction of this nature will be pursued to the full extent allowed by University Regulation 6-802 or its successors. This does not disallow working together in groups. It does disallow copying homework within a group that you did not do yourself. Example: Three people
cannot do one problem each and share answers. Three people can work together to solve the
problems and report them separately. It would be a good idea to read this regulation now.
• Team projects should be worked on jointly. Members of the teams will be required to report
on how the team and its members functioned together.
RESEARCH PAPER
Find an application associated with the Design Challenge. Assess the state-of-the-art of that appli-
cation and why the Challenge is relevant. For example, in robotics, examine conditions that include working in hazardous environments such as undersea, space, or toxic or radioactive sites. You may address this
problem from any angle of engineering or computer science including application, design, fabrica-
tion, or artificial intelligence.
APPENDIX C
Information Literacy Paper
INFORMATION LITERACY (L) RESEARCH AND ASSESSMENT PAPERS
Embedded in ES 1000
Fall 2012
DEFINITION: Information Literacy is the ability to “recognize when information is needed and
to locate, evaluate, and use effectively the needed information.” (American Library Association and
University Studies Literacy Document)
OBJECTIVE: The objectives of the information literacy component in ES 1000 are several: to learn
how to pose a research question, conduct a search of the literature that will assist you in answering your
question, present a written evaluation of your sources’ validity or usefulness, and prepare a written
report on your findings.
RESEARCH QUESTION REQUIREMENTS:
You will pass the library TIP (Tutorial) quiz with a 70 or better.
You will select a research question from the list of questions provided in class. A research question
asks for validation of an idea or why or how something works or behaves in a particular manner. The
objective is a statement of what is to be examined or demonstrated. For example: a research question
may ask, “What is the longest span over which a bridge can be constructed?” Answering this requires an
understanding of engineering principles, loads, and materials. An objective may be that you will limit
your research to suspension bridges as these have been the longest types of construction recorded.
You may further refine your objective, such as, “I plan to examine the main cables in a suspension
bridge as they are the critical load carrying elements.” A question such as “How long is the longest
bridge?” is simple fact finding and not a valid research question. By the same token, a report on
the construction of the Great Wall of China does not ask a probing question, but rather asks for
historical information.
The research question and objective must be submitted to the ES instructor as indicated on the
syllabus by the end of Week 2. The Instructor will assist in assuring that the research question
and/or objective is not too broad, too complex, or so esoteric that references may be difficult to find.
Each student must prepare a research paper independently.
Your research paper will be based on four and only four sources: one each from a professional journal,
a popular literature source, a web-based source, and one additional from any of these sources. You
must select three sources that best support your position and one source that refutes it. Everything
in your paper must be referenced to these four sources. Pick them carefully.
You will prepare a written paper that answers your research question. The paper will be at least three
full pages long (not two and one half) and contain:
A statement of the approved research question and objective.
A discussion of your findings. This is the body of the paper and answers your research question
within the limits set by your objectives.
Proper identification and citation of sources and quotations used to support your discussion.
Conclusions regarding the outcome of your research.
A reference list cited in the same format as the primary technical journal used for your paper. This
reference list must be in the paper and properly cited in the text of your work.
Academic Integrity: It’s your responsibility to be familiar with UW’s policies concerning academic
dishonesty, both its definitions and its negative consequences. Details can be found under UW Reg
6-802. For more information, go here:
http://www.uwyo.edu/generalcounsel/_files/docs/uw-reg-6-802.pdf
RESEARCH ASSESSMENT PAPER
You will prepare a second report at least two full pages long that critically assesses the material you
used to select references to prepare your research question report. This critique will contain:
A summary table of the number of sources found, the number read, and the number used for your paper (1 or 2). The table will have three categories: journal papers, popular press, and Internet sources. You must identify at least three references read in each category.
Literature Search Summary | Journal Articles | Popular Literature | Website
Number of Sources Found   |                  |                    |
Number of Sources Read    |                  |                    |
Number of Sources Used    |                  |                    |
The references for the source material used. Note that the sources selected for the research paper must appear in both this assessment and the research paper itself.
An abstract, in your own words, of each source article read for the research report (four). The
abstract is approximately one paragraph and restates the most important features of the article for
your use.
A critical assessment of the content of each source you read, including a comparison of
common features and critical differences. The assessment must include why the references finally used in your paper were selected.
PAPER FORMAT
Your papers will be typed, double spaced with 1” margins all around. Type font will be 12 point,
Times Roman. Your name and section number will appear on the top right of the first page. The entire project will be submitted in a paper folder with two pockets, about 9”x12”.
Grading: Each paper will receive a grade and comments. The intent of grading on this exercise
is to assess your understanding of the research process. The comments provide you with an indication
of how the writing meets expectations in college level courses.
An A paper contains all of the required elements, the proper references, correct citation format,
a clear response to the question, conclusions, and findings, and a critical assessment of the resources.
A C paper typically shows a lack of focus on the research question and has a rambling response
to the question, lacks conclusions, and has inadequate or improper references. The assessment is
equally lacking in focus and comparison among articles.
An F Paper is indicative of a student who did not bother to read the instructions, has a poorly
formed research question, has not answered the question, and provided no logical references to
support the answer to the question. The assessment totally misses the objectives.
RESEARCH PORTFOLIO—OPTIONAL BY SECTION
A research portfolio contains copies of the materials used to develop your papers. Several faculty
members require that a research portfolio be included with the papers. Check your section syllabus.
The portfolio need not be organized in any specific manner but it should include:
Copies of the four articles cited in your paper. (Copy no more than four pages. The first page
should have the title and author of the article. If the journal or book name is not on that sheet, copy
a fifth page with the cover of the journal or the copyright page of the book.)
Notes developed during your research sessions.
Specific notes and references to sections in your paper.
DEFINITIONS AND HELP SOURCES
Journals: A journal is a record of transactions maintained by a deliberative body. Contents of a journal
are typically peer reviewed and are archived by libraries. That means the articles in the journal are
reviewed by two or more people familiar with the subject matter and a judgment is made that
the content conforms to established practice. Electronic versions of journals are still considered as
journals even if they are found on the Internet. They have the same content and review as the paper
version. If the term “Journal” is not in the title, use the library resources to verify it has a journal
format.
Popular Literature: Articles in this category are typically authored by a single person and
reviewed by an editor for grammar, for libel issues, and for consistency with the editorial objectives.
These include newspapers and magazines like Popular Science. For this exercise, books are considered
popular literature. (In fact, many books undergo a considerable peer review process. The objective is
to have you use the search engines available for research work and to examine the content of shorter
articles, not to use books as references. If you use a book, you must abstract each chapter that you
review.)
Internet Articles: Internet articles may be authored by anyone for any purpose. There is no
requirement that they be factual; while many are very good, an equal number are truly bad or
wrong. Assessing Internet sources requires some basic understanding of the subject material and
often requires an exercise to see if the information on the site can be verified by a second source.
Methods of assessing web based sources are located at
http://www.pbs.org/teacherline/courses/tech340/docs/tech340_bull.pdf
Writing resources: The Writing Center in Coe Library offers assistance in developing written
materials for this and all University courses. The services and hours for the semester are found at
the Writing Center website
http://www.uwyo.edu/ctl/writing-center/
APPENDIX D
Sample Oral Presentation
Evaluation Sheets
If you want to add your name, we would appreciate knowing who participates. If you would like to receive an electronic copy of the student work, add your email address and check the box.
You are asked to provide a grade for the students’ presentations. The page on the reverse of this sheet has an evaluation form and lists the speakers in the order of their appearance. Check the box to indicate your participation in the review, fill out the sheet using the criteria below, and place your grade sheet in the box by the exit. Thank you.
Grading: Each item is graded on a 1-4 scale with 1 being poor, 2 fair, 3 good, and 4 excellent.

Evaluation area | Expectation for a grade of 4
Organization    | The concepts and designs are presented in a logical sequence with each point building on previous work
Clarity         | Concepts and designs are presented in terms that are clear, well defined, free of jargon, and easily understood
Verbal          | The presenter had good verbal skill, including eye contact, voice projection, posture, and poise
Response        | The presenter was able to respond to questions in a clear, concise manner

COMMENTS:
MULTIDISCIPLINARY ALL COLLEGE DESIGN PROJECT –
EVALUATION SHEET Fall 2009
Order of Presentation
Organization Clarity Verbal Response
INTRODUCTION
Leah
BEETLE BACKGROUND
Matt
WOOD AVAILABILITY
Allysa
Jonathan
ENERGY PLANT SITE CHALLENGES
Kolter
ENERGY PLANT OPTIONS and CO-FIRING
Jordan
OPTIONS EVALUATION
Leah
WOOD QUALITY ENERGY and ENVIRONMENTAL CONSIDERATIONS
Jordan
Matt
OFFSITE WOOD STORAGE AND HANDLING
Jon
ENERGY PLANT WOOD TRANSFER
Sam
ENERGY PLANT SITE DEVELOPMENT
Kolter
SILO DESIGN
Allysa
Dan
WOOD CHIP HANDLING
Shane
COST ANALYSIS AND QUESTIONS
Leah
Author’s Biography
CHARLES W. DOLAN
Dr. Charles W. Dolan is the first permanent H. T. Person Chair of Engineering at the University
of Wyoming. He received his BS in Civil Engineering from the University of Massachusetts and his
Masters and Doctorate in Civil Engineering from Cornell University. Dr. Dolan has over 20 years
of design experience as a consulting engineer and an additional 25 years of teaching experience. His
design projects include the original people mover guideway at the Dallas–Fort Worth airport, the
Detroit downtown people mover guideway, the Walt Disney World monorail, and the conceptual
design of the Vancouver British Columbia Skytrain and the monorail running down the spine of
the Palm Island in Dubai. He has taught at Cornell University and the University of Delaware, and was involved in teaching classes at Seattle University and the University of Washington prior to joining the faculty at the University of Wyoming.
The H. T. Person Chair is the first endowed chair at the University of Wyoming College of
Engineering and Applied Science and focuses on undergraduate education. For over a decade Dr.
Dolan has developed the engineering design challenges for the first-year Introduction to Engineering
course and for a number of years he taught interdisciplinary senior design projects. In addition
to conducting interdisciplinary senior design projects, Dr. Dolan is actively engaged in capstone
design courses for civil and architectural engineers with a focus on concrete and prestressed concrete
structures. He teaches courses on Society and Technology for the University Honors Program. He
chaired UW Read, the first-year common reading committee, and served as Department Head and
on Tenure and Promotion committees.
Dr. Dolan is co-author of the book Design of Concrete Structures with David Darwin and Arthur
H. Nilson and serves on the American Concrete Institute Committee 318 Building Code for Concrete
Structures. He conducts research on the innovative use of prestressed and precast concrete structures
and the use of fiber reinforced polymers for strengthening concrete structures. In his research capacity
he has served on National Science Foundation committees, edited and contributed to several volumes of work on FRP applications and the durability of FRP strengthening systems, and authored numerous technical papers.
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=9050689.pdf&bkn=9050688&pdfType=book
The Search for the Absolute
How Magic Became Science
Jeffrey H. Williams, Formerly at Bureau International des Poids et Mesures
History and archaeology tell us that when our far ancestors began to settle in localized groups, they
codified their lives and experiences, and formed a collective for mutual support. This proto-civilization
would have arisen from each individual’s questions about the world, and their attempt to understand
themselves and their place in the world. These groups, or tribes, evolved rules of conduct to facilitate
communal living, and made a calendar for the group’s celebration of harvests, and other events upon
which the group was utterly dependent.
This process of social evolution is the origin of religion, and of a magical way of looking at Nature.
Eventually, this developing worldview was also the origin of science, which is our investigation of
Nature to understand something of what is happening around us, and to use this knowledge to ensure
our survival in a violent, indifferent Universe. After all, science and religion seek to answer the same
question: Why and how is the natural world the way it is? This book seeks to show how science evolved
from religion and magic, in response to a need to understand Nature.
ABOUT SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis Digital Library
of Engineering and Computer Science. Synthesis Lectures provide concise original
presentations of important research and development topics, published quickly in digital
and print formats. For more information, visit our website: http://store.morganclaypool.com
The Search for the Absolute
How Magic Became Science
Synthesis Lectures on
Engineering, Science, and Technology
Each book in the series is written by a well known expert in the field. Most titles cover subjects such
as professional development, education, and study skills, as well as basic introductory undergraduate
material and other topics appropriate for a broader and less technical audience. In addition, the
series includes several titles written on very specific topics not covered elsewhere in the Synthesis
Digital Library.
The Search for the Absolute: How Magic Became Science
Jeffrey H. Williams
March 2020
The Big Picture: The Universe in Five S.T.E.P.S.
John Beaver
January 2020
Relativistic Classical Mechanics and Electrodynamics
Martin Land, Lawrence P. Horwitz
December 2019
Copyright © 2020 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quota-
tions in printed reviews, without the prior permission of the publisher.
The Search for the Absolute: How Magic Became Science
Jeffrey H. Williams
www.morganclaypool.com
ISBN: 9781681737775 print
ISBN: 9781681737782 ebook
ISBN: 9781681737799 hardcover
DOI 10.2200/S00985ED1V01Y202001EST005
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING, SCIENCE, AND TECHNOLOGY
Lecture #5
Series ISSN 2690-0300 Print 2690-0327 Electronic
The Search for the Absolute
How Magic Became Science
Jeffrey H. Williams
Formerly at Bureau International des Poids et Mesures
SYNTHESIS LECTURES ON ENGINEERING, SCIENCE, AND
TECHNOLOGY #5
M&C MORGAN & CLAYPOOL PUBLISHERS
ABSTRACT
History and archaeology tell us that when our far ancestors began to settle in localized groups, they
codified their lives and experiences, and formed a collective for mutual support. This proto-civi-
lization would have arisen from each individual’s questions about the world, and their attempt to
understand themselves and their place in the world. These groups, or tribes, evolved rules of conduct
to facilitate communal living, and made a calendar for the group’s celebration of harvests, and other
events upon which the group was utterly dependent.
This process of social evolution is the origin of religion, and of a magical way of looking at
Nature. Eventually, this developing worldview was also the origin of science, which is our investiga-
tion of Nature to understand something of what is happening around us, and to use this knowledge
to ensure our survival in a violent, indifferent Universe. After all, science and religion seek to answer
the same question: Why and how is the natural world the way it is? This book seeks to show how
science evolved from religion and magic, in response to a need to understand Nature.
KEYWORDS
origin of science
For Mansel Morris Davies (1913–1995); a man blessed with the gift of friendship.
He not only instructed the author in physical chemistry, but also taught him how
to look at the world.
Contents
Introduction: Authority and the Collective Memory . . . 1
1 In the Beginning Was the List . . . 7
1.1 The List as the Origin of Science . . . 10
1.2 Ramón Llull . . . 13
1.3 Details of the Ars Combinatoria . . . 16
1.4 Further Reading . . . 18
2 The Origins of the Language of Power that Is Science . . . 19
2.1 A Less Mythic Interpretation of the Babble after Babel . . . 21
2.2 A Mystical Language . . . 25
2.3 Further Reading . . . 28
3 The Mixing of Physics and Metaphysics to Create a Language of Curiosity . . . 29
3.1 The Birth Pangs of Modern Science . . . 29
3.2 Gottfried Leibniz and the Nature of the Universe . . . 34
3.3 Further Reading . . . 36
4 The Transformation of Magic and Mysticism into Science . . . 37
4.1 Further Reading . . . 42
5 The I Ching as a Model of the Cosmos . . . 43
5.1 Details of the I Ching . . . 44
5.2 Divination with the I Ching . . . 47
5.3 Final Comments . . . 49
5.4 Further Reading . . . 51
6 Natural Philosophy . . . 53
6.1 Further Reading . . . 60
7 The Laws of Nature . . . 63
7.1 The Complex Relationship Between Astrology and Astronomy . . . 64
7.2 The Search for the Divine Lawgiver . . . 68
7.3 A Very Different Point of View . . . 71
7.4 That Fearful Perfection . . . 73
7.5 Further Reading . . . 76
8 Measuring the World . . . 77
8.1 Defining the Size of the World . . . 83
8.2 Other Surveys . . . 84
8.3 Further Reading . . . 85
9 Dividing Apples with Oranges to Make the Language of Science . . . 87
9.1 Creating Expressions in the Language of Science . . . 91
9.2 Derived Units . . . 93
9.3 Location: The Surface of Mars, September 23, 1999 . . . 97
9.4 A Final Comment on the Value of a Quantity: Sacred Geometries . . . 98
9.5 Further Reading . . . 99
10 What Powers Society? . . . 101
10.1 Social Forces . . . 104
10.2 International Regulation of Terms and Names: Dialects are Inevitable . . . 107
10.3 Science as a New Tower of Babel . . . 108
10.4 Further Reading . . . 111
11 The Ghost of the Divine Language: The Theory of Everything . . . 113
11.1 Some Background . . . 115
11.2 String Theory . . . 119
11.3 Reality . . . 123
11.4 Further Reading . . . 124
12 Changing the Paradigm: From Long Lists to Short Explanations . . . 127
12.1 The Great Paradigm Shift in Biology . . . 127
12.2 Electromagnetism . . . 128
12.3 Further Reading . . . 133
13 The Classification of the Living and the Dead . . . 135
13.1 A Hierarchical System of Classification . . . 138
13.2 A Warning to the Unwary . . . 143
13.3 The Limits of Linnaean Classification: Two Unclassifiable Species Found off Australia . . . 144
13.4 Further Reading . . . 145
14 Aspects of Chemical Nomenclature . . . 147
14.1 The Problem of Naming Things in Contemporary Science . . . 149
14.2 The Transfermium War . . . 154
14.3 Further Reading . . . 158
15 The Evolving Science of History . . . 159
15.1 Social Media . . . 160
15.2 Some Details of the Analysis of Personal Data on Social Media . . . 162
15.3 Further Reading . . . 166
16 Obfuscation: Why Are We Not Living in a New Golden Age? . . . 167
16.1 The Science Wars . . . 168
16.2 Anti-Science . . . 169
16.3 The Limitations of the Enlightenment . . . 172
Author Biography . . . 177
The world is like the impression left by the telling of a story
(Yoga-Vāsiṣṭha 2.3.11)
Introduction:
Authority and the Collective Memory
History and archaeology tell us that when humans first began to congregate and settle, they codi-
fied their lives and experiences, and formed a collective for mutual support. This proto-civilization
arose inevitably from each individual’s questions about the world, and their attempt to understand
themselves, each other, and their place in the world. These groups, or tribes, evolved rules of conduct
to facilitate communal living, and they made a calendar for the celebration of important events—
events such as planting the crops and when to go hunting and fishing upon which the group
was utterly dependent for its survival. These early tribal societies also preserved their songs, their
experiences or history, and their stories, fables, wisdom, and beliefs in the memories of the tribe’s
Shaman or Bard. These collective memories led to myths and legends, which were extravagant and
hence memorable, short-hand records of matters such as: invasions, migrations, conquests, dynastic
changes, admission and adoption of foreign religious cults, and of social reforms.
This inevitable process of social evolution is also the origin of religion, and of a magical way
of looking at Nature; both of which are still with us today. Eventually, this evolutionary process was
also the origin of science, which is essentially our investigation of Nature to understand something
of what is happening around us, and to use this knowledge to ensure our survival, and the survival
of our tribe or extended family in a violent, indifferent Universe. After all, science and religion seek
to answer the same two questions: (1) Why and how is the natural world the way it is? (2) How
best can we assure our survival? In addition, myths and science fulfill a similar function: they both
provide man with a representation of the world, and of the forces that are supposed to govern it.
Both myths and magic, together with science, fix the limits of what is possible.
At first, the members of the tribe were easily cowed and controlled by the superstitious
fear associated with mythology; such control cannot be generated in a group without myths and
marvels. Thus magic, or proto-science, was at the heart of the social organization of early societies.
But the essential part of this tribal codification of experiences was the recording of the information
necessary for survival; that is, the tribal wisdom. For example, the recording for future generations
of the hard-won collective experience that to sustain the life of the tribe, crops are best planted at a
certain moment of the Solar Cycle, and harvested a certain number of Lunar Cycles later; that fish
are best looked for at a high tide (again the relationship with the Moon), and that large animals are
best hunted in early morning when they are rutting at certain times in the Solar Cycle. But how
does a non-literate society define those dates, and how did they determine when a particular date
was approaching?
The readily observed phases of the Moon formed man’s first chronometer, and by hard expe-
rience the tribe would have learned the most appropriate time, relative to the phases of the Moon
and the Solar Cycle to go hunting and gathering. Later, a combination of a Lunar Calendar and a
Solar Calendar told agricultural man when was best for planting and harvesting. Our ancient ances-
tors (who were probably no less insightful than we are today) would also have noted the similar and
hence, perhaps, related time scales of Lunar Cycles and female fertility; hence, they evolved a Moon
goddess, rather than a Moon god, to represent this fertility and to whom supplications could be addressed; the desired “responses” would be forthcoming only on a statistical basis, but this was evidently good enough. This tribal knowledge was so important that it needed to be preserved
for the survival of future generations. Thus, a calendar based upon an understanding of astronomy
and biology was one of the essentials for the survival of the earliest human communities, and, indeed,
this remains the case for that shrinking part of humanity that does not live in towns and cities.
This tribal wisdom or language of survival, which originally would have been passed down by
the tribe’s Shaman or Bard, would have been articulated in the spoken language of the tribe. Other
tribes would have learned the same essential things, but would have spoken about those same things
in a different vernacular. Thus, as various spoken languages developed, they were incorporating, and
permitting, the transmission of the same essential knowledge; that is, how to survive and prosper.
This was the beginning of science, which was of necessity supra-tribal, and later became supra-na-
tional. Consequently, common to all languages and cultures is a set of observations and facts; what
today we call, in its most general form, science, but was in fact astronomy and biology. But back in
the distant past, our ancestors would have considered all this magic, or perhaps religion.
The question we shall investigate in this volume is: How did the modern form of the
language of science arise from earlier tribal wisdom? Put another way: How did man develop a
worldview which allows him to classify and understand all the phenomena and things observed in
Nature? The modern, technical language of science is actually very simple as a language, but it has
grown very far from the vernacular languages of literature. Yet, the clearly defined language of sci-
ence, which is taught to students from a young age (although few realize that they are being taught
a language different from the vernacular they use with each other) is the nearest thing that we have to
a universal or perfect language; that is, a language that can be understood by all men cutting across
the confusion and redundancy of vernacular languages.
In these pages, we will see how it was that in attempting to construct a complete set of the
observations needed by prehistoric man for his survival, and the survival and stability of his tribe
or extended family, we were inevitably led to the development of a system of classification that best
facilitated the transmission of this information. The earliest proto-scientists, or natural philosophers,
or Shamans, or magicians realized that instead of learning long lists of natural phenomena, and
of biological information and astronomical observations that would assist their society to survive
the potentially fatal vagaries of Nature (e.g., climate change), it was more logical, and a lot simpler,
to arrange the essential facts into different classes (which have today evolved to become different
sciences) and then attempt to find a principle of coherence behind all these observed facts. Such a
systematization would render the long lists irrelevant, thereby saving everyone’s limited and imper-
fect memories. It also permitted the more insightful natural philosophers, or proto-scientists (and
some of these early scientists also practiced magic) to begin making predictions about the working
of Nature, thereby creating modern science and technology. But then science, like magic and religion,
was always interested in everything. It was the epistemological earthquake that was the French Rev-
olution that gave us separate, non-communicating, independent disciplines and schools of thought.
A CALCULUS OF THOUGHT
First, we will explore how and why we record essential information. I am sure that I am not alone in that, when confronted by the complexity of daily life, I find it a great relief to make a list. The prepa-
ration of the list allows me to put my thoughts in order by putting them down on paper. I am taking
control of some aspects of my life, and instilling order into part of the chaos that surrounds me. This
fetish with list-making probably stems from my student days, when my lists of things I had to learn
were very long, but by the time I graduated they were considerably shorter and more concise. And
in so doing, as we will see, I had not only earned a degree in natural science but I had also trained
my memory in the manner of the Catalan mystic, the Blessed Ramón Llull (c.1232–c.1315) and
his later follower, the Catholic heretic and savant Giordano Bruno (1548–1600)—two key figures
in the early part of the search for the universal language of science (see Figure I.1).
Figure I.1: Tree of Science (Arbor Scientiae), one of the most extensive manuscripts of the 13th-century mystic Ramón Llull, written in Rome between 1295 and 1296. It is a version of the author’s Ars magna written for a general readership (see Chapter 1). It is one of the first attempts in Europe to describe the holistic nature of science, that is, the oneness of Nature, and to try and communicate this idea to a wide readership. As we can see, the work uses a familiar analogy: the organic comparison, in which science is represented by a tree with roots, trunk, branches, leaves, and fruits. The roots represent the basic principles of science; the trunk is the structure; the branches, the genres; the leaves, the species; and the fruits are the details. This vegetal allegory shows the influence of Aristotle. This image will serve as a metaphor for this work, and is taken from https://en.wikipedia.org/wiki/Tree_of_Science_(Ramon_Llull).
However, list making as a means of trying to order the overwhelming quantity of informa-
tion we all come upon in our lives is not a new concept. At the dawn of literature, Homer presents
us with the two possible ways in which information, or data, may be presented and stored for future
reference. That is, either as a simple long list, or as a closed-system which contains all knowledge in
microcosm and which shows us, the observers, how all things are interconnected in miniature—into
which one must know how and where to look to find what it is one is seeking to understand, or to
know. A list brings order, and through its use we can (at least) try to understand, influence, and per-
haps control the world around us. We are able to exclude things; creating a list is a means of making
choices. One might imagine that a list seeking to represent a complex set of information such as
an entire discipline of science would produce a near infinity of possibilities and so be useless, but
lists actually bring their own rules and orthodoxy. We will look at how it was that we moved from
merely making and trying to memorize long lists of observations to a rationalization of such lists
in terms of an underlying principle: the move from the qualitative to the quantitative. The I Ching
of Ancient China is a good example of this evolutionary move from a magical worldview toward a
truly empirical, scientific worldview.
The earliest examples of a scientific or philosophical language, those from before the 17th
century, were not quantitative. The language used by the proto-scientists when they communicated
among themselves was purely qualitative. The early experimenters, or alchemists, were not overly
keen to discuss in too much detail what it was they were doing and why they were doing it. Con-
sequently, early manuscripts read more like a mystery story or a philosophical explanation than a
description of an experiment and the resulting observation of the consequences of the experiment.
But then, these proto-scientists were living a dangerous life; the Church would have condemned
them and burned them if their actions were clearly described. There was safety in obfuscation and
cloudy philosophical concepts. However, the baleful influence of the Church, and its own inability
to effect any change in Nature (miracles), did eventually decrease in importance in society; the
purely statistical success rates of prayer were finally deemed not to be good enough and eventually
a quantitative language of science would be invented. Such a quantitative scientific language was
naturally capable of extension, leading to explanation and prediction, able to support international
communication and commerce. It was a new lingua franca, but a language devoid of metaphor and
multiple, confusing meanings. This was, of course, not a new idea. The idea that there once existed
a perfect language, which was spoken by all mankind, has occupied the minds of savants, mystics,
Neo-Platonists, natural philosophers, and theologians for well over two millennia. This language
was perfect in that it expressed without ambiguity the essence of all things and ideas—the quiddity
of all things. It was a language in which there was only one possible way of describing, e.g., an
animal, a natural phenomenon, or an explanation of why something happened. It was also accepted
that if this perfect “language of Eden” could be recovered, men would again be able to comprehend
each other fully and comprehend the functioning of Nature and thus the meaning of existence.
Men would be able to abolish discord and strife, and return to a Golden Age.
Even today, there are still physicists seeking to discover the perfect universal language in the
form of the Theory of Everything (Chapter 11), although the majority of the physicists and math-
ematicians researching this project do not appreciate the immensely old tradition within which they
are laboring. The discoveries of the Higgs boson and of gravitational waves are only the latest steps
in man’s quest for the absolute, for the essence of the natural world, and for a single, unambiguous
Theory of Everything. The development of modern science may, in large part, be considered as
stages in this investigation; an attempt to understand the “make of all things.” Certainly, the cre-
ation of the system of quantities and units, which today we call the International System of Units
(SI from its French official name, Système International d’Unités), during the French Revolution
was pivotal in allowing man to finally abandon magic and mysticism, and the memorizing of long,
tedious, incomplete lists of properties and observations in his investigation of Nature, and to adopt
a coherent, scientific worldview.
CHAPTER 1
In the Beginning Was the List
There must be a beginning in any great matter but the continuing unto the end, until it be
thoroughly finished yields the true glory.
Sir Francis Drake (c.1540–1596)
Some may be surprised to read that there is a link between magic and modern science; that mod-
ern science evolved out of magic. Indeed, one could go further and state that modern physics and
chemistry would not exist had it not been for the ideas and “experiments” of the Neo-Platonists of
the early-Christian world. The problem is that there is a spiritual aspect to Neo-Platonism; there
is more metaphysics than physics in Neo-Platonism, and so many contemporary physical scientists
would be aghast at a suggestion of the metaphysical origins of their subject.
But you do not have to go too far into the quantum mechanical explanation of spin-entangle-
ment and the mixing of quantum states, that is, the generation of qubits of quantum information,
before you realize that what you are dealing with is more philosophical than physical. Today’s laser
physicists attempting to teleport quantum information1 from a laboratory on one continent to a
laboratory on another continent are having to re-evaluate what is actually meant by a “measurement,”
to fully comprehend their results. These modern physicists are undergoing the same self-analysis
that the Taoists recommended to all natural philosophers, and which the alchemists sought in their
explorations of Nature (see Figure 1.1), although few contemporary physicists have ever thought
about their sophisticated experiments in this way.
History tells us that before there was science, and its most useful offshoot, technology, there
was magic, and a magical way of looking at the world. In the evolution of a culture, the scientific
worldview is always a late development. In the evolution of our culture, the 17th century supposedly
marked the period when astrology, the burning of witches, and folk-magic yielded to Isaac New-
ton’s rationalism, and the Laws of Nature were established as observation and experience explained
1 A pure qubit state is a coherent superposition of the states’ wave functions. This means that a single qubit can be described by a linear combination of the wave functions |0⟩ and |1⟩: |ψ⟩ = α|0⟩ + β|1⟩, where α and β are probability amplitudes. When we measure this qubit in the standard basis, according to the Born rule, the probability of outcome |0⟩ with value 0 is |α|² and the probability of outcome |1⟩ with value 1 is |β|². Because the absolute squares of the amplitudes equate to probabilities, it follows that α and β must be constrained by the equation |α|² + |β|² = 1. Note that a qubit in this superposition state does not have a value between 0 and 1; rather, when measured, the qubit has a probability |α|² of the value 0 and a probability |β|² of the value 1. In other words, superposition means that there is no way, even in principle, to tell which of the two possible states forming the superposition state actually pertains. [1]
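To make the footnote’s bookkeeping concrete, the following minimal Python sketch (the amplitude values are arbitrary illustrative choices, not values taken from the text) checks the normalization constraint and simulates repeated standard-basis measurements under the Born rule:

import numpy as np

# Hypothetical amplitudes for |psi> = alpha|0> + beta|1> (chosen only for illustration).
alpha = 1 / np.sqrt(3)
beta = np.sqrt(2 / 3) * np.exp(1j * 0.4)  # amplitudes may be complex

# Normalization constraint from the footnote: |alpha|^2 + |beta|^2 = 1.
assert np.isclose(abs(alpha) ** 2 + abs(beta) ** 2, 1.0)

# Born rule: probability of outcome 0 is |alpha|^2, of outcome 1 is |beta|^2.
p0 = abs(alpha) ** 2
print(f"P(0) = {p0:.3f}, P(1) = {1 - p0:.3f}")

# Each simulated measurement yields 0 or 1 at random; the superposition itself is
# never observed directly, only the statistics of many repeated shots.
rng = np.random.default_rng(seed=0)
shots = rng.choice([0, 1], size=10_000, p=[p0, 1 - p0])
print("observed frequency of outcome 1:", shots.mean())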
by reason. Yet before there was reproducible and reliable science, there was unreliable or “chancy”
science. Even in the Renaissance, scientific work (what would then have been termed an “explo-
ration of natural philosophy”) was a hit-and-miss affair, as few savants noted down the details of
what it was they had done in their experiments. And as quantities of substances were not measured
consistently, or measured at all (quantities of chemicals and materials could not even be defined
precisely, as units of measurement were entirely parochial), and the materials used were of varying
degrees of purity, experimental science was the affair of each individual practitioner. Consequently,
at this time both science and magic were acceptable and interchangeable ways of interpreting Na-
ture, as neither one nor the other was infallible or even reproducible; both appeared to work only
on a statistical basis. And so the experimentalists would have believed that they had not been, for example, in the “right frame of mind,” or that the stars were in the wrong alignment on the day their experiment did not yield the result it was supposed to yield.
Indeed, the scientist or savant of that time dabbled in both natural philosophy and the occult.
Isaac Newton (1643–1727) was himself something of a magus or, at least, a Neo-Platonist. At the
tercentenary of Newton’s birth, John Maynard Keynes (Newton, The Man, lecture given as part of
the Royal Society tercentenary celebration of the birth of Newton) described him as the last of the
magicians, “Newton was not the first of the age of reason. He was the last of the magicians, the last of the
Babylonians and Sumerians, the last great mind which looked out on the visible and intellectual world
with the same eyes as those who began to build our intellectual inheritance rather less than 10,000 years
ago.” Isaac Newton was a man with an immense, insatiable curiosity, for whom nothing could, or
should, be taken at face value; he was above all a man interested in exploring everything, and spent
much of his time in his laboratory doing experiments. Newton examined everything, including
several stomach-turning experiments on his own eyes. Today we may consider ourselves as rational,
coldly logical, non-superstitious, scientific beings with several degrees of separation from those
who believe in magic and superstition. In the time of Newton, however, there were fewer degrees
of separation between such individuals—if any at all. But by the end of Newton’s life, the European
Enlightenment was underway and the triumph of science over hermeticism and religion was more
or less assumed. But as it turned out, there was a huge revival of occultism and the hermetic arts in
the late-19th century (Chapter 16).
Before beginning our examination of the origins of science, let us consider the purpose of
science, and the object of scientific investigation, that is, the understanding of Nature. Perhaps the
most famous example of a magician in literature is Faust. In the earliest sections of Johann Wolf-
gang von Goethe’s great poem (begun c.1772), the magician and his tempter correspond broadly
to the traditional mediaeval figures (the disillusioned old man manipulated by the Devil) and the plot
gives a somewhat traditional, Christian version of the concepts of salvation and perdition. However,
by the time that Goethe finished his poem (1831), Faust had evolved. He is no longer a magician
excited and led astray by his desire for forbidden knowledge; Faust has been transformed into the
Romantic figure of Everyman. Faust has become a seeker for oneness with all Nature. Between
1770 and 1830, our civilization had moved from the Classical world to the Romantic world, and
today, we are all still lost in the late-Romantic world. The later, holistic Faust has abandoned a
Manichaean dualism of good and evil, for a mystical sense of the unity of all things. This Romantic
Faust would likely have been an early recruit to the politics of environmentalism. Faust no longer
embodied heterodox magic, but acceded to a knowledge of the interconnectedness of all natural
phenomena (which is the way modern science views Nature, see Chapter 9). In this way the char-
acter of Faust follows the evolution of magical and scientific thinking; first there was magic, and
then there was physics; first there was the sorcerer, and then there was the physicist.
Figure 1.1: The Alchemist, a painting by Joseph Wright of Derby (1734–1797). An image of a man
searching, both experimentally and mystically, to understand himself and his place in Nature. Image
from: https://en.wikipedia.org/wiki/Joseph_Wright_of_Derby#/media/File:Joseph_Wright_of_Derby_The_Alchemist.jpg.
The magic that predates, and inevitably leads to, science is a force which follows processes
and events that are inherent to consciousness, and is something implicitly connected to constructive
and imaginative thought, therefore to the whole enterprise of artistic and scientific creation. Our
imaginations, our dreams, our ability to use our consciousness to imagine and to describe, and then
to transfer theories and fantasies, are inherently bound up with our facilities for reasoning. And they
are essential for making that great leap from observing and explaining a known phenomenon, to
going beyond into the realms of prediction. The fabulous and the fantastic are all around us. Often,
the more we examine a phenomenon that was once deemed to have been fully comprehended, the
more we may learn, which allows us to speculate and then, perhaps, realize that the fantastic is not
entirely separate from the natural.
In the world before the 18th-century Enlightenment, magic and fantasy were inextricably
linked with man’s attempt at understanding the forces of Nature, but nonetheless this pre-scientific
worldview led to considerable advances in technology, and to many useful discoveries in engineer-
ing, metallurgy, chemistry, and pharmaceuticals. When thinking about the world in a magical sense,
one considers the phenomena of Nature to have arisen through the agency of certain secret forces of
Nature; forces which may reside uniquely in certain objects (for example, the loadstone of the flying
island of Laputa in Jonathan Swift’s Gulliver’s Travels of 1726, which would have relied upon repul-
sive magnetic fields (see Figure 12.1), which were not fully explained until the late 19th century;
see Chapter 12) or with certain inspired individuals, for example, Giordano Bruno, Leonardo Da
Vinci, or Philippus Aureolus Theophrastus Bombastus von Hohenheim (Paracelsus). Such magical
thinking structures the processes of imagination; and imagining something can, and sometimes
does, precede the fact, or the act of discovery. This apparent breakdown in causality is something
that we would today term intuition or instinct, and is something that a great many people experi-
ence without thinking about it. Indeed, you do not have to investigate modern quantum mechanics
and the quantum-view of nature very deeply before concepts such as causality and non-causality, of
cause and effect, become much more confused than one would have ever supposed. Such concepts
become, essentially, metaphysical. [2]
1.1 THE LIST AS THE ORIGIN OF SCIENCE
But let us return to the very dawn of science. How was it that the earliest observers of Nature
noted what it was they had observed? Not having a theory to explain their observations, it was
likely that they merely made a list of what they had seen. Thus, the earliest stage of the evolution of
science was the creation of long lists of things and events. Indeed, these long lists of observations
and results would likely have been compiled of symbols (as there was no standard nomenclature
of chemicals or phenomena), numbers, and words in a vernacular (see Figure 1.2). Of course, the
natural philosopher may also have written his notes in a cypher; for example, the Angel Language
of the Elizabethan alchemist John Dee.
List making is a means of attempting to control and understand the complexity of life and
of the world around us, and of trying to order the seemingly incomprehensible quantities of infor-
mation we all come upon in our lives—not a new concept. It is simply a way of recording infor-
mation or data. But it is not the only way. At the dawn of literature, Homer presents us with the
two possible ways in which information, or data, could be presented and stored for future reference.
Either as a simple but long list, or as a closed system which contains all knowledge in microcosm
and shows us, the observers, how all things are interconnected, but into which one must know how
and where to look to find what it is one is seeking to know, or to remember; what today we would
term a database.
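To restate this contrast in present-day terms, here is a minimal sketch in Python (using the examples of tribal wisdom mentioned in the Introduction; the exact entries are invented for illustration): the same information held as a flat list that must be scanned end to end, and as a keyed structure, a database in miniature, that is queried by knowing where to look.

# 1. A simple long list: order is the only structure; finding anything means scanning.
observations = [
    "plant the crops at the right moment of the Solar Cycle",
    "look for fish at high tide",
    "hunt large animals in the early morning of the rutting season",
]
fishing_advice = next(item for item in observations if "fish" in item)

# 2. A closed, interconnected system -- what we would now call a database: the reader
#    must know where and how to look (the key), and the structure itself records how
#    the facts are related.
almanac = {
    "planting": {"when": "a certain moment of the Solar Cycle"},
    "fishing": {"when": "high tide", "why": "the tide follows the Moon"},
    "hunting": {"quarry": "large animals", "when": "early morning, rutting season"},
}
fishing_advice = almanac["fishing"]["when"]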
Figure 1.2: A table of alchemical symbols from Basil Valentine’s Last Will and Testament of 1670.
(Image from: https://en.wikipedia.org/wiki/Alchemical_symbol#/media/File:Alchemytable.jpg). Basil
Valentine, or Basilius Valentinus, was supposedly a 15th-century alchemist and Canon of the Benedic-
tine Priory of Saint Peter, Erfurt, Germany. But this name could also have been an alias for a number
of German chemists/alchemists. During the 18th century, it was suggested that the author of the works
attributed to Basil Valentine was Johann Thölde, a salt manufacturer in Germany (1565–1624). Whoever
he was, Basil Valentine had considerable ability and ingenuity as an experimental chemist. He showed
that ammonia gas could be obtained by the action of alkali on sal-ammoniac (ammonium chloride),
described the production of hydrochloric acid by acidifying brine (sodium chloride), and created oil of
vitriol (concentrated sulfuric acid). Thus did modern chemistry grow out of alchemy; particularly, when
the spiritual aspect of the alchemist’s craft lost its precedence.
In Book 2 of Homer’s Iliad, we encounter what is called The Catalogue of Ships. This catalogue
is a long list of the various Greek forces who came together to attack Troy, and Homer uses it as a
means of structuring information which the reader will need later in the story. However, in Book
18 of the Iliad, Homer presents us with a very different manner of presenting and storing infor-
mation; he gives us a description of the shield the lame-god Hephaestus has made for swift-footed
Achilles after the death of Patroclus. This shield was a microcosm within which all of Nature was
to be found; the Sun, the Moon, the 12 Houses of the Zodiac, and the images of the world of men
through the changing seasons of their year, all bound by the mighty river of Ocean running around
the rim of the shield. This shield allowed the viewer, provided he knew where and how to look,
to find whatever piece of information about the lives of men, or of astronomy and agriculture, he
needed, or all the possible ways that these pieces of information could be coupled together. This
shield was undoubtedly intended to promote the semi-divine nature of Achilles, and to protect him
in battle. But even Hephaestus could not protect lion-hearted Achilles from his fate, not even with
a database of all human knowledge on his arm.
A list brings order, and through it we can understand and try to control the world around
us. We are able to exclude things; creating a list is a means of making choices. One might imagine
that a list seeking to represent a complex set of information such as the ephemerides of the Sun and
Moon, or all the possible ways of reacting a number of the basic chemicals used in alchemy, would
produce a near infinity of possibilities and so be useless, but lists actually bring rules and orthodoxy.
Homer did not list all the petty kings of Bronze Age Greece; he listed only those relevant to the
story he was about to relate of the Trojan War. The author had thought about these petty rulers and
then placed them in the wider society and culture of the Ancient World. In a similar way, Homer
had thought about Achilles’ shield, and designed it to represent the world in miniature. In other
words, Homer had thought about the essential elements of Nature and how they interact, and was
presenting this summary of the essential points to future generations as a database.
Another example of this design, by an author of a detailed model of the world, which was
then put on paper for future users is that greatest of all poems, the Divine Comedy by Dante
Alighieri (1265–1321). In the 14,233 lines of this masterpiece, Dante gives us a complete repre-
sentation of the medieval worldview; a concise summary of the Aristotelian-Thomist cosmogony
of the late 13th century. The poem is a portable, readable summary of everything that a Christian
needed to know to achieve salvation, and to understand the natural world in which he finds himself.
But a list can also be a frightening thing. Our imperfect memories will always tell us that
we have forgotten something, and that this something is hugely important. And the more we try
desperately to remember that important something, the more it slips from our mental grasp. Lists
can be troubling, even subversive. Our lives are limited; death is a particularly discouraging limit.
This is why we all like subjects of investigation that have no limit, and therefore no end, for ex-
ample, history, science, and philosophy. Long lists are a way of escaping from our thought about
death. Even if a list makes us anxious about things we cannot remember, we like it because we do
not want to die. [3]
Making lists, or pictorial or text-based summaries of a field of knowledge, may impose order,
but what is really required to effectively use the contents of any long list is a means of manipulat-
ing all the possible entries in the lists of all the various categories of all objects and all ideas. This
could rapidly become a vast quantity of data. And what of the nature of science? Anyone who has
ever attempted to study science knows that there is an awful lot of memory work to be done. Al-
though all science students are told that the vast edifice of science rests on a few basic axioms and
theories, it is unfortunately true that before one can get to study these foundation stones of science, one has to spend years memorizing seemingly incomprehensible amounts of information, data,
rules, and exceptions to the rules. Of course, the professors will tell the aspiring scientist that he or
she must first be grounded in the factual matter of the subject before they can make an attempt at
comprehending, and perhaps applying or even extending, the basic axioms. But then professors and
teachers have always said that—in every discipline. What, of course, should be done is to explain
and teach the axioms to the brightest of the eager students. Then the mountains of facts and gen-
eralities could be derived by the students themselves. But, alas, that is not the way the teaching of
science, or of any other subject, has evolved.
This is the heart of the problem to be examined in this book: how do we construct a simple
language of few words and few rules, and use this language to describe all the phenomena seen in
Nature? How do we take endless lists of observations and facts and reduce, purify, and concentrate
them, as the alchemists would have said, to a handful of base units that can be combined in various
ways to describe everything we see around us, and everything we will ever see?
1.2 RAMÓN LLULL
Perhaps the first person to attempt a method of systematizing and classifying information for
the purpose of facilitating the compilation and generation of derived information systems was
the Catalan mystic, the Blessed Ramón Llull, or Raymond Lull (c.1232–c.1315). He designed
a teaching aid that was also a means of deriving or generating information from lists, which he
called his Ars generalis ultima, or Ars magna (The Ultimate General Art, or Great Art), which
appeared in 1305. This was an attempt to combine, or manipulate, the contents of long lists, in
particular, to generate the theological and philosophical attributes of the Divine, selected from a
number of lists of those attributes.
Ramón Llull intended this technique of combining and manipulating lists of theological and
philosophical information to be a debating tool for converting Muslims and Jews to Christianity
using logic and reason. Through his detailed analytical efforts, Llull built a theological reference by which a reader, or proselytiser, could enter an argument or question about the Christian faith, which had been put to them, into a mediaeval calculating machine (it was called Llull's "thinking machine"). The reader would then turn to the appropriate index and page in the Llullian calculator to find the correct answer to the question posed by the potential convert.
The radical innovation of Llull was the introduction of logic into list making; in particular, the construction and use of a "thinking machine" made of inscribed metal discs to combine elements of thought, for example, elements of language. With the help of connected geometrical
figures, following a precisely defined framework of rules, Llull tried to produce all the possible
statements of which the human mind could conceive on certain subjects. These declarations or
statements were represented by a series of signs, or sequences, of letters derived from his pro-
to-computer, or “thinking machine.” We will now briefly consider the hardware and software (see
Table 1.1) of Llull’s “thinking machine.”
Table 1.1: The alphabet of Llull’s thinking machine. This is the software of the device built by Ramón
Llull to explore the logic of list making. It contains the theological and philosophical attributes of
the Divine. The letters in the first column of the table (which contain the "computer programme" of Llull's device) correspond to the outer circle of the hardware, that is, to the engraved metal discs shown in Figures 1.3 and 1.4.
Letter   Figure A (Figure 1.3)   Figure T (Figure 1.4)   Questions and Rules    Subjects          Virtues       Vices
B        Goodness                Difference              Whether?               God               Justice       Avarice
C        Greatness               Concordance             What?                  Angel             Prudence      Gluttony
D        Eternity                Contrariety             Of what?               Heaven            Fortitude     Lust
E        Power                   Beginning               Why?                   Man               Temperance    Pride
F        Wisdom                  Middle                  How much?              Imaginative       Faith         Accidie (apathy)
G        Will                    End                     Of what kind?          Sensitive         Hope          Envy
H        Virtue                  Majority                When?                  Vegetative        Charity       Ire
I        Truth                   Equality                Where?                 Elementative      Patience      Lying
K        Glory                   Minority                How and with what?     Instrumentative   Pity          Inconstancy
The software of Llull’s device, given in Table 1.1, is essentially an alphabetic list giving the
meaning of nine letters, in which Llull (the programmer) says “B signifies goodness, difference,
whether?, God, justice, and avarice. C signifies...,” and so on (there is no J in the mediaeval Latin
alphabet). The components of the second column of Table 1.1 are set out in Llull's Figure A (that is, Figure 1.3 here). The letters do not represent variables, but constants. Here they are connected by
lines to show that in the Divine these attributes are mutually convertible. That is to say that God’s
goodness is great, God’s greatness is good, etc. This, in turn, was one of Llull’s definitions of God,
because in the created world people’s goodness is not always great, nor their greatness particularly
good, etc. Such a system of vertices connected by lines is what mathematicians term a graph. This
might seem to be of purely anecdotal interest, but as we shall see shortly, the relational nature of
Llull’s system is fundamental to the idea of an Ars combinatoria.
The components of the third column in Table 1.1 are set out in Llull's Figure T (that is
Figure 1.4 here). Here we have a series of geometrical principles related among themselves in three
groups of three, hence the triangular links. The first triangle defines: difference, concordance, and
contrariety; the second defines beginning, middle, and end; and the third triangle defines majority,
equality, and minority. The concentric circles between the triangles and the outer letters show the
areas in which these relations can be applied. For example, with the concept of difference, notice
how it can be applied to sensual and sensual, sensual and intellectual, etc. “Sensual” here means
perceivable by the senses, and Llull explains in the Ars brevis that: “There is a difference between sen-
sual and sensual, as for instance between a stone and a tree. There is also a difference between the sensual
and the intellectual, as for instance between body and soul. And there is furthermore a difference between
intellectual and intellectual, as between soul and God.”
The hardware consists of three inscribed metal discs fixed on a single axis on which they can be rotated independently. The discs contain a limited number of letters—a special Llullian alphabet. When the circles are turned, step-by-step, all possible combinations of these letters are produced. The metal circle called the Prima Figura (Figure 1.3) gives the primary attributes. The next, strictly defined, table of words is produced on the next circle, the Secunda Figura (Figure 1.4), where we
find categories and relations of thinking.
Figure 1.3: A list of the attributes of God (see second column in
Table 1.1). This is Llull’s Figure A or Prima Figura. Image from:
http://www.ramonllull.net/sw_principal/l_br/home.php.
Figure 1.4: A list of the categories and relations of thought (see
third column in Table 1.1). This is Llull’s Figure T or Secunda
Figura. Image from: http://www.ramonllull.net/sw_principal/l_br/
home.php.
Ramón Llull’s thinking machine allows all the words (attributes of the Divine) in the outer
circle to be combined in different ways by turning the circles, relative to each other in a stepwise
manner. It is therefore possible to connect every word with every other word placed in a position of
a table—depending only on the construction of the individual tables. Ramón Llull created numer-
ous devices for this manipulation, or combining of the contents of lists. One method is now called
the Llullian Circle, which consisted of two or more solid circular discs, a disc of smaller diameter
free to rotate inside a larger annular disc; both these independently rotating discs were inscribed
with alphabetical letters or symbols that referred to, for example, components of lists of attributes
of the Divine. A number of terms or symbols relating to those terms were laid around the circum-
ference of the circle. The discs could be rotated individually, like a circular slide rule to generate an
enormous number of combinations of ideas. Thus, the innermost disc could be inscribed with what
Llull termed the “absolute characteristics” of God: goodness, eternity, power, volition, virtue, truth,
glory, or wisdom, and it could be rotated to be next to attributes on the next outer disc, which was
inscribed with “relative characteristics” such as greatness or extent or purpose. Llull conceived this
combinatorial manner of generating ideas as a perfect logical language, which could be used to
convert non-Christians to Christianity. The language was to be universal; it was to be articulated at
the level of expression in rational mathematics, and its content was intended to consist of a network
of universal ideas held by all people. Llull based this device on the notion that there were only a
limited number of basic, undeniable truths in any field of knowledge, and that we could under-
stand everything about these fields of knowledge by studying combinations of these elemental or
fundamental truths.
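A modern reader can get a feel for the scale of such a device from a few lines of code. The sketch below is purely illustrative (the Python lists and variable names are assumptions of this illustration, not Llull's own terminology): it pairs the nine attributes of Figure A with the nine principles of Figure T, which is what stepping one disc against the other eventually produces.

from itertools import product

# The nine "absolute" attributes of Llull's Figure A and the nine
# "relative" principles of his Figure T (see Table 1.1).
absolute = ["goodness", "greatness", "eternity", "power", "wisdom",
            "will", "virtue", "truth", "glory"]
relative = ["difference", "concordance", "contrariety", "beginning",
            "middle", "end", "majority", "equality", "minority"]

# Stepping one disc against the other aligns every absolute attribute
# with every relative principle exactly once.
pairings = list(product(absolute, relative))
print(len(pairings))              # 81 pairings from only 18 terms
for attribute, principle in pairings[:3]:
    print(f"divine {attribute} considered under {principle}")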
1.3 DETAILS OF THE ARS COMBINATORIA
Given a number of different elements n, the totality of the possible arrangements that can be
made from them, in any order, is given by the factorial n!, calculated as 1 × 2 × 3 × … × n. This is the method for calculating the possible anagrams of a word of n letters (as in the art of Temurah in the Kabbalah). As n increases, the number of possible arrangements rises ever more rapidly: the possible arrangements of the 26 letters of the alphabet already number 26!, roughly 4 × 10^26, a vast, incomprehensible number of combinations. If the strings of combinations admit repetitions, then the number of combinations rises even
further. Consider the situation of four people. We want to arrange these four as couples on a train,
where the seats are in rows that are two across; the order is relevant because we wish to know who
will sit at the window. This is a problem of permutations; that is, of arranging n elements, taken t
at a time, taking the order into account. The formula for finding all the possible permutations is n!/(n − t)!, which for four people taken two at a time gives 4!/2! = 12 ordered seatings. Suppose, however, that the order is irrelevant. This is a problem of combinations, and we solve it with n!/(t!(n − t)!), which for the same four people gives 6 possible couples.
This is an expression-system (represented both by the symbols and by the syntactic rules
establishing how n elements can be arranged t at a time—and where t may coincide with n), so that
the arrangement of the expression items can automatically reveal possible content-systems. In order
to let this logic of combination or permutation work to its fullest extent, however, there should be
no restrictions limiting the number of possible content-systems (or worlds) we can conceive of. As
soon as we maintain that certain universes are not possible in respect of what is given in our own
past experience, or that they do not correspond to what we hold to be the laws of reason, we are, at
this point, invoking external criteria not only to discriminate the results of the ars combinatoria, but
also to introduce restrictions within the art itself.
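Before returning to Llull, the arithmetic of the train example above can be checked directly. The following is a minimal sketch using Python's standard library (the helper functions are illustrative, written for this example only):

from math import factorial

def permutations(n: int, t: int) -> int:
    # Ordered arrangements of n elements taken t at a time: n!/(n - t)!
    return factorial(n) // factorial(n - t)

def combinations(n: int, t: int) -> int:
    # Unordered selections of n elements taken t at a time: n!/(t!(n - t)!)
    return factorial(n) // (factorial(t) * factorial(n - t))

print(permutations(4, 2))   # 12 ordered seatings: who sits at the window matters
print(combinations(4, 2))   # 6 couples once the order is ignored
print(factorial(26))        # the "vast, incomprehensible" 26!, roughly 4 x 10**26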
This combinatorial method of Llull was an early attempt at using logic to retrieve knowl-
edge, data, or information from a list; that is, to use mechanical means to generate concepts, which
avoided the tedious necessity of memorizing vast numbers of combinations of ideas and thoughts.
Of course, most of the combinations of ideas so generated would be redundant (in the definition of
the Divine, what exactly is the difference between "glorious eternity" and "eternal glory"?). Llull hoped
to show that Christian doctrines and dogma could be arrived at from any starting point, from a
fixed set of preliminary, arbitrary ideas. Llull knew that all believers in the monotheistic religions
would agree with the absolute attributes of God, giving him a firm platform from which to argue
that the Christian interpretation of the Divine was the most apposite interpretation, and so his lis-
teners would be convinced of his logical vision and convert to Christianity. Whether or not Llull’s
system of logical persuasion worked for its intended purpose we do not know; history tells us he
was nearly killed at the age of 83 attempting to convert Muslims in North Africa.
One can ask, what exactly is Ramón Llull’s place in the history of computers and computing?
Llull is one of the first people who tried to make logical deductions in a mechanical, rather than a purely mental, way; that is, without relying solely on the contents of his imperfect memory. His method was an early
attempt to use logical means to generate and retrieve information. He demonstrated in an elemen-
tary, but nevertheless workable way that human thought can be described and even imitated by a
mechanical device. This was a small step toward the thinking machine of the contemporary world.
The ideas of Ramón Llull about the systematisation or ordering, and the generation and
retrieval of information and knowledge were developed further in a more esoteric manner by
Giordano Bruno in the 16th century (but unfortunately for Bruno this was one of his undertakings
that led him to be burned as a heretic by the Catholic Church in Rome on February 17, 1600), and
subsequently by the great German savant Gottfried Leibniz (1646–1716) in the late 17th century
for investigations into the philosophy of science. Leibniz gave Llull’s idea the name ars combina-
toria (the art of combinations). Many consider Llull’s ideas on systems of logic, and their use in
constructing new combinations of ideas as the beginning of the study of information science and
semiotics. Although the combinatorial calculus of Ramón Llull was an extraordinary creation for
the Middle Ages, giving us the earliest machine language and a perfect means of creating unbreakable encryption codes, Llull's reputation was not always as high as it is today. The satirist Jonathan
Swift ridiculed such Llullian devices in the third part of Gulliver’s Travels, where on the Island of
Laputa Lemuel Gulliver is shown several large folio volumes of broken sentences generated by the
ars combinatoria which he is told “.. [will] piece together; and out of those rich Materials to give the World
a compleat Body of all Arts and Sciences...” Likewise, in François Rabelais’ exuberant, witty, and satiri-
cal comment on his contemporary world, The Life of Gargantua and of Pantagruel, the combinatorial arts of Llull are disparaged; Gargantua advises his son Pantagruel to master astronomy "but dismiss
divinatory astrology and the art of Lullius as fraud and vanity.”
1.4 FURTHER READING
1. Two excellent, readable accounts of the difference between the real and the quantum worlds are: The Nature of the Physical World (1947); Sir Arthur Stanley Eddington; London, Dent & Sons Ltd., and QED: The Strange Theory of Light and Matter (1985); Richard P. Feynman; Princeton, Princeton University Press.
2. The complex and close relationship between magic and science is touched upon in many places in Stranger Magic: Charmed States and the Arabian Nights (2011); Marina Warner; London, Chatto and Windus.
3. Everything you have ever wanted to know about lists and list-making: The Infinity of Lists (2012); Umberto Eco; Maclehose Press (an imprint of Quercus).
4. What is the definition of magic, and of a magical way of looking at the world? A General Theory of Magic (1972); Marcel Mauss; London, Routledge; originally published in French in 1902.
CHAPTER 2
The Origins of the Language of
Power that Is Science
And God said, Let there be light: and there was light.
Genesis 1:3
Today, if you talk to a scientist about what it is that he/she believes science represents, you will
likely be told that science is the only real source of truth; that the ideas of science are not culturally specific; and that these scientific truths are as comprehensible for an American as they are for a Russian or a Chinese savant. Of course, the American, Russian, and Chinese savants must be able to read their respective languages, and it is unlikely that any one of them would be able to read the other two
languages sufficiently well to comprehend what it was that the texts were describing. But the idea
that science has an international, non-cultural character, not dependent upon particular elements of
vocabulary, or particular rules of grammar, is certainly accepted by most scientists.
About a thousand years ago it would still just have been possible for the majority of scholars
and savants to have been able to discuss something through the medium of a true international
language such as Latin, Arabic, or Greek. But such direct communication has not been possible
since the end of the Middle Ages and was only ever possible in some parts of the European and
Mediterranean worlds. Ancient Chinese scholars, like their Greek contemporaries, believed that
their language was the only appropriate medium of communication, so they did not concern them-
selves with the languages of barbarians.
So, what exactly is the origin of this confusing association of science with a medium of
universal communication? Indeed, can science be a universal medium of communication? This
question is of particular interest, given the marked inability of the majority of scientists to explain
to non-scientists, and even to scientists in areas of specialization not their own, what it is they do
and why it is that they do what they do. Therefore, what does it mean to say that science is a form
of universal language, a culturally independent means of communication?
Let us look at “the babble after Babel;” that is, the myth of the origin of the multitude of
languages that we know today. The idea of an age when all men could converse with each other and
could converse directly with their Creator, who had taught them this Ur-language, is a very widespread myth.2 It is a myth about a Golden Age of innocence; a myth known to every culture—it is an idea deeply rooted in the Jungian collective unconscious. But for all its mythic qualities,
the search for this ideal language, which according to monotheist beliefs had also been spoken by
God to bring the Cosmos and humanity into existence (see the quote at the head of this chapter),
obsessed European philosophers and savants right up until the Age of Enlightenment, when the
Indo-European theory of the origin of modern languages was developed, and accepted.
In Genesis 2:16-17, we are told that the Creator God spoke to man for the first time; telling
our ancestor, Adam that everything in the earthly Paradise was his. God commanded him, how-
ever, not to eat of the fruit of the tree of the knowledge of good and evil. We are not told in what
language God spoke to Adam. Modern Biblical tradition has imagined it as a sort of language of
interior illumination, rather than a communication of words, or of thunderclaps and lightning.
After this command, we read that God “formed every beast of the field, and every fowl of the air;
and brought them unto Adam to see what he would call them.” Here we have a motif also common to
most other religions and mythologies; that of the name-giver, the creator of language (nomothete).
Figure 2.1 gives a late-mediaeval image of that naming process. Yet it is not at all clear on what
basis Adam actually chose the names he gave to the animals. In the Latin Vulgate Bible, we are
told that Adam called the various animals “nominibus suis” or “by their own names;” and in the King
James Bible we have “Whatsoever Adam called every living creature, that was the name thereof.” But
were the names given by Adam, the names by which each animal ought to have been known, or
were they simply arbitrary names given by Adam? That is, did the given Adamic name refer to some
fundamental or intrinsic property or characteristic of the animal, or was it purely a matter of what
Adam was thinking at that moment?
In Genesis 2:23, Adam speaks to his female companion; one again assumes that they are
using the Divine Language of Creation as their means of communication, “This is now bone of my
bones, and flesh of my flesh: she shall be called woman...” Eventually, Adam calls this female compan-
ion Eve (which means life as she is to be the mother of humanity) so we see that Adam’s choice
of name for those things he was charged by God to name was etymologically correct and not
arbitrary. This would be a reasonable deduction given that the language in question was itself the
language of Creation.
The linguistic theme is taken up again in Genesis 11:1. We are told that after the Flood and
the repopulation of the earth by Noah’s descendants, “the whole earth was of one language, and of one
speech.” Yet, men in their vanity and arrogance conceived a desire to rival God, and thus erect a tower
that would reach up to the heavens. To punish human pride, and to put a stop to the construction
of their tower we are told that God devised a plan: “Go to, let us go down, and there confound their
language, that they may not understand one another’s speech… Therefore is the name of it called Babel (as
represented in Figure 2.2); because the Lord did there confound the language of all the earth: and from thence did the Lord scatter them abroad upon the face of all the earth" (Genesis 11:7, 9).
2 In monotheist religions, the myth is given in the various sacred books. In Hindu culture, it is presented in the Mahabharata.
Figure 2.1: Image from the late-Byzantine, 14th-century Orthodox Holy Monastery of Saint Nicholas
of Anapafsas, Greece. Adam, in his naked innocence, is seen naming the animals as they pass before
him. The dragon-like creature, next to the lion(?) appears to be a remnant of a mediaeval bestiary.
Image from: https://commons.wikimedia.org/wiki/File:Adam_naming_animals_-_Moni_Ayou_Niko-
laou_(Meteora).jpg.
2.1 A LESS MYTHIC INTERPRETATION OF THE BABBLE
AFTER BABEL
Stories accounting for the multiplicity of human languages appear in nearly all mythologies and
theogonies. But it is a major leap from knowing that many languages exist to deciding that this
multiplicity is a fault or punishment that could be healed by a search for the imagined perfect
original language. Indeed, how would you know you had discovered the Ur-language, the language
of Eden?
For Ancient Greek philosophers, Greek was the language of thinking and ratiocination. This
was not a claim that the Greek language was a primary language: it was simply a case of the iden-
tification of thought with its natural vehicle. About the speech of barbarians or non-Greeks, the
Greeks knew little; hence, little was known about what it would be like to think in the language of
barbarians. The Greeks admitted that the Egyptians and the Babylonians possessed wisdom, only
because someone (Herodotus) had explained this to them in Greek.
As Greek civilization expanded, the status of Greek as a language also evolved. In the period
following the conquests of Alexander the Great (356–323 BCE), a common, universal form of
Greek spread rapidly—the koine. This was the language of Polybius, Strabo, Plutarch, Aristotle, and
of the Eastern Roman Empire; it was the language taught in the schools of grammar. Gradually
it became the official language of the Mediterranean world, and of the East of Alexander’s con-
quests. Spoken by patricians and savants, Greek survived under Roman domination becoming the
language of commerce and trade, of diplomacy, and of scientific and philosophical debate. It was
finally the language in which the first Christian texts were transmitted (Septuagint translation of
the Jewish Bible in the 3rd century BCE, and the Gospels in the first centuries AD), and it was
the language of the early Fathers of the Church. A civilization with an international language does
not need to worry about the multiplicity of tongues. Nevertheless, such a civilization can, and did,
worry about the rightness of its own tongue.
While the Greek koine continued to dominate the intellectual life of the Mediterranean
world, Latin was becoming the language of the administration of the empire, and thus the univer-
sal language for those parts of Europe conquered by the Roman legions. Once again, a civilization
with a common language is not troubled by the multiplicity of vulgar tongues. Learned Roman
patricians would discourse in Greek, but the rest of the Latin-speaking world needed translators.3
Despite this Mediterranean civilization, by the 2nd century AD savants began to study lan-
guages other than Latin and Greek, finding that human experience and wisdom could be expressed
just as well in other languages. The Greco-Roman world was changing; new religions and beliefs
were spreading from the East. Obscure revelations appeared—some were attributed to Persian magi,
others to an Egyptian divinity called Thoth-Hermes, to Chaldean oracles, and to Pythagorean and
Orphic traditions which, though born in early-Greek civilization, had been buried by rationalist
Greek philosophy. Today, we term these mystical beliefs Hermeticism, the product of the mythic
Hermes Trismegistus (see Figure 6.1). Classical rationalism, elaborated and re-elaborated over
centuries, began to show signs of age. With this loss of rationalism, the established religions entered
a period of crisis. The Imperial Pagan religion had become a purely formal affair of the law courts; a
simple expression of loyalty to the state. Each conquered people had been allowed to keep its own
gods. And these new gods were, as in all conquering empires, accommodated to the Latin pantheon;
no one bothering about contradictions, synonyms, or homonyms.
A result of this widespread syncretism was the creation of modern monotheism, with its be-
lief in a universal World Soul (an idea taken from Hinduism), a soul which subsisted in stars and in
earthly objects alike. Our own, individual souls were but small particles of the great Universal Soul.
However, as philosophers and savants proved unable to supply "truths" and detailed explanations about important matters (such as: What exactly happens after death?), men and women sought revelations beyond reason, through visions, and through mystical communication with the Godhead itself. This individual search for experience of the Divine led to mysticism being practiced by individuals, and to the search for the salvation of an individual soul, personal salvation, which was radically different from the basis of Pagan belief systems.
3 This arrogance about one's own language is one of the reasons for the political turmoil in the UK over Brexit. The British people have long been used to English being the universal language, or lingua franca; a situation that was certainly true for the century after the Battle of Waterloo, 1815. With the decline in the political status of the UK in the early 20th century and of the USA in the early 21st century, however, English is under threat as the lingua franca.
Perhaps the syncretic religion that most blended physical and metaphysical concepts (that is
blended matter and spirit, which the rationalist Greek philosophers had said could not be blended)
was Pythagoreanism. The founder of this school was Pythagoras of Samos (c.570–c.495 BCE),
whose political and religious teachings were well known in Magna Graecia and influenced the
philosophies of Plato and Aristotle, and, through them, Western philosophy. Knowledge of the
life of Pythagoras is clouded by legend. The teaching most securely identified with Pythagoras is
metempsychosis, or the transmigration of souls, which holds that every soul is immortal and, upon
death, enters into a new body. He may have also devised the doctrine of musica universalis, which
holds that the planets move according to strict mathematical rules (Isaac Newton would have
agreed with this idea) and thus resonate to produce an inaudible (to us) symphony of music.
In antiquity, Pythagoras was credited with many mathematical and scientific discoveries, in-
cluding: Pythagoras’ theorem; Pythagorean tuning; the five regular solids; the theory of proportions;
and the sphericity of the Earth. It was said that he was the first man to call himself a philosopher,
that is, a “lover of wisdom,” and that he was the first to divide the globe into five climatic zones.
Pythagoras influenced Plato, whose dialogues, especially his Timaeus, exhibit Pythagorean teach-
ings. Pythagorean ideas on mathematical perfection also impacted ancient Greek art. His teachings
underwent a major revival in the 1st century BCE among Platonists, leading to the rise of Neo-Py-
thagoreanism and Neo-Platonism. Pythagoras continued to be regarded as a great philosopher
throughout the Middle Ages and his philosophy had a major influence on scientists, or natural
philosophers, such as Nicolaus Copernicus, Johannes Kepler, and Isaac Newton. Pythagorean sym-
bolism led to early-modern European esotericism.
From the beginning, the Pythagoreans had regarded themselves as the keepers of a mystic
tradition of knowledge and practiced initiatory rites—something that always attracts attention
and new adherents. Their understanding of the laws of music and mathematics, as being the basis
for the physical world, was presented as the fruit of revelation obtained from the most ancient of
civilizations of which they were aware, the Egyptians. By the time of Pythagoreanism’s second ap-
pearance, however, Egyptian civilization had been eradicated by Greek civilization and then Latin
conquerors. Ancient Egypt had become an enigma, a set of incomprehensible hieroglyphs. Yet
there is nothing more fascinating than secret wisdom: one is sure that it exists and that it is hugely
important, but one does not know what it is. In the imagination, therefore, it acquires exaggerated
profundity. The language of Ancient Egypt, the hieroglyphs, naturally became the most ancient of
languages—that of symbols.
For Saint Augustine of Hippo (354–430), as for nearly all the early fathers of the Church,
Hebrew was the accepted primordial language. It was the language spoken before Babel, in the
Garden between God and Adam. After the confusion induced by the fall of the Tower of Babel,
Hebrew remained the tongue of the elected people. But Augustine was not interested in recovering
its use. He was at home in Latin, by now the language of the empire, the church, and theology. Sev-
eral centuries later, Isidore of Seville (560–636) found it easy to assume that, in any case, there were
three sacred languages—Hebrew, Greek, and Latin. With this conclusion, the task of determining
the language in which God had said "Fiat lux," which had brought forth the visible universe
out of nothing, became more difficult.
Figure 2.2: The Tower of Babel by Pieter Bruegel the Elder (1563). Image from: https://en.wikipe-
dia.org/wiki/Tower_of_Babel#/media/File:Pieter_Bruegel_the_Elder_-_The_Tower_of_Babel_(Vi-
enna)_-_Google_Art_Project.jpg.
There is one sense in which Saint Augustine did have a clear idea of a perfect language, com-
mon to all people. But this was not a language of words; it was, rather, a language made out of things
themselves. He viewed the world as a book written by God’s own hand. Those who knew how to
read this book were able to understand the allegories hidden in the scriptures, where beneath ref-
erences to simple earthly things (plants, animals, stories, etc.) lay hidden symbolic meanings. This
Language of the World, instituted by its Creator, could not be read, however, without a key; it was
the need to provide such a key that provoked a rapid outflowing of bestiaries, lapidaries, encyclo-
paedias, and imagines mundi throughout the Middle Ages. Many times in the last two millennia
European culture has seized upon hieroglyphs and other esoteric ideograms, believing that fundamental truths are expressed in emblems or symbols,4 and all we need do to return to the Golden
Age is comprehend those hieroglyphs [1].
Between the fall of the Roman Empire and the early Middle Ages, new languages came
into being, but without the nationalism of individual nations. It is believed that, toward the end
of the 5th century, people no longer spoke Latin, but rather Gallo-Romanic, Italico-Romanic,
early-Welsh (with Latin additions), or Hispano-Romanic, while savants, less gifted than previous
generations of savants, continued to write Latin, bastardizing it ever further. They heard around
them local dialects in which survivals of languages spoken before Roman civilization were grafted
onto, or crossed with new vernaculars arriving with the barbarian invaders.
This age, characterized as Dark, seemed to witness a recurrence of the catastrophe of Babel:
supposedly uncivilized and uneducated barbarians, peasants, artisans, the first Europeans—unlet-
tered and unversed in official Latin-Greek culture—spoke a multitude of vulgar tongues of which
official culture was unaware. It was the age that saw the birth of the languages which we speak
today. European culture, and the cultures of those nations which started as European colonies, were
all strongly influenced by these Dark Age vulgar tongues. European critical culture begins with the
reaction, often alarmed, to the explosive growth of the number of these tongues. Europe was forced
at the moment of its birth to confront the drama of linguistic fragmentation, and European culture
arose as a reflection on the perceived destiny of a multilingual civilization. Its prospects seemed
uncertain; a remedy for linguistic confusion needed to be sought. Some savants looked backward,
trying to rediscover the language spoken by Adam. Others looked ahead, seeking to create a rational
language possessing the perfections of the lost language of power spoken in Eden. It was the latter
that led to modern science, but the former path is still with us in the metaphysics of the search for
the Theory of Everything (see Chapter 11).
2.2 A MYSTICAL LANGUAGE
The mystical approach to seeking the secrets of sacred texts is the Jewish esoteric tradition, known
as the Kabbalah. In the 12th and 13th centuries, the Jewish communities of northern Spain and the
south of France developed a tool for the textual analysis of sacred texts. The Kabbalah is a mystical
technique for interpreting the first five Books of the Hebrew Bible, the Torah, and which regarded
creation itself as a purely linguistic phenomenon. Beneath the letters in which the Torah is written
today, the Kabbalist sought to identify the shape of what is termed the “eternal Torah,” which had
been created by God before He created the Universe, and which was believed by the Kabbalists to
be the blueprint for Creation.
4 The best-selling Foucault’s Pendulum (1988) by Umberto Eco is all about the search for supposed hidden knowl-
edge, or hidden truth; and explains how this endless search, for something that likely doesn’t exist has given rise
to so many widely believed conspiracy theories.
The Kabbalist seeks to use the existing sacred text as an instrument. He knows that beneath
the given text, beneath the familiar stories and events narrated in the Torah, there is another text
which reveals a mystical and metaphysical reality. To uncover this mystical reality, and thus to come
closer to the mind and intentions of the Divine, one must look beneath the literal narrative of the
written text. Indeed, a Kabbalist would say that a sacred text can be read in four ways: (1) there is
the simplistic or literal reading of the text; (2) there is an allegorical or philosophical manner in
which to read the text; (3) the text may also be read hermeneutically (encompassing everything in
the interpretative process including verbal and non-verbal forms of communication as well as prior
aspects that affect communication, such as presuppositions, previous interpretations, the meaning
and philosophy of language, and semiotics); and, finally, (4) the text may be read at the most pro-
found level—at a mystical level.
Just as the Kabbalists spoke of the four levels of meaning in a sacred text, the poet Dante, a
near contemporary of the Occitan Kabbalists (certainly of the greatest of the Iberian Kabbalists,
Abraham ben Samuel Abulafia, the founder of the school of Prophetic Kabbalah, who was born
in Zaragoza in 1240, and died sometime after 1291) and who knew of their ideas, considered that
there are also four levels of meaning in poetry. In the Divine Comedy (Inferno IX 61–63), Dante
speaks to the reader and tells him of the meaning that is hidden within the verses, “O you whose
intellects are sane and well,/ Look at the teaching which is here concealed/ Under the unfamiliar veil of
verses.” The reader is then led to understand the four meanings of the poem, that is, the literal
meaning, allegorical meaning, moral meaning, and finally anagogical meaning. Dante’s great poem
is seen as a journey through and beyond life; an allegory about the stages of the soul’s redemption;
a warning and guide, and a prophecy of Divine things to come.
For the Kabbalist, language was a self-contained universe where the structure of the language
represented the structure of physical reality. Thus, in contrast to the main schools of philosophy, in
the Kabbalah, language does not represent the world merely by referring to it. If God created the
world by uttering certain words or by combining certain letters, it follows that these elements were
not representations of pre-existing things, but are the very forms by which the Universe was shaped
and molded. The Divine Language was perfect not because it happened to reflect the structure of
the Universe, but because it actually created the Universe. The Divine Language spoken by God,
and used by Adam to name Creation, stands to the Universe in the same manner as the mold stands
to the object cast from it.
But, if there are secrets about how the Universe came into being hidden in well-known
sacred texts, why then is man not able to fully comprehend all the mysteries of the Universe and
to work prodigies by uttering similar combinations of letters from the Torah? The Kabbalists say that God hid the true meaning of the Torah after the fall of Adam; that is, He did not give man the correct order of the letters which compose the Torah, because if He had given man the true Torah, then anyone who could read this version would have the power to perform miracles.
To this end, the letters which form the Torah we have today have been considered, over millennia
by savants and linguists, as the basis-functions of other combinations which have been used in
an attempt to find the words of power which will empower the speaker to control Nature and to
work miracles. This is the reasoning behind the meditation techniques (on the Divine Name) of
Abraham Abulafia.
The Torah is interpreted as a mystical unity, whose primary purpose is not to convey a spe-
cific, simple, and literal meaning or story, but rather to express the immensity of God’s power which
is concentrated in His Name. The Kabbalists believe that the Name of God contains power, but at
the same time it maintains and upholds the laws and harmonious order which pervade and govern
the physical universe. Knowing the Name of God would enable man to penetrate the veil that sep-
arates the visible created world from the numinous; and just as God created the Universe by His
speech, the man who knows the Name of Power could also directly influence Creation.
The Kabbalists, and those influenced by them, also wished to read and fully comprehend the
esoteric and apocalyptic books of the Bible, in particular, the Book of Daniel and the Revelations
of Saint John. These Biblical texts were studied by Isaac Newton who was also searching for the
Divine or mystical language. The Kabbalists and Newton (who possessed books on the Kabbalah)
believed that Heaven and Earth were created by the uttering of the Name of God, and that the
whole history and story of Creation were to be found in the gnomic utterances of the prophetic
books of the Bible. The Torah was the source text in which to seek the power that had ordered
Creation. Such concepts about names of power, and the power contained in such names, may seem
more appropriate for the Classical or pre-Christian World, but names are very important, as we
shall see in Chapters 13 and 14. Quintus Valerius Soranus (c.140/130–82 BCE) was a poet and
Tribune of the Roman People at the end of the Roman Republic; he was crucified for revealing the
secret arcane name of the Deity of Rome. To name something, or somebody is to know or under-
stand that thing or that person, “to name is to know.” One did not reveal one’s name lightly as we
read in all our myths, fairy tales, and legends.
The mediaeval Kabbalists used combinatorial calculus (although they did not call it that)
to combine various strings of letters in the established Torah as a means of seeking the Name of
God. Intriguingly, at this same time and in exactly the same part of southern Europe, Ramón Llull
was using very similar ideas to perfect his universal language of the philosophical and theological
attributes of God. As we saw in Chapter 1, Llull was seeking a mechanical means of assisting the
human memory and ingenuity in the association of combinations of characteristics and philosoph-
ical attributes, which is precisely what the contemporary Kabbalists were doing in their prayers and
spiritual exercises.
The use of the Name of God to both affect and effect Creation is memorably expressed in the
legends concerning the Golem, where a man could create a living but (importantly) an unthinking
(that is, soulless) being from clay by use of the Ineffable Name of God. This story is beautifully
described in Gustav Meyrink’s novel, The Golem (published in serial form, 1913–1914), and in the
magnificent silent movie, Der Golem, wie er in die Welt kam, made by Paul Wegener in 1920 and based on this novel.
Of course, what investigations such as the above demonstrate is that man has long been
searching for something he believes he lost long ago, and the retrieval of which will bring about a
new Golden Age for humanity. It does not matter if you are a theologian, an Eastern mystic, or a
particle physicist; we are all looking for that lost something. The perfect language of man’s inno-
cence in the Garden. What was the language with which God conversed with Adam and in which
God commanded Adam to name Creation? This perfect proto-language, (if it could be re-created)
could be used to fully comprehend man’s place in the Cosmos, and perhaps allow man to manip-
ulate Nature itself. A great deal of the history of science, and nearly all pseudo-science is really
the record of man’s search for a simple language with which he could fully describe, comprehend,
manipulate, and, perhaps, foretell or predict Nature, thereby allowing all men to comprehend all
natural phenomena, whether known or as yet undiscovered.
2.3 FURTHER READING
1. A truly remarkable history of language, especially the more esoteric aspects of that history: The Search for the Perfect Language (1995); Umberto Eco; Great Britain, Blackwell Publishers Ltd.
CHAPTER 3
The Mixing of Physics and
Metaphysics to Create a Language of
Curiosity
It is astonishing how many foolish things one can temporarily believe if one thinks too long
alone.
J.M. Keynes (1883–1946)
Languages are magical. They are the means of communicating to others our innermost secrets and
thoughts, our desires and ideas. Such communication is not easy as we are not telepathic; we must
construct sentences from the multitude of words in our memories. Such vocabulary is rendered into
something that resembles our thoughts, via the rules of grammar, which are different for different
languages but always serve the same purpose: to bring well-defined order out of chaos. But this
process of verbal communication is rendered complex as no two people will describe something
they both see in the same way; similarly, no two translators will render the same original text into
exactly the same English text. Each of us brings to language, and to communication, our own ex-
periences and limitations.
There is, perhaps, no better way of exploring man’s abiding obsession with magic, with the
occult, and with the hermetic arts than by looking at the ideas that have arisen about the creation
of a single universal language: a language without ambiguity that would allow humanity to return
to the Golden Age of simplicity, and the innocence of the Garden of Eden.
3.1 THE BIRTH PANGS OF MODERN SCIENCE
The 17th century was full of the reciprocal influences of mysticism on science, and science on mys-
ticism; all mixed together by the solvent of philosophy, and inexplicable observations of Nature. It
is said to be the time when astrology, alchemy, and magic yielded to Sir Isaac Newton, to scientific
rationalism and universal laws. But is this really true? Is it not more the case that astrology and popular folk magic were still present in society, and widely accepted by all levels of that society, when Newton published the most influential of all textbooks of physics, his Philosophiæ Naturalis Principia Mathematica, in 1687 and 1713? Indeed, the magic, religion, and superstitions of that period
merely blended into the Newtonian view of the Universe; magic and proto-science were not immiscible fluids in the 17th century. Newton himself was not averse to experimental investigations of alchemy (he wrote a million words on the subject), nor to attempting to comprehend Biblical
prophecies, and he was a respected caster of horoscopes.
By the end of the 17th century, this mixture of popular and erudite beliefs had flowed to-
gether, and then been overlaid by something else. Today, it is as pointless to say that one set of be-
liefs gave way to, or was replaced by, another set of beliefs as it is to say that the Middle Ages ended
in the mid-15th century, and were replaced by the Renaissance. No Age ever really ends, unless by
the agency of a major catastrophe; the pre-existing Age is merely inundated by the succeeding Age.
It is still there; just buried out of sight, and if some event causes the newer Age to be stripped away,
the older Age will reappear as if nothing had happened in the intervening period. Today, belief in
folk magic, superstition, and the supernatural is still with us; many people today follow
astrology, as did their ancestors in the time of Newton.
What is true to say is that the 17th century in Europe was a period of intense spiritual
awakening. The continent was a bubbling cauldron of ideas and beliefs, where savants were redis-
covering the ideas of the Neo-Platonists and Hermeticists. The Rosicrucian manifestos exploded
into a world waiting and wishing for something to happen; into a world shaking off much of the
sterile baggage of mediaeval Christian Scholastic dogma, and seeking new spiritual directions and
dimensions. The Rosicrucian manifestos were a potent stimulus in a period when men were seek-
ing renewal; a new way of looking at a world which the geographical discoveries of the previous century had shown to be larger and more complex than had previously been imagined. Some 17th-century savants pored over magical texts; others labored at forges, melting and distilling
metals; other thoughtful, wise men sought to understand the stars, and to comprehend their silent,
slow, sacred dances; and still others invented secret alphabets and universal languages attempting
to better understand the ordering of Creation. All such men were looking not only to understand
Nature, but also to control Nature.
In that period when magic and science mixed freely, everything was considered to be the
hieroglyph of something else, and nothing was more lambent, more exciting, than a secret cypher.
Galileo Galilei (1564–1642) was dropping weights from the Leaning Tower of Pisa and watching
the isochronous oscillations of the chandeliers inside the nearby Cathedral of Pisa. In France, Car-
dinal Armand de Richelieu (1585–1642) was seeking to create a new political and economic order
in Europe. All had their eyes peeled for signs and portents. All were searching for the unusual, for
the new. The attractive pull of Newton’s gravity and the oscillations of the pendulum became ob-
sessions, and men not unnaturally reasoned that there must be something more; something quite
different that lay behind, or perhaps above, visible Nature. Another Italian savant, Evangelista Torricelli (1608–1647), inverted a long glass tube filled with mercury, with the open end of the tube in a bowl of mercury, and so invented the barometer, with a vacuum at the top of the sealed column of mercury, showing that man could recreate the primal nothingness or void. Torricelli may not have
understood the physics of atmospheric pressure and how it changes from day to day, and from place
to place; but he did know that he had captured a sample of the primordial nothingness from which
the Cosmos had been created by God’s command. To Torricelli’s and Galileo’s contemporaries, such
experiments were as much about magic as they were about science, because there was no clear way
of distinguishing between these two ways of looking at Nature (see Figure 3.1).
Figure 3.1: A table of symbols for celestial bodies, from the Berliner
Astronomisches Jahrbuch of 1850. This list is from the mid 19th century,
yet the symbols of the planets used date from Antiquity. A planet sym-
bol is a graphical symbol used in astrology and astronomy to represent a
planet, including the Sun (Sonne in Figure 3.1) and the Moon (Mond in
Figure 3.1). The symbols are also used in alchemy to represent the metals
that are associated with the planets. The use of these symbols is based
in ancient Greco-Roman astronomy, although their current shapes are a
development of the 16th century. The International Astronomical Union
discourages the use of these symbols in modern publications, and their
style manual proposes one- and two-letter abbreviations for the names
of the planets: Mercury (Me), Venus (V), Earth (E), Mars (Ma), Jupiter
( J), Saturn (S), Uranus (U), and Neptune (N). The symbols of Venus and Mars are also used to represent
the female and the male in biology and botany, following a convention introduced by Linnaeus (see
Chapter X) in the 1750s. Even today, it is often difficult to separate the scientific and a more… magical
description of Nature. (Image from: https://en.wikipedia.org/wiki/Planet_symbols#/media/File:Beze-
ichnung_der_Himmelskörper_Encke_1850.png.)
Indeed, at this time it is just about impossible to separate the world of magic and the world
of science, where science is defined as the world of verifiable fact. Savants who today are held up as paragons of rationalism, the standard-bearers of the scientific method, of mathematical and physical enlightenment—such as Isaac Newton (1643–1727), Robert Hooke (1635–1703), Blaise Pascal (1623–1662), René Descartes (1596–1650), and Francis Bacon (1561–1626)—turn up in what we might call the fog of superstition which clung to their age. Many of these men
worked with one foot in the laboratory, and the other foot in the Kabbalah.5
The savants of the 17th century may have sought to understand the workings of the natural
world, but they also dabbled in alchemy, biblical prophecy, and astrological cénacles. Perhaps Newton
arrived at his Universal Law of Gravitational attraction because he believed in the existence of
occult forces, which recalled his life-long investigation into astrology (the influence of the orien-
tation of the stars and the planets on a man’s life) where he ended up proving that there is a force
between the planets and the stars, and that this force is in the nature of an attraction, and would be differently experienced by each and every infant.
5 But as Joseph Needham (1900–1995), one of the wisest of 20th-century savants, commented, "Laboratorium est oratorium"—the place where we do our experiments is also a place of prayer and contemplation.
Right up until his last days, Isaac Newton was working on a vast program that had obsessed
him for half a century: to understand what he called the "mystic language" and thereby to fully
understand the Divine Word as it came mysteriously from the mouths of the Prophets. A portion
of his voluminous manuscripts on this work was published after his death, in 1733, as Observations Upon the Prophecies of Daniel and the Apocalypse of Saint John. Newton, who today is held up as the exemplar of rationality in a pre-scientific, barbarous, and superstitious world, was not alone in this
more mystical type of investigation. Many of Newton’s contemporaries worked on projects of this
type, which they hoped would lead to a renewal of Christianity and the spiritual enlightenment of
man, not based on the views and authority of the Pope, or of a small group of isolated theologians
and churchmen, but instead based on reason which would be accessible to all. They were seeking
an unchanging universal standard of spiritual awareness; one not tied to artifacts kept in the vaults
in Rome or Canterbury.
Today, many get embarrassed at the mention of magic, but even until the latter part of the
17th century, practical magic was an accepted part of people's lives. For the scientist or natural
philosopher of that period, the possibility of magic and magical acts and interventions was a funda-
mental presupposition. That the Bible contained fundamental truths about the place of man in the
Cosmos was another presupposition, as was the possibility of identifying and using the Language
of Creation.
The prevailing cosmology of that time was of an inanimate Earth, or elemental world subject
to the influence of the heavenly bodies, or stars and planets. This in itself was sufficient to encourage speculation about the astral origins of earthly phenomena, to give rise to much lore about the astrologically derived properties of plants and minerals, and to prompt speculation as to whether magic could be used to gain power over Nature; hence the similitude between the astrological/astronomical and alchemical symbols seen in Figure 3.1. The chemical/alchemical experiments
of the magician, or natural philosopher, were often devised to identify and expose divine harmonies
and correspondences between metals and planets. It also suggested that the magician might be able
to find some means of tapping into the influence of the stars and diverting it to other purposes.
This is Neo-Platonism, which had for almost 2,000 years fostered a belief system that blurred the difference between matter and spirit, a distinction which had been the problem of philosophy in the West since philosophy was first established in Ancient Greece; the Ancient Chinese were sensible enough
never to have made such a separation. Instead of being regarded as an inanimate object, the Earth
itself was deemed to be alive. (Today, we have returned to this essentially pantheistic view of Nature
with the Gaia hypothesis of the self-regulating biosphere.) With its blurring of the boundaries
between matter and spirit, Neo-Platonism also emphasized the influence of the human imagination
upon the human body, of the mind upon matter and of words, incantations, and written charms
upon physical objects. By the exercise of his imagination, and the use of magic, symbols, and in-
cantations, the operator or magician (the magus) could transform either himself or his subject. The
invisible powers of Nature were believed to be analogous to the invisible powers of the magnet,
which could be clearly seen to act at a distance and penetrate matter with its invisible rays (see
Figure 12.1). Since the world was a pulsating mass of vital influences and invisible spirits, it was
only necessary that the magician should devise the appropriate technique to tap into them, and then
to be able to work wonders. Today, we would say the world is a complex field of interacting virtual
particles, as in the quantum field theory view of the vacuum; perhaps a single seamless manifestation arising from the never-resting quantum flux of the universal vacuum. That is, at the quantum level
the phenomena of Nature are more closely coupled than the savants of the 17th century could ever
have imagined.
In the 17th century, the Universe was believed to be peopled by a hierarchy of spirits thought
to manifest all kinds of occult influences and sympathies, all linked together by invisible force fields
through the Pythagorean Music or Harmony of the Spheres, and Newton’s gravity. The Cosmos
was thought of as an organic unity in which every part, or manifestation bore a sympathetic re-
lationship to every other part. Even colors, letters, and numbers were endowed with magical or
mystical properties. The investigation of such phenomena was the primary task of the early scientist
or natural philosopher. Modern science merely grew out of our magical and religious view of the
world around us; it was able to correctly answer a few more of the questions that man kept asking.
The project of finding the perfect, universal, divine or mystical language was something
Newton followed all his life. It was a search for an unequivocal means of understanding the texts,
which were the medium through which God communicated with man. First, Newton derived a
mathematical language for formulating and predicting the dynamics of the heavens; then he would
apply these techniques to the Bible and seek to tell us of the future. For Newton, numbers and
equations were akin to those signs and clues by which the magicians or natural philosophers had
first attempted to uncover the secrets of Nature. Newton regarded Nature as a cryptogram created by God; as he put it, “Numero pondere et mensura Deus omnia condidit” (God created everything by number,
weight and measure). By pure reason, by hard work, by achieving wisdom through experimentation,
this riddle could be solved by the dedicated savant. The question was one of discrimination; how
do we tell the difference between the word of God and mere human words?
The 17th-century search for a universal language was not new, but was a natural wish in light
of the gradual decline of Latin. Literature in vernacular languages became more prominent in the
Renaissance, and during the 17th century learned works written by savants for other savants largely
ceased to be written in Latin; and the rise of printing spread these vernacular texts. But what these
savants observed was that the vernacular languages were not as logical or coherent as they would
like for a medium of communication, which they wished to use to truly describe the world they
saw around them. This desire for a reformed language, a philosophical language where words would
perfectly describe objects, animals, and natural phenomena, and where the names of related objects,
animals, and phenomena would be related to each other, led savants to consider again the Divine
Language which Adam had used to name Nature (see Figure 2.1).
The German savant, and competitor with Isaac Newton, Gottfried Leibniz conceived of a
“characteristica universalis” or universal character of a language; that is, an algebra capable of ex-
pressing all conceptual thought. This algebra would include rules for symbolic manipulation, what
Leibniz called a calculus ratiocinator (a means of calculating the correct choice). His goal was to put
debate, argument, and reasoning on a firmer basis by reducing much of it to a matter of calculation
that men could grasp intuitively. This is, of course, what Ramón Llull had been seeking to achieve
in the 13th century with his mechanical means of combining characteristics (see Chapter 1). There
was no need to learn the endless lists of the current meanings of words, in the various vernacular
languages; one could simply use algebra and mathematical manipulations on a small set of base
concepts, or base units to describe everything in Nature. The characteristica would build an alphabet
of human reasoning—a calculus of thought. It would be akin to throwing away the Catalogue of
Ships found in Book 2 of Homer’s Illiad, and instead looking for what it was you wished to know
in the shield of swift-footed Achilles (see Chapter 1).
3.2 GOTTFRIED LEIBNIZ AND THE NATURE OF THE
UNIVERSE
One of Leibniz’s most important developments was that of binary arithmetic. Although Leibniz
was not the first to conceive of this arithmetic, he did formulate it coherently and believed that the
Ancient Chinese must have known about it, on the grounds that it was implicit in the I Ching (see
Chapter 5).
The binary system is the simplest notation for numerals. Our decimal system has a choice of
ten characters for each place (units, tens, hundreds, etc.). In the binary system, there are only two
characters: one to designate an empty place, the other to indicate that the place is filled. Using the
convention of 0 for empty, and 1 for filled, the system runs as follows: conventional decimal num-
bers (binary equivalents): 0 (0), 1 (1), 2 (10), 3 (11), 4 (100), 5 (101), 6 (110), 7 (111), 8 (1000), 9
(1001), etc. Although Leibniz was proud of his discovery of binary arithmetic, he did little with it.
This is a pity, because if the arithmetic used by machines had been developed in the 17th century we
might have been able to develop calculating machines long before the 19th century. However, the
binary system came into its own only with the advent of semiconducting electronics in the 1960s
(an electron is either in a site, or it is not in that site—see footnote on Page 7). As far as Leibniz was
concerned, the greatest significance of his work on binary numbers was metaphysical, as showing
how the Universe could be seen as constructed out of a number.
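The correspondence Leibniz prized is easy to reproduce today. A minimal sketch in Python (a modern convenience used here purely for illustration) prints the first few decimal numbers alongside their binary equivalents, using only the two characters 0 (an empty place) and 1 (a filled place):

    # Print decimal numbers 0-9 with their binary equivalents,
    # matching the sequence listed in the text above.
    for n in range(10):
        print(n, format(n, "b"))
    # 0 0, 1 1, 2 10, 3 11, 4 100, 5 101, 6 110, 7 111, 8 1000, 9 1001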
Gottfried Leibniz’s metaphysical account of the distinction between God and the visible uni-
verse had both mystical and moral repercussions. Leibniz held that the visible, created universe was
distinct from God by virtue of its passive, material, and mechanistic nature. This led him to construe
that matter is unreal, which means that the materiality of the world consists in an admixture of un-
reality, or not-being. God is a pure being: matter is a compound of being and nothingness. Leibniz
elevated this idea into what he called a mystical theology, by developing the ideas of Pythagoras,
who held that numbers (and the ratios of numbers; that is, proportions) were the ultimate realities,
and that the Universe as a whole was harmonious. That is, it manifested simple mathematical ratios,
like those of the basic intervals in music; the idea of the music of the spheres arose with Pythagoras
back in the 6th century BCE. Leibniz’s contribution was to make these numbers binary. He was
thus able to say that just as the whole of arithmetic could be derived from 1 and 0, so the Universe
was generated out of pure Being (God) and Nothingness. God’s creative act was therefore at one
and the same time a voluntary dilution of His own essence, and a mathematical computation of
the most perfect number derivable from combinations of 1 and 0. Binary arithmetic was not merely
a convenient notation for the hierarchy of all possible concepts, but it was the best way of repre-
senting their very essence, with 1 and 0 themselves functioning as the only basic concepts. Thus,
Creation becomes a dialectic between something and nothing; between 1 and 0.
The 17th century produced many proposals for philosophical languages. The best known of
these proposed philosophical languages, which we shall examine in Chapter 6, was that proposed
by the Rev. John Wilkins in 1668 (An Essay toward a Real Character and a Philosophical Language),
the classification scheme of which ultimately led to the Thesaurus.
The search for the universal or perfect language originally concerned an attempt to redis-
cover the primal matrix language. For many centuries, the leading candidate for this original or
primordial language was Hebrew. Then in the 18th century, this search finally lost its utopian fer-
vour, and its mystical component as the science of linguistics and the concepts of semiotics were
born, and with them the Indo-European hypothesis of the origin of modern languages. For a long
time, however, the idea of a primogenital language not only had a kind of historical validity—to
rediscover the speech of all mankind before the confusion generated with the fall of the Tower
of Babel—but it also entertained the minds of some of the greatest writers and thinkers of the
Middle Ages and the Renaissance. This original language should incorporate a natural relationship
between the words and the things we see around us in our world. Adam was told by God to name
Creation, but how did he choose the names he used? The primordial language had a revelatory
value, for in speaking it the speaker would automatically recognize the nature of the named reality,
and it might even be possible for men to effect miraculous changes in Nature merely by speaking
this original mother tongue.
A lot has changed over the last two millennia, but today science is the only possible universal
language that man can ever know. Science is the only endeavor which creates an absolute authority
to which we must all respond; science cannot be ignored, nor can it be defied, and the laws of phys-
ics are absolute. It is only through science that we can hope to understand how the Universe came
into existence, and why it is the way we observe and measure it to be. And the basic or semantic
primes of the modern language of science are the base units or quantities, which we use to describe
all known physical and chemical phenomena; indeed, these are also the base units and quantities
we would use to try and understand any new phenomena as yet unobserved.
This search for the perfect language, which is still ongoing in the physics community, could,
perhaps, be considered ridiculous and pointless, but it could also be seen as arising from uneasiness,
because people would like to find in the words we all use an expression of the way the world works
and how history unfolds, and in this we have been regularly disappointed.
3.3 FURTHER READING
For all aspects of the rise of science from an occult and magical background, the works of Frances
Amelia Yates (1899–1981) are all worth reading. She was an English historian who focused on
the study of the Renaissance, and also wrote books on the subject of the history of esotericism. In
1964, Yates published Giordano Bruno and the Hermetic Tradition, an examination of Bruno, which
came to be seen as her most significant publication. In this book, she emphasised the role of Her-
meticism in Bruno’s works, and the role that magic and mysticism played in Renaissance thinking.
She wrote extensively on the occult or Neoplatonic philosophies of the Renaissance. Her books, The
Occult Philosophy in the Elizabethan Age (1979), The Art of Memory (1966), and The Rosicrucian En-
lightenment (1972), are major contributions, where the author deals with the supposed remoteness
and inaccessibility of studies of magic and of the Hermetic arts. These volumes are available from
Routledge Classics, Oxford, an imprint of Taylor and Francis.
A truly remarkable history of languages, especially the more esoteric aspects of that history:
The Search for the Perfect Language (1995); Umberto Eco; Great Britain, Blackwell Publishers Ltd.
CHAPTER 4
The Transformation of Magic and
Mysticism into Science
So far as it goes, a small thing may give an analogy of great things, and show the tracks of
knowledge.
Lucretius (99–55 BCE)
As mentioned in the Introduction, science, or rather the scientific worldview, was developed in early
societies after the development of religion, and of a magical or mythological view of Nature. Men
began to observe Nature with the desire to understand her better. They would have observed the
progression of the Sun and the Moon: the ephemerides. This information would first have been
memorized as literacy had yet to develop. As we have seen, eventually the long lists of observed
phenomena and things would have been preserved, perhaps in painting, carvings, or the form of
orally transmitted poetry or myth. And after many hundreds of generations, these myths, sculptures,
and images would have formed the basis of a scientific interpretation of Nature; that is, first ob-
servation, then hypothesis, and then further confirmatory observation and, perhaps, observational
proof of hypothesis. Let us now consider how this transformation to a scientific view-point came
about. Given that Chinese civilization is our oldest and best documented record of how societies
developed, evolved, and became interconnected, it is in Ancient China that we must look for the
origins of proto-science (see Further Reading).
What was the main motive for the early natural philosophers, or Taoists of Ancient China,
that compelled them to engage in the observation and study of Nature? The answer is straight-
forward: to gain that peace of mind that comes from having formulated an hypothesis, however
simple and provisional, about the most terrifying manifestations of Nature. Nature would have been
seen as being all-powerful, and indifferent to the suffering of man. These ancient societies would
have seen that when angered, Nature was able to easily perturb and destroy the fragile structure of
human society. Nature still has this power, as the effects of global climate change upon our engi-
neered structures gather pace.
Whether the natural phenomena studied by the Taoists were earthquakes, volcanic eruptions,
floods, storms, or the various forms of plagues and disease, at the beginning of the adventure of
science man felt himself to be stronger, more secure, once he had differentiated and classified the
phenomena that assailed him. This security was particularly the case when he could name those
plagues and disasters. As we will see in many places in this work, “to name is to know”; if you can
name something, you have power over it—or you think you do. The name given to a newly identified violent natural phenomenon or plague would capture some character of that violent, destructive event. Thus, the men who observed Nature, and named and described
the observed natural phenomena, would have formulated some naturalistic theory about the origin,
and likely re-occurrence of those phenomena. This proto-scientific peace of mind was known to the
Chinese as ching hsin (see Figure 4.1). The atomistic followers of Democritus and Epicurus in the
West knew it as ataraxy (calmness, or peace of mind; emotional tranquillity).
Figure 4.1: Early Spring (a hanging scroll painting) by Guo Xi. Completed in 1072, this is one of the
most famous pieces of Chinese art from the Song Dynasty (960–1279). The painting is a meditation on
Nature. The poem in the upper-right corner was added in 1759 by the Qianlong Emperor; it reads: The
trees are just beginning to sprout leaves; the frozen brook begins to melt./ A building is placed on the highest
ground, where the immortals reside. / There is nothing between the willow and peach trees to clutter up the
scene. / Steam-like mist can be seen early in the morning on the springtime mountain. Image from: https://
en.wikipedia.org/wiki/Early_Spring_(painting)#/media/File:Guo_Xi_-_Early_Spring_(large).jpg.
The Book of Master Chuang by Chuang Tzu, dating from the Warring States Period (476–221
BCE) tells us that “The true men of old had no anxiety when they awoke, forgot all fear of death and
composedly went and came.” These ancient men knew from their studies of Nature that there was
an order behind the apparent violent indifference of Nature. For his part, Chuang Tzu, and other
Taoists speak of “Riding on the Normality of the Universe,” or on the “Infinity of Nature,” and thus de-
scribe the sense of liberation which could be attained by those who could remove themselves from
the petty squabbles of human society, and unify themselves with the great mystery of Nature; that
is, to leave society and study Nature and become nature mystics (a beautiful term for a scientist).
They had observed Nature, and had seen that it is possible for man to live in harmony with Nature,
and not be the passive subject of Nature’s more violent manifestations. The same confidence in the
power of observation, and the security that comes from it, is found in the West in Lucretius (c.99–c.55
BCE). The De Rerum Natura speaks of observation and deduction (that is, modern empirical sci-
ence) as the only remedy for the numerous fears of mankind. The following lines are repeated three
times in Lucretius’ poem:
These terrors, then, this darkness of the mind
Not sunrise with its flaring spokes of light
Nor glittering arrows of morning can disperse
But only Nature’s aspect and her Law.
(Translation by William Ellery Leonard)
In modern science, the relationship between the rational and the empirical is seen to be so
obvious as to require no explanation. However, this was not always the case. To emerge into the
light in Europe, the modern scientific method had to struggle against the dead-hand of formal,
mediaeval scholastic rationalism. Even up until the early-17th century, the proper marriage of
rational thought to empirical observations had not been consummated. At this time, it was con-
sidered, in the ironic but Taoist words of Robert Boyle (1627–1691), the “father of chemistry” and author of the Sceptical Chymist of 1661, “much more high and Philosophical to discover things a priore than a posteriore.”
The origins of modern science are to be found in the interaction of four tendencies: two on
one side of the argument, and two on the other side. On one side, theological philosophy allied
itself with Aristotelian scholastic rationalism to oppose those natural philosophers who wished to
more fully understand Nature by observation. On the other side of the argument were those natural philosophers who wished to use experimental empiricism to explore Nature. This latter group would
be experimentalists who reacted against Thomist scholasticism, and who found a powerful ally in
mysticism. In the European Middle Ages, Christian theology, given its universal domination, was
on both sides of the arguments about the rise of the modern scientific method; all those for and
against the development of the scientific method would have been believers. But while rational
theology was anti-scientific, mystical theology, or a mystical or spiritual view of the Divine, in all
its aspects proved to be pro-scientific. The explanation for this apparent contradiction is to be found
in the nature of magic; that essential pre-scientific element from which science evolved. Rational
theology was vehemently anti-magical (all those burnings of witches and heretics), but mystical
theology tended to be more tolerant of magic and belief in magic; there was an affinity between
mystical theology and Hermeticism in Europe, and that affinity arose from the study of Nature.
But the fundamental cleavage here was not between those who were prepared to use reason
to understand Nature, and those who felt reason to be insufficient to understand Nature, but be-
tween those who were prepared to use their hands and those who refused to do so; between experi-
mentalists and theoreticians. The Vatican theologians of the Inquisition who declined Galileo’s offer
to look through his telescope and to see for themselves that the Ptolemaic System was incorrect,
were Scholastics and so they believed they were already in possession of sufficient knowledge about
the visible universe. If after looking through his telescope, Galileo’s findings agreed with Aristotle
and with Saint Thomas Aquinas, there was no point in looking through the telescope; everything
was already contained in the philosophy of the Angelic Doctor, Saint Thomas Aquinas. If the obser-
vations from the telescope did not agree with established dogma, they would have been dismissed,
and condemned by the churchmen as magical and Neo-Platonic. As it turned out, there was a fun-
damental difference between Galileo’s observations and the philosophy of Saint Thomas Aquinas;
a difference that the Church of Rome did not accept until 1992.6 Those Renaissance magicians,
Nature mystics, and Neo-Platonists had discovered a new way of looking at Nature.
This explanation of the origins of the modern scientific method explains so much about
the “less rational” interests of those earliest scientists. Why it was that men such as Isaac Newton,
Robert Boyle, and Sir Thomas Browne (1605–1682) were interested in the Kabbalah and astrology,
believing that such ancient mystical doctrines, the Hermetica, contained ideas of value to them in
their new empirical studies. Typical of the attitude of these 17th-century experimentalists (the key
here is that they were all experimentalists who ‘got their hands dirty’ in the laboratory) was that of
the Flemish chemist, Jan Baptist van Helmont (1580–1644). Van Helmont was one of the founders
of biochemistry; he was among the first to use a balance in quantitative experiments, he devised
an early-form of thermometer, and he demonstrated the acid in the stomach and the neutralizing
alkali of the duodenum. Yet for all this detailed interest in the reproducibility of experiments, van
Helmont was also deeply anti-rational, displaying an almost religious empiricism. He attacked the
hair-splitting formal logic, scholastic logic, which he felt had little relation to observable reality, but merely trapped the mind in an endless circular argument. In truth, van Helmont was a European Taoist who believed in the need to observe Nature in order to understand Nature.
6 In 1633, the Inquisition of the Roman Catholic Church forced Galileo Galilei to recant his theory that the Earth moves around the Sun. Under threat of torture, Galileo recanted. But as he left the courtroom, he is said to have muttered, “all the same, it moves.” 359 years later, the Church finally agreed. At a ceremony in Rome, before the Pontifical Academy of Sciences, Saint Pope John Paul II officially declared that Galileo was right. The formal rehabilitation was based on the findings of a committee of the Academy the Pope set up in 1979, soon after taking office. The committee decided the Inquisition had acted in good faith, but was wrong. The Inquisition’s verdict was uncannily similar to cautious statements by modern officialdom on more recent scientific conclusions, such as predictions about greenhouse warming and climate change. The Inquisition ruled that Galileo could not prove “beyond doubt” that the Earth orbits the Sun, so they could not reinterpret scriptures implying otherwise.
It may be said, therefore, that at the early stages of modern science in Europe, the mystical
(nature mysticism, see Figure 4.1) was often more helpful than the rationalist approach, when
it came to finding an explanation or a cause of an observed phenomenon. This situation exactly
mirrors the rise of a scientific-like worldview among the philosophical scholars of Ancient China.
Resting on the value they placed on manual operations, that is, on doing experiments rather than merely thinking up an explanation and leaving that theory untested, men such as van Helmont and Isaac Newton were active laboratory workers as well as thinkers and writers. The
equipment they built to undertake their experiments is still with us today. The Confucian social
scholastics of Ancient China, like the rationalist Aristotelians and Thomists of mediaeval Europe
nearly two millennia later, had neither sympathy for, nor interest in manual operations. Hence,
practical science and magic were together driven into mystical heterodoxy. It is the association of
nature mysticism and empiricism that is the foundation of post-Renaissance scientific thought in
the West.
This same amalgam of empiricism and mystical theology, leading to a scientific worldview,
can also be seen in Islam. The Brethren of Sincerity was an organization formed in Basra, Iraq, in
about 950. Like the Chinese Taoists 1500 years earlier, and the Christian nature mystics of the 17th
century, this semi-secret society had, at one and the same time, mystical, scientific, and political
interests. The men who met in Basra acknowledged the existence of mysteries transcending reason,
and believed in the efficacy of experimentation, particularly, manual laboratory experimentation in
seeking to study those mysteries. All the savants involved in these early phases of the development
of science recognized that effects may be brought about by specific manual operations without our
being able to say exactly how or why; and they further believed that this unexplained data and in-
formation ought to be recorded and accumulated for future generations of savants. Their opponents,
and often persecutors (as with their Christian and Chinese colleagues), believed that the nature
of the Universe could be apprehended by ratiocination alone, and that quite enough information
had already been made available to scholars, and that in any case, the use of the hands to do work
of any kind was unworthy of individuals claiming to be scholars. The early proto-scientists were
thus in a dilemma, for they could either set up a ratiocination of their own consisting of obviously
inadequate theories and models, or rest on the thesis that “there are more things in heaven and earth,
Horatio, than are dreamt of in your philosophy,” and seek elucidation of the unexplained by further
observation and study. Only cycles of experimentation and hypothesis would allow a resolution of
this situation; thus, the modern scientific worldview was born.
There is of course a great difference between the nature mysticism or mystical naturalism,
which triggered the creation of modern science (but which did not lead to the scientific triumphal-
ism of the late 19th century) and other forms of mysticism, which are focused in purely religious
contemplation, or meditation upon a God or gods. As is often said by theologians, religion is a
belief in someone else’s experience of the Divine, while spirituality is having your own experience of
the Divine. The Ancient Chinese Taoists and the first European scientists learned that by looking at
the world, and thinking about what it was they were observing, a larger number of individuals could
gain a first-hand experience of the Divine; that is a gnosis (a revelation of knowledge) as the Ancient
Greeks and early Christians would have described this glimpse of the truth, or this discovery.
All that nature mysticism asserts is that there is much in the Universe that transcends human
reason, but since it required the empirical as well as the rational for comprehension, it also implied that the
sum total of incomprehensibility will diminish if men humbly (and without pre-conceived ideas)
explore the occult properties and relations of things. Religious mysticism is very different; it dotes
on an arbitrary uniqueness, and seeks to minimize or deny the value of investigations of the natural
world. It is authority-denying mysticism, not rationalism, which at certain times in world history
assists the growth of experimental science. And one may readily see why such upwellings of the sci-
entific worldview often correspond with periods of social progress. As we read in the Tao Te Ching:
“Everyone recognises good as good, and thus what is not good… is also known.”
4.1 FURTHER READING
In the sections of this present work where I discuss Ancient China, I will be making use of the
magisterial Science and Civilization in China (published 1956, re-published 1975), Joseph Need-
ham; Cambridge, Cambridge University Press. In this chapter, I make reference to Volume II of
this multi-volume work, The History of Scientific Thought.
CHAPTER 5
The I Ching as a
Model of the Cosmos
The situations depicted in the Book of Changes are the primary data of life—what happens to
everybody, every day, and what is simple and easy to understand.
Hellmut Wilhelm (1909–1990)7
The I Ching, also known as the Book of Changes, is an ancient Chinese divination text, and it is one
of the oldest pieces of Chinese literature.8 The text has been used for well over two millennia as a
cultural reference, and has inspired ideas in religion, psychoanalysis, literature, and art. Indeed, the
text has had a profound influence on western culture, but it originated as a divination manual in the
Western Zhou Period (1000–750 BCE) of Ancient China. Then during the Warring States Period
(475–221 BCE) and early Imperial Period, the I Ching was transformed into a cosmological text
with a series of philosophical commentaries known as the Ten Wings. After becoming part of the
Five Classics in the 2nd century BCE, the I Ching was the subject of scholarly commentary, and the
basis for divination practice for centuries across the Far East, and eventually took on an influential
role in western understanding of eastern thought. Various modern scholars suggest dates for the
original text ranging between the 10th and 4th centuries BCE.
The form of divination involved in the I Ching is a type of cleromancy, which in the I Ching
concerns the generation and interpretation of random numbers represented as figures termed
hexagrams. The interpretation of the random readings via the content of the I Ching is a matter of
many centuries of debate, and many commentators have used the book symbolically, often to pro-
vide guidance for moral decision making informed by Taoist and Confucian ideals. The hexagrams
themselves have acquired cosmological significance and become associated with other processes of
change such as the coupled forces Yin and Yang and the five Chinese elements, Wu Xing. Many
believe that the I Ching is a book containing an explanation of all the laws of physics, an explanation of how everything is governed, and carries explicit directions on how men should conduct themselves in order to remain continually in harmony with these natural laws.
7 Hellmut Wilhelm was the son of Richard Wilhelm (1873–1930), the German sinologist, theologian, and missionary, who is best remembered for his translations of philosophical works from Chinese into German, which in turn have been translated into other languages. His translation of the I Ching is still regarded as one of the finest, as is his translation of The Secret of the Golden Flower; both were provided with introductions by the Swiss psychoanalyst Carl Jung, who was a personal friend.
8 According to Google searches, the I Ching comes higher in the list of the most influential books than both the Old Testament and the New Testament.
Within both modern physics and Eastern philosophy, it is believed that all natural phenom-
ena in this world of change and transformation are dynamically interrelated. Emphasizing move-
ment and change, Chinese philosophy had long ago developed concepts of dynamic patterns which
are continually formed and dissolved again in the cosmic flow of the Tao. The I Ching has elaborated
these patterns into a system of archetypal symbols, the so-called trigrams and hexagrams.
Ancient Chinese scholars contemplated the Cosmos in a way comparable to that of modern
physicists, who with the advent of quantum mechanics introduced into their model of the Universe
a psychophysical element. In the quantum view of Nature, the experimenter is an essential part of
any experiment; as shown in the various interpretations of the (in)famous thought experiment by
Schrödinger about a cat in a sealed box with a vial of poison. The observed microphysical event in
an experiment necessarily includes the observer, just as much as the reality underlying the I Ching
comprises subjective, that is, psychic, conditions in the totality of momentary incidents and events.
The 64 hexagrams of the I Ching become the instrument by which the meaning of the 64 different,
yet typical, situations found in Nature can be determined. Therefore, for someone who regards the
physical world in the same manner as the ancient Chinese scholars, the I Ching retains more than
a slight attraction.
In its original structure, the I Ching is made up of eight trigrams, consisting of eight com-
binations of three lines of broken Yin-Hsiang (- -) lines and unbroken Yang-Hsiang (—) lines (see
Figure 5.1). It is believed that these concepts have a cosmogonic significance. From the Supreme Ultimate (Nothingness) came a simple line symbolizing the positing of Oneness (—), which produced the two modes Yin and Yang by the splitting and filling of lines. This creation ex nihilo may seem
strange, but as the Heart Sutra of Buddhism puts it:
O Sariputra, Form does not differ from Emptiness
And Emptiness does not differ from Form.
Form is Emptiness and Emptiness is Form.
The same is true for Feelings,
Perceptions, Volitions and Consciousness.
And monotheism is not without its own creation ex nihilo.
5.1 DETAILS OF THE I CHING
The Five Elements Theory (Wu Xing) has the same fundamental philosophy as the theory of the
two coupled and inseparable forces Yin-Yang; that of continual evolution and balance. Each natural
element (the five elements of Classical Chinese thought: wood, fire, earth, metal, and water com-
pare with the four classical elements in the West: air, fire, water, and earth; see Table 7.1) has specific
attributes that vibrate with their own frequency or energy. These elements interact with each other
to affect the flow of energy in an individual’s environment, in a positive or negative manner. Feng
shui practitioners utilize the concepts of Yin-Yang and the Five Elements to balance competing
energies in that environment. It is through the four hsiang that the eight trigrams are derived: each
made up of combinations of three divided or undivided lines (see Figure 5.1). A summary of the
properties and meanings of each of the eight trigrams is given in Table 5.1.
Figure 5.1: The eight trigrams derived from the Nothingness that also gave rise to the Yin and the Yang.
The origin of the eight trigrams is the two coupled forces, Yin and Yang, and it is from these eight tri-
grams that the 64 hexagrams of the I Ching are derived.
Table 5.1: The names and the attributes of the eight trigrams. We saw earlier how the German savant Leibniz read the sequence of the trigrams as a perfect representation of the progression of binary numbers (000, 001, 010, 011, 100, 101, 110, 111).

Trigram   Name (Chinese)   Name             Attribute      Image              Family relationship   Binary   Decimal
☷         Kun              The receptive    Devoted        Earth              Mother                0        0
☶         Gen              Keeping still    Standstill     Mountain           Youngest son          1        1
☵         Kan              The abyssal      Danger         Water, clouds      Middle son            10       2
☴         Xun              The gentle       Penetration    Wind, wood         Eldest daughter       11       3
☳         Zhen             The arousing     Movement       Thunder, wood      Eldest son            100      4
☲         Li               The clinging     Light-giving   Lightning, fire    Middle daughter       101      5
☱         Dui              The joyous       Pleasure       Lake               Youngest daughter     110      6
☰         Qian             The creative     Strong         Heaven             Father                111      7
These eight trigrams are some of the most basic symbols of Eastern philosophy, representing
transitional phases of Nature, and of human thought and psychology (see Table 5.1). They repre-
sent the maximum number of Yin and Yang relationships in sets of three. Yin and Yang forces are
combined as Yin/Yin, Yin/Yang, Yang/Yin, and Yang/Yang combinations. These four combinations of
forces are again divided to form the eight trigrams, and are said to be linked to the forces of Nature:
Heaven and Earth, fire and water, thunder and wind, mountain and lake (as given in Table 5.1).
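Because each trigram is simply a stack of three Yin or Yang lines, the whole set can be generated mechanically. The short Python sketch below is an illustration only: the trigram names follow Table 5.1, and the way the three lines are mapped onto the three binary digits is an assumption of this sketch, not a claim about traditional practice.

    from itertools import product

    YIN, YANG = "- -", "---"   # broken (Yin) and unbroken (Yang) lines
    names = ["Kun", "Gen", "Kan", "Xun", "Zhen", "Li", "Dui", "Qian"]

    # Each trigram corresponds to one of the 2 x 2 x 2 = 8 three-bit patterns,
    # with 0 read as Yin and 1 as Yang.
    for value, bits in enumerate(product((0, 1), repeat=3)):
        lines = [YANG if b else YIN for b in bits]
        print(names[value], format(value, "03b"), lines)
    # Kun 000 ['- -', '- -', '- -'] ... Qian 111 ['---', '---', '---']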
Figure 5.2: The 64 hexagrams of the I Ching (source: https://en.wikipedia.org/wiki/I_Ching#/media/
File:Diagram_of_I_Ching_hexagrams_owned_by_Gottfried_Wilhelm_Leibniz,_1701.jpg).
Each trigram has its own name and property, and the trigrams are considered to represent all
possible cosmic, natural, and human situations. They are associated with the phenomena of Nature,
and the various possible situations in our social lives. They were also associated with the cardinal
directions, and with the seasons of the year. Thus, the eight trigrams are often grouped around a
circle in the natural order in which they were generated, starting from the top (where people in
Asia have always located the south) and placing the first four trigrams on the left-hand side of
the circle, the second four on the right-hand side. The objects or attributes thus symbolized by the
eight trigrams are made to represent the constituents of the Universe, which form the basis of a
cosmological system elaborated by the scholars of the Han Dynasty (206 BCE–220 AD) using the
Five Element Theory.
By combining these trigrams as a symmetric matrix, a total of 64 combinations is obtained, known as the 64 hexagrams (shown in an ancient Chinese manuscript in Figure 5.2). The 64 hexagrams are traditionally arranged in 2 patterns: (i) a square of 8 x 8 hexagrams and (ii) a circular
sequence showing the same symmetry as the circular arrangement of the trigrams; both are seen
in Figure 5.2. The 64 hexagrams are the cosmic archetypes on which the use of the I Ching as an
oracle book is based. For the interpretation of any hexagram that may arise in the initial selection
or divination, the various meanings of its two trigrams have to be taken into account.9
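The count of 64 is simple combinatorics: any of the eight trigrams may stand below any of the eight (including itself), giving 8 x 8 = 64 ordered pairs. A minimal Python sketch, reusing the trigram names of Table 5.1 (the lower/upper pairing order is assumed only for this illustration):

    trigrams = ["Kun", "Gen", "Kan", "Xun", "Zhen", "Li", "Dui", "Qian"]

    # Every ordered pair (lower trigram, upper trigram) is one hexagram.
    hexagrams = [(lower, upper) for lower in trigrams for upper in trigrams]
    print(len(hexagrams))   # 64
    print(hexagrams[:3])    # [('Kun', 'Kun'), ('Kun', 'Gen'), ('Kun', 'Kan')]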
In the I Ching, the trigrams and hexagrams represent all possible combinations generated by
the dynamic interaction of the forces Yin and Yang, and are reflected in all cosmic and human in-
teractions. All things and situations are in a state of continual transition: one changing into another,
the solid lines pushing outward and breaking in two, the broken lines pushing inward and growing
together. Therefore, the 8 trigrams, together with the 64 hexagrams are deemed to represent all the
possible situations and temporal mutations of phenomena in the Universe. The I Ching is believed
to describe a system of metaphysics relating the Universe and natural phenomena, as functions of
the time of day, seasons, weather, family relations, personal relations, etc.
5.2 DIVINATION WITH THE I CHING
To use the I Ching, an individual must first embark upon a process of divination; that is, they must
allow themselves (in what they do) to be open to the influence of Nature. The person who has a
question to ask of Nature must first invoke the forces of Nature; for example, by tossing a set of
coins, or drawing lots to generate a set of results that may then be interpreted via the traditional text
of the I Ching, which is the result of more than two millennia of interpretation of such divination
experiments. The process of consulting the I Ching as an oracle involves determining the hexagram
by a method of random number generation, and then reading the text associated with that hexagram. Confucius said that one should not consult the Oracle for divination until one has passed the age of 40. Those studying the I Ching should also be free of compulsion; that is, repeatedly asking the same question in hope of either a different/better answer, or further enlightenment as to the meaning of the answers one first obtains.
9 The Internet is full of sites detailing how these interpretations are made.
Hexagrams were traditionally generated by the casting of yarrow stalks (Achillea millefolium).
The stalks must be cut and prepared, being plain, lacquered, or varnished. Fifty yarrow stalks are
used, though one stalk is set aside at the beginning and takes no further part in the process of con-
sultation, or divination; this is the Wu Chi—the unchanging ground of being. The remaining 49
stalks are roughly sorted into 2 piles; from each pile 1 stalk is initially removed, and then each pile is “cast off” in lots of 4; that is, groups of 4 stalks are removed. The remainders from each half
are combined (traditionally placed between the fingers of one hand during the counting process)
and set aside, with this process being repeated twice; that is, a total of three times. The total stalks
in the remainder pile will necessarily (if the procedure has been followed correctly) be 9 or 5 in
the first count and 8 or 4 in the second and third. Nine or 8 is assigned a value of 2; 5 or 4 is assigned a value
of 3. The total of the three passes will be one of only four possible values: 6 (2+2+2), 7 (2+2+3), 8
(2+3+3), or 9 (3+3+3); that count provides the number of the first line of the hexagram. When three
successive changes produce the sum 3+3+3=9, this makes the old Yang, i.e., a firm line that moves.
The sum 2+2+2=6 makes old Yin, a yielding line that moves. Seven is the young Yang, and eight
the young Yin; they are not taken into account as individual lines. The 49 stalks are then gathered
and the entire procedure repeated to generate each of the remaining 5 lines of the hexagram. (Each
succeeding line is written above its predecessor; that is, the first line is at the bottom of the stack
of lines, and the final, sixth line is at the top.)
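The counting procedure just described can be simulated directly. The Python sketch below is an idealized illustration of the arithmetic given above, not of the ritual itself: it assumes a uniform random split of the bundle, and simply follows the remainders (9 or 5, then 8 or 4), the assigned values 2 and 3, and their sums of 6, 7, 8, or 9.

    import random

    def yarrow_line(rng):
        """Return one line value (6, 7, 8, or 9) from three counting passes."""
        stalks = 49
        values = []
        for _ in range(3):
            left = rng.randint(1, stalks - 2)   # random split of the bundle
            right = stalks - left - 1           # one stalk is set aside
            # Cast off each pile in fours; a pile divisible by 4 leaves 4 behind.
            removed = 1 + (left % 4 or 4) + (right % 4 or 4)
            # First pass: removed is 9 or 5; second and third passes: 8 or 4.
            values.append(2 if removed in (8, 9) else 3)
            stalks -= removed
        return sum(values)   # 6 old Yin, 7 young Yang, 8 young Yin, 9 old Yang

    def cast_hexagram(seed=None):
        rng = random.Random(seed)
        return [yarrow_line(rng) for _ in range(6)]   # bottom line first

    print(cast_hexagram())   # e.g., [7, 8, 8, 9, 7, 6]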
During the Eastern Han Dynasty (1st century AD), there were two schools of interpretation
of the I Ching. One of these schools, known as New Text Criticism, was more egalitarian and eclectic, and
sought to find symbolic and numerological parallels between the natural world and the hexagrams.
With the fall of the Han Dynasty, I Ching scholarship was no longer organized into systematic
schools. One of the most influential writers of this period was Wang Bi (226–249), who discarded
the numerology of Han commentators and integrated the philosophy of the Ten Wings directly into
the central text of the I Ching, creating such a persuasive narrative that earlier Han commentaries
were no longer deemed important. By the 11th century, the I Ching was being read as a work of
intricate philosophy, as a starting point for examining metaphysical questions and ethical issues.
Cheng Yi (1033–1107), founder of the Neo-Confucian Cheng–Zhu school, read the I Ching as a
guide to moral perfection. He described the text as a way for ministers to formulate honest political
opinions, and so avoid factionalism, to root out corruption, and to solve problems in government.
The contemporary scholar Shao Yong (1011–1077) rearranged the hexagrams in a format that
resembles modern binary numbers, although he did not intend his arrangement to be used mathe-
matically. This arrangement, sometimes called the binary sequence, is the format that later inspired
Gottfried Leibniz, once the text had been translated and published in Europe by the Jesuits.
Gottfried Leibniz, who was corresponding with the Jesuit missionaries in China, wrote the
first European commentary on the I Ching in 1703, arguing that it proved the universality of binary
numbers and Theism, since the broken lines, the “0” or nothingness, cannot become solid lines, the
“1” or oneness, without the intervention of God (see Page 35). This mystical interpretation was
criticized by Georg Wilhelm Friedrich Hegel, who proclaimed that the binary system and Chinese
characters were “empty forms” that could not articulate spoken words with the clarity of Western al-
phabets. In their discussion, I Ching hexagrams and Chinese characters were conflated into a single
foreign idea, sparking a dialogue on Eastern and Western philosophical approaches to questions
such as universality, semiotics, and the nature of communication.
Following the Xinhai Revolution of 1911, the I Ching was no longer part of mainstream
Chinese political philosophy, but it maintained a huge cultural influence as China’s most ancient
text. Borrowing back from Leibniz, modern Chinese writers offered parallels between the I Ching
and subjects such as linear algebra and logic in computer science, seeking to demonstrate that
ancient Chinese thought had anticipated Western discoveries. The Sinologist Joseph Needham
(1900–1995) took the opposite viewpoint, arguing that the I Ching had actually impeded scientific
development in China by incorporating physical knowledge into its metaphysics. The psychologist
Carl Jung took a great interest in the possible universal nature of the imagery of the I Ching, and
he introduced an influential German translation by Richard Wilhelm by discussing his theories
of archetypes and synchronicity. The book had a notable impact on the 1960s counterculture, and
on 20th Century writers and musicians such as Philip K. Dick, John Cage, Jorge Luis Borges, and
Hermann Hesse.
5.3 FINAL COMMENTS
In the initial phases of the transformation of magic into science in Europe, the mystical approach to
Nature was often more helpful to savants and those seeking to comprehend what they saw around
them than the theoretical or rationalist approach. After all, if one begins with a rationalist approach
to the investigation of Nature, one will soon end up doing experiments to test and thus extend theoretical models. And experiments are often difficult; they may not work for a whole host of reasons, and they may well be laborious and expensive. On the other hand, mystical interpretations of what
one encounters in the natural world are cheap, and require no testing, only a vivid imagination and
a knowledge of ancient history to provide the ancestral deities responsible for whichever natural
phenomenon is being considered. However, experience tells us which of these two approaches
yields the most useful, that is, reproducible results. In the British Isles, the Anglo-Irish alchemist
and proto-scientist Robert Boyle paid considerable attention to this problem of resolving the use
of theory versus experiment (even mystical theories versus rationalist theories) in his publication of
1661, The Sceptical Chymist. Boyle came down on the side of practical experimentation as being the
ultimate arbiter, the acid test, for all speculation about Nature.10
Since each of the 64 symbols, or hexagrams, of the I Ching came, in the course of the centu-
ries, to have an abstract signification, such a reference was naturally alluring and saved all necessity
for further thought and any experimental investigation. The technique of the I Ching resembled in
many aspects the astrological pseudo-explanations of Nature and man’s destiny of pre-Renaissance
Europe, but with greater complexity (64 hexagrams as opposed to 12 Houses of the Zodiac); the abstractness of its symbolism gave it a deceptive sophistication.
The 64 symbols, or hexagrams, in the system provided a set of abstract concepts capable of
subsuming a large number of the events and processes, which any investigation is bound to find in
the phenomena of the natural world. It has been said that the I Ching supposes a kind of translation
of all natural phenomena into a mathematical language by means of a set of graphic symbols, the
germs of what the German philosopher and mathematician Gottfried Leibniz would have called a
universal language or a universal character, thus constituting a dictionary capable of permitting men
to read Nature like a book, whether with intellectual or practical aims in view. This is, of course,
as much about true science as it is about astrology. Furthermore, the I Ching brings us back to the
illusory realms of numerology, where numbers are not the empirical and quantitative servants of
science, but a straitjacket into which theories have to be forced to fit our pre-conceived ideas. To
paraphrase Jung, the I Ching has more to do with synchronicity than with physics. Yet for all its
flummery and quackery, and lack of anything other than a statistical success rate, it is a technique
which is still hugely followed (much like astrology).
What seems to show through when one looks at the ideas of Taoism, and other similar
thoughts about the origin and usefulness of the I Ching in early eastern natural science, is the effort
made by the School of Naturalists and the Han Dynasty Confucians to use the figures made by
the long and the short strokes; that is, the 64 hexagrams, as a comprehensive system of symbolism
containing, in some way, all the basic principles of all natural phenomena. That is, to construct a
proto-language of science; even if it was a symbolic representation of this language. Like the Tao-
ists, the naturalists who invoked the I Ching to comprehend the world were looking for peace of
mind, as opposed to the worry of trying to learn long lists of things and phenomena, and forgetting
some part of that list.
It is likely that a similar argument can be made to account for the central importance of
astrology in the Mesopotamian civilization. The Houses of the Zodiac and the Sun, Moon, and
planets formed a sufficiently complex system that permitted a range of correspondences to be con-
structed and maintained. If then one projects these theoretical, mystical interconnections onto observations of Nature, one does have a system of sufficient flexibility to explain some part of Nature. But of course, this is a mystical theoretical model of Nature; one could say a mythological model. And this mystical model yielded to rational experimentation and evidence in the modern world, in both the East and the West.
10 Happily, experiment won out in the end, although theoretical physicists still have an exalted status in the physics community.
5.4 FURTHER READING
The Internet is crowded with sites providing information about the I Ching, and about the inter-
pretation of results derived from divinations using the I Ching. As for recommendations for further
reading, I suggest:
1. The American physicist and ecologist Fritjof Capra (born 1939) has explored the parallels between modern physics and Eastern Mysticism in The Tao of Physics (1975); Boston, Shambhala Publications, Inc.
2. I Ching translated by Richard Wilhelm (2003); London, Penguin Books. This book has a foreword by Carl Jung.
CHAPTER 6
Natural Philosophy
How many angels can dance on the head of a pin?
(A standard question for students of Scholasticism in the 13th century)
In our consideration of the concept of a perfect language, with which and through which man
might truly appreciate and, perhaps, control Nature, we must now leave behind the fascinating
but strangely exotic mixture of magic and science that had characterized the pre-scientific world,
and examine the advent of the a priori philosophical language. The members of this new group of
17th-century seekers after a simpler, more perfect language were not magicians or Hermeticists, but
savants and natural philosophers who sought a simple, but logical language which could eliminate
the concepts and formalisms, which had previously clouded the judgment of men, and which had
kept all men from fully and rapidly embracing the progress of science and technology.
Jan Amos Komensky (1592–1670; he used the Latinized form of his name, Comenius) was a Protestant mystic from Bohemia. Although inspired by religious ideals, he is considered to be one of the first savants who, as part of his investigation of Nature, tried to formulate a more perfect language to describe his observations, and to allow him to transmit those observations to other savants.
In his Pansophiae Christianae III (1639–1640), Comenius advocated a reform of the commonly used
vernacular languages to eliminate the rhetorical and figurative use of words, which he regarded as a
source of ambiguity and confusion. The meaning of the words that remained should then be fixed,
with one name for each thing; this, he believed, would restore words to their original meaning.
Although Comenius was never to construct his reformed, plain language, he had broached
the idea of a universal language that attempted to overcome the political and structural limitations
of Latin, which was still being used as a sort of universal language in Catholic countries. (Comenius
came from the non-Catholic part of Central Europe, which from 1618–1648 was fighting for its
existence in the Thirty Years War.) Comenius proclaimed that the lexicon of the new philosophical
language would reflect the composition of reality, and every word in it should have a fixed, definite,
and univocal meaning. Every idea should be represented by one and only one expression, and these
definitions and expressions should not arise from an individual author’s fancy or imagination, but
should represent only things that existed. Comenius wished to create a utopian language that would
describe the fixed, unmoving connections of every element of Creation; but he recognized that it
would not be a vehicle for the creation of great literature.
The utopian ideas of Comenius would have necessitated a prodigious ability at memorizing
all the new words and the new meanings. But this was exactly the type of problem that had inspired
Ramón Llull to invent his Ars combinatoria. The French philosopher René Descartes saw where the
real problem lay with such new philosophical languages. In order to avoid having to memorize and
learn how to use the new fundamental or primitive names, Descartes conjectured that it would only be
necessary for these to correspond to an order of ideas or thoughts which had a logic of their own
akin to that of numbers. That is, that it was through the medium of mathematics and mathematical
logic that the new universal language would eventually come into being. Descartes pointed out that
if we can count, we are able to generate an infinite series of numbers without needing to commit to
memory the whole set of all possible numbers. But this problem coincided with that of discovering
a philosophy capable of defining a system of clear and distinct ideas. If it were possible to enumerate
the entire set of simple ideas from which we mentally generate all the complex ideas of which it
is possible to conceive, and if it were further possible to assign to each idea a character, as we do
with numbers, we might be in a position to manipulate them with a mathematics of thinking, or a
calculus of thought, while the words of natural languages evoke only confusion. This was the idea
that was pursued in Germany by Gottfried Leibniz. What we have here in the first half of the 17th
century is a statement about the essential properties of a computer language, three centuries before
the invention of the computer.
In 1654, the English clergyman, alchemist, and astrologer John Webster (1610–1682) wrote
his Academiarum examen, an investigation and attack on the academic world, which Webster felt
had not given sufficient attention to the problems of creating a universal language. Like many En-
glish contemporaries of Comenius, Webster was influenced by the Bohemian’s ideas. Webster fore-
saw the birth of a “Hieroglyphical, Emblematical, Symbolical and Cryptographical learning.” Describing
the general utility of algebraic and mathematical signs, numbers, and equations, Webster went on to
say that “the numerical notes which we call ciphers, the Planetary Characters [the internationally known
symbols for the planets, see Figure 3.1], the marks [the well-known alchemical emblems, see Figure
1.2, and Table 7.1] for minerals and many other things in Chymistry, though they be always the same
and vary not, yet are understood by all nations, and when they are read, everyone pronounces them in their
own Country’s language and dialect.”
John Webster was attempting the synthesis of mathematics and alchemical and astrological
symbolism (today we would rather say chemical and astronomical nomenclature). He went on to
say that such a symbolic language would be the true philosophical or universal language. Webster
was something of a controversial figure in his own life; an Anglican clergyman who supported
the Parliamentary cause in the Civil War, who was openly an alchemist and astrologer and who
was sceptical about witchcraft. Yet, in Puritan England, this chaplain to the Parliamentarian army
produced a work that was at the center of the 17th century’s magico-scientific Hermetic tradition
(see Figure 6.1), which also produced the astrology, mathematics, and Adamic language of Dr. John
Dee, the eminent mathematician and astrologer to Queen Elizabeth I, and the Angel languages
and alchemy of Robert Fludd.
Figure 6.1: Hermes Trismegistus, a floor mosaic in the Cathedral of Siena (image from: https://en.wiki-
pedia.org/wiki/Hermes_Trismegistus#/media/File:Hermes_mercurius_trismegistus_siena_cathedral.
jpg). The mythic personality, Hermes Trismegistus, is associated with the Greek god Hermes and the
Egyptian god Thoth. Greeks in the Ptolemaic Kingdom of Egypt recognized the equivalence of Hermes
and Thoth, and the two gods were worshiped as one in what had been the Temple of Thoth in Khemenu,
which was known in the Hellenistic period as Hermopolis. But the “personality” of this cultural mix of
Ancient Egyptian and Greek gods became overlaid with something more. Hermes, the Greek god of
interpretive communication, was combined with Thoth, the Egyptian god of wisdom. This multi-faceted
deity thus became a god of wisdom. And it was as a source of all wisdom that he became known to
the Neo-Platonists in the early centuries of the Christian era, particularly, in the Egyptian metropolis
of Alexandria. As a divine source of wisdom, Hermes Trismegistus was credited with many writings,
which were reputed to be of immense antiquity. Early Christians and Neo-Platonists were under the
impression that the Egyptians had 42 sacred writings by Hermes, writings that detailed the training of
Egyptian priests. These Hermetica are a collection of papyri containing spells and induction procedures
for new adepts. The dialogue called the Asclepius (after the Greek god of healing) describes the art of
imprisoning the souls of demons, or of angels in statues with the help of herbs, gems, and odors, so that
the statue could speak and engage in prophecy. This corpus of ancient wisdom was, however, merely a
compilation of facts and a list of old observations. There was no underlying coherence, and the literary and historical context had been entirely lost. We are back with Homer's Catalogue of the Ships, its setting gone. Yet not only did this list lead to science, but it also influenced Christian dogma.
Not surprisingly, Webster was attacked and his ideas were ridiculed by contemporaries; nevertheless, his ideas belonged to the broader movement toward a universal, symbolic language based on mathematics and symbols, and not upon phrases and the rules of grammar needed to try to keep order
among the rapidly accumulating words of even a reformed language. The more mystical Hermetic
ideas of Webster were denounced by John Wilkins (1614–1672), another Anglican clergyman and
natural philosopher who was quite prepared to accept that a new language could be elaborated in
which letters of the alphabet stood for mathematical quantities. But the critics of Webster argued that the only real character of which Webster spoke was actually the natural language which the Kabbalists and Rosicrucians had vainly sought in Hebrew. In spite of these mutual criticisms,
the projects of the religious mystics did have something in common with those of the natural phi-
losophers. The 17th century was full of reciprocal influences of mysticism on science and science
on mysticism, all mixed together by the solvent of philosophy and observations of Nature that were
at that time inexplicable. However, as we move toward the ideas of John Wilkins, we finally move
away from the search for the lost language of Adam, and move to the secular world, which would
centuries later lead to linguistics, semiotics, computer codes and modern science.
The first serious attempt at producing a systematized universal language based on philo-
sophic principles was due to John Wilkins. He was a polymath who became the Bishop of Chester
and the brother-in-law of Oliver Cromwell; he was one of the pre-eminent scientific innovators
of that period. Wilkins assisted in the founding of the Royal Society of London. He was one of
the creators of a new natural theology which attempted to be compatible with the science of the
time, attempting to synthesize natural philosophy with the theology and dogma of the Church of
England.
In 1668, Wilkins published his Essay toward a Real Character and a Philosophical Language,
where he attempted to create a universal language to replace Latin as an unambiguous means of
communication with which international scholars and philosophers could communicate. The Essay
also proposed ideas on weights and measures which were similar to those which would later be found in the Metric System of 1795. In particular, Wilkins suggested that a more perfect, universal system of weights and measures could be generated by using a decimal system based upon a single universal measurement. In essence, Wilkins proposed
that an entire system of units could be based on a single natural dimension, or universal measure
(see Chapter 9).
John Wilkins spoke of a single universal measure upon which all other measures could be
based, and from which all other measures could be derived by mathematics. Wilkins’ Essay was
translated into Italian in 1675 by Tito Livio Burattini (1617–1681), who translated Wilkins’
phrase universal measure as metro cattolico, thereby introducing the familiar modern word, meter, or
“measure.” Tito Livio Burattini was a true Renaissance man, as interested in architecture and the
designing of scenery for theatrical spectacles as in measurement science and mathematics. It was to-
ward the end of his life in 1675 that he published his Misura universale (Universal Measurement) in
Italian where he described the ideas of John Wilkins (whose essay had been published by the Royal
Society of London in 1668). Burattini was one of the first European savants to make a detailed
survey of the architecture of the Great Pyramid of Giza. Indeed, on his expedition to Egypt he was
accompanied by the English mathematician John Greaves, who went on to become a professor of
geometry at Gresham College, the forerunner of the Royal Society of London. Interestingly, the
measurements of the Great Pyramid made by Greaves were later used by Isaac Newton in his stud-
ies of Biblical Prophecy and in a calculation Newton made of the circumference of the Earth.11 But
as we have seen, the 17th century was characterized by this curious mixture of science and magic,
physics, and metaphysics.
John Wilkins wished to create a universal language, primarily to facilitate international com-
munication among scholars, but he also envisioned its use by diplomats, travelers, and merchants.
Wilkins’ idea was to create a family of symbols corresponding to a complex classification scheme,
which was intended to provide elementary building blocks which could be used to construct every
possible object and idea. The Real Character is not a written representation of speech. Instead, each
symbol directly represents a concept, without there being any way of speaking, or vocalizing it;
each reader might, if they wished, give voice to the text in their own tongue. Later in his Essay,
Wilkins introduces his Philosophical Language, which assigns phonetic values to the Real Characters,
should it be desired to read the text aloud without using any of the existing natural languages.
In this universal language, each word defines itself. Descartes had already noted in 1629 that
using the decimal numbering system it was straightforward to name all the numbers up to infinity,
and to write them in a new language—if one were so disposed. Descartes went on to suggest the
creation of a language similar to this numbering system, but a general language, organizing and
covering all human ideas. In 1664, Wilkins started to work on this task.
John Wilkins divided everything in the Universe into 40 categories or genera, these being further subdivided into 251 characteristic differences, which were subdivided into 2,030 species which appear in pairs. After depicting Nature in tables that occupy 270 folio pages, Wilkins began the construction of his philosophical grammar. Wilkins assigned to each genus a symbol consisting of a monosyllable of two letters; the characteristic differences are expressed by the consonants B, D, G, P, T, C, Z, S, and N, and the species by the addition of another letter drawn from seven (Latin and Greek) vowels and two diphthongs. For example, De signifies an element; Deb, the first difference, which according to Wilkins' Tables is fire; and Debα will denote the first species, which is flame. Det will be
11 It was believed in the 17th and 18th Centuries that the Great Pyramid of Giza was a structure associated with
magical and occult rites (Hermes Trismegistus again). In the same way that 18th Century surveyors were mea-
suring the Meridian through Paris to define the universal measure of length (our modern meter), it was thought
that the Ancient Egyptians had defined the near universal unit of length measurement in the Ancient world, the
cubit from the base of the Great Pyramid (each base side is 440 cubits or 230.4 m) which was held to be 1/500
of one degree of the Earth’s circumference (222.639 m).
the fifth difference under that genus, which is appearing meteor, and Detα the first species, which is
rainbow. The words in the analytical language of John Wilkins are full of meaning and information;
however, there is a great deal of arbitrariness. Debα signifies flame, because α designates a species of
the element fire. If we replace α with a, we obtain a new symbol, Deba, which according to Wilkins’
Tables designates comet; Deba and Debα are related but different.12
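To make the combinatorial machinery of such a scheme concrete, the following short sketch (written in Python, and using only the handful of entries quoted above; the remainder of Wilkins' Tables is not reproduced, and the data structures are my own illustrative choice, not Wilkins') shows how a genus-difference-species word can be unpacked letter by letter:

GENERA = {"De": "element"}                            # a two-letter monosyllable names each genus
DIFFERENCES = {"b": "fire", "t": "appearing meteor"}  # a consonant marks the characteristic difference
LEXICON = {                                           # a vowel or diphthong then marks the species
    ("De", "b", "α"): "flame",
    ("De", "b", "a"): "comet",
    ("De", "t", "α"): "rainbow",
}

def decode(word):
    """Unpack a Wilkins-style word into the concepts its letters encode."""
    genus, difference, species = word[:2], word[2], word[3:]
    meaning = LEXICON.get((genus, difference, species), "unknown species")
    return (word + ": genus '" + GENERA[genus] + "', difference '"
            + DIFFERENCES[difference] + "', species -> " + meaning)

for w in ("Debα", "Deba", "Detα"):
    print(decode(w))

Running the sketch prints, for example, "Debα: genus 'element', difference 'fire', species -> flame"; every letter of the word carries a piece of its definition, which is precisely the property Wilkins was after.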
The “words” of Wilkins’ analytical language contain no arbitrary symbols. Each letter in
the analytical language has significance and meaning, in the same manner that the text of Holy
Scripture has various levels of meaning for the Kabbalists, and the long numerical sequences which regulate our lives, such as social security numbers, telephone numbers, computer access codes, and bank account numbers, contain all that there is to know about each and every one of us. (For example, an IBAN bank code can run to 22 characters or more and a SWIFT code to 11 characters—such codes offer vast numbers of possible combinations, allowing all humanity to be numbered, sorted, and then monitored.)
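A rough calculation (my own illustrative arithmetic, taking an 11-character alphanumeric code as the example; it is not drawn from the banking standards themselves) shows how quickly such codes outrun the number of people they might label:

\[
36^{11} \approx 1.3 \times 10^{17} \gg 8 \times 10^{9},
\]

that is, eleven characters drawn from 26 letters and 10 digits already allow vastly more distinct codes than there are people alive, so numbering, sorting, and monitoring all humanity is, combinatorially, trivial.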
One could learn Wilkins’ language without knowing that it was artificial. Then later, one
could be led to discover that it was also a universal key and a secret encyclopaedia. Not surprisingly,
Wilkins’ language was not ideal; however, it was an extraordinary achievement for its time. But then,
the impossibility of truly representing the entire scheme of the Universe should not and cannot stop
us from planning human models, even though we are conscious that they are, at best, preliminary,
and will, of necessity, contain arbitrary and conjectural elements. The reason for this lack of perfection in any human attempt at categorising all knowledge is that we do not truly know what the Universe is, or indeed, where it came from, and why it is here. And it was man's speculation as to the origin and purpose of the visible world, that is, his interest in God's secret dictionary, that started him off on his search for the perfect or universal language. We have perhaps been going around in circles,
but coming to new wisdom at the completion of each circuit. It is with the advent of a more perfect
analytical language, that is, the modern physical sciences that man has begun to truly penetrate the
Divine scheme of the Universe.
The analytic language of Wilkins is an example of a scheme intended to order all knowledge
and relieve our memories of much unnecessary work. The word salmon, for example, tells us noth-
ing, but zana, the corresponding word in Wilkins' classification, defines (for one versed in the 40 genera, and the differences and the species of those genera) a scaled river fish with reddish meat. It was a century later that the Swedish botanist Carl Linnaeus (1707–1778) would adopt the familiar binomial system of nomenclature for all creatures, the extant and the fossil (see Chapter 13). In the Linnaean classification, man is Homo sapiens ("wise man") and the Atlantic salmon is Salmo salar ("leaping salmon").
12 Umberto Eco gives a clear and fascinating description of the structure and use of Wilkins’ Analytical Language
in his The Search for the Perfect Language.
In the same way that Ramón Llull sought to use mechanical or logical devices to construct
all possible combinations of the attributes of God (philosophical, theological and personal), thereby
avoiding the tedious necessity of preparing ab initio lists, which would be very long and probably
incomplete, and then trying to commit those lists to memory, Wilkins attempted to show how such
a scheme could be used to order all human knowledge. We will see later (Chapter 9) how, starting with a single universal measure (of length), it is possible to construct a self-contained system of units which is the basis of modern science; that is, how a truly universal language may be constructed from a limited number of semantic primes, or base units.
John Wilkins did not so much wish to discover the language used by Adam in the Garden
of Eden; he wished to be the new Adam, by turning the old mystical speculation of universal lan-
guages on its head. As he wrote in the Introduction to his Essay of 1668, “This design would likewise
contribute much to the clearing of some of our modern differences in Religion, by unmasking many wild
errors, that shelter themselves under the disguise of affected phrases; which being Philosophically unfolded,
and rendered according to the genuine and natural importance of Words, will appear to be inconsistencies
and contradictions.” To fulfill this promise of reshaping language, of creating a tool for linguistic
analysis and of providing a means of standardising religious understanding, it was not enough sim-
ply to invent real characters for this new language; it was necessary to develop a criterion that would
govern the primitive features that would compose these characters. In order to design characters
that directly denote concepts and ideas, if not the things themselves that these concepts reflect (this
was the problem which Adam faced when he was commanded by God to name Creation),
two conditions must be fulfilled: (1) the identification of the true primitive notions or semantic
primes and (2) the organization of these primitives into a system which represents the organization
of the model of the content. Thus, such a language is termed a priori. And the formulation of such
a language requires a grammar of ideas that is independent of any natural language.
John Wilkins' ideas were widely circulated among the savants of his period. Unfortunately, his ideas were mostly met with derision, dismissed not only by ordinary people but also by fellow savants as brilliant but incomprehensible. The Ballad of Gresham College is a satirical ode on
the Royal Society (which held its early meetings at Gresham College) and refers directly to Wilkins' project,
A Doctor counted very able
Designes that all Mankynd converse shall,
Spite o’ th’ confusion made att Babell,
By Character call’d Universall.
How long this character will be learning,
That truly passeth my discerning.
Science is the reduction of an extraordinary and bewildering diversity of events and ob-
servations into a manageable uniformity within one of a number of possible systems of symbols,
quantities, and units. Similarly, technology is the art of using those systems of quantities, units or
symbols so as to be able to predict, control and organise events. The scientist always views things
through the medium of a system of symbols, quantities, and units, and technology is the handling
of information and materials in ways that have been predicted by the systems of symbols and units.
To many this may seem like magic and, unfortunately, the more isolated the scientist be-
comes from the general public, the more priest-like the scientist appears. But popular or not,
communicative or not, science does evolve and impacts everyone. I am attempting to outline here
how the modern language of science grew out of magic and the search for a mythical language of
power that was used by God to create the Cosmos, and us. Indeed, it was not so long ago that the
original search for the proto-language spoken in Eden was not so much abandoned as subsumed
into the search for a universal philosophical language which would better allow man to understand
who he is, where he is, and why he is here. This change from the search for the language with which
God created the physical world, to the creation of the language of science, probably arose because
the latter language actually worked at allowing man to speculate about Nature—it produced results.
It was seen to be a language of authority, and not just a language of curiosity. The proto-language
might have allowed God to create the Heavens and the Earth, but the philosophical languages
allowed man to understand the Heavens and the Earth, and permitted him to dream that science
might even allow him to one day be able to control Nature.
Science works. But we scientists have still not completely shaken off the aura of the ma-
gician or magus (whether we know that or not). As we are only too aware, all technologies are
increasing in performance at an alarming rate; electronics and computers are becoming smaller,
faster, and cheaper. But modern science and technology, and hence today’s scientist and the
technician, have left the ordinary man far behind; and in the future, technology and magic will,
perhaps again, become indistinguishable. But then this was the case in the 17th century, so there
is nothing new there.
6.1 FURTHER READING
For all aspects of the rise of science from an occult and magical background, the works of Frances
Amelia Yates (1899–1981) are all worth reading. She was an English historian who focused on
the study of the Renaissance, and also wrote books on the subject of the history of esotericism. In
1964, Yates published Giordano Bruno and the Hermetic Tradition, an examination of Bruno, which
came to be seen as her most significant publication. In this book, she emphasised the role of Her-
meticism in Bruno’s works, and the role that magic and mysticism played in Renaissance thinking.
She wrote extensively on the occult or Neoplatonic philosophies of the Renaissance. Her books The
Occult Philosophy in the Elizabethan Age (1979), The Art of Memory (1966), and The Rosicrucian En-
lightenment (1972) are major contributions, where the author deals with the supposed remoteness
and inaccessibility of studies of magic and of the Hermetic arts. These volumes are available from
Routledge Classics, Oxford, an imprint of Taylor & Francis.
A truly remarkable history of languages, especially the more esoteric aspects of that history:
The Search for the Perfect Language (1995); Umberto Eco; Great Britain, Blackwell Publishers Ltd.
CHAPTER 7
The Laws of Nature
Nature and Nature’s laws lay hid in night: God said, “Let Newton be!” and all was light.
Alexander Pope: epitaph for Sir Isaac Newton
A moment’s thought will demonstrate that science may be described as the quantitative study of
the complex, coupled relationships that may or may not exist between observed events. Any phe-
nomenon that is susceptible to investigation, that can be measured, weighed, numbered, and expressed mathematically (the readings on laboratory dials, the clicks coming from a counter or detector) can be considered part of the enterprise of science. On
the other hand, there is no room in the scientific worldview for the inexact, uncontingent, immea-
surable, imponderable, or undefined. A process that can be repeated time after time, a system that
can be reproduced and analyzed, these are the concepts which go to make up science, and not the
individual, the unique, the elusive thing, or phenomenon that can never occur a second time.
Our increasing understanding of ourselves and of the world within which we live comes
from the myriad measurements scientists and technicians make each day. These measurements drive
the evolution of our society. We realized in the 17th century that, by studying and using the newly discovered Laws of Nature to make predictions about future events, we no longer needed magicians or shamans, whose predictions were correct only on a statistical basis. This ob-
servation was a significant advance for mankind. Indeed, the history of science is an essential part
of our political and economic freedom; it could be said to be the Palladium of our freedoms. One
way of thinking about our present democracy is as an expanding mass of conflicting interests, which
through the action of a solvent such as modern capitalism, spiked with a fascination for trivia, such
as are readily available on the Internet, becomes resolved into what is, in essence, a thin vapor. That
is, a dilute or rarefied, ideal gas of non-interacting particles, one that loses collective internal energy in proportion to the perfection of its aspiration. Like a perfect gas expanding into a vacuum… into
nothing, losing all coherence and long-range structure. By using the prism of the scientific world-
view to keep our views of how we came to be who we are in perspective, we might even be able to
preserve our freedoms.
When we know something of the origins and evolution of our assumed knowledge, or un-
derstanding of the natural world, we are delivered from the thrall of preconceived opinions and
the foolish, fabulous ideas into which man is all too willing to fall, and into which one may fall
without even realizing it. In addition, we better understand the limited value, the limited shelf-
life of all our hypotheses about the unfolding of the visible universe, and on a smaller scale, about
the evolution of our own society and of our own lives. When we study the history of science, we
see how misunderstandings in science have arisen, and how they were resolved. And we are able
to put the achievements of our own period in a more appropriate perspective; a more appropriate
historical perspective.
A study of history and particularly the history of science tells us how well we have been
thinking, and whether what we have been thinking about is relevant and useful. And among other
things, a study of the general scope of historical development affords the scrutiny of evidence, and
the capacity to decide which particular version of an event seems most credible. It also allows one
to observe the strange, almost unfathomable, metamorphosis that occurs in the interpretation and
hallowing of a sacred text; one invoked as if it were supernaturally ordained, and hence not available for
contested examination and interpretation. That is, we may investigate the origins of the dogmas of
science; it allows us to understand Nature and to live with less anxiety with the more violent aspects
of the natural world (see Chapter 4).
The essential and all-important characteristic of science is that it is predictive. Science fol-
lows some established order; an order that was thought by our forebears, even as recently as the late
19th century, to be divinely inspired. Today, however, we hold that phenomena arise because of a set
of transcendent fundamental laws, and the interaction between a set of unchanging forces that may
be characterized by a set of inviolable constants of Nature, for example, the mass and charge of the
electron, m_e and e, respectively. But where did this idea of a divine legislator for the Universe come
from? If we can say that the natural world has arisen from, and is maintained by a set of fundamen-
tal laws what is the similitude between these observed Laws of Nature and the laws promulgated
by national parliaments? Where did the observed Laws of Nature come from?
7.1 THE COMPLEX RELATIONSHIP BETWEEN ASTROLOGY
AND ASTRONOMY
In earlier parts of this volume, I made much of the complex relationship between a magical way of
looking at Nature, and a more rationalist or scientific way of looking at Nature, that is, the close
relationship of natural philosophy, the Hermetic arts, and modern science. Nowhere is this com-
plexity better seen than in a comparison of astrology and astronomy. In addition, in Chapter 5, we
saw how the creation of complex, self-contained systems such as the I Ching and the Houses of the
Zodiac formed a system of study permitting a range of correspondences with the observed natural
world to be constructed and maintained. If then one projects these theoretical, mystical intercon-
nections onto other observations of Nature, one does have a system of sufficient flexibility to explain
some aspects of Nature. But of course, this is a mystical theoretical model of Nature; one could say
a mythological model. But this is where it all began. Ancient civilizations, such as the Sumerians,
were famous for their ability as both astronomers and as astrologers. You could not be an astrologer
if you did not know something of the slow, reproducible, sacred dances of the planets and the stars.
Consequently, we first need to consider the origins of the words astrology and astronomy.
Table 7.1 gives a list of some alchemical and astrological/astronomical symbols commonly
used by savants in the pre-scientific age (during and before the late 17th century). As can be seen
(compare with Figure 3.1), there is a clear mixing of the symbols; the symbols used in alchemy also
represent the metals that are associated with the seven planets that also give us our days of the week.
The use of these symbols descends from ancient Greco-Roman astronomy/astrology, although their
current shapes are a development of the 16th century. The symbols of Venus and Mars are also used
to represent the female and the male in science, following a convention introduced by Linnaeus (see
Chapter 13) in the 1750s. Even today, it is often difficult to separate the scientific and the magical
description of Nature.
Table 7.1: A table of symbols for celestial bodies (astrological and astronomical symbols) and chemical
elements (alchemical symbols)
Important Alchemical and Astrological Symbols
According to the Swiss alchemist and chemist Paracelsus (1493–1541), the three primes or tria
prima—of which material substances are composed are mercury, salt, and sulphur. Paracelsus
reasoned that Aristotle’s four element theory appeared in all bodies as three principles. He saw
these principles as fundamental, and justified them by recourse to the description of how wood
burns. Mercury included the cohesive principle, so that when it left as smoke the initially solid
wood fell apart. Smoke described the volatility (the mercurial principle), the heat-giving flames
described flammability (sulphur), and the remnant ash described solidity (salt).
Mercury (or mind); this is also the symbol for the planet Mercury
Salt (base matter or body)
Sulphur (or the soul)
Western alchemy makes use of the Hermetic elements. These are the four classical elements of
Aristotle: air, earth, fire, and water
Air
Earth
Fire
Water
The properties of the four classical elements are first
discussed by the Islamic scholar Abū Mūsā Jābir ibn
Hayyān (c.721–c.815). He has been widely described
as the father, or the founder of early chemistry, in-
venting many of the basic processes and equipment
still used by chemists today.
Seven metals are associated with the seven planets, which also give us our seven days of the
week, and seven major deities, all figuring heavily in alchemical symbolism. Although the met-
als occasionally have a glyph of their own, the planet’s symbol is most often used, and the sym-
bolic and mythological septenary is consistent with Western astrology. The planetary symbolism
is limited to the seven wandering stars visible to the naked eyes of ancient astronomers; the extra-Saturnian planets Uranus and Neptune are not included, as they were identified as planets only in the late 18th and early 19th centuries, respectively.
Lead, dominated by Saturn: also the symbol for the planet Saturn. Saturday is the day of Saturn or Kronos—dies Saturni.
Tin, dominated by Jupiter: also the symbol for the planet Jupiter. Thursday is the day of Zeus or Jupiter—dies Iovis.
Iron, dominated by Mars: also the symbol for the planet Mars, and for the male. Tuesday is the day of Mars—dies Martis.
Gold, dominated by Sol (the Sun): also the symbol for the Sun. Sunday is the day of the Sun—dies Solis.
Copper, dominated by Venus: also the symbol for the planet Venus, and for the female. Friday is the day of Venus or Aphrodite—dies Veneris.
Mercury (quicksilver), dominated by Mercury: also the symbol for the planet Mercury. It is also used as a unisex symbol since the intersex Hermaphroditus was a child of Hermes and Aphrodite (Mercury and Venus). Wednesday is the day of Mercurius or Hermes—dies Mercurii.
Silver, dominated by Luna (the Moon): also the symbol for the Moon. Monday is the day of the Moon—dies Lunae.
It was the Austrian-American Marxist historian and sociologist Edgar Zilsel (1891–1944) who pointed out that the compound word astronomy could not have been formed and used had
there not been, at that time, a tacit recognition of the quasi-juridical nature of the laws which con-
trol the motions of the heavenly bodies. That is, that there was a celestial law-giver who legislated
for the Universe. [1]
The de Legibus (On the laws) is a dialogue by Marcus Tullius Cicero (106–43 BCE) composed
during the last years of the Roman Republic. Cicero wrote this work as a fictionalized dialogue
between himself, his brother Quintus, and their mutual friend Titus Pomponius Atticus. The dia-
logue begins with the trio taking a leisurely stroll through Cicero’s estate at Arpinum; they begin
to discuss how laws should be made, and how they should be maintained. Cicero uses this text for
expounding on his theories of natural laws of harmony among the social classes. But what Cicero
also included was the comment, “The universe obeys god, seas and land obey the universe, and human
life is subject to the decrees of the Supreme Law.” Cicero’s comment was certainly a Taoist view of the
nature of all things, but it was a view that demonstrated a separation between divine laws (Laws of
Nature) and the laws of men. Yet, in his de Natura Deorum (On the Nature of Gods), Cicero tells us
that gods and men are influenced by the same laws, so we see there is an indication that there were
laws of Nature which bind us all.
The words astrology and astronomy were at first synonymous, and the latter was familiar to
Aristophanes in the 5th century BCE (Clouds, lines 194 and 201). Subsequent usage seemed to
follow the personal preference of individual authors. Plato wished to settle on the word astrology
for all investigations of the heavens, but astrology was already beginning to acquire the magical
significance of astro-mancy. In the astrological literature of late antiquity, we sometimes encounter
a mixing of terms, which today we take a great deal of care to separate; for example, “Laws of Na-
ture” are mentioned in the context of a magical interpretation of phenomena. The astrologer Vettius
Valens (120–175), while discussing an astrological predetermination (submission to fate), speaks of
the legislation of Nature, of fate and of the stars.
Vettius Valens’ surviving texts are particularly interesting, because he cites the views of a
number of earlier authors and authorities who would otherwise be unknown. Although the astron-
omer, mathematician, and astrologer Ptolemy of Alexandria (90–168), author of Tetrabiblos
(the most influential astrological text we possess), was generally regarded as the colossus of Helle-
nistic astrology and astronomy for many centuries following his death, it is likely that the practical
details of the astrology of the period resemble the methods elaborated in Valens’ Anthology. Mod-
ern scholars tend to compare and contrast the two men since both were roughly contemporary and
both lived in Alexandria. Yet Valens’ work elaborated the more practical techniques that arose from
ancient tradition, while Ptolemy was more of a “modern” scientist, and tended to focus on creating
a theoretically consistent model based on his Aristotelian interpretation of the Cosmos. Ptolemy’s
model of the Cosmos persisted until the early 17th century.
Deciding that the traditional Pagan religion (with all those sexually driven anthropomor-
phic gods and goddesses) was useless, Valens found in fate a substitute religion. For him, absolute
pre-destination gave emotional satisfaction and aroused an almost mystical feeling of oneness with
the Cosmos. Knowing that everything was already predetermined apparently gave one a sense of
freedom from anxiety (ataraxia or “unperturbedness”) and a sense of salvation. With such a view
of Nature, we are not far from the Consolation of Philosophy by the late-Roman writer Boethius
(480–524/525).
In the 5th century AD, Latin encyclopaedias written for monks explain astronomy as, liter-
ally, the “science dealing with the laws of the stars,” that is, lex astrorum.13 But these sources are not
fully explained, and the significance might be that of the laws which the stars give to every man in
fixing his fate, rather than that of the laws which the stars themselves had to obey in their eternal
motions. So, we are no further forward.
There is no clear distinction between astrology and astronomy until we get to the European
Enlightenment, with even Isaac Newton being both an astrologer and an astronomer. Indeed, the
idea of lex astrorum suggests that gravity is not only what keeps the stars in their courses, but also
what carries astrological “influence.” Perhaps it was this latter function that started Newton on
his great quest for gravity. In short, there is no simple explanation or authority on the difference
between astronomy and astrology other than one of personal belief in the influence of the stars on
our fate, a fact that is emphasized when we consider how easily we make a Freudian lapsus when
using the two words.
7.2 THE SEARCH FOR THE DIVINE LAWGIVER
It is quite difficult to locate an exact moment when natural philosophers, savants, or magicians
started using the term Law of Nature for a law or laws derived from observations of Nature, and
which is considered to be inviolable; that is, a law which has absolute authority over all of us, and
over our society. Archimedes (Figure 7.1) was probably the first to expound a Law of Nature, but
he would probably have regarded what we call the law of the lever as a principle rather than an
inviolable law. But by the mid-18th century, the term Law of Nature was being widely used, cer-
tainly as a result of the propagation of the Newtonian synthesis of mathematics, mechanics, and
optics, although how many of those who used the term had much of an inkling of what it might
mean is a moot point. Of course, there are the majestic lines of hymn, number 535 from the English
Hymnal of 1796,
Praise the Lord, for he hath spoken.
Worlds his mighty voice obeyed;
Laws that never shall be broken,
For their guidance he hath made.
13 See Cassiodorus, Inst. 2, 7 and Isiodorus (Isidore of Seville), Diff. 2, 152.
It is not entirely clear, however, if the author is here writing about God’s law as being a set
of rules as given in the Bible, or a set of more fundamental rules stating how the Cosmos itself was
to function. Such was the prestige of Newton in the late 18th century that this verse could just
as well apply to the demi-god Newton, who had identified and presented to man a set of laws, or
principles that he said were inviolable.
Figure 7.1: The Fields Medal. This medal is awarded to those who achieve significant advances in
mathematics (there being no Nobel Prize for mathematics), and it carries a portrait of Archimedes
(c.287–c.212 BCE), as identified by the Greek text. The Greek natural philosopher was well ahead of
his time in using the modern scientific techniques of observation, conjecture, and further confirmatory
observations (and experiment) to understand the phenomena he saw around him. On the Equilibrium
of Planes is a treatise in two volumes by Archimedes. The first book establishes the law of the lever, and
locates the center of gravity of the triangle and the trapezoid. According to Pappus of Alexandria, Ar-
chimedes’ work on levers caused him to remark: “Give me a place to stand on, and I will move the Earth.”
The second book, which contains ten propositions, examines the centers of gravity of parabolic segments.
The Latin phrase states: Transire suum pectus mundoque potiri (Rise above oneself and grasp the world).
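The law of the lever established in that first book can be written in modern notation (Archimedes himself argued in terms of proportion rather than algebra): two weights $W_1$ and $W_2$ placed at distances $d_1$ and $d_2$ from the fulcrum balance when

\[
W_1\, d_1 = W_2\, d_2 ,
\]

that is, the weights balance at distances inversely proportional to their magnitudes.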
It is almost certain that the concept of a celestial lawgiver legislating for non-human, natu-
ral phenomena goes back to the Ancient Sumerians. The Sun god, Marduk, was raised to central
pre-eminence in Babylonian mythology about the same time that the sixth king of the First Dy-
nasty of Babylon, Hammurabi (c.1810 BCE–c.1750 BCE), codified his society’s laws. We read
how Marduk is he who prescribes the laws for the other gods, and it is he who fixes their bounds.
Marduk is the lawgiver to the stars. It is he “who prescribes the laws for the lesser star-gods, Anu, Enlil
and Ea and who fixes their bounds.” Marduk it is who “maintains the stars in their paths” by giving
“commands” and “decrees” (from the Later Babylonian Creation Poem as given by Joseph Needham
in Science and Civilization in China Volume II, P.533). Similar ideas of a supreme law-giving god
may be found in Hindu literature; see the Rig Veda X, 121.
Today, we know that it is the mass of the planetary and stellar bodies interacting with each
other through the medium of gravity, which holds the stars in their courses; however, this idea of
Isaac Newton is barely 300 years old, and received its latest refinement by Albert Einstein only a
century ago. And although the ideas of Newton and Einstein are accepted by modern scientists as
dogma, they are not widely understood. However, the concept of a primal lawgiving sky-god is still
very much accepted by, perhaps, the majority of humanity.
At an earlier period of scientific development and investigation, the pre-Socratic philoso-
phers spoke about “necessity in Nature” but not about the “laws of Nature.” For example, Heraclitus
(c.500 BCE) tells us that “The Sun will not transgress his measures; otherwise the Erinyes, the bailiffs of Dike (Goddess of Justice), will find him out.” Anaximander (c.560 BCE) also speaks of the forces of
Nature paying fines and penalties to each other for slights and transgressions. But then, is this not what the stories of Greek mythology are really implying: that behind the lusty gods, goddesses, nymphs, and heroes whose stories are intended to instruct the unsophisticated, there was a complex philosophical picture about the nature of divine and human transgressions and actions? The Roman Stoics maintained, as did Zeno of Citium and Diogenes, that Zeus, being immanent in the world,
was nothing other than universal law, an intelligent presence, or logos behind Nature—as with many
ideas about the nature of the Monotheist god.
Aristotle makes a separation between “positive law” which is obeyed by society, and “natural
law.” In the Nicomachean Ethics (V, vii) we read “Some people think that all rules of justice are merely
conventional, because whereas [a law] of Nature is immutable and has the same validity everywhere,
as fire burns both here and in Persia, rules of justice are seen to vary.” Plato does use the phrase law of
Nature in the Timaeus, but, unfortunately, he did not discuss the subject. It is the Stoics, particularly
the domineering, law-conscious Roman stoics who developed the idea of a set of supreme natural
laws common to all men, irrespective of their national or cultural heritage. One can see that just
as the Babylonian idea of Laws of Nature grew out of a centralized, absolutist oriental monarchy
or authority, so in the time of the Roman stoics, living within the world empire of Rome with its
greatly increased centralization of power and of authority, it would be natural to view the Universe
as a great empire ruled by a divine Logos, or intelligence. A supreme intelligence, which maintained
the stars in their courses, and ruled the destinies of all men and of all empires.
It is from the poet Ovid (43 BCE–17 AD) that we find the clearest statement of the belief
in the existence of laws in the non-human world (in Pythagoras from Metamorphoses XV, 17). “What
shakes the earth; what law the stars keep their courses under, and what so ever thing is hid from common
sense;” that is, Pythagoras knew the laws according to which the stars move. Ovid does not hesitate
to use the word lex (law) for stellar and planetary motions. In the Tristia, Ovid describes a supposed
friend’s faithless behavior as being so appalling as to make rivers flow uphill, the Sun go backward,
and all things proceed reversing Nature’s laws.
Judaism, Christianity, and Islam are, of course, built on the idea of a single divine lawgiver.
Perhaps it was during the Babylonian Captivity that the Jewish people came to adopt the idea of
a single transcendent god. Certainly, it is with the Hebrew Bible that we first begin to glimpse a
celestial lawgiver who influences both Nature and human society. “The Lord gave his decree to the
sea, that the waters should not pass his commandment” (Psalm 104) and “He hath made them fast for
ever and ever, he hath given them a law which shall not be broken” (Psalm 148). The problem with
the monotheist view of natural laws, however, was that it quickly became identified with morality;
human morality, particularly the do’s and don’ts of sex, as Saint Paul and Saint Augustine of Hippo
inform us. Yet, even as late as the 4th century AD, the Christian apologist Arnobius of Sicca (died
about 330) could argue that Christianity was not such a bad religion after all; as the adoption of
Christianity by the Roman Empire had not changed the way the natural world worked. After all,
the Sun still rose and the Moon still followed its traditional cycles. That is, the Laws of Nature are implicit in the world itself, untouched by religion. The rotation of the stellar firmament and the cycles of the seasons had not altered with
Constantine’s Edict of Milan of 313. Whatever was driving the Universe had little to do with the
Christian religion; replacing Jupiter or Jove by God, Yahweh, or Allah in your affections did not
change the visible world, it merely influenced your private life.
7.3 A VERY DIFFERENT POINT OF VIEW
What of the idea of a celestial law-giver in the Orient? Following Needham (Science and Civiliza-
tion in China, Volume II, P.554ff) we only need look at the Nei Ching which contains conversations
between Chi Ni Tzu and Kou Chien, the King of Yüeh, in the late 4th century BCE. The king asks
the sage about the origins of natural phenomena (he has already asked him about the forces that
rule human society). “There are the Yin and the Yang. All things have their chi-kang [that is, their fixed
position and motions with regard to other things]. (This chi-kang is what Needham translates as
“Laws of Nature.”) The Sun, Moon and Stars signify punishment or virtue, and their change indicates
fortune and misfortune. Metal, wood, water, fire and earth (the five elements of Classical Chinese
thought, slightly different from the European quartet) conquer each other successively; the Moon waxes
and wanes completely. Yet these normal changes have no ruler or governor. If you follow it [heaven’s way]
virtue will be attained; if you violate it there will be misfortune.”
The Ancient Chinese viewed Nature as a great net, or vast pattern. There is a web of relation-
ships throughout the Universe, the nodes of which are things and events. There is no ruler or gov-
ernor. Nobody wove this great net; it is eternal, like the quantum mechanical view of the vacuum,
but if you interfere with the texture of the net, you do so at your peril. The Ancient Chinese did not
follow the Roman stoic’s love for celestial law-givers and law-enforcers. The Ancient Chinese did
not need the sense of security coming from the creation of an all-powerful, male deity who lived in
the sky, who had a long white beard (a sign of wisdom according to Gnostic creation myths), and
who would tell us all what to do and what to think; and of equal importance, what not to do, and
who not to do it with.
The idea that heaven does not command the processes of Nature to follow their regular
courses is linked to the belief system and philosophy which we know today as Taoism, where wu wei, non-action or unforced action, is central to the ways of heaven. The Tao of Heaven is a Chhang Tao; the order of Nature is an unvarying order, as was said by Hsün Chhing in about 240 BCE,
but that is not the same as affirming that anyone ordered it to be so. As Confucius (c.551 BCE–479
BCE) says in the Li Chi “The most important thing about [the ways of Heaven] is its ceaselessness...
Without any action being taken, all things come to their completion; such is the Tao of Heaven.” There is a
denial, if only an implied denial, of any heavenly creation or legislation. The heavens act according
to wu wei; the Tao produces, feeds and clothes the myriads of things that compose our world, it
does not lord it over them, and asks nothing in return.
Back in Europe, it is not until the 17th century that savants or natural philosophers began
to separate morality from the Laws of Nature, which were thought to be obeyed by animals, hu-
manity... minerals, plants, chemical substances and planets alike. This separation could only have
occurred with the advent of the Reformation, and the idea that there existed a “right of rebellion”
against a supposedly un-Christian prince or authority. That is, a change in the worldview of Euro-
pean man could not have begun until the absolutism of the pre-Reformation Catholic Church had
been challenged and successfully broken by the Protestant Reformation; popes such as Alexander
VI, Julius II, and Leo X were absolute “oriental” potentates. If it could be accepted that princes
could act contrary to natural law, no matter how well or badly defined was that natural law, then
a distinction could be made between natural or universal laws or authority, and man-made laws
or authority. And, perhaps, man-made laws would more accurately be termed choices rather than authority. Before the Reformation, the Christian world was in thrall to
the greatest of Scholastic philosophers, Saint Thomas Aquinas. This Dominican friar and teacher
envisaged a system of sets of laws: the lex aeterna, which governed all things for all time, the lex
naturalis, which governed all men, and the lex positiva created by human legislators (lex divina if canon law inspired by the Holy Ghost working through the church, and lex humana, or common law, laid
down by princes and governments).
Remarkably though, Johannes Kepler (1571–1630), who discovered the three empirical laws
of planetary motion, one of the first occasions when Laws of Nature, or rules of observation were
expressed in mathematical form, never referred to them as laws. Indeed, neither Galileo Galilei nor
Nicolaus Copernicus (1473–1543) ever used the expression “Laws of Nature,” even though their
work is the foundation of modern science. Perhaps, we see here the last vestige of the influence of
the Roman Church on natural philosophers and the church’s desire not to separate the concepts of
universal laws and man-made (that is, church-sanctioned) laws for the governance of the Universe
and society, respectively. Even Isaac Newton could not bring himself to totally decouple universal
laws and society’s laws.
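For the curious reader, Kepler's three empirical laws can be stated in modern notation (which is emphatically not the notation Kepler himself used):

\begin{align*}
r(\theta) &= \frac{a\,(1 - e^{2})}{1 + e\cos\theta} && \text{(each planet moves on an ellipse with the Sun at one focus)}\\
\frac{dA}{dt} &= \text{constant} && \text{(the radius vector sweeps out equal areas in equal times)}\\
T^{2} &\propto a^{3} && \text{(the square of the period varies as the cube of the semi-major axis)}
\end{align*}

It is rules of exactly this compact, quantitative form that the 18th century would come to call Laws of Nature.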
7.4 THAT FEARFUL PERFECTION
But change was coming; the zeitgeist was moving in the direction of the creation of a God whose
relationship to His creation could be examined. The Catholic heretic, Giordano Bruno, following
Nicolas de Cusa (1401–1464), asserted that God was a perfect sphere. That is, the most perfect
of solid (Platonic) bodies.14 Xenophanes of Colophon (c.570–c.475 BCE) was the first Classical
philosopher to speak against the anthropomorphic nature of the gods, and spoke instead of a sin-
gle transcendent god (“One god, greatest among gods and humans, like mortals neither in form nor in
thought”). This perfect deity was conceived of as being a sphere. It was Plato who had told us that
the sphere was the most perfect and uniform of all solid bodies; ideas that carried forward to dis-
cussions of the shapes of atoms and molecules in the last century. For some Classical writers it was
inconceivable that the transcendent god would not be spherical, because this shape was the best, or
least inadequate to represent the Divine, the supernatural. But how did this abstract, geometrical
image of God become established in the European mind?
Alain de Lille (c1116/1117–1202/1203) was a French theologian and poet who studied and
taught in the schools of Paris where he came under the influence of the philosophers and mystics
attached to the Augustinian Abbey of Saint Victor. Alain was also influenced by ideas of material-
ism, which could have condemned him to the flames; he wrote “God is an intelligible sphere, whose
center is everywhere and whose circumference is nowhere.” This powerful, fearful image of the Divine
sphere quickly became part of the European imagination. In Rabelais we read of, “that intellectual
sphere, whose center is everywhere and whose circumference is nowhere, and which we call God” (Pan-
tagruel). The mediaeval mind believed that God was in each of His creatures, but none of them
limited Him, “The Heavens and the Heavens of the Heavens cannot contain thee” (1 Kings 8: 27). What
better image for the Divine than the sphere?
Nicolas de Cusa wrote in his De Docta Ignorantia (On Learned Ignorance) that, “Deus est
absolutus;” no arguments or quibbles here. But he was following Saint Anselm of Canterbury
(1033–1109), who gave us the first ontological proof of the existence of God, who said God is, “id
cujus nihil majus cogitari possit” (something beyond which nothing greater can be envisaged). God can
never fully be reached by the human intellect. One could say, invoking the geometric metaphor of the sphere, that the relationship between our knowledge of God and our view of Nature is the
same as that between a polygon made up of many (N) sides and the circumference of a circle. As
14 Interestingly, while Bruno was burned at the stake in Rome for such ideas, Nicolas de Cusa had gone on to be-
come a Cardinal; evidently, the Middle Ages was a more easy-going time for cosmological speculation than the
late Renaissance, but that difference was probably due to the intervening Reformation.
N increases, the polygon more closely resembles the circumference of the circle which may contain
it, but they can never be commensurate. The circle cannot be squared. God is that circle (one slice
through a sphere) whose center is everywhere, but whose circumference is nowhere.
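The geometric point can be made exact. A regular polygon of $N$ sides inscribed in a circle of radius $r$ has perimeter

\[
P_N = 2 N r \sin\!\left(\frac{\pi}{N}\right), \qquad P_N < 2\pi r \ \text{for every finite } N, \qquad \lim_{N \to \infty} P_N = 2\pi r ,
\]

so the polygon approaches the circle ever more closely without ever coinciding with it, which is precisely the relationship the metaphor intends between human knowledge and the Divine.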
But whatever the difference in the disciplinary nature of the church for those who contem-
plated the nature of God, between the end of the Middle Ages and the late 16th century, both
Giordano Bruno and de Cusa demonstrated that there were no crystalline Ptolemaic Spheres
between man and the Empyrean, where God was believed to dwell. There was just an immense,
boundless emptiness filled with stars like our Sun. Bruno had finally overturned the Aristotelian
model of the Universe by accepting the ideas of Copernicus. Before Copernicus and Bruno, when
man looked into the heavens at the stars, he believed that he was looking inward toward God,
toward the primum mobile as given in the cosmology of Dante, so wonderfully expressed in the
Divine Comedy. After Bruno, when man looked at the stars, he looked into the depths of empty
space, which Blaise Pascal (1623–1662) so eloquently and majestically told us terrified him, “Le
silence éternel de ces espaces infinis m'effraie” (Pensées 102). Dante thought that space was a cathedral
containing God and man. Giordano Bruno, however, showed man that he was alone on a seashore
looking out into the unknown. The Divine sphere, which contained all things, was fearful indeed.
Giordano Bruno derived this geometric idea of the nature of God/the Universe from Nicolas
de Cusa, who in turn had derived the idea from Alain de Lille. But from where or from whom did
Alain de Lille get this idea? Interestingly, it seems as if the idea was derived from a 3rd century AD
Corpus Hermeticum. That is, the idea of God as an infinite sphere filling the Universe, or if you like,
by association, the Universe being an infinite sphere, an idea which made the Universe immense
and homogeneous and removed man and his small planet from the central position assigned to
them by theologians, came from Gnostic cosmology and the Hermetic writings which supposedly
derived from the ancient Hellenistic-Egyptian magical writings of Hermes Trismegistus (see Page
55). That is, they date from Alexandria in Egypt and the 3rd century AD, but were supposedly an
ancient tradition disappearing back into the mists of antiquity. In particular, from writings which
derived from a secret or sealed, self-contained wisdom (Hermeticum) relating to alchemy, magic, and
philosophy. Such an evolution of a science-like worldview evolving out of magic is not a unique
event; it happened again in the 17th century. Such a change from a magical to a more science-oriented worldview can be thought of as the change from the use of a language of curiosity to
a language of authority to describe the world within which we find ourselves.
The next time this idea of spheres of infinite diameter, but with no ascertainable circumference, was heard of was in the writings of the French mathematician, theologian, and philosopher Blaise Pascal, who said that “Nature is an infinite sphere whose center is everywhere, whose circumference is no-
where.” So, from Alain de Lille to de Cusa to Pascal, via Bruno, we have replaced God with Nature
as being infinite. Not only has man lost his centrality from the Christian Cosmos, but even God
seems to have got lost in this process.
The new feature of Bruno’s universe came from his blending of several philosophical ideas.
The monk from Naples was attracted by the atomic theories of the Classical World, which had
themselves been associated with the possibility of a plurality of worlds, formed by different combi-
nations of the eternal, never-resting atoms passing in and out of various combinations. Bruno was
also fascinated by the idea of Alain de Lille and de Cusa that the Universe had no center yet was
infinitely vast. For Bruno, it was the Copernican system that best suited such an unbounded, infinite
space, and also provided a model of planetary systems associated with stars extending away from us
in all directions. In the same way that the mediaeval philosophers had developed a theology where
God was infinite in all his attributes, Bruno correlated this infinite God with an infinite Universe,
a physics of the infinite, which corresponded with a theology of the infinite. Bruno was only saying
that the divine attributes of God should be given physical meaning, just as Isaac Newton would do in the
next century when he reconstructed God’s omnipotence in terms of an absolute space-time.
Giordano Bruno had affirmed that the Universe was boundless and homogeneous, and that
the same physical or natural laws would operate everywhere in this universe; this is still the standard
view (dogma) of the physical sciences. Newton’s Universal Law of Gravitation is as valid on Earth
as it would be in the Orion Nebula, and Planck's constant has the same value on Earth as it would
have on a planet orbiting a star in a distant galaxy. Although Bruno does not use the phrase “Law
of Nature” very often (he knew that the Holy Inquisition had their eyes on him), he did frequently
refer to ratio (reason). He visualized the phenomena we see around us as a synthesis of freely de-
veloping innate forces impelling an eternal growth and change. Bruno spoke of heavenly bodies as
animalia pursuing their course through infinite space, believing in the Neo-Platonic ideal that both
organic and inorganic entities and objects were in some sense animated. The anima constituted the
ratio, or inherent law which, in contradiction to any outward force or constraint is responsible for
all phenomena underlying motion. This was a very Taoist view of the Cosmos from a 16th-century
Neapolitan Dominican friar. He may not have said it often, but Giordano Bruno said that God was
to be found everywhere “... in inviolabili intermerabilique naturae lege..” (in inviolable laws of nature).
This made Bruno a Pantheist as far as the church was concerned, although it did demonstrate that
Bruno possessed a modern holistic, Taoist, or organic view of the character of natural phenomena.
It was with the triumph of scientific rationalism of the 19th and 20th centuries that we move
to definitively speaking about the Laws of Nature, and the advent of science as a language of au-
thority capable of explaining the world around us, and the entire Cosmos. It could not be otherwise;
we had removed God from our lives, and humanity wished to assume the divine mantle by showing
that all Nature was subject to something that we had discovered and measured. We knew what was
happening everywhere in Creation, because it happened in our laboratories here on Earth, partic-
ularly, in the Cavendish Laboratory in Cambridge. To speak of rules or propositions of
Nature would have been humbler; triumphalist scientists, however, wished to say that science (and,
by implication, the scientist) was omnipotent.
We are now in a position to ask why, after such a long period during which the Laws of
Nature were viewed in Europe as a theological commonplace, they attained such a position of
central importance in the society of the late 17th century. For example, Pope's epitaph for Isaac
Newton in Westminster Abbey, quoted at the beginning of this chapter could not have been written
for an earlier savant or natural philosopher. How was it that in the early-modern world, the idea
of God’s sovereignty over the Cosmos shifted from the exceptions in Nature (comets which so
terrified the mediaeval, and not so mediaeval mind) to unvarying, absolute, unbreakable rules? The
answer is probably to be found in the political changes that were taking place in the wider society
of this time. What was it that could lead men to look to an absolutist centralization of power over
the Universe? Almost certainly a slow, but inevitable, seepage into Nature of man’s conception of an
earthly ruler and his sovereignty. Perhaps it came with the decline of feudalism, and the rise of the capitalist
mercantile state with a single central Royal Authority (Henry VIII or Elizabeth I), coupled with
a widespread decline in the power of the aristocracy, and an increasing isolation of the monarch as
absolute; this is best demonstrated by that most absolute of monarchs, Louis XIV of France. Perhaps it
is no coincidence that the Cartesian idea of God as the supreme legislator for the Universe devel-
oped during the lifetime of Thomas Hobbes (1588–1679), “Nature (the Art whereby God hath made
and governs the World)” (Introduction to Leviathan, 1651). Thus, an idea which originated in early
Bronze Age Mesopotamia of absolute oriental despotism, was preserved and evolved over three
millennia to awake to new vigour in the world of early-capitalist absolutism.
7.5 FURTHER READING
The Social Origins of Modern Science (Boston Studies in the Philosophy and History of Science) by
Edgar Zilsel (2000); Boston, Kluwer Academic Publishers.
The Grand Titration: Science and Society in East and West (1969), Joseph Needham; London,
Routledge.
In the sections of this present work where I discuss Ancient China, I will be making use
of the magisterial Science and Civilization in China (published 1956, re-published 1975), Joseph
Needham; Cambridge, Cambridge University Press. In this chapter, I make reference to Volume II
of this multi-volume work: The History of Scientific Thought.
CHAPTER 8
Measuring the World
…une entreprise [the Metric System] dont le résultat doit appartenir un jour au monde entier.
(…an undertaking whose result must one day belong to the whole world.)
Charles-Maurice de Talleyrand-Périgord (1754–1838)
One of the biggest changes to affect the lives of Europeans in the 16th century occurred in Febru-
ary 1582, when Pope Gregory XIII reformed the solar calendar. This long-needed change should
have been instantly accepted throughout the Christian world, but as the Reformation had already
splintered Christendom, various nations adopted the new calendar in a piecemeal manner, based
on national politics and religious sentiments, with England not adopting the changes until 1752.
Russia only adopted the change in 1918. The new Gregorian calendar, named in honor of Pope
Gregory XIII, was introduced because the old Julian calendar, introduced by Julius Caesar more
than 16 centuries earlier, had made the calendar year slightly too long. With the passage of the centu-
ries, this accumulated additional time had become significant and had caused a drift of the seasons,
which, given the primordial place of agriculture in European society, had led to serious problems.
In the Julian calendar, all years exactly divisible by four were leap years. To remedy the trend in the
distortion of the solar calendar arising from the imprecision of the Julian calendar, an Italian savant
Aloysius Lilius (1510–1576) devised a new calendar with new rules: Every year that is exactly di-
visible by four is a leap year, except for years that are exactly divisible by 100, but the centurial years
that are exactly divisible by 400 are still leap years.
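Lilius's rule is easily expressed as a short algorithm. The following Python sketch (the function name and the test years are my own illustration, not from the text) shows the three-part test:

```python
def is_gregorian_leap_year(year: int) -> bool:
    """Lilius's rule: divisible by 4, except centurial years,
    unless the centurial year is also divisible by 400."""
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

# 1600 and 2000 remain leap years; 1700, 1800, and 1900 do not.
for y in (1600, 1700, 1800, 1900, 2000):
    print(y, is_gregorian_leap_year(y))
```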
The changes proposed by Lilius corrected the drift in the civil calendar, but it was still nec-
essary to delete ten days to bring the calendar back into synchronization with the seasons. This
deletion of ten days led to considerable consternation in Christendom, as ordinary people believed
that the church, and the savants and natural philosophers who advised the Church were stealing
ten days of their lives.15
The 16th century was also notable for the widespread introduction of a new idea to simplify
everyday arithmetical operations; something that also impinged upon the lives of nearly everyone.
That is, the use of decimal numbers (numbers to the base ten). In 1584, the Flemish engineer and
surveyor, Simon Stevin (1548–1620) published a set of tables for the calculation of the amount of
15 And even when the new calendar was finally introduced into Great Britain in 1752 (when, because of English
procrastination, it was by then necessary to delete eleven, not ten, days of the year), there was similar popular anger.
These events led to a distinctly anti-science, or anti-expert, attitude in the UK, which persists to this day.
interest that banks would charge for lending money at various rates over various periods of time.
As he was preparing these tables, Stevin realized that decimal numbers would greatly simplify
calculations in every area of life. Consequently, in 1585, Stevin published De Thiende (Of Tenths)
in Flemish and La Disme (The Tenth) in French. These were the first books where the simplicity of
decimal numbering was fully explained and demonstrated. Thus, the invention of decimal arithme-
tic is usually attributed to Simon Stevin, who, in addition to assisting people to understand how
much interest they were paying to bankers, also found time to use his skill as a mathematician in
the design of a more efficient type of windmill, which was able to drain land more effectively, and so
permitted the creation of the Netherlands.
However, it was in Italy that the greatest scientific advances were being made in the 17th
Century; scientific advances that would have a profound effect upon the science of measurement
(metrology). The Italian mathematician, astronomer, and savant Galileo Galilei enjoyed daydream-
ing in church. While attending Mass in the Cathedral of Pisa, he allowed his attention to wander
from the Liturgy, and it was while contemplating the swaying motion of the heavy chandeliers
suspended by long, fine chains from the high ceiling that he formulated several ideas about the
pendulum. Galileo went on to conduct detailed experiments on pendulums, and eventually deter-
mined the length of a pendulum swinging through its arc in exactly one second in Pisa. This became
known as a seconds' pendulum. It was subsequently shown that the seconds' pendulum varies in
length according to where it is measured on the Earth's surface. For example, at the Equator the seconds' pen-
dulum is 991.00 mm in length, and at 45° north of the Equator it is slightly longer at 993.57 mm,
this difference arising because of variation in the local value of gravity. The detailed study of the
motion of a pendulum undertaken by Galileo made this humble object, a heavy mass attached to
the end of a long piece of string, the world's first precision measuring device. It also led to ideas
of occult magic being attached to the pendulum, primarily, but not wholly, by Neo-Platonists, as
it revealed characteristics of the invisible, hidden, and Hermetic part of Nature—the mysterious
force of gravity.
The pendulum is a simple measuring device. The period, T, in seconds of a pendulum of
length, L (in meters), is given by T = (2π/√g) √L, where g is the local value of the acceleration due
to gravity. The value of g varies by a few percent over the surface of the Earth (it is approximately
9.806 m.s⁻²), and the pendulum is a sufficiently precise device that it is capable of determining this
spatial variation of g. This relatively simple relationship yields the approximation T = 2 √L. Thus, the period
of oscillation of a pendulum is independent of the mass of the bob of the pendulum. This surprising
finding from the 17th century linked the pendulum, in the imaginations of Occultists and those
interested in the Hermetic arts, with some, as yet undiscovered level of existence; which could well
be the much sought-after link between the physical world and the world of the spirit.
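As a rough check of the figures quoted above, the relation can be inverted to give the length of a seconds' pendulum, L = g(T/2π)², taking a seconds' pendulum to be one whose full period is two seconds (one second per swing). A short Python sketch, in which the local values of g are assumed, typical values and not taken from the text:

```python
import math

def pendulum_period(length_m, g):
    """Period of a simple pendulum, T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

def seconds_pendulum_length(g, full_period_s=2.0):
    """Length of a pendulum whose full period is 2 s, i.e. one second per swing."""
    return g * (full_period_s / (2 * math.pi)) ** 2

print(f"1 m pendulum: T ≈ {pendulum_period(1.0, 9.806):.2f} s")  # ≈ 2.0 s, hence T ≈ 2*sqrt(L)
for place, g in (("Equator", 9.780), ("45° North", 9.806)):      # assumed local values of g
    print(f"{place}: seconds' pendulum ≈ {seconds_pendulum_length(g)*1000:.1f} mm")
# The output is close to the 991.00 mm and 993.57 mm quoted in the text.
```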
An interesting aside on the presentation of decimal numbers, and one which has still to this
day not been resolved, is the nature of the decimal marker. In 1615, John Napier (1550–1617) the
Eighth Laird of Merchiston, Scotland, a mathematician, astronomer, and well-respected occultist
and astrologer, used a comma as a decimal marker to separate the whole number part from the
decimal number part of numbers in his book of multiplication tables Rabdologia. Here, Napier was
following an idea put forward by the Frenchman François Viète (1540–1603), who suggested that
the comma be used as a separatrix between the whole number and the fraction. Unfortunately for
the scientific community, Napier later changed his mind, and replaced the comma as the decimal
marker with the full stop. This change from the comma to the full stop as the decimal marker is
still with us today. In the English-speaking world, the decimal marker is the full stop, but in the
French-speaking world and in Continental Europe, the decimal marker is the comma. This confu-
sion has become an unresolvable cultural identifier.
We saw in Chapter 3 that in 1668 John Wilkins published An Essay Toward a Real Character
and a Philosophical Language. Wilkins’ long essay included a four and a half page description of a
proposed system of measurements based upon his idea for a single “universal measure” that could
be used to define length, weight, volume, and money. John Wilkins suggested a decimal system
of measurement, with a universal standard of length based on time and derived through the use
of a swinging seconds’ pendulum, and that this standard length could then be used to define area,
volume, and weight using a well-defined volume of pure, distilled rainwater. Wilkins’ Essay is the
first description of a complete system of measurement intended to be used by all nations. Indeed,
Wilkins’ proposal contained almost all of the essential elements of the Metric System of 1795,
which could quite reasonably therefore be said to have originated in England in the 17th Century
and not in France during the late 18th century (please note that national chauvinism is particularly
strong in this debate).
Following John Wilkins’ first description of an international system of measurement, the
development of the decimal metric system of measurements was inevitable even though Wilkins
himself was not confident of its success. Wilkins wrote about his plans for a universal measure, “I
mention these particulars, not out of any hope or expectation that the World will ever make use of them,
but only to show the possibility of reducing all Measures to one determined certainty.” Following the
publication of John Wilkins’ essay, savants in several countries took up and promoted his ideas.
The zeitgeist was waiting for this concept of a single system of units or weights and measures (and
money) based on a single natural dimension. In 1670, Gabriel Mouton (1618–1694), a French
cleric and astronomer, promoted a system of measurement that was to be based upon the physical
dimensions of the Earth; rather than a measurement based on the length of a seconds’ pendulum,
or one or other measurement of a human (albeit Royal) body.
Gabriel Mouton assumed that the Earth was a perfect sphere, and so a section along a Me-
ridian would be a circle. Mouton proposed that this “great circle” should be divided into ever smaller
angles, and that these small angles could be used to define a new system of measurement. What was
also proposed was that the division of these angles should be made using decimal arithmetic; that
is, division by ten, rather than the old Babylonian sub-divisions of an angle based on arithmetic to
the base 60 (an angle being divided into 60 min and each minute into 60 sec). Mouton suggested
that a minute of arc along a Meridian be measured and defined as a unit of distance called a milliare;
a linear distance that subtended this angle. The Abbé Mouton also suggested dividing the milliare
into centuria, decuria, virga, virgula, decima, centesima, and millesima by successively dividing by fac-
tors of ten. In short, Mouton suggested that Simon Stevin’s 1585 decimal system of tenths should
be used to divide the Earth-based angular units into ever smaller parts.
Interestingly, Abbé Mouton’s milliare is the modern definition of a Nautical Mile (that is,
one minute of arc of Latitude along any Meridian), which given the importance of maritime trade
to the world’s major powers accounts for the longevity of this system of measurement. Mouton’s
virga would be one thousandth of a minute of arc, which would be about 1.11 m in today’s Met-
ric System. In the same way that Wilkins suggested using a seconds’ pendulum beating 1 second,
and hence having a length of about one meter, to define the universal measure of length, Mouton
suggested using a shorter pendulum to measure the virgula (a tenth of a virga), or 0.111 m. A
pendulum of this length would be beating or oscillating every 0.66 sec.
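A quick check of that last figure, using the pendulum relation given earlier in this chapter and the 0.111 m length quoted here (the value of g below is an assumed, typical value, not from the text):

```python
import math

g = 9.806          # assumed local acceleration due to gravity, m/s^2
virgula = 0.111    # pendulum length quoted in the text, m

period = 2 * math.pi * math.sqrt(virgula / g)
print(f"Period of a 0.111 m pendulum ≈ {period:.2f} s")  # ≈ 0.67 s, close to the 0.66 s quoted
```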
However, the theoretical models of Wilkins and Mouton demonstrated that to make prog-
ress in defining more precise units of measurement; that is, establishing a universal system of mea-
surement to assist in the progress of science and society, one needed accurate measurements of our
planet. Was the Earth a sphere, or not; and if not, what was the eccentricity of the planet? One of
the first surveyors who undertook the task of precisely determining the curvature of the Earth was
the founder of a dynasty of French mathematicians and astronomers, Jacques Cassini (1677–1756)
who made measurements of the Earth based on minutes of arc. With his son, César-François
(1714–1784), Jacques Cassini surveyed a portion of the Arc of the Meridian from Dunkirk on the
coast of northern France to Barcelona in Spain; this is a line from Pole to Pole passing through
Dunkirk and Barcelona. This particular Arc of the Meridian also passes through Paris (the Paris
Observatory was built on the Meridian line in Paris) and is therefore called the Paris Meridian,
and it would be surveyed many times over the following century. These repeated measurements of
ever-increasing precision would finally yield the first standard meter, which is still preserved in Paris
(the task would be completed by Jacques’ grandson Jean-Dominique, 1748–1845). 16
16 The Paris Meridian is not our present line of zero Latitude. The Prime Meridian runs through the Greenwich
Royal Observatory, and so it is at Greenwich and not Paris that the world is bisected and the zero of Latitude
established. This move to a Greenwich-based view of the world was decided by an international conference in
1884 (International Meridian Conference, Washington DC, 1884); and, as one can imagine, this move pleased
the British, then at the zenith of Empire, and greatly displeased the French. Modern satellite measurements
of the geographical coordinates of the Paris Observatory show how much the Meridian had been shifted by
global geopolitics; the Paris Observatory is at 48°50'0''N 2°21'14.025''E. The Greenwich Royal Observatory is
at 51°28'40.12''N 0°00'05.31''W. By using the Abbé Mouton's system of units (given above) one can see how
little the Meridian changed geographically, but the political repercussions were tremendous. The French did not
begin to accept the 1884 change until after World War II.
Between 1735 and 1737, the explorer, geographer, and mathematician Charles-Marie de
La Condamine (1701–1774), the astronomer Louis Godin (1701–1780), and the naval architect,
mathematician, astronomer, and geodesist Pierre Bouguer (1698–1758) measured an Arc of a Me-
ridian in Peru where they also made equatorial measurements of the local value of the acceleration
due to gravity (g). In addition, they returned to Europe with the first detailed map of the Amazon
basin. And between 1739 and 1740, the astronomer Nicolas Louis de Lacaille (1713–1762), who
had started his career in the church but then moved to astronomy, together with César-François Cassini
again measured the Dunkirk-Barcelona Meridian. The northern and southern ends of the surveyed
meridian were the belfry in the center of Dunkirk and the fortress of Montjuïc in Barcelona,
respectively. Apart from defining the dimension of the universal measure or meter, these early
surveyors refined the value of the Earth’s radius, and definitively established that the shape of the
Earth is oblate or slightly flattened near the North and South Poles, which had been predicted by
Isaac Newton. This observation led to the Enlightenment cult of Newton as Universal Genius
(see Figure 8.1).
Figure 8.1: A medallion struck in Paris in 1840 to mark the final introduction of the Metric System
in France, and to act as a souvenir to posterity of the manner in which the meter was determined in
1799 by a measurement of distance; the allegorical figure is measuring one quadrant of our planet. The
reverse side of this medallion bears Condorcet's famous rallying call for the Metric System, A TOUS
LES TEMPS: A TOUS LES PEUPLES (for all times, for all peoples). This image is reproduced with the permission of the BIPM
(https://www.bipm.org/en/about-us/), which retains full international copyright.
While the French were surveying Latin America and Europe, the British finally got around
to accepting the Gregorian calendar. Part of the reason given for the UK’s decision to finally adopt
the Gregorian reforms to the Julian Calendar, and the necessary change in the date of the beginning
of the new year, was the difficulty of calculating interest on loans which were outstanding. It was
noted, “…[the comparison of the date in England compared with the date on the Continent was]
attended with divers inconveniences, not only as it differs from the usage of neighboring nations, but also
from the legal method of computation in Scotland, and from the common usage throughout the whole king-
dom, and thereby frequent mistakes are occasioned in the dates of deeds and other writings, and disputes
arise therefrom…” And in 1752, the New Year in Britain actually began on January 1st rather than
March 25th, to bring it into line with the rest of Europe, because Great Britain (and the British
Colonies, including America) began to use the Gregorian calendar (New Style or N.S.) rather than
the Julian calendar (Old Style or O.S.).17
As the 18th century drew to its close, two political events occurred which would have a pro-
found influence upon the nature of the various systems of weights and measures still used through-
out the world today. In North America, the British colonists decided that they did not need to pay
the government in London for protection against the French, Spaniards, and Native Peoples. The
rebellion of these colonists was successful, and by 1791 a new nation was born, which was using the
same system of weights and measures as they had inherited from England. But would they wish to
continue using this system? In Europe, the major political event of this period was the bankruptcy
of the Kingdom of France, and the collapse of the nation into the French Revolution of 1789.
History tells us that systems of weights and measures are mostly reformed or changed during,
or just after, major political upheavals. Well before the Revolution of 1789, everyone in France
knew that the systems of weights and measures in France were tangled, convoluted, complex and
an invitation to fraud, but no one thought that there was much to be gained by reform. Why re-
form the system? It was only the ordinary people who suffered adversely from the complexity of the
various systems of weights and measures, whose local variations were maintained for the personal
advantage of the local aristocrats and church leaders. At the time when the French economy was
beginning to industrialize in response to similar developments in England, which essentially had a
national system of weights and measures by this time, France was unable to compete as it did not
have any means of standardising and automating manufacture. In England, factories could out-pro-
duce manual manufacture, because there was uniformity of measurement and standardisation of
production based on the inch and the pound.
The as-yet-unborn U.S. sent ambassadors to France, which was actively helping them in their
struggle against Britain. Thomas Jefferson (1743–1826) served as Ambassador to France where he
17 Any reader of the letters of the Fourth Earl of Chesterfield to his sons will have remarked that the Earl of Ches-
terfield was always fastidious in noting whether he was using the Old Style notation for dates, or the New Style
notation when dating his letters. But then the reason why the Earl of Chesterfield paid so much attention to the
change in calendar was because he was responsible for the change. The Calendar Act (also known as Chester-
field's Act) of 1750 made provision to ensure that monthly or yearly payments would not become due until the
dates that they originally would have fallen due under the old Julian calendar. For this reason the UK tax year
continued to operate on the Julian calendar and begins on April 5th, the New Style date corresponding to the
Old Style tax year that began on March 25th.
was in regular contact with British and French savants as they formed their ideas about new, more
natural systems of units of measurement for science and for society. Similarly, Benjamin Franklin
(1706–1790) signed an alliance between France and America, but although the primary objective
of this alliance was the raising of funds for the war against Britain, Franklin did not see any reason
why he should not take the opportunity of this alliance to promote the cause of science. And it was
from this exchange of ideas that the political leaders in America began to consider the best system
of weights and measures for their young nation, which would ensure that they were able to com-
pete effectively and independently on an international stage, and not be tied in any way to Britain.
For example, in 1786, a few years after the American Colonies had gained their independence, Thomas
Jefferson proposed that the new nation adopt a decimal system for its currency. The Continental
Congress established the silver dollar as the basis for decimal coins, although it was not minted
until 1792.
At the same time, Thomas Jefferson independently proposed a system of weights and mea-
sures very similar to the proposed French decimal Metric System. He differed from the French in
that he wanted the meter to be defined by the length of a pendulum that beat a second rather than
surveying the surface of the Earth. Jefferson rightly reasoned that other countries could then readily
duplicate such a standard at any time; thereby laying the foundations for a truly international sci-
ence. Jefferson did not particularly like the idea that the meter would be based on a series of surveys
made on French territory. Unfortunately, it was at this time that detailed measurements showed by
how much gravity varied over the surface of the Earth, and the swinging pendulum definition for
the universal measure of length was losing widespread support among savants.
During his period as Ambassador to France (1785–1789), Jefferson visited London in 1789.
While Jefferson was in London, the political situation in France deteriorated, and to avert bankruptcy King Louis
XVI convened the États Généraux or States-General for the purpose of imposing new taxes on the
nation. The États Généraux was a meeting of the three “states” or groups of people who were seen
as constituting the nation: the First State was the clergy, the Second State was the nobility and the
Third State was the bourgeoisie. The urban workers and the peasants were rather left out of things.
8.1 DEFINING THE SIZE OF THE WORLD
Shortly after the fall of the Bastille in July 1789, but long before political stability was re-estab-
lished throughout France, the science commission of the Académie des sciences in Paris recom-
mended a measurement of the new standard of length, the meter, based on a detailed survey along
the meridian arc extending from Dunkirk to Barcelona, which had already been surveyed and
measured by de Lacaille and César-Francois Cassini in 1739. The commission calculated that if
they could measure a significant piece of the Meridian, the rest could be estimated. Both ends of
the line to be surveyed needed to be at sea level, and as near to the middle of the Pole-to-Equa-
tor Quadrant as possible to eliminate errors. Fortunately for them, the one such meridian arc
on Earth, spanning about a tenth of the distance (about one thousand kilometers) from the Pole to the
Equator, runs through Dunkirk and Barcelona, so most of the distance to be surveyed lay
conveniently inside France, a fact that did not escape the more nationalistic attention of observers
such as Thomas Jefferson.
Condorcet appreciated the potential for such nationalist views when he wrote “The Academy
has done its best to exclude all arbitrary considerations—indeed, all that might have aroused the suspicion
of its having advanced the particular interests of France; in a word, it sought to prepare such a plan that,
were its principles alone to come down to posterity, no one could guess the country of its origin.” The Leg-
islative Assembly endorsed the proposal from the Académie des sciences, directed that the detailed
survey be made as soon as possible, and enacted the necessary legislation on March 26th, 1791.
When the Académie des sciences finally chose the meter to be exactly one ten mil-
lionth of the distance between the North Pole and the Equator, their choice also defined this
distance as being precisely 10,000,000 m. Unfortunately, an error was made in the commission's
initial estimation, because the wrong value was used in correcting for our planet’s oblateness. We
now know that this Quadrant of the Earth is actually 10,000,957 m. One should never forget that
these savants were not only setting out to create what they saw as a new fundamental system of
units based on the dimensions of the Earth, but they were also imposing models and views about
the character of the Earth. In 1791, a handful of French Enlightenment mathematicians, guided by
the writings of Isaac Newton, imposed a definite shape and size to our planet. The Earth shrank,
and became precisely known. The Académie des sciences presented humanity with a fait accompli. The
medallion shown in Figure 8.1 was struck to commemorate this standardization of the Earth.
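From the two figures just quoted, one can put a number on the size of that initial error; a minimal sketch (the variable names are my own):

```python
assumed_quadrant_m = 10_000_000    # the quadrant as defined in 1791
measured_quadrant_m = 10_000_957   # the modern value quoted in the text

relative_error = (measured_quadrant_m - assumed_quadrant_m) / measured_quadrant_m
print(f"Relative error ≈ {relative_error:.2e}")               # ≈ 9.6e-05, about 1 part in 10,000
print(f"Shortfall per meter ≈ {relative_error * 1000:.2f} mm")
# A true ten-millionth of the quadrant is therefore roughly 0.1 mm longer than the meter as defined.
```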
8.2 OTHER SURVEYS
The European Enlightenment had come up with the idea of constructing a new decimal metrol-
ogy based on a single measurement of length. Such ideas, however, have a long history, and it is
to Ancient China that we must turn for the first consistent use of decimal weights and measures;
particularly, in the decrees of the first emperor, Chin Shih Huang Ti in 221 BCE.
Also, given the size of China, it is perhaps not surprising that an early effort was also made
in fixing terrestrial length measurements in terms of astronomical measurements or observations.
It was an early idea of Chinese savants, going back before the time of Confucius (551–479 BCE),
that the shadow-length of a standard height (an 8-ft gnomon), at the summer solstice increased by
1 inch for every thousand li (a length measurement equivalent to 1,500 chi or Chinese feet) north
of the Earth’s “center,” and decreased by the same proportion as one went south. This rule of thumb
remained current until the Han Dynasty (206 BCE–220 CE), when detailed surveying of the expand-
ing Chinese Empire showed it to be incorrect. But it was not until the Tang Dynasty (618–907)
that a systematic effort was made to determine a range of latitudes. This extensive Tang survey
had the objective of correlating the lengths of terrestrial and celestial measurements by finding the
number of li that corresponded to 1° of polar altitude (that is, terrestrial latitude), thereby fixing the
length of the li in terms of the Earth’s circumference. This Chinese meridian survey takes its place
in history between the lines of Eratosthenes (c. 200 BCE), and those of the astronomers of the Ca-
liph, al-Ma’mūm (c. 827), but more than 1,000 years before the French metric survey of the 1790s.
The majority of these Chinese surveying measurements were undertaken between 723 and
726 by the Astronomer-Royal Nankung Yüeh and his assistant, I-Hsing, a Buddhist monk. The
survey was carried out at 11 sites along a meridian running from the Great Wall in the north to
Indo-China in the south, a distance of 7,973 li or about 2,500 km. The main result of this field work
was that the difference in shadow length was found to be close to 4 inches for each 1,000 li north
and south, and that the terrestrial distance corresponding to 1° of polar altitude was calculated to
be 351 li and 80 bu (the bu was a measure of between 5–6 chi). The imperial surveyors had achieved
their goal of defining a terrestrial unit of length, intended for use throughout the empire, in terms
of the dimensions of “Heaven and Earth,” that is, 1/351 of a degree.
This survey is today practically unknown, yet it represents an outstanding achievement given
the spaciousness and amplitude of its plan and organization, and represents one of the earliest uses
of advanced mathematics which was needed to compute the final result. These results were known
in 18th-century Europe, as they were commented upon by Leonhard Euler and later by Pierre Simon
de Laplace. While the metric survey obtained a routine precision of about 1 part in 10⁶ in distance,
the much earlier Chinese survey could boast only of a precision of 1 part in 10³. The Tang value of
the li gives a modern equivalence of 323 m, but the earlier standard Han li is very different at 416 m.
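Using the figures quoted in this section, one can compare the Tang result with a modern value for one degree of latitude. In the sketch below, the conversion of the 80 bu remainder (taking 1 li = 300 bu) and the modern figure of roughly 111 km per degree are my own assumptions, not taken from the text:

```python
# Figures quoted in the text
li_per_degree = 351      # whole li per degree of polar altitude
bu_remainder = 80        # plus 80 bu
tang_li_in_m = 323       # modern equivalent of the Tang li, in meters

# Assumptions (not from the text): 1 li = 300 bu; one degree of latitude ≈ 111.1 km today
bu_per_li = 300
modern_degree_m = 111_100

tang_degree_m = (li_per_degree + bu_remainder / bu_per_li) * tang_li_in_m
print(f"Tang estimate ≈ {tang_degree_m / 1000:.1f} km per degree")   # ≈ 113.5 km
print(f"Discrepancy vs. the modern value ≈ "
      f"{100 * (tang_degree_m - modern_degree_m) / modern_degree_m:.1f} %")
```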
8.3 FURTHER READING
These books provide a readable background to the origins of the modern metric system.
1. Defining and Measuring Nature: The Make of all Things (2014); Jeffrey H. Williams; San Rafael, CA, Morgan & Claypool. This work contains an explanation of the recent redefinitions of several of the base units of the modern metric system.
2. Le nombre et la mesure : Logique des classifications métriques et prémétriques (1980); Franck Jedrzejewski; Diderot multimédia (in French).
3. The Measure of All Things: The Seven-Year Odyssey that Transformed the World (2002); Ken Alder; London, Abacus, an imprint of Little, Brown Book Group. This work describes in enjoyable detail the problems of the two French surveyors who determined the value of the universal measure, or meter, in Revolutionary France during the 1790s.
CHAPTER 9
Dividing Apples with Oranges to
Make the Language of Science
In questions of science, the authority of a thousand is not worth the humble reasoning of a single
individual.
Galileo Galilei (1564–1642)
Having looked at the evolution of the scientists’ worldview, that is, how science compartmenta-
lises and classifies the phenomena and things we observe in the world, let us now turn to how the
quantitative is introduced into this classification. After all, science would be nothing but magic
without a means of quantifying as well as qualifying what we see around us. We have only words
and numbers, the universal currency of humanity, to interpret, describe, and record the wonders of
Nature. Our various vernacular languages have evolved with our on-going, never-ending study of
Nature; indeed, one could say that languages have created man, rather than man having created
languages. The worldview of modern science has evolved by specialization of the basic language
used to describe natural phenomena, but few understand what has become a highly specialized
language. It has become a dialect of an elite, an elite as separate from the general mass of society
as any caste of priests.
Previously, we saw something of the ideas of John Wilkins for a universal philosophical lan-
guage, capable of being understood by all humanity. By the end of the 17th century, such ideas of
a rational universal language were very much part of the European zeitgeist. In 1666, the German
polymath Gottfried Wilhelm Leibnitz published his Dissertatio de arte combinatoria in which he
claimed that a proper or true philosophical language would be able to analyze all possible concepts
into their simplest elements, into what Leibnitz termed “the alphabet of thought” (see Page X?). In
such a philosophical or, as we would say today, scientific language, a proper symbol should indicate
the nature of the animal, phenomenon, or whatever it was naming. In other words, it was a language
which could define that thing, or that phenomenon by means of that thing’s appearance, or that
phenomenon’s intrinsic properties. Leibnitz’s theoretical proposition presupposed that: (i) ideas can
be analyzed into primitive notions or components; (ii) ideas can be represented symbolically; and
(iii) it is possible to represent the relations between these ideas. Gottfried Leibnitz was writing in
a century which had attempted the construction of many universal philosophical languages, and
so presupposed that a complete enumeration of human knowledge could be achieved. The ques-
tion then arises as to how a relatively small number of fundamental primitive components or base
units could be manipulated or combined to produce a true universal scientific language capable of
describing all Creation.
The philosophical languages of the 17th century attempted to reform natural languages by
simplifying the complex, multiple meanings of some words and concepts. Consider an attempt at
learning the definitions of all the words in a dictionary, or of attempting to comprehend all aspects
of a discipline of science. In the dictionary, you will find every word defined in terms of other words;
in the scientific discipline, you will find explanations involving other scientific terms. In your deter-
mination to learn the meaning (or meanings) of every word, you may find that you need to consult
the definitions of the words employed in the definitions of other words. Indeed, you soon realize
that your initial attempt at learning the meaning of each word in the dictionary is futile. In fact, it
is a circular task, because the dictionary contains only a closed set of words, finite in number, that
are used to describe the meanings of one another. If you do not already have in your mind a set of
basic words whose meanings you know independently, without the need of words to define them,
you will remain forever in a continuous circular loop with your dictionary; and the same goes for
seeking to learn a new scientific discipline. For this reason, the philosophical languages of the 17th
century did not reform and simplify English, but they did give us the thesaurus.
At the time of the French Revolution, savants who were familiar with the ideas of Wilkins
assumed that the new fundamental unit of length, the meter, could be used to define all the scien-
tific and technological concepts required by their society. This meter, or universal measure, was to
be defined from the dimensions of the Earth, as one ten millionth of a quadrant running from the
North Pole to the Equator. This universal unit of measurement may be thought of as a semantic
prime of the new language of science. In fact, it is one of the seven base units, or semantic primes of
the International System of Units (SI), which is the modern scientific version of the metric system
of 1795; see Figure 9.1. [1]18
Having defined the basic unit of length, l, to define an area, a two-dimensional quantity, you
simply multiply two distances, that is, l.l = l². Similarly, when we go to three spatial dimensions to
define a volume, we write algebraically l.l.l = l³. Then, assuming that the density (that is, the mass of
a known volume of a substance) of, for example, pure water is taken to be well defined as one gram
for each cubic centimeter, one can define a base unit of mass as the mass of a precisely known
volume of pure water. The kilogram was originally defined as the mass of 1,000 cubic centimeters
or 1,000 cm³, that is, one liter.
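A trivial sketch of that chain of definitions, from the centimeter to the liter to the kilogram (the density of 1 g/cm³ is the idealized value assumed in the original definition):

```python
side_cm = 10.0                    # a cube 10 cm on a side
volume_cm3 = side_cm ** 3         # 1,000 cubic centimeters, i.e. one liter
water_density_g_per_cm3 = 1.0     # idealized density of pure water
mass_g = volume_cm3 * water_density_g_per_cm3
print(f"{volume_cm3:.0f} cm^3 of water ≈ {mass_g / 1000:.0f} kg")  # one liter of water ≈ 1 kg
```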
18 SI stands for Système international des unités, the French name for this set of units; in all matters relating to the
metric system, French is the only official language. This point is rarely borne in mind by English-speaking nations;
but as is pointed out in Chapter 14, politics and science do not always sit well together.
Figure 9.1: Medal commemorating the centenary of the Meter Convention of 1875, manufactured by
R. Corbin, Monnaie de Paris. This face of the medal represents the seven base units of the SI (meter,
kilogram, second, ampere, kelvin, mole, and the candela), and how the meter is defined in terms of the
wavelength of light (in 1975 this was via the red light from a krypton discharge lamp) rather than by
an artifact. This image is reproduced with the permission of the BIPM which retains full international
copyright (https://www.bipm.org/en/about-us/) .
But what happens when we wish to consider the combination of the universal measure of
length with other quantities which are essential in even some of the simplest ideas and concepts of
technology; for example, how does one introduce time, the basic unit of which is the second, into a
system of mechanical quantities? [2]19
The speed, or velocity, of a planet flying through space, or of an ox ploughing a field, is de-
fined in terms of distance and time; yet, how do we combine these two different base units? One
might think of these two quantities as being as different as apples and oranges, so how can they
be divided or multiplied together when they certainly cannot be added or subtracted? It is possible
to mix and manipulate dimensions of distance and time, to divide or multiply meters and seconds,
or even furlongs and fortnights. It is the mathematical definition of a unit that allows us to ma-
nipulate distance and time, and generate new ideas such as the concept of speed or velocity, and
of acceleration which is speed or velocity per unit time. First, consider what we mean by a unit.
Any value of a physical quantity, Q, may be expressed as the multiplied product of a unit [Q] and a
19 The second is a base unit of the SI, and is the oldest measured quantity having been defined about 5,000 years
ago by the Ancient Sumerians.
purely numerical factor (that is, a simple number). Written algebraically, we have Q = (a number).[Q],
where [Q] is the unit, for example, meters or seconds, and there are a certain number of these
meters or seconds; for example, Q_length = 10 meters or Q_time = 10 seconds.
This convention of expressing a quantity as a unit and a numerical factor is used throughout
science and is referred to as quantity calculus. When units are being manipulated, one may only add
like terms, as with apples and oranges, but all units may be manipulated algebraically. When a unit
is divided by itself (that is, meters/meters or seconds/seconds), the division yields a dimensionless
number, which is one (1) and so intrinsically without dimension or unit. When two different units
are multiplied or divided, the result is always a new unit, referred to by the combination of the indi-
vidual units. For instance, in the SI, the unit of velocity is meters per second; that is, meters/seconds,
or m/s, or m.s⁻¹. This new unit is neither length nor is it time, but length divided by time. When
dividing length by time, one is only dividing the numerical factors, which appear before the unit.
The two original units are distinct, and cannot be divided but are left as a new unit, meters divided
by seconds. Length and time are base units, but the new unit of speed, or velocity, is said to be a
derived unit, and may be deconstructed into base units. Likewise, density is defined as the mass of
a known volume of something, or mass per unit volume. This derived unit is composed of two base
units, the base unit of mass (kilogram) and the base unit of length (meter), which, as we are dealing
with a volume, is cubed. Again, we have divided two base units together to create something new.
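One way to see how quantity calculus works in practice is to carry the unit alongside the number explicitly. The following Python sketch (my own illustration, not part of any SI document) stores a quantity as a numerical factor plus a dictionary of base-unit exponents; multiplying or dividing two quantities multiplies or divides the numbers while adding or subtracting the exponents, so dividing a length by a time automatically produces the derived unit m.s⁻¹, and dividing a unit by itself leaves a dimensionless number:

```python
from dataclasses import dataclass, field

@dataclass
class Quantity:
    value: float
    units: dict = field(default_factory=dict)   # e.g. {"m": 1, "s": -1} for a velocity

    def _combine(self, other, sign):
        units = dict(self.units)
        for unit, power in other.units.items():
            units[unit] = units.get(unit, 0) + sign * power
        return {u: p for u, p in units.items() if p != 0}   # drop cancelled, dimensionless parts

    def __mul__(self, other):
        return Quantity(self.value * other.value, self._combine(other, +1))

    def __truediv__(self, other):
        return Quantity(self.value / other.value, self._combine(other, -1))

length = Quantity(10.0, {"m": 1})   # Q_length = 10 meters
time = Quantity(10.0, {"s": 1})     # Q_time   = 10 seconds

speed = length / time
print(speed)            # Quantity(value=1.0, units={'m': 1, 's': -1}), i.e. 1 m.s-1
print(length / length)  # Quantity(value=1.0, units={}), a pure, dimensionless number
```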
When the metric system was first introduced in April 1795, there were two base units, the
meter and the kilogram; the second was already part of the social fabric. As science and technol-
ogy advanced in the 19th century, the new profession of scientist (first defined in revolutionary
France—scientifique) understood how the various manifestations of, for example, heat and work
were all related to the concept of energy, and how this idea related to the established base units of
length, mass and time. In fact, today we have seven base units which may be combined to explain
every known scientific phenomenon, and which would be used to comprehend scientific discov-
eries that have yet to be made. That is, it is through these seven base units that the true universal
language, the language of authority that is science is formulated.
By convention, all physical quantities are organized into a system of dimensions. Each of the
seven base quantities used in the modern SI is regarded as having its own dimension. The symbols
used for the base quantities or base units, and those which are used to denote their dimensions are
given in Table 9.1.
All other quantities, all the phenomena known to modern science are derived from these
seven base quantities using the well-established equations, or Laws of Nature and are called derived
quantities. As outlined above, the dimensions of the derived quantities are written as products of
powers of the dimensions of the base quantities using the equations that relate the derived quan-
tities to the base quantities.20
Table 9.1: Base quantities and their dimensions, and the base units of the SI

Base Quantity         Symbol of Base Quantity   Dimensional Symbol*   SI Base Unit   Symbol of SI Base Unit
Length                l                         L                     Meter          m
Mass                  m                         M                     Kilogram       kg
Time                  t                         T                     Second         s
Electric current      i                         I                     Ampere         A
Temperature           T                         Θ                     Kelvin         K
Amount of substance   n                         N                     Mole           mol
Light intensity       I                         J                     Candela        cd

* The dimension of a physical quantity does not include magnitude or units. The conventional symbolic representation of the dimension of a base quantity is a single uppercase letter in Roman (upright) sans-serif type (these specifications are part of the dogma of science [1]).
9.1 CREATING EXPRESSIONS IN THE LANGUAGE OF SCIENCE
Dimensional analysis, or the manipulation of quantity calculus is a powerful tool in understanding
the properties of physical quantities, independent of the system of units used to measure them.
Every physical quantity is some combination of the base units in Table 9.1; for example, speed,
which may be measured in meters per second (m/s) or miles per hour (miles/h), has the dimension
L/T, or L.T⁻¹, and pressure, which is a force pressing down on an area (as in pounds per square inch),
has the dimension M.L⁻¹.T⁻². Dimensional symbols and exponents are manipulated using the rules of algebra;
for example, the dimension of area is written as L², the dimension of velocity as L.T⁻¹ (meter per
second), the dimension of acceleration (the rate of change of velocity with respect to time) is writ-
ten as L.T⁻² (meter per second squared; that is, meter per second per second), and the dimension of
density as M.L⁻³ (kilogram per meter cubed).
Dimensional analysis is routinely used to check the plausibility of newly derived equations,
the design of experiments, and the results of calculations in engineering and science before money
and effort is expended on detailed measurements. In this way, reasonable hypotheses about complex
20 As mentioned above, the dimension of any quantity Q is written in the form of a dimensional product; dimensions
of Q = L^α M^β T^γ I^δ Θ^ε N^ζ J^η, where the exponents α, β, γ, δ, ε, ζ, and η are generally small whole numbers
(integers), they can be positive or negative, or even zero, and are termed dimensional exponents. This expression
defines the make of all things [2].
physical situations are examined theoretically, to see if they merit subsequent testing by experiment.
And, it is also the means by which one seeks to determine appropriate equivalent values for a quan-
tity in another system of units; for example, how you convert from the value of a quantity in metric
units to the equivalent quantity in British customary units; for example, meters/second to miles/
hour, or joules (the SI derived unit of energy, symbol J, where J is equivalent to kg.m².s⁻²; that is,
M.L².T⁻²) to British Thermal Units or BTU (a customary unit of energy equal to about 1,055 joules.
A BTU is approximately the amount of energy needed to heat 1 lb (0.454 kg) of water, which is
exactly one tenth of a UK gallon, or about 0.1198 U.S. gallons, from 39°F to 40°F, or 3.8°C to
4.4°C). Thus, dimensional analysis is the means of translating between the various dialects of the
single, universal language of science.
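A small sketch of such a translation between dialects, using the 1,055 J per BTU figure quoted above; the mile of 1,609.344 m is a standard value and is my addition, not from the text:

```python
JOULES_PER_BTU = 1055.0        # approximate value quoted in the text
METERS_PER_MILE = 1609.344     # assumed standard value
SECONDS_PER_HOUR = 3600.0

def joules_to_btu(energy_j):
    return energy_j / JOULES_PER_BTU

def mps_to_mph(speed_mps):
    return speed_mps * SECONDS_PER_HOUR / METERS_PER_MILE

print(f"1 kJ ≈ {joules_to_btu(1000):.3f} BTU")       # ≈ 0.948 BTU
print(f"10 m/s ≈ {mps_to_mph(10):.1f} miles/hour")   # ≈ 22.4 miles/hour
```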
Consider the concept of force, something that is done to an object to make it change its
speed, or velocity through, for example, acceleration. In the SI, the unit of force is the newton
(symbol, N), named after Isaac Newton in recognition of his fundamental work in mechanics. The
newton is equal to the force required to accelerate a mass of one kilogram at a rate of one meter per
second squared. In dimensional analysis using Newton’s famous formula where force (F) is given
as being equal to a mass (m) multiplied by acceleration (a), that is F = m.a, multiplying m (kilo-
gram) by an acceleration a (meter/second²), the dimension of the newton is found to be M.L/T² or
M.L.T⁻², that is, kg.m.s⁻². The newton is derived from the base units of mass, length and time, and
so could have been derived by the savants of the late 18th century.
These principles of dimensional analysis were known to Isaac Newton, who referred to them
as the Great Principle of Similitude. The 19th-century French mathematician and Egyptologist Jo-
seph Fourier (1768–1830) made important contributions to dimensional analysis based on the idea
that physical laws like Newton’s famous law, F = m.a, should be independent of the systems of units
employed to measure the physical variables. That is, the Laws of Nature and fundamental equations
should be equally valid in the metric system of units as in a non-metric system of units. And when
converting between these two systems of units, we need only be cognizant of the mathematical
factors needed to convert between the base units to convert the entire quantity from one system
to another. Thus, one should take care never to mix systems of units, as the consequences could be
disastrous (see Section 9.3). But there is nothing stopping one defining force in Ancient Egyptian units
of measurement; distance would be in terms of the Royal cubit (about 0.525 m), mass would be
in deben (about 0.015 kg) and time would have been in unut (the hour, which is identical with our
hour). Fourier showed how each of these base units would need to be converted to SI base units to
convert the Ancient Egyptian unit of force to the newton or vice versa.
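A sketch of the sort of conversion Fourier had in mind, using the equivalences quoted in the text for the cubit, the deben, and the unut; the composite "deben.cubit per unut squared" force unit below is my own construction for illustration, not a historical Egyptian unit:

```python
# Equivalences quoted in the text
CUBIT_IN_M = 0.525     # Royal cubit, in meters
DEBEN_IN_KG = 0.015    # deben, in kilograms
UNUT_IN_S = 3600.0     # unut (one hour), in seconds

# Force has the dimension M.L.T-2, so a hypothetical "deben.cubit per unut squared"
# converts to newtons (kg.m.s-2) by combining the factors for each base unit.
egyptian_force_unit_in_newtons = DEBEN_IN_KG * CUBIT_IN_M / UNUT_IN_S**2
print(f"1 deben.cubit/unut^2 ≈ {egyptian_force_unit_in_newtons:.2e} N")  # ≈ 6.1e-10 N
```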
9.2 DERIVED UNITS
The base quantities of the SI given in Table 9.1 are combined to generate derived units, which
are products of powers of base units without any numerical factors (other than 1). Such a set of
coherent derived units is, in principle, without limit and they represent the means by which all the
phenomena of Nature are described. Table 9.2 lists some examples of derived quantities and how
they are represented in the technical literature.
Table 9.2: Derived quantities

Quantity            Derived Quantity             Representation as a Unit
Area                square meter                 m²
Volume              cubic meter                  m³
Velocity or speed   meter per second             m.s⁻¹
Acceleration        meter per second squared     m.s⁻²
Density             kilogram per cubic meter     kg.m⁻³
Surface density     kilogram per square meter    kg.m⁻²
Specific volume     cubic meter per kilogram     m³.kg⁻¹
In this way, the semantic primes of the universal language of science, the base units of the SI,
are combined to produce a means of describing and quantifying Nature. Some important derived
units are given a specific name, usually to honor the scientist most closely associated with that
quantity. Some of these named derived units are given in Table 9.3.
Table 9.3: Named derived quantities

Quantity                     Derived Unit (Symbol)   Representation as a Unit
Frequency                    hertz (Hz)              s⁻¹
Force                        newton (N)              m.kg.s⁻²
Pressure                     pascal (Pa)             m⁻¹.kg.s⁻²
Energy (or work)             joule (J)               m².kg.s⁻²
Power (or light intensity)   watt (W)                m².kg.s⁻³
Of particular interest are two derived quantities related to angles. The plane angle (radian) is a two-di-
mensional quantity defined by two lines, and the solid angle (steradian) is a three-dimensional
quantity defined by a cone with a certain cross-sectional area. When expressed in terms of base
units of the SI, these two angles are: meter/meter and (meter squared)/(meter squared), respectively;
consequently, they are dimensionless, as m/m = 1 = m²/m². The fact that quantities related to angles
in the SI are essentially invisible, as far as the unit is concerned, needs to be remembered as we see
in Table 9.4, which contains more derived quantities.
Table 9.4: Further derived quantities

Quantity                     Derived Unit (Symbol)                        Fuller Representation
Moment of force              newton meter (N.m)                           m².kg.s⁻²
Viscosity                    pascal second (Pa.s)                         m⁻¹.kg.s⁻¹
Surface tension              newton per meter (N/m)                       kg.s⁻²
Angular velocity or torque   radian per second (rad/s)                    (m/m).s⁻¹ = s⁻¹
Heat density                 watt per square meter (W/m²)                 kg.s⁻³
Thermal conductivity         watt per meter kelvin (W/m.K)                m.kg.s⁻³.K⁻¹
Energy density               joule per cubic meter (J/m³)                 m⁻¹.kg.s⁻²
Radiant intensity            watt per steradian (W/sr)                    m².kg.s⁻³
Radiance                     watt per square meter steradian (W/m².sr)    (m²/m²).kg.s⁻³ = kg.s⁻³
These are only a handful of the phenomena of Nature described by a few of the base units
of the SI. All the derived quantities listed above, except thermal conductivity, involve only the
base quantities length (meter), mass (kilogram), and time (second); so these phenomena could, in
principle, have been identified by the savants who created the metric system in 1795 who had the
meter, kilogram, and second. They would not have had the kelvin as the base unit of temperature,
as temperature was not included in the SI as a base unit until 1954; the 18th-century savants
who created the metric system would have used the Centigrade scale of temperature. With only
the meter, the kilogram, and the second we can define energy, the driving force of Nature. Gottfried
Leibnitz had pointed out that what he termed the vis viva of a body (kinetic energy) was propor-
tional to the product of the body's mass and the square of its velocity; that is, kg.m².s⁻² (see Table
9.3). Likewise, Isaac Newton had said that a force (F) that causes a body to change its speed is equal
to the mass (kg) of the body multiplied by the acceleration (m/s²); that is, F = m.a = kg.m.s⁻², which
is the definition of the newton in Table 9.3.
By using the laws of physics as a grammar, and the base units as expressions or words, we
may construct a language that allows us to make predictions about phenomena that have not yet
been identified, but which should be observable. Looking at the above tables, a scientist could, for
example, ask questions such as: What happens if a force tries to twist a body instead of pushing
(repelling) or pulling (attracting) it?; What happens if a force acts upon an area, not simply along
a line?; or Is there a real, measurable phenomenon that arises when one couples the next highest
power of length with time? In the first case, one defines torque, which is a force that tries to rotate
a body (as any inexperienced motorcyclist who has applied his rear brake too harshly while going
around a corner too quickly will tell you). In the SI, torque is termed angular velocity (see Table
9.4) and is expressed in radian per second. But as the radian is dimensionless in the SI, it is written
as per second (s⁻¹). This can be confusing and is the reason why many engineers prefer to express
this important quantity in non-SI units. A force acting over an area would be newtons per square
meter, and from the Tables we see that this quantity would be m.kg.s⁻²/m² or m⁻¹.kg.s⁻², which is
the definition of pressure. Pressure is nothing more than the force exerted by something (a gas or
a fluid) upon a well-defined area.21
As for the reasonableness of phenomena that have not yet been observed, as was mentioned
above, one first has to consider the magnitude of the units and a dimensional analysis to see if such
a new phenomenon is observable. A relatively recent example would be the pressure exerted by
light, that is, radiation pressure. Could it exist? Is it measurable? The answer was yes, and it was
discovered that the radiation of the Sun exerts a pressure of less than a billionth of an atmosphere at
the Earth's surface. But it was an examination of the existing language of science that suggested
this new phenomenon and allowed individuals to look for it. The complex interconnectedness of
the base units that couple to generate the phenomena of Nature is represented in Figure 9.2. This
figure tells us, for example, that the present definition of the second is used in the present definition
of kilogram, meter, candela, ampere, and kelvin, and that the definition of the unit of temperature,
the kelvin is dependent upon mass, time, and length. This organic wholeness of the phenomena of
Nature reminds us of the ideas underlying the I Ching and the Taoist view of Nature. [2]
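That claim is easy to check on the back of an envelope, assuming the textbook relation P = I/c for fully absorbed sunlight, a solar irradiance of about 1.36 kW/m², and standard atmospheric pressure (none of these numbers appear in the text):

```python
solar_irradiance = 1361.0     # W/m^2, assumed value at the Earth
c = 2.998e8                   # speed of light, m/s
atmosphere_pa = 101_325.0     # standard atmospheric pressure, Pa

radiation_pressure = solar_irradiance / c    # pressure for complete absorption
print(f"Radiation pressure ≈ {radiation_pressure:.1e} Pa")
print(f"As a fraction of an atmosphere ≈ {radiation_pressure / atmosphere_pa:.1e}")
# ≈ 4.5e-06 Pa, or about 4.5e-11 atmospheres, indeed well under a billionth
```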
In linguistics, grammar is the set of rules governing the composition of clauses, phrases,
and strings of words in any given natural or vernacular language. Individuals who use or speak a
language have a set of internalized rules for using that language, and these rules constitute that lan-
guage’s grammar. The vast majority of the information in the grammar is, at least in the case of one’s
native language, acquired not by conscious study or instruction, but by observing other speakers.
Much of this work is done during early childhood; learning a language later in life usually involves
a greater degree of explicit instruction. But for all the emotion expended by those who understand and use the rules of grammar, as well as by those who have no idea about grammar, grammar is the cognitive information underlying language use. And this is the same in the language of science as it is in English. Grammar allows us to turn the lists of verbs, nouns, adjectives, adverbs, etc. that come into our minds at a particular moment into comprehensible, information-conveying prose. Grammar is not a distraction or an irritation; grammar is magical. Indeed, grammar and grimoire are derived from the same root. Grammar is also glamour, and the primary meaning of glamour is enchantment or spell, while a grimoire is a manual for the casting of spells. Through grammar we
may define, explore, understand, and perhaps control some aspects of the Universe.
21 The British customary unit for pressure (still used in, for example, tyre pressures) is pounds per square inch, which
gives a clear indication of pressure as a force upon an area.
[Figure 9.2 comprises seven linked boxes, one for each base unit of the SI: the second, the base unit of time (defined by the frequency of an atomic clock); the meter, the base unit of length (defined by c); the kilogram, the base unit of mass (defined by h); the ampere, the base unit of amount of electricity (defined by e); the kelvin, the base unit of temperature (defined by kB); the mole, the base unit of amount of substance (defined by NA); and the candela, the base unit of light intensity.]
Figure 9.2: The interconnectedness of the seven base units of the SI, now that (since May 2019) the
kilogram is redefined via Planck’s constant, h, the unit of thermodynamic temperature defined by Boltz-
mann’s constant, kB, the mole defined by Avogadro’s number or constant, NA, and the ampere defined
by the charge of the electron, e. As can be seen, the network of connections is complex, for example, the
kelvin is connected to the physics underlying the SI and is defined as an energy, but is dependent on
the definitions of length (L), mass (M), and time (T) as M.L2.T-2. In addition, the ampere is defined as
a flow of electrons in a time interval and so is no longer a force (M.L.T-2) dependent on length, mass,
and time, but only on time. The kilogram is now defined by h, which is dependent on energy and time.
9.3 LOCATION: THE SURFACE OF MARS, SEPTEMBER 23, 1999
On September 23, 1999, the Mars Climate Orbiter satellite was lost during a maneuver to
place it in an orbit around Mars. Instead of entering a stable orbit from where it could monitor the
Martian weather, it is believed that the satellite crashed onto the surface of the Red Planet. After the long crossing of interplanetary space, the satellite's controllers would have needed to slow down the satellite for it to enter safely into orbit around Mars. It is believed that it was this braking process which led to the satellite's loss. To slow down a body in motion requires the application of a braking force directed against its velocity; that is, applied in the opposite direction to its motion. Unfortunately for the satellite, and the scientists waiting to hear from it, something went seriously wrong.
As it turned out, the problem for the Mars Climate Orbiter was that the force necessary
to slow the satellite down for it to safely enter a stable orbit around Mars, was calculated in one
set of units, but when the command was sent to the satellite to ignite the braking thrusters, it was
applied in a different set of units. The two sets of software, on Earth and on the satellite hurtling
toward Mars, were trying to communicate in different scientific units, and instead of entering a
stable orbit well above the surface of the planet, it attempted to enter an orbit much closer to the
surface and crashed.
The principal cause of the disaster was traced to a thruster calibration table, in which British
customary units instead of metric SI units had been used to define force. The navigation software
expected the thruster impulse data to be expressed in newton seconds, but the satellite provided the
values in pound-force seconds (a non-SI unit). This confusion in units caused the thruster impulse to be interpreted as roughly one-fourth of its actual value. Knowing the forward momentum of the satellite, a calculation would have been undertaken to determine the force required to be applied in the reverse direction to slow the satellite, but due to incompatible programming on Earth and in the satellite, the satellite crashed.
The pound-force (lbF) is a unit of force in the system of units loosely termed British customary units. There are many ways to define force in this system of units, which may be considered confusing to some, but which actually tells one a great deal about the physics going on in a particular experimental situation. The pound-force is equal to the force exerted by gravity on a mass of one avoirdupois pound on the surface of the Earth. Originally, this unit was used in low-precision measurements where small changes in the Earth's gravity (which varies from place to place on the surface of the Earth by up to half a percent) could be neglected. The acceleration of the standard gravitational field (g) and the
international avoirdupois pound (lbm) define the pound-force as: 1lbF = 1lbm . g = 1lbm . 32.174 feet
per second squared, which on converting to the SI is equal to 0.454 kilogram . 9.806 65 meter per
second squared = 4.448 newton. (This factor of 4.448 was absent from the software that controlled
the satellite, and so it crashed.)
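By way of a hedged illustration (the impulse value below is invented, and this is not the mission software), the missing step amounts to a single multiplication by that factor of 4.448:

    LBF_TO_NEWTON = 4.448  # one pound-force is approximately 4.448 newtons

    def lbf_seconds_to_newton_seconds(impulse_lbf_s: float) -> float:
        """Convert a thruster impulse from pound-force seconds to newton seconds."""
        return impulse_lbf_s * LBF_TO_NEWTON

    impulse_lbf_s = 100.0  # an invented, purely illustrative impulse value

    intended = lbf_seconds_to_newton_seconds(impulse_lbf_s)
    as_received = impulse_lbf_s  # the same number, wrongly read as newton seconds

    print(f"intended impulse: {intended:.1f} N.s")
    print(f"interpreted as:   {as_received:.1f} N.s (about 1/4.45 of the intended value)")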
Even after 200 years of the decimal metric system, the world of science and technology is still
full of different systems of units. That is, different dialects of the same universal language of science.
But provided one is aware of these different dialects, one may effect the necessary translation in a
trivial line of computer code and everything will be fine. Assuming, blindly, that everyone is speak-
ing exactly the same dialect by assuming that everyone is using the same system of units is risky.
9.4 A FINAL COMMENT ON THE VALUE OF A QUANTITY: SACRED GEOMETRIES
The Book of Revelations is the Ur-text of a great deal of the nonsense one finds on the Internet.
One example of the many Hermetic, or opaque, subjects arising from Revelations, often discussed at interminable length on the Internet but rarely with any clarity, is sacred geometries, or sacred architecture. Interestingly, these arcane concepts also reveal something of the nature of the way
scientists look at the world.
The concept of a sacred geometry, or a sacred architecture, comes from Saint John’s vision
of the Heavenly Jerusalem given in Revelations 21:17. The King James Bible tells us about the
dimensions of the future New Jerusalem, “And he measured the wall thereof—an hundred and forty
and four cubits, according to the measure of man, that is, of the angel.” The cubit was a unit of length
measurement common to the Ancient Mediterranean world, equal to the length of a man’s forearm
(elbow to finger-tip). We immediately see from this obscure text from Revelations that the author
is in fact referring directly to the 5th century BCE, pre-Socratic Greek philosopher Protagoras’
well-known comment that “man is the measure of all things,” but we are also given the actual height
of the walls of the heavenly city, 144 cubits. This dimension has, over the last two millennia, inspired many individuals, particularly architects.
The great gothic Cathedral of Amiens in northern France, where construction began in 1220,
is built to a height of 144 pieds Romans (that is, 42.3 meters or 138.8 British feet). The near-by
Cathedral of Beauvais, on the other hand, where construction began in 1225, is built to a height
of 144 pieds du Roi (that is, 48.5 meters or 159 feet). These two mediaeval units, Roman feet (pieds
Romans) and Royal feet (pieds du Roi), are different. The fact that Royal feet are longer than Roman
feet means that the Cathedral of Beauvais is higher than the Cathedral of Amiens, which may well
explain why it was an unstable building that partially collapsed in 1284, while the Cathedral of
Amiens has never fallen down.
As far as the mediaeval architects of these two neighboring cathedrals were concerned, it
was the numerical value of the height of the City of God that was important. One hundred and
forty-four units was to be the height of the cathedrals, because that was the height of the City of
God given by Saint John in Revelations 21:17; it appears not to have mattered much which unit was actually adopted. (Remember: any value of a physical quantity, Q, may be expressed as the
product of a unit [Q] and a purely numerical factor or number.) The gothic architects of northern
France were only interested in the numerical factor of this physical quantity taken from the Book
of Revelations; as far as they were concerned, the unit was irrelevant. This addiction to the fetish
value of a number is numerology, pure and simple; it is Hermeticism. That is, attempting to find
some meaning or, perhaps, a secret hidden in a particular number. What precisely is the meaning of
144? Perhaps it is related to other celebrated biblical numbers such as 666? What is it about 144, the height of the walls and the number of the just (144,000)? When you put the heights of the
two cathedrals into British feet or meters, any mystical significance or magic disappears in the very
different light of the English Industrial Revolution and the French Enlightenment.
In contrast to the architects of mediaeval France, the Ancient Sumerians had little concept
of pure numbers, although they were well versed in the use of numbers to hide mystical significance
and to work magic. Our earliest recorded list of objects comes from Ancient Sumeria, and when
we look at these ancient lists (more than five millennia old) we see that the scribes used the same
metrological symbol, or unit as many times as was required by the value of the numerical factor
before the unit. Thus, instead of writing six oxen as the Sumerian form of the number six followed
by, perhaps, a schematic of an ox, the scribes simply drew the pictorial schematic of the ox six times.
The Ancient Sumerians used only the unit part of the definition of a quantity, while the me-
diaeval French craftsmen only used the numerical part of the definition of a quantity. Neither is the
correct approach. One can readily imagine the evolution of the Sumerian usage to a more sensible,
modern approach arising because the lists were getting to be very long and tedious to compose,
and those clay tablets were so very small. The magico-Christian architects, on the other hand, got
caught up in looking for mystical significance in the numbers mentioned in religious-poetic texts of
late antiquity, and as a consequence they lost their way in numerology, and like the Tower of Babel
their cathedral collapsed.
9.5 FURTHER READING
[1] Concerning the origins, since the late 19th century, and the previous definitions of the
base units of the SI, the best source for detailed information is the bilingual SI Brochure published
by the Bureau international des poids et mesures (BIPM). This is a non-technical document intended
for those already familiar with the science relating to the origin of the SI, and is essentially a list
of rules concerning the use of SI units. The brochure is written by the Consultative Committee for
Units of the BIPM; the most recent edition, the 8th, was published in May 2006. This substantial
booklet is only available through the BIPM, but the text is freely available on the BIPM’s website
(www.bipm.org/en/publications/). Also included are extensive lists of references as to when certain
words or quantities were adopted for use with the SI.
The BIPM website also contains freely available information about the evolution of the
definitions of the base units of the SI. The definitions of several of the base units changed in May
2019—full details may be found on the BIPM website (in English and in the official language,
French) at https://www.bipm.org/en/measurement-units/.
Also see https://en.wikipedia.org/wiki/2019_redefinition_of_the_SI_base_units.
[2] There is also a more recent history of metrology and of the metric system in the following volume:
Defining and Measuring Nature: The Make of all Things (2014); Jeffrey H. Williams; San Rafael, CA,
Morgan & Claypool.
CHAPTER 10
What Powers Society?
We already know enough to begin to cope with all the major problems that are now threatening
human life, and much of the rest of life on earth. Our crisis is not a crisis of information; it is a
crisis of decision.
George Wald (1906–1997)
Having looked briefly at how the worldview of scientists has evolved, let us now begin to consider the mutual influence of science and society. In particular, we will do this by looking at bulk thermodynamic properties (see Table 10.1, which is based on the tables in Chapter 9). In physics, energy—derived from the Greek ἐνέργεια (activity or operation), a term that first appears in Aristotle—is an indirectly observable quantity. Energy powers, not only us and our society, but also the
Universe and everything in the Universe; however, it cannot be measured as an absolute quantity.
We may say that there is energy in a system, but we cannot quantify it precisely; as Saint Thomas
Aquinas said in the 13th century, “I see motion, so I infer energy.” Indeed, there is so much energy
in the Universe that we are only really able to quantify the influence of changes in quantities of
energy, as they act upon matter. Table 10.1 lists four derived units from the International System of Units (see Chapter 9): those for energy (and work), force, power, and pressure. We can see from their rep-
resentation in the language of science that they only differ by powers of length (m in meters) and
time (s in seconds). The final column also demonstrates how interconnected are these four basic
phenomena— as revealed in Figure 9.2.
Table 10.1: Named derived, closely related quantities

Quantity             Derived Unit (Symbol)    Representation in the Language of Science
Energy (and Work)    joule (J)                kg m² s⁻²
Force                newton (N)               kg m s⁻²
Power                watt (W)                 kg m² s⁻³
Pressure             pascal (Pa)              N m⁻² = J m⁻³ = kg m⁻¹ s⁻²
James Prescott Joule (1818–1889), Figure 10.1, was an English brewer and amateur scientist,
born in Salford, Lancashire. Joule studied the nature of heat, and discovered its relationship to me-
chanical work. This led to the law of conservation of energy, which in turn led to the development
of the first Law of Thermodynamics. The SI-derived unit of energy, the joule, is named after him.
James Watt (1736–1819), Figure 10.2, was a Scottish inventor, mechanical engineer, and chemist
who improved on Thomas Newcomen’s 1712 Newcomen steam engine with his Watt steam engine
of 1776, which went on to become the driving force of the Industrial Revolution around the world.
Figure 10.1: A photograph of James Prescott Joule,
the Salford brewer and amateur scientist who defined
and quantified energy. Image from: https://en.wikipe-
dia.org/wiki/James_Prescott_Joule#/media/File:Joule_
James_sitting.jpg.
When we look up at the sublime spectacle that is the night sky, all we see is energy and
matter, nothing else, and as Einstein pointed out, energy and matter are proportional, and can be
equated with a constant equaling the speed of light (c) squared (E = m.c²). All the things we see
around us—form, texture, color, together with the sublimity and sense of the numinous that arise
in our minds when we contemplate the Universe—all arise from the distribution of energy. But
static quantities of energy and matter, stationary for all eternity, can do nothing. It is only when
energy and matter vary in quantities with distance that we can conceive of a Universe such as ours;
a Universe capable of supporting life. If the Universe were not dynamic, in fact expanding, it would
be a dark, dead gaseous nothing. In this, the Universe is like an economy. If we all kept our money
in banks and spent nothing, there would be no global economy. Economic activity that generates
jobs and opportunities, together with economic growth arises from the movement of money. That
is, by money moving from one location to another. If it all stayed put in the bank accounts of a
few billionaires, there would be no economic activity. Money only generates something useful to
the wider society when it moves. The same with energy in a system. Everything we see around us,
especially life, comes from the flow of energy.22
When energy is channeled into a particular direction, we have a force; a force is energy/distance, that is, kg.m².s⁻²/m = kg.m.s⁻², as in Table 10.1. Again, the same can be said of money.
When money flows into an economy, it can be said to be a powerful force. Consider a definition:
a force is any influence that causes an object to undergo a change in its motion (for example its velocity
or speed), the direction of its motion, or its shape (for example, compression). A force may thus cause
a moving object to change its velocity (the force of gravity defining the trajectory of comets mov-
ing around the Sun; or an injection of capital to change the direction of an economy) or even to
begin to move if it were stationary (the force of the rocket engine accelerating against the force
of gravity, or motion in a game of billiards, or a start-up grant) or it may compress the shape of
an object (collapse or bankruptcy). While mechanical stress can often remain embedded in solids,
gradually deforming them, mechanical stress in a fluid gives rise to changes in the fluid’s pressure
if the volume is constrained.
Work, energy, power, and force are all closely related, indeed, they are interconnected in
physics (see Table 10.1). But they are also closely related concepts in the wider society. Work is
described as the product of a force multiplied by the distance over which it acts. Only the compo-
nent of a force (which is actually a field of potential action extending in many directions) in the
direction of the movement of its point of application does work. The term work was first used in
1826 by the French mathematician Gaspard-Gustave Coriolis (1792–1843), who gave his name to
the force responsible for the swirling form of hurricanes, and of the vortex of water as it disappears
down a plughole.
If a constant force of magnitude F acts on a point that moves a distance l in the direction of
the force, then the work W done by this force is calculated from W=F.l. In the SI system of units,
force is measured in the newton and it would act over a distance in meters, so the work done would
equal newton meters, which (see Table 10.1) would be an energy, and so would be equal to joules.
In the SI, work and energy have the same units.
Power is the rate at which energy is transferred, used, or transformed into another form of
energy. For example, the rate at which an electric heater transforms electrical energy into heat and light, by passing a current through a heating element of high resistance, is measured in watts, in honor of James Watt (Figure 10.2). The greater the power
output or wattage, the more power, or equivalently the more electrical energy is used per unit time.
Thus, power is the time-averaged use of energy (kg.m².s⁻²/s = kg.m².s⁻³, as in Table 10.1).
22 As the British Nobel laureate in biochemistry, Frederick Gowland Hopkins (1861–1947) commented, “Life is a
dynamic equilibrium in a polyphasic system.”
Figure 10.2: James Watt painted by Carl
Frederik von Breda. Image from: https://
upload.wikimedia.org/wikipedia/com-
mons/1/15/Watt_James_von_Breda.jpg.
Energy transfer can be used to do work, so power is also the rate at which this work is per-
formed. The output power of an electric motor is the product of the torque the motor generates
and the angular velocity of its output shaft. The power expended to move a vehicle is the product
of the traction force of the wheels against the ground and the velocity of the vehicle. The SI unit of
power is the watt, which is equal to one joule per second. Older, more picturesque units of power
include ergs per second, horsepower, metric horsepower (in German, Pferdestärke), and foot-pounds
per minute. One horsepower is equivalent to 33,000 foot-pounds per minute, or the power required
to lift 550 pounds of weight by one foot in one second, and is equivalent to about 746 watts (and
has nothing to do with carts and horses).
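The relations just described (W = F.l, power as the rate of doing work, and the horsepower-to-watt conversion) are easy to verify numerically; the following is a minimal sketch with round, illustrative figures rather than measured values.

    LBF_TO_NEWTON = 4.448    # one pound-force in newtons
    FOOT_TO_METER = 0.3048   # one foot in meters

    # One horsepower: lifting 550 pounds of weight through one foot in one second.
    force_newton = 550 * LBF_TO_NEWTON   # the weight being lifted, in newtons
    distance_meter = 1 * FOOT_TO_METER   # the distance moved, in meters
    time_second = 1.0

    work_joule = force_newton * distance_meter   # W = F.l (joules)
    power_watt = work_joule / time_second        # power = work per unit time (watts)

    print(f"work  = {work_joule:.0f} J")
    print(f"power = {power_watt:.0f} W")  # about 746 W, i.e., one horsepower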
10.1 SOCIAL FORCES
By the late 19th century, classical physics was triumphant. We understood how the Universe func-
tioned because we understood how energy was transferred, conserved, and partitioned in our labo-
ratories, particularly, the Cavendish Laboratory in Cambridge, and we merely extrapolated to the
larger scale of the Universe. In 1864, Maxwell published A Dynamical Theory of the Electromagnetic
Field, where he first proposed that light was composed of waves moving in the same medium that
gives rise to electric and magnetic forces. Maxwell’s work in electromagnetism has been called the
second great unification in physics, after the first great unification achieved by Isaac Newton. Max-
well wrote, “The agreement of the results seems to show that light and [electro] magnetism are affections of
the same substance, and that light is an electromagnetic disturbance propagated through the field according
to electromagnetic laws.” Maxwell was proved right, and his quantitative connection between light
and electromagnetism is considered one of the great accomplishments of the 19th century in any
field of endeavor (see Section 12.2).
By considering the propagation of electromagnetic radiation as a field emanating from some
active source, Maxwell was able to advance his work on the nature of light. And by estimating what
the speed of light should be, and observing that his prediction agreed with the best available exper-
imental values, he was able to say that his initial assumption had been correct. In this way, he laid
the foundations of the modern scientific methodology of solving complex interrelated problems.
Even though Maxwell reconciled electricity, magnetism, and light, he did not live long enough to
finalize the details of the character of the electromagnetic field. At that time, Maxwell and many
others believed that the propagation of light required a medium which could support the waves, and
through which the waves could move or propagate (as was the case for sound waves in air—sound
waves cannot cross a vacuum). This proposed medium was called the luminiferous aether. Over time,
however, the existence of such a medium permeating the Universe, and yet apparently undetectable
by any mechanical means, proved more and more difficult to reconcile with experiment. Moreover,
it seemed to require an absolute frame of reference in which the equations were valid, with the
extraordinary result that the equations governing the phenomena of electromagnetism and optics
would be different for a moving observer and for an observer at rest. These difficulties inspired Al-
bert Einstein to formulate the theory of special relativity, and in the process Einstein demonstrated
that one could dispense with the requirement for a sustaining luminiferous aether.
The scientists of the late 19th century spoke about energy as the means of powering the
Universe, and of forces operating everywhere throughout the Universe; they appeared to possess
near-divine competence to explain and predict what was going on here on Earth as well as in the
distant reaches of the Cosmos. The period from the mid-19th century to World War I was the great
period of scientific triumphalism, and it is not surprising that this attitude of authority was copied
by many social scientists and those involved in ordering and maintaining society. After all, the
application of a quantitative way of looking at life was a wide-ranging consequence of the French
Revolution; we were all now subject (whether we knew it or not) to a new tyranny, the tyranny of
numbers and scientific concepts. Society was to be run efficiently, like a late 19th century laboratory.
This quantitative view of society and of the evolution of society is best represented in the work of
the German philosopher and economist, Karl Marx, especially his Das Kapital, the second and third volumes of which were published posthumously in 1885 and 1894.
For those interested in the history of science, the concept of power is now inseparable from
politics and economics. This confusion of terms began in the late 18th century when Matthew
Boulton, the financier who supported James Watt’s work to develop the steam engine that powered
the Industrial Revolution, wrote to Empress Catherine (the Great) of Russia, in an attempt to sell
the new steam engines he was developing with Watt. He wrote to Her Imperial Majesty, “I am
selling what the whole world wants: power.” He had given the game away. That was what it was all
about: a discussion of power in Nature is really inseparable from the idea of power in society.
We could say that whereas the fundamental principle of physics is energy, power is the fundamental
principle of the social sciences, and of politics and economics. Perhaps it is because power can be readily quantified, if only by demonstrating that one politician can command more votes than another, or that one bank, organization, or media mogul is richer than another, that they are deemed to be more powerful.
Corporate power may have a different sense from physical power, but the units are exactly
the same, and we have a confusion of terms. Such homonyms are known in other areas. The ideas
presented here are an attempt to point out that our society is really a microcosm of Nature, and
concepts of what makes the Universe work the way it does, can be applied (perhaps without too
much difficulty) to our society. Energy is the currency of Nature, and power and force drive nature;
in society, money may be currency, but it is also the medium, the power and force that brings about
change in society.
When we come to forces, we are in an even more difficult position about possible confusion
between politics, the social sciences, and the physical sciences. How many times have we heard a
partisan journalist say of a politician that “he/she was a force of Nature.” A force is something that
compels a molecule, or a planet to do something; that is, it is energy directed along a well-defined
path, so perhaps certain politicians are able to act as forces. They order people to do this or that,
they change society and use considerable resources in their endeavors. However, the big difference
is that in Nature forces act in the most efficient manner, there is little wastage of energy; politicians,
on the other hand, are less efficient and waste a great deal of the scarce currency of society.
The age of scientific triumphalism came crashing down with World War I, the advent
of quantum mechanics and relativity in physics, and with modernism in the arts (cubism in the
visual arts did more to advance the ideas of Einstein and Heisenberg than any number of textbooks
written for a general readership). Sadly, the quantitative triumphalism of the social scientists and
of politicians took a bit longer to dissipate, but another world war and several economic crashes, particularly that of 2008, have revealed the final bankruptcy of all the standard models of economic growth; yet politicians still believe that we can all continue to have unlimited economic growth in a closed system of finite resources; as if it were possible for politicians to repeal the Second Law
of Thermodynamics. The force of politics has gone, and all we have left are politicians seen as a
disruptive or, at best, an irrelevant force in social progress.
10.2 INTERNATIONAL REGULATION OF TERMS AND NAMES: DIALECTS ARE INEVITABLE
I have spoken about the scientists’ desire to create a simple language to describe the world that
would be universally understood; to permit a return to a Golden Age. But are such dreams
practicable in the wider society? The followers of such projects always try, with greater or lesser
cohesive power, to realize an international forum. But which authority has the competence to ad-
judicate between these contending parties? Is it the richest or the most powerful nation on earth
that decides for the rest of humanity? The beginning of the last century was the most optimistic
epoch for the creation of Utopian ideas of international committees deciding on matters affecting
and effecting humanity. This was an epoch when it still seemed realistic to believe that an inter-
national body would be capable of coming to a fair and ecumenical conclusion, and imposing it
on every nation by reason. But two world wars and numerous economic depressions put an end
to all that Utopian nonsense.
Anyway, if a committee did make a useful contribution; for example, inventing a good,
new candidate for a universal language; as soon as it was made public, the language would spread
through various countries. There would be clubs to propagate this new language, and these clubs
would begin petitioning national governments to access national education systems. However, what
invariably happens is that the original inventor discovers that his/her language has been subjected to supposedly "heretical" modification(s), which might further simplify, restructure, and rearrange it—making it more useful as an international language. But the original inventor, for whatever reason, will likely not be happy about this. The product of all their labor will have been modified
by others. Their creation is not the final version. That honor will go to someone else. Such will in-
evitably be the fate of artificial languages: the "word" remains pure only if it does not spread; if it
spreads, it becomes the property of the community of its proselytes, and (since the best is the enemy
of the good) the result is “Babelization.” After a few short years of rapid inflationary growth, the
movement collapses, and continues only in an ever-shrinking state.
One may make the observation that a universal language is impossible for a simple reason:
Even if everybody on earth agreed to speak the same language from today, they would soon discover that, under the influence of their own use, the single language had begun to change, to modify itself
in a multitude of different ways in each country, until it produced in each a different dialect, which
gradually grew away from all the others. It is for this reason that the Portuguese spoken today in
Brazil differs from the Portuguese spoken in Portugal and, more famously, the ever-widening sep-
aration of English spoken in the UK and in the U.S.
10.3 SCIENCE AS A NEW TOWER OF BABEL
By this point, I am sure that the reader will have appreciated that creating a universal language of
science, or even a new system of weights and measures is no easy matter. Even ignoring political
conflicts, it is rarely possible to achieve consensus between relatively small groups of scientists as to
which units they should be using. And as for devising a scientifically coherent system of units that
may be adopted and used by the wider society, there is no simple answer.
Creating a system of weights and measures, or a universal language capable of quantitative
extension, is difficult due to conflicting requirements.
• To facilitate everyday use, the units or nomenclature should be of a size or facility that
is appropriate for use in specialist areas of science and technology, but they must also
be appropriate for everyday use by the wider community.
• To facilitate international use, the units or nomenclature should be defined in a man-
ner that is both precise and capable of being reproduced anywhere in the world, and
not be subject to reference to a prototype or artifact kept by a particular nation.
• The units or nomenclature should be coherent; that is, all the subsidiary or derived
units, which are needed to fully describe Nature, can be expressed as combinations of
the basic units without the introduction of any numerical constants.
Fortunately, compromise and even pragmatism are not entirely unknown in science, and
some progress has been possible. The centimeter-gram-second (CGS) system of units was widely
used until the electricity industry decided that the electromagnetic unit, or emu derived from
Ampère’s Force Law, gave quantities which were too small for practical use by electrical engineers.
They rescaled the electrical units to make them more “user friendly” for the electricity generation
and supply companies, who needed units which referred directly to the large electric currents and
the huge voltages found in industry, rather than the much smaller values used by research scientists.
This resizing of units made the electrical engineers happy, but coherence with mechanical units
was lost because of the numerical factors which were now needed to connect the units for large
currents and voltages to related quantities. In 1948 it was decided that the centimeter, the gram and
the erg (the CGS unit of energy) should be replaced by the larger meter, kilogram, and joule (one centimeter equals 10⁻² meters, one gram equals 10⁻³ kilogram, and one erg equals 10⁻⁷ joules). This new system of units was called the meter-kilogram-second-ampere (MKSA) system of units, and it restored coherence to the whole system of units and quantities; that is, the numerical conversion factors introduced by electrical engineers disappeared. The MKSA system of units is also known as the SI system of units (see Chapter 9).
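The conversion factors just quoted are, once again, single multiplications; a small, purely illustrative sketch (the variable names are mine, not standard nomenclature):

    ERG_TO_JOULE = 1e-7        # one erg equals 10^-7 joules
    CM_TO_METER = 1e-2         # one centimeter equals 10^-2 meters
    GRAM_TO_KILOGRAM = 1e-3    # one gram equals 10^-3 kilograms
    DYNE_TO_NEWTON = GRAM_TO_KILOGRAM * CM_TO_METER  # 10^-5 newtons, from the base factors

    energy_erg = 2.5e7  # an arbitrary, illustrative CGS energy
    print(f"{energy_erg:.2e} erg = {energy_erg * ERG_TO_JOULE:.2f} J")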
So, why use these different systems of electrical units? Why was it deemed necessary to ap-
pease the electricity industry? Well, it is not simply a reluctance to change. It is all about what you
hold to be important. Pragmatists favor the SI’s utilitarian approach to calculation, which actually
keeps a lot of the underlying complex physics out of sight (which is useful for teachers when trying
to instruct bored students, but is inappropriate for researchers); on the other hand, philosophically
minded physicists want only the base units to better reflect the underlying science.
This problem of fundamentally different systems of units and nomenclature being used con-
currently by different communities of both scientists and non-scientists does, of course, extend far
beyond the world of electrical engineering. A similar “confusion of tongues” applies in something as
technically straightforward as pressure measurement. (Pressure is defined as the force per unit area
applied in a direction perpendicular to the surface of a vessel. It can be thought of as arising from
the molecules of gas striking the inner surface of the vessel containing that gas.)
The SI unit for pressure is the pascal (Pa), named in honor of the 17th-century French mathematician and Catholic philosopher Blaise Pascal (1623–1662), which in the SI is equal to one newton per square meter (that is, N/m², or kg.m⁻¹.s⁻²; pressure is a force per unit area). The name "pascal" (symbol Pa; see Table 10.1) for the unit was adopted in 1971; before that date, pressure in the SI was expressed simply as so many N/m². The problems associated with pressure measurements begin with the SI unit of area; one square meter is a very large area, and so the values of even the modest pressures encountered in everyday life are very large numbers. For example, the pressure in your car tyre would be about 340,000 Pa; and successful systems of units usually express everyday quantities in small numbers, so as to facilitate familiarity with the size, and in recording and quoting the values (particularly with non-physicists in places such as garages and tyre shops).
On the other hand, non-SI units of pressure are legion. There are pounds per square inch (psi), or more precisely (given the distinction between mass and weight) pounds-force per square inch, and bars (roughly one atmosphere), which are commonly used in the English-speaking world. The CGS unit of pressure is the barye (ba), equal to 1 dyn/cm² or 0.1 Pa (a dyne, abbreviated as dyn, being the unit of force in the CGS system of units; one dyne is equal to 10⁻⁵ newton). Then there is the universal measure of pressure used by the medical profession; your blood pressure of, for example, "130 over 82" is actually two measurements of pressure with each result given in millimeters of Mercury (mmHg). Here the pressure is defined as a force which would support a column of Mercury of uniform cross-section to that particular height.23 The standard atmosphere (atm) is a well-known and well-used constant. It is approximately equal to the air pressure at sea level and is equal to 101,325 Pa or 101.325 kPa or 1013.25 hPa, in the SI, or 14.696 psi or 760 mm of Mercury; so 1 mm of Mercury is equal to 133.3 Pa.
23 When millimeters of Mercury, or even inches of water, are quoted today as pressures, these units are not based on
an actual physical column of Mercury or water; rather, they are measured by small electronic sensors or transduc-
ers whose readings could be calibrated or expressed in any number of units (SI or non-SI) by an inbuilt computer
chip, calibrated to behave as if it were a column of Mercury or of water.
Another point to bear in mind about this profusion of units of something as straightforward as pressure is the difference between relative pressures (relative to atmospheric pressure) and absolute pressures (relative to a vacuum); for example, the pressure in your car tyres is a relative pressure, or an overpressure, and is often written as psig (pounds per square inch, gauge), whose zero is set at atmospheric pressure, which is one atmosphere or 14.696 psi above a vacuum. An absolute tyre pressure would be written in psia (pounds per square inch, absolute), and would be higher than a measurement of the same pressure given in psig by the amount 14.696 psi.
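As noted above, translating between these dialects is a matter of a trivial line of code; the sketch below (illustrative values only, and not drawn from any particular instrument) converts a gauge tyre pressure into several of the units discussed in this section.

    PA_PER_ATM = 101_325.0    # one standard atmosphere in pascals
    PA_PER_PSI = 6_894.76     # one pound-force per square inch in pascals
    PA_PER_MMHG = 133.322     # one millimeter of Mercury in pascals

    def psig_to_pa_absolute(gauge_psi: float) -> float:
        """Gauge pressure (relative to the atmosphere) to absolute pressure in Pa."""
        return gauge_psi * PA_PER_PSI + PA_PER_ATM

    tyre_gauge_psi = 35.0  # a typical car-tyre overpressure, in psig
    absolute_pa = psig_to_pa_absolute(tyre_gauge_psi)

    print(f"{tyre_gauge_psi} psig = {absolute_pa:,.0f} Pa absolute")
    print(f"         = {absolute_pa / PA_PER_ATM:.2f} atm")
    print(f"         = {absolute_pa / PA_PER_MMHG:,.0f} mmHg")
    print(f"         = {absolute_pa / PA_PER_PSI:.1f} psia")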
These are all expressions of the same piece of information, but expressed in the various di-
alects of the single language of science. This plethora of units for something as basic as pressure
measurement is mirrored in measurements of many other common phenomena. This variety is not
something that exists to confuse students and non-scientists. Such varieties of units exist for sound
technical reasons: convenience in specialist branches of science, or convenience or facility of use in
certain ranges of pressure, or because one profession refuses to change to another system of units,
or because there is such an investment in technology that any change would be too expensive. The
medical profession will, for example, not move away from using mmHg for blood pressure mea-
surement,24 which is convenient for them and a sufficiently precise measurement for their patients,
but this is not the case for the vast majority of physicists who gave up using mmHg as a unit for
pressure early in the last century.
However, the question we have to ask ourselves is whether there is anything to be gained
by attempting to force a large body of professionals to give up a system of units with which they
have become familiar over many generations. There is certainly the possibility of serious adverse
consequences arising from such a move. It would be far better to encourage the ability to use and
convert between many of these systems of units—to celebrate the diversity of the dialects of the
single language of science. A scientist or a technician who can convert between these units will be
someone who will truly understand the science underlying the phenomenon, and will be less likely
to make foolish errors.
Any common language for science would inevitably and rapidly grow distant from the lan-
guage of literature, but we know that the language of science and the language of letters influence
each other. But, in addition, an international language of purely scientific communication would
soon become an instrument of secrecy, from which the humble speakers of their own native dialects,
or regional languages would be excluded. And as to possible literary uses, if the authors were obliged
to write in a common tongue, they would be exposed to international rivalries, fearing invidious
comparisons with the works of foreign writers. Thus, it seems that circumspection was a disadvan-
tage for science and an advantage for literature, as it was for the astute and cultivated traveler, more
learned than his native and naïve interlocutors. In the background to the formation of a universal
24 In France, doctors define the blood pressure of their patients in terms of centimeters of Mercury, rather than
millimeters of Mercury. That is only a factor ten to consider.
language there is an 18th-century prejudice, which is still with us: that people simply do not wish
to learn other languages, be they universal or merely foreign. There exists a sort of cultural deafness
when faced with polyglottism, a deafness that continued on throughout the 19th century to leave
visible traces in our own time.
10.4 FURTHER READING
Defining and Measuring Nature: The Make of all Things (2014); Jeffrey H. Williams; San Rafael, CA,
Morgan & Claypool.
Order from Force: A Natural History of the Vacuum (2015); Jeffrey H. Williams; San Rafael, CA,
Morgan & Claypool.
CHAPTER 11
The Ghost of the Divine Language:
The Theory of Everything
Reality is merely an illusion, albeit a very persistent one.
Albert Einstein (1879–1955)
The Theory of Everything (TOE) is the final theory, the ultimate theory, or master theory. It is a
hypothetical single, all-encompassing, coherent theoretical framework of physics that will fully ex-
plain and link together all aspects of the Universe. Science fiction writers have long speculated about
such an over-arching model of how the natural world functions, but let us consider what physicists
mean by this fantastic, this sublime idea.
Finding a TOE is one of the major unsolved problems in physics. Over the past two cen-
turies, two theoretical frameworks have been developed that, as a whole, most closely resemble a
TOE. These two theories, upon which all modern physics rests are general relativity and quantum
field theory. General relativity is a theoretical framework that only focuses on one of the four fun-
damental forces of Nature, gravity, for understanding the Universe on a large scale, and for objects
of high mass: planets (the mass of the Earth is about 6 × 10²⁴ kg), stars, galaxies (the mass of a galaxy is estimated to be about 10⁴² kg), clusters of galaxies, etc. On the other hand, quantum field theory is a theoretical framework that focuses on the other three of the four fundamental forces of Nature (one of which is displayed in Figure 11.1), excluding gravity, and it holds for understanding the Universe at the small scale, and for objects of low mass: sub-atomic particles (the mass of a proton is 1.672 621 923 69(51) × 10⁻²⁷ kg), atoms, molecules, etc. Quantum field theory successfully implemented the Standard Model and unified the interactions (the so-called Grand Unified Theory) between the three non-gravitational forces: strong nuclear force, weak nuclear force, and electromagnetic force.
By merely suggesting that there is, somewhere… at some energy, a TOE, and that when we
discover it all experimental science will become redundant (that is the implication), we are again in
the world of the mediaeval and pre-mediaeval savants who searched for the Perfect Language (see
Chapter 2). Both the poet Dante and his exact contemporaries, the Kabbalists of Spain searched
for the language used by God to bring the Universe into existence from nothing (Ex nihilo). They
believed that this language was merely lost, or perhaps, given that it was a language of power, had
been hidden from man. But once man had re-discovered this language, all the secrets of Nature
would be revealed to him. He could re-order the world, and by implication humanity, bring about
a new Golden Age of harmony, peace, and prosperity.
Figure 11.1: Lightning or electromagnetism in action. Lightning is probably the most spectacular,
frightening, and immediate demonstration of one of the four forces of Nature (electromagnetism).
A great many of the most eminent physicists have confirmed with precision experiments
virtually every prediction made by the two theories of general relativity and quantum field theory—
when used in their appropriate domains of applicability. In accordance with their findings, scientists
have also learned that general relativity and quantum field theory, as they are currently formulated,
are mutually incompatible; they cannot both be right. Since the domains of applicability of general
relativity and quantum field theory are so different, most situations require that only one of the
two theories be used. As it turns out, this incompatibility between general relativity and quantum
field theory is apparently only an issue in regions of extremely small scale and high mass, such as
those that exist within a Black Hole, or in the early stages of the Universe. To resolve this conflict,
a theoretical framework revealing a deeper underlying reality, unifying gravity with the other three
fundamental interactions, must be discovered to integrate, harmoniously, the physics of the very
large and of the very small into a seamless whole: a single theory that, in principle, is capable of
describing all phenomena, at all length and mass scales.
Today, it is string theory that has evolved into a candidate for this ultimate theory of the
Universe, but not without limitations and controversy. String theory posits that at the beginning of
the Universe (up to 10⁻⁴³ second after the Big Bang, when the Universe was very small, and so at a very high temperature), the four fundamental forces were a single fundamental force. According to string theory, every particle in the Universe, at its most microscopic level (the Planck length scale),
consists of varying combinations of vibrating strings with distinct patterns of vibration. String
theory further claims that it is through these specific oscillations that a particle of a unique mass
and charge is defined. Thus the electron (of mass, me = 9.109 383 56(11) ×10−31
kg and of charge,
1 e = 1.602 176 620 8(98) × 10−19 C = 4.803 204 51(10)×10−10 esu) is a string vibrating one way,
while the up-quark, of charge +(⅔)e and with its own characteristic mass, is a string vibrating in a different manner.
11.1 SOME BACKGROUND
In Ancient Greece, philosophers such as Democritus (c.460–c.370 BCE) speculated that the ap-
parent diversity of observed phenomena was due to a single type of interaction, namely the ability
of the most fundamental particles, termed atoms, to move and collide with each other in the void
that existed between the indivisible, eternal atoms. Archimedes (c.287–c.212 BCE) was possibly
the first natural philosopher known to have described Nature with axioms (or principles), and then
to have deduced new results from observations of these principles.
In the late 17th century, Isaac Newton’s description of the force of gravity, which he knew
operated over vast, astronomical distances implied that not all forces in Nature result from things
coming into contact, or colliding. In his Mathematical Principles of Natural Philosophy of 1687, New-
ton gave us an example of the unification of physical principles; in this case, unifying the mechanics
of Galileo Galilei on terrestrial gravity, the laws of planetary motion of Johannes Kepler (1571–1630), and the phenomenon of tides by explaining these apparent actions at a distance under a sin-
gle law: the Law of Universal Gravitation.25 The mid 19th century saw the unification of electrical
and magnetic phenomena to create electromagnetism. In his experiments of 1849–50, Michael
Faraday was the first to search for a unification of gravity with electricity and magnetism, but he
was unsuccessful. In 1900, the German mathematician David Hilbert (1862–1943) published a list
of mathematical problems that became famous, and stimulated a great deal of research in the early
years of the last century. In Hilbert’s sixth problem, he challenged researchers to find an axiomatic
basis to all of physics. He asked the physics community to come up with a Theory of Everything.
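Newton's law (quoted in full in footnote 25 below) is also simple to evaluate numerically; a minimal sketch with rounded, illustrative values for the Earth recovers the familiar result that a one-kilogram mass at the surface is attracted with a force of roughly 9.8 newtons.

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.97e24  # approximate mass of the Earth, kg
    R_EARTH = 6.371e6  # approximate radius of the Earth, m

    def gravitational_force(m1: float, m2: float, r: float) -> float:
        """F = G.m1.m2/r^2: the attraction between two masses a distance r apart."""
        return G * m1 * m2 / r**2

    # Force on a one-kilogram mass at the Earth's surface:
    force_newton = gravitational_force(M_EARTH, 1.0, R_EARTH)
    print(f"force on 1 kg at the surface = {force_newton:.2f} N")  # roughly 9.8 N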
In the late 1920s, the recently invented quantum mechanics showed that the bonds between
atoms, which are the basis of all chemistry and physiology were examples of electromagnetic forces
(see Figure 11.1, where the lightning results from the energy released by atoms and molecules
excited and ionized by huge electric and magnetic fields). This discovery led one of the inventors
of quantum mechanics, Paul Dirac, to boast in the Preface of the first edition of his textbook on
25 Newton’s Law of Universal Gravitation states that every particle attracts every other particle with a force that
is directly proportional to the product of their masses and inversely proportional to the square of the distance
between their centers. This is a general physical law derived from empirical observations by what Isaac Newton
called inductive reasoning. The equation for this law is: F = G(m₁.m₂/r²), where F is the gravitational force acting between two objects, m₁ and m₂ are the masses of those objects, r is the distance between the centers of those masses, and G is the gravitational constant (6.674 × 10⁻¹¹ m³ kg⁻¹ s⁻²). We see immediately that this constant is known with considerably less precision than are the charge and mass of the electron given above.
quantum mechanics (The Principles of Quantum Mechanics; Cambridge University Press, 1930) that
“the underlying physical laws necessary for the mathematical theory of a large part of physics and the whole
of chemistry are thus completely known.” Although Dirac predicted the end of experimental science,
things turned out rather differently. During the great advances made in particle physics in the last
century, that is, the elucidation of the menagerie of sub-atomic particles we know today, the search
for a unifying theory was interrupted by the discovery of the strong and weak nuclear forces, which
both differ from gravity and from electromagnetism.
Gravity and electromagnetism could always coexist as entries in a list of classical forces, but
for many years it seemed that gravity could not even be incorporated into the quantum framework,
let alone be unified with the other fundamental forces; the strong and the weak nuclear forces are
purely quantum phenomena. For this reason, work on unification in the last century focused on
understanding the three quantum forces: electromagnetism and the weak and strong nuclear forces.
The first two were combined in 1967–68 by Sheldon Glashow (U.S., born 1932), Steven Weinberg
(U.S., born 1933), and Abdus Salam (Pakistan, 1926–1996) into the electroweak force; see Figure
11.2. Electroweak unification is a broken symmetry; the electromagnetic and weak forces appear
distinct at low-energies because the particles carrying the weak force, the W and Z bosons, have
non-zero masses of 80.4 GeV/c2
and 91.2 GeV/c2, respectively, whereas the photon, which carries
the electromagnetic force, is without mass. At higher energies, the W and Z bosons can be created,
and the unified nature of the force becomes apparent.
A TOE would unify all the fundamental interactions of Nature: gravitation, strong nuclear
interaction, weak nuclear interaction, and electromagnetism. Because the weak interaction can
transform elementary particles from one kind into another, the TOE should also yield a deeper
understanding of the various kinds of sub-atomic particles. The usual assumed path of these various
theories is given in Figure 11.2, where each unification step (indicated in Figure 11.2 with a short
vertical arrow (↑) from two horizontal pre-existing theories) leads to a higher level of sophistication
and complexity.
Figure 11.2: A generalized and theoretical route to the creation/discovery (whether science develops a
theory to explain Nature, or discovers a theory that explains Nature is a moot point, but one beyond
the scope of the present volume) of the Theory of Everything, or the Perfect Language needed to
characterize and explain every detail and phenomenon of Nature. Originally, electricity and magnetism
were considered to be two separate forces. This view was overthrown by James Clerk Maxwell in 1873.
Albert Einstein published his general theory of relativity in 1915. In 1961, Sheldon Glashow combined
the electromagnetic and weak interactions. In 1967, Steven Weinberg and Abdus Salam incorporated
the Higgs mechanism into Glashow’s electroweak interaction, giving it its modern form. The Standard
Model was developed in stages throughout the latter-half of the 20th century, through the work of many
scientists with the current formulation being finalized in the mid-1970s with experimental confirmation
of the existence of quarks. A Grand Unified Theory (GUT) is a model in particle physics in which, at
high energy the three gauge interactions of the Standard Model that define the electromagnetic, weak,
and strong interactions, or forces, are merged into a single force. Although this unified force has not
been directly observed, the many GUT models theorize, or foretell its existence. If unification of these
three interactions is possible, it raises the possibility that there was a grand unification epoch in the very
early universe in which these three fundamental interactions were not yet distinct. The novel particles
predicted by GUT models are expected to have extremely high masses of around the GUT energy scale of 10¹⁶ GeV; that is, just not far below the Planck scale of 10¹⁹ GeV, and so are well beyond the reach of any foreseen particle collider experiments. The total energy range covered by the physics in this figure is a staggering 30 orders of magnitude (10⁻² eV to 10²⁸ eV).
The essential point to make about Figure 11.2, other than that it is a road map to an un-
known destination (always the most exciting sort of journey); a destination that may not even
exist, is that it represents a sequence of events at vastly different ranges of energy. From energies,
of order, kBT for the unification of electricity and magnetism (here T is the temperature and kB
the Boltzmann constant that relates temperature to energy; electromagnetic phenomena occur at
ambient conditions) to vast energies way beyond the capabilities of our present high-energy particle
colliders. The electroweak unification occurs at around 10² GeV, Grand Unification is predicted to occur at 10¹⁶ GeV, and unification of the Grand Unified Theory force with gravity is expected at the Planck energy, roughly 10¹⁹ GeV (that is, about 10²⁸ eV).26
Several Grand Unified Theories (GUTs) have been proposed to unify electromagnetism and the weak and strong nuclear forces. Grand unification would imply the existence of an electronuclear force; it is expected to set in at energies, of order, 10¹⁶ GeV, far greater than could be reached
by any present particle accelerator. The Large Hadron Collider is the world’s largest and most
powerful particle collider, and the largest machine in the world. It was built by the European Orga-
nization for Nuclear Research between 1998 and 2008 in collaboration with over 10,000 scientists,
and hundreds of universities and laboratories from more than 100 countries. It lies in a tunnel 27
km in circumference, 175 m beneath the France–Switzerland border near Geneva. First collisions
were achieved in 2010 at an energy of 3.5 teraelectronvolts (TeV; 1 TeV = 10¹² eV or 10³ GeV) per beam, about four times the previous world record. After upgrades it reached 6.5 TeV per beam (13 TeV total collision energy, the present world record). At the end of 2018, it entered a two-year shutdown period for further upgrades; https://home.cern/.
26 An electron volt (1 eV) is the amount of kinetic energy gained or lost by a single electron accelerating from rest
through an electric potential difference of one volt in vacuum. Hence, it has a value of one volt, 1 Joule/Coulomb,
multiplied by the electron’s elementary charge e = 1.602 176 620 8(98) × 10⁻¹⁹ Coulomb. Therefore, one electron
volt is equal to 1.602 176 620 8(98) × 10⁻¹⁹ Joule. By mass–energy equivalence, the electron volt is also a unit of
mass (from Einstein’s celebrated equation). It is common in particle physics, where units of mass and energy are
often interchanged, to express mass in units of eV/c², where c is the speed of light in vacuum. The mass equivalent
of 1 eV/c² is 1.782 × 10⁻³⁶ kg. 1 eV corresponds to a temperature of about 11,604 K or 11,331°C. This system of
units is useful for theoretical physicists and the community of particle physicists, but is wholly outside of the SI.
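The conversions quoted in this footnote are easily checked. The short Python sketch below is an illustrative addition (the constants used are the present exact SI values, which differ in the last digits from the 2014-era value quoted above); it reproduces the joule, kilogram, and kelvin equivalents of one electron volt.

# Physical constants (present exact SI values; used here for illustration).
e   = 1.602176634e-19    # elementary charge, in coulombs
c   = 2.99792458e8       # speed of light in vacuum, in m/s
k_B = 1.380649e-23       # Boltzmann constant, in J/K

E_joule  = e * 1.0            # 1 eV in joules: charge multiplied by one volt
m_kg     = E_joule / c**2     # mass equivalent, via E = mc^2
T_kelvin = E_joule / k_B      # temperature equivalent, via E = k_B * T

print(f"1 eV       = {E_joule:.6e} J")    # ~1.602e-19 J
print(f"1 eV/c^2   = {m_kg:.3e} kg")      # ~1.78e-36 kg
print(f"1 eV / k_B = {T_kelvin:.0f} K")   # ~11,600 K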
The final step in Figure 11.2 requires resolving the separation between quantum mechanics
and gravitation, often equated with general relativity. Numerous researchers have concentrated
their efforts on this specific step; nevertheless, no accepted theory of quantum gravity, and thus no
accepted TOE have yet been formulated. In addition to explaining the forces mentioned in Figure
11.2, a TOE should also explain the status of at least two candidate forces suggested by modern
cosmology: an inflationary force for the Universe, and dark energy or dark matter.
11.2 STRING THEORY
We believe there are four fundamental forces that govern the Universe: gravity, electromagnetism,
the weak force, responsible for beta-decay, and the strong force which binds quarks into protons
and neutrons. The physics community believe they understand all of these forces except for gravity.
The word “understand” is used loosely, in that we may define the Lagrangian,27 which describes how
these forces act upon matter and, at least in principle we know how to use these Lagrangians to
make well-defined predictions with which to test theories. But with gravity, our understanding is
incomplete. Certainly, we understand gravity classically; that is, in the non-quantum limit in which (h/2π)
is taken to zero, where h is Planck’s constant. And provided we do not ask questions about how gravity behaves
at very short distances (the Planck scale, of order, 10⁻³⁵ m), we may calculate the effects of gravity.
It is sometimes said that physicists do not know how to combine quantum mechanics and
gravity. In fact, physicists do understand how to include quantum mechanical effects into gravity, as
long as we do not ask questions about what is going on at distances, less than the Planck length. For
the other three fundamental forces we know how to include quantum effects, at all length scales. So,
while we have a quantum mechanical understanding of gravity, we don’t have a complete theory of
quantum gravity. And that is the problem, as the most interesting questions we wish to ask about
gravity are about what happens at very small length scales; for example, questions such as “What was the
Big Bang?” and “What happens at the singularity of a Black Hole?” So what is it that goes wrong
with gravity at scales shorter than the Planck length? The mathematical answer is that the force of
gravity is not renormalizable; that is, at very short length scales, the gravitational energy becomes
very large (we are trying to divide something by zero), and we can only avoid this mathematical
divergence (that is, a number tending toward ∞) by constructing unnatural models.
27 Lagrangian mechanics is a reformulation of Newtonian mechanics, introduced by the Italian-French mathemati-
cian and astronomer Joseph-Louis Lagrange (1736–1813) in 1788. In Lagrangian mechanics, the trajectory of a
system of particles is derived by solving the Lagrange equations in one of two forms; one separates terms involv-
ing kinetic and potential energy. In each case, a mathematical function called the Lagrangian is a function of the
generalized coordinates, their time derivatives, and time; containing the information about the dynamics of the
system. No new physics is necessarily introduced in applying Lagrangian mechanics, compared to Newtonian
mechanics. It is, however, more sophisticated mathematically.
The force of gravity is a property of the scale over which it is being investigated. Consider an
isolated electron in classical electromagnetism. The total energy (ET) of this electron (of rest mass,
m) is given by the sum of a kinetic part and a potential part:
ET ≈ m + ∫ d³x │E│² ≈ m + 4π ∫ r²dr (e²/r⁴),
where e is the charge of the electron and r is its radius. This integral defines the potential energy of
the electron, and it diverges because you are dividing by a number going to zero at r = 0 (a point
singularity). We may avoid this divergence by cutting the function off at some scale, Λ; so the total
energy of the electron is now given by
ET ∼ m + C(e²/Λ).
Clearly, the second term still dominates in the limit which interests us; that is, small r. Naively,
we speak of the rest mass of the electron, m, but we cannot measure m; we measure ET. That is, the
inertial mass should include the electromagnetic self-energy. Consequently, the physical mass is
given by the sum of the bare mass m and the mass derived from the electron’s field energy (via
E = mc²). This means that the bare mass is infinite in the limit of interest. Note that we must make
a measurement to fix the bare mass. We cannot predict the electron mass. It also means that the
bare mass must cancel the field energy. That is, we have two huge numbers which cancel each other
extremely precisely. To understand this better, note that it is natural to assume, using dimensional
analysis (see Chapter 9), that the cut-off should be the Planck length, which in turn means that
the self-field energy is, of order, the Planck mass. So the bare mass must have a value which cancels
the field energy. This cancellation is sometimes referred to as a hierarchy problem. This process of
absorbing divergences in masses, or couplings (an analogous argument can be made for the charge
e) is called renormalization.
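To make the divergence concrete, the following small numerical sketch (an added illustration, not part of the original argument; the units and the outer radius R are arbitrary) evaluates the radial part of the self-energy integral, ∫ dr/r² taken from a lower cut-off Λ out to R, for ever smaller values of Λ. The result grows as 1/Λ, which is precisely the behaviour that has to be absorbed by renormalization.

# Radial part of the classical self-energy integral:
#   integral from L to R of dr/r^2 = 1/L - 1/R.
# As the lower cut-off L (Lambda) shrinks toward zero, the "field energy" grows without bound.
R = 1.0  # outer radius, arbitrary units; only the lower cut-off matters for the divergence

for L in (1e-1, 1e-3, 1e-6, 1e-9, 1e-12):
    integral = 1.0 / L - 1.0 / R
    print(f"cut-off Lambda = {L:8.0e}  ->  integral = {integral:12.3e}   (~ 1/Lambda)")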
String theory, however, replaces point particles with minute strings, which can be either
open or closed (depending on the particular type of particle that is being replaced by the string),
whose length, or string length (denoted ls), is approximately 10⁻³⁵ m. Moving from a point particle
to a string avoids the problems of renormalization. In string theory, one thus replaces Feynman
diagrams by surfaces, and world-lines, or trajectories, become world-sheets (see Figure 11.3). One
increases the dimensionality of the problem.
All such theories use supersymmetry, which is a symmetry that relates elementary particles
of one type of spin to another particle that differs by a half-unit of spin. These two partners are
called superpartners. Thus, for every boson there exists its superpartner fermion and vice versa. For
string theories to be physically consistent they require ten dimensions for space-time. However, our
everyday world is only four-dimensional (three spatial dimensions and time) and so one is forced to
assume that the extra six dimensions are extremely small, but must still be taken into consideration.
To generalize about going from moving point particles (moving along a trajectory or world-line)
to strings we have a world-sheet instead of a world-line, which has the form of a curved sheet or a
curved cylinder depending on whether the string is open or closed (see Figure 11.3).
Figure 11.3: (upper) In string theory, Feynman diagrams (the one displayed here describes the scatter-
ing, or interaction, of an electron, e⁻, and a positron, e⁺) involving point entities are replaced by surfaces.
(lower) World-line, world-sheet, and world-volume (denoting the trajectories of particles, strings, or
branes), as they are derived for particles, strings, and branes.
What happens with gravity with this type of renormalization procedure? How does string
theory solve the renormalization problem in gravity? Because the electron (now a string, not a
point; one-dimensional rather than zero-dimensional) has finite extent lp, the divergent integral is
cut-off at r = lp. We now have no need to introduce new parameters to absorb divergences, as they
do not arise.
String theory has only one unknown parameter, the string length, of order, lp, but this can
be fixed by the only measurement string theory requires before it can be used to make predictions.
However, the hierarchy problem remains. String theory predicts that the electron mass is huge, of
order, the inverse length of the string, but we still need an additional something to give a reasonable
value for the mass. It turns out that string theory can do more than just cut off the integral, it can
also add an additional integral which cancels off a large part of the first diverging integral, leaving
a more realistic result for the electron mass. This cancellation is a consequence of supersymmetry
which, as it turns out is necessary in some form for string theory to be mathematically consistent.
So by working with objects of finite extent as opposed to point particles, we accomplish two things.
All the integrals are finite and, in principle, if string theory were completely understood we would
only need one measurement to make predictions for gravitational interactions at arbitrary length-
scales. In addition, we also gain predictive power (at least, in principle). Indeed in the Standard
Model of particle physics, which correctly describes all interactions to energies, of order, 200 GeV,
there are 23 free parameters which need to be fixed by experiment; for example, the electron mass.
String theory, however, has only one such parameter in its Lagrangian, the string length.
However, one must never forget that physics is a predictive science; it is not an end in itself.
The less descriptive and the more predictive a theory becomes the better. In that sense, string the-
ory has become a latter-day Holy Grail. We have a Lagrangian with one parameter, which would
be fixed by experiment. You would then have a TOE. You could, in principle, explain all possible
phenomena. Particle physics tells us that there are a huge number of elementary particles, which
can be split into two categories: matter and force-carriers. The set of particles that define matter is
composed of six quarks: u, d, s, c, b, t (up, down, strange, charm, bottom, top), while the force carriers
are the photon, the electroweak bosons, Z, W±, the graviton g, and eight gluons responsible for the
strong force, and the recently discovered Higgs boson. So, in particle physics, we have a Lagrangian,
which sums over all particle types and distinguishes between matter and force-carriers in some way.
If we had a TOE, all the particles and forces should be unified in some way so that we could write
down a Lagrangian for a “master entity,” and the particles mentioned would then just be different
manifestations of this underlying entity. In string theory, the underlying entity is the string, and
different excitations of the string represent different particles. Furthermore, unification of the four
fundamental forces is also built into the theory.
In the world we perceive, there are three familiar dimensions of space: height, width, and
depth. Einstein’s general theory of relativity treats time as a dimension on a par with the three
spatial dimensions. In general relativity, space and time are not modeled as separate entities but are
instead unified to a four-dimensional space-time; three spatial dimensions and one time dimension.
In this framework, the phenomenon of gravity is viewed as a consequence of the geometry of space-
time. In spite of the fact that the Universe is well described by four-dimensional space-time, there
are several reasons why physicists consider theories in other dimensions. In some cases, by modeling
space-time in a different number of dimensions, a theory becomes more tractable mathematically,
and one can perform calculations to gain insights more readily. There are also situations where
theories in two or three space-time dimensions are useful for describing phenomena in condensed
matter physics. Finally, there exist scenarios in which there could actually be more than four dimen-
sions of space-time, which have nonetheless managed to escape detection.28
However, one should keep in mind that string theory is in some sense only in its infancy,
and, as such, is nowhere near being able to answer any of the questions we would wish it to answer;
especially regarding what happens at singularities. There are those who believe that, in the end,
string theory will either have nothing to do with Nature, or will never be testable, and as such will
be relegated to being a mathematical plaything, or a new branch of philosophy.
11.3 REALITY
At present, there is no candidate TOE that includes the Standard Model of particle physics and
general relativity. For example, no candidate theory is able to calculate the Fine Structure constant
(α = 1/137.035 999 084(21)), or the mass of the electron (me = 9.109 383 56(11) × 10⁻³¹ kg), both
of which are known very precisely from experiment (as can be seen by the error limits). We know
these fundamental constants with extraordinary precision from experiment, and achieving that level
of agreement with a theory would be a significant test of the validity of any theory. Most particle
physicists expect that the outcomes of the ongoing experiments, the searches for new particles at the
large particle accelerators and for dark matter, will be needed in order to provide further input for a
TOE. However, there are many who take the view that the search for the TOE is a waste of time
and resources; in fact, no better than the futile search for the Perfect Language by Dante and the
Kabbalists, or by Isaac Newton and Gottfried Leibniz.
Kurt Friedrich Gödel (1906–1978) was an Austrian, and later American, logician, mathe-
matician, and philosopher; he is considered to be one of the most significant of logicians. His in-
completeness theorems define modern views of mathematical logic; they demonstrate the inherent
limitations of every formal axiomatic system capable of modeling basic arithmetic. These results
from 1931 are important both in mathematical logic and in the philosophy of mathematics. The
theorems are widely, but not universally, interpreted as showing that David Hilbert’s 1900 program
to find a complete and consistent set of axioms for all of mathematics, and by extension for all of physics, the TOE, is impossible.
A number of scholars claim that Gödel’s incompleteness theorem suggests that any attempt
to construct a TOE is bound to fail. Gödel’s theorem, informally stated, asserts that any formal
theory expressive enough for elementary arithmetical facts to be expressed, and strong enough for
them to be proved is either inconsistent (both a statement and its denial can be derived from its
axioms) or incomplete, in the sense that there is a true statement that can’t be derived in the for-
mal theory. Stephen Hawking was originally a believer in the TOE but, after considering Gödel’s
theorem, concluded that an ultimate theory is not possible: “Some people will be very disappointed if
28 There is a wonderfully imaginative use of such compactification and hidden dimensions in the science fiction of
Liu Cixin; the technology of the Trisolarians and other alien species in The Three-body Problem of 2007, and its
two sequels, The Dark Forest (2008) and Death’s End (2010).
there is not an ultimate theory that can be formulated as a finite number of principles. I used to belong to
that camp, but I have changed my mind.”
Other physicists have argued against this view, pointing out that Gödel’s theorems are irrele-
vant for computational physics; that is, purely model-based theoretical physics. Analogously, it may
(or may not) be possible to completely state the underlying rules of physics with a finite number of
well-defined laws, but there is little doubt that there are questions about the behavior of physical
systems which are formally undecidable on the basis of those underlying laws. Whereas there may
or there may not be an underlying philosophical/theoretical reason why a TOE may or may not
exist, there is also a more prosaic argument about experimental uncertainty that clouds the search.
How do you know if you have found a more fundamental model of physical reality, a higher syn-
thesis of the Laws of Nature?
To date, no physical theory is held to be precisely accurate. Physics proceeds by a series of
successive approximations, allowing more and more accurate predictions and measurements of an
ever-wider range of phenomena; see Table 11.1, which summarizes the evolution of the precision of
the measured values of the speed of light, c. Some physicists believe that it is therefore a mistake to
confuse theoretical models with the true nature of reality, and hold that the series of approximations
will never terminate in an “absolute truth.” Einstein himself expressed this view on several occa-
sions. Following this view, we may reasonably hope for a TOE which self-consistently incorporates
all currently known forces, but we should not expect it to be the final answer.
A motive for seeking a TOE apart from the pure intellectual satisfaction of completing a
centuries-long quest, is that prior examples of unification have predicted new phenomena, some of
which (for example, electrical generators and technology) have proved of great practical importance
to our civilization.
11.4 FURTHER READING
The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory (2003);
Brian Greene; New York: W.W. Norton & Company.
The Road to Reality: A Complete Guide to the Laws of the Universe (2005); Roger Penrose; Knopf.
The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next (2006);
Lee Smolin; New York: Houghton Mifflin Co.
Not Even Wrong: The Failure of String Theory And the Search for Unity in Physical Law (2006); Peter
Woit; London: Jonathan Cape.
Table 11.1: The evolution of the experimental value of the speed of light, c. We see how the value of this
constant of Nature converged to its present day accepted value, but the size of the uncertainty associated
with the measurements also fell with time and the increasing precision of the measurements. Today, the
value of c is fixed by the definition of the meter in the modern Quantum-SI; hence, the present value
of c is exact, and without error. But this “fixing” of the value of a constant of Nature has implications
on our evolving understanding of other aspects of Nature. All Nature is interconnected (see Figure 9.2),
and we should be careful in formally constraining some of those connections.
Year | Method | Value of c
1675 | Astronomical observations of the moons of Jupiter | 220,000 km/s
1729 | Studies of optics (parallax) | 301,000 km/s
1849 | Optics (interference) | 315,000 km/s
1862 | Optics (rotating mirror) | 298,000 ± 500 km/s
1907 | Electromagnetism | 299,710 ± 30 km/s
1926 | Optics (interferometry) due to Albert Michelson | 299,796 ± 4 km/s
1950 | Electromagnetics (masers) | 299,792.5 ± 3.0 km/s
1958 | Radio interferometry | 299,792.50 ± 0.10 km/s
1972 | Laser interferometry | 299,792.4562 ± 0.0011 km/s
1983 | c fixed with the new definition of the meter | 299,792.458 km/s (exact, and so without error)
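One way to see the convergence summarized in Table 11.1 is to compute the relative (fractional) uncertainty of each measurement. The short Python sketch below is an added illustration; it simply re-uses the numbers from the table for the entries which quote an uncertainty.

# (year, value of c in km/s, quoted uncertainty in km/s), taken from Table 11.1
measurements = [
    (1862, 298000.0,    500.0),
    (1907, 299710.0,     30.0),
    (1926, 299796.0,      4.0),
    (1950, 299792.5,      3.0),
    (1958, 299792.50,     0.10),
    (1972, 299792.4562,   0.0011),
]

for year, value, err in measurements:
    print(f"{year}: c = {value:12.4f} km/s, relative uncertainty = {err / value:.1e}")

The fractional uncertainty falls from roughly 2 parts in 10³ in 1862 to a few parts in 10⁹ in 1972, after which the value was fixed by definition.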
CHAPTER 12
Changing the Paradigm: From Long
Lists to Short Explanations
One had to be a Newton to notice that the Moon is falling, when everyone else sees
it doesn’t fall.
Paul Valéry (1871–1945)
So how did we go from the earliest stage of the creation of science, that is, the creation of long lists,
to looking for an underlying principle to explain all the observations and information contained in
those long lists?
12.1 THE GREAT PARADIGM SHIFT IN BIOLOGY
In many ways, modern physics, or if you will modern physical sciences, is an unstable structure.
While parts of the edifice of physics are solid enough, and have been around for centuries; rep-
resenting a coherent story, there are other parts of the edifice that have been added in an ad-hoc
manner. Quantum mechanics enables one to calculate many measurable atomic properties, and
when the theoretical or calculated quantity is compared to the measured quantity, we have excep-
tional agreement, and we say that the theory must therefore be true. This is, after all, the basis of the
scientific method. Yet quantum mechanics does not fit in at all with astrophysics. These two areas of
physics each apply to very different length scales, from galaxies to quarks. Consequently, we have
at present two very different and successful, in their own domains, models of reality, but they do
not come together. We have yet to construct, or find the Theory of Everything (see Chapter 11).
When you look at physics today, you are looking at the state of biology in the early 1950s;
that is, before the discovery of the structure of deoxyribonucleic acid or DNA, and the explanation
of how this molecule and its self-replication explains evolution on earth. Before the early 1950s, we
knew about the patterns of inheritance of characteristics such as eye color and hair color in animals,
the breeding of horses and greyhounds, and the color of flowers in pea plants; and it was suspected
that this strange mechanism of inheritance had something to do with the complex molecules found
in the nuclei of all living cells. These large, complex molecules, which became known as nucleic
acids, were investigated and found to be long polymers made up of a handful of smallish molecules.
And so the race was on to try and determine the structure of the nucleic acid polymer to see if it
could tell us something about inheritance and genetics.
The problem was solved in 1953. Francis Crick (1916–2004) and James Watson (born 1928)
determined the double-helix structure of the major component of nucleic acid, deoxyribonucleic
acid (DNA). Not only did they determine the position of the atoms within the double-helix, but
they showed that when the DNA molecule divides, at the same time as the cell divides into two
daughter cells, the two (helical) strands of the original DNA molecule unwind, and each strand as-
sembles a new partner strand from small molecules available in the surrounding cell-fluid. And it is
the rules that govern this assembling of a new strand of the double-helix DNA, based on the chem-
ical structure of the original strand that allows physical characteristics to be passed from one gen-
eration to the next. Two physicists, using the technique of x-ray crystallography, had demonstrated
that the whole of biology could be re-interpreted from the view-point of the three-dimensional
structure of the complex molecules found in the cells of every living organism. What was more, the
work of these two young crystallographers demonstrated that if a mistake were made in building
the daughter-DNA molecule; that is, in assembling the new DNA strand/molecule to be fitted into
the daughter nucleus, then there was the possibility of a mutation, or a change in the blue-print of
life of that organism, generating the possibility that the next generation of that organism would
be ever so slightly different from their parents. At a stroke, the mechanism of Darwin’s theory of
evolution by natural selection was discovered, and the pseudo-science of eugenics overturned.
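The “rules that govern this assembling” are the base-pairing rules: adenine pairs with thymine, and cytosine pairs with guanine. The toy Python sketch below is an illustration added here (the sequence is invented); it builds the complementary strand of a short piece of DNA, and shows how a single copying error, a mutation, changes the daughter molecule.

# Watson-Crick base-pairing rules: A pairs with T, and C pairs with G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the partner strand assembled against the given template strand."""
    return "".join(PAIR[base] for base in strand)

template = "ATGCCGTA"                          # an invented template strand
print("template   :", template)
print("new strand :", complement(template))    # TACGGCAT

# A single copying error (a point mutation): one of the C bases is mis-copied as A.
mutated = template[:3] + "A" + template[4:]
print("mutated    :", complement(mutated))     # the daughter strand now differs at one base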
The paradigm of biology had changed. A new unity had been achieved out of a synthesis
of diversity. Before the 1950s, biology consisted in learning the contents of some very long lists
contained in a vast array of books. With reference to what was said in Chapter 1, when it comes to
retrieving data and information we had moved from Homer’s Catalogue of Ships to the database of
swift-footed Achilles. After the 1950s, biology consisted in looking at the observations contained in
those long lists of books and re-interpreting them in terms of the structure and replication of DNA.
The discovery of the structure of DNA, and how it replicated itself, revolutionized every aspect of
biology and medicine. It opened up the possibility of designing life, and of significantly extending
life-spans. The list of how the discovery of Crick and Watson will change our world is practically
endless. This century will be the century of genetic engineering and genetic medicine, just as the
last century was the century of the physical sciences.
12.2 ELECTROMAGNETISM
When we use a mobile phone, listen to the radio, use a remote control, or heat food in the micro-
wave, few are aware that it was the great Scottish physicist/mathematician, James Clerk Maxwell
(1831–1879), who was responsible for making these technologies possible. In 1865, Maxwell
published an article entitled “A Dynamical Theory of the Electromagnetic Field,” where he stated:
“It seems we have strong reason to conclude that light itself (including radiant heat, and other radiations
if any) is an electromagnetic disturbance in the form of waves propagated through the electromagnetic
field according to electromagnetic laws.” These ideas of Maxwell ushered in the great synthesis of
electromagnetism. This paradigm change came well after the development of the metric system in Revolution-
ary France (April, 1795). In the same way that the study of heat and energy had not reached a
sufficiently mature stage to allow the savants who formulated the metric system to propose a base
unit for temperature in 1795, the study of electricity was at an even more immature stage. Indeed,
the study of electricity and magnetic phenomena in the 1790s had more in common with parlor
tricks than laboratory investigations. The two names associated with the earliest investigations of
the nature of electricity are the pious, conservative, Italian medical doctor Luigi Aloisio Galvani
(1737–1798), and another Italian, the natural philosopher Alessandro Giuseppe Antonio Anastasio
Volta (1745–1827).
In 1791, Galvani famously discovered that the leg muscles of dead frogs twitched when they
came in contact with an electrical spark. According to popular versions of the story, Galvani was
dissecting a frog at a table where he had previously been investigating discharges of static electric-
ity. Galvani’s assistant touched an exposed sciatic nerve of a dead frog with a metal scalpel, which
had picked up a residual static (electrical) charge. At that moment, the two men saw the leg of
the dead frog kick as if it were alive. Such laboratory-based observations made Galvani the first
to consider the possible relationship between electricity and animation; that is, the creation of life,
and the possibility of the re-animation of dead tissue. Indeed, Luigi Galvani used the term “animal
electricity” to describe the force that animated the muscles of the dead frog. Along with many of
his contemporaries, he regarded the activation of the supposedly dead muscles as being generated
by an electrical fluid carried by the still functioning nerves of the frog to the inanimate muscles.
Given his background, Galvani naturally assumed he had discovered something of the animating
or vital force that was implanted in all creatures by their Creator. However, not everyone agreed
with this conclusion. In particular, Alessandro Volta thought that the term “animal electricity” had
a suggestion of superstition and magic, and that it was not an explanation of the dramatic phe-
nomenon observed repeatedly by Galvani and co-workers. For his part, Galvani held that natural
philosophers like Volta had no place in moving from the laboratory into God’s realm of vitalism
and the nature of life itself. The argument between Galvani and Volta was a microcosm of the larger
debate about the place of the Divine in Nature which was animating the European Enlightenment.
Galvani spent years repeating his experiment on dead animals, and discovered that you did not need
a traditional source of static electricity to cause the dead muscle tissue to twitch. A combination
of two wires of different metals, for example, copper and zinc was sufficient, but Galvani could not
explain these observations.
The phenomenon observed by Galvani was subsequently named “galvanism,” on the sugges-
tion of his sometime intellectual adversary, Volta. Today, the term galvanism is used only to describe
someone who suddenly becomes excited, and it is likely that most people who use this word have
no idea of its origin. At the beginning of the 19th century, however, the observations of Galvani
were the source of much discussion, most famously in the novel Frankenstein, or, The Modern Pro-
metheus by Mary Shelley, which describes further investigations into the principles of animation
and vitalism.
Alessandro Volta was more of a scientist than Galvani. In the late 1770s, Volta had studied
the chemistry of gases, and was the first person to investigate the origin and chemical composition
of natural gas, or methane. However, it is for his investigations into the nature of electricity that
Volta is most famous, in particular, for a systematic investigation of electrical capacitance. He de-
veloped separate means of investigating both the electrical potential applied to the two plates of the
capacitor, and the charge residing on the plates. Volta discovered that for a given pair of plates, the
potential and the charge are proportional. This relationship is called Volta’s Law of Capacitance, and
to honor his fundamental work on electrostatics the unit of electrical potential is named the volt.
Alessandro Volta realized, from his own studies of Galvani’s observations that the frog’s leg
merely served as both a conductor of electricity (the fluid in the dead muscle tissue is what today we
would term an electrolyte) and a detector of the presence of a flowing electric current; all of which
mimicked an instantaneous animation. Indeed, Volta realized that the two different metals (the
electrodes) used by Galvani, inserted into the fluid of the frog’s leg formed an electrical circuit. Volta
replaced the frog’s leg by paper saturated with another conducting electrolyte, e.g. salt solution, and
detected a flow of electricity. In this way he invented the electrochemical cell, the forerunner of all
chemical batteries.
Luigi Galvani never conceived of electricity as being separable from biology. He always
believed that animal electricity came from the muscle of the animal. Volta, on the other hand,
reasoned that animal electricity was merely a physical phenomenon external to the dead frog, an
electric current coming from the metals, which formed an electrochemical cell or battery (for ex-
ample, zinc and copper), mediated by the fluid in the muscle tissue. There was no reanimation of
dead tissue, merely a flow of electrical current from one electrode to the other electrode through
the physiological fluid (the electrolyte) in the muscle of that poor dead frog. But Galvani’s ideas did
give literature, and the cinema, Dr. Frankenstein and his splendid creature.
In the early 19th century, electricity, magnetism, and optics were three independent disci-
plines. However, the situation changed thanks to one invention and two discoveries. The invention
was the electrical battery, a continuous source of electrical current created by Alessandro Volta in
about 1800. The two discoveries were: (1) the demonstration of magnetic effects caused by the flow
of electrical currents, observed by the Danish chemist and physicist Hans Christian Ørsted (1777–
1851) and by the French mathematician and one of the creators of the Metric system, André-Marie
Ampère (1775–1836) in 1820; and (2), the 1831 discovery by the British chemist and natural
philosopher Michael Faraday (1791–1867) of the generation of electrical currents from magnetic
fields, that is, electromagnetic induction. In September 1820, Ampère presented his results to the
Académie des sciences: “mutual action between currents without the intervention of any magnet”; that is,
two parallel electrical currents attract, or repel each other depending on their polarity, as do per-
manent magnets. In 1826, he published Theory of Electrodynamic Phenomena, Uniquely Deduced from
Experience, in which he claimed that “magnetism is merely electricity in motion” and that magnetic
phenomena depend only on the existence and motion of electrical charges (see the lines of force
emanating from a bar magnet, which also represents the magnetic field generated by an electric
current in Figure 12.1), thereby setting the stage for Faraday’s experiments.
Figure 12.1: The invisible repulsive lines of force emanating from similar poles of two bar magnets;
visualized by the use of iron filings. Image from: https://commons.wikimedia.org/wiki/File:Magnetic_
field_of_bar_magnets_repelling.png.
The three contributions mentioned above form the basis of modern electromagnetism, but
required the insight of the Scot, James Clerk Maxwell to form a coherent single theory. Before
Maxwell, electromagnetism still consisted of long lists of observations of supposedly disparate
phenomena; Maxwell demonstrated the single underlying causation. Such a synthesis represents
the most profound transformation of the fundamentals of physics since Newton, and is one of the
greatest of scientific achievements, unifying electrical and magnetic phenomena, and enabling the
development of the theory of electromagnetic waves, including light.
James Clerk Maxwell published his major work, A Treatise on Electricity and Magnetism in
1873; a first step on the great journey to the Theory of Everything (see Figure 11.2). Here, Maxwell
rationalized and unified all the then known phenomena involving electricity and magnetism. When
we come to consider how matter interacts with light; that is, with an oscillating, or time-varying
electromagnetic field, we have to consider the other great contribution to the final synthesis of elec-
tromagnetism made by Maxwell, who, between 1861 and 1862, published a set of equations relating
electricity and magnetism and demonstrated that light is another electromagnetic phenomenon.
Classically, light scattering arises through secondary radiation from oscillating dipoles induced by
the incident electromagnetic wave. The simplest case occurs when the scattering medium is a gas,
composed of randomly distributed molecules of dimensions that are small compared to the wave-
length of the light.29 For a random distribution, the phase relationships between waves scattered
from different molecules are uncorrelated in all but the forward direction, so that the total scat-
tered intensity can be calculated directly as the sum of contributions from each molecule; thereby
permitting study of the properties of individual scattering molecules. Figure 12.2 demonstrates the
dramatic colors that are generated by the scattering, and absorption of the light coming from the
Sun, by the molecules in the atmosphere.
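The claim that, for randomly placed molecules, the total scattered intensity is just the sum of the individual intensities (rather than depending on the relative phases) can be checked numerically. The Python sketch below is an added illustration in arbitrary units: it adds N unit-amplitude waves with random phases and compares the resulting mean intensity with N, the incoherent sum.

import cmath
import random

random.seed(1)

def mean_intensity(n_molecules: int, n_trials: int = 2000) -> float:
    """Average |sum of unit-amplitude waves with random phases|^2 over many trials."""
    total = 0.0
    for _ in range(n_trials):
        field = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi))
                    for _ in range(n_molecules))
        total += abs(field) ** 2
    return total / n_trials

for n in (10, 100, 1000):
    print(f"N = {n:5d}: mean scattered intensity ~ {mean_intensity(n):8.1f} (incoherent sum = {n})")

The mean intensity comes out close to N, as the text asserts; a perfectly ordered arrangement of scatterers would instead give strong constructive or destructive interference.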
Figure 12.2: The blue color of the sky is caused by light scattered by atmospheric gas molecules (N2, O2,
H2O, and CO2), and not by absorption; these molecules are much smaller than the wavelengths of
visible light. The red color at sunset and sunrise (sunrise in Montpellier, France, in October 2019, in this
photograph) comes from absorption, because at sunrise and sunset the Sun is low in the sky and so the
sunlight is passing through the thickest section of atmosphere (that is, the path-length is at its longest).
This absorption of blue light, leaving the red/pink color is due to molecules other than the normal com-
ponents of air; that is, pollutants or dust. The grey/white color of clouds is caused by light scattered by
water droplets, which are of a comparable size to the wavelengths of the incident visible light. The darker
the color of the clouds, the larger are the water droplets, as liquid water does have a weak (electric dipole
forbidden) absorption in the visible.
29 The molecules comprising air (principally, N2 and O2) have a “diameter” of about 1 Å; that is, 1 × 10⁻¹⁰ m. On the
other hand, the wavelength of visible light, that is, the spacing between two successive maxima of the oscillating
electromagnetic wave, is about 5,100 Å (in the green, where our eyes have evolved to be the most sensitive).
In the next two chapters, we will consider how the classification of information about life,
and about natural phenomena is accomplished, and how this scientific classification assists scientists
in comprehending Nature, and in developing theories to explain the world around us. First we will
consider biology. As pointed out above, the paradigms of biology changed with the discovery of the
structure and function of DNA, and the interpretation of evolution at a molecular level in terms of
the hydrogen-bonding between the four nucleoside bases (the monomers) that compose the large
(long) polymeric DNA molecule: adenine, cytosine, guanine, and thymine. The diversity and evolu-
tion of life on earth arises from the coupling of these four smallish molecules, when they are bound
into the DNA polymer. It is generally held that evolution is the most powerful and comprehensive
idea ever formulated.
12.3 FURTHER READING
1. The Double Helix (1968); James D. Watson; New York: Touchstone.
2. The Molecule as Meme (2018); Jeffrey H. Williams; San Rafael, CA: Morgan & Claypool.
CHAPTER 13
The Classification of the Living and
the Dead
Nothing in biology makes sense except in the light of evolution.
Theodosius Grygorovych Dobzhansky (1900–1975)
There are millions of species of organisms, both plants and animals living on this Earth. In addition,
there are many millions of species preserved in the fossil record. These are plants and animals that
lived once upon a time, and as a consequence of climate change, natural selection, and rogue large
meteorites have become extinct. Given the scientist’s desire, and need for classification and list
making, how is it possible to keep track of everything that is alive, or has ever been alive? This is
not a dull academic question, as if we are to understand how we (Homo sapiens, or “thinking man”)
evolved from less-advanced creatures; we must be able to locate ourselves, and our ancestors in the
Great Scheme of Life on this planet. How then do we name and organize all of the long lists of the
living and the dead, without getting confused, and without our imperfect memories leaving large
embarrassing lacunae in our models of the evolutionary story of life on this planet?
The answer to this question is straightforward; we use a complex, but elegant system of
classification developed in the 18th century by Carolus Linnaeus (1707–1778).30 Linnaeus was a
Swedish botanist, physician, and zoologist who formalized a binomial nomenclature of organisms;
and we still use this system of naming and classifying organisms. Indeed, Linnaeus is known as the
“father of modern taxonomy” (see Figure 13.1). Linnaeus was a product of, and became a towering
figure in, the European Enlightenment. However, he is less well known, or remembered today, than
many of his scientific and philosophical contemporaries, particularly, those from France, Germany,
and Scotland.
30 Many of his writings were in Latin, and his name Carl von Linné, is rendered in Latin as Carolus Linnaeus. In
addition, we speak of the Linnaean system of classification (from the Latin form of his name), but the Linnean
Society of London is named from the original Swedish spelling of the name. The Linnean Society of London are
the custodians of Linnaeus’ specimen collection, and are the UK’s learned society responsible for taxonomy. It was
in the rooms of the Linnean Society that the papers of Charles Darwin and Alfred Russel Wallace on evolution
by natural selection were presented on July 1, 1858.
Figure 13.1: Painting of 1737 by Martin Hoffman
showing Carl von Linné (Linnaeus) in his field-cos-
tume; the traditional dress of the Sami people of
Lapland. In his hand is the plant that Jan Frederik
Gronovius named after him. Linnaea borealis is a spe-
cies of flowering plant in the family Caprifoliaceae (the
honeysuckle family). Until relatively recently, it was
the only species in the genus Linnaea. It is a boreal to
subarctic woodland subshrub; hence the specific name,
and is commonly known as twinflower. This plant was
a favorite of Carl Linnaeus, founder of the modern
system of binomial nomenclature, for whom the genus
was named (image from: https://en.wikipedia.org/wiki/
Carl_Linnaeus#/media/File:Naturalis_Biodiversity_
Center_-_Martin_Hoffman_-_Carl_von_Linné_(Lin-
naeus)_in_his_Lapland_costume_-_painting.jpg).
The European Enlightenment was the 18th century up to, but not including, the French
Revolution; the great socio-political event that was the product of the Enlightenment. This was a
period of intense philosophical and scientific investigation. Toward the end of the Enlightenment,
in 1784, the German philosopher and mathematician Immanuel Kant (1724–1804) published his
celebrated essay, “Answering the Question: What is the Enlightenment?,” where he told his readers
that the Enlightenment was man’s emergence from a self-imposed immaturity that had led to his
incapacity to use his understanding to explain Nature without guidance from another (higher)
being. Kant told his readers that they had to be courageous to understand Nature, they had to “dare
to know” (in Latin, sapere aude!). That man should investigate the world around him, and pursue his
investigation to the limit of technology; and then use his mind to imagine what might happen be-
yond his technical limitations. Although best known today for his work in ethics and metaphysics,
Kant made significant contributions to other disciplines, especially, mathematical physics. In 1754,
he was awarded the Berlin Academy Prize for his prediction of the inevitable slowing down of the
Earth’s rotation.
In his early mathematical studies, Kant pointed out that due to the gravitational attraction
between the Earth and the Moon, the frictional resistance of the motion of the oceans on the
Earth’s surface must lead to a slow decrease in the Earth’s speed of rotation. Heat was being
generated by friction between the moving slabs of water and the rotating solid Earth that supported
and carried the oceans. Kant knew that energy is conserved; it cannot be created or destroyed, so he
reasoned that the energy dissipated by the frictional interaction of the oceans with the massive rotating crust had to come
from somewhere, and Kant further reasoned that it was being taken from the speed of rotation of
the Earth; that is, from the Earth’s rotational energy and angular momentum. The position of the Moon relative to the Earth
is determined by the sum of the attractive forces derived from the masses of the Earth and Moon
and the much larger, but more distant, mass of the Sun. That is, the overall gravitational attraction
of these three bodies. Through this mutual gravitational force, the Earth holds the Moon to itself
and the Moon generates the tides seen on Earth; tides that are the origin of the inevitable slowing
of the Earth’s rotation (the day lengthens by about 2.3 milliseconds per century). This discovery attracted little or no attention until
about 1840, when the concept of energy began to be widely and more fully comprehended. Yet, the
slowing of the Earth is the reason that time-metrologists still insert “leap seconds” into the year
to maintain coherence between Greenwich Mean Time (a measure of the Earth’s rotation, based on when the Sun crosses the
meridian at Greenwich) and atomic time (a time scale based on the frequency of an excitation within the caesium atom).
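The figure of 2.3 milliseconds per century sounds negligible, but its effect is cumulative, which is why leap seconds are needed. The back-of-the-envelope Python sketch below is an added illustration; it assumes a perfectly steady slowing at the rate quoted above and ignores the irregular, shorter-term fluctuations in the Earth’s rotation.

DAYS_PER_CENTURY = 365.25 * 100
SLOWING = 2.3e-3   # seconds by which the length of the day grows each century (assumed constant)

def accumulated_lag(centuries: float) -> float:
    """Total lag (in seconds) of Earth-rotation time behind a uniform clock,
    assuming the day lengthens steadily by SLOWING seconds per century."""
    # Over the interval, each day is on average (SLOWING * centuries / 2) seconds too long.
    return 0.5 * SLOWING * centuries * (centuries * DAYS_PER_CENTURY)

for c in (1, 10, 20):   # 100 years, 1,000 years, 2,000 years
    print(f"after {c * 100:5d} years: rotation-based time lags by roughly {accumulated_lag(c):7.0f} s")

A steady 2.3 ms/century slowing accumulates to a lag of some tens of seconds after a single century, and to a few hours over two millennia; hence the occasional leap second.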
These early mathematical investigations had taught Kant that all the sciences are linked; that
Nature is a seamless whole (see Figure 9.2). He looked at the energy of interaction of the oceans
with the Earth’s rotating solid surface, then using mathematical logic he saw that the Earth is a
complex but self-contained, and self-regulating system. He dared to use mathematics to follow, to
its logical conclusion something that no one had previously considered: that the Earth is inevita-
bly slowing down through the generation of frictional energy. A mechanical energy derived from
the relative motion of the oceans to the Earth. Kant clearly demonstrated that one should look
at Nature holistically; in much the same way that the Ancient Chinese Taoist sages had looked
at the world around them. And in so doing, Kant came to a startling conclusion. What Kant and
others achieved in the physical sciences, by building on the work and ideas of Isaac Newton, the
Swedish botanist Carl Linnaeus attempted in the life sciences: the monumental task of devising a system
of classification capable of containing not only every living organism, but every organism that had
ever lived, and, in so doing, of providing a wealth of information about the interconnectedness of those
living and extinct organisms; connections which proved invaluable after the appearance of Darwin’s
explanation of the evolution of species in the mid-19th century.
Carl von Linné (Carl Linnaeus) was born in the countryside of Småland in southern Swe-
den. He received most of his higher education at Uppsala University and began giving lectures in
botany in 1730. He lived abroad between 1735 and 1738, where he studied and published the first
edition of his systematic classification of Nature, Systema Naturae, in the Netherlands. He returned
to Sweden, where he became professor of medicine and botany at Uppsala. In the 1740s, he was
sent on several journeys through Sweden to find and classify plants and animals. In the 1750s and
1760s, he continued to collect and classify animals, plants, and minerals, while publishing several
further volumes. At the end of his life, he was one of the most acclaimed scientists in Europe. Phi-
losopher Jean-Jacques Rousseau sent him the message: “Tell him I know no greater man on earth.”
Johann Wolfgang von Goethe wrote: “With the exception of Shakespeare and Spinoza, I know no one
among the no longer living who has influenced me more strongly.” Swedish author August Strindberg
wrote: “Linnaeus was in reality a poet who happened to become a naturalist.” Linnaeus has also been
called Princeps botanicorum (Prince of Botanists), and is considered as one of the founders of mod-
ern ecology. But it is as a taxonomist31 that he is remembered today.
13.1 A HIERARCHICAL SYSTEM OF CLASSIFICATION
During his lifetime, Linnaeus collected around 40,000 specimens of plants, animals, and shells. He
believed it was important to have a standard way of grouping and naming species. So in 1735, he
published the first edition of Systema Naturae (The System of Nature), which was a small pamphlet
explaining his new system of the classification of Nature. He continued to publish further editions
of Systema Naturae that included increasing numbers of named species. In total, Linnaeus named
4,400 animal species and 7,700 plant species using his binomial system of nomenclature. The tenth
edition of Systema Naturae was published in 1758 and is considered the most important edition;
its full title in English is System of Nature through the Three Kingdoms of Nature, According to Classes,
Orders, Genera, and Species, with Characters, Differences, Synonyms, Places. In Systema Naturae, Lin-
naeus classified Nature into a hierarchy. In this, Linnaeus was following the classification of John
Wilkins in his attempt to create a new philosophical universal language in 1688 (see Page 57).
Linnaeus proposed that there were three broad groups, called kingdoms, into which the whole of
Nature could be fitted. These kingdoms were animals and plants; he originally attempted to classify
minerals within the same hierarchy, but this did not work. He divided each of these kingdoms
into classes; classes were divided into orders. These were further divided into genera (genus is the
singular) and then into species. We still use this system today, but we have made some changes.
Today, we only use this system to classify living things, or things that were once alive. Also,
we have added a few additional levels in the hierarchy. The broadest level of life is now a domain.
All living things fit into only three domains: Archaea (single-celled microorganisms), Bacteria
(prokaryotic microorganisms) and Eukarya (organisms whose cells have a nucleus enclosed
within membranes, unlike prokaryotes (Bacteria and Archaea), which have no membrane-bound
organelles). Within each of these domains there are kingdoms, for example, Eukarya includes
the Kingdoms: Animalia, Fungi, Plantae (plants). Each kingdom contains phyla (the singular is
31 Taxonomy is the part of science that focuses on naming and classifying, or grouping organisms. Carolus Linnaeus
developed a way of naming and organizing species that we still use today, and which is still expanding as new
species of living and extinct organisms are discovered.
phylum), followed by class, order, family, genus, and species. Each level of classification is called a
taxon (the plural is taxa).
According to this system, the tree of life consists of three domains: Archaea, Bacteria, and
Eukarya. The first two are all prokaryotic microorganisms, or single-celled organisms whose cells
have no nucleus. All life that is made up of cells containing a nucleus and membrane-bound or-
ganelles, and multicellular organisms, is included in the Eukarya. Kingdom is the second highest
taxonomic rank, just below domain. Kingdoms are divided into smaller groups called phyla (except
the kingdom Plantae, which has “Divisions”).32
Some recent classifications based on modern cladistics33 have explicitly abandoned the term
kingdom, noting that the traditional kingdoms are not monophyletic; that is, do not consist of all
the descendants of a common ancestor. Depending on definitions, the animal kingdom Animalia
or Metazoa contains approximately 35 phyla, the kingdom Plantae contains about 14, and the
kingdom Fungi contains about 8 phyla. The total numbers of species in these phyla are estimates;
figures from different authors vary wildly, not least because some are based on described species,
some on extrapolations to numbers of undescribed species. And then there is the problem of es-
timating, from the fossil record the number of species of a particular phylum that existed in the
distant past. For instance, around 25,000–27,000 species of nematodes have been described, while
published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10
million; and 100 million.
As mentioned above, animals, fungi, and plants are arranged into various groupings to assist
with the classification of the great variety of life on Earth. Extinct organisms are treated in the
same way as extant organisms (those which are still around today). All such life, the animals, plants, and fungi, belongs to one
of three kingdoms: Animalia, Plantae, or Fungi. Kingdoms are further
sub-divided into other categories, organizing creatures, plants, and fungi in such a way that com-
mon features lead to organisms being associated together until an individual species is defined; that
is, until we arrive at the specific. This is the accepted method of classifying life (although cladistics
has added a new dimension, or two); the principles of this form of classification were laid down
by Carolus Linnaeus in the 18th century. To have achieved this level of hierarchical classification
would have been impressive enough, but Linnaeus also demonstrated that by judicious choice of
32 Traditionally, some textbooks from the U.S. used a system of six kingdoms (Animalia, Plantae, Fungi, Protista
(any eukaryotic organism (one with cells containing a nucleus) that is not an animal, plant or fungus), Archaea/
Archaebacteria, and Bacteria/Eubacteria) while textbooks in countries such as Great Britain, India, Greece, Austra-
lia, Latin America used five kingdoms (Animalia, Plantae, Fungi, Protista, and Monera (a kingdom that contains
unicellular organisms with a prokaryotic cell organization, having no nuclear membrane such as bacteria).
33 A method of classification of animals and plants that seeks to identify and take account of only those shared
characteristics which can be deduced to have originated in the common ancestor of a group of species during
evolution, not those arising by convergence.
names, for the various taxa, one could input into the classification a great deal of useful morphologic
information; see Table 13.1.
Table 13.1: Classification of man, an animal that lives with man, and an extinct carnivorous dinosaur

Category (Taxon) | Human | Cat | Dinosaur
Domain | Eukarya | Eukarya | Eukarya
Kingdom | Animalia | Animalia | Animalia
Phylum | Chordata | Chordata | Chordata
Class | Mammalia | Mammalia | Reptilia
Order | Primates | Carnivora | Saurischia (Theropoda)
Family | Hominidae | Felidae | Tyrannosauroidea
Genus | Homo | Felis | Tyrannosaurus
Species | H. sapiens | F. catus | T. rex

The specimen of T. rex shown here is in the Field Museum in Chicago, IL, USA, and is affectionately known
as Sue. There are many photos of Sue all over the Internet. The cat lives in south London.
We see from Table 13.1 that a human, a cat, and an extinct meat-eating dinosaur are all in
the same domain, kingdom, and phylum. All three of us are, or were, multi-celled animals with
backbones (the phylum Chordata: having a spine or backbone). It is after this level in the Great
Hierarchy of Life that there is a divergence. The human and the cat are mammals (Mammalia), but
the dinosaur was a reptile (Reptilia). Thereafter, the man and the cat diverge; we are Primates, and
the cat is a Carnivore.
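The hierarchy in Table 13.1 maps naturally onto a simple data structure. The Python sketch below (an illustrative addition, using only the taxa listed in the table) stores each organism’s classification as an ordered list of taxa and reports the lowest rank at which two organisms still share a taxon, a crude expression of how closely related they are in the Linnaean scheme.

RANKS = ["Domain", "Kingdom", "Phylum", "Class", "Order", "Family", "Genus", "Species"]

CLASSIFICATION = {
    "human":  ["Eukarya", "Animalia", "Chordata", "Mammalia", "Primates",
               "Hominidae", "Homo", "H. sapiens"],
    "cat":    ["Eukarya", "Animalia", "Chordata", "Mammalia", "Carnivora",
               "Felidae", "Felis", "F. catus"],
    "T. rex": ["Eukarya", "Animalia", "Chordata", "Reptilia", "Saurischia (Theropoda)",
               "Tyrannosauroidea", "Tyrannosaurus", "T. rex"],
}

def last_shared_rank(a: str, b: str) -> str:
    """Return the lowest (most specific) rank at which organisms a and b share a taxon."""
    shared = "none"
    for rank, taxon_a, taxon_b in zip(RANKS, CLASSIFICATION[a], CLASSIFICATION[b]):
        if taxon_a != taxon_b:
            break
        shared = f"{rank} ({taxon_a})"
    return shared

print(last_shared_rank("human", "cat"))      # Class (Mammalia)
print(last_shared_rank("human", "T. rex"))   # Phylum (Chordata)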
The categories of classification from kingdom down to the species level are referred to as a
taxonomic hierarchy. Organisms should be classified to reflect evolutionary relationships, with each
taxon representing organisms that share a common ancestor; this is the Tree of Life model of tax-
onomy. It can be seen from Table 13.1 that the author, his neighbor’s cat, and a large meat-eating
dinosaur all share the same domain, kingdom, and phylum. It is only thereafter that we start to
separate; that is, the tree of life branches for us. The author and the cat, both being mammals, lie on
a different line of descent from the dinosaur T. rex, which derived from the chordates via reptiles. To be
absolutely correct the name of all taxa should begin with a capital letter, except for the individual
species name which should always begin with a lowercase letter.34 The scientific name for a partic-
ular organism consists of two Latin or Latinized words that are always the genus followed by the
species classification. Hence the term binomial classification. Such a standard system of nomencla-
ture ensures that scientists from around the world can communicate effectively when describing the
characteristics of an individual organism, or even of a single fossil. The genus name, such as Tyrannosaurus,
can be used on its own, but the species name, rex, has no meaning without the genus associated with it,
as some species names may be used many times for organisms in different genera.
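The typographical conventions just described (genus capitalized, species epithet lowercase, and the epithet never used without its genus) are simple enough to enforce mechanically. The minimal Python sketch below is purely illustrative.

def format_binomial(genus: str, species: str) -> str:
    """Normalize a binomial name: Genus capitalized, species epithet in lowercase."""
    if not genus.strip():
        raise ValueError("a species epithet has no meaning without its genus")
    return f"{genus.strip().capitalize()} {species.strip().lower()}"

print(format_binomial("tyrannosaurus", "REX"))   # Tyrannosaurus rex
print(format_binomial("homo", "Sapiens"))        # Homo sapiens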
It is at the level of order that the three creatures illustrated in Table 13.1 separate fully into
primate, carnivore, and dinosaur. At the next level down in the hierarchy, family, we refer to Hominidae
(whose members are known as great apes or hominids: a taxonomic family of primates
that includes eight extant species in four genera: Pongo, the Bornean, Sumatran, and Tapanuli
orangutans; Gorilla, the eastern and western gorillas; Pan, the common chimpanzee and the bonobo;
and Homo, which includes modern humans and their extinct relatives (e.g., the Neanderthal, Homo
neanderthalensis) and ancestors, such as Homo erectus); for the cats, Felidae (a family of mammals
in the order Carnivora, colloquially referred to as cats; a member of this family is also called a
felid, and the Felidae exhibit the most diverse fur patterns of all terrestrial carnivores); and Tyrannosauroidea
(meaning “tyrant lizard forms,” a group of coelurosaurian theropod dinosaurs that
includes the family Tyrannosauridae as well as more basal relatives). The genus taxa in Table 13.1
become even more narrowly defined: Homo (from the Latin for human being; the genus which
emerged from the otherwise extinct genus Australopithecus, and which encompasses the extant species
Homo sapiens (modern thinking man), plus several extinct species classified as either ancestral to or
closely related to modern humans (depending on the species), most notably Homo erectus and Homo
neanderthalensis); Felis (a genus of small and medium-sized cat species native to most of Africa and
south of 60° latitude in Europe and Asia to Indochina). But it is with the specific taxon that we
arrive at the specimens illustrated in Table 13.1. Homo sapiens (not, Homo neanderthalensis); Felis
catus (not, Panthera tigris, the tiger) and Tyrannosaurus rex.35
In this way, all scientists, including palaeontologists, who work primarily with extinct species, have
a classification framework to use, and into which they may locate their particular specimen. That
specimen will then be located in geological or evolutionary time, and in space; that is, in a partic-
ular habitat. However, there are additional conventions to consider when classifying and naming
animals such as dinosaurs (or indeed all organisms, for that matter). If a dispute arises as to the
34 This is some of the dogma of science that has attached itself to the technical advances of science. Such dogma,
or “the way things should be done, as decided by an international group of elderly scientists” is also encountered
in chemical nomenclature (see Chapter 14) and in the use of the International System of Units (SI, abbreviated
from the French Système international d’unités), the modern form of the metric system, and the most widely used
system of measurement (see Chapter 8).
35 Only species are real. Everything else is interpretation; that is, arbitrary and often subject to re-classification. But
a species name should be eternal (unless a precedent turns up).
naming of an organism (and they do; see Chapter 14 on the naming of the chemical elements)
then it is the convention for the earliest name, the first description, to take precedence. For example,
the nomenclature of early hominids (our direct ancestors) is a minefield (see comment below). But
this system of priority publishing does permit standardization; in this way, the name Brontosaurus
was replaced by its older synonym Apatosaurus. There have, however, been some notable exceptions
to this, and T. rex is one of them.
In the late 19th century, many years before T. rex was named and described in 1905, the
American palaeontologist Edward Drinker Cope described two badly eroded fossil vertebrae as
Manospondylus gigas; that is, "giant porous vertebra," in reference to the numerous openings for
blood vessels he found in the fossilised bone. This strange honeycombed backbone was unlike
any known dinosaur fossil, and so it was given this name. One of these bones has since been lost;
however, the name stood, and if the rules of scientific nomenclature were strictly followed, then, as this
bone is believed to represent a Tyrannosaurus rex, T. rex, the "Tyrant Lizard King," should be renamed
Manospondylus gigas, or "giant porous vertebra," but that is not quite such an exciting name.
The debate as to the true name of Tyrannosaurus rex was brought to wider public attention
when in 2000 a team from the Black Hills Institute of Geological Research, Hill City, South Da-
kota, U.S., www.bhigr.com , claimed they had found the original site where Cope had unearthed
the weathered fossil bones described as Manospondylus. Fossils found on this site, presumably
from the same specimen that Cope studied almost a century earlier, turned out to be T. rex, so
Tyrannosaurus rex should have been renamed based on this evidence. The International Code of
Zoological Nomenclature (ICZN) states that if further remains are found and these are identical
to those of the earlier discovery then the earlier name and description should be used. This led to
much consternation among scientists (and among Hollywood directors, as Manospondylus sounds
significantly less cool than Tyrannosaurus rex). However, in 2000 the ICZN ruled that T. rex should
stay, as the name had been cited in numerous works by many authors and the case of mistaken
identity was more than 50 years old.
The creation of names, the naming of things, is an important and mysterious act, and it is the
origin of the attraction of the study of names: how to encapsulate in a word, or perhaps two words
(as in the Linnaean system of biological nomenclature), the important characteristics of a newly
discovered mineral, plant, animal, or planet.36
36 A small bird like the thrush has very different names in different countries, yet even if you know all those names,
you would still know nothing about the bird. You would only know something about the people who have
observed that bird, and what they called the bird. The thrush sings, it teaches its young to fly, and it flies great
distances; distances so large that we are not sure how this small bird is able to navigate using the Earth’s magnetic
field as a guide. A true description of the thrush should provide some of this important information. The classifi-
cation of the thrush: Kingdom: Animalia; Phylum: Chordata; Class: Aves; Order: Passeriformes; Suborder: Passeri;
Family: Turdidae. Below this hierarchical level, the taxonomy of thrushes becomes complex because evolution is
continually at work, leading to complex local characteristics.
13.2 A WARNING TO THE UNWARY
The Linnaean system of biological nomenclature is in two parts, allowing some personal individu-
ality while retaining some information on sample characteristics. The first name refers to the genus
and is given to a set of distinct anatomical characteristics of the organism. The second, specific name
can take this morphological characterisation further; for example, the large ammonite (a member
of the phylum Mollusca (the molluscs), similar to the still extant Nautilus, but the ammonites became
extinct along with the dinosaurs at the end of the Cretaceous Period, about 66 million years ago)37
found on Portland Isle, Dorset, UK, Titanites giganteus, is the largest representative of a genus of
ammonites already well known for their size (see Figure 13.2), although there are even larger
ammonites in other genera.
Figure 13.2: Giant ammonite of the genus Titanites, in an organic limestone (the rock consists largely
of a matrix of shells) of Portland stone (Jurassic). Image from: http://www.southampton.ac.uk/~imw/
portfoss.htm. Thanks to Dr. Ian West for permission to reproduce this image; he retains full copyright.
Alternatively, the second name in a Linnaean system of nomenclature may be the Latinized
name of the person who first identified, or described the sample; the discoverer does not, however,
37 Classification of Ammonites: kingdom: Animalia; phylum: Mollusca; class: Cephalopoda; subclass: Ammonoidea.
propose his or her own name; this is discreetly left to colleagues, and vice versa. Thinking up such
names might seem a good way to relax after all the hard work of obtaining the specimen. However,
any slackness or lack of care in determining the taxonomy of a new specimen can have serious re-
percussions; taxonomy is an important science as it explains and summarises the history of life on
Earth. In 1995, an issue of Nature carried a scathing editorial condemnation of a research group
who had, for whatever reason, ascribed a frozen corpse (affectionately named Ötzi, who died about
3345 BCE, and was found in a glacier in the Ötztal Alps, on the border between Austria and Italy) to a new species. The Nature editorial ran, "With
breath-taking abandon, Lubec et al. assign Ötzi to a new species, Homo tirolensis. No reason is given
for this casual designation. Readers will look in vain for the careful systematic and diagnostic argument
that such nomenclature requires." [1] Ötzi is as much a member of the species Homo sapiens as is the
author illustrated in Table 13.1. Such a colossal error in taxonomy (or, perhaps a lack of apprecia-
tion for the details of taxonomy) will terminate your scientific career faster than a large incoming
meteorite.
13.3 THE LIMITS OF LINNAEAN CLASSIFICATION: TWO
UNCLASSIFIABLE SPECIES FOUND OFF AUSTRALIA
As a final point, it is often the case that the limits of the Linnaean system of nomenclature are
pushed by the discovery of new, exotic specimens. In this manner, the Linnaean system is contin-
ually being expanded, and refined. The following is taken from a report in the Guardian newspa-
per (www.theguardian.com/environment/2014/sep/04/two-unclassifiable-species-found-off-australian-coast)
concerning a newly discovered organism; an organism so unusual that
identifying the appropriate kingdom was problematic, and identifying the appropriate phylum
almost impossible. The specimens displayed in Figure 13.3 were collected off the south-east coast
of Australia in 1986. They were collected at water depths of 400 meters and 1,000 meters on the
continental slope near Tasmania, using a sled that was dragged over the sea floor to collect bot-
tom-dwelling organisms. The researchers were immediately struck by the unusual characteristics of
the specimens collected.
When initially discovered, the organism’s classification was difficult. The two specimens were
assigned their own genus, Dendrogramma, and family, Dendrogrammatidae, and the researchers even
considered putting them in their own phylum. As they put it, however, “we refrain from erecting
such a high-level taxon for the time being, because new material is needed to resolve many pertinent out-
standing questions.” The lead scientist of the identification effort, Jørgen Olesen of the University of
Copenhagen, suggested that they represent “an early branch on the tree of life, with similarities to the
600 million-year-old extinct Ediacara fauna.”
The genus name Dendrogramma derives from the two Greek words déndron, meaning tree-
like, and grámma, meaning drawing or mathematical figure, alluding to the branching pattern of
the digestive canals (see Figure 13.3), which resemble dendrograms; that is, branching diagrams
frequently used by biologists to illustrate the evolutionary relationships among organisms. The
specific name enigmatica of the type species refers to the mysterious nature of the organisms, while
discoides—the species epithet of the second species—alludes to the disc-like shape of the animals.
Figure 13.3: Enigmatic specimens dredged up from the ocean depths. Preserved specimens of Dendro-
gramma. Images from: https://en.wikipedia.org/wiki/Dendrogramma#/media/File:Multiple_Dendrogramma.png.
13.4 FURTHER READING
The author is not a biologist, and is unfamiliar with texts used for teaching taxonomy; however,
the author has found invaluable many of the articles available in the online encyclopaedia, Wiki-
pedia (https://en.wikipedia.org/wiki/Wikipedia). In particular, the articles on Linnaean taxonomy
(https://en.wikipedia.org/wiki/Linnaean_taxonomy) and taxonomy (https://en.wikipedia.org/
wiki/Taxonomy_(biology)) were helpful, as were the links they contain.
[1] Nature, 1995, 373, 176.
CHAPTER 14
Aspects of Chemical Nomenclature
What’s in a name? that which we call a rose / By any other name would smell as sweet
Romeo and Juliet, Act 2, Scene 2; William Shakespeare (1564–1616)
The invention of modern biology began with the order introduced by Linnaeus, with his binomial
nomenclature. In this way, biology became more than just long lists of organisms, fossils, and ob-
servations of morphology and behavior. Biology became a system, which was developed further by
Charles Darwin, and finally rationalized by the work of Crick and Watson in the 1950s. At a stroke,
Darwin’s theory of evolution became a fact grounded in molecular physics. The myriad of books on
everything from the breeding of horses and pigeons, to the genetics of pea plants and roses could be
replaced by a new paradigm based on the interaction, and number of the hydrogen-bonds formed
between the four DNA nucleoside bases. Biology and condensed matter physics were unified, and
those long lists could be forgotten.
This same type of rationalization is underway in chemistry. There are about 20 million known
chemicals, with new molecules being synthesised every week. How do you construct a systematic
chemical nomenclature, which removes the appalling idea of having to memorise all, or even a
smallish part of those individual trivial names? Well, the subject of chemical nomenclature is truly
vast. Indeed, the International Union of Pure and Applied Chemistry (IUPAC) was created over
a century ago to address this very problem. But sadly, not everyone in the chemistry community
follows the rules of nomenclature laid down by the committees of experts in IUPAC (see https://
iupac.org/what-we-do/nomenclature/ for details). Here we will consider the names of the chemical
elements, which amply demonstrate the limitations of the present system of nomenclature, and show
how the present system of nomenclature is not based purely on dispassionate scientific argument.
Figure 14.1 and Table 14.1 give us a feeling for the arcane origins of chemistry, and of chemical
nomenclature.
We are taught that science is above politics; that is, science is truly international, but is this
really true? The more one questions, for example, the names selected today for newly discovered
chemical elements, phenomena, and units the more one seems to see the world of politics and the
individual intrude into the world of science. But then the selection of a name is not a trivial matter.
Old taboos about not revealing one’s name to a stranger, lest that stranger gain some magical hold
over you, stem from the time when a person’s name represented a characteristic. That characteristic
148
defined the person under discussion, and if you knew that name you knew something, perhaps
everything, about the person: “to name is to know.”
Figure 14.1: A table of chemicals, including some
chemical elements, constructed by the father of mod-
ern chemistry, John Dalton (1766–1844); compare
the symbols used for the entries in this table with the
alchemical symbols, and planet symbols used by al-
chemists and astrologers (see Table 14.1, Figure 14.2,
and Figure 3.1); for example, Hydrogen has the same
symbol as the Sun (see Table 7.1), but Dalton could
not have known that the Sun is composed mostly of
Hydrogen. Image from: https://en.wikipedia.org/wiki/
History_of_the_periodic_table.
Names are at the heart of any classification of the world. They are therefore at the heart of
science. A true name is the name of an object, or an animal that expresses, or is somehow identical
to the true nature of that object or animal. The notion that language, or some specific sacred lan-
guage, refers to things by their true names has been central to a great deal of literature, philosophy,
as well as various traditions of magic, religious invocation, and mysticism (mantras) since antiquity
(see Chapter 2). And thus the idea that if you know a person's secret, or true, name you can gain
a magical power over that person; for example, the true name of the Egyptian Sun god, Ra, was
revealed to Isis only through an elaborate trick. This knowledge gave Isis complete power over Ra,
and allowed her to put her son Horus on the throne. Socrates in Plato’s Cratylus considers, with-
out taking a position, the possibility as to whether names are “conventional” or “natural”; that is,
whether language is a system of arbitrary signs, or whether words have an intrinsic relation to the
things they signify. Odysseus, when captured by Polyphemus in Homer's Odyssey, is careful not to
reveal his name; when asked for it, Odysseus tells the giant that he is “nobody.” But later, having
escaped after blinding Polyphemus and thinking himself beyond Polyphemus’ power, Odysseus
boastfully reveals his real name; an act of hubris that was to cause enormous problems later in the
story. Knowing his name, Polyphemus was able to call down upon Odysseus the revenge of his
father, the god of the sea Poseidon.
14.1 THE PROBLEM OF NAMING THINGS IN CONTEMPORARY
SCIENCE
Today in astronomy, celestial objects are often named after individuals as we have exhausted the
Classical pantheons of pagan gods and goddesses. It is likely that in some cases, using a personal
name is justified. One can well imagine, for example, the relief of the European nations in Septem-
ber 1683 when the King of Poland, Jan (III) Sobieski, defeated the armies of the Ottoman Sultan,
Mehmed IV at Kahlenberg, thereby saving Christian Europe from the Turks. In recompense, So-
bieski’s name was given to a newly discovered constellation of Stars, a rare honor for someone who
did not live on Mount Olympus. However, when we learn that there are comet and small planet
hunters who scan the heavens for new objects so that they may name them after family members
and friends, one wonders at the seriousness with which they regard their enterprise. Astronomy is
a field crying out for a totally rational system of nomenclature; although the International Astro-
nomical Union does have a numbering system for asteroids, and has adopted a numerical system
for comets, it continues to use a personal name in parentheses.
Figure 14.2: The first modern tabulation of the chemical elements, by Dimitri Mendeleev (1834–1907);
dating from 1869–71. Image from: https://en.wikipedia.org/wiki/History_of_the_periodic_table.
Perhaps the most rarefied branch of experimental chemistry is the synthesis of new chemical
elements. As one might imagine, this is not easy but nuclear chemists are continually attempting
such syntheses. At present there are 118 named chemical elements, but the naming process is not
always straightforward. And we will see that in an investigation of the relevance and appropri-
ateness of naming new chemical elements after towns, laboratories, and scientists, one begins to
question that oft-stated description of science as being supranational; particularly, science related
to nuclear physics and the disintegration of radioactive nuclei.
The International Union of Pure and Applied Chemistry (IUPAC) is the body which, since
1919 has been charged with organizing a systematic nomenclature of chemistry, and this includes
the naming of the chemical elements. Figure 14.2 gives a picture of the classification of the chem-
ical elements in the middle of the 19th century. Today, the naming of a new element is complex.
Gone are the days when early chemists such as Humphry Davy (1778–1829) isolated whole col-
umns of the Periodic Table,38 and assigned the names we still use. Today, when a discovery is first
published, a temporary systematic element name is assigned by IUPAC to the newly synthesized
chemical element. In chemistry, a transuranic element (heavier than Uranium) receives a permanent
name and symbol only after its synthesis has been confirmed by a second laboratory (there are only
two or three laboratories in the world capable of doing these experiments). In some cases, however,
such as the Transfermium Wars, controversies about priority and the naming of the elements have
arisen; and there have been protracted international disagreements. Such controversies are not only
deeply embarrassing for the laboratories concerned (questioning the science), but also embarrassing
politically because non-scientists get involved to further their own ends by boosting national pride
and chauvinism, which should have no place in science.
The IUPAC systematic, but temporary, names for a new element are derived from the ele-
ment’s atomic number, and are only applicable for elements between atomic number (Z) 101 ≤ Z
≤ 999. Each digit is translated to a numerical root, according to published rules. The roots are con-
catenated, and the name is completed with the ending-suffix -ium. Some of the roots are Latin and
others are Greek to avoid two digits starting with the same letter (for example, the Greek-derived
pent is used instead of the Latin derived quint to avoid confusion with quad for 4). There are elision
rules designed to prevent odd-looking names. The suffix -ium overrides traditional chemical suffix
rules, thus elements 117 and 118 were ununseptium and ununoctium, not ununseptine and ununocton.
This does not apply to the final trivial names these elements receive once their existence has been
confirmed; thus element 117 and 118 are now Tennessine and Oganesson. For these trivial names,
all elements receive the suffix -ium, except those in group 17 which receive -ine (like the other
Halogens) and those in group 18 which receive -on (like the other Noble Gases). The systematic
symbol is formed by taking the first letter of each root, converting the first to a capital. This results
in three-letter symbols instead of the one- or two-letter symbols used for named elements.
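To see how mechanical these temporary names are, the provisional name and symbol can be generated from the atomic number by a few lines of code. The sketch below is illustrative only (it is not an IUPAC tool), and the two elision rules it implements, collapsing the doubled "i" before -ium and the tripled "n" where enn meets nil, are assumed from the published rules referred to above:

```python
# A sketch of IUPAC's temporary systematic naming for elements with 101 <= Z <= 999.
ROOTS = ["nil", "un", "bi", "tri", "quad", "pent", "hex", "sept", "oct", "enn"]

def systematic_name(z: int) -> tuple[str, str]:
    """Return the provisional (name, symbol) for atomic number z."""
    if not 101 <= z <= 999:
        raise ValueError("systematic names apply only for 101 <= Z <= 999")
    digits = [int(d) for d in str(z)]
    # Concatenate one root per digit and append the -ium ending.
    name = "".join(ROOTS[d] for d in digits) + "ium"
    # Elision rules (assumed here): 'ennnil' loses one 'n', and 'iium' loses one 'i'.
    name = name.replace("nnn", "nn").replace("iium", "ium")
    # The symbol is the first letter of each root, with the first capitalized.
    symbol = "".join(ROOTS[d][0] for d in digits).capitalize()
    return name, symbol

print(systematic_name(118))  # ('ununoctium', 'Uuo')
print(systematic_name(120))  # ('unbinilium', 'Ubn')
```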
38 If you look at Mendeleev’s Table of the Chemical Elements, Figure 14.2, you will see most of the elements iso-
lated and named by Davy (sodium, potassium, calcium, strontium, magnesium, barium) in the first two columns
or groups (I and II).
After many years and a great deal of discussion in the appropriate IUPAC committee, the
scientists responsible for the experiments that created these ephemeral species (few of the most re-
cently discovered chemical elements exist for anything approaching a second; they are all radioactive
species, and their half-lives are very short—usually fractions of a millisecond) will eventually agree
on a true trivial name; that is, a name that can be used to identify this element in the non-specialist
literature. Table 14.1 gives the names of the Transfermium chemical elements; Fermium is element
number 100, and it can be clearly seen that these chemical elements have been named in accordance
with the pre-eminent geo-political struggle of that period, the Cold War. There are about as many
Russian as American names, with a few European names thrown in for good measure, to make it
look as if it is not a Cold War club.
It is not for nothing that the two largest laboratories involved in this type of nuclear synthe-
sis are the Lawrence Livermore (element 116) National Laboratory in California (element 96 is
named after California) and the Flerov (element 114) Laboratory of Nuclear Reactions in Dubna
(element 105) near Moscow (element 115); these were the centers for Cold War research into nuclear
weapons.
Table 14.1: Names and origin of the elements after Fermium in the Periodic Table of the Elements
(see Figure 14.3)
Number in Periodic Table | Final Trivial Name (and symbol) | Origin of Trivial Name
100 | Fermium (Fm) | Named in honor of the Italian-American physicist, Enrico Fermi (1901–1954).
101 | Mendelevium (Md) | Named in honor of the Russian chemist, Dimitri Mendeleev (1834–1907).
102 | Nobelium (No) | Named for the founder of the Nobel prizes and armaments manufacturer, Alfred Nobel (1833–1896).
103 | Lawrencium (Lr) | Named in honor of the American physicist, Ernest O. Lawrence (1901–1958).
104 | Rutherfordium (Rf) | Named in honor of the British (New Zealand born) physicist, Lord Ernest Rutherford (1871–1937).
105 | Dubnium (Db) | Named after the town of Dubna in Russia.
106 | Seaborgium (Sg) | Named in honor of the American chemist, Glenn T. Seaborg (1912–1999).
107 | Bohrium (Bh) | Named in honor of the Danish physicist, Niels Bohr (1885–1962).
108 | Hassium (Hs) | Named for the German state of Hesse.
109 | Meitnerium (Mt) | Named in honor of the Austrian-born physicist, Lise Meitner (1878–1968).
110 | Darmstadtium (Ds) | Named for the German city of Darmstadt.
111 | Roentgenium (Rg) | Named in honor of the German physicist, Wilhelm Conrad Röntgen (1845–1923).
112 | Copernicium (Cn) | Named in honor of the Polish astronomer, Nicolaus Copernicus (1473–1543).
113 | Nihonium (Nh) | Named for Japan.
114 | Flerovium (Fl) | Named for the Flerov Laboratory of Nuclear Reactions, Dubna, Russia.
115 | Moscovium (Mc) | Named for the city of Moscow, Russia.
116 | Livermorium (Lv) | Named for the Lawrence Livermore National Laboratory, CA, U.S.
117 | Tennessine (Ts) | Named for the state of Tennessee, USA.
118 | Oganesson (Og) | Named in honor of the Russian nuclear physicist, Yuri Oganessian (born 1933).
Speaking personally, the origins and meanings of the names given to the chemical elements are
sometimes the first romantic attachment formed by young chemists with their future career. Many
of us are fascinated by the way that the various names have been derived. For example, some are
taken from the name of the mineral from which the element was extracted (for example, Sodium
(from soda), Potassium (from potash), or Carbon; carbo being Latin for charcoal), and some from
the locality where the mineral containing the element was found (for example, Strontium, from
Strontian in Scotland, or Copper, derived from Cuprum, the Latin name for Cyprus). Other names
of elements derive from the name of the city where the discoverer lived (for example, Hafnium,
Hafnia being Latin for Copenhagen), or from the color of the purified element (for example, Chlo-
rine, from chloros, meaning greenish-yellow in Greek).
The element’s name might also be derive from some characteristic of the chemical properties
of the element, thereby providing information about its chemistry. This may include inertness (for
example, Argon; in Greek argos means inactive), or reactivity (for example, Bromine, bromos being
Greek for stench, or Fluorine, where in Latin fluere means flux). Indeed, the name may even derive
from the difficulty of extracting the element from the naturally occurring source, again imparting
important chemical information. Examples of these include the Greek lanthanein (Lanthanum,
element number 57) which means to “lie hidden,” and dysprositos (Dysprosium, element number
66) meaning “hard to get at.” If one is familiar with the Greek myths, it is easy to understand why
Niobium (element number 41) and Tantalum (element number 73) are so named. Niobe was the
daughter of Tantalus, and the two elements are found together in the same ore. It was only in 1844
that they were shown to be two distinct elements. The element Tantalum was first isolated in 1802,
but Niobium was not isolated until 1864, when it was extracted from a purified ore of Tantalum.
Sir Humphry Davy and Jöns Jacob Berzelius (1779–1848) isolated and named almost whole
columns of chemical elements. Davy was the first to isolate and name eight of the elements: Boron,
Barium, Calcium, Chlorine, Magnesium, Potassium, Sodium, and Strontium, whereas Berzelius
only managed to isolate and name four elements: Cerium, Selenium, Silicon, and Thorium. Other
well-known chemists who figure in this list of discoveries include Friedrich Wöhler (1800–1882),
who isolated Aluminium and Beryllium, and Robert Wilhelm Bunsen (1811–1899) who, in a tri-
umph of early analytical chemistry, spectroscopically identified Caesium and Rubidium, without
even isolating weighable quantities of the pure metals. Bunsen named these elements from the color
of the principal spectral line; caesius, Latin for sky-blue, and rubidus, Latin for deepest-red. These
early chemists did not simply add extra elements to the list of elements already known, they put de-
tailed information about the structure and properties of the newly discovered elements into the new
names; just as Linnaeus did in his system of binomial nomenclature of organisms (see Chapter 13).
Recently, however, there has been a trend to name elements after individuals. Cynics might
say that this trend has arisen because few of today’s nuclear chemists know the Greek myths, or any
Classical languages. Probably the last element named from a distinctive characteristic, as opposed
to merely adopting the Latinized name of the university or state where it was discovered, was the
man-made metal Technetium (element number 43), discovered in 1937 and named, appropriately,
from the Greek technetos, meaning artificial; although Dimitri Mendeleev knew that such an el-
ement must exist when he was putting together the first pictorial representation of the Periodic
Table, he left a gap for it (see Figure 14.2 where the gap is at number 44).
Unfortunately, the modern desire to name elements after scientists has allowed politics and
nationalism to creep into the Periodic Table. During the Cold War, Russian scientists suggested the
name of an eminent Russian scientist for an element they claimed to have discovered, U.S. scientists
suggested the name of a U.S. scientist, and Germans suggested a German scientist, or town. As a
result, the naming of the most recently discovered chemical elements required more international
compromise than classical erudition. Such national disagreements over the name of a chemical el-
ement are unfortunately not new, and can never improve the public perception of science or, more
importantly of scientists. In 1950, IUPAC had to intervene, after almost a century of controversy,
to recommend that the name of element number 41 be Niobium. The first specimen of the ore
containing this element was found in the American Colonies by John Winthrop (1714–1779), pro-
fessor of natural philosophy at Harvard College, but this specimen was sent to England for study.
However, many American institutions continued, after its isolation in 1864, to refer to this element
as Columbium (Cb), after the spirit of America. Nevertheless, it is the name Niobium that is today
universally accepted by working scientists.
14.2 THE TRANSFERMIUM WAR
Given the nature of American-Soviet rivalry in the forty years following World War II, it is
perhaps not surprising that this rivalry carried over into the world of scientific research. Indeed,
given that the particular area of research which interests us here involved the investigation of the
stability of atomic nuclei, and that the research was largely carried out in laboratories better known
for their work on developing nuclear weapons, this Cold War rivalry is probably to be expected.
But there was more to this competition than simple international political posturing; there
was also no small measure of personal vanity.
After all, the naming of new chemical elements is a rare event, unlike the discovery of a new
asteroid or a new comet. And if your name is chosen, you join one of the most hallowed clubs in
all of science: the lucky few who are immortalised by having their name adopted as a chemical
element. Consequently, the names for the chemical elements beyond number 100 were the subject
of a major international controversy starting in the 1960s, described by some nuclear chemists as
the Transfermium War. This controversy was only resolved in 1997, and only by a significant weak-
ening of one of the international organizations created early in the 20th century to permit science
to continue internationally, even when nations were at war.
The controversy arose from disputes between American scientists and Soviet scientists as to
which had first made these particular elements. One cannot be said to have isolated these elements
as Humphry Davy did back in the early 19th century, as they are ephemeral; they are unstable and
their half-life is usually only a few milliseconds, or even a few microseconds. Consider element 112,
Copernicium (112Cn). This element was first observed in the Heavy Ion Research Laboratory (Ge-
sellschaft für Schwerionenforschung, GSI) in Darmstadt (element number 110), Germany (element
number 32), where researchers were attempting to fuse the nuclei of Lead and Zinc (named by the
alchemist Paracelsus after the form of its crystals) atoms. The reaction scheme for the generation
of element 112 and its subsequent decay via neutron and alpha-particle emission is given in the
following scheme (where the superscripted number refers to the atomic weight of a particular iso-
tope of the element, and the subscripted number refers to the element's atomic number):

²⁰⁸₈₂Pb + ⁷⁰₃₀Zn → ²⁷⁸₁₁₂Cn → -(¹₀n) [loses a neutron] → ²⁷⁷₁₁₂Cn → -(alpha) [loses an alpha-particle39] → ²⁷³₁₁₀Ds → -(alpha) [loses another alpha-particle] → ²⁶⁹₁₀₈Hs → -(alpha) [loses another alpha-particle] → ²⁶⁵₁₀₆Sg → -(3 alpha) [loses three alpha-particles] → ²⁵³₁₀₀Fm.

39 An α-particle (alpha-particle) is a doubly-ionized Helium atom; that is, He²⁺ (a bare Helium nucleus) with a mass of 4 amu (or 6.644 657 230(82) × 10⁻²⁷ kg).

The scientists undertaking these experiments observed a single atom of ²⁷⁷₁₁₂Cn on February 9, 1996.
It should be noted that ²⁷⁷₁₁₂Cn decays after 280 microseconds, ²⁷³₁₁₀Ds decays after 110 microseconds,
and ²⁶⁹₁₀₈Hs is relatively long-lived, decaying after 19.7 seconds. These are all ephemeral materials
made in quantities so small they cannot be weighed. The same research group
were the first to observe element 111 by fusing Bismuth nuclei with Nickel nuclei, and were re-
warded by seeing three atoms of the desired product over the period December 8–18, 1994.
By convention, the right to suggest a name for a newly discovered chemical element goes to
its discoverers. However, for elements 104, 105, and 106 there was a controversy between a
Soviet/Russian laboratory and an American laboratory regarding priority. Both parties suggested
their own names for elements 104 and 105, neither recognizing the names suggested by the other
laboratory. This is what brought IUPAC into the debate. In addition, the American name of Sea-
borgium for element 106, chosen by the American Chemical Society to honor Glenn T. Seaborg
(1912–1999) of the University of California, Berkeley (a Nobel laureate chemist who had also been
a science adviser to U.S. presidents during the Cold War) was objectionable to some, because it
referred to an individual who was still alive at the time his name was proposed. Einsteinium (ele-
ment number 99) and Fermium (element number 100) had also been proposed as names for new
elements while Einstein and Fermi were still living, but by the time that the names of these two
eminent physicists were adopted, both scientists were dead. So, there was no precedent at this time
for naming a chemical element after a living person. However, Seaborg wanted this fame while still
living. And this caused serious international tensions; reviving much of the rivalry that had existed
during the Cold War, and with no little Cold War rhetoric.40 In addition, the Soviet Union wished
to name element 104 after Igor Kurchatov (1903–1960), builder of the Soviet atomic bomb, which
was another reason the name was objectionable to Americans.
The two principal groups which were involved in the conflict over element naming were: an
American group at Lawrence Berkeley Laboratory, California, and a Russian group at Joint Insti-
tute for Nuclear Research in Dubna, Russia. And between these two national teams, the referee, or
arbiter, was the IUPAC Commission on Nomenclature of Inorganic Chemistry, which introduced
its own proposal to the IUPAC General Assembly (the Union’s highest decision making body) for
the names of these elements. The German group at the GSI in Darmstadt, who had undisputedly
discovered elements 107 to 109, were dragged into the controversy when the IUPAC Commission
suggested that the name "Hahnium" (in honor of the German chemist Otto Hahn (1879–1968),
who won the Nobel Prize in Chemistry for 1944 and, although opposed to the Nazi Party, had remained
in Germany throughout WWII), a name already proposed for element 105 by the Americans, be
used instead for GSI’s element 108. In short, no national laboratory was happy, and it was in fact a
major blow to the prestige of IUPAC.
In 1994, the IUPAC Commission on Nomenclature of Inorganic Chemistry proposed the
names given in column six of Table 14.2, thus attempting to resolve the international disagreement
by sharing the naming of the disputed elements between Russians and Americans, replacing the
name for 104 with one honoring the Dubna research center, but not naming 106 after Seaborg.
40 The author was, at this time, the Deputy Executive Secretary and editor of IUPAC and saw, read, and heard the
voluminous correspondence and exchanges concerning this sad affair.
However, this solution drew objections from the American Chemical Society on the grounds that
the right of the American group to propose the name for element 106 was not in question, and
that group should have the right to name the element. IUPAC further confused things by deciding
that the credit for the discovery of element 106 should be shared between Berkeley and Dubna,
but the Dubna group had not come forward with a name. Along the same lines, the German group
protested against naming element 108 with the American suggestion Hahnium, mentioning the
long-standing convention that an element is named by its discoverers. In addition, given that many
American textbooks had already used the names Rutherfordium and Hahnium for elements 104
and 105, the ACS objected to those names being used for other elements.
Finally in 1997, the 39th IUPAC General Assembly in Geneva put forward the names given
in the last column of Table 14.2. Professor Glenn Seaborg died in 1999; however, this attempt at
creating a tradition of naming chemical elements after living people has continued with the Russian
nuclear physicist Yuri Oganessian, whose name is given to element 118, Oganesson. Thus, the convention of
the discoverer’s right to name their elements was respected for elements 106 to 109, and the two
disputed claims were shared between the two opponents.
Table 14.2: A summary of the evolution of the names of some of the transfermium elements
Atomic Number | Systematic IUPAC Name | Proposed American Name | Proposed Soviet/Russian Name | Proposed German Name | Suggested by IUPAC in 1994 | Final Name (IUPAC 1997)
104 | unnilquadium | Rutherfordium | Kurchatovium | - | Dubnium | Rutherfordium
105 | unnilpentium | Hahnium | Nielsbohrium | - | Joliotium | Dubnium
106 | unnilhexium | Seaborgium | - | - | Rutherfordium | Seaborgium
107 | unnilseptium | - | - | Nielsbohrium | Bohrium | Bohrium
108 | unniloctium | - | - | Hassium | Hahnium | Hassium
109 | unnilennium | - | - | Meitnerium | Meitnerium | Meitnerium
This modern personality cult is inappropriate and inherently nationalistic, laying itself open to
political problems. It was a lot simpler, and more appropriate, when the names of mythological char-
acters or names derived from chemical properties were used for the elements. Myths and legends are
the common heritage of all mankind and tell us, by analogy, more about the element, for example the
chemical affinity between Niobium and Tantalum, than do Fermium or Nobelium, which were never
associated with Enrico Fermi or Alfred Nobel. And unlike Niobium—a relatively common, naturally
occurring, element whose salts are key materials used in modern electronics—element 106 has a
half-life of a few hundred microseconds and will only ever be available in the minutest of quantities.
The names Iridium, derived from the Latin iris, meaning rainbow, as exemplified by the colored salts of
this element, and Iodine, from the Greek iodes, meaning violet, both impart chemical information.
Likewise with Antimony, derived from the Greek anti monos, "a metal not found alone": the savants
of the Ancient World are telling us that this element is unreactive enough to be found as a native
metal, but always associated with its chemically similar neighbors in the Periodic Table. This is
quite a lot of information for the Ancient World (Antimony salts were used as cosmetics by the
Ancient Egyptians). In comparison, the names Tennessine, Nihonium, Hahnium, and Meitner-
ium tell us nothing and create confusion because we would need to consult textbooks of history,
or an English–Japanese dictionary to identify the origins of their names, let alone their discoverers.
Berzelius refused to name elements after people; when the discoverer of Tungsten (element number
74), Carl Wilhelm Scheele, was to be immortalized in the name of this new element, Berzelius
remarked “The immortality of our compatriot does not need this support.” Thus, today we have Tungsten
and not Scheelium.
Figure 14.3: The well-known Periodic Table of the Elements (source: https://en.wikipedia.org/wiki/
Periodic_table). Hydrogen (H) is element number 1; Uranium (U) is number 92; Iron (Fe) is in the
middle at number 26. Compare with the image in Figure 14.1; it took two centuries for scientists to
shake off the last vestiges of alchemy.
However, many modern scientists prefer the names of scientists as labels for the chemical ele-
ments. Unfortunately, the choice of scientist to be so honored is arbitrary and illogical—why choose
Lawrence, who was not a chemist, while Humphry Davy, who discovered eight elements, and Fred-
erick Soddy, who discovered and explained the existence of isotopes, have not been so honored?
If only truly great scientists are to be so honored, why not Newton, Maxwell, Faraday, or Galileo?
14.3 FURTHER READING
The International Union of Pure and Applied Chemistry (IUPAC) was founded in 1919 by chem-
ists from industry and academia who recognized the need for international standardization in their
area. As I have pointed out in this volume, the standardization of weights, quantities, names, and
symbols is essential to the successful advance of the scientific enterprise, and to the smooth devel-
opment and growth of international trade and commerce.
IUPAC is the authority on chemical nomenclature and terminology, and two IUPAC bodies
take leading roles in this activity: Division VIII—Chemical Nomenclature and Structure Repre-
sentation and the Inter-divisional Committee on Terminology, Nomenclature, and Symbols (see
https://iupac.org/ for more details). As one of its major activities, IUPAC develops Recommenda-
tions to establish unambiguous, uniform, and consistent nomenclature and terminology for specific
scientific fields, usually presented as: glossaries of terms for specific chemical disciplines; definitions
of terms relating to a group of properties; nomenclature of chemical compounds and their classes;
terminology, symbols, and units in a specific field; classifications and uses of terms in a specific
field; and conventions and standards of practice for presenting data in a specific field. Information
on chemical terminology can also be accessed through the IUPAC Color Books, which may be
consulted on-line at https://iupac.org/what-we-do/books/color-books/.
CHAPTER 15
The Evolving Science of History
Social media: Websites and applications that enable users to create and share content or to par-
ticipate in social networking.
The premise of this volume is that scientists (but not social scientists) have a worldview that is
different from that of non-scientists, particularly, for example, historians. Whether this is a good
thing or a bad thing, given the predictive power of science, it is irrelevant. Science is in the world,
and it cannot be removed. We have seen how science has evolved over the last few millennia, how
it has extended human life, and how it is capable of extending it a lot further. There is no problem
that has arisen from some aspect of the misuse of science that cannot be corrected by application of
more science. This is true whether we are considering nuclear power, or the influence of man-made
greenhouse gases in the Earth’s atmosphere. Our errors catalyze future progress.
Ordinary people may not understand science, particularly, the physical sciences, and they may
also be in awe and fearful of the power of science and of the scientist, but it is to science and the
scientist that politicians must turn when they need a solution to a technical (non-social or non-po-
litical) problem. After all, there is no one else to turn to; and science has already transformed the
world and society on a number of occasions. For example, the rise of science and medicine in the
early-modern age of the 17th century; the Victorian Internet of the international telegraph (Wil-
liam Thomson became Lord Kelvin for his invention of the equipment needed to lay conducting
cables between the UK and the U.S.); atomic power; genetic medicine; and the creation of the mod-
ern Internet.41 These things cannot be undone; scientific discoveries once made cannot be forgotten.
Society will have been transformed, and if the technical details of an advance become lost, then it
will survive in the form of legend and myth, which will catalyse its re-invention. History waits for
no one; it is always on the move, somewhere. The Second Law of Thermodynamics tells us that the
arrow of time can only point in one direction—into the future. Scientific advances are the ratchets
in the mechanism of history that prevent history, and social advance running backwards.
But what of history, that subject that fascinates all thoughtful individuals? Anyone who has
read, at least, two history texts by different authors on the same period of history will know that
historians can be maddening people. Yet asked if history has a pattern or a plan, they will usually
assert that such a question is offensive; history is but the record of the chaotic events that come
about because people’s hopes and ambitions are invariably modified by external circumstances—
41 Question: how did we organize holidays and trips to the theater before the invention of the Internet?
160
often the hopes and ambitions of other people. As Edward Gibbon put it, “History is but the record
of crimes and misfortunes.”
You will likely be told that an historian's job is not to find a pattern, let alone a fundamental
law, but to ensure by research that the record is accessible and intelligible. There are, however, var-
ious schools of historians. There are Marxist historians, who look preferentially for patterns in the
data of events, which they believe reveal the signature of the perpetual struggle between the prole-
tariat and their economic masters. Other historians hold to the idea of progress, or the notion that
the improvement of the human condition in recent centuries can, with ingenuity be extrapolated
into the future. Physical scientists who have an interest in history often fall into this camp. Without
realizing it, however, these physical scientists are adopting the ideas of the science fiction writer,
Isaac Asimov (1920–1992), best expressed in his invented science of psychohistory: that the future
may be predicted, if only we had sufficient data and powerful enough computers.
Irrespective of the desires of even the most ardent of anti-scientists, whether their dislike of
science arises from their religion, their politics, or their limited education, science is here to stay. As
an example of the now unavoidable influence of the mathematical or physical sciences on society,
let us consider the mining of personal data on social media. This study will also demonstrate how
techniques of mathematical physics are used to analyze a set of data. This is a topic that will likely
remake our society over the next generation. It will certainly do away with conventional politics;
there will still be elections in the future, but there will be no need to go to a polling office to vote.
We will all like or dislike a particular politician, or a particular proposal for a law on social media.
15.1 SOCIAL MEDIA
While it may seem premature to associate history with something as new and vibrant as Facebook,
it must be commented that a great many people, of a variety of political views, religious persuasions
and commercial interests are sifting through (that is, mining) the data we have all (well over 2.5
billion of us… and increasing) carelessly left littering social media.
Before writing was invented, there was only one way for individuals to leave behind a
record of themselves for future generations: they would place a hand on the cave wall and blow a
mouthful of pigment over it, leaving behind a stenciled handprint. It was a successful strategy, as a
great many of these haunting cave paintings survive. With time, our civilization evolved, as did an
individual’s ambitions and desires. We have now become amazingly adept at recording our lives. We
have built mausoleums and libraries, and filled those libraries with books; we have written books
of history and compiled sophisticated records of our ancestors. Then a college student developed
an extraordinarily simple and useful tool to convey our personal histories, and interests to future
generations—Facebook.
Historians have always struggled to tell the stories of our everyday ancestors, even those who
lived only a few generations ago. Historical records offer great insight into a handful of important and
powerful people, but piecing together the lives of ordinary people has always been difficult. Facebook
changes all this. In little more than a decade, Facebook’s users have contributed to a massive depository
of personal information that documents both our reactions to events and our evolving customs with a
scale and intimacy earlier historians could only dream about. It’s hard to estimate just how substantial
this database of personalities could become. Presently, more than 2.5 billion people are regular users
of Facebook. Assuming people will continue to use the site regularly, this means most of these users
will document more of their lives over the coming years, leaving behind photos, details of friendships
and love affairs, their likes and dislikes, and their reactions to news-worthy events. In addition, there
are tens of millions of deceased Facebook users; individuals who have left behind digital remains. It
is estimated that if Facebook stopped growing tomorrow, the number of deceased users “on” (perhaps
one should say “in”) the site would be well over a billion by the end of the century. If the site were to
continue growing as it is now, that number could reach about 5 billion users by 2100. All this data,
this information is there to be mined and exploited—for whatever reason.
Today, these “dead” accounts offer a virtual environment for mourning. However, the data
they contain will be invaluable to future historians and sociologists. They will be able to investigate
the events surrounding the election of Donald Trump, and the online culture wars that facilitated
and followed this election. Similarly, with the great saga of Brexit, future historians will be able
to study what ordinary people thought about Brexit and the politicians who desired Brexit, rather
than merely what the politicians who desired Brexit thought about Brexit. Of course, future re-
searchers will see lots of pictures of dead cats and puppies, of homemade cakes, and Game of Thrones
memes with which users distracted themselves from the daily grind of work. But, this potential
utility complicates, rather than resolves, the problems of security already plaguing Facebook: How
much privacy do the dead deserve? How do we guarantee that those who wish to be forgotten are
allowed this right—is it a right? This matters because, for the first time, we all have the power to
leave behind far more personalized histories than any previous generation. We don’t have to rely
on the recollection of our descendants for our memory to survive, and we don’t have to accept that
our collective experiences will fade away with time. This is our chance to leave far more than our
handprints on the digital walls of history.
Let us now look briefly at the possible misuse of the data of the living, and of the dead. What
is happening to our data, to our likes and dislikes, our loves and hates that we all leave all over social
media? If I, for example, like a friend’s post about what he and his boyfriend did on the weekend,
who outside of a list of our mutual friends looks at this information, and for what purpose may this
information (data) be used by parties unknown to anyone actually involved? If these data miners
restricted themselves to collecting recipes for chocolate cookies, there would be no problem, but it
appears that this is not what data mining on social media is all about.
15.2 SOME DETAILS OF THE ANALYSIS OF PERSONAL DATA
ON SOCIAL MEDIA
The first study of Facebook data, that is, data mined from the personal profiles of Facebook users,
was based on a sample of 58,466 volunteers (thus in Table 15.1, N = 58,466) from the U.S.; this
data was obtained through the myPersonality Facebook application (no longer available), which
included their Facebook profile information, a list of their likes (with, on average n = 170 likes per
person; totalling about 10 million likes, and one assumes dislikes), psychometric test scores, and
survey information. Users and their likes were represented as a sparse user–like matrix, the entries
of which were set to 1 if there existed an association between a user and a like, and to 0 otherwise.
The modelers selected a wide range of traits and personal attributes from users that clearly re-
veal how accurate and potentially intrusive such an analysis could be; these characteristics included:
sexual orientation, ethnic origin, political views, religion, personality, intelligence, satisfaction with
life, substance use (alcohol, drugs, cigarettes), whether an individual’s parents stayed together until
the individual was 21, and basic demographic attributes such as age, gender, relationship status, and
size and density of the friendship network. This data was then represented in the form of a matrix
(see Table 15.1), and standard statistical methods of analysis were used for their manipulation.
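As a minimal sketch of this representation (the user identifiers and page names below are invented for illustration and are not drawn from the myPersonality data), such a sparse user–like matrix can be assembled as follows:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Invented users and "likes" for illustration only (not the myPersonality data).
likes = {
    "user_1": ["Art", "Christianity", "Mickey Mouse"],
    "user_2": ["Brexit", "Donald Trump"],
    "user_3": ["Art", "Donald Trump", "Mickey Mouse"],
}

users = sorted(likes)                                     # row labels
pages = sorted({p for ps in likes.values() for p in ps})  # column labels
row = {u: i for i, u in enumerate(users)}
col = {p: j for j, p in enumerate(pages)}

# Entry (i, j) is 1 if user i is associated with like j, and 0 otherwise.
coords = [(row[u], col[p]) for u, ps in likes.items() for p in ps]
rows, cols = zip(*coords)
M = csr_matrix((np.ones(len(coords)), (rows, cols)), shape=(len(users), len(pages)))

print(pages)
print(M.toarray())
```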
Table 15.1: A representation of the raw data mined from social media sites such as Facebook. The
N users are represented as the first column, and the various characteristics of each of these N users
form the columns of the table or matrix; for example, the user either likes (1) Mickey Mouse or
he/she dislikes (0) Donald Trump. There can be as many columns as there are data-fields available
for personal traits and characteristics: ethnicity, sexual orientation, religion, etc. [1]
Columns (one per like): Likes Art; Likes Gay-sex; Likes Christianity; Likes Evangelical Christianity; Likes Donald Trump; Likes Brexit; Admits to voting for Brexit; Likes drugs; Likes demonstrating; Likes Mickey Mouse
Facebook user 1: 0 0 1 1 1 0 0 1 1 0
Facebook user 2: 0 0 1 0 0 0 0 1 1 1
…
Facebook user N: 0 1 0 1 1 0 0 1 1 1
Matrix decomposition, also known as matrix factorization, is the standard initial procedure
in the analysis of any large body of data from which researchers wish to make predictions. Perhaps
the best-known and widely used matrix decomposition method is the singular-value decomposition
(SVD). All matrices have an SVD, and as such it is used in a huge range of applications. The SVD
takes a rectangular matrix of, for example, likes and dislikes from a selection of users of Facebook,
defined as M, where M is an m by n matrix (see Figure 15.1) in which the m rows represent the N
social media users being studied, and the n columns represent those users' likes and dislikes (see
Table 15.1). The SVD theorem states:
M = U Σ V*
Here, U is an m by m unitary matrix; Σ is a rectangular diagonal m by n matrix of singular values;
V is an n by n unitary matrix, and V* is the conjugate transpose of V. The SVD represents an expansion
of the original data in a coordinate system in which the covariance matrix is diagonal and the data
are therefore more amenable to statistical analysis.
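As a minimal sketch (a tiny dense matrix with invented 0/1 entries; a real analysis of the full user–like matrix would use a sparse, truncated decomposition), the SVD is a single library call in, for example, Python:

```python
import numpy as np

# A tiny dense user-like matrix M (rows = users, columns = likes); 0/1 values invented.
M = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
], dtype=float)

# Full singular-value decomposition: M = U @ Sigma @ Vh, with U (m x m),
# Sigma (m x n, diagonal) and Vh = V* (n x n).
U, s, Vh = np.linalg.svd(M, full_matrices=True)
Sigma = np.zeros(M.shape)
Sigma[:len(s), :len(s)] = np.diag(s)

assert np.allclose(M, U @ Sigma @ Vh)  # the decomposition reconstructs M
print("singular values:", s)
```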
Figure 15.1: Details of the matrix algebra involved in the singular-value decomposition by which the
raw data of our likes and dislikes on social media are turned into a set of linear equations, from which
the likelihood of our “behavior” may be calculated. (The lowercase, italicized letters denote the type of
matrix—symmetric or non-symmetric.)
Having now prepared the data in a standard statistical format, it is possible to make predic-
tions about the group under study; the users of social media such as Facebook, as a representation
of humanity in general, or just the voting population of the U.S. or the UK. In statistical model-
ing, regression analysis is a set of statistical processes used for calculating the relationships among
variables. It includes many techniques for modeling and analyzing several variables, when the focus
is on the relationship between a dependent variable and one or more independent variables (or
predictors). More specifically, regression analysis helps one understand how the typical value of the
dependent variable changes when any one of the independent variables is varied, while the other
independent variables are held constant.
Let us consider an example of such an analysis that could be readily applied to data available
on social media: the influence of education upon voting intention. To obtain the raw data we take a
large sample of individuals of similar age and ask them how long they spent in full-time education
and what are their political likes and dislikes. Clearly, when graphed such raw data will generate
a plot with an enormous range of uncertainty, or scatter. But the question of interest is: Is there a
relationship between education level and voting pattern? The scatter of points on the graph may
suggest that people with higher values of education tend to follow a more liberal voting pattern,
but the relationship is not perfect; and it would be clear that knowledge of education level does not
suffice for an accurate prediction of voting intention. To apply regression analysis to this particular
problem, and to obtain a smoother curve of the data when plotted (for better predictions) requires
that we first hypothesize that voting intentions for each individual are determined by education,
and by a collection of other factors (race, locality, sex, sexuality, profession, etc.) that we term “con-
tributing noise.” This noise is the background against which one is trying to derive a clear cause and
effect relationship, upon which to base extension and prediction.
First, we write a hypothesized model relationship (a linear regression model) or equation
between education level (E) and voting intention (I, where a particular numerical value, or range
can be correlated with right-wing or left-wing voting intentions) as, for example, I = α + βE + ε,
where α is a constant (how one would vote with zero education), β is the coefficient relating how an
additional year of education influences voting intention (assumed to be either positive or negative),
and ε is the noise term representing other factors that influence voting intention (for example, age,
postal address, religious beliefs, etc.). The parameters α and β are not observed, and regression anal-
ysis is used to produce an estimate of their value, based upon the information contained in the raw
data set; for example, in Table 15.1. What we have hypothesized is that there is a straight line, or
linear relationship, between E and I. Thus, somewhere in the cloud of data points derived from our
social media user-like matrix we expect to find a line defined by the equation I = α + β E + ε. The
task of estimating α and β is equivalent to the task of estimating where this line is located relative
to the axes, I and E. The answer depends in part upon what we think about the nature of the noise
term, ε. To estimate ε, one would look at the solutions (eigenvalues) of the matrix of likes and dis-
likes for various groups of individuals. The predictive modeling undertaken by organizations such
as Cambridge Analytica (now bankrupt) involved fitting various models of correlations (such as
education, age, or sexuality with voting intention, or liberal sentiments, or lack of liberal sentiments)
using standard techniques of linear regression. Variables such as age or intelligence were predicted
using a linear regression model, whereas dichotomous variables such as gender or sexual orientation
were predicted using logistic regression [2].
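As a rough, self-contained illustration of the two kinds of fit mentioned above (and not the actual pipeline used by Cambridge Analytica or anyone else), the sketch below estimates the hypothesised linear model I = α + βE + ε by ordinary least squares with NumPy, and fits a logistic regression for a dichotomous trait with scikit-learn; all numbers are invented.

```python
# A rough illustration of linear and logistic regression on invented data;
# it is not the method of any real organisation. NumPy and scikit-learn assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Years of education (E) and a numerical voting-intention score (I) with noise (epsilon).
E = rng.uniform(8, 20, size=200)
I = 2.0 + 0.3 * E + rng.normal(0.0, 1.0, size=200)

# Ordinary least squares for I = alpha + beta * E + epsilon.
design = np.column_stack([np.ones_like(E), E])
(alpha_hat, beta_hat), *_ = np.linalg.lstsq(design, I, rcond=None)
print(alpha_hat, beta_hat)

# Logistic regression for a dichotomous trait predicted from the same variable.
trait = (I + rng.normal(0.0, 1.0, size=200) > 6.5).astype(int)
clf = LogisticRegression().fit(E.reshape(-1, 1), trait)
print(clf.predict_proba(np.array([[12.0], [18.0]])))  # trait probability at 12 and 18 years
```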
So how close are we to a precise science of prediction based on the analysis of the personalities of individuals comprising large groups? To my mind, the present furore over the actions of the now defunct company Cambridge Analytica in the UK and the U.S., and of Facebook, in seeking to gather personal data on hundreds of millions of individual voters in order to target them with the most appropriate publicity material, so as to influence elections and thereby direct the course of history, is merely a first overt attempt at creating something akin to Isaac Asimov’s psychohistory. But perhaps it was premature: it only just worked, and it has been exposed to public scrutiny and condemnation, which was probably not the intention. Perhaps there are not yet enough individuals on social media; we are still a long way from Asimov’s assumptions about the appropriate size of the group being examined and modeled. But this new form of democracy is only in its infancy; it works, and that is a great stimulant for further development. Intrusion into one’s Internet-space is happening everywhere, and will only become worse.
Sadly or happily, depending upon one’s point of view, we have a long way to go before we
can approach the perfect state required for a true statistical modeling of society. The transition from
the non-statistical behaviour of individual molecules (or individual humans in Asimov’s fiction) to
the more mathematically friendly statistical behaviour of large groups of molecules, i.e., solids and
liquids (or human society in Asimov’s fiction), is not so easily identified. However, the American
historian Henry Adams (1838–1918), the grandson and great-grandson of American presidents,
attempted such a modeling of human history at the beginning of the last century. In his Degrada-
tion of the Democratic Dogma (1919), Adams proposed two laws of history: All civilization is central-
ization and All centralization is economy. It is difficult to find fault with the first law; however, the
second law says that resources, particularly energy sources, must be adequate to sustain the energy
needs of the civilization or empire. Therefore, all civilization is the survival of the most economic
system; for example, the nation that has an ample source of energy (coal, oil, gas, nuclear, etc.) and
is able to control access to all major sources of energy for all other nations will necessarily dom-
inate the world. There is a strange closeness between physics and history; a closeness that always
moves out of focus when you seek to examine it in detail. In both physics and history, all is cause
and effect; in history as in physics, there is no action without a reaction. The problem is that in any
predictive, quantitative estimation derived from history and from physics, the error bars are larger
for the former than for the latter.
15.3 FURTHER READING
1. The full details of such analyses are to be found in Private traits and attributes are predictable from digital records of human behavior; Michal Kosinski, David Stillwell, and Thore Graepel; Proceedings of the National Academy of Sciences (2013); 110(15): 5802–5805.
2. This topic is discussed in detail in any textbook on the statistical analysis of experimental data; for example, Linear Algebra and Matrix Analysis for Statistics (2014); Sudipto Banerjee and Anindya Roy; Texts in Statistical Science; Boca Raton, FL: Chapman and Hall/CRC Press. And Quantifying Measurement: The Tyranny of Numbers; Jeffrey H. Williams (2016); San Rafael, CA: Morgan & Claypool, and references therein.
CHAPTER 16
Obfuscation: Why Are We Not
Living in a New Golden Age?
Where is the wisdom we have lost in knowledge, and the knowledge we have lost
in information?
T.S. Eliot, What is a Classic?;
Presidential address to the Virgil Society of London, 1944.
In this book, I have tried to show that the evolution of science was an attempt to classify, understand,
and contain Nature. Originally, this involved the creation and memorising of long lists of natural
events, things and phenomena; then we abandoned the lists and looked for the coherences that
existed behind the observations. We searched for the rule or universal law that gave rise to, or led
to, the observed thing or phenomenon. Why man wished to understand his environment is simply
stated; he wished to protect himself and his extended family, or tribe, in a hostile and indifferent
world. We invented religions and social systems to further enable this protection. Science, as we
know it today, grew out of the failure of religion and magic, that is, pre-scientific natural philosophy,
to explain and predict natural events. Science worked on a regular basis, unlike magic and religion
that only worked on a statistical basis. But this triumph of science has not eliminated magic and
religion; they are still with us today, and in the case of religion still play an important role in the
stability of the wider society.
All this effort to create a scientific worldview was directed toward improving the lot of hu-
manity. Men imitated the divine lawgivers in our mythologies and religions, demonstrating that
religion always develops before science in an evolving society, and thereby formulated the idea of
Laws of Nature, which we all had to obey. Scientists believed that a codification and explanation of
natural laws would help man return to that Golden Age when everything was perfect; when men
were at peace with each other, and at one with Nature. This search for a better world goes on. The
ultimate discovery, the Theory of Everything, should enable man to cease struggling against Nature,
as all would finally be revealed; there would be no questions left unanswered. Figure 11.2 shows
that we are well along on the roadmap to the Theory of Everything. So, the question we must now
ask is why are we not living in a new Golden Age? What has gone wrong? Or, are we living in a
Golden Age, and we just haven’t noticed?42 What happened to all that optimism and enthusiasm
that gave birth to the first tentative steps of the great endeavor of science?
16.1 THE SCIENCE WARS
Sadly, it is true to say that not everyone wishes to be liberated from their narrow, limited, obscure
way of looking at Nature. It is not universally assumed that scientists have the only correct way of
interpreting the world we see around us. The fractious disputes or disagreements between scientists
and some non-scientists as to who has a monopoly on objective truth have always generated a lot
of heat. Scientists, particularly physical scientists, find sociologists, historians, and “critics of culture”
tedious in their attempts to rub the gloss off the enterprise of science. In good relativistic fashion,
non-scientists usually look askance at scientists’ claims of an absolute objectivity and the ultimate
truth, and regard scientists in the same way they would regard any other anthropological group.
Social scientists often regard scientists as a tribe whose structures of supposed authority, of peer
review as a means of maintaining their integrity, and of training, initiation, and peer-acceptance
should be subjected to the same levels of criticism as would be the behaviour of any tribe of non-in-
dustrialised people to be found in the Amazonian jungle. Needless to say, this does not go down too
well with the professional tribe who have given us atomic power, aeroplanes, antibiotics, weapons
of mass destruction, mobile phones, spaceflight, and genetic engineering. But then to contemporary
cultural critics, such as the French philosopher and literary theorist Jean-François Lyotard, science
is nothing more than another grand narrative, with a structure just like, and no better than, history,
sociology or semiotics. That is, science should be treated as an object of study in itself, not as an
enterprise that provides a transcendental view of Nature.
Scientists (especially physicists) will, however, tell you that their view of Nature is the real,
or the most real view of the world around us, that it is possible to gain. Scientists like to call them-
selves realists, but even some of the greatest of physicists have had doubts as to the transcendental
viewpoint of the scientist. Niels Bohr, one of the founders of quantum mechanics, said that “It is
wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about
nature.” Nothing transcendental here; description, rather than explanation. This is a more realistic
viewpoint; but, unfortunately, not one shared by all scientists.
So, when it comes to understanding the world in all its wonderful complexity, should we be-
lieve our ardent, zealous physical scientists, or the majority of mankind? The philosopher of science
Karl Raimund Popper (1902–1994) held that all science provides are hypotheses that have, so far,
defied attempts to falsify them. Of course, this argument does not take account of the “trust” we
put in the technological products of science. We design things not in accord with a fundamental
42 Granted we have yet to find the Theory of Everything, but we are well on the way to glimpsing its spectral form
in extreme events in Space and here on Earth, as in the discovery of gravitational waves and of the Higgs boson.
principle of a theory, but with a theory that has survived rigorous examination, and which we fully
expect to continue to survive ever more detailed examination. When you agree to have major sur-
gery, you expect to wake up cured, and when you buy an airline ticket to Australia, you expect to
get there, and so you buy a return ticket.
Modern medicine is successful, because it is based upon the scientific method of observation,
hypothesis and testing. When we design jet engines, we do so in the context for which they are
intended. In saying these things, we are not able to step outside our skins, or outside the Universe
and attempt a majestic, dispassionate, transcendental examination. We are merely repeating sci-
ence’s own explanation of events and observations. There is a world of difference between observing
and recording something, and an a priori explanation of a phenomenon. There is, in fact, no getting
behind the explanation of science, because we are in the world we describe, we are part of it and
cannot stand outside that world. As modern quantum mechanics tells us, the experimenter is part of
the experiment; Schrödinger’s cat again. A sociologist, by contrast, does often stand outside of society
and deliver himself or herself of sweeping generalisations.
16.2 ANTI-SCIENCE
The problem between scientists and non-scientists is, today, often termed anti-science; this is a term
that has arisen in our contemporary world of alt-truth and climate change sceptics. Anti-science is
an extreme form of a suspicion of science; a position that rejects science and the scientific method.
People holding antiscientific views do not accept science as an objective method that can generate
anything of use to them—let alone universal knowledge. The more thoughtful contend that scien-
tific reductionism is an inherently limited means to reach an in-depth understanding of a complex
world in continuous evolution.
At the beginning of the scientific revolution, proto-scientists or savants such as Robert
Boyle found themselves in conflict with non-practical thinkers, such as Thomas Hobbes, who were
skeptical as to whether or not science was a satisfactory way of arriving at real, or genuine knowl-
edge of the world. Hobbes’ stance is sometimes regarded as an early anti-science position. And we
saw in Chapter 4, that this disagreement between the experimental scientist and the rationalist is
not a new phenomenon; it goes back to the Taoist Sages of the Warring States Period of Ancient
China. We also saw that it was nature mysticism that allowed the experimental sciences to over-
come the opposition of the dogmatic theologians, theoreticians, and philosophers.
However, in our modern world, Nature and a study of Nature is often invoked by those
opposed to science. Perhaps it is in the world of artistic inspiration that we find some of the most
thoughtful, and therefore useful (perhaps even persuasive) arguments for anti-science. The poet and
mystic, William Blake (1757–1827) reacted particularly strongly against the ideas of Isaac Newton
in his paintings and writings, and is seen as being perhaps the earliest, and almost certainly the most
prominent and enduring example of what is seen by historians as the aesthetic, or romantic, response
against science. In Blake’s poem Auguries of Innocence, the poet describes that beautiful
exemplar of Nature, the robin redbreast imprisoned by the materialistic cage of Newtonian mathe-
matics and philosophy. In Blake’s painting (1795) of Newton (Figure 16.1), Newton is depicted as
a misguided hero whose attention was only directed to the drawing of sterile, geometrical patterns
on the ground, while the beauty of Nature was all around him; as Blake put it, “May God us keep
/ From single vision and Newton’s sleep!” Blake thought that Newton, Bacon, and Locke with their
emphasis on mechanistic reasoning were nothing more than “the three great teachers of atheism, or
Satan’s Doctrine.” Blake’s painting of Newton progresses from exuberance and color on the left-side,
to sterility and darkness on the right-side. In Blake’s view, Newton brings not light, but night. In a
poem, W.H. Auden summarizes Blake’s anti-scientific views by saying that he “[broke] off relations
in a curse, with the Newtonian Universe.” But Newton was a complex, universal personality. As we
saw in Chapter 1, Isaac Newton was as much at home in the Kabbalah, and other such metaphysics
as he was in classical mechanics, but this was not widely appreciated in Blake’s day.
Figure 16.1: William Blake’s Newton (1795) demonstrates his opposition to the “single-vision” of
scientific materialism. Image from: https://en.wikipedia.org/wiki/William_Blake#/media/File:Newton-WilliamBlake.jpg.
Issues of anti-science are best seen in the context of the ongoing transition from pre-science,
or proto-science, to present-day science. This is what we spoke about in the continued
use of the I Ching and Astrology as means of divination, and as limited models of the complexity
of the natural world. This same argument is evident in the evolution of alchemy (a mystical and
an experimental art) into purely functional experimental chemistry. Many disciplines that pre-date
the widespread adoption and acceptance of the quantitative scientific method (early 18th century
in Europe), such as mathematics and astronomy, are not seen as anti-science. However, some of the
orthodoxies within those disciplines that predate a scientific approach, for example, those orthodox-
ies repudiated by the discoveries of Galileo, are seen as being a product of an antiscientific stance.
Of course, an ardent or zealous belief in the central importance, the universality and the unfailing
potential of science can be considered as a new religion. A religion in which scientists would be the
priestly caste. But this would be a religion based upon reproducible miracles. And if everyone could
see and benefit from a miracle, then all would become believers.43
The derogatory term “scientism” derives from the study of science, and is a term invented and
used by social scientists and philosophers of science to describe the views, beliefs, and behavior of
strong supporters of science; those who speak of the scientific triumphalism of the late 19th
century. It is commonly used in a pejorative sense for individuals who seem to treat science
in a manner similar to that used by believers in their particular religion.44
Of course, it is often the case that a difference arises between scientists and non-scientists
because of a difference of perception. Some supporters of anti-science may have presented
unreal images of science that threaten the believability of scientific knowledge, or appear to have
gone too far in their anti-science deconstructions. The question often lies in how much scientists
conform to the standard ideal of communalism, universalism, disinterestedness, originality, and
skepticism. Unfortunately, scientists don’t always conform; scientists do get passionate about pet
theories; they do rely on reputation in judging another scientist’s work; they do pursue fame and
fortune via research. Thus, they may show inherent biases in their work. Indeed, many scientists
are not as rational and logical as legend would have them, but then neither are they as illogical or
irrational as some supporters of anti-science might say. We are all human.
A point of contention often presented by supporters of anti-science involves the inappropri-
ate, or inadequate nature of the mathematical models used to model real systems. That these models
do not capture the full reality of existence. Scientists would be told that the formulae of mathemat-
ical models are artificial constructions, logical figments with no necessary relation to the outside
43 This is the premise behind a great deal of science fiction, particularly, the novels of H.G. Wells, such as The War
in the Air of 1908 and his writings such as The Shape of Things to Come of 1933, which were turned into the
1936 movie, Things to Come.
44 Thomas Henry Huxley (1825–1895) was an English biologist and anthropologist specialising in comparative
anatomy. He is known as “Darwin’s Bulldog” for his aggressive advocacy of Charles Darwin’s theory of evolution,
an ardent advocate, who would not have been out of place in a mediaeval search for heretics.
world. That such models always leave out the richest and most important part of human experience:
daily life, history, human laws and institutions, the modes of human self-expression. That these
models fail to appreciate the subtle complexity of the social world; so a great deal of what is best
in society is excluded from the model, which, not surprisingly, only generates oversimplifications.
A great deal of this criticism is true—our models of the world are limited. But they are limited
by the ability of present-day computers to solve the equations that describe the model; that is, the
models we use are limited by the present limitations of technology. Today’s computers are fast, but
the computers of the mid-century will be a lot faster, and so better for solving the types of complex
social problems that supporters of anti-science criticise scientists for not solving—just be patient.
This difficulty in communicating the evolving interaction of science and the wider society
is at the heart of the problem of the public understanding of science, and the vanquishing of
anti-science. For my part, I continue to believe in science, and that science has done incalculably
more good than harm to mankind. I have not lost faith in science as part of the highest civilization,
and its development as one single epic story for all humanity. It might have been the American
taxpayers who paid for the journey of the first men to the Moon half a century ago, but it was all
mankind who exulted in the achievement (see Figure 16.2).
16.3 THE LIMITATIONS OF THE ENLIGHTENMENT
Supposedly, the Enlightenment of the 18th century was the moment when the light of reason was
focused into the obscure corners and occult recesses of the human mind. Yet, the 19th century saw
an extraordinary revival of interest in magic, spirituality, and religion. True, there was also the great
synthesis by James Clerk Maxwell of electromagnetism, and the first steps toward the triumphs
of 20th-century physics, but why was it that so much enchantment survived the Enlightenment’s
rational examination? Why was it that the Enlightenment’s spirit of “daring to know” failed? This
question is at the heart of why it is that man has not been translated into a new Golden Age by the
extraordinary achievements of science over the three centuries since science came into its own. The
German philosopher Immanuel Kant, who was the man who told us to “dare to know,” also said
that man, because of his reason, was fated to propose and worry about questions he could never an-
swer or dismiss. Kant regarded this fruitless search after mysteries to be an aspect of human reason.
An irritating aspect that may be deflected or ignored, but cannot be fully denied.
The Enlightenment’s quest for any truth that may have been hidden within the occult sci-
ences was not a search for magic per se. It was a process of copying the scientific investigations that,
in the fields of the natural sciences, were yielding new, useful, verifiable discoveries in Nature. The natural
world was seen to be yielding up her secrets to the empirically minded scientist. Those 18th-century
occultists and Neo-Platonists would have considered themselves as men of science. It was just
that the sciences they pursued were not limited to chemistry, physics, botany, etc., but also included
necromancy, alchemy, and magic. Everything was being studied; the practitioners were daring to
know everything, and by so doing gave to the more esoteric and recondite subjects a veneer of re-
spectability. The Book of Nature had no forbidden chapters. The rationality of the Enlightenment
collapsed into a myth of the type that rationality was intended to banish. Physics became mixed up
with metaphysics, and it was Isaac Newton who had warned us to avoid such a mixing, even though
his Neo-Platonic outlook told a very different story.
Figure 16.2: Neil Armstrong (1930–2012) became the first human to step onto the surface of the Moon
(image from: https://en.wikipedia.org/wiki/Moon_landing#/media/File:Apollo_11_first_step.jpg). He
was Commander of NASA’s Apollo 11 mission, and is here seen descending the ladder of the Apollo
Lunar Module to step onto the Lunar surface. During this descent he spoke one of the most celebrated
of all phrases; a phrase that still inspires and transcends tedious earth-bound politics. The video of this
event, which the author watched live on TV as an impressionable 13-year-old, may be found at: https://
commons.wikimedia.org/wiki/File:Apollo_11_Landing_-_first_steps_on_the_moon.ogv.
It was in the last century that we witnessed the most significant re-appraisal of our well-es-
tablished way of looking at Nature. At the dawn of the 20th century, we began to finally get to
grips with the question of the nature of light, which is just as well, as it is via light that we perceive
the world around us, and so begin to disentangle our perceived sensations from our real and
imaginary fears. Physics had shown that light could be thought of as a wave when it propagated
freely, yet could also be analyzed successfully if it were considered a stream of tiny particles. Which
was true? Was light continuous, or was it particulate? The same could also be said for electrons, so
what was the relationship between electrons and light?
Such debates led to the creation of quantum theory, and then the rationalization of quantum
theory by Paul Dirac (1902–84) and Werner Heisenberg (1901–76); involving the interpretation
of radiation-matter interactions in terms of an uncertainty as to the precise values of the velocity
and position of the quantum particles; the problem of complementarity. This rationalization means
that, at the quantum level of Nature, measuring something will interfere with the actual validity of
the measurement. Niels Bohr (1885–1962), one of the founders of quantum mechanics, thought
that the apparatus in which an experiment was performed should be described in the mathemat-
ical equations defining what was being measured. Thus, scientific objectivity disappeared. Albert
Einstein, the other founder of early quantum mechanics, was not happy with Bohr’s ideas, and he
could never bring himself to abandon belief in the reality of an external world controlled by cau-
sality; a world that could be investigated in an objective manner by science. For the purely classical,
pre-quantum view of the Universe, we must go back to Pierre-Simon Laplace (1749–1827), who
thought in purely classical terms, and who said that from the known laws of mechanics and from
a full knowledge of the present state of the Universe, every future state could, in principle, be pre-
dicted. But that conceit of Laplace was the view of the European Enlightenment; it was based on
their devotion to the mechanical universe of Isaac Newton.
The European Enlightenment did not long survive the Revolutionary Wars of Napoléon
Bonaparte. The restoration of the French monarchy in 1815 was part of the rapid return to the in-
fluence of religion in the political affairs of Europe. For many people, the European Enlightenment
was supposed to have done away with ideas of magic and the occult, yet it is true to say that Paris at
the end of the 19th century was filled with individuals searching for new ways of looking at Nature.
Not only did fin de siècle Paris have Picasso inventing cubism, but the greatest, and the most famous
physicists in the city at this time were the husband and wife team of Pierre (1859–1906) and Marie
Curie (1867–1934), who investigated radioactivity. Yet for all the tremendous research carried out
by scientists like the Curies, and the supposed triumph of the Enlightenment, Paris in the period
1880–1914 was also the world center for occultism and mysticism. There was the revival of Rosi-
crucianism, Helena Blavatsky was attempting a synthesis of eastern and western Hermeticism into
what she termed Theosophy (God’s wisdom), and symbolism was a dominant idea in literature and
music. Why this revival of occultism and spirituality in the City of Light and science? Perhaps
because the Curies were investigating radioactivity and radioactive decay; that is, the transforma-
tion of one chemical element into another element. This was something that science had always
said was impossible, yet was now the hottest topic in physics. How could, supposedly indivisible,
eternal atoms such as Uranium decay to form other indivisible, eternal atoms such as Radium and
Polonium; what was going on in that atom of Uranium (see Figure 16.3)?
Figure 16.3: Decay chain of Uranium-238, the progenitor of Radium-226. Since the early modern period,
Hermeticists and other followers of Hermes Trismegistus had been told they were crazy, and that there
was no such thing as the transmutation of the chemical elements. Yet at the end of the 19th century in
Paris, the Curies were unravelling the decomposition of a particular isotope of Uranium, and discovering
that it transformed into many different chemical elements. Image from: https://en.wikipedia.org/wiki/
Radium#/media/File:Decay_chain(4n+2,_Uranium_series).svg.
As far as the 19th-century Hermeticists and alchemists were concerned, the Curies were in-
vestigating and seeking an explanation for the transmutation of the chemical elements, something
they had always believed in. The Curies were painstakingly showing that many, supposedly eternal,
indestructible atoms could transmute into atoms of other elements, releasing huge amounts of en-
ergy in the process. Marie Curie even demonstrated how Radium, or rather the radiation emitted
by Radium as it transmuted, could cure cancer. All these strange and magical new discoveries in
science were to the occultists a vindication of alchemy, and if alchemy was now seen to be true
what other occult, or Hermetic sciences, would also be vindicated? But then, man’s passion for the
fantastic is such that he is only too ready to suspend belief in the rational and the mundane.
In this intellectual ferment, it was not surprising that Einstein and Picasso began coinciden-
tally exploring notions of space and time. Relativity in its overthrow of absolute space and time
teaches us that in thinking about perspective, we cannot simply trust our senses; and the cubism
of Picasso destroyed the primacy of perspective in art. Indeed, one could say that cubism is art
inspired by a redefinition of space and time; a technique for reducing the artistic form to geometry,
of representing three dimensions in two dimensions. In their different ways, both Einstein and Pi-
casso discarded the empiricist view—what you see is what you get—in favor of an intellectualized
view of the world. But, of course, this re-intellectualization of our study of Nature was the opposite
of what had happened in the Middle Ages when modern science had been born in Europe (see
Chapter X?). Then a combination of empiricism and nature mysticism had overturned the Aristo-
telian-Scholasticism of the Middle Ages. At the beginning of the last century, some of the leading
figures in the arts were returning to a cerebral, scholastic interpretation of Nature, proclaiming
that thinking, not seeing, leads us closer to the truth. Yet the purpose of science is not to provide
the most economical representation of the facts, and the purpose of art is not to provide the most
accurate representation of what we can see—why compete with photography? The purpose of both
science and art is to discover the reality that lies hidden behind the appearances. After all, what is
today considered to be magic and science fiction could well become scientific dogma in another
half century.
Author Biography
Jeffrey H. Williams was born in Swansea, UK, in 1956. He
attended the University College of Wales, Aberystwyth and
Cambridge University, being awarded a Ph.D. in chemical
physics from the University of Cambridge in 1981. Subse-
quently, his career as a research scientist was in the physical
sciences. First, as a research scientist in the universities of
Cambridge, Oxford, Harvard, and Illinois, and subsequently
as an experimental physicist at the Institut Laue-Langevin,
Grenoble, which remains one of the world’s leading centers
for research involving neutrons, especially, neutron scattering
and diffraction. During this research career, the author pub-
lished more than seventy technical papers and invited review articles in the peer-reviewed literature.
However, after much thought, the author chose to leave research in 1992 and moved to the world
of science publishing and the communication of science by becoming the European editor for the
physical sciences for the AAAS’s Science.
Subsequently, the author was Assistant Executive Secretary of the International Union of
Pure and Applied Chemistry; the agency responsible for the world-wide advancement of chemistry
through international collaboration. And most recently, 2003–2008, he was the head of publications
and communications at the Bureau International des Poids et Mesures (BIPM), Sèvres. The BIPM is
charged by the Meter Convention of 1875 with ensuring world-wide uniformity of measurements,
and their traceability to the International System of Units (SI). It was during these years at the
BIPM that the author became interested in, and familiar with the origin of the Metric System, its
subsequent evolution into the SI, and the coming transformation into the Quantum-SI.
Since retiring, the author has devoted himself to writing. In 2014, he published Defining and
Measuring Nature: The Make of All Things in the IOP Concise Physics series. This publication out-
lined the coming changes to the definitions of several of the base units of the SI, and the evolution
of the SI into the Quantum-SI. In 2015, he published Order from Force: A Natural History of the
Vacuum in the IOP Concise Physics series. This title looks at intermolecular forces, but also explores
how ordered structures, whether they are galaxies or crystalline solids, arise via the application of a
force. Then in 2016, he published Quantifying Measurement: The Tyranny of Numbers, again in the IOP
Concise Physics series. This title is intended to explain the concepts essential in an understanding
of the origins of measurement uncertainty. No matter how well an experiment is done, there is
always an uncertainty associated with the final result—something that is often forgotten. In 2017,
he published Crystal Engineering: How Molecules Build Solids in the IOP Concise Physics series.
This title looks at how the many millions of molecules, of hugely varying shapes and sizes, can all be
packed into a handful of crystal symmetries. Most recently, 2018, the author published Molecules
as Memes, again in the IOP Concise Physics Series. This title explains how the onetime separate
sciences of physics and chemistry became one science, with the advent of quantum mechanics and
the acceptance of the existence of molecules.
In addition, retirement has allowed the author to return to the research laboratory and he is
again publishing technical papers, this time in the fields of crystal design and structure determina-
tion via x-ray diffraction, in particular, the architecture and temperature stability of co-crystals and
molecular adducts.
|
https://ieeexplore.ieee.org/xpl/ebooks/bookPdfWithBanner.jsp?fileName=8737906.pdf&bkn=8737905&pdfType=book
|
Synthesis Lectures on Engineering
Series ISSN 1939-5221
Transformative Teaching
A Collection of Stories of Engineering Faculty’s Pedagogical Journeys
Nadia Kellam, Arizona State University
Brooke Coley, Arizona State University
Audrey Boklage, University of Texas at Austin
ABOUT SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis
Digital Library of Engineering and Computer Science. Synthesis lectures
provide concise original presentations of important research and
development topics, published quickly in digital and print formats. For
more information, visit our website: http://store.morganclaypool.com
Synthesis Lectures on
Engineering
Each book in the series is written by a well known expert in the field. Most titles cover subjects
such as professional development, education, and study skills, as well as basic introductory
undergraduate material and other topics appropriate for a broader and less technical audience.
In addition, the series includes several titles written on very specific topics not covered
elsewhere in the Synthesis Digital Library.
Transformative Teaching: A Collection of Stories of Engineering Faculty’s Pedagogical
Journeys
Nadia Kellam, Brooke Coley, and Audrey Boklage
2019
Ancient Hindu Science: Its Transmission and Impact on World Cultures
Alok Kumar
2019
Value Relational Engineering
Shuichi Fukuda
2018
Strategic Cost Fundamentals: for Designers, Engineers, Technologists, Estimators,
Project Managers, and Financial Analysts
Robert C. Creese
2018
Concise Introduction to Cement Chemistry and Manufacturing
Tadele Assefa Aragaw
2018
Data Mining and Market Intelligence: Implications for Decision Making
Mustapha Akinkunmi
2018
Empowering Professional Teaching in Engineering: Sustaining the Scholarship of
Teaching
John Heywood
2018
The Human Side of Engineering
John Heywood
2017
Geometric Programming for Design Equation Development and Cost/Profit
Optimization (with illustrative case study problems and solutions), Third Edition
Robert C. Creese
2016
Engineering Principles in Everyday Life for Non-Engineers
Saeed Benjamin Niku
2016
A, B, See... in 3D: A Workbook to Improve 3-D Visualization Skills
Dan G. Dimitriu
2015
The Captains of Energy: Systems Dynamics from an Energy Perspective
Vincent C. Prantil and Timothy Decker
2015
Lying by Approximation: The Truth about Finite Element Analysis
Vincent C. Prantil, Christopher Papadopoulos, and Paul D. Gessler
2013
Simplified Models for Assessing Heat and Mass Transfer in Evaporative Towers
Alessandra De Angelis, Onorio Saro, Giulio Lorenzini, Stefano D’Elia, and Marco Medici
2013
The Engineering Design Challenge: A Creative Process
Charles W. Dolan
2013
The Making of Green Engineers: Sustainable Development and the Hybrid Imagination
Andrew Jamison
2013
Crafting Your Research Future: A Guide to Successful Master’s and Ph.D. Degrees in
Science & Engineering
Charles X. Ling and Qiang Yang
2012
Fundamentals of Engineering Economics and Decision Analysis
David L. Whitman and Ronald E. Terry
2012
A Little Book on Teaching: A Beginner’s Guide for Educators of Engineering and
Applied Science
Steven F. Barrett
2012
Engineering Thermodynamics and 21st Century Energy Problems: A Textbook
Companion for Student Engagement
Donna Riley
2011
MATLAB for Engineering and the Life Sciences
Joseph V. Tranquillo
2011
Systems Engineering: Building Successful Systems
Howard Eisner
2011
Fin Shape Thermal Optimization Using Bejan’s Constructal Theory
Giulio Lorenzini, Simone Moretti, and Alessandra Conti
2011
Geometric Programming for Design and Cost Optimization (with illustrative case study
problems and solutions), Second Edition
Robert C. Creese
2010
Survive and Thrive: A Guide for Untenured Faculty
Wendy C. Crone
2010
Geometric Programming for Design and Cost Optimization (with Illustrative Case Study
Problems and Solutions)
Robert C. Creese
2009
Style and Ethics of Communication in Science and Engineering
Jay D. Humphrey and Jeffrey W. Holmes
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Analog Multimedia
Explorations
Lina J. Karam and Naji Mounsef
2008
Introduction to Engineering: A Starter’s Guide with Hands-On Digital Multimedia and
Robotics Explorations
Lina J. Karam and Naji Mounsef
2008
CAD/CAM of Sculptured Surfaces on Multi-Axis NC Machine: The DG/K-Based
Approach
Stephen P. Radzevich
2008
Tensor Properties of Solids, Part Two: Transport Properties of Solids
Richard F. Tinder
2007
Tensor Properties of Solids, Part One: Equilibrium Tensor Properties of Solids
Richard F. Tinder
2007
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
2007
Project Management for Engineering Design
Charles Lessard and Joseph Lessard
2007
Relativistic Flight Mechanics and Space Travel
Richard F. Tinder
2006
Copyright © 2019 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations
in printed reviews, without the prior permission of the publisher.
Transformative Teaching: A Collection of Stories of Engineering Faculty’s Pedagogical Journeys
Nadia Kellam, Brooke Coley, and Audrey Boklage
www.morganclaypool.com
ISBN: 9781681735450
ISBN: 9781681735467
ISBN: 9781681735474
paperback
ebook
hardcover
DOI 10.2200/S00911ED1V01Y201903ENG035
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING
Lecture #35
Series ISSN
Print 1939-5221 Electronic 1939-523X
Cover image by Pexels on Pixabay (https://pixabay.com/users/Pexels-2286921/).
Image retrieved from https://pixabay.com/photos/mountains-dawn-dusk-grass-hills-1868715/
Transformative Teaching
A Collection of Stories of
Engineering Faculty’s Pedagogical Journeys
Nadia Kellam
Arizona State University
Brooke Coley
Arizona State University
Audrey Boklage
University of Texas at Austin
SYNTHESIS LECTURES ON ENGINEERING #35
ABSTRACT
The journey to becoming an exemplary engineering educator is one that is rarely simple and
straightforward. Simply being exposed to active learning strategies or innovative pedagogies
rarely leads to a transformation of one’s own teaching. In this book, we present a collection of
stories from exemplary engineering educators that are told in their own voices. These stories are
shared to enable readers to immerse themselves in first-person recollections of transformation,
involving engineering educators who changed their teaching strategies from the ways that they
were taught as engineering undergraduate students to ways that more effectively fostered a con-
ducive learning atmosphere for all students. It is our hope that providing stories of successful
engineering educators might stimulate thoughtful and productive self-reflection on ways that
we can each change our own teaching. These stories are not simple, linear stories of transfor-
mation. Instead, they highlight the complexities and nuances inherent to transforming the way
that engineering faculty teach. Through our strategy of narrative storytelling, we hope to inspire
future and current engineering educators to embark on their own journeys of teaching transfor-
mations. We conclude the book with some lessons that we learned during our readings of these
stories, and invite readers to extract lessons of their own.
KEYWORDS
engineering teaching journeys, engineering teaching stories, innovative engineer-
ing teaching, engineering active learning, narrative interviews of engineering fac-
ulty, exemplary engineering teachers, exemplary engineering educators, innovative
engineering teachers, innovative engineering educators, innovation in engineering
education, engineering teaching inspiration, engineering educator inspiration, en-
gineering faculty teaching stories
Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
1  Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Nadia Kellam, Audrey Boklage, and Brooke Coley
Motivation for this Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
How we Structured These Stories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Overview of the Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2  Developing a Liberative Pedagogy in Engineering . . . . . . . . . . . . . . . . . . . . . . . . 9
Donna Riley
Call to Adventure: Why Can’t Engineering be Taught the Way Religion is
Taught? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Supernatural Aid: You Have to Read Teaching to Transgress . . . . . . . . . . . . . 11
Belly of the Whale: The Start of a 10-Year Period of Experimentation in
Thermodynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Supernatural Aid: Learning about a CAREER Award in Engineering
Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Road of Trials: Becoming Comfortable with Critique in Thermodynamics . . 13
Refusal of the Call: Becoming a Feminist Activist . . . . . . . . . . . . . . . . . . . . . . 14
Stories from My Class: The Montreal Massacre as a Case Study . . . . . . . . . . . 15
Road of Trials: Pushback from Students and Colleagues . . . . . . . . . . . . . . . . . 17
Road of Trials: Required Service Learning in Thermodynamics . . . . . . . . . . . 18
Return Threshold: Protesting a Nuclear Power Plant . . . . . . . . . . . . . . . . . . . . 20
Return Threshold: Challenging the Powers that Be . . . . . . . . . . . . . . . . . . . . . 21
Road of Trials: Creative Solutions to Constraining Policies . . . . . . . . . . . . . . . 22
Apotheosis: Pushing the Boundaries in any Context . . . . . . . . . . . . . . . . . . . . 22
Master of Both Worlds and Freedom to Live: The Importance of Reflection . 23
Additional Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3  Experiencing Vulnerability and Empowerment in Teaching . . . . . . . . . . . . . . . . 27
Sara Atwood
Call to Adventure: From Childhood to Undergraduate, Becoming an
Educator First . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Call to Adventure and Supernatural Aid: Experiences as an Undergraduate
TA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Call to Adventure: Experiencing Faculty who Prioritize Research First . . . . . 29
Supernatural Aid: Finding My Home . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
First Threshold: Finding the Right College and Connecting with the
Students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Supernatural Aid: Learning from Others . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Belly of the Whale: Not Enough Time During the Lecture . . . . . . . . . . . . . . 31
Road of Trials: Theory vs. Solving Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Road of Trials: Just-in-Time vs. Established Preparation . . . . . . . . . . . . . . . . . 33
Road of Trials: Students with Learning Disabilities . . . . . . . . . . . . . . . . . . . . . 34
Belly of the Whale: A Particularly Challenging Semester . . . . . . . . . . . . . . . . 35
Supernatural Aid and Meeting with the All Knower: A Community of
Academic STEM Women . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Master of Both Worlds and Freedom to Live: A Balance of Vulnerability
and Empowerment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4  From the Armed Services to the Classroom . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Brad Hyatt
The Call to Adventure: A True Learner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Refusal of the Call: Deciding to Leave Industry . . . . . . . . . . . . . . . . . . . . . . . . 40
Road of Trials: Connecting Classroom to Industry . . . . . . . . . . . . . . . . . . . . 40
Crossing the First Threshold: Flipping the Classroom . . . . . . . . . . . . . . . . . . 41
Apotheosis/Freedom to Live: Learning Together . . . . . . . . . . . . . . . . . . . . . . . 41
Supernatural Aid: Professional Knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Master of Two Worlds/Return Threshold: Real-World Examples . . . . . . . . . . 42
Freedom to Live/Ultimate Boon: Constructive Criticism . . . . . . . . . . . . . . . 42
Supernatural Aid: Faculty Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Road of Trials: Resisting Change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Freedom to Live: Embracing the Change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Master of Two Worlds: Investing Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Freedom to Live: Gamification in the Classroom . . . . . . . . . . . . . . . . . . . . . . . 44
5  Engaging Students through Service Learning and Innovation . . . . . . . . . . . . . 45
Chris Swan
Call to Adventure: The Ten-Year Plan to Become a Professor with
Practical Experience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Crossing the Threshold: Helping Students Connect the Theoretical and
Practical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Apotheosis: Seeing an Explosion in the Desire of the Students to Learn . . . . 47
Road of Trials: Researching New Ways to Engage and Deepen Learning . . . 48
Ultimate Boon: Becoming the Best Faculty Member through Student
Engagement and Innovation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
6  From Food to Simulation with Legos: Engaging Students in Hands-On
Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Thais Alves
Call to Adventure: Creating a Community . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Road of Trials: A Lack of In-Situ Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 54
First Threshold: Building a Language Bridge . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Belly of The Whale: The Task of Teaching . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Acceptance of the Call: Food, Handmade Legos, and Presentations . . . . . . . . 55
Ultimate Boon: Positive Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Apotheosis: More Pluses than Deltas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Master of Two Worlds/Freedom to Live: Pedagogical Flexibility . . . . . . . . . . 57
Meeting with the All Knower: Like-Minded Educators . . . . . . . . . . . . . . . . . 57
Master of Two Worlds: Legos Aren’t a Waste of Time . . . . . . . . . . . . . . . . . . 58
7  Finding Her Niche with Hands-On, Practical, and Real-World Pedagogy . . . 59
Fernanda Leite
Call to Adventure: Combining Passions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Supernatural Aid: Figuring Out How To Become a Professor . . . . . . . . . . . . . 60
Meeting with the All Knower: A Visiting Professor and Future Advisor . . . . 60
Road of Trials: Experience Teaching in Graduate School . . . . . . . . . . . . . . . . 60
Apotheosis: Developing an Interconnected Course . . . . . . . . . . . . . . . . . . . . . 61
Return Threshold: Bringing the Real World into the Classroom . . . . . . . . . . . 64
Master of Both Worlds and Freedom to Live: Encouraging Other
Academics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Stories from My Class: Teaching with Legos . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Master of Both Worlds and Freedom to Live: Integrating Teaching and
Research Through an Industry Focus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
8  Creating a Community of Collaborators to Achieve Curriculum Change . . . . 69
Charles Pierce
Call to Adventure: Teaching Runs in the Family . . . . . . . . . . . . . . . . . . . . . . . 69
Supernatural Aid: Graduate School Advice . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Supernatural Aid: Push to Ph.D. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Meeting with the All Knower: An Opportunity to Teach Autonomously . . . . 71
Supernatural Aid: Funding Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Road of Trials: New(ish) Content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Atonement: Student Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Road of Trials: Communication in the Classroom . . . . . . . . . . . . . . . . . . . . . . 73
Ultimate Boon: Concepts not Schedules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Supernatural Aid: Classmate Assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Ultimate Boon: Candy and Personality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Freedom to Live: Flexible Syllabi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Master of Two Worlds: Problem Solving in the Classroom . . . . . . . . . . . . . . . 75
Road of Trials: Improving the Classroom . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Ultimate Boon: With Funding Comes Change . . . . . . . . . . . . . . . . . . . . . . . . 77
Apotheosis: Encouraging Critical Thinking . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Ultimate Boon: Teaching Award and Reflecting on My Journey . . . . . . . . . . 79
Master of Two Worlds: Collaboration is Key . . . . . . . . . . . . . . . . . . . . . . . . . . 80
9 Teaching with Advocacy: Buffing the Talent to Break the Mold of the
Monolithic Engineer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Matthew Fuentes
The Call to Adventure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Supernatural Aid: Learning to Teach in a Student-Centered Way . . . . . . . . . 82
The Call to Adventure: Aspiring to Teach Students Who are Less Privileged 83
Supernatural Aid: A Mentor Who Helped Encourage Experimenting
Educationally . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Stories from My Class: Going Outside to Bring out the Inquisitive Mind . . . 84
Road of Trials: Finding a Faculty Position at a Place Where I Can Make a
Difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Road of Trials: Becoming Transparent About Who I Am . . . . . . . . . . . . . . . . 86
Stories from My Class: Helping Students Overcome Imposter Syndrome
and Become More Engaged . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Stories From My Class: Teaching Through Making and Failure . . . . . . . . . . . 87
Road of Trials: Introducing Simulink® Before it Had Been Debugged . . . . . . 88
Road of Trials: Uncovering Biases and Expectations and a Need for
Engineering to Change, Culturally . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Road of Trials: Experiencing Marginalization Through a Last Name
Change and Becoming an Advocate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Apotheosis: Empowering Students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
10 Conclusion and Lessons Learned . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Nadia Kellam
Lesson 1: Importance of Having a Community . . . . . . . . . . . . . . . . . . . . . . . . 91
Lesson 2: The Power of Reflection in Improving Our Courses . . . . . . . . . . . . 92
Lesson 3: Take it Slow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Lesson 4: Improving Teaching and Learning is a lot of Work, but it is
Fulfilling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Lesson 5: Tradeoffs Between Teaching and Research, or Not? . . . . . . . . . . . . 93
Lesson 6: Consider an Asset-Based Approach to your Teaching . . . . . . . . . . . 94
Lesson 7: Empower Engineering Students who have Otherwise been
Marginalized . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Lesson 8: Connecting Theory to the Real World in the Classroom . . . . . . . . . 97
Lesson 9: Using Ideas from Entrepreneurship in Engineering Education . . . 97
Lesson 10: Comfort with Ambiguity and Relinquishing Control are
Required . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Lesson 11: Learn Something New . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Authors’ Biographies (in order of appearance) . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Preface
When I was interviewing for my first engineering tenure-track faculty position in 2006, the
department head asked me which classes I could teach in the program. Confidently, I responded
that I could teach anything I had taken. A few months later, in the summer before I started, I
learned I would be teaching something I had never taken before—a computational engineering
methods class for first-year engineering students. After talking to a few people, I understood that
the undergraduate program coordinator wanted to get someone into the classroom who was nice
to first-year students. The current teacher, an Associate Professor, had structured the entire class
around programming and was known to get frustrated with the students, with rumors that he
threw chalk at them. I could be nice, I thought, but I was not sure about teaching the course
with my degrees in mechanical engineering. I managed to get by through changing the course to
focus on learning programming structures (through storytelling using Alice), learning relative
and absolute referencing in Microsoft Excel, learning HTML to create websites, and learning basic
programming using MATLAB. I ended up teaching a few sections of this class for two years
and eventually became comfortable teaching computational engineering methods to first-year
students. I even ended up having some fun with the class with a challenge at the end of the
course for the students to use MATLAB to create artwork for an art exhibit. Students created
music, edited photographs, created fractals, and created stop motion animations. During the
art exhibit, I overheard one of my senior colleagues make a comment to the students saying that
this was not engineering and seemed like a waste of time to him. This was my first experience
teaching as an engineering faculty member.
After that first semester teaching my own class, I quickly realized how difficult and com-
plicated teaching could be. While I had read many books preparing me to be a teacher, it was
hard to truly prepare for experiences like this. I did not expect to be asked to teach something
that I had not learned myself as a student. I also expected that senior faculty members in the
department would be supportive of a junior faculty member. In hindsight, I was a bit naïve and
probably should not have been surprised to experience some pushback for my alternative ways
of teaching. I was, after all, the second woman faculty member in our department of around 50 faculty
members. In addition, I was the youngest faculty member. The composition of our faculty was about to
change, but when I first joined it was pretty homogeneous.
Because the teaching books that I read did not seem to be helping prepare me for the
realities of teaching, I sought out additional opportunities to develop my teaching skills. These
included workshops such as the National Effective Teaching Institute (NETI) that Rich Felder
and Rebecca Brent hosted before the American Society for Engineering Education (ASEE)
conference. However, in spite of these intensive experiences to help me as a teacher, it felt difficult
to reconcile the actual experience of teaching with what I was learning from the experts. I would
learn helpful strategies for teaching, but some of the difficulties I faced were not addressed in
these workshops. In this book, we are going to share the messy and sometimes complicated
stories of faculty as they embark on journeys to become better teachers. I hope that, through
immersing yourself in these stories, you will learn more about the journeys that faculty take
to become better teachers and feel better prepared as you embark on your own journey.
Nadia Kellam
April 2019
Acknowledgments
We wish to thank our fellow research team members, Joachim Walther, Stephan Durham, San-
dra Bird, and Kathleen DeMarrais at the University of Georgia for support during the early
stages of this project. We would also like to thank more recent research team members includ-
ing Madeleine Jennings, Joshua Cruz, Michael Sheppard, and Anna Cirell at Arizona State
University for their support during the later stages of preparing this book. We would espe-
cially like to thank Madeleine Jennings for helping with copy-editing many of the chapters of
this book. In addition, we would like to thank the research participants in this study, including
those whose stories were not included in this book.
This material is based upon work supported by the National Science Foundation under
grant numbers 1329300 and 1542531. Any opinions, findings, and conclusions or recommen-
dations expressed in this material are those of the authors and do not necessarily reflect the views
of the National Science Foundation.
Nadia Kellam, Brooke Coley, and Audrey Boklage
April 2019
C H A P T E R 1
Introduction
Nadia Kellam, Audrey Boklage, and Brooke Coley
Anyone who has been an undergraduate engineering student knows that exemplary engineering
educators are rare to find but can make a big difference in engineering students’ development,
sometimes being the difference between persisting in engineering or changing majors. As some
of us continue through school and then become engineering faculty members, it can be daunting
to figure out how to become good teachers ourselves, especially when our own Master’s and
Ph.D. programs tend to focus on research and engineering sciences with little, if any, focus on
our development as teachers. In this book, we (in this chapter, “we” refers to Nadia, Audrey, and
Brooke) hope to provide an opportunity for engineering educators to learn about engineering
teaching through becoming immersed in the stories of exemplary engineering faculty. These are
not polished and overly edited stories of engineering faculty, but instead are somewhat raw and
uncut stories as told by the faculty themselves. These stories were developed from transcriptions
of narrative interviews and are kept in the spoken words of the engineering faculty. We wanted
to explore ways of sharing the experience of hearing these inspirational stories, and thus the
concept of this book was born. These stories promise to humanize teachers and show that good
teachers are not just “born that way,” but face many obstacles on their own journeys of becoming
exemplary teachers.
From the outset of this project, we were interested in using narrative research strategies
(narrative interviews and analysis) to develop an understanding of the stories of successful engi-
neering faculty who have embraced active learning strategies [Meyers and Jones, 1993]. Stories
are powerful ways of learning from others and are inherent to the way that we communicate,
think, and learn. “We think in story. It’s hardwired in our brain. It’s how we make strategic sense
of the otherwise overwhelming world around us” [Cron, 2012, p. 8]. We wholeheartedly agree
that people are “hardwired” for stories and hope that by sharing these faculty stories others can
be inspired and better prepared to embark on their own teaching journeys.
As you immerse yourself in the stories shared in this book, we hope that, in addition to providing
inspiration, they resonate with you and help you as you navigate your own personal
teaching journey. Through experiencing these stories, we can all learn from them and,
possibly, see teaching in a different way than we did before. I know for me (in this chapter,
“I” refers to Nadia), they impacted my understanding of what it means to be a good teacher.
Somewhere in the back of my mind, I thought there were some teachers who were simply born
amazing teachers. From my experience, I knew I was not one of these fortunate people but
hoped that with enough work I could become better. Through these stories, I began to see that
others’ journeys are not simple and that even amazing teachers experience trials and difficulties
throughout their journeys.
MOTIVATION FOR THIS PROJECT
When reading the literature about STEM or engineering faculty and how they become exemplary
teachers, much of it focuses on why faculty do not embrace active learning strategies. For example, in
the Discipline-Based Education Research report, the committee describes barriers to chang-
ing teaching practices including institutional priorities, local leadership, peers, reward systems,
students’ attitudes, perceived importance of teaching, and faculty members’ beliefs [National
Research Council, 2012]. While we recognize the importance of identifying and understand-
ing these barriers, we are also interested in understanding faculty who have successfully changed
their teaching practices. We decided to focus this research on these faculty who have successfully
transitioned to active learning strategies and to uncover insights and lessons learned from their
stories.
The interviews that were used as the basis for these stories came from a research project
focused on engineering faculty change. When we conducted these interviews, we did so for the
purpose of the research project, and not with the goal of writing a book that included their
stories. However, as we began conducting interviews, we quickly became inspired by the stories
of the interviewees. Many of the interviews were conducted by Brooke and Audrey, who were, at
the time of the data collection, postdoctoral researchers. In our team meetings after interviews,
they were both very excited about the stories that they were hearing. We began to see the power
of hearing the inspirational stories of these engineering faculty. In addition, I, who had been a
faculty member for about 10 years, would listen to the interviews and become just as inspired
and excited as Brooke and Audrey. These stories were powerful and we began to consider ways
that these could be shared in a more complete form so that more people could become inspired
and empowered by these stories.
Another observation when conducting interviews for this research project was that the
stories of faculty change were complex stories. They were not stories of faculty who just happened
to be amazing teachers from day one. Nor were they stories of faculty who decided to make a
change to their teaching, made their change, and succeeded easily. Instead, they were stories
of faculty who wanted to make changes to their teaching for different reasons, and they all
encountered successes and struggles. In other words, these were not simple or linear stories of
change. Instead, they were messy and complicated stories of change. In a few of the cases, these
stories had reached a conclusion, and in others, the journey was ongoing and the engineering
faculty members were continuously evolving as teachers.
In Chapters 2–9 of this book, we will present these stories of faculty who have successfully
transitioned to active learning strategies in their classes. These stories were developed based on
interviews with these faculty and are kept in their words as spoken.
HOW WE STRUCTURED THESE STORIES
As described above, these stories were captured as part of a research project where we interviewed
exemplary teachers to develop an understanding of how they got to where they are today in spite
of all the challenges and obstacles along the way. As part of our methods, we constructed stories,
in the spoken word of the participants, as we felt this format had the most resonance with the
reader. We used Joseph Campbell’s Hero’s Journey [2008] as a way to structure these stories.
We then analyzed the data for patterns across the stories. However, when we disseminated this
work in journal articles, most of the participants’ stories and voices were lost. Their stories were
reduced to a few pages, at most, with a few supporting quotes from their interviews [Boklage,
Coley, and Kellam, 2018].
Because we were unable to share these complete stories in traditional dissemination
venues, we began considering nontraditional ways of sharing these stories. After some con-
sideration, we decided to prepare a book that would include faculty stories in their entirety. The
hope is that these stories will serve as an inspiration to help teachers as they embark on or grow
in their own personal journeys of transformation.
Prior to sharing these stories, we will describe the Hero’s Journey, as the stories in this
book are all organized using this structure. The idea behind the Hero’s Journey is that all stories
follow similar structural patterns. In his book, The Hero with a Thousand Faces, Campbell
introduces the monomyth [2008]. The monomyth is a universal structure that all epic myths are
claimed to follow. In the book, Campbell considers over 100 stories from multiple cultures and
times and shows that these stories follow a similar trajectory. Campbell proposes 17 stages that
stories generally follow. We have adapted these stages, which were intended for written or told
stories or epic myths, to the lived stories of engineering faculty. Below are brief descriptions of the
stages that we used in structuring the stories that were told in interviews.
1. The call to adventure marks the beginning of the faculty member’s story and includes their purpose
or reason for embarking on a journey.
2. The refusal of the call occurs after the call to adventure and involves the faculty member
changing their mind and deciding not to begin their journey. At least in the stories of
faculty highlighted in this book, who have successfully transitioned their teaching practices,
this remains only a consideration and does not result in the end of the journey.
3. Supernatural aid occurs when the faculty member receives unexpected help from a mentor,
colleague, or other resource (e.g., a book or website). This aid helps the faculty member
prepare for the journey that they are about to take.
4. The first threshold is experienced when the faculty member continues forward in their jour-
ney and experiences their first trial or challenge on their journey. This challenge is typically
expected by the faculty member, as they anticipate some difficulties when embarking on
the journey.
5. During the belly of the whale, the faculty member experiences a very low point in their
journey. Oftentimes, this experience becomes transformative for the faculty member as
they have a realization of the importance of this journey as they recover from this low
point.
6. During the road of trials, the faculty member experiences and overcomes many challenges.
This could be student resistance to active learning strategies, or colleagues questioning the
effort being put into teaching.
7. The meeting with the all-knower structure represents the faculty member meeting with a
mentor who passes critical knowledge onto the faculty member. Without this interaction,
it could be imagined that the journey might have ended very differently for the faculty
member.
8. The meeting with temptation occurs when the faculty member has an experience that could
keep them from reaching their personal goal. This temptation could be in the form of
focusing efforts on research instead of teaching, beginning to lecture again because of the
potential of earning higher teaching evaluations, or following traditional course approaches
after some students express frustration with the new approaches.
9. In the apotheosis stage, the faculty member reaches a new level of understanding where
their journey becomes routine and their teaching innovations come with fewer surprises.
10. The ultimate boon occurs near the end of the journey as the journey reaches resolution. As
could be imagined, many of the faculty in this book do not reach the end of their story,
but do reach some boon where they attain a steady state in their goals.
11. The return threshold occurs when the faculty member begins communicating with peers,
colleagues, students, and administrators, telling them what they learned during their jour-
ney and beginning to reconcile their new identity with the one they left behind as they
embarked on their journey.
12. The final phase, master of both worlds and freedom to live, represents when the faculty mem-
ber moves back to the “ordinary” world that they left when embarking on their journey
to the “special” world that they inhabited while on their journey. This can involve shar-
ing their story of change with people who have not embarked on their own journey. It
can also involve becoming integrated back into the “ordinary” world with the knowledge
gained while on their journey.
There are five additional stages that were not used in these journeys, and will not be ex-
plained in detail. These include some that are less applicable to lived stories, including, for ex-
ample, the magic flight, which involves the hero rushing home while being pursued. Others were just not
included in the stories highlighted in this book and include, for example, the refusal of the return,
where a faculty member would refuse to move back into their “ordinary” world after experiencing
the “special” world.
As you begin reading the chapters, you will notice that many of the stories only include
some of the stages in the journey. These stages were only used to structure the stories as they
were constructed from the spoken interview. These stages will be used to help organize the
subsequent chapters. For those interested in journal articles, we outline this process in Cruz
and Kellam [2017] and use this structure in an article exploring the beginning of engineering
students’ journeys [Cruz and Kellam, 2018].
In addition to the stages described above, we added some stages to the stories. One com-
mon addition is named stories from my class. While the monomyth provided a helpful structure
for organizing and constructing narratives from the interview transcripts, we found that some
parts of the story were excluded because they did not fit neatly into one of the structures.
While the stories from my class structure did not involve a particular trial or challenge, we felt it
was important to include this part of their story as it showed innovations in their teaching and
provided more texture and context to their particular journey.
In each participant’s story we will use headings to denote each stage in the journey. This
will help the reader move more easily between stories to compare, for example, specific stages
for each engineering educator.
OVERVIEW OF THE BOOK
The participants’ stories are told in their spoken voice as transcribed from the interview. By keep-
ing the stories true to their voice, we believe that the stories are more engaging than they would
be if we rephrased them. This does mean that there are some run-on sentences and colloquial
terms used in their stories. Occasionally, we include a few additional words to help improve
the flow of the story. These words are denoted with square brackets in the text. In addition, we
provide some clarifying details in parentheses (e.g., the meaning of an acronym).
In Chapter 2, Donna Riley, the Kamyar Haghighi Head of the School of Engineering
Education at Purdue University, shares her teaching story. At the time of the interview, Donna
was a faculty member at Virginia Tech. Donna tells her story of integrating a liberative pedagogy
into engineering education. After she started her first faculty appointment at Smith College, she
began a 10-year experiment in a Thermodynamics course where she challenged the power dy-
namics common to engineering courses and pushed students to begin thinking critically about
the subject. Her story is one that includes social activism and is one that will serve as an inspi-
ration to many faculty as she challenged the status quo in engineering education.
In Chapter 3, Sara Atwood, an Associate Professor and Chair of Engineering and Physics
at Elizabethtown College, shares her teaching story. Her undergraduate studies at Dartmouth
College, a liberal arts setting, provided her foundation for student-centered learning. Sara’s jour-
ney was one that elevated the evolution, process, and development of implementing this peda-
gogical approach. Among her main supports were the colleagues and community created around
these efforts, like minds committed to enhancing the education of engineering students.
In Chapter 4, Brad Hyatt, an Associate Professor of Construction Management at Fresno
State University, shares his story. Prior to being a faculty member, Brad worked for 12 years in
industry, both as a Civil Engineering Officer in the Navy and as a Project Management and
Construction Management Consultant. When he was a new faculty member, Brad approached
teaching with a lot of energy and a “just do it” attitude, adopting project-based learning,
flipping the classroom, and bringing case studies into the class.
In Chapter 5, Chris Swan, an Associate Professor in Civil and Environmental Engineer-
ing at Tufts University, shares his story. His belief is that students should experience knowledge
and he works to connect content with applications. He finds seeing the application in a real-
world context to be especially critical in students’ ability to truly grasp material and he facilitates
this by offering students service-learning based projects.
In Chapter 6, Thais Alves, an Associate Professor of Construction Engineering at San
Diego State University, shares her story. Thais brings an international experience as she is from
Brazil, completed her Ph.D. at UC Berkeley, and returned to Brazil prior to becoming a
faculty member in San Diego. When Thais became a faculty member in San Diego, she had to
become creative with her teaching because she did not have the access to construction sites that
she had in Brazil. She began to take an entrepreneurial approach to her teaching and considered
her students as clients to find ways for students to begin to value what they were learning in class.
Now, she integrates site visits, food, and Lego simulations into her classes.
In Chapter 7, Fernanda Leite, an Associate Professor in Civil, Architectural, and En-
vironmental Engineering in the Cockrell School of Engineering at The University of Texas at
Austin, shares her story. Throughout her experiences, Fernanda has always been passionate about
teaching and as a graduate student she revamped a lab course while she worked as a Teaching
Assistant (TA). At UT Austin, Fernanda has developed courses where she created modules that
connect lectures, lab classes, and reflections across topics in the course. She brings real-world
scenarios into the classroom where students have to make assumptions and estimates. She also
discusses how her teaching and research have been inseparable with each one enhancing the
other.
In Chapter 8, Charles Pierce, an Associate Professor of Civil and Environmental En-
gineering at the University of South Carolina, shares his story. Charlie had a strong passion
for teaching and pursued his Ph.D. so that he could become a teacher. As he began teaching,
he initially emulated some of his professors who were engaging and entertaining. He quickly
transitioned from trying to cover content in his classes to ensuring that students were develop-
ing conceptual understandings. He describes using activities to help explain concepts in class,
including activities involving candy, demonstration activities, and problem-based learning. He
also describes a group of faculty in his department who continue to inspire and motivate him as
he continues in his journey to become an exemplary engineering educator and an engineering
education researcher.
In Chapter 9, Matthew Fuentes, an Engineering Faculty member at Everett Community
College, shares his story. Matthew uses his quirky zeal for learning to create student engage-
ment in his classrooms anchored in a belief in equity and opportunity for all. In recognizing
his own privilege in the world as a White, male engineer, he envisions the classroom as a place
where all students should be able to see themselves. Through his student-centered approaches,
Matthew hopes to change what engineers look like, one student at a time. Matthew’s willingness
to challenge meritocracy with an appreciation for the process of developing potential positions
him as a rare and refreshing advocate for a just education. In finding comfort amid situations of
ambiguity, Matthew has enhanced student learning while also cultivating a culture of inclusion
that empowers students to reach their fullest potential.
In Chapter 10, we provide a set of lessons learned from the stories. These lessons include
taking it slow when innovating in the classroom, finding a community of educators with simi-
lar visions and goals, and using reflection to help improve classes. Another take-away from the
stories is that innovative teaching can require a lot of work, but can also prove very fulfilling and
worth the extra time and effort. One lesson was about the perceived tradeoff between teaching and research, with one
story demonstrating that these two aspects of faculty roles can be symbiotic. Other lessons focus
around concepts of inclusivity, with one focusing on considering the assets of students when
they come into the classroom, valuing their experiences, and being intentional to empower stu-
dents who have been marginalized in engineering education programs. Moreover, there were
many examples in the stories of engineering educators connecting theory to the real world in the
classroom through teaching approaches such as case studies, projects, service learning, and
open-ended problems. There were a few examples of engineering educators using concepts from
entrepreneurship to improve their classrooms, with a focus on value propositions, considering
customer segments, and pushing on boundaries. Finally, there were many engineering edu-
cators who were motivated and inspired to become better teachers because of their experiences
as undergraduate or graduate students. The last lesson learned includes a challenge to consider
learning something new and trying new things, to help faculty relate better to students in their
classrooms who are learning something new and to help expose them to different pedagogies and
ways of teaching. As you begin reading these stories, we encourage you to think about lessons
or take-aways that can help inform your own teaching journeys.
REFERENCES
Boklage, A., Coley, B., and Kellam, N. (2018). Understanding engineering educators’ pedagog-
ical transformations through the hero’s journey. European Journal of Engineering Education.
DOI: 10.1080/03043797.2018.1500999. 3
Campbell, J. (2008). The Hero with a Thousand Faces, 3rd ed., Novato, New World Library. 3
Cron, L. (2012). Wired for Story: The Writer’s Guide to Using Brain Science to Hook Readers from
the Very First Sentence, Ten Speed Press. 1
Cruz, J. and Kellam, N. (2017). Restructuring structural narrative analysis using Camp-
bell’s monomyth to understand participant narratives. Narrative Inquiry, 27(1). DOI:
10.1075/ni.27.1.09cru. 5
Cruz, J. and Kellam, N. (2018). Beginning an engineer’s journey: A narrative examination of
how, when, and why students choose the engineering major. Journal of Engineering Education,
107(4), pp. 556–582. DOI: 10.1002/jee.20234. 5
Meyers, C. and Jones, T.B. (1993). Promoting Active Learning Strategies for the College Classroom,
Jossey-Bass Inc., Publishers, San Francisco, CA. 1
National Research Council. (2012). Discipline-Based Education Research: Understanding and Im-
proving Learning in Undergraduate Science and Engineering, The National Academies Press,
Washington, DC. 2
C H A P T E R 2
Developing a Liberative
Pedagogy in Engineering
Donna Riley
Narrative constructed by Brooke Coley and Nadia Kellam
It’s just recognizing that [change] doesn’t happen trivially. [It] takes a lot of thought. [It]
takes a lot of adjustment. It takes a lot of troubleshooting. And small changes can be tremen-
dously huge… Letting [change] play out organically, it allows for students to shape the class.
That’s part of it.
Donna Riley is currently the Kamyar Haghighi Head of the School of Engineering Ed-
ucation at Purdue University. At the time of the interview in November of 2016, Donna was
Professor and Interim Head of the Department of Engineering Education at Virginia Tech.
CALL TO ADVENTURE: WHY CAN’T ENGINEERING BE
TAUGHT THE WAY RELIGION IS TAUGHT?
I think it all started basically in undergrad where I went to Princeton and we had very old school
professors there. A lot of them were Oxford and Cambridge educated. They would do the classic
thing of taking out notes that were yellowed and 30 years old and write what was in the notes on
the chalk board, and we wrote what was on the chalkboard in our notes, and rarely were we ever
asked a question in class. The biggest exception to that was a professor who was in a wheelchair
and he wrote what was in his notes on a transparency that was projected on a screen rather than
on a chalk board, that was the variation. It was extremely passive, and we all had to sort it out
later. We learned to work together in groups because all we had was what we wrote down in
lecture and we had to figure out how to understand that and make sense of it.
Meanwhile, I took other classes, and this was just kind of my own interest that I thought,
well, if I’m only going to have 8 or 10 classes outside of engineering that I’m going to be able
to take, I wanted to make them count. I took these upper-level classes in the humanities and
social sciences which were over my head, but I just wanted to do that, so I took a class on five
romantic poets, so I took a class on women’s history in the United States, or something like
that. Just because it was interesting to me. I was taking a class, I think it was probably my junior
year; no, it was [the] spring of my sophomore year, from Elaine Pagels on Gnosticism and Early
Christianity. She had this way of teaching the class where on the Monday we would...She would
give her perspective on the reading and we would turn in a reflection paper that was just a couple
of pages long. On Wednesday, she would basically facilitate a conversation among all of us in
the seminar, there were probably a dozen of us. She would facilitate this conversation among us.
What always surprised me was that I felt like I belonged in the room. I felt like I had something
important to say. Despite my not having had any of the prerequisites for this class—I didn’t
speak Greek. I couldn’t read things—she’d come in and read, on Monday, she’d be translating
from original Aramaic and stuff and everybody would sort of be nodding and I’d be like “How
is she even doing this?” Just feeling completely both in over my head, but supported at the same
time that I had actually something important to say. Contrast that with engineering [for] which
I had all the prerequisites and yet every single time I was in there, they made us feel like we
didn’t know anything. I became curious [around] that time about why engineering couldn’t be
taught in the same way that my religion classes were being taught. I didn’t really get to pursue
that question, it just kind of rested in the back of my mind for a while.
When I got [into] grad school, I found that in chemical engineering at Carnegie Mellon
there were people that were much more interested in some pedagogical innovations. They were
doing project-based learning and problem-based learning, and they were just more engaged
with the literature. A professor named Ed Ko was at Carnegie Mellon at the time, he ended up
moving, changing universities later, but at that time he was there, and he was pretty well known
in engineering education circles at the time.
It was a campus that was just more engaged with conversations about active learning and
so on. I was educated in how to do that. There was a certified program for Ph.D. students from
the Center for Teaching and Learning and I went and pursued that. They taught us Bloom’s
taxonomy. They taught us the basics of what it would take to do active learning. I felt, at the
time, I was like “Okay, I can do these things.” I was teaching a project-based course that was
community-based as well, so we were working with the city of Pittsburgh on Pittsburgh’s urban
forest. We had seniors in the engineering and public policy program and some Master’s students
from the policy school working together in teams on how to assess the value of Pittsburgh’s urban
forest from an environmental perspective. What was it doing to mitigate climate change, having
all these trees around? What did it mean for property values? and so on; What did it mean in
different neighborhoods? And so on… We were looking at some environmental justice aspects
of that problem. I was coaching these teams and really enjoying doing that, and thinking “Gosh,
I think I really want to become a teacher.” Still not really getting at the heart of what I wanted
to understand about the classroom.
It wasn’t until I got to Smith College for my first faculty appointment…Smith is a lib-
eral arts college, it’s a women’s college.… In the fall semester, I taught an intro class, intro to
engineering with two other people, so a team-taught class. I stepped in and taught the class
the way the other people taught it, but then in the spring I taught thermodynamics. That was
really the first time I had my own class, where there wasn’t someone else setting the syllabus,
the curriculum, whatever. It was just me.
SUPERNATURAL AID: YOU HAVE TO READ TEACHING TO
TRANSGRESS
I contacted a friend of mine who I knew through other relationships. This was someone [that]
I was an activist with who taught sociology at Grinnell College in Iowa. I said, “Look, I need
to understand, like it’s time for me to really unpack this. What was different about what Elaine
Pagels was doing in my Gnosticism class compared with this other stuff? Because I know some-
thing about active learning, I know something about project-based and problem-based learn-
ing, but that’s not what this is. This is something else that she was doing. What was she doing?
What’s it called?” I didn’t even know the name for it. I couldn’t research it on my own because I
didn’t know what the keywords were. She said “Oh, you have to read bell hooks’ book Teaching
to Transgress.” By that point it was spring break, it was March, and I got the book and started
reading it. It completely changed how I thought about what was going on in my classroom.
It was the key to understanding, not exactly what Elaine Pagels was doing, because I think she
might describe her pedagogy differently, but it did talk about the power relationships in a class-
room. It talked about viewing students in a holistic way. It talked about valuing the authority
of experience and what students bring into a classroom. All of those things were things that
were elements that Elaine Pagels was doing in our classroom that were never being done in
engineering classes.
BELLY OF THE WHALE: THE START OF A 10-YEAR PERIOD
OF EXPERIMENTATION IN THERMODYNAMICS
I noticed that I was repeating some of the very same problematic relationships that existed in my
prior experience with thermo. This was true even though I wasn’t doing this passive lecture thing.
I was doing active learning. I was doing the stuff I was taught to do, but I could tell there were
students in front of the class that were engaged, and the students in the back of the class weren’t
engaged. I could just see it all unfolding in those same ways that I had been taught. The first
thing I did was I went back to my class after spring break and said “Here’s the problem I’ve been
noticing. I’ve noticed that some of you are sitting in the back of the class and you don’t seem as
engaged. I’d like to change the way that we’re sitting and so that we can actually face each other.
What do you think?” They said “Okay.” We started doing that. They felt that was better. I tried
this experiment that completely failed which was having them teach each other the material.
I said “Oh, well, why don’t you just prepare chapter eight and come in and let’s talk about it.”
That didn’t work so well, so I abandoned that. It started this 10-year period of experimentation
in this thermodynamics class. I was fortunate to be at a place that was, first of all, a brand-new
engineering curriculum where we were encouraged to experiment and encouraged to go to the
ASEE conference, learn about the state of the art in engineering.
The other really fortunate thing that happened is [that] bell hooks happened to come to
Smith College while I was doing this. That was the same semester. There was some connection
with the program in Buddhist Studies. She’s Buddhist, and they brought her to campus a couple
of times over the next few years and one of her visits would happen to be that semester, so because
it’s a small campus, I was able to go to her. They had a reception for her after her talk. I went
to her talk and then went to the reception after. I just walked up to her and said, “Is anybody
doing your pedagogy in a science or engineering class?” She said “Well, yes, but nobody has
written about it.” Then she said, “Whatever you do make sure that you publish what you do.”
I said “Okay.” She didn’t really give me any names, so I was able to kind of, I sort of Googled
around and tried to find some other folks and I found [a] couple things in science education but
nothing in engineering. What’s interesting about that is, I wrote up the thermodynamics class,
and submitted it to the Journal of Women and Minorities in Science and Engineering. It’s the only
time I’ve ever had a paper published without any revisions at all.
I fell into this ability to continue to innovate in that class because I applied for a CAREER
award to the National Science Foundation. My research area was actually risk-assessment and
risk-communication. I was doing technical research in this but my research, because my Ph.D.
was in engineering and public policy, was always interdisciplinary…. When I went to meet with
the program officers at NSF, I met with the environmental engineering program officer and he
said “Well, this isn’t really…this sounds more like social behavior and economic sciences, you
should go over there.” I went over to SBE [Directorate for Social, Behavioral, and Economic
Sciences], and the person there was actually someone from my research group, so she had gone
on to be a policy school professor in risk communication and was doing a rotation as a program
officer at NSF that year. When I met with her, I said “Look, I mean, you know exactly the work
I do.” She’s published in the same area, she knows the area really well. She said “Look, I’ve got to
tell you, don’t waste your time. As much as I’d love to fund this, this doesn’t fit what SBE funds
and I can tell for sure it’s not going to fit what engineering does.” She said “Don’t waste your
time writing this proposal. It’s not for CAREER,” basically. I was upset because I didn’t know,
you know, what can I do? They told us for tenure, we don’t expect to get a CAREER award, but
we expect you to apply for them. I was feeling this urgency that I had to submit one but had no
idea how or where.
SUPERNATURAL AID: LEARNING ABOUT A CAREER
AWARD IN ENGINEERING EDUCATION
Just again by luck, Rich Felder was in the office [of] a colleague of mine. He was in Glenn Ellis’
office. I was just walking by, stopped in to talk to Glenn about something else, and Rich, he’s
just a mentor to everybody, and he took an interest and he said, “How’s it going? What are you
working on?” and just asked me how it’s going. I said, “Well, it’s not going so well, frankly,
because they told me I have to apply for this CAREER award.” Rich has this very famous
resource called, “So You Want to Get a CAREER Award,” and it tells faculty how to write their
CAREER grant. I was like, “I’ve been told by NSF that my topic doesn’t fit.” He said “Well, you
know you can apply in engineering education.” And I said “No, I didn’t know that.” This was in
2004, and Rich said “Well, you know, they’re giving CAREER awards in engineering education
now.” And, so I wouldn’t have known that if he hadn’t been there. I had two weeks to write the
thing. I wrote it. I submitted it. It wasn’t the best grant proposal ever written but they funded it.
It was probably a risk for Sue Kemnitzer to do it, but I suddenly had a five-year funded research
program that would enable me to explore what it would mean to do bell hooks-like pedagogies
in engineering education.
I had used the word liberative to describe these pedagogies because bell hooks did and
that turned out to be really interesting. I wasn’t quite aware of what that would mean, and it was
an interesting move, because apparently that’s not a commonly used term. I used it because I
wanted to group together various kinds of critical pedagogies—feminist pedagogies, anti-racist
pedagogies, pedagogies that are considering class. All of these are grouped together under some
label, but probably the term ought to be critical pedagogies. All of that happened. That allowed
me to do these little experiments.
ROAD OF TRIALS: BECOMING COMFORTABLE WITH
CRITIQUE IN THERMODYNAMICS
The best thing was that grant allowed me to hire a colleague, and so I had this great half time,
actually he was quarter time in the beginning, a Research Associate named Lionel Claris. I hired
[Lionel] when he was a Master’s student in Education at Smith. He had been at Hampshire
College before that and did his undergraduate degree on political philosophy, so he knew all
of the social theory, and he had an education Master’s, and he was now teaching in the K-12
schools in Springfield, Massachusetts.
As we started talking, I was thinking about power relationships in the classroom and the
fact that I could ask the students to share power, and they would do it. They would go through
the motions of what I asked them to do. I want you to talk to each other more. They would
do those things, but they never really seemed to internalize what that was for. Why that was
happening? That [it] was about trying to change this fundamental set of assumptions that I knew
stuff that they didn’t know. That I had some position, the privilege in the classroom, and I was
trying to challenge that and mess with that in some way.
They didn’t understand that, so he was like “Well, why don’t you have them read something
about that.” I started having them read this piece [that Lionel brought in] from Michel Foucault
[1980] on truth and power in science. So, it’s specific to science, it lets them think about it in a
very concrete way that they can access. It was just three pages. Even if they found it impenetrable,
it was short. They would read it and we would unpack it. They took a whole day of class to just talk
about that reading and what it meant for the syllabus. What it meant about who decided what
was in the thermodynamics class, who decided what was in their textbook, who decided what
the discipline of thermodynamics constituted. This ended up being the most fruitful change,
and I didn’t realize all of the ways it was going to be so important, because at the time I had this
focus on pedagogy and I realized with this assignment that in order to change the pedagogy,
to really change it, I had to change the curriculum to an extent, and I had to change it at least
enough to insert this one reading.
I realized that once I did that, then I had students reading the textbook critically and saying
“Well, wait a minute.” The textbook that I picked was a textbook that had a lot of real-world
examples, so [it was] trying to relate to students. There’s this whole unit when they taught the
first law of thermodynamics, they taught this piece about energy and exercise and diet, so they’d
be like “You’re burning calories, you can do an energy balance on your calories in, calories out,”
kind of thing. But some of the problems that [the book] had them do were problematic from a
gender perspective and problematic from a women’s health or anybody’s health perspective. An
example problem was like, “Jack and Jill go to Burger King and Jack orders a Whopper and a
large fry and a large Coke, and Jill orders a Whopper Junior and a small fries and a Diet Coke.
If Jill weighs this much and Jack weighs that much, how much do they have to exercise to work
off their meal?” And Jill has to work way more because she’s smaller or whatever. You’re learning
all these gendered ideas about exercise. Then you’ve got these other problems where someone
diets and loses like 13 pounds in a week, and this one student of mine who was [an] eating
disorder survivor wrote a piece. The student was a survivor of Anorexia and she had read this
one problem that was about somebody losing 13 pounds in a week and she pointed out that
that’s really unhealthy weight loss, and so that enabled us to, first, we talked about it as a class,
but then that turned into an assignment where I asked the students to pick some of the problems
from that section, because a lot of them were really problematic on different grounds, and just
talk about them. They could do the problem and then critique what it is saying about health,
exercise, whatever, and then write their own new version of that problem, or a wholly different
problem if they wanted to, that related to their interest in some other way. That was a really great
opening, because the students became comfortable with critique.
REFUSAL OF THE CALL: BECOMING A FEMINIST
ACTIVIST
And it led to a second thing where a student came up to me that same semester and said “You
know, I read about this thing on the Internet and I don’t believe everything I read on the Internet
so I just wanted to ask you, have you ever heard of this thing called the Montreal Massacre?”
I had heard of the Montreal Massacre because I was a first-year engineering student when [it]
happened, and it left a big impression on me, because it was a critical moment where, for me,
there was a microaggression associated with the event, so I had read about the event, heard about
it. There was a vigil that the women’s center on my campus was organizing about [the Montreal
Massacre]. I had a chemistry exam that night, so I went to take the chemistry exam, and we
were all talking before the exam and this guy in front of me had turned around to see. We were
talking and he said “Oh, what’s your major?” And I said “Oh, chemical engineering.” And he
said “Oh, you’re an engineer. Where’s my gun? Ha ha ha ha.” Everybody is laughing. I’m just
like did I hear him right? Right. I’m like did he really just say that? Then right then, it was like
“Pencils up, take the test.” I spent a good amount of time going, “What else could he have said.
What the hell?” I walked out of that exam and this vigil was still going on and so I went to
the vigil and what was upsetting to me was there was [only] one engineering faculty member
there at all, and he was there in his role as, they had these things at Princeton called Masters
of residential colleges. There were four or five residential colleges where students live their first
two years and they had faculty that had an administrative role at the college. He was in that
role, so he was there because he was involved in student affairs. He wasn’t just there because he
was a concerned engineering faculty member. …. He was the only representative of the entire
engineering school. The Society of Women Engineers wasn’t there. Nothing. It was all about the
Women’s Center. That radicalized me in undergrad, and it was a big moment for me because it
is pretty much directly how I became a feminist activist. The head of the Women Center found
me at that event because I said something [like], “I’m an engineer, this shit just happened to
me.” She approached me after and said, “You should really come to the Women’s Center.” And
so that started my engagement with the Women’s Center at Princeton. Anyway, [back to the
conversation with the Smith student] the Montreal Massacre, I was like “Yes, I have heard of
this thing.” She’s like “Well, how come we’re at the first women’s college engineering program.
Like why are we not learning about this event? This is important.” Anyway, I said “Oh yeah,
sure.”
STORIES FROM MY CLASS: THE MONTREAL MASSACRE
AS A CASE STUDY
I went in the literature, there was a women’s studies class that used a case study, like a memorial
to the women from the Montreal Massacre as a way to talk about violence against women. I
read [the case study], it was in Feminist Teacher, it’s a journal, and so I picked up that journal,
read her lesson plan that involved doing a memorial to the women [by] saying their names, and
then talk about violence against women. I picked up some important pieces from that, like not
talking about the shooter being an essential thing. You don’t want to give airtime to that because
then you end up in this criminology conversation that you don’t want to be having.
I took the basic structure and some tips from that and then found a bunch of videos from
the Canadian Broadcasting Corporation that just had the basics of what happened that night and a
really poignant survivor retrospective. Because one of the things, the guy literally yelled [was],
“You’re all women who are going to be engineers. You’re all a bunch of fucking feminists. I hate
feminists.” Right when he shot them. There was a woman who said “No, we’re not feminists.
We’re just trying to get an education.” She’s pleading with him not to kill them and she was
in the original group of ... There’s this classroom of about 60 people, and about 50 of them are
men, and he orders the men out of the room and there’s maybe 10 women left, and he opens
fire on them. Most of those women died. She was shot but survived.
They had an interview with her five years later where she’s working as an engineer. The
woman that’s interviewing her is this famous feminist journalist. [The shooter’s] suicide note,
had this list of women that he wanted to kill that day, but he couldn’t access those women. Her
name, this journalist’s name is on that list. They’re having this conversation and [the journalist]
goes, “What does he represent to you?” And [the survivor] said “He’s just a poor guy.” She said,
“That’s all.” She said “Yeah. Yeah. He’s just a poor guy.” The journalist says “Well, what did you
represent to him?” She says “You.” She’s like “Yeah. You know, like you.” And then she starts
naming these other Canadian feminists. You, so and so, so and so, but I was easier to take, and
then she goes “Well, maybe what we were doing is the same as what you all were fighting for.”
She’s like thinking out loud about, like, “Is being a woman in engineering a feminist act?” She’s
thinking through this out loud to herself.
It’s this incredible moment. It’s like a five-minute clip. I played the clip. I’m like “Okay,
so what do you think about this?” You get all these different perspectives about people’s comfort
or lack of comfort with the idea of feminism and they talk about that, and they start talking
about their internships. They start talking about intersectional ideas of feminism. They’re not
just talking about their experiences as women [with] internships. They start talking about how
race intersects with that, how class intersects with that, sexuality. They have this incredibly rich
conversation that all I had to do was spend maybe 15 minutes [at the] beginning of the class
presenting this incident to them, and they didn’t get hung up on the violence part of it. They
didn’t talk about murder. They immediately got the relationship and just started talking about
the stuff.
I think it helped that I was in a liberal arts environment because I think they had more
tools to talk about this stuff than they might have elsewhere, and this was true of the Foucault
reading as well, that they all had heard of Foucault. None of them had read him except one or
two that took a sociology class or something. Most of them, they knew their roommates were
reading him. They were like “Oh, yeah, I know what this is.” There were people they could talk
to in their dorms about it. Now they have this opening for conversation outside of class that gave
them, I think, a lot of good common ground with other folks to talk about everything and what
it means for them to be an engineer and put together their identity with what other people are
doing. That’s all [a] big aside, those are the kinds of things I was able to do in my classroom that
were really a departure from what, at least at that time, anybody else was doing to my knowledge
in their classes.
This did empower them to raise questions in other classes. They would come back to me
and tell me stories about “Well, so and so teaches really traditionally and I asked him about
it.” They did learn to push a little harder on some of the other faculty. In the early years of
engineering at Smith, I think the other faculty were pretty receptive to that. Everybody was in
an experimental place. Everybody was willing to play around with stuff. “Yeah, I’ll take your
suggestion. Let’s shape the assignment this way, whatever, whatever.”
ROAD OF TRIALS: PUSHBACK FROM STUDENTS AND
COLLEAGUES
As time went on and people got more and more busy with other stuff and less willing or less
rewarded for doing stuff with their teaching—whatever it was, they became less interested in
that conversation. There was a real shift from this kind of open place to a more like, well, “This
isn’t how I teach” kind of thing. “This isn’t how engineering is.”
There was this other pushback that happened where, some of the time and in the later
years, I started to get more and more pushback from the students that what we were doing in
that class wasn’t engineering...when I asked them to do the Montreal Massacre exercise, when I
asked them to think about ethics. The most shocking thing occurred the very last time I taught
that class at Smith, [which] was the fall of 2012. [The students] entered a National Academy of
Engineering [NAE] competition that was making engineering energy ethics videos. I had the
whole class do it. The requirement wasn’t that they had to enter the competition, but they had to
make the video in teams. That was a semester-long project, [which] was to make a video about
some issue of energy ethics, you pick, totally open.
They didn’t see how any of that related to thermodynamics. This was despite my spending
a lot of time trying to address it explicitly and having those conversations about what [ethics
has] to do with thermo. I explained to them what the NAE was; I didn’t
assume that they knew that. They all entered it, and they won; four teams won awards from the
National Academy of Engineering that year. It was a big deal. They won money. They got to go
to a conference.
They told the NAE that they didn’t think ethics belonged in a thermodynamics class, and
then the NAE people got back to me, and they’re like “Do you know that they’re saying this?”
And I’m like “Yeah, I know.” It had gotten to the point where there was more pushback. This is
a really interesting possible result of the pedagogies I was using.
One of my favorite examples of this happened pretty early on. It was right before Thanks-
giving break and everybody was really stressed out. There was a lot of stuff due in all of the classes.
[A student] came in and I had handed back a couple different assignments. I was asking them to do
learning reflections, which were written, or ethics essays, which were written. And then they
always had problem sets, which were shorter because I wanted them to spend less time on them
to make room for the reflections. I had this theory, which I would explain to them. It was the
rule of the nth problem, a law of diminishing returns. We have you do so many problems that
there’s a learning curve and as you approach the nth problem, you don’t really learn as much
when you do it because you already got it by that point. I want them to stop there and then
spend their time doing something else. Anyway, I was handing back assignments, and this stu-
dent got really mad and held up her essay and said, “This isn’t thermodynamics.” Then she held
up the problem set and said, “This is thermodynamics.” Or maybe she said engineering, I forget.
This is engineering. This is thermodynamics, not this. It was a really poignant moment because
that’s what I wanted them to do. I want them to be authoritative. I want them to feel like they
can decide what belongs in the class, to the point where they’re telling me this doesn’t belong in
this class. That’s a good thing. Though maybe the irony of it, I guess, is that they’re reasserting a
traditional view of engineering in doing that. I thought, “Well, this is good, I want her to be able
to do that.”
It was a perfect setup because right after Thanksgiving—and I had the weekend to think
about it—right after Thanksgiving, the next topic was co-generation. And they learned it, it’s
in their textbook, and they’re learning how to analyze co-generation power plants. They had
taken a tour of the Smith campus plant and we had a heating plant that did not generate any
electricity. All it did was, they’d burn fuel oil and then use the steam to heat the whole campus.
There had been a lot of talk [about] whether or not they wanted to retrofit that facility to generate
electricity and feed some back to the grid. They hadn’t done it and there was a widespread belief
on campus that they ought to do this and they should have done it yesterday. I was able to talk
about it and the students said “Yeah, why don’t we have co-generation on this campus, that’s so
stupid.” And I would say “Well, why do you think?” They’d say, “The [Board of] Trustees.” So,
OK, say I’m on the [Board of] Trustees. What do you need to communicate to convince me?
Suddenly they realized that this was about communication: that you had to be able to
articulate to someone who’s not an engineer why co-generation was going to save money, why
it was going to be more environmental, why it was going to be good PR, why it was the right
thing to do. I was able to, the very next class, make the case again for why all of these things
were engineering and were really important things for them to know in a thermodynamics class.
I was able to continue to have the conversation with the class, and I liked having that
tension because it was a productive tension. There was a type of resistance going on, but I was able
to make sure that it stayed a learning moment. Then toward the later years, it stopped feeling
that way; I’m not sure why. I think some of it was less direct, like the students weren’t coming
to me directly and saying this in class. They were sort of going sideways, they were telling the
NAE, not me, that kind of thing. It became harder to bring stuff back for conversation.
As I said, for whatever reason, toward the end I just got more and more pushback.
A student told me at one point that one of my colleagues, her advisor, told her that I just didn’t
like thermodynamics; I didn’t like the material, so I taught other stuff, because I didn’t like the
technical stuff, which wasn’t true at all. They said that, and it became a widespread belief among
students, so it was hard to refute that one; it’s very political in some ways.
ROAD OF TRIALS: REQUIRED SERVICE LEARNING IN
THERMODYNAMICS
One of the things that really didn’t go over well was the time I did a service learning or
community-based learning project where all of [the students] had to go to Springfield, Mas-
sachusetts. I thought it was a really cool project. It was about the energy density and energy
cost of food, and we were working in a food desert in Springfield. There was a
women’s studies class that was looking at women’s role in organizing this urban farmer’s market,
where it was a struggle to get the folks from the surrounding farms to drive the extra distance
into Springfield. They were filling this critical need for fresh vegetables because there was no
grocery store in this neighborhood at all. There were a couple convenience stores, and you just
couldn’t get fresh vegetables at these places for any reasonable price anyway, and so these women
organized the farmer’s market there.
My class was looking at doing an energy analysis, because there’s this fascinating study
that these folks at the University of Washington did—this analysis where on a per calorie basis,
vegetables like lettuce cost up to 100 times more per calorie than fats and oils or potato chips.
There’s this whole explanation of hunger, about why it’s cheaper to buy fast food and junk food,
and they just have a lot of interesting data. Taking that as the model, can we collect the local
data for the farmer’s market and compare it to the convenience store that people could get to
and then the nearest grocery store which you had to take a bus to, and sort of look at, okay, what
are the energy costs of food in this community?
There were some really interesting things that came out of [the energy analysis project]
which were that there’s an assumption that farmer’s market vegetables are more expensive. That
wasn’t always true and there were places where the vegetables were either cheaper or there were
vegetables that they just couldn’t get elsewhere. There were a lot of Puerto Rican and other
Caribbean families there. They were looking for particular kinds of greens, the farmer’s market
had those, like to make callaloo. They couldn’t get that otherwise. There were a bunch of things
the farmer’s market was able to provide, and then we presented the results to this larger group
of folks, including food bank people, and some of the farmers, and the women that organized
this farmer’s market.
What was fascinating was the farmers started talking about the policies of the grocery
stores in underselling them. Corn comes in season [and] the grocery stores will sell corn at a
loss, just to bring people into the grocery store. They started talking about how [we can] counter
that. Because the farmer’s market is in a food desert, it actually gave them an opportunity, like a
market for their corn that they wouldn’t have otherwise. There were some really interesting pieces
that wouldn’t have come out without the analysis that the students provided, but the students
had to travel 25 minutes to go to the farmer’s market. At the end of the day, the students hated it.
They hated it because it was required, I think. Most service learning classes are elective classes,
where students sign up for them, knowing what they’re signing up for. I said “Okay. Clearly, I can’t keep
doing that in this class because it’s pissing them off.” I couldn’t justify continuing to do that in a
required course. I did service learning in my elective courses, but I didn’t do it in thermo anymore
after that.
RETURN THRESHOLD: PROTESTING A NUCLEAR POWER
PLANT
The pedagogy that I was doing in the early 2000s led directly to my meeting a couple of people
who are part of this network on engineering, social justice, and peace. Early on when I was
presenting about my thermodynamics stuff, I was in a session with the liberal education division
at ASEE with an engineering professor named George Catalano at SUNY Binghamton. He
was presenting on peace pedagogy, basically how to teach peace in engineering classes. I was
talking about, I think that particular time I was talking about globalization and how to teach
critical perspectives on global engineering. We were in the same session and we were like “You!
I found a kindred spirit.” I invited him to my very first kickoff meeting for the critical pedagogy
project, and then he invited me the next year to go to this engineering, social justice, and peace
meeting.
I was part of that network of folks and I began thinking more and more about community-
based learning. After I did that community-based project with the food bank, I did a different
community-based project in my engineering ethics class, which was an elective on the ethics of
nuclear power. It was during a critical point where the Vermont Yankee Nuclear Power Plant had
just had its license extended by the Nuclear Regulatory Commission. Its life had been extended
by 20 years, but there were serious problems with how the plant was being managed, and they had
had a number of problems like a cooling tower collapsing. It was rotted wood and rusted bolts
that caused that thing to collapse. It should never have happened. I think they were just totally
deferring maintenance on that thing. There was a series of ridiculous things that happened. The
state came to take an official position of opposition to the nuclear power plant continuing to
operate. They didn’t renew its public utilities license and said, “You can’t continue to operate
in the state of Vermont.” Well, they challenged this and said “Well, if the Nuclear Regulatory
Commission has approved us, we should be able to operate regardless of what the state says.”
And that went through to federal court and they made a ruling that they could continue to
operate.
The state of Vermont actually lost that case, and so then on the day that the state’s license
expired, there was a massive protest, over a thousand people in the street, and 138 people got
arrested, including me. That fall the plant was still operating and the community was interested
in continuing to take data. There had been little tritium leaks here and there. Stuff leaking
from this plant. People were like, “What’s happening when they release steam?” “What’s in the
river?” “We know it’s really hot when they’re releasing steam into the river, but what happens
downstream, how are fish being affected?” A bunch of questions that were ripe for citizen science
projects.
My students started thinking about, okay, “What can we do as engineers to help support
what the citizens want to do?” “What questions do they want to ask, how do we do this?”
It was fascinating because the students were coming up against differences in what the nuclear
industry was saying was valid data and what the citizens (some of whom also possessed expertise
in nuclear power) wanted to take as data. They had to confront these different ways of knowing.
What did they think was valid, and who do they believe and why? Lots of really critical thinking
processes [were] going on and all in this context of citizen science, or as I would call it, citizen
engineering.
This was coming up against my own questions about how I, as a citizen and an engineer,
oppose this nuclear power plant or get involved in other kinds of projects. My involvement in
the engineering, social justice, and peace network led me [to] ask... where are the places that
engineers ought to be acting? That was one way that I found that was very concrete and local
to me that I could be out there as an engineer and say “Look, I’m a risk analyst. You know,
I’m in risk assessment, risk communication. I know about nuclear power risk. You know, people
talk about how low the risks of nuclear power are. And after Chernobyl and Fukushima, those
numbers were recalculated. They had been based on models, not on experience, but
once you factor in all the different accidents that have taken place, the probabilities go up from
what the early nuclear reactor safety studies said.”
It’s not to be alarmist or create undue concern, but the Vermont Yankee plant is a
Fukushima clone. It has the same containment problems that the Fukushima plant has, and,
no, there’s not likely to be a big earthquake, but there are hurricanes, there are floods, and a lot
of the same questions apply when you start really looking at the risk and what the evacuation
plans are, and so on and so forth. Long story short, I was involved in that and then I went to
NSF. The plant closed, by the way; they’re decommissioning it, which is good.
RETURN THRESHOLD: CHALLENGING THE POWERS
THAT BE
I think the cultures in the different places I’ve worked are more similar than they are differ-
ent in that there’s always a creative tension with traditionalists, however that manifests itself.
There are always people who think, “Well, this is the way you have to do it.” There are unchal-
lenged assumptions everywhere, and I’m sure they exist in my class as well. When you’re trying
to do something different, you’re always pushing against those and you have to push. That is the
whole [point] for me, that’s the definition of it. If you’re not pushing against those, you’re probably not
being truly innovative. If you’re not getting resistance, you probably aren’t challenging the right
things. You’re not being really challenging of the powers that be, if you’re not getting pushed
back.
That was true at Smith, and it is true at Virginia Tech. You want to be supported. You have
to find where your support is, so that you can continue to do that work and I think at Smith, I
had that support from day one from the top down because we were doing this new engineering
program. I think I didn’t have to build it because it was given to me at the beginning. Getting
the CAREER award bolstered that support in a big way, so NSF was able to support that work
in important ways so that the critics couldn’t descend until toward the end of my grant.
At Virginia Tech, there’s the same thing of, well, we are funded to do this. Stephanie
Adams had this huge grant, so sure, I can do it. We’re trying to be good citizens in this new roll-
out of the new general ed curriculum, with backing from the Provost. There are ways in which
you go, and you find your support where you can and then go with that. And yeah, if you’re doing
it right, you’re going to hit some pushback. Then depending on who that’s from, you address it
in different ways, and if it’s from students, well, you want to morph and change and do things
that are going to be responsive to their concerns like I did with the service learning thing.
If it’s from the powers that be, well, [you’ve] got to really think about, how do you work
with them going forward, if it’s [that] you’re not meeting this requirement or spending too much
time here. Well, you always want to just continue to be creative, is the way that I think about it,
so there’s a new constraint, well, you work creatively with, around, and through that.
ROAD OF TRIALS: CREATIVE SOLUTIONS TO
CONSTRAINING POLICIES
One of the big constraints at Smith with doing the collaboration with the women’s studies
faculty was that if you team-taught, you got half a course credit. We knew that team teaching
was going to be twice the work for us because we’d have to somehow join women’s studies and
engineering intellectually. What we decided to do instead was [say], “Look let’s each teach our
separate classes, we’ll enroll them separately, but we’ll have them do joint meetings together,
we’ll have them do this joint project together.” By doing that with a couple different classes,
we found a workaround that was successful. We didn’t directly challenge this constraint even
though it is really prohibitive of collaborative work to say that team-teaching is half the credit.
We found a way around it. It’s just that attitude of finding the creative solution to stuff. You do
want to always be in that give and take of pushing the boundary.
APOTHEOSIS: PUSHING THE BOUNDARIES IN ANY
CONTEXT
I don’t want to downplay the importance of the institution because I do think I was able to do
what I did at Smith in part because I was at Smith. I really do think that played a role.
That said, I think there are other ways I would have pushed the boundaries if I had been at
a traditional engineering school from the beginning. It would have looked really different, but
it does ultimately come from within the individual to do this creative work because
it’s your class. You’re the one that’s generating the ideas that are going to push the boundaries.
If I’m advising a new faculty member that’s going to one of these places, yeah, you want to
go to the institution that’s going to let you do the work that you can do without getting in your
way too much, but at the end of the day, every institution is going to get in your way somehow.
It is about recognizing that and not seeing it in too black-and-white a way. I know a lot of junior
faculty are like “Oh, I just have to get tenure. I’m going to just mind my own business, I’m going
to get tenure, then I’m going to do what I want.” But that never works. Look at all the senior
people. How did that strategy work out for them? Clearly, this gets drummed out of you if you
take that path.
You always have to keep within you the sense that, “Well, okay, I’m going to push the
boundaries.” Sure, you don’t want to do something that’s going to get you fired but there’s so
many things in between doing the one thing that’s going to be a bridge too far, and doing all
these other things that are going to push boundaries and make people think and challenge the
status quo, and maybe make real change as you go.
Or maybe you do want to make that stand that gets you fired. There might be reasons to
do that. I would never rule that out, but I think most of the time that’s not going to happen.
Many people fear that way more than it actually happens, so you do good work, you do what
you love. Sometimes, changing institutions ends up being the best route. At Smith, I had a great
cadre of folks that I could stir stuff up with, despite the pushback. And, as I engaged more on a
national level, I started thinking more about the bigger picture of what I was doing in the field of
engineering education, and how could I have an impact outside of my institution. As the Smith
experiment wore on, people paid less and less attention to what was going on there, and kept
saying “Well, you can do that because you’re at Smith. You’re a special case.” Doing something
at Virginia Tech would obviously directly impact a large number of engineers right away and
have more influence on the rest of the enterprise of engineering education. That made the move
make a lot of sense.
MASTER OF BOTH WORLDS AND FREEDOM TO LIVE:
THE IMPORTANCE OF REFLECTION
I think there’s something about the reflection of the faculty member that matters in this story. I
had a lot of opportunity and still do have a lot of opportunity to talk with others about what I’m
doing and why. Having Lionel at Smith and having my other colleagues at Smith too, both inside
and outside of engineering, who were able to talk stuff through with me and just be supportive
and help troubleshoot and help creatively was essential.
The process of reflection, of really taking the time at the end of the semester, matters. At Smith
this was built into our ABET processes, and I still think it’s a really worthwhile exercise; it
was built into our NSF grant too, with the science and engineering project. At the end of the
term, you stop, and you say, well, what worked, what didn’t, what’s the student feedback, what
am I thinking about how to do this better the next time, what are my goals, how do I think
about how I’m going to push the envelope next. That keeps the spirit of innovation alive.
You’re not just doing the same thing every year, good, that’s done. You’re actually taking
the time to have a reflective practice about what you’re doing and say I need to change this [next]
time, more of this, less of that, tweak this, try this new thing here, whatever. It’s a constant pro-
cess. The thermodynamics class over 10 years became almost unrecognizable from the original
one, but I never changed more than one or two things a semester. I never completely overhauled
the class.
It was about asking just what am I capable of doing and doing well? If I’m going to create
a new type of assignment, well, I’m going to do that and take something else out to put it in and
just do that and not ... It’s just a sustainability thing. You don’t want to completely, yeah, you
just don’t want to make yourself so distraught. I did that community-based learning thing with
the food bank, and that’s basically the only thing I changed that semester, because it was a big
deal to do that.
It’s just recognizing that these things don’t happen trivially. They take a lot of thought.
They take a lot of adjustment. They take a lot of troubleshooting. And small changes can be
tremendously huge, like that Foucault change [that] led to all these other changes and all I did
was add one assignment. I added one thing. Read these three pages, talk about it in class, write
an essay about it.
That led to a whole bunch of other stuff, and so that’s being open to that too, and not
predetermining, “Well, I’m changing this, this, this, this, and this.” No, I’m just changing this one
thing; let’s see what happens. Letting that play out organically allows for students to shape
the class. That’s part of it.
ADDITIONAL RESOURCES
American Society for Engineering Education (ASEE), annual conference. https://www.asee.org/conferences-and-events/conferences/
Felder, R. (2005). Resources in science and engineering education. http://www4.ncsu.edu/unity/lockers/users/f/felder/public/
Felder, R. (2002). So you want to win a CAREER award. Chemical Engineering Education, 36(1), pp. 32–33. http://www4.ncsu.edu/unity/lockers/users/f/felder/public/Columns/Career-Award.html
Foucault, M. (1980). Truth and power, Alessandro Fontana and Pasquale Pasquino, interviewers. In Power/Knowledge: Selected Interviews and Other Writings 1972–1977, C. Gordon, Ed., pp. 131–133, New York, Pantheon.
Hooks, B. (1994). Teaching to Transgress: Education as the Practice of Freedom, New York, Routledge.
Riley, D. (2003). Employing liberative pedagogies in engineering education. Journal of Women and Minorities in Science and Engineering, 9(2). DOI: 10.1615/jwomenminorscieneng.v9.i2.20.
Riley, D. and Claris, L. (2006). Power/knowledge: Using Foucault to promote critical understandings of content and pedagogy in engineering thermodynamics. Proc. of the ASEE Annual Conference, Chicago. https://peer.asee.org/155
CHAPTER 3
Experiencing Vulnerability and
Empowerment in Teaching
Sara Atwood
Narrative constructed by Brooke Coley
Whereas my walking around and coaching them and challenging them and saying, “Now,
how does that work?” they didn’t perceive that [as teaching]. So, I did get comments, es-
pecially from first-years, of, “She didn’t teach us anything. I taught myself.” Well, yeah. I
coached you to learn how to learn. That’s the point.
Sara Atwood is an Associate Professor and Chair of Engineering and Physics at Eliza-
bethtown College with specialization in mechanical and biomedical engineering.
CALL TO ADVENTURE: FROM CHILDHOOD TO
UNDERGRADUATE, BECOMING AN EDUCATOR FIRST
I think when—you know, hindsight’s 20-20, but when I look back at it, I think that I’m kind of
an educator first and an engineer second in a lot of ways. Growing up, I never played with dolls, I
actually lectured to my stuffed animals and had a little chalkboard easel. My mom was a teacher,
second grade, and a lot of people in my family were K-12 educators. So, I grew up around that
and always enjoyed school, obviously. In high school, I tutored math for some extra money,
and word spread through teachers. I was always really good at math and science, although my
favorite class in high school was actually English and Literature. But people told me, “Oh, you’re
good at math and science, maybe you should consider engineering.” I went to Dartmouth for
undergrad, and part of why I liked that institution [was the variety of options]. I didn’t look
at any institutions that were engineering-specific, so I wasn’t really thinking about my major or
engineering going into college, necessarily. It was kind of in the back of my mind, but I wasn’t
going for that.
Then, at Dartmouth, a liberal arts school, you didn’t declare a major until [your] second
year, and at that point, enough people had said, “You’re good at math and science, consider
engineering,” that I took the introductory engineering courses, and I really liked that and really
found a home. At Dartmouth, it was a very close [community]—and it’s grown a lot now—but at the time
that I went, we had maybe 30 students in a cohort, so it was pretty small. I found that it was
comfortable, [a] home environment and that’s, I think, a big reason why I went in that direction.
I feel like, looking back, maybe I could have gone in different directions. But, I liked that. [I]
liked the impact that engineering has on the world. And [I] had some professors that were very
good mentors to me.
And so there at Dartmouth, I really got that experience of more of the teaching-focused,
rather than research-focused. Now, I think it’s transitioning a little bit more away from that,
which I think is sad. But at the time, the faculty were really teachers first. It was still more
lecture-based, but I feel like they were all very aware of doing a good job of educating; it wasn’t
secondary to them, it was their primary thing. So, I think that they were starting to do [active
learning], we would certainly work through a lot of examples and things like that, it wasn’t
necessarily what I would call active learning now with small group examples. But, it was not just
like you only saw the professor’s back the entire lecture, and you were just scribbling notes the
whole time.
CALL TO ADVENTURE AND SUPERNATURAL AID:
EXPERIENCES AS AN UNDERGRADUATE TA
[At Dartmouth] I had a couple of professors who encouraged me, “Hey, you would be a really
good professor.” And I did some TAing [teaching assisting] there, so I think that might have ac-
tually been formative in my ease with embracing active learning, because I ran problem sessions
each week. So maybe more like a grad student would do at an institution with more grad stu-
dents. And in those problem sessions, it was really just small group, walking around [to] people,
explaining how to do problems, basically doing that more small-group, problem-solving, active
learning model in my problem sessions. And I did grading for them, and had a couple of faculty
that kind of brought me under their wing, putting together exams and things like that. So, I got
some mentorship there.
So then when I was looking for grad school, I was really specifically looking to go to grad
school to become an educator. And I didn’t know anything about engineering education pro-
grams at that time. I graduated Dartmouth undergrad in 2003, then I stayed for my Master’s—so
[I] graduated in 2005, then was looking for Ph.D. programs around that time. I think at that
time there might have only been Purdue and Virginia Tech, [engineering education] was not a
big thing at all and I feel like those were even pretty new, is my sense. I just hadn’t heard of that,
I hadn’t heard of the [American Society for Engineering Education] (ASEE). None of that. So,
I was looking specifically at engineering programs in mechanical engineering. Dartmouth was
a general engineering degree undergrad, so I had [a] broad base.
So, I thought, “Okay, I’m going to apply to a couple places and be selective, or I’ll just
go and be an engineer, or even teach high school physics or math, or something like that if I
don’t get into these grad schools.” I applied to UT Austin because I’m originally from Texas,
and then Berkeley, because for some reason, I had in my head I wanted to try out the west coast,
and I had a professor at Dartmouth who had done a sabbatical there and knew an advisor that
he thought would be a really good mentor to me and was doing work I was interested in.
CALL TO ADVENTURE: EXPERIENCING FACULTY WHO
PRIORITIZE RESEARCH FIRST
I ended up going to Berkeley, and that was a huge shock because the difference in teaching and
student focus from Dartmouth to Berkeley was enormous. My first semester I was like, “Oh my
gosh these are the worst teachers I’ve ever had in my life.” And in fact, I feel like there was almost
a pride in that [teaching] wasn’t their thing. I’ve known some people who have gone to R1 type
places and they’ve been told that, “If your teaching evaluations are high, you’re not doing things
right. That’s not where you should be focusing your time and effort.”
So [I] was just thrown into the deep end of the traditional lecture style. No working
examples. You know, the professor’s back, working on the boards the whole time. They finish
the one [board], they scroll it up, just keep going on to the next one. And I had a really rough time
my first semester, transitioning to that. It was very difficult going into professors’ office hours
and they wouldn’t even be there at their posted office hours. And I was like, “What is this?” At
Dartmouth, it was an open-door policy. [At Berkeley] you didn’t know where they were, you
couldn’t track them down. I had a really hard time. And honestly, the problem sessions with
TAs also were like recitation sessions and not that great because most of the TAs were
not that interested in teaching, either. I think it was a little bit of that rude awakening into that
style. It made me, first of all, not want to work at an R1 (very high research activity university)
and secondly, really kind of reject that more traditional lecture format, because that was such a
rude awakening and it took me a while to adjust to that.
SUPERNATURAL AID: FINDING MY HOME
Then when I was at Berkeley for a longer time, I did some TA-ing [Teaching Assisting] and
had courses with a few professors who cared about teaching. I luckily had a mentor (Dr. Lisa
Pruitt) who did focus a lot on education, and she sent me to ASEE, even though I didn’t have
any conference presentation, just to expose me to it because I wanted to go in that direction. I
TA’d for her… several times. And through that, I sort of really enjoyed the time that I spent on
that, more than on my research. So that was kind of a big clue for me as to what I should do in
the future. And, actually, right after I passed my quals [qualifying exams] they had a professor
that went on medical leave, and another was going on Sabbatical or something.
[As a result] I got to teach a course at Berkeley all on my own. I was the primary instructor.
And I used more of what I had learned at Dartmouth. It was a 115-person class, so it was not
the style where you could really do the small-group, walk-around approach very easily, and I didn’t have,
at that time, any exposure for how to do active learning in a larger setting, because I had never
instructed in a larger setting. But I did try to do some aspects of it in terms of, my lectures were
a lot more on solving example problems, and stopping [for questions]. I didn’t know that it was
called active learning, then, and I didn’t know what the right way to do it was, but stopping and
saying, “Okay, now you do the next two steps of this derivation to try to get down to where we
only have this and this left,” and throwing it back to them a bit. And at the end of that, I got
the highest teaching evaluations in the department, and my comments were things like, “Wow,
she works examples!” And, “This is like having a discussion section every lecture, it’s great! I’m
learning so much!” And so that was also pretty formative to me.
It was at the end of my time at Berkeley that I started going to the ASEE sessions and
conferences. And I was like, “Oh, this is home, this is heaven, this is amazing!” Every session [I
wanted to attend]. So that was huge for me.
FIRST THRESHOLD: FINDING THE RIGHT COLLEGE
AND CONNECTING WITH THE STUDENTS
Then I knew, when I was looking at schools [after my Ph.D.], I wanted a teaching focus. I was
looking for a liberal arts college, an accredited program, small residential college. There are not
many of those, actually. Just kind of a handful. I ended up here at Elizabethtown. And right
from the beginning, I was doing a lot more of [the student centered, active learning]. I had been
successful doing what I did at Berkeley in terms of just working through more example problems
and throwing those [out] to the students in small chunks or whatever. [At Elizabethtown] it
was much smaller classes, so that was a little more natural. And I think another thing that really
helped was I was closer to my students’ age, being a newly graduated grad student. And my
advisor back at Berkeley had told me, “The years that you are seeming just like an
older sister, or someone who could be kind of in their friend group, embrace that. Kind of use
that to your advantage.” I think that also made it natural to kind of walk around and work on
problems with them, and be a little more on their level because we were so much closer in age.
And that has changed a little bit over the years, and that’s a little bit hard sometimes to deal
with, that I’m just getting further away from them socially, so that sometimes makes that gap a
little bit harder to close.
SUPERNATURAL AID: LEARNING FROM OTHERS
My advisor had done NETI [National Effective Teaching Institute] with Rich Felder and Re-
becca Brent. And she had done that right when I was graduating. I knew of it, and I knew to
look out for it, even though our college, Elizabethtown College was not on their list, because
we were a newer program and just not all hooked into everything or well known. I knew my
first year [at Elizabethtown] to look out for the NETI invitation, and I think I reached out to
Rich Felder and made sure that my dean got the invite. And so, myself and another colleague
who started at the same time went and did NETI after our first year. And I think that it was
actually better after our first year, rather than before you start, because you have a year to sort of
see what’s natural, see what works, see where you need to pay attention to like, “Oh, that’s how I
could be tweaking that a little bit.” Whereas I think if you get it before you ever do the classroom
all yourself, you don’t quite know what the more important parts are, the subtler parts, or where
you know you have a little bit of trouble and you need to make sure and pay attention to that.
But [NETI] was huge. That was very, very formative in my embracing of active learning. But
I do think everything that had come before that had sort of primed me to want to do [active
learning, and NETI] definitely refined that technique.
Then my second year [at Elizabethtown], we hired another young faculty, and we then
sent her to NETI right after her first year because it had been so great. And she came back and
she actually embraced the gap notes piece (see Felder and Brent [2015] for more details about
gap notes), which I had never done before just because I was doing all new preps. [With gap
notes,] what I’m talking about there is basically having a note packet that has blank spaces for
where the students fill in, but it provides kind of a structure, a scaffolding, and then there are
certain things the students fill in. So, I tend to just type out, if it’s a definition or something,
I don’t want them wasting their time writing out a sentence. Or what’s been really helpful to
me, I have the objectives right up top, the first page of the gap notes, and then the last page
I just have a blank gray box that says “summary.” So, at the end of each notes packet, we do a
summary and they write it there. And then, a lot of what [the notes] consist of are pictures and
problem set-ups. So, instead of them writing down all the given information and the figure, just
giving them that information and then spending that time working on the problem. That’s sort
of what the gap notes are.
At the time that [my colleague] did it—she did it in the fall after she came back and
really liked it—and had a lot of success, and I was like, “Ooh, I love this. And now, I’m actually
teaching things for the second time, and I feel like I can handle doing that.” So, then I really
embraced doing gap notes, and that has been really formative in being able to [implement active
learning].
BELLY OF THE WHALE: NOT ENOUGH TIME DURING
THE LECTURE
One of my challenges, which I think is probably everyone’s challenge, is it always seems like
there’s not enough time during the lecture. So, you’re always kind of rushing to get through
what you want to and I am a very structured person—disciplined, very ordered and structured.
I definitely have an approach of, “I want to get through this amount of stuff on this day because
we have weekly homework, and I already have my quizzes set, my exams set, and everything like
that.” The use of gap notes was really big to me, because it does take more time to let the students
work on problems, walk around and talk to them, [and] let them struggle in some places before
you then pull them back together and say, “Okay, you tried this out. I see some of you were
having trouble with this stuff. Here’s how to do that.” That’s always a challenge.
So, I think before NETI, I was doing a lot of group problem solving, but not in small
chunks. It was more, give the class an entire problem and have them work it through, and walk
around and talk to them and then come back up to the board and go through it. But, I remember
this one student particularly who was very talented, and he always had a paperback book—he
liked Clive Cussler. He would get done with the problem so quickly and then he would just
open his book. He wasn’t being disrespectful, he just [grasped the concepts quickly]. And that
was one of the things when I went to NETI that I mentioned that I need to look out for. “How
does this work with the timing?” Because at Berkeley and at Dartmouth students were more at
the same level, and so people would tend to finish more in the same amount of time. We have
a wide range of students’ preparation and backgrounds here at Etown, because we don’t have a
pre-selection program. I was having a lot of trouble with [the fact that] some students would finish a problem
really quickly, some would take a long time, and how do I handle that?
With active learning after NETI, one thing the gap notes enabled [was that they] put everyone
on at least an even playing field to start, because some people just take a really long time to
write down the problem and focus and get on it. Some students have learning challenges around
writing and processing that make them take longer. And two is that the idea of saying, “Okay
now just do this step. Okay let’s come back together and go over that. Now do this next step and
let’s come back together. Okay now finish it up and I’ll walk around and help everybody with
it.” That was kind of a big change. Now it’s more back and forth, whereas before it was more like
me lecturing a chunk, and then problem solving a chunk, but in a bigger chunk. Now, I guess
the engagement is more dispersed. Like me, them, me, them.
So that’s become kind of my steady state of where I am now. I’ve done a little bit of dipping
my toe into a flipped classroom kind of thing, so I have recorded some videos. And I try to keep
those [short]; I know the literature says about seven minutes is about the max for those. I’ll try to have two
videos each week, which is about a packet or a chapter or whatever, and post those and then
just do a 5-minute summary of that, the most important equations, and get right into problem
solving.
And this is something that I kind of swing the pendulum on, too. Because I’ve had students
that have responded that they actually like some lecture, and I think one of my strengths is being
able to explain things fairly clearly and logically and in a neat, ordered way. So, I’ve had students
that say, “We actually like it when you lecture maybe half an hour and then go to the problem
solving,” because also a flipped classroom depends on them to watch the video, which you
can’t always… And, I think it depends on your style as well. I go a little bit back and forth
between, I guess in my lower-level class I do less lecture, more problem-solving because I feel
like the concepts are a little easier to grasp quickly. In my upper-level class, I tend to do a little
more lecture with problem solving sprinkled in. That’s something that I still sort of go back and
forth on as well. How much flipping to do and how much outside of class time are students
really going to spend with that?
ROAD OF TRIALS: THEORY VS. SOLVING PROBLEMS
One of the things that I still struggle with a bit in terms of active learning is that I think it’s best
suited to working problems. I swing back and forth like a pendulum semester to semester on
the importance of the derivation versus the application in working problems. And so I try to use
active learning and say, “Okay, for the next step of this derivation we want to reduce this down
to one term, or something,” and then try to have them do that for the next 30 seconds or so, and
then come in with it and engage them in the derivation. But, there’s just a little bit of a debate
in my mind about how much [undergraduates] get out of a derivation. I do want them to see
where things come from, so it’s not just some black box. But a lot of our engineers—now
we’re getting to a steady state where we graduate probably 30 or 35 a year, and two of those will go to
grad school—most of them go out and become practicing engineers. They may be more like
construction management, or project managers or things like that. I just kind of struggle with
the balance between the theory and the derivation versus the practical and applied problem-
solving piece. And active learning, I think, is a little better suited for the problem-solving piece,
at least the way that I tend to use it. So that’s something that, potentially if I did a redo of NETI,
I’d be paying a little more attention to that balance.
ROAD OF TRIALS: JUST-IN-TIME VS. ESTABLISHED
PREPARATION
[There have definitely been some trade-offs to this journey]. My first three years were basically
all new preps all of the time. Because we’re a general engineering program—and when I first
got here we were smaller—my first three or so years, we were teaching classes every other
year, a lot of them, like the upper-level ones, so each time was new. And, of those courses I was
teaching, only one was in my Ph.D. area. So, I was teaching something that I hadn’t looked at
since undergrad and a Physics 3 course with optics that I had never done in my life. I was doing
new labs that I had never done before. So that was like drinking out of a fire hose. It’s hard to
even say what that was like because it was just so much prep all of the time. I was just a lecture
ahead of the students, essentially. But, in some ways that was better, because now I feel like
it’s a little hard to get motivated to go back and rework the example problems that I’ve already
got prepped. It seems like you shouldn’t be spending your time on regenerating new example
problems all of the time. I always do new quiz problems and switch out a couple of homework
problems. But, when it was just-in-time, and I was learning it almost alongside them, I feel like I
was better in some ways because it was a little fresher and I was getting a better fresh perspective
on trying to understand the material.
And so, there have been some tradeoffs. I’m teaching things now for the third or fourth time.
Things are more settled, so I spend a lot less time on prep and getting to where I’m barely making
any changes to [my gap notes]. I’m hoping that I can turn them into a reader. And so that part is
nice, but sometimes I feel a little bit disconnected because I’m at the point where now I can kind
of go in on Monday and I haven’t really looked at the stuff for a year. And I’m familiar enough
with it. But then sometimes, it’s just a little different than when you’re learning it alongside with
them two days before. So that’s been a bit of a tradeoff, actually. I feel like I’ve had enough years
now that I’ve been able to chip away at it. I’ve been able to sort of add something that I wanted
to do every year. But certainly, even though I wanted to do those things, I knew that I couldn’t
do them all at once. Is there anything still left? I mean the gap notes have taken me several years
to get the way that I want them. I’d like to make more videos.
I wish that I could see it more from the students’ perspective again. When I was just
learning material for a class, it made me closer to their experience. And so that’s one thing that
I really miss, and I don’t know exactly how to get that back. I taught a class I hadn’t taught since
my very first year, and I changed the format to being mastery-based. It was really energizing to
get that new perspective back again, but it was a lot of time. I’m looking forward to tackling that
course again next year.
I also think doing more open-ended work would capture that fun again. When I first
started, homework solutions were not out there in public. They just weren’t accessible like they are
now. Students [would] learn a lot doing the problems and would come in and work with me and
we would learn a lot through that process. And now, the solution manuals are so readily available
online, that only a few students get the same learning out of it. That may be the thing that I would
do [to improve as a teacher]: figure out open-ended, interesting design-analysis problems to do
for problem sets that help them meet the learning objectives but are really interesting, higher-
level struggles for them. I think I might still keep example problems and quizzes simple. But, I
feel like the homework sets—in a way—the students who use them correctly learn a lot from
them. But now, with solutions readily available, they’re set up in a way that students don’t have
to use them correctly. I’ve tried doing a couple of “Epic Finales” where students in groups work
through an open-ended problem in place of a traditional final exam. It’s gone really well.
ROAD OF TRIALS: STUDENTS WITH LEARNING
DISABILITIES
I just read “Tomorrow’s Professor,” [and] it was talking about one of the pitfalls of active learning
being [for] those with learning disabilities. It was very interesting. We actually do have a number of
students, and I think everyone has increasing numbers of students with learning disabilities, partly because
of the higher K-12 education system, and our cultural expectation that everyone goes to college.
So, [Tomorrow’s Professor] mentions though—that these students that have visual or auditory
processing issues, or slow processing, or dyslexia—that active learning might not be good for
them because it might take a long time for them to process it. So, then, they kind of miss out
on the problem-solving piece and it makes them feel worse because their peers are able to do it
and they really have no idea what that 15-minute lecture that you just gave was on because they
have not gotten time to process it.
Actually, the email was kind of a call for more studies into how active learning might be
able to be tweaked for those with learning disabilities or processing speed difficulties or things
like that. And that really struck me because we do have a number of those students, and I do
wonder if maybe those are some of the ones who are resistant or who don’t even make it past the
first year of the program because of that way of teaching. I can see [how] psychologically they
[could] just feel like, “This isn’t for me. I don’t get it. Everybody else around me gets it, and I
have no idea what she’s talking about.”
One thing that occurs to me, actually, is [with] the little videos that I do. They are posted
and can be accessed any time in the semester on the learning management system. That might
be one thing that could help a lot. And I actually have a student who’s struggling with English,
and that occurred to me, too. “Oh, yeah, the international students, too, who have a hard time
grasping the English. They don’t have time to translate or really understand what’s been said,
and now I’m asking them to do something with it like work a problem.” So what I’ve been doing
with a student who’s been having trouble with English is I actually send him the gap notes a few
days ahead so that he has a few days to look at them, to run them through his translation, to
look up any terms that he’s not familiar with and try to translate it so that he comes into class
[better prepared]. That might be a way that students with learning disabilities could come into
class more prepared to see it for a second time or a third time and then be able to jump in on
the working of a problem better. Especially if those [videos] are closed-captioned or whatever.
I can imagine where, yeah, those could be some tweaks [to the student-centered teaching] that
could have a big impact.
BELLY OF THE WHALE: A PARTICULARLY CHALLENGING
SEMESTER
So many things have changed throughout the years, I guess. I did the Intro to Engineering
class for about four years. One semester I had only [the] Intro to Engineering course, and it was just
70 first-years (we expected 50)—all first-years, all of the time—and it was miserable. That was
probably my darkest semester because I just did not have much joy in it. And I actually did a
workshop with a STEM-UP PA group for academic women in STEM that was funded by the
National Science Foundation (NSF) through the ADVANCE grant. And I remember I was doing
this workshop and the facilitator said something like, “What do you take joy in in your job?”
And I remember writing down, “encouraging students,” and I realized that that semester I was
just not doing that, because I had so many discipline problems and [was] on the phone with
Learning Services and first-year advisors all the time. And so, for the last about three weeks of
that semester after that workshop, I made a point to really reach out and encourage students
and write nice emails to them and notes and things. And that kind of got some of the joy back,
I guess. But it had just been, like, stamped out of me with that semester.
That was a pretty miserable semester, to be honest. I never want to teach all first-semester
students again, nor just one class with so many sections. I enjoy the variety of topics and student
populations. After that the department faculty started sharing the Intro class, which I think was
more appropriate—team teaching the first-years.
SUPERNATURAL AID AND MEETING WITH THE ALL
KNOWER: A COMMUNITY OF ACADEMIC STEM WOMEN
One of the big supports was the STEM-UP PA group, funded by an ADVANCE grant. There was
one semester that I did the STEM-UP PA Oasis program. Rutgers [University] has an Oasis
program, and it’s like an Objective Analysis of Self and Institution Seminar. And so, the STEM-
UP program of Central PA, a teaching-focused group, adapted that. And now I think they call it
their leadership development program.
It was one semester, and it was four meetings during that semester, Friday nights or Sat-
urdays, for about five hours, so it was pretty intensive. All women, maybe about 25, 30 women.
And they also formed peer groups of four women and you also had to meet in person and get din-
ner or something with those peer groups in between the other sessions. So it was eight sessions
during a semester, which was basically every other week. So that was a lot of time commitment,
but a lot of support. And a lot of that support, it wasn’t necessarily the content of the workshops,
it was just hearing other women faculty suffering through some of the same problems and being
like, “Oh, it’s not just me, I’m not alone.” I certainly have experienced it where you return things
twice as fast as a male colleague and students have this perception that, “Ugh, she’s still grading
those?” And you’re like, “What about him?” And they’re like, “We love him!”
Just sharing some of those experiences was key, I think, to getting me through that. We
did a negotiation seminar. It gave me a little bit more empowerment to feel like, okay, next
semester, if they try to assign me all of the first-years, I’m going to say, “No, that doesn’t work
for me. Someone else needs to take some of this on.” And not just be like, “Sure! I’ll do all of
the first-years.” And that course content is so much writing and soft skills. The department put
me with something that was non-technical, and I didn’t appreciate that because it made me look
like a kindergarten teacher that was teaching the soft skills, not like someone who was teaching
the upper level technical stuff. Historically the class had been taught by the one other female
lecturer in the department. After me we started team-teaching with male faculty as well.
MASTER OF BOTH WORLDS AND FREEDOM TO LIVE: A
BALANCE OF VULNERABILITY AND EMPOWERMENT
Do I feel like I’m better at teaching now? I feel like my focus has changed a little bit. So that’s the
other thing about my trajectory. My first few years, the department, the college, the emphasis
was really on teaching, so I was really focusing on that. The last few years, then, going up for
tenure, I was sort of shifting because my teaching evaluations were very good and very solid. I
was willing to accept a little bit of a dip, and some vulnerability in how well I thought I was
doing in teaching to get some papers written and get those out the door for tenure. And now,
I’m actually Chair of the department. So now some of my research is really taking off and I’ve
got that going on, and I’ve got Chair. So, I’m also right now willing to accept a little bit of a
lower standard in teaching.
My reaction to the return of teaching evaluations has really evolved, too. In my first years of teaching, I used
to get just so devastated, even though they were really good. But I would get so devastated when
there was any sort of negative comment on something. I remember our Dean of Faculty saying,
“Well, for tenure, we just want to see that you’re reacting to those [evaluations] and you’re taking
them seriously.” And I remember joking to my friend, I was like, “Yeah, let me send you the
empty bottle of wine and empty chocolate bar wrapper, that shows you that I’m taking them
seriously.” My husband was always like, “Oh, did you get your evaluations today?” He knew it
would be a couple of rough days. And now, I get them and I read them and I guess I feel a
little more evened out about the response to my teaching, and can use them more formatively
without negative emotion. I keep a list of positive student comments on my bulletin board for
those occasional bad days.
One evolution is in prep work: when I walk in and I feel 80% prepared, I’m okay with,
“Well, I’ll wing it on the other 20%.” That’s a little bit of a difference. And I think there’s also
that realization of the 80-20 rule or whatever. You know, my first few years—I guess I was
feeling sort of vulnerable, too, because I was new and the students didn’t really know me yet,
and I was young, and I got mistaken for a student all the time, and being a woman also walking
into the classroom and not having much authority. That’s the positive about getting older. I’m
like, “Well, I’m going to get gray hair, but I’ll get some authority, too.” But, yes, I think, just
personally, knowing that I didn’t know the material that well, I would work extra hard to make
sure that I was prepared. I think that I allowed myself to be vulnerable, too, and say, “I’m not
sure. Let me get back to you.” And the students were pretty good about that. But then, over the
years, I felt a little less vulnerable in terms of how good of a teacher I am, or whatever, and so
have allowed that whole, “It doesn’t have to be perfect. Good enough is good enough.” But, it’s
really satisfying because I still have days where I walk in and I just absolutely kill it on a lecture,
and it just goes really well for whatever reason. And I have to tell the students, “It’s actually time
to leave now, we [have to] go.” So those are definitely good days that I still appreciate, but I
guess I don’t get upset anymore when days aren’t like that.
I will say that one of the trade-offs, I think, is some students really like the active learning.
But one thing that strikes me is that some students are pretty resistant to it. And particularly,
when they’re first years, you get a lot of the comments of, “They didn’t teach us anything,” because
I think that they have a very specific way of thinking about what teaching looks like. [Students]
think teaching is, “Open my brain, pour in your brain” kind of thing, rather than the sort of self-
discovery together. So, I’ve had some students that tell me, “I would rather you just lecture the
whole time, and I just want to sit back and take notes.” So, there are some that I think remain all
four years a bit resistant to that, and would rather be passively lectured at, even though all of the
literature shows differently. And I think that might have been my challenge, too, with having
first-year students: their perception of what they thought I should be delivering to them as
a consumer, because education has gotten very consumerized, especially at a private, high-cost
institution, was very different from what I felt my role was in the classroom. And
I wonder if, in a more traditional sense, they might have felt like they were getting what they
perceived as teaching. Whereas my walking around and coaching them and challenging them
and saying, “Now, how does that work?” they didn’t perceive that [as teaching]. So, I did get
comments, especially from first-years, of, “She didn’t teach us anything. I taught myself.” Well,
yeah. I coached you to learn how to learn. That’s the point.
So, I will say that I have seen that resistance. It has not, certainly, been enough to stop
me. And I think, when I first got to Etown, part of why I was so successful is because I was
teaching some students that had been taught by a visiting instructor that was very lecture-based.
So, all of a sudden, when I came in, and now they had that alternative, and I was teaching
upperclassmen, they appreciated that a lot. Same thing at Berkeley. I think my evaluations were
honestly artificially high there because I was just doing something different than what they saw
in all of their other classes. Now that we sort of almost all do it at Etown, I think some of them
almost want the alternative where they would rather just sit there and be passive and be given
the information.
I think when I look back, I guess the main thing that was formative for me was that
experience at Dartmouth, kind of small, with a teaching focus, and then going to Berkeley and
seeing the alternative. And I wanted to make that journey, that was intentional on my part, but
I didn’t realize quite how different the experiences would be. So, I think that that was one of the
most formative things in my student-centered active learning teaching style.
CHAPTER 4
From the Armed Services to the
Classroom
Brad Hyatt
Narrative constructed by Audrey Boklage
It wasn’t just something from a book that was 10 years old, but it was something that
was currently happening. It was very relevant. Those are the type of opportunities that
make me excited, that we can provide students once we make it focused on them and engage
technology or bring that technology into class as much as possible.
Brad Hyatt is an Associate Professor of Construction Management at Fresno State University.
THE CALL TO ADVENTURE: A TRUE LEARNER
I’ve been teaching now full-time for 7 years. Prior to that, I spent about 12 years in the industry,
first as a civil engineering officer in the Navy and then a couple of years as a project management
and construction management consultant, working on large construction projects. When I came and
started teaching, I was really interested in bringing my experience to the classroom, and really
the only way that I thought I could do that effectively is by talking about projects.
I’ve always enjoyed teaching. Even when I was in the Navy, I taught some college classes,
and then I did some training in which I would instruct other people. I always enjoyed that. I’ve
always enjoyed mentoring others along the way and being a part of that. I think a lot of it has
to do with just my makeup, and the way that I am, and I enjoy that. I also enjoy learning new
things. I think any of us that are in academia, that’s a big part of why we do what we do, is that
we really, we have those questions, and we want those questions answered. We’re really good
at learning. A lot of us wouldn’t be in this position if we weren’t. For me, the process has been
to really try new things, and always having that goal of “How can I improve?” “How can I be
better?” “How can I take the feedback from students and from my peers?” “How can I look at
the examples of what else is going on and try that?”
One of my personal goals had always been to teach at the college level or a university
level. I’ll be honest, I never thought it would happen this soon. I had envisioned that I would
be a consultant for 10, 15, 20 years, and then go back and teach 1 or
2 classes as an adjunct.
The opportunity came 7 years ago. This position came open. My background is the Navy.
As a civil engineer in the Navy, they send you to graduate school, so I went to the University
of Texas, got my Master’s degree in construction engineering and project management. At the
time, here at Fresno State for this position with construction management, they were looking
for someone that had industry experience and a Master’s degree as a minimum. It was just one
of those things that the door opened, I went and interviewed for it, it worked out, and they
offered me the position.
REFUSAL OF THE CALL: DECIDING TO LEAVE INDUSTRY
Quite honestly, it was a very, very hard decision, because I loved my job. I loved the company
I worked for. I loved the project that I was assigned to. It was a very exciting project, probably
one of the best projects that I had ever been a part of. It was a hospital project in Riverside,
California, just a great team. There was about a year and a half left in the project. I just looked
at really what I wanted to do.
As far as my professional experience goes, specifically as an officer in the Navy, I think what
that has given me is the confidence to try new things. It’s
allowed me to understand and know that I can try new things. I went and talked to my boss.
He said, “Really you should take it. It’s a unique opportunity that may not come along in the
future, at least the way it’s designed.” I took his advice, and it’s been one
of the best decisions I’ve ever made in my career.
ROAD OF TRIALS: CONNECTING CLASSROOM TO
INDUSTRY
I would say that, when I first started [the faculty position], it was a huge transition, not just for
me but for the students as a whole. I got lots of negative feedback from students, comments, you
know, “I’m not paying to work in class. I’m paying you to lecture and teach me things. You’re
not like other faculty members. Your class is too hard. I don’t like the style where I have to do
homework or problems in class. I’d rather do it on my own time.” There were a lot of comments
that I got back initially that were discouraging. What I found after my first semester of teaching
was that I just didn’t like lecturing. I didn’t like being the person in the front that talked 90%
of the time, and students weren’t paying attention or asleep or just not engaged overall in the
classroom. It was very frustrating.
I decided quickly that I needed to do something different. I needed to engage the students
more. I needed to get them more excited about my profession and what they were going to do
eventually. At that point, I started to do some research, reach out to people here on campus
and other places to see what they did. What I quickly found was that there were other ways,
primarily project-based learning and things like that, but the feedback that I kept on getting
[from other professors] was that it’s just a lot more challenging to do that.
CROSSING THE FIRST THRESHOLD: FLIPPING THE
CLASSROOM
Being a new faculty member, and with lots of vigor and excitement, I decided, “You know what,
I’m going to go ahead and do it anyway, and see how it goes.” The next semester, the second
semester that I was here, I started to think about and utilize project-based learning. You bring
projects into the class and have the students work on specific projects and do more hands-
on work, and a lot less lecturing. Then the next year, I found out about flipped teaching. I
decided I really liked that approach where the students would still get the content, but it would
be delivered outside of class. Then when they came into class, we would do things with that
content, whether it be work on problems or work on a project itself. I really started to see my
students get the content and get the problems and do much better on the exams than what they
had previously done. Ever since then, that’s really the approach that I’ve taken.
APOTHEOSIS/FREEDOM TO LIVE: LEARNING
TOGETHER
As a civil engineer in the Navy and also a project consultant, that has given me a very broad-
based experience. When I go into the classroom and I talk about certain things, I just have a very
broad background in which I can pull from, not that I know everything, because I don’t. That’s
the very first thing that I tell students is I don’t know everything. If I don’t know the answer,
then we’re going to find the answer together. Luckily with technology, computers, and Google,
you can get an answer to almost any question. It may not be the right answer. You may have
to dig for the right answer, but we can get an answer and really try to discern what the correct
solution is or answer that question. Again, having that broad-based background gives me the
confidence to step into a classroom and know that either any question that a student asks, either
I’m going to be able to pull it from my background or I’m going to know the resource that we
can use to answer that question. It may not happen in the class, but soon after class, we can get
that solution or answer.
SUPERNATURAL AID: PROFESSIONAL KNOWLEDGE
If I hadn’t had that professional experience, if I hadn’t been in the Navy for over 9 years, and then
been a project consultant for a couple years after that, I don’t think I would have had [professional
knowledge]. I wouldn’t be able to do those things as confidently as I can do now. It’s really a
culmination of all my experience and just a willingness to try.
One of the things, especially in our discipline, in construction management, in construction
engineering and management, that we always tell students is that our discipline is one in
which you work with people all the time. We do not have the opportunity to work in a silo. We
don’t get to work by ourselves in a cubicle without ever having any human interaction. That is
not the reality of our career path. I try to explain to them that in the classroom itself, I am doing
my best not just to deliver the content, have them learn the content, and learn some of the
technical and management aspects of what their job’s going to be, but also I want them to learn
those soft skills of how to interrelate, how to work in teams, how to articulate your position and
work with others. Really having this focus on student interaction, group work, project-based
learning, it provides that opportunity.
MASTER OF TWO WORLDS/RETURN THRESHOLD:
REAL-WORLD EXAMPLES
What’s been most rewarding in this process has been not necessarily what I had planned for students
to get out of it, but when we go beyond what was planned in the class, and we really start to
talk about or do work problems or look at projects that have much more depth than what was
originally planned. A prime example of this would be when we talk about construction law
in our classes. Sometimes law can be a very tricky subject, especially for a student that’s fairly
early on in their curriculum. We do an introductory law class, construction law class, at the
sophomore level. They’re very challenging topics a lot of times, but by giving students the tablet,
what we allow them to do is do some research. When we have a specific topic in law, have them
go and do a web search and find some examples, and then we can talk about those examples.
The examples then are contemporary examples. I can think of one example where it was a case
that had come out within the last month, and we can really talk about what that case was, and
how it applied to our topic, and it became something very real that the students understood. It
wasn’t just something from a book that was 10 years old, but it was something that was currently
happening. It was very relevant. Those are the type of opportunities that make me excited, that
we can provide students once we make it focused on them and engage technology or bring that
technology into class as much as possible.
FREEDOM TO LIVE/ULTIMATE BOON: CONSTRUCTIVE
CRITICISM
Do I still get negative comments [about my teaching]? Absolutely. I always do. There’s always
some. There’s always 1 or 2 that just don’t ... that really don’t like that style, especially the students
that are used to learning on their own and used to not having to work in groups. Really, they
like to process things by themselves. What I see with those students, and it could be students
that are really good, and it could be students that aren’t as adept academically, but I always get
some feedback from them. “Why do I have to do this? I can do this work on my own. Once I
get the problem right, why do I have to help someone else? I have other things to do.”
Maybe it’s unrealistic to think that I’m ever going to have a semester where I don’t
have some kind of comment that really critiques what I’m doing. Come to think of it, really, one
of the things that I always tell students is I want your feedback. I want your critical feedback on
my class, on the content, and on myself. If you think that there are things that I can do better, I
want to hear it. I welcome it. I feel confident that I can try new things, that I can do something
different. That’s reassuring. I’m not afraid to fail. I’m not afraid to try something new.
SUPERNATURAL AID: FACULTY SUPPORT
The past two years really have been the most exciting, because here at Fresno State, our new
president, President Castro, had an initiative to bring tablets into the classroom. The way that the
university approached that is to have faculty really decide how they were going to do that, and
they provided us with the training and the resources to explore new ways to bring tablets, tech-
nology, into the class, engage students more, and hopefully improve the learning atmosphere.
This is the third semester in which I’ve done that. It’s been extremely exciting.
What Fresno State has done is they have created programs in which they provide all kinds
of workshops and opportunities for you to become a better teacher. Like a lot of institutions,
they provide these faculty learning communities or faculty learning groups that focus on a specific
topic. I’ve been involved in a number of those, one on flipped teaching, one on using e-portfolios.
Then they also have a lot of programs that allow faculty to redesign courses, to integrate
technology or new teaching practices. Those are oftentimes full-year programs. You meet about
once a month for a couple hours each meeting, and then you go to a “summer institute,” which
is a week-long intensive program in which you do the heavy lifting and redesign your course.
It’s a really good program that Fresno State has created. Even if you come in and don’t have a
lot of teaching experience, they provide a lot of resources to help you get better at it.
ROAD OF TRIALS: RESISTING CHANGE
Again, there’s a lot of bumps in the road. There’s a lot of frustration. There are some students
that just don’t like [changes]. What I’ve seen this semester is that almost every single student
engages with it. They’re excited to have a tablet, because when I do
problems, when I do activities in class, they have something in their hands that allows them to
leverage what I’m talking about and do more than just what they could do with a piece of paper
and a pen. Really, I view it as the next step of having technology in the hands of all students and
engaging them more, and really moving away from what I’m doing in the class more to what I
want students to do and what they can do.
FREEDOM TO LIVE: EMBRACING THE CHANGE
I just truly believe it’s so important to have the confidence to try new things. A faculty colleague
of mine and I were talking about it yesterday: it’s so much easier to just lecture. It really
is. In the big scheme of things, if I was solely focused just on myself and not really…not that
I wouldn’t care about the students, but if I was less focused on the students and really focused
on me and my time and being efficient, I would lecture every single period. I would lecture and
give quizzes and give an exam. It’s just so much easier that way, but I don’t learn well that way
myself, and I find that I have to be engaged and I have to be interested.
The workload is immensely higher than traditional teaching, but I think, just from my
standpoint, I really see huge benefits to the students. The feedback I get from the majority of
the students is, “Wow, this is great. It’s one of my favorite classes. You’re a great instructor. I
appreciate you.” Then to see them in a next class down the line really understand the concepts
and be able to apply the concepts, that just makes such a huge difference. I think that trying
something new, using technology, using project-based learning, doing some of these things that
are innovative and out there, takes a lot more time and energy.
MASTER OF TWO WORLDS: INVESTING TIME
I would say, for anyone who tries something new: really put in the time and energy, and know that
it’s going to take a lot of it. Any time you try something new, outside the box, it’s
going to take a lot more time, more time than you probably anticipate, but if you stick with it, if
you get feedback from students, if you have them involved in the overall process, it really does
pay dividends.
FREEDOM TO LIVE: GAMIFICATION IN THE CLASSROOM
My next step is I’m really interested in gamification, and finding a way to integrate points,
[badges] into our courses. I tried a little bit of it in the past, and it’s challenging. It adds a little
bit more complexity to the classes, but I’m really interested in finding a way to streamline that.
Hopefully that will take the students further and get them more excited about it, the course
itself, and the content that’s created and the projects that come out of it.
CHAPTER 5
Engaging Students through
Service Learning and
Innovation
Chris Swan
Narrative constructed by Brooke Coley
To me, a faculty member is someone who actually provides the best education that they can
for their students. So, doing it where people are engaged is a key part, at least to me…
What I really think is engaging the students [is] doing it in such a way that allows them to
take the technical expertise that they’ve mastered in the classroom and apply it to real-world
situations and learning as well. Not just doing the technical expertise, but actually learning
about both their technical and non-technical professional skills. That’s the best education I
think we can provide to our students.
Chris Swan is Dean of Undergraduate Education for the School of Engineering and an
Associate Professor in Civil and Environmental Engineering at Tufts University.
CALL TO ADVENTURE: THE TEN-YEAR PLAN TO BECOME
A PROFESSOR WITH PRACTICAL EXPERIENCE
I started off at UT-Austin. I grew up in Texas and always dreamed of going to college, period. I
chose engineering because I found it to be closest to my own interests and capabilities. I always
enjoyed math and science, and since my father was an excavating contractor, I said “let me do
something in the construction field,” and civil engineering fit best as a direction. So, I pursued
that as my degree. Struggled through it as a lot of people do, but by the time I was a junior, I
finally got the hang of all this stuff because it became more applied. And that was the key for
me: all of a sudden, I started to see the application of all that math, science, and other
often abstract things that they require you to take.
[I] finished with a Bachelor’s, went straight through for a Master’s with the particular
disciplinary aspects of geotechnical engineering. My plan was to then work for a while and
return for a doctorate and then an academic career. [I] graduated in 1986, turned in my Master’s
thesis, loaded up my car and drove to Massachusetts from Texas. I arrived here to work for
a company. Originally, [it] was going to be for 4 years, but it turned out to be only 3. Then,
[I] went back to school (MIT) to get my doctorate so that I could actually go on to the academic
profession.
After MIT, and 5 years of doctoral education, I was lucky enough to get a position here
at Tufts. Amazingly, I did have a 10-year plan after my Bachelor’s to actually become a professor
and it happened. It was just a number I threw out at the end of my senior year saying, “Yeah, I
want to pursue a Ph.D., but I want to pursue a Ph.D. with practical experience because that’s
what’s helped me to learn things.” So, I wanted to go out and work for a little bit and take the
knowledge that I gained from working…It [was] 3 years of practical “apprenticeship,” knowledge
that I could bring back into engineering education.
CROSSING THE THRESHOLD: HELPING STUDENTS
CONNECT THE THEORETICAL AND PRACTICAL
I’ve [now] been teaching here at Tufts for [more than] 24 years. What and how I taught when I began
were basically the same as the way that I was taught. But I had always valued, at least in my own
learning, the application aspect. So, it wasn’t just “Here’s the formula” and trying to get to the
mathematics of the formula, but saying, “Here’s the formula, let me tell you why it works and
how it works and where it is applied.” And then you can work from the “I’ve applied it” to
“Oh, let me try and understand the mathematics of it, or even the science of it,” so it was a
reversed direction. What that has shown me is that the math and the science are so important
to understanding many of our engineering principles, but engineering is still something that
students need to also experience—to the point that the application is so important that it makes
the math and the science that much more interesting to them. In other words, I can actually
apply that calculus to that, or the differential equations to that. But they also get to see this
particular principle, if you will, and its real-world aspects and its applied aspects.
The fact to me is that connecting the theoretical and practical aspects is important. It’s
not necessary, I don’t think, for all students to experience engineering as such a connection, but
for me, it made the experience that much stronger. [Students think], “Oh I just learned this
formula, and I can easily plug and chug in this formula, but I don’t understand what that means
in reality,” so, [I] try to bring in real situations where that can happen. And the real situation
could be that you do a video, or something that they actually have to do themselves. To me, the
tactile [nature] of actually making something makes a difference in how students will engage
with a particular topic.
As a civil and environmental engineer, let’s say [the lecture topic] is water resource related,
I can’t have them build an actual dam. [Nor can I] really have them do watershed analysis from
the standpoint of delineating its location for a particular river or stream. That’s a good field
effort, but it’s not what I’m doing. And I would say [in] environmental studies people can do
that, that actually fits very well. But for me, I can show it in a planned situation, and then talk
about the situations as they arise in reality. Not just, “here’s a plan,” but we also can see that,
here we are in Massachusetts, close to the Aberjona River watershed. So, what does that mean?
Well, the Aberjona River watershed is the same one that is in the book (and later movie) A Civil
Action from the 1980s/late 90s. But it has real-world implications because understanding the
Aberjona River watershed was an important aspect of this case of contaminating a community’s
drinking water, leading to an increase in cases of leukemia. But it makes a connection because
this is real, the people are less than 20 miles from Tufts, and that impact is real. So, if I can make
those connections between the abstract concept of watershed analysis and the concrete reality
of understanding a watershed so that you can see its impacts on the community, I think that it
just brings home the topic. It makes it deeper for a student to go into how they can understand
the subject.
APOTHEOSIS: SEEING AN EXPLOSION IN THE DESIRE OF
THE STUDENTS TO LEARN
So, my teaching was not completely hands-on, but I did a lot of projects at that point to make it
hands-on. But it was still, and I would say, it still is, strongly lecture-based. But, the evolution
of my lectures is another piece that’s interesting ...I would say that the real evolution in my
teaching, and almost revolution in my teaching, occurred when the projects started to become
real projects instead of ones that I had made up and controlled the data.
In [the] spring of 1999, I was still teaching a course called Site Remediation Techniques,
which basically covers the methods by which we clean up hazardous waste sites or toxic waste sites.
Previously, I had always used projects for which I knew the result; either the site had been cleaned
up and I had the data, or I had made up the data to lead to a specific solution. For example, here’s
the soil profile, here’s the chemicals of concern, let’s think of a method [with] which you can
clean it. How would you go about cleaning up this thing? The “switch” in 1999 was that we
were working on real sites with no known or given solutions. Additionally, remediation of these
sites would have impact on the economies, the social fabric, all different aspects that weren’t
traditionally seen, nor taught as an engineering aspect, beyond just technical [aspects]. So, these
projects were basically service learning-based projects. We were doing projects in service to a
community, working with clients, working with regulators, working with the community. And
so, when I introduced those as projects, I saw the change more so in the students [with] what
they learned than anything else. So, all of a sudden, the students became extremely attached to
the project. It wasn’t just because it was a real-world aspect, but because it had real people.
Students just seemed to love it. They loved it to the point where they were learning things
beyond what I taught them in class. Instead of saying, “Oh, we learned this in class, we’ll do this
method of analysis,” it’s like “Well, we did talk about that, but that’s not what you need to do
here.” They would actually think about how to implement solutions that we had not talked about.
And therefore, they had to get the details I did not talk about—excavation support systems, or
methods of remediation, or decision-making processes; items that we had not even discussed in
class.
Seeing this explosion in the desire of students to learn is what first got me interested in
the pedagogy of service learning. For a number of years, it was just trying to orchestrate the
class so that all planned subjects could be completed. Now, it became the logistics for finding
potential sites with potential clients and stakeholders and getting students to interact with them.
I will say that one of the most powerful achievements of these service-based projects was in the
Spring 2000 term, where we had a group of students who not only took hold of the site,
but they actually became advocates for the community. They would go to community meetings
and act as technical advocates for the community, and in some cases, get into arguments with
the contractors about what should be done and shouldn’t be done. They basically provided a
tremendous service, in that case. And it just so happened that those students were also graduate
students, they were all Master’s level students, many of them had worked for environmental
agencies or environmental groups in the past, and so they already had a passion for this, and
now they had the technical knowledge to go with their eagerness, passion, and desire for social
justice and fairness; [they were] a very good group.
Once I started seeing the students who were performing at an enhanced level technically,
as well as otherwise, I started to ask the question: Why does this approach—having these service
learning-based projects—really engage these students? And, therefore, it went from the way that
I taught, to the research direction.
ROAD OF TRIALS: RESEARCHING NEW WAYS TO ENGAGE
AND DEEPEN LEARNING
The thing that I’ve been working on for the last few years is how to make sure that I am not only
achieving technical learning outcomes through service-based efforts, but that this achievement is
equal in level to that found using traditional, non-service-based pedagogical approaches. So, now I’m
doing research on the impacts of service on engineering students. How that sort of engagement
can actually, hopefully, lead to a better prepared engineer, both technically as well as all other
aspects. I call [these impacts] professional skills, other people call them soft skills, but I tend
to say you communicate better because you know who your client is [and] you can actually
communicate with them as opposed to just talking at them. You will take into consideration
things such as social issues and economic issues and political issues, and not just say that that’s
someone else’s job. In doing the research, it became clear that there’s more than just service
learning that can engage a student. And so now I’m finding ways to engage students throughout
the course, instead of just saying, “Oh, you’re going to have this really cool project, just wait.
8 weeks of lecture stuff, and it’ll pay off, believe me, you’ll get to see it.”
For example, I now look for ways of engaging and deepening the learning without the
effort being service-based nor a long-term project. It’s engaging them in the moment; within a
class period or about a particular topic. For example, you say to the class “let’s design a flagpole,”
so the class will do that design very quickly; back-of-the-envelope style, using quick and simple
calculations, by assuming the flagpole is a simple cantilever beam. You do the calculation, and
now you’re done—technically. Let’s think deeper about this. Is that flagpole supposed to be
there? What’s the flagpole for? We’re now getting to questions not of the design’s technical
aspects—how big should it be, what type of material should it be, etc.—but of why it
should be. Is the client really looking for a flagpole, or are they really looking for something
else? Would the neighborhood accept that particular location for a flagpole? Does the town
have the money to pay for said flagpole? So, other issues start to be seen. And I don’t dwell on
them, I don’t make them the entire discussion, I just make them a part of the topic of designing
the flagpole.
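For readers outside structural engineering, the quick, back-of-the-envelope check described above might look like the following sketch. It assumes, purely for illustration, a uniform wind load w per unit height on a cantilevered pole of height L, with cross-sectional second moment of area I and distance c from the neutral axis to the outermost fiber; these symbols and the loading assumption are not part of the original narrative.

\[
M_{\max} = \frac{w L^{2}}{2},
\qquad
\sigma_{\max} = \frac{M_{\max}\, c}{I} \le \sigma_{\text{allow}}
\]

The largest bending moment occurs at the base of the cantilever, and sizing the pole so that the resulting bending stress stays below the material's allowable stress is the point at which, as the narrative puts it, you are done technically.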
A specific case that I used to do was a bridge. I had a project in a sophomore engineering
course for the students to design a bridge. But they’re not designing the entire bridge, just
designing an element; [a] straightforward, simply supported beam. I’d push on the technical
analysis by asking them to do things beyond what they know. “It’s reinforced concrete. We’re
not going to talk about it, you’re going to have to figure it out.” The project then asked for
students to create a miniature model of their design, using concrete and created formwork. This is
now getting closer to real-world implementation. But then I add other considerations that are
not strictly technical. Questions such as, how do we make this sustainable? Should we consider
this with the neighborhood or the community’s input as to what’s necessary? The goal is to get
students to recognize that such questions should be a part of the design.
So, when they get into their senior year and do their senior capstone design, if they have to
do a bridge, or any structure, they will start to ask those questions. Hopefully, students realized
that it comes down to, “what does the client want?” [These are] the first questions that should
be asked [along with] why do they want it? How does that structure “fit” with the client, the
neighborhood, the bankers, whomever else is involved?
ULTIMATE BOON: BECOMING THE BEST FACULTY
MEMBER THROUGH STUDENT ENGAGEMENT AND
INNOVATION
I think what [students] did recognize was that the remediation course and the course’s material
became much more interesting and connected for them because the class changed. It increased
in number, essentially doubling in size once the service-based efforts were integrated into it.
Additionally, its audience changed. When I first started teaching, it was predominately graduate
students who did not have strong technical backgrounds. Then, it switched over to be
undergraduate seniors who had stronger engineering backgrounds, but little practical experience. Then
when I started doing these real, service-based projects, it became more balanced; about half
undergraduate and half graduate students. What I was seeing was graduate students coming
into the course because it had a stronger connection to actual case studies for them. That is,
at this time, many of our graduate students were part-time [students] holding full-time jobs.
So, the course was taught in the late afternoon/evening, allowing them to be involved. Many
of them were working for environmental agencies and consultants, but they saw that this was a
good course that would help to home in on some of the things they had seen out in practice. What I
found really interesting was they were getting technical aspects from me, but they were bringing
to the classroom their real-world experiences. So, in essence they were co-teachers.
[Graduate students] could talk about, “I remember this site, we actually pumped it, and we
actually found this and this ...” Yes, real world. They may not have understood the theoretical
or technical aspects of pumping; that is, as you pump, you get a drawdown and you can
actually figure out the change in height of the water at different points, that technical detail
[was] not there. But then they could make the connection of, “Okay, when we pumped, we
saw the water level being different across the site.” They could make that connection to what a
technical analysis was saying. So, I think it created an opportunity for students who had real-
world experience to make those connections. It also created, especially in this mixed classroom
that I had, an opportunity for more ‘default’ instructors to be there, and to be educators to those
who [had not yet had] such real-life experiences. After doing it for 2 or 3 years, [I could] see the
value. These people [were] not just picking up a real-world project, they were actually providing
their expertise and their knowledge in a non-technical sense to what the technical issue was. So,
the undergraduates could do the technical work, calculations out [of] the kazoo, but they
didn’t understand what the calculations were for. Whereas these graduate students, especially
the experienced ones, they could, and they could run with it.
The impact on me and my time was substantial. Because, to do that, to maintain that
same level of technical competence, and to expose these other things, was additional work—
additional work on the faculty, additional work on the TA, additional work on the students. But
the students won’t see it as work, if they see it as learning. So, there are barriers; I call them
self-imposed. It depends on your own personal value proposition. I don’t have a personal value
proposition that says I need to become a full professor and then write papers all the time and
have a graduate [cohort] of 10–15 students. That’s not my personal value proposition. Mine
is delivering an education to all students. And this allows me to do that. Yes, it takes a lot of
time. More so than what some people say I should be doing, probably. I agree with that. So, the
barrier in my case has been my own personal goals and interests. They are internal as opposed
to external. Intrinsic versus extrinsic.
To me, a faculty member is someone who actually provides the best education that they
can for their students. So, doing it where people are engaged is a key part, at least to me. Do I
find colleagues that pursue this as well? Yes, and more and more of them are coming. But it’s not
an overnight sensation and everybody wants to do it, no. Not that way. What I really think is,
[engage] the students in such a way that allows them to take the technical expertise that they’ve
mastered in the classroom and apply it to real-world situations and learning, as well. Not just
doing the technical expertise, but actually learning about both their technical and non-technical
professional skills. That’s the best education I think we can provide to our students.
So, that’s where I am right now, I have academic evidence to show that [service learning]
works. [This thing I did in 1999] is still impacting [my ability] to continue on these different
pathways. [I’m] still working on things, got an entrepreneurial side to it, too. Most people look at
entrepreneurship as another way to say, “I want to make a lot of money and I want to make a lot
of money fast.” To me, entrepreneurship is finding out what your client wants—basically their
values—and saying how do I satisfy those values? And you may find out that what is currently
available doesn’t satisfy them. So, you have to be innovative in the process. Entrepreneurship is
truly a mindset where you really evaluate if something can be grown, scaled and sustained. Why
not be entrepreneurial in applying an educational concept? An educational innovation? When
most of the education that we still receive today is the traditional lecture style, when people can
deliver it in a different way, why can’t that be an entrepreneurial effort? My value proposition is
that service-based efforts enhance student’s learning outcomes. I’m looking at service learning as
something that engages them so much, and they continue to be engaged by it throughout their
lives, that they say, “I picked that up at Tufts.” Same thing at another institution, “I picked that
up, at Institution X.” Long term, [service learning] could have broad and deep benefits—[and]
this is really long-term thinking—[as] an engaged student leads to an engaged alum, which leads
to [a] continued flow of institutional support. [Such efforts will influence a] different student
body, hopefully more engaged with learning, but also more engaged with the institution. I want
to say that it really comes down to wanting to deliver the education that I think is appropriate
and most impactful to the students. What I’m seeing just by doing it and being involved in it,
is that [service learning] is impactful.
CHAPTER 6
From Food to Simulation with
Legos: Engaging Students in
Hands-On Learning
Thais Alves
Narrative constructed by Audrey Boklage
Right now, I hope I can better manage this struggle so that I can just smoothly put these
innovations in the class and still be able to move on with the content. This was another
thing that the CTL (Center for Teaching and Learning) changed in my mind: before, I
wanted to go and bang, bang, bang, bang, cover the syllabus. Right now, I said, “You
know what? If they learn this and this and this and they really know well about this, I can
skip a topic or two and maybe have a smaller amount of time dedicated to that.” I think
that my battle is to change a little bit at a time but still cover the material.
Thais Alves is an Associate Professor of Construction Engineering at San Diego State
University.
CALL TO ADVENTURE: CREATING A COMMUNITY
It started all the way back when I did my Master’s in [lean construction] in Brazil. Then I went to
UC Berkeley and I had more exposure to [teaching]. Through my time there I also TA’d for
my advisor and I really enjoyed being a TA and working with the students and seeing how the
assignments were prepared from the other end.
After my Ph.D. studies, I had to go to Brazil because I had a scholarship, a full ride. I
had to go there [Brazil] and spend some time. Then when this opening showed up at San Diego
(San Diego State University, SDSU), it was in an environment very similar to the one I had in
Brazil in the sense that the industry there is extremely supportive of our program. Whatever we
need, if we structure the call or the request nicely, we’ll get help: projects where we send people to
collect data, research, whatever, you name it. It was the same thing when I was in Brazil. The
interesting thing when I was [in Brazil, there] was this group that was very close to the university
down there, they had their own learning community, so there were 10, 12 companies that paid
money every month to become a member of an innovation type of community, and they would
bring experts from other parts of the world to see what they were doing, and they were very
creative in how they implemented lean construction, so much so that we were often hosting people
from around the world just to show what they were doing. One of the things that
caught my attention very early was that people who were trying to explain lean [construction]
to us, they would always have different ways of explaining. It was not your traditional “I teach,
you sit and learn.” There were a lot of fun exercises outside of the classroom and games and the
sheer volume of discussion and how we were trying to understand it, because we are engineers
and we were trying to understand all this philosophy behind lean.
I think the ingenuity that they have in the U.S., everything has to have a computer and
a laptop and a tablet and a projector. [In Brazil], this is not [the case]. People [become] very
creative in terms of how they [do] things with paper and pencil and conversations because of
[the lack of a computer and other technology]. That was very good, too, because when I was
there, I had a bunch of examples that I could give to my students and they could step out of the
university and see it.
ROAD OF TRIALS: A LACK OF IN-SITU EXAMPLES
When I came here to San Diego in 2009, I was talking about some of the concepts that I taught
in my graduate course [in Brazil], and people had never heard about them. I didn’t even have
construction sites to send [students] to see because [they were] not here. That forced me to be
even more creative on how these things were going to be put together because they couldn’t step
out of campus and say, “Oh, we are going to go to a construction site and see these.” Some of
the concepts that I teach them, to this day, they haven’t seen anywhere. My students back in
Brazil, they could actually go to a construction site and talk to somebody who didn’t know how
to read, and they were implementing some of those things. The barrier to change some of the
mindset there was much lower at the construction site level because some of these people, they
didn’t know how to read and write, but once they were brought into the discussion, and were
given some ideas, they would use it. If it benefited them, they would do it. It was a very big
shock when I came from there to here and I had to adapt my teaching and the sites were not
there for them to see, and people here, they seemed to be more reluctant to accept some of
these things. One of the most interesting things was that the terms that we were using to teach
lean to engineers, some of them were not translated to Portuguese. When you talk about certain
terms, the students had to learn what those terms meant in English. The term was in English
or in Japanese or whatever language and they had to learn the term. I didn’t think that that was
actually a problem for them. You would just say what the look-ahead schedule is, and they would
get that the look-ahead schedule was whatever I was explaining. They would call it as such in
English.
FIRST THRESHOLD: BUILDING A LANGUAGE BRIDGE
When I came back to the U.S., I started seeing that some of those terms I had been trying to teach
back in Brazil, where students would just learn a term like “look-ahead” and make sense of it,
were not commonplace here either. It was interesting to figure out how I was going to teach
that because there was not this, let’s say, inertia, right. The term “inertia” is translated into many different languages and
there are formulas that are associated with it. Well, with what I was teaching, there was not. It
was interesting when I came back here to see that those students were having trouble getting
some of these concepts that I thought, “Okay, it’s in their language, they are going to capture it
better.” That pushed me to be ever more creative in terms of how I was teaching these things. I
had to become very creative as my other instructors were in the past. That’s how I got into this
track, if you will.
BELLY OF THE WHALE: THE TASK OF TEACHING
[I realized that] I have to adapt whatever I’m doing and see what kind of population I have here.
SDSU is a university that is supposed to prepare people to go into the job market, so they are not going
to become researchers. Going to the Center for Teaching and Learning [CTL] lunches here,
they said, “Remember that your goal is for students to learn. Do whatever you have to do, but
they have to learn.” I think that was that “aha” moment that I can do whatever I want, but these
people are my clients and if they are not happy or if this is not useful for them, I have to do
something. I have to find the happy medium, which I think I ended up finding after going to
the CTL meetings and seeing the different approaches and using Blackboard and working on
my syllabi to make sure it’s clear and they know when the assignments are coming, when we
have simulations and just keep reminding them. I had to become this person that serves more
of the students and tailor my teaching to their needs.
ACCEPTANCE OF THE CALL: FOOD, HANDMADE LEGOS,
AND PRESENTATIONS
I like food. I would always start talking about some food-related stuff [in my classes]. I would
catch their attention right away. I was talking about something that was very personal and I
would say, “I like this, I like that, and now imagine that you are in a restaurant and that’s what’s
happening.” I would always try to anchor the concepts into something that they already knew
and they were familiar with on a daily basis. To this day, when I teach, I talk about all the food
places on campus. In one of my classes I used to send them to do studies in these places before
I would send them to a construction site.
That was one way. The other way was the simulation with Legos. You have probably heard
about many of them, and I created my own: I asked the students to cut dice, small dice,
and I would give very simple instructions and they would do it, and then we would move on to
different problems of the game with them, making dice and making the airplane game that you
might come across as you talk to some people in this field.
I remember a professor in Berkeley who used to bring some ingredients to class and mix in
front of the students. He taught construction materials, and he would show how those things would
add glue to the mix or become more watery or harder. Usually we see a lot of that in construction,
but I don’t see [it] in the other disciplines in my department, unfortunately. They might have
other things that they do that I don’t know, but as far as these Lego simulations go and stuff like
this, it’s just construction people.
In my grad class, which is the one that I teach these concepts the most, I created an
assignment that they have to create a video or an animation or a game that explains a concept.
They have to figure out, I just tell them what the parameters are. They have to pick a concept.
They have to do a lesson plan. They have ten minutes to present and the video has to be up to
five minutes within that presentation. It’s something very focused, and they presented that.
ULTIMATE BOON: POSITIVE FEEDBACK
A few weeks ago, and every time they have a presentation, I ask them to post on Blackboard a
positive thing, a negative thing that can be improved, and a lesson learned. By and large, they
love this assignment because they could see the concepts, and the abstraction behind them,
presented in a different way through each group’s metaphor. Some people presented washing
dishes. Some people presented how they prepare to go surf. Some people presented how they are
getting ready to organize a new production line in their company. The comments on Blackboard
overwhelmingly tell me that they enjoy that because they could see different ways of applying
the same concepts.
APOTHEOSIS: MORE PLUSES THAN DELTAS
[These comments are] usually very positive because we still have the traditional lecture [when]
I’m in front of them and I’m lecturing the traditional way, but we have a lot of guest speakers
and we have these simulations. Every time we have a guest speaker or we have a simulation, or
they present, they have to do this plus/delta of lessons learned. This plus/delta lesson learned is
open for everybody to see, so they see what other people write, and it’s a safe environment, it’s
free of criticism. They post whatever they want and I put my comments as well, but they can all
see what they’re saying.
The only negative comment that I got that I would say for my video animation assignment
is that they want more time to present. I’m saying, “No. I’m not going to let you get more time
because you’re going to be rambling there and people are going to get bored.” That’s the only
negative thing that I have gotten so far. They want more time.
MASTER OF TWO WORLDS/FREEDOM TO LIVE:
PEDAGOGICAL FLEXIBILITY
I feel that every time I’m going to lecture, I have my slides that are ready, but I never have the
same exact lesson. Never, ever. Every time I go to a class, not only do I have to look at my notes, but
from whatever I’m listening to on NPR, if I have a chance, or reading in the news, I always try to bring in
something that is contemporary, if nothing else, just to catch their attention. They might be
distracted and say, “Oh, did you guys see this and that?” This relates to the class. I want to keep
using that and I want to flip the class a little bit more moving forward. It’s very hard to come
to grips with the idea that you want to introduce these things and you want to give freedom for
them to lead the class, and at the same time cover the course material.
Right now, I hope I can better manage this struggle that I can just smoothly put these
innovations in the class and still be able to move on with the content. This was another thing
that the CTL changed in my mind: before, I wanted to go and bang, bang, bang, bang, cover
the syllabus. Right now, I said, “You know what? If they learn this and this and this and they
really know well about this, I can skip a topic or two and maybe have a smaller amount of time
dedicated to that.” I think that my battle is to change a little bit at a time but still cover the
material.
MEETING WITH THE ALL KNOWER: LIKE-MINDED
EDUCATORS
[I also received support from the ASCE workshop.] That was the best work week of my life,
because we would work from 8 o’clock [in the morning] to 8 o’clock, maybe 9, 10, 11 [in the
evening], and we were all so happy, so excited. That was another thing that I appreciate, having
a chance to go there and have this support that you are mentioning, because that’s another thing
they said. You have to think about ways to teach your students, and engineering professors are
very, “You teach like this or teach like this.” We went to West Point [for the workshop], and you
had all these military professors that you would think are very structured. It’s very structured,
but it’s also very fun. They had all these different things that they would do.
That was a huge help for me, to be part of that and to accept that those things are accepted
in engineering, because I have the impression that when you say that you are doing
these things, some people think, “Are these people really learning? Is she really teaching something?”
With my comments, my reviews, right, you will see that the students enjoy it and that they like
this approach. That was a huge help.
MASTER OF TWO WORLDS: LEGOS AREN’T A WASTE OF
TIME
As far as the CTL help here, the director of the CTL here just invited me recently to be part of
their first advisory board that they are putting together, so I’m having a chance to actually look
at some of the schedules and workshops and things that are more hands-on. They are doing that
and whoever is part of the board is trying to say, “Yeah, we prefer to have hands-on things that
we can bring to class immediately.”
I just wish more people [who teach engineering] spent the time to do these things, because
I have the impression that sometimes some of the students from other disciplines, when
they land in my class and hear, “We are going to work with Legos,” they are like, “Oh, here
it comes.” They think that it’s something that I’m just doing to kill time, that I’m not prepared
for class and so I’m going to play a game or something. Then some of this perception
changes: when I say we are going to play a simulation, they know that it’s serious,
they will be engaged, and I’m not just trying to kill time. I think that it would be good if more
people had this mindset, that they could try and use these other experiments in class.
CHAPTER 7
Finding Her Niche with Hands-On, Practical, and Real-World Pedagogy
Fernanda Leite
Narrative constructed by Nadia Kellam
I think I’m still a work in progress. You said [that I’m a] rock star, but I just see myself as
somebody that’s just constantly learning, and constantly trying to provide our students the
best education that they deserve to have. That just is a work in progress. Honestly, there are
still days that I come out of a class and I said, “I could have done that better. Next time I’ll
do it better.” It’s always a work in progress. That keeps me motivated.
Fernanda Leite is an Associate Professor in Civil, Architectural and Environmental Engineering in the Cockrell School of Engineering at The University of Texas at Austin.
CALL TO ADVENTURE: COMBINING PASSIONS
My father is an educator. He’s a professor in Brazil in agricultural engineering. I grew up actually
in College Station, Texas. I knew what it was like to live the academic life from my observations of my father. I also was very passionate about construction, which was my grandfather’s
profession. He was a developer of high residential/commercial construction in Brazil.
My first teaching experience was teaching English as a foreign language in an afterschool
program. That’s where I fell in love with teaching, when I was an undergrad [in Brazil, where
I’m from.] I knew that I wanted to teach, but I knew I didn’t want to be an English teacher in
an afterschool program as a full-time career.
I put all these little pieces together. Really, the passion was sparked, was ignited,
when I was just teaching. It was better than flipping burgers. I had the skills. Why not use that?
The research observations of my father, and the construction domain from my grandfather. I
wanted to combine the two [professions.]
SUPERNATURAL AID: FIGURING OUT HOW TO BECOME
A PROFESSOR
[When teaching English in the afterschool program,] I just had lots of fun… just seeing people,
observing people grow, and how a small intervention could really impact people. That, for me,
was really encouraging, and it just gave me a high. The same thing that I feel after teaching a
good lecture. Endorphins go off in your brain, or something like that. It just feels really good.
[However,] I knew that that wasn’t the domain where I wanted to be doing that, in terms of
teaching English. I wanted to teach my chosen profession.
My father really helped me shape how [to reach my goal.] That’s where I traced out my
plan of, “What do I need to do to be this person, in terms of getting the right degrees?” I sat
with my dad and said, “What do I have to do?” He said, “Well, if you want to be a university
professor, you’ve got to have a Ph.D.” That’s where it started. From there, I went into a Master’s
program in the south of Brazil. I’m from the north-east [of Brazil.]
In Brazil, at least, at that time, I didn’t know I was going to get a Ph.D. in the U.S., or
become a professor in the U.S. My plan was more, “How do I become a professor in Brazil?”
Because that wasn’t on my radar, that this would be a possibility. My dad’s like, “Well, you need
a master’s first, and then a Ph.D. Then you can apply for a faculty position in Brazil.”
MEETING WITH THE ALL KNOWER: A VISITING
PROFESSOR AND FUTURE ADVISOR
During my Master’s [degree], a professor from Carnegie Mellon University went and taught a
one-week short course over the U.S. summer, our winter in Brazil. At the end of that course,
he basically said, “Would you like to get a Ph.D. at Carnegie Mellon?” I said, “Sure. Are there
two positions, one for me and one for my husband?” Because my husband was also on the same
path. We were both Master’s students together. We ended up going to Carnegie Mellon for
our Ph.D.s, as well. He said, “Sure, I’ll get a position for the two of you.” We applied, and it
worked out. We actually only applied to two U.S. universities for our Ph.D.s: UT Austin and
Carnegie Mellon. We were accepted to both, but we decided to go to Carnegie Mellon in the
end. [Eventually,] I ended up here [at UT Austin] anyway, because I grew up here in Texas and
I had a big connection with the state.
ROAD OF TRIALS: EXPERIENCE TEACHING IN
GRADUATE SCHOOL
I’ve always served as a teaching assistant in classes in grad school. I’ve always done research,
which was my primary responsibility in grad school. I’ve always been really, really passionate
about teaching. What I noticed [when I taught in graduate school] is that a one-hour lab was
pretty limited and it was, most of the time, very disconnected from what was happening in the
three hours of lecture in the week. My Ph.D. advisor taught those, and then I taught the one-
hour lab. There was that disconnect. That was the first thing. My desire was that it would be all
connected, better connected.
I was frustrated [then,] because I didn’t [teach in a hands-on] way as a graduate student,
as a TA. I was supposed to teach the lab, and teach them how to press the buttons. That just
frustrated me. But you only had one hour a week, and it was not connected to the lecture slides.
You really couldn’t do a lot more than that anyway. That’s the first thing that I said. “If I’m going
to do this, I’m going to do it right, the way that I really believe how this should be done.” For
me, since I’m in such a practical field, which is construction, I don’t understand how I can teach
without it being hands on, and very practical, and very real-world oriented. I just don’t know
how to do it a different way, honestly.
When I was a teaching assistant, I think the hands-on component was very limited. I
tried to develop my advising style, my teaching style, based on my own experiences working
with other people.
APOTHEOSIS: DEVELOPING AN INTERCONNECTED
COURSE
When I came here to UT, one of the things that I created was this BIM (Building
Information Modeling) course. The way I thought of creating those connections better was by
dividing the course into modules. The modules would be a lecture, two lab classes, and then a
reflection class. All on that same theme. They’re all very interconnected.
In the first lab class, we teach them. They’re able to use five or six different software systems
to do applied BIM for different application areas in construction. There’s a lot of new
software that they’re learning throughout the semester. Each module has at least one or two
[types of software]. That first lab class is getting them up to speed in those software systems.
Then they typically have one week between that lab and the second lab.
The second lab, I call it Time for Questions. There’s no teaching component. We’re not
showing them anything. We’re just walking around the classroom answering questions. Most of
the teams have done about 80% of the work between that first lab and then the second lab. That
way we’re just really helping them connect what they’ve been doing to the general theme of that
module, and answering questions.
I tell them that, “I don’t answer button-related questions. Don’t ask me a button-related
question. Personally, I don’t really care about teaching how to press the right button in the
right order in the software system. For me, I care about what are you getting out of that decision support, that software system, the output that you’re getting? How is that changing your
decision-making process to solve that problem?”
I tend to focus more on the process of the module, because I really believe that they can
pick up the software, wherever they go. Whatever software I’m teaching them here might not be
the one that they’re going to be using when they go out in industry. I really don’t put too much
emphasis on that. That, I think, is the main difference between how I teach them in class.
Over the summer of 2015, I participated in an academic BIM symposium, where faculty
from all over the U.S. who teach BIM shared how they teach it. Most people tend to focus on
... One of the major software packages that [we] use is called Autodesk Revit. They said, “Oh, I teach
Revit. How to draw a column, how to draw a wall.” Well, that’s not really ... You’re teaching
them how to use a program. For me, that is a disservice to our students. Because there are tons
of YouTube tutorials online that they can learn that through. You really have to teach them how
to make decisions with those systems.
Then the second component is [that] it’s got to be all based on real-world problems. See
those large drawing sets over there? They actually use that. That’s a commercial building in a
different state, in Pennsylvania. That’s what they use for two of their homework assignments in
my BIM class. They literally walk around with those giant sets of drawings. It’s a real building
that has been built. Another homework assignment is another building under construction. The
fourth homework assignment is this building we are in [at UT Austin], ECJ (Ernest Cockrell
Jr. Hall).
They do different things with these real-world projects. That’s important, because [the students] need to understand project complexity. In engineering, we tend to over-simplify problems
and spoon-feed a lot of the boundaries of problems in a way that, in the end, there’s
only one right answer. You give them all the assumptions that they’re supposed to make, you
spoon-feed all of the inputs. That’s not how it is in the real world.
When I show up in class with the first module, I show them these drawings and tell
them that in homework 1, which is a model-based cost-estimating assignment, they’re not going
to find the specs stated exactly like they are in that project, in those drawings, in the specifications
for that project, or in the National Standard for Cost Estimating. They have to make assumptions,
they have to interpolate, they have to find approximations.
Some students go crazy, because they are just not used to that world. [They say], “What
do you mean I have to make an assumption? What is the right answer? What is the number
that I’m supposed to get?” This is a cost estimate, it’s an estimate. There is no one right answer.
Everybody’s going to have a different answer in the end, based on their assumptions. If two
teams come back to me with the same answer, that’s when I know there’s a problem.
At the beginning, they’re a little shocked in the first assignment of the semester, but then
they get used to it. When they understand that, they really flourish in the class. I notice that
a lot, especially with students that have had no internship, no industry experience, a lot of our
undergrads are like that. This is a cross-listed course, with graduate and undergraduate students.
It’s about half undergrad/half grad. I see that reaction a lot with undergraduate students.
Each module has basically the same structure. Lecture, which is the theoretical basis for
that module. We typically have a reading associated with that. The two labs, the first lab to get
them jump-started; second lab, time for questions. Then the reflection class. Which is, if there
are eight teams in that semester and four homework assignments, then two teams would present
for every homework assignment. Each team has specific points to cover in their presentation,
so that the presentations are not really repeated. It’s more meant as a discussion. Everybody is
expected to participate in the discussion and chime in, because everyone has had that experience.
It’s not like a project where only that [one] team did that one thing and they’re presenting to everybody
else, where it’s all new information; no. In this case, everyone had that same experience, so they’re
all expected to provide their input, as well.
There tends to be a lot of interaction and discussion in this class. Even in the first lecture-
type class, I start the lecture, I just leave the PowerPoint up in the background, and we’re dis-
cussing the assigned reading for that class. There’s no PowerPoint, it’s just the first slide is up,
but I’m sitting there trying to get their perspective on what were the main take-home messages
from that reading. I really try to cement the important concepts in that reading. Then we get
into the lecture. Then I basically tell them what the structure of that assignment, that module,
is going to be after that.
The reflection just caps it all off. We’re able to provide some closure for that module, and
then discuss: “What were the limitations?” “How would you do this differently if you had this
other piece of information?” And so forth. There’s a lot of discussion that happens. It’s very, very
interactive.
[There are] four modules that are structured that way throughout the semester, and then
there are several guest lecturers. I’m very much a believer in real-world knowledge. I invite people
to come and guest lecture to talk about how they’re using BIM in the field. We actually have
two site visits. We’re actually going to observe people using BIM in the field.
We already did this two weeks ago on our first site visit. This is a high-rise project
on West Campus, a residential tower. We went and visited their BIM office to see how they’re
using the models in their field office, and then visited the job site to see: here’s what they talked
about in the virtual world, in the 3D model, and here’s what they’re doing in the field. When
they actually see it, and see other people using it in the field, it really sparks their interest. All
of these different perspectives help them cement these concepts. It’s not like they’re just getting
a one-sided perspective, it’s [that] other people are showing them how they do this as well. It’s
not just one true reality. People apply this in different ways. It’s important for them to see that
experience. Also, each team of students has an assigned industry mentor. They work with
that mentor throughout the semester developing a case study on a real-world project that uses
BIM.
As you can see, in all of these assignments, there’s nothing that’s very spoon-fed, like assumptions. Everything [assigned requires that] you’ve got to go out and try it. There’s some structure.
There’s some material. But you’ve got to make decisions on your own. You’ve got to understand
that, in the end, you’re the professional. You have to take ownership of your work. We’re not
going to spoon-feed you. You’ve got to go after all this data.
RETURN THRESHOLD: BRINGING THE REAL WORLD
INTO THE CLASSROOM
Before the semester started in January 2015, in December 2014, I held a brainstorming session with an industry group called Safety Community Practice. There are about 32 professional
safety engineers in this group from all over the world. We had a brainstorming session, over
GoToMeeting, on what the next generation of safety engineers should know. That’s how I created
the topics, the structure of the lectures.
It’s not my domain of expertise, but I really wanted to teach that class. But I also wanted to
reach out to people for whom it is their domain of expertise. That’s how the lectures came about. I put
together the syllabus. Then for each lecture, I thought about, “How can they do that reflection
in the class?” Because I don’t want to just read lecture slides, I want them to be able to reflect
on their own, on that theme. For each lecture, I did something hands-on. Often it was a case
analysis, and they really loved it. A real-world case, and they have to deal
with, let’s say, an accident investigation. What were the different steps? How would they do an
accident investigation for that case?
One day, we used the intersection of Dean Keeton and San Jacinto, right here, [outside]
of our building. We considered that an active job site. Each time somebody crossed, jaywalked,
basically, that was considered a near miss. They basically counted how many near misses there were
in that intersection. We were doing behavior-based safety; that was a theme. They were using
a lot of concepts that we had learned throughout the semester applied to something that’s not a
construction job site, but you could, literally, think about it as a job site. Because you still have
behavioral issues. You could think about a car as being a piece of construction equipment, and pedestrians
as being the laborers on the job site.
We also went to this job site here, the new engineering building, and we did job hazard
analysis, a theme in one of the classes. We went and looked at a set of workers that were doing
a specific task around a column, and we just identified all of the hazards in the field, in person. I
got several exam questions just from [a] picture I took of this construction site from my office
window that they had to do an analysis on, of a real-world problem. Nothing is memorizing and
applying; it’s really understanding the problem, and how do you connect the concepts that we
covered in class to that real-world problem? How do you do something that people in the real
world, a safety engineer, are actually doing? Like an accident investigation, a hazard analysis, and
so forth.
MASTER OF BOTH WORLDS AND FREEDOM TO LIVE:
ENCOURAGING OTHER ACADEMICS
I think the biggest barrier, when I presented my approach in the summer of 2015 to other academics teaching BIM all over the U.S., is that people are just shocked at the amount of work. I think
that’s the first thing. It’s too much work. Because you have all these mentors, you have all these
case studies, you have these site visits, you have all these modules. It’s five or six different software systems. If people just look at it from a distance, they are overwhelmed, because they’re
going to think it’s too much work. I think people are scared of things like that. Most faculty
want to be able to not depend on other people. “I want to be able to go to my lecture and just
do my thing. If I have to depend on other people, then it’s a bottleneck.”
The type of support that I have found helps is a teaching assistant; I have always had one.
The teaching assistant helps update the tutorial material for the lab, and helps teach the button
pressing in the lab. Because if I had to update all of these, because the software systems are
updated every year, every single year... I’ve got to make sure that the right version is installed in
the lab, I’ve got to make sure that the tutorial material’s up to date. If I were to do that myself
every year, that would be a huge barrier for me to keep doing it this way, because it’s just a huge
amount of time. Frankly, I’m not even interested in that part of the process. I do all the other
[parts] of identifying the real-world problem that they’re going to be using for the assignment.
Identifying the mentors, connecting them to the mentors. Frankly, updating the material, I
think, is the barrier. If people don’t have that same kind of support, it becomes overwhelming
to teach a class like this.
I try to do something, a smaller version of this, in my required undergraduate course, where
we have five different structures on teaching. I try to make it very problem-based as well in class,
but I don’t use any software systems, which also minimizes that barrier as well. Same thing, I
have a real-world project—they do an estimating project. I pick some area around campus, so
they can actually go and see it. Normally, it’s little plazas like the Barbara Jordan statue plaza.
They do a quantity takeoff and a cost estimate for that. Then they have a panel of judges that are
all UT project managers who actually worked in the construction of that plaza, and they evaluate
the students’ work, the students’ cost estimates. That mini-project is probably the highlight of
that course. I still do it. That takes about two to three weeks in a semester. Very hands-on, and
it’s completely real world. It’s much better than teaching them how to cost estimate using a
very standard problem from the book. It’s boring, and it doesn’t really show them the multiple
dimensions that go into the problem. It’s too simplified. It’s nice to get the students out of their
comfort zone, to really make them think, and not just blindly apply things.
I’m the only one that [teaches in a hands-on way.] Because most people will say that, “Oh,
it’s a lot of work, because we have to get all those projects, you have to get those plans, then you
have to get that panel of judges.” Honestly, I don’t think it’s a lot of work. I think the benefit is
much larger than the work involved. Once you have a structure in place, all I change every year is
the project. But this general structure is pretty much the same. I know if you build relationships
with the right people, you can rely on them every time you teach that class.
If you just plan ahead, people are happy to help. It doesn’t become that overwhelming.
The students really value that, and they really enjoyed something that they can say, “I’ve worked
on that project right there.” Barbara Jordan statue plaza, or the Cesar Chavez plaza, or whatever
it is on campus. They’re really going to take that and remember that for the rest of their careers.
Again, people will say, “Oh, that’s too much work.” It is much easier to just get a book,
walk to a lecture hall, and just teach straight from a book. Yeah, that’s easy. For me, that’s not
fulfilling, and that’s not why I chose this profession. That wouldn’t make me happy. I would feel
very frustrated.
STORIES FROM MY CLASS: TEACHING WITH LEGOS
Last Thursday in class, we did a Lego exercise. I can send you the examples. Basically, it’s like
a 2D set of drawings, elevations, plans, section cuts, of a 3D model made in Lego. They are in
teams of 4, and there are 4 colors in this exercise. Each student is a different color. They have to
build that model in 3D, but all they have are the 2D drawings.
We timed them. We basically see how long it takes for them to build a 3D model. It takes
them between 7–10 minutes, which is a pretty long time, if you think about it. But it’s to make
a point. There are about 30 pieces, 4 colors, 4 team members per group. They work together,
and they have to communicate their ideas to make sure they’re putting their pieces in the right
place. But also the fact that they have to interpret that 2D information, translate that into 3D,
that’s also the point of this assignment.
The second round of it is I give them a 3D perspective of another model. They have to
repeat the same exercise. They still have the same colors, same number of pieces. Now they’re
able to build that in 55 seconds to a minute and a half. It really decreases the amount of time.
That’s to make the point that if you’re able to communicate in 3D, which is part of BIM,
building information modeling, your crews in the field
that are building that will have a very clear understanding of your design, of what you’re trying
to build. They’re able to work more efficiently, because they’re not spending a lot of time trying
to translate something in 2D, which was already translated from your 3D original idea.
We run through those two simulations. Between each one, we reflect. We think, “Okay,
what did you learn?” In the end, we reflect again: “What did you learn?” This whole process was
just this exercise, and reflection on what they learned. Little things like that I sprinkle throughout
the semester as well.
I do a Lego exercise in my required project management and economics class too. It’s
literally two teams of students, one on one side of the table, another one on the other side of
the table. There are 10 Lego pieces that start with the first person on each end. Each team has a
die, they roll it. If you get one, that means that your productivity rate for that day was one unit.
You pass one Lego piece to the person next to you. The person next to you rolls their die, and
they get three. That means their maximum productivity rate would have been three that day. But
since the previous person was a bottleneck, they can only pass on one. Then the next person...
We keep doing that exercise. Say the first person on the other side got six at the first
roll of the die, so he or she passes on six. The second person got four. He passes on four, he
keeps two in his station. For all of the pieces that are left over after that round, I ask the class,
“What do they mean in the concepts that we covered?” Because the pieces that stayed in your
station, they’re called work in progress, in construction. The roll [of] the die of one, that’s a
low productivity rate, that’s a bottleneck. If they’re able to play that and see those concepts, it
cements it in a much better way. It’s a very simple exercise that takes less than 10 minutes with
a discussion in class. It’s more effective than me going through those definitions in a regular
lecture. Then they’re able to really see it, and experience it, and they tend to quickly learn it.
And they’ll probably never forget it.
MASTER OF BOTH WORLDS AND FREEDOM TO LIVE:
INTEGRATING TEACHING AND RESEARCH THROUGH
AN INDUSTRY FOCUS
I have my niche, which is, I’m very much connected to industry, even from my research as well.
My department supports me. I can be productive and can build these connections. People respect
me for what I am, because this is what I am, this is who I am. I’m just one person. I can’t separate
my teaching person from my research person. I’m one person. My experiences are all combined
experience. That’s what’s important. I have to put in the same dedication that I do for teaching
in research. That’s the only way to keep teaching cutting-edge as well.
Especially teaching something that’s very much information technology. That gets old
really fast. You really, really have to be connected to research to keep students engaged, and
what’s the most innovative piece of it? Keep them ahead of the curve. That’s the last thing. Just
always maintaining that connection between research and teaching.
I actually have a student right now, one of Mary’s (pseudonym) Ph.D. students. He’s
observing every single one of my classes for his research. One thing that he’s probably noticing
is that whenever I ask a question, out of the 21 students in the class, I see at least 7 or 8 hands go
up. One third of the class, they immediately put their hands up when I ask a question, because
they’ve had plenty of time to reflect on the question that I’m asking. I get to know all of them
individually, so I tend to ask questions sometimes about their specific experience. They’ll share
that. They’ll be able to communicate that well, because they’ve lived it as well. That’s one thing
that’s amazing. You get a lot more participation that way, because they feel more confident.
All I know is that it’s not going to stay the same. My hope is that a class like the one
that I teach, the BIM class, is not going to be needed in the future, because it’s just going to be
industry practice. That’s what I tell my students. My ultimate goal, my dream, is that I’m not
going to be teaching this class in 10 years, because this is just industry practice, there’s not going
to be a need. I’m going to have to come up with something new that’s going to be the next big
thing in the industry. I’m going to adapt with time. Luckily, I have that luxury of being able to
tweak things throughout the semester, between semesters, and think about new courses as well.
I think it’s stimulating, also, to teach new courses, because it forces me to think differently,
and to teach things in a different way. Because each domain has their own specificities that
require you to adapt to that and try to deliver that material in a different way.
I think I’m still a work in progress. You said [that I’m a] rock star, but I just see myself as
somebody that’s just constantly learning, and constantly trying to provide our students the best
education that they deserve to have. That just is a work in progress. Honestly, there are still days
that I come out of a class and I said, “I could have done that better. Next time I’ll do it better.”
It’s always a work in progress. That keeps me motivated.
Doing that also makes me come up with ideas for research. Ideas that I have in research
become modules, or lectures, in my course. For me, a lot of people say, “Teaching takes away
from research. That’s just taking up too much time. I don’t want to do that, because I’m so busy
with research.” For me, it’s all one thing. The way that I see it is that I build off of the experiences
that I have in the classroom. That gives me lots of ideas for research, and vice versa.
CHAPTER 8
Creating a Community of Collaborators to Achieve Curriculum Change
Charles Pierce
Narrative constructed by Audrey Boklage, Brooke Coley, and Nadia Kellam
I want to share what I think engineering is, because that certainly has changed over time…
Engineering is helping people. That’s what I think of it as. We solve problems for [the]
purpose [of] trying to help come up with solutions to problems that impact society. That’s
kind of by definition what we do.
Charles Pierce is an Associate Professor of Civil and Environmental Engineering at The
University of South Carolina.
CALL TO ADVENTURE: TEACHING RUNS IN THE FAMILY
My dad was a civil engineer and also taught years ago back when you could teach with your
Master’s degree, which is what he had. He taught at URI (the University of Rhode Island) for
a few years. I bring that up because I was aware of what his professional trajectory was, [and]
most of his time was [in] professional practice, but I know he had some teaching experience. My
mom was a nurse, which is important. I think in many ways, I’m a perfect blend of my parents,
because I’ve got the technical side from my dad and my mom helped people. I had a pretty good
idea that I wanted to go into engineering from high school, [and] into college. My parents never
pushed, but I was well aware of what my dad did and I seemed to have those interests.
I earned a civil engineering degree from the University of New Hampshire (UNH). It
was a reasonably small state program, which was a good choice for me, and I really got to know
a lot of the faculty. I had some very important relationships with professors as an undergraduate
student.
SUPERNATURAL AID: GRADUATE SCHOOL ADVICE
I was pretty involved at UNH. I joined ASCE (American Society of Civil Engineers) and
became president and all those things. I do remember that one professor in particular, during my
junior year, encouraged me to look into doing a Research Experiences for Undergraduates (REU)
program. Of course, at that time, I knew almost nothing about [doing research]. I applied to an
REU program at Cornell and was accepted. I did that the summer after my junior year.
As a junior, I already had a sense that I would go to graduate school. When the opportunity
to gain research experience [was presented to me], I figured it was a smart move. Plus, at the
time, [the market for] finding part-time jobs in engineering was really bad. Even with my dad
being in the industry, he was unsuccessful at finding an opportunity for me to do an internship.
It’s not like I had other options.
I went and had the summer experience at Cornell University, which was very favorable.
I got a sense for what graduate students did. I think that really set the stage for me to apply to
grad school my senior year. I even went to one of my professors and asked him, “Hey, I think I’m
interested in geotechnical [engineering]. Are there some programs that you would recommend
for me?” Northwestern University was one of them, and Purdue University was another, where he
also had a classmate. I applied to four schools. I applied to Virginia Tech, where my uncle was
on the faculty in mechanical engineering, so there was a connection there. I applied to Purdue
University, Northwestern University, and Cornell, of course, where I had gained the summer
experience. I was accepted at all four and received funding from two.
I distinctly remember having a conversation with one of my professors at UNH. He suggested, “Go where there’s funding. You should be funded to go to grad school.” That meant
deciding between Northwestern and Cornell, which were the two that had made me offers.
Ultimately, I thought Northwestern was a better fit for me than Cornell, partly because I had
attended a fairly rural, small town school in New Hampshire. The thought of being able to go
to Chicago and go to school there was appealing.
I visited all four schools my senior year to make a decision. I remember visiting Northwestern [and] meeting the faculty in the geotechnical program, which was my specific interest
within civil and, more importantly, meeting the graduate students. Of course, I didn’t know
what to expect, so I’m like, “I’m going to meet all of these really intellectual, you know, above
me kinds of students.” I was pleasantly surprised to find that there were students I thought were
very much like me, which was important in making that decision. That’s not to say that [it]
wasn’t the same at Cornell. But, for whatever reason, at Northwestern that really struck me.
SUPERNATURAL AID: PUSH TO PH.D.
[I decided to attend Northwestern] for the purpose of getting a Master’s degree. [One day after
I had applied to] the Master’s program, I received a phone call from a professor there. He said,
“Hey, I saw your application. I saw that you applied for a Master’s degree. I really encourage you
to apply for the Ph.D., because then you’re eligible for fellowships.” I would not have otherwise
been eligible for such funding. I remember thinking, “Sure, I’ll do that. Why not?” [I’m unsure
whether I had an] inkling to get a Ph.D. [at that time]. I remember really having to think about
that decision, though I was intrigued by the idea of getting a Ph.D. My goal at the time was
[to] get a Master’s degree, be like my dad, go into professional practice. That was my intent. I
now had a better sense of what a Ph.D. was, which I did not as an undergraduate, and could
see what I could do with it, which was to go into academia. I enjoyed doing research, but never
thought that was my strength.
I really did like teaching. I should probably rephrase that... I liked having good teachers. I
liked being a student in classes where I resonated with professors that I thought helped me learn.
I do believe much of that wasn’t necessarily from my graduate program, but much of that was
from my undergraduate program, because I feel to this day that I had some absolutely fantastic
engineering professors in my undergraduate program. I knew very clearly, I could not go do that
unless I had a Ph.D. I think [that] had there been more options, like it was back in the day
with my dad, I might have stopped at the master’s and then tried to pursue getting a teaching
position. But I knew those opportunities didn’t really exist anymore, in large part.
That was a big part of my decision-making to get the Ph.D., knowing that I had to go
through the research process. My end goal was that I wanted to be a teacher. I think during my Ph.D.
program, one of the things that resonated with me was having the opportunity to be a Teaching
Assistant (TA). I think most of us that were Ph.D. students had at least one opportunity to TA.
I don’t think it was a requirement, but [being a TA] was certainly an opportunity.
MEETING WITH THE ALL KNOWER: AN OPPORTUNITY
TO TEACH AUTONOMOUSLY
I was asked to TA the soils lab, which is pretty common in geotechnical. Once I knew I was
going to TA, [I went to the] professor who normally teaches that class and asked him, “What
do I need to cover? Just tell me and I’ll do it.” [His response was], “This is your class. You do
what you want.” I was flabbergasted by [being given] that amount of responsibility. I was like,
“Okay. All right, I can do that.”
He just told me to run with it. I mean, he didn’t hold my hand in any way, shape, or form.
I don’t think he even came down to the lab with me. He said, “You know where the lab is. Go
find the equipment. Figure out what tests you need to run.” I mean, he completely left it up to
me. Whether he knew it or not, I have no idea, but that was the perfect thing to do for me,
because it really did make me think about how the decisions about what to do in a lab class or
any class [are made].
Just all the planning that’s required for managing students and managing equipment and
thinking about what you want them to get out of it. I’m sure I didn’t do a particularly good job
with that at the time, but I remember having to think about it. That was really important, because
it confirmed for me that I liked doing those things. Given everything else I was supposed to be
doing with research, I was spending too much time TA-ing. But it was important to me, so I
did.
[After finishing my Ph.D.], I was interested in finding a place that had more of a balance
with research and teaching. I knew I didn’t want to go to a top tier research institution. I just
didn’t feel like that suited me the best. To be honest, I don’t even know how I made that distinction. How [was I even defining] a good teaching institution? I don’t know. I think it was more a
process of elimination than anything else. Like, “Okay, I know that’s not what I would consider
a top tier research institution, so if it’s not, then maybe it’s more teaching oriented. I’ll apply
there.” I was minimally informed back then in that process, but probably not as [informed] as I
wish I had been.
I ended up getting an offer here at The University of South Carolina (USC), which seemed
like a good fit. I had a sense from most of the faculty here at the time that teaching was important,
and it was valued within the department. That was significant for me.
SUPERNATURAL AID: FUNDING SUPPORT
I came into USC knowing I wanted to be a good teacher. I [was also aware] of the research
expectations and [knew that I’d have to] balance that. I was reasonably fortunate to get some
grants funded early on. In fact, I think the first NSF proposal I wrote was funded. That, in large
part, had to do with my collaborator. I was set up with a senior person living in Georgia at the
time who had been a faculty member elsewhere and was looking to get back into academia.
We were connected and wrote a joint proposal together. He was phenomenally helpful in that
process in terms of learning how to write a proposal, which I knew very little about. I had some
experience as a graduate student, but not enough.
I was fortunate to have had that. I was also able to get some local funding through the
Department of Transportation. I was getting grants and [using them to support] students. I felt
like I was doing the things I was supposed to be doing at the time, which was good.
ROAD OF TRIALS: NEW(ISH) CONTENT
I think the [funding success] allowed me to feel less pressured about the [research process],
enabling me to spend time on teaching. I came in and was asked to teach a little bit outside of
my comfort zone. I was asked to come in and teach a civil materials class with the associated lab.
I had a little experience working with cement-based materials and concrete, which was part of
this course, but I was not that familiar with a lot of the other content [without referring] back
to my undergraduate days.
I needed a lot of prep time to learn [the course content] on my own and [be able] to share
that with the students. Then I also had the corresponding lab, which I think was very good
because it really forced me to understand material behavior. Not only did I have to teach it, but
I had to be able to demonstrate it in the lab. I spent a lot of time working on that and then that
next semester, I was [put in a similar situation]. I was asked to teach our soils class, which is a
junior level class, and the corresponding lab as well. In some ways, that was good because it was
the same type of teaching, just different material.
ATONEMENT: STUDENT FEEDBACK
[Starting off] at 28 or 29 [years old], I really enjoyed meeting the students, talking to the students, and trying to get to know them. I wanted to find out as best I could whether or not they
were learning anything from me. There was one student who was also very willing to get to
know me. During the middle of the first semester when I was teaching that civil materials class
and the lab, I individually asked him, “Hey, how do you think the lab is going? Is this helping
you learn?” He was actually honest and said, “Yeah, but I think you could do this, you could do
that.” [I found his critical feedback to be] great and was actually happy to receive it. It made a
big difference for me by [enabling me to] understand [his perspective] of what was working or
was not working. From that point forward, I always felt comfortable trying to solicit that kind
of information from students. That was always an important thing for me, to try to get a sense
from the student of what they thought they were learning.
ROAD OF TRIALS: COMMUNICATION IN THE
CLASSROOM
Here I am, first year, teaching two classes, teaching two labs. I felt okay about that process.
I know I worked hard in developing materials and strategies, although I don’t know if that’s
really what I was thinking of at the time. [My conscious focus was more], “How do I try to
get this information across?” The teachers that I liked as an undergraduate student were ones
that I thought made the class engaging and entertaining. One of my environmental engineering
professors, I particularly loved. She was hard and I did not do well in her classes, but I really
liked her and I liked her classes, regardless of how I did.
I was not ever really good at having a plan for content to cover [during the class period].
[The plan consisted of] compartmentalized units of notes that [enabled me to know] exactly
what was to be taught, [and] when. I quickly realized that [such structure] didn’t suit me, because
if students had questions on a concept that took 15 minutes of class discussion, I was okay with
that. Many of them were still struggling with basics. I really needed to spend more time making
sure they really understood the stress-strain curve, where the stress came from, how it could be
calculated, and the difference between load and stress. I also took for granted that students could
make the connections on their own.
ULTIMATE BOON: CONCEPTS, NOT SCHEDULES
I realized I had to step back and make sure they understood the basics. Eventually, [I accepted
that] if all I did was get them to understand those basics, [that would be] really good. As long
as I feel like what we’ve covered they’ve learned pretty well, I feel like I’ve done my job. But it’s
still a challenge.
SUPERNATURAL AID: CLASSMATE ASSISTANCE
A former classmate of mine was also teaching a civil materials class that was more his background. We were classmates at Northwestern and he knew a lot more about cement and concrete than I did. I asked him for help with notes. He mentioned that he used this little exercise
when teaching cement hydration. Cement hydration basically is looking at a chemical reaction
between cement particles when they come in contact with water. They go through a hydration
process and it’s exothermic; there’s heat released. Sure, I made sure I understood the reactions
so I could explain them in class. But I remember thinking, “How do I get the students to better
understand what’s happening?” He shared with me that he used atomic fireballs, little candies,
to illustrate that process.
Basically, it’s a little exercise where you just go through showing several of the chemical
reactions while [the students are] sucking on an atomic fireball. You associate where the fireball
gets spicy with the heat release. Then once that wears off, you don’t really notice it anymore, and
that is correlated to a decrease in the heat released. I just thought [the exercise] was the coolest
thing. I used it in class and I remember people loved it. So I’m like, “Yeah, that’s a really neat
idea. I need to do more of this, whatever this is.”
ULTIMATE BOON: CANDY AND PERSONALITY
I ended up teaching that civil materials class for a number of years, and over that period of
time, I ended up developing a whole series of mostly food-related activities to try to illustrate
certain concepts. I have one where I’ve [heated and frozen samples] of Laffy Taffy. When you
pull on the heated sample, it really stretches, and so I tried to use that to very grossly exaggerate
ductile behavior. [In comparison, the frozen samples] would become brittle and just fracture,
demonstrating that behavior. In doing a whole series of little things like that, I recognized that
students always appreciated when you did things that were a little bit different. Whether or not
it was actually helping them learn, I don’t know that I knew that at the time, but it seemed like
the right thing to do.
I ended up trying to create a number of these kinds of activities in all courses I was teaching. Never in my graduate courses, interestingly enough, but in all of the undergraduate courses
I was teaching. I think, having been a slightly above-average student, I feel like that has dictated
a lot of how I teach. I teach in such a way that I’m trying to reach everyone knowing that I won’t
necessarily reach everyone. I’m trying to really make sure students understand the most basic
principles first and then build on those.
Maybe that was just going to be my personality in the classroom anyway, but it felt like,
“I want to make it interesting and entertaining.” I tend to talk fast and loudly. I definitely tend
to not stand still, and that’s just how I am. Even back then the students commented, “Can’t you
just stand still? I’m having a hard time following you walking around the classroom.” [I’m] pretty
animated in the classroom. That seemed to go over well, so that was reassurance for me, that I
was at least approaching this the right way. Although, I feel like that was just my nature anyway.
It might have been hard to change it. I don’t know, but I felt like I was getting the feedback that
it worked well.
FREEDOM TO LIVE: FLEXIBLE SYLLABI
I wasn’t teaching things like statics or solid mechanics, which are largely solving problems, doing
calculations. I was sort of in this middle, more conceptual. The textbook for my civil materials
course was very conceptual. There were certainly problems and equations, but not all the way
[throughout the text. In the text] you’re learning about concrete, what it is, what it is made of,
and how you construct it. What steel is, what it is made of, how you construct it, so on and so
forth. I think it made sense to me [at the time] that I had to do these sorts of demonstration
activities for them to understand the concepts. It wasn’t just a matter of using calculations to
solve problems; that was part of it. But they had to understand the concepts, too. I think that
was the breeding ground for me, through what I now know as active learning, to realize that
conceptual understanding was really important. I’d always been asking students questions in
class like, “Do something. Do you understand that? Are you sure? Let’s discuss it again.” If I
was asked a question, I would try to explain it in a different way if I could. I think it was always
more natural for me to do that.
[I no longer] have a schedule in [the course syllabus] that says, “Day one or week one,
do this. Week two, do that.” With the way I teach now, I can’t do that because I let my class
evolve. Early on, my struggle was internal. I wanted to be that good professor that taught them
everything. I would say, “Okay, no more questions. We got to move on. I need to cover this
stuff.” I was very good about covering material because I was fast.
MASTER OF TWO WORLDS: PROBLEM SOLVING IN THE
CLASSROOM
I think what’s happened over time is [that] I find [questions I pose during class] become more of
the emphasis than a side note. That, of course, in turn, has evolved into doing things like worksheets associated with an activity. Not only do we do the activity, but I may have them complete
a worksheet. Maybe that’s individual. Maybe it’s group. Where I am—on the conceptual side in
terms of how things have changed—I do a lot more of those sorts of things. I also do a lot more
in-class problem solving and I never used to do that. I guess this was somewhat initiated by the
idea of wanting to flip a classroom. I got very intrigued by this concept a few years ago.
I like problem-based learning. “Here’s a problem. I haven’t touched everything about this,
but dissect it. Here’s the problem statement. What do you need? What’s given to you? Ask me
questions.” That’s what I like to do. I walk around in the room and get questions. Once I get a
question that’s been asked a couple of times, I raise it with the class. I love that. It is so much
more suited for me. Again, I guess I have just gotten to a point where it’s also [the] content I’m
comfortable with, which I think makes all the difference. I feel like I should be able to answer
any question they ask me, but I also feel like if they ask me and I don’t know, I’ll go try to figure
that one out. Again, if it’s in-class problem solving, I usually have a pretty good sense of what
kind of questions are going to come up beforehand. I guess that experience helps me.
I’ll pick a specific problem for a reason, because I know it’s going to teach them X and Y
when they go through this problem. Instead of me just writing stuff on the board, they’re going
to learn it by going through this problem. [I’ll ask], “Does everybody understand the X and Y?
Because that’s what’s important from this problem.” I always try to make sure to emphasize that
when we’re done. I do a lot of that.
ROAD OF TRIALS: IMPROVING THE CLASSROOM
There was a group of us in the department, sort of like-minded, who enjoyed teaching but really
wanted a better sense of what students were learning, [and] how to do a better job. We sat
down to put together an NSF proposal for what was the Course Curriculum and Laboratory
Improvement (CCLI) grant program at the time. The purpose of the grant proposal was to
develop a new course. We wanted to develop an introduction to civil engineering course that we
did not have [yet]. It was driven by wanting to improve the curriculum. We knew we needed
a first-year course for our students to have a better sense of what they were getting into in the
major.
That was the purpose, but at the same time, we knew, “Okay, what can we do different?
What can we do to make this class more unique?” We didn’t want it to be a class with a whole
series of PowerPoint presentations on the different disciplines of civil engineering and off you go.
We didn’t want to do that. We really wanted to make it a more engaging class. I can’t remember
now exactly how we arrived at this, but we did want to introduce technology. More specifically,
we wanted to introduce sensors. I think part of the reason [we wanted to introduce sensors] was
that most of us on the proposal did experimental research, understood the value of sensors, and
wanted to get that concept across to the students, thinking that [introducing sensors] would help
them learn some of the things that we do in civil engineering, some of the things we measure,
why we measure them, and how we measure them.
We were captivated by this idea of critical thinking. I’m not sure that any of us really
understood what that meant in the context of engineering learning, but we knew that was an
important thing. I do think a lot of it was driven by [reflecting on] most of our classes in the
curriculum, how students could go through calculations, solve problems and almost never think
about the output—never think about the number, the value, does it make sense within some
bounds of reasonableness, never think about the units... Of course, we complain about that all
the time. Students don’t really have a good understanding of units and how they’re associated with
the answer. One of my colleagues, I heard him say this in a class one time a few years ago and I
loved it—I steal it and use it now—he explains that in engineering problems, the solutions have
a name. They have a first name and a last name. The first name is the value and the last name is
the unit. I just thought that was really cool. I like that because I think it really emphasizes that
both are equally important and one depends on the other. You have to understand what units
you’re dealing with to make some sense of the numbers.
I think that was a big driver for what we wanted to introduce [to] civil engineering students
coming into the major—how important that was. Ultimately, that evolved [into recognizing
that] in order to do that, you almost have to think beforehand what would make sense for an
answer to a problem. I think what we realized is [that] we never give our students a chance to
do that [before].
We never ask them in advance, “Okay, here’s a problem. What do you think would be a
reasonable answer to this? Don’t calculate anything. Look at it. Try to dissect it. What do you
think is going to be a reasonable answer? What’s the order of magnitude? Are you going to be
in the thousands? Are you going to be in the thousandths? Like, where are you?” It’s amazing
when you start really stepping back and asking students these questions, how far off some of
them can be. You realize, “Oh my gosh, I really need to help these students understand. Get a
sense of what they’re doing.”
ULTIMATE BOON: WITH FUNDING COMES CHANGE
I think we were fortunate to get that grant funded to develop that course. I think we offered it
for the first time in 2007, if I remember correctly. That changed everything for me. Getting that
grant, having to develop this novel course in a very unique, problem-based learning teaching style
[was a significant opportunity. Additionally], working on that proposal with my co-investigators
gave me a new appreciation for collaboration and what that could really mean.
I don’t think we even knew that at the time to be honest. I don’t know that we even used
that terminology in the proposal, but that’s essentially what we were doing. [We were giving]
students these realistic engineering problems and asking them to estimate a solution, knowing that
they had nothing other than any prior knowledge they might have had about how [to approach
the problem] and solve it. Our purpose was [to] get them to think about the problem, what are
the factors, and what’s even important. One of the problems I developed for the class was to
introduce students to what geotechnical engineering is within civil engineering. I wanted to pick
something that would resonate with the students. Again, this was ’07. I picked a problem that
was set in the context of Hurricane Katrina and the levee failures, because that was very recent
at the time. I knew that was something that students would understand and were aware of.
Basically, through a long process we decided we wanted to ask the students a problem
related to the failed levee section in New Orleans. It needed to be rebuilt and we specified the
length. We said, “It’s a 100-foot long section of earthen levee that needs to be rebuilt. How
much soil do you need?” I specified in tons because we had a long discussion about assigning
units. We said, “Okay, just to keep it consistent, let’s ask for tons of soil,” which is something
that would probably be used in the field, thinking in those terms. “Okay, I didn’t give you much
other information. What do you need to figure out? First and foremost, draw me a levee. What
does it look like?”
We had a little bit of discussion about what they were, but not a lot. I wanted to see
what they visualized, [which the research team] thought was important, this whole concept
of visualization. What they saw in their mind when they thought of a levee was fascinating
stuff. You’d get some really nice two-dimensional drawings with dimensions and units as well as
some abstract looking things that I wasn’t sure what I was looking at. You’d get some that were
asymmetrical versus symmetrical. You’d even get students who would draw three-dimensional
drawings. Then you’d start to realize, “Wow. Now, you can start asking questions about why did
you think about that or why did you choose this?” As part of that, getting to think about, “Okay,
so you have a shape. How do you figure out the weight?” Keep in mind this was for freshmen.
Really, we were just trying to introduce them to what civil engineering was and what we felt like
was the process of engineering problem solving.
The whole purpose of that course was to have opportunities for the students to explore and
refine their answers. Clearly, one of the things we realized very quickly is students can’t really
visualize a ton of anything. When you ask them about this fairly large-magnitude unit, it’s hard for them
to think about. The other thing that was very interesting was dealing with earth and
a levee. One of the things I wanted them to learn from this whole exercise was, “Okay, in civil
engineering, when you’re a geotechnical engineer, you work with building and designing earth
structures using different types of soil.” There are different types of soil from an engineering
perspective. There are gravels, there are sands, there are silts, there are clays, which are primarily what we
work with. The [different types of soil] have different properties, which is ultimately what they
would learn later on as a junior, but I always wanted to expose them to that.
What was interesting was how many students were like, “Oh yeah, soil that’s like the stuff
in your yard with the grass, the top.” They thought that you’re building this whole levee out of
topsoil, organics, and grass. It hadn’t occurred to me that that’s what a lot of students would
think. You don’t want to work with organics at all, so it was perfect. It ultimately [presented
the] opportunity for them to figure out, and me to reinforce, why you would never want to build
using these kinds of soils. They actually ended up working with some soils. I basically brought
the four types of soils: the gravels, sands, silts, and clays. They would work with them and learn
what the density was. They also learned what affected the density, which was the other thing I
wanted them to figure out.
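For readers who want to see the kind of back-of-envelope estimate the exercise asks for, the short Python sketch below works through one possible answer. The levee geometry (height, crest width, side slopes) and the soil unit weight are illustrative assumptions chosen only for this example; they are not values from the course or from the original problem statement.

# Hypothetical back-of-envelope estimate for the levee rebuild problem.
# Geometry and unit weight are assumptions for illustration only.
height_ft = 20.0        # assumed levee height
crest_width_ft = 10.0   # assumed crest (top) width
side_slope = 3.0        # assumed 3H:1V side slopes (horizontal run per foot of rise)
length_ft = 100.0       # length given in the problem statement

base_width_ft = crest_width_ft + 2 * side_slope * height_ft              # 130 ft
cross_section_ft2 = 0.5 * (crest_width_ft + base_width_ft) * height_ft   # trapezoid area, 1,400 ft^2
volume_ft3 = cross_section_ft2 * length_ft                               # 140,000 ft^3

unit_weight_pcf = 120.0  # assumed compacted fill, in pounds per cubic foot
weight_tons = volume_ft3 * unit_weight_pcf / 2000.0                      # 2,000 lb per short ton

print(f"Estimated soil needed: about {weight_tons:,.0f} tons")           # roughly 8,000-9,000 tons

The point of a sketch like this is not the particular numbers but the order of magnitude: with any plausible cross-section and soil density, the answer lands in the thousands of tons, not in the ones or the millions.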
APOTHEOSIS: ENCOURAGING CRITICAL THINKING
One of the most important things for them to walk away with for me conceptually is that when
you work with soil, soil density changes. It’s not a constant. That for me was a huge change. From
the activities and trying to get students engaged, I realized that capturing what they thought they
knew, like actually getting that written on paper, was so important. Until you get them to write
the stuff down, you have no idea. It enabled me to look at it and see what they thought, giving
me the opportunity to correct, or better yet, giving them the opportunity to self correct, which
[was the intent behind] how we set up a lot of those exercises. From that point forward, it made
me realize that doing worksheets and those sorts of things in class, not for grading purposes,
[but for understanding], was critical. It’s participation. I make that clear. “Do your best. Provide
me your best set of answers. That’s what I’m interested in. This is not for a grade,” which in that
freshman course, it actually works reasonably well because they’re first year students and they
don’t know what to expect. We tell them, “Hey, this is how this class goes.” They’re okay, they’re
on board with it.
Ultimately, we didn’t grade them until the very end, when they documented this whole process of
what they’d learned. We essentially did [what ended up being] a report. “Write a report. Give
me a final answer. Explain to me why you think that’s a good answer.” What we also did in
those reports, which I really liked, was have them explain how their answer changed during that
time period. Of course, some of them were better at this than others, but we forced them to
say, “Okay, the first day I guessed it was one ton and then I realized that’s not possible because
I was assuming this, that or the other.” Whatever they thought they knew. “Now, I know a
better answer is, let’s say, 5,000 tons.” We wanted to force them to step back and think about
what they’d actually learned through that process and document it. I think that process has
become infused in basically every class that I teach now: active learning, worksheets to document
[the learning experience], and in-class
problem solving.
ULTIMATE BOON: TEACHING AWARD AND
REFLECTING ON MY JOURNEY
[My teaching has been recognized by my academic peers]. I was nominated for the Mungo
Undergraduate Teaching Award, USC’s highest level award for teaching, and I had to prepare
a statement. I spent a lot of time thinking about that statement [for the Mungo Undergraduate
Teaching Award] and [decided that I] wanted to share [about] why I got into teaching.
Maybe this was subconscious until that time, but it made me realize that, I think in many
ways, I’m a perfect blend of my parents because I got the technical side from my dad and my mom
helped people. I had no teachers necessarily in the family, but I think those two characteristics
brought together sort of described me, because I do look at teaching as more than that. I don’t
necessarily want to be a good teacher. I want to be thought of as a good mentor. As a person
who’s there to help students in this whole process of being an engineering student.
I’ve been working more in the past few years with K-12 teachers and students, and part
of the reason for that is I want to share what I think engineering is, because that certainly has
changed over time. You know what? Engineering is helping people. That’s what I think of it as.
We solve problems for [the] purpose [of] trying to help come up with solutions to problems
that impact society. That’s kind of by definition what we do.
MASTER OF TWO WORLDS: COLLABORATION IS KEY
I’m tenured, [but I value collaboration]. I almost don’t want to write any proposal that isn’t a
true collaborative effort with people that I know are just as bought into the research idea as I
am. [My colleagues and I] always put out a proposal I’m very pleased with, whether it’s funded
or not. I feel like we do a nice job. Man, I love that. It’s interesting, because for me, that is so
intellectually stimulating, which is what I think most people want out of being in academia. You
have that freedom, that opportunity to choose how you want to be intellectually engaged. This
stuff is fascinating to me because I feel like I don’t know enough about it, so I want to explore it
more. Having people to share those thoughts and not having a single concern whatsoever that
someone is going to say, “That’s a stupid idea.” Someone may say, “I don’t think that’s a good
idea and here’s why…” I’m fine with that.
To be able to do this type of work on pedagogical strategies and curriculum change, [you
want to be a force]. I cannot do this solo. What’s the impact going to be if I do one thing in my
class? The collaboration with the seven or eight faculty I’ve worked with has been the best part.
It makes a huge difference for me personally to know that there’s a decent size group of faculty
that I brought into writing these kinds of proposals, to doing this kind of work, to [recognizing
its] meaning and potential impact. That goes a long way.
I feel like I am constantly learning about learning. I’m at the point where I’m gaining
some knowledge about student learning, how students learn, what’s effective for them, but not
nearly enough. I definitely feel like the next step for me is getting a better handle on how to
assess that—how to really determine that what I’m doing is effective in the learning process for
the student.
We’ve been very successful in developing the classes in the way we intended to and have
collected data on that. Now, I feel like we have so much data that while we’ve looked at some of
it, I need a better sense of how to extract from it solid evidence of what worked and what
did not. It is interesting. I question myself all the time now when I do something in class.
I ask, “I wonder how effective that was...” I still have tests to see what concepts they’ve learned
or what kinds of problems they can solve. That’s all good, but I want to know more about the
process the students go through. I think that’s where I want to move forward—getting a better
handle on how to do those things, which I think should make me, and others, a better instructor.
CHAPTER 9
Teaching with Advocacy: Buffing the Talent to Break the Mold of the Monolithic Engineer
Matthew Fuentes
Narrative constructed by Brooke Coley
[I want to] empower others to do those things. I don’t need my face on the cover of X, Y, Z.
I would like my students’ faces to be there, wherever they are, and to be representative of
those talents that are really sitting around, and not being polished, if you will. Yeah.
So, I kind of think of faculty as more of park rangers, and this information as just kind
of like the parks. It’s not that faculty are less important. It’s just this idea that there’s this
huge landscape of information that students have to navigate. They can consume it anyway
they want, but it’s really damn nice sometimes to have a park ranger around to ask those
questions, to make those connections, to see stuff that maybe you would not have really looked
for or at before. I think that’s really the role of the faculty member, is that guide, that park
ranger if you will, to this information.
Matthew Fuentes is an Engineering Faculty member at Everett Community College.
THE CALL TO ADVENTURE
When I was an undergrad, I was hyper-focused. I wanted to be an aerospace engineer. Nothing
was going to stop that. This was what I wanted to do. It’s sort of like when I started exploring
in grad school, which is completely the reverse of what most people do, it was, oh, maybe I don’t
want to do this, not that I don’t want to do this, but I am interested in a lot of things now. So, I
started exploring more, taking more computer science stuff, taking some more advanced math,
and some algorithms, and just kind of going all over the place.
I think I got the hook for teaching when I was a tutor back in undergrad. I was a math
tutor, [for the] math department, and that’s where I started to really focus more on student-
centered learning than just the faculty- or teacher-centered learning paradigm. That more one-
on-one, walking you through the process. I’m a pretty social person, so I think that [the] social
aspect of it—the human aspect of it—was what really kind of struck me at the time.
I guess fast forward a little bit. When I started teaching, I started teaching actually in
graduate school when I was working on my Ph.D. I’m a Ph.D. dropout by the way, so you don’t
have to call me doctor or anything. I guess I realized I didn’t know what I wanted to do, which
probably scared me a lot. Because at the time I guess I wouldn’t admit to myself that that was
true, and I wouldn’t admit that I wanted to change. The reason I left was not because I didn’t
feel supported. Actually, that’s kind of an odd thing. I definitely felt supported. The reason I left
was because I guess I finally recognized that I wasn’t there because I wanted to do this particular
research. I was there for the glitz and the glamour, and I didn’t know exactly what it was that I
wanted to focus on at that moment. And I really liked teaching, so why was I hyper focused on
this if I wanted to teach?
I made the choice to move out West with my wife and just quit everything. I thought,
“hmmm, I’ll take a wild risk.” I had never really taken a risk like that, so I’ll give this up. Of
course, friends and family were like, “What the hell is wrong with you? You’re giving up your RA
[Research Assistant] fellowship to go live in an apartment in Seattle?” [I was] like, “Yeah, but
my wife will work at Microsoft, so we’re fine, and I get to plan the wedding. That sounds fun.”
It was nice. It was nice to change. I think it was really good for me to make that change. I think
it was hard at first for me to make that change, because I had always been the hyper-focused,
motivated ... I don’t know, win at all costs comes to mind. It’s like, you know, the publish-as-many-
papers kind of person, to what am I doing again?
[I] started to reflect and [tried] to recognize what motivated me and made me really happy
and appreciate things. I think it made me a better person doing that. One, it’s fun to say that
I’m a dropout from college. It at least starts an interesting conversation, but I think it helps me
sort of make peace with the fact that I didn’t need to be in that role to do what I loved to do. I
didn’t need to be a tier-one research faculty leading a research team, especially since I thought
there were a lot of really talented people who didn’t do that, or really didn’t get that opportunity.
I started to feel like, well I wonder how much of my success also is because people trust me
because I look like a typical engineer?
SUPERNATURAL AID: LEARNING TO TEACH IN A
STUDENT-CENTERED WAY
So, when I was working on my Ph.D. I really liked the teaching aspect, and my advisor at the
time, he was pretty big into engineering and pedagogy. In particular, [he was big on] bringing
things into the classroom to make it more, I guess, student-centered. More hands-on was his
real approach. What was kind of interesting about that experience was I was his student at one
point in time, and then I was kind of his colleague at some point in time where I was in the
classroom with him. I kind of saw both sides to it, which was kind of a cool experience. The
class that I got him for, and that he was really interested in, was Mechanics of Materials. [At]
some places they call it Mechanics of Solids or Solid Mechanics. It’s kind of, I’d say, one of the
more visual of the engineering courses. It’s pretty hands-on. It’s pretty visual. [And], it’s a pretty
old part of engineering. His take was, “Well, why are we teaching such a hands-off methodology,
you know, this lecture based [approach]? Let’s talk about these problems, and let’s really bring
in some tangibility to this.” That’s probably where I started to really switch [my approach]. I not
only liked teaching from the standpoint of bringing stuff into the students, but I liked learning
in that environment as well.
I think the way that I have approached it is more along the lines of, in today’s world,
faculty aren’t this big ball of knowledge anymore. You can find that information anywhere in
the world. It’s accessible by anyone. So, I kind of think of faculty as more of park rangers, and
this information as just kind of like the parks. It’s not that faculty are less important. It’s just
this idea that there’s this huge landscape of information that students have to navigate. They can
consume it anyway they want, but it’s really damn nice sometimes to have a park ranger around
to ask those questions, to make those connections, to see stuff that maybe you would not have
really looked for or at before. I think that’s really the role of the faculty member, is that guide,
that park ranger if you will, to this information. I think making them more self-sufficient and
self-reliant is important for when they get out into the working world and get to do their own
things. They become lifelong learners.
THE CALL TO ADVENTURE: ASPIRING TO TEACH
STUDENTS WHO ARE LESS PRIVILEGED
So, I started teaching Physics at a community college. Why did I start teaching at a commu-
nity college? I think for me one of the other big light bulb moments was recognizing that not
everybody’s educational experience was smooth. Not everybody had the same opportunities as I
did—a middle-class white man going into engineering—people kind of expected that of me in
some ways. [As an example], I met a guy in the computer lab in the middle of the night, [who
would later end] up becoming my best friend, and he was struggling with some stuff, with some
programming. I ended up helping him out and chit-chatting. Long story short, what I sort of
came to realize from him was [after spending] three years at a community college, he transferred
to the University of Tennessee, where I was, to finish his aerospace degree. [He] now works for
NASA. The part of the story that really stuck out to me was when he started, he was an auto
mechanic and he told the guys in his shop, “I’m [going to] work for NASA as a rocket scien-
tist.” Of course, they thought he was a little crazy. What I really appreciate about his story was
starting from essentially [the] pre-college math level, and then becoming essentially an active
rocket scientist at NASA. That’s what he does now, and we actually collaborate. And that’s the
kind of opportunity I wanted everyone to have.
And so, when I realized how much of a difference faculty really made in his life and in him
transitioning from that world into university, I sort of changed my focus from going to more of a
four-year school to, all right, what can I do to be in this, I’ll call it the transitional college—the
open enrollment colleges? Universities know [their] baseline, student-wise, because you have
entrance and admissions processes to go through. [But], what about all those other people that
want to get to that point?
That’s where I really—I kind of [decided], ‘You know what, I should really try out this
community college thing,’ and so I started teaching at a small school. [I] started teaching physics
at Cascadia Community College. I think they hired me mostly because they had an emergency
fill. Let’s see, I was hired a week before classes started, and they were courses that I was pretty
familiar with, so I was ready to go. It was a pretty easy thing for me to start up. I think it was
really in Cascadia that I guess I experimented a lot with different styles of teaching, and moving
into the, how can I best empower my students to be these self-centered learners? How can I get
my students to be empowered?
SUPERNATURAL AID: A MENTOR WHO HELPED
ENCOURAGE EXPERIMENTING EDUCATIONALLY
What I really liked about the college that I started in was that Cheryl Barnes (pseudonym)—the
faculty member who recommended me—was actually the person who hired me. She’s the type
of person that’s sort of like, “Yeah, try out whatever you want, go for whatever…experiment.”
She’s very much into experimenting educationally. I think [her influence] and that experience
just really helped me grow into an I-can-do-whatever-I-want [believer], teaching wise. Let me
try some stuff out. Let’s see how students react. What I really liked about what Cheryl did was
she used Physics by Inquiry, I think that’s the name of the little textbooks. It’s really this motto
of getting students to, in some ways, answer their own questions. You can ask probing questions,
get them to work in groups, and get them to sort of discover all of these nuanced ideas to make
the “aha” connections. I think growing up academically in that teaching system with her gave
me the framework to start branching out from that. Because that [was] physics, but engineering
has been very traditional. And so, it brought out more of the, “huh, well I wonder what kind of
things actually do work in engineering?”
STORIES FROM MY CLASS: GOING OUTSIDE TO BRING
OUT THE INQUISITIVE MIND
And so, I started doing strange things. When I say strange, [an example is], I took the class
outside. I taught Mechanics of Materials in the spring and it wasn’t raining here. So, I went
outside, I brought some sidewalk chalk out, and we had lecture on the sidewalk. Part of the
fun was we got [to be] outside. But, [also], there was some interesting spirit of literally walking
through the steps, because you could write down a problem, have it flow, and really make students
walk through the steps of a process.
They couldn’t fall asleep standing up, so that’s a good thing. It actually, what I found was,
it somewhat brought out this, I don’t want to say childlike experience, but kind of the inquisitive
nature of the child mind, like this “oh, huh, I wonder what would happen with this,” when we
did that kind of experiment. It took them out of the classroom, “I’m just going to be absorbing
knowledge brain,” to “huh, okay yeah I can do this!” That was kind of one [part]. I think another
part of that was bringing engineering from an enclosed room [to] out in the open, too. Maybe
in some ways to socialize engineering and engineers.
My real hope was that passersby, which sometimes happened, would just kind of stop,
and listen, and be interested in engineering, and ask students questions, and the students would
answer questions. That’s my utopia that didn’t quite take shape, but it did have some strange
impact in that other people would notice, and they’d say, “Oh, that looks complicated.” Then it
would be an opportunity for me to say, “Well, you know, if you take this path and learn these
things, it’s interesting. It might be complex, but you can, you can totally learn this, too.”
I don’t have any data to say it totally reshaped all of these mindsets. But, I did like the
camaraderie it created with my students. I did notice that they felt a whole lot more comfortable
when they saw me doing these kinds of strange things to ask questions—to ask questions maybe
they had been afraid to ask before.
ROAD OF TRIALS: FINDING A FACULTY POSITION AT A
PLACE WHERE I CAN MAKE A DIFFERENCE
I was associate faculty at Cascadia for over three years, and they didn’t really have the funding to
put in a new faculty position. We were right in the financial crisis of 2000 something or other. I
don’t remember the date now. The state had frozen the budget, and so they had a hiring freeze
for a few years. By the time they actually did have a position open, there was a position open at
Everett, which was just north of us. I started teaching at Everett again, hmmm, this is a theme. I
got a phone call over the weekend and a faculty member was pretty ill in Everett, and I had made
some pretty good contact with the faculty up at Everett, and they were like, “Uh, so Matthew,
we know that you teach these particular classes down at Cascadia. Is there any way you could
come fill in midway through a quarter on a class? Because we trust that you could do that.” I’m
like, “Wow, I appreciate that you trust me.” And, “Sure, why not?” I did, and it worked out. One
of the reasons that I left Cascadia and went to Everett was really the students. I felt like they
were—the word raw comes to mind, or scrappy.
So, Everett in many ways is a pretty rough city. Maybe in the news you’ve seen that the city
of Everett actually sued the drug company over the opioid epidemic, so we have a really nasty
epidemic in Everett, and really in Snohomish County, which is the county that this particular
school lives in. One of the kinds of interesting things is almost all [of] the students there start
out in pre-college math, pre-college English, so not very prepared students. I guess in many
ways they are the students that represented my best friend, the kid from Flint. It really kind of
felt like that same group of students. The rough and tumble group as I call it sometimes.
I do feel like in many ways the program was a diamond in the rough, too—the engineering
program out there. It was one faculty member when I started. No, I guess it was two, two faculty
members when I started, to now we have five tenure-track faculty. Some of us are tenured, some
of the others are still tenure track. And four associate faculty, so we’re a really big department
now, and just becoming this kind of center of hands-on engineering education.
ROAD OF TRIALS: BECOMING TRANSPARENT ABOUT
WHO I AM
In one of our intro engineering classes—so one of the things that my colleague and I did when
we first joined Everett was—we completely changed the first-year experience. We made it a lot
more hands-on, a lot more tiered, so that we’re taking you from wherever you’re at and getting
you up to sophomore-level engineering courses, in theory of course. But, not just we’re going to
spray you with a bunch of knowledge and expect you to grow from there.
I appreciated the challenge. It definitely pushed my teaching limits I would say, going to
Everett. I don’t want to say the students were less receptive to my quirkiness, but they were, I
guess, more suspicious of it. Who is this weird guy? It felt like it took longer for students to
buy into sort of my oddball schemes. Yeah, and I think I had to adapt a little bit, too. I had to
better understand that a big chunk of my students in Everett are on the verge of homelessness
every day. I guess the point is I had to academically grow, and change with that group, and
meet whatever that need was, but still maintain this idea of empowering, and student-centered
learning, because it felt like that was one of those things that could help a lot of these students
out. I think a lot of students at Everett just, I don’t know if they’re not good at reading between
the lines, or if they’re not ... They don’t know, so it’s really good for you just to tell them what it
is you’re thinking, and why you’re doing stuff. Because a lot of them don’t have experience with
college, or really very good experiences with the education system at all. I guess I became a lot
more honest and open about who I was.
STORIES FROM MY CLASS: HELPING STUDENTS
OVERCOME IMPOSTER SYNDROME AND BECOME MORE
ENGAGED
I usually talk, so that brings me to the whole conversation of, “Hey what I expect in the classroom
is to have conversations. If something comes up, you need to talk to me, we need to communicate.
We’re a team here. I know that all kinds of things happen throughout the quarter. The big thing
that you can do is communicate with me. Let me know if something’s going on, this, that, or
the other because I’m here to help you.” I give them the speech about, this class is all about you
learning something new, so it requires that you ask questions, don’t be afraid to ask questions.
I have a little stamp card that my wife and I came up with, a system. She’s a professor as well.
So, we came up with this little participation stamp card, and I give the spiel to the students like,
“Oh, you need three stamps on this card.” It looks like a frequent coffee customer card that I put
pictures of bunnies or my pets on, so they already know I’m a little different,
right, when I’m giving them these. So, I give them the card and I say, you need three by the
end of the quarter to earn your participation points, and anything above the three you can earn
potentially bonus points on assignments. I explain to them I really want a way to incentivize
you participating. It’s good for me and for them because it’s something tangible for the students
that they can sort of validate themselves that they participated—that they are doing more stuff.
It’s also a way for me to point at something physical to say, “Hey, yeah, um, you want, you want
me to go out of my way to help you with this, yet you really haven’t shown a lot of effort in the
course. You know, what gives here?” And now, I have sort of a physical record of what you’ve
been doing or what you haven’t been doing.
I think they appreciate maybe that I’m just honest with who I am. At least that’s my hope,
that they recognize that you don’t have to be afraid to be an individual, and you don’t have to be
afraid to learn. There’s no stereotypical type of person that needs to be in this classroom.
So, if it comes around to maybe they have imposter syndrome, or something like
that, then we can have a conversation. I think it’s that aspect and those conversations—really
directed conversations with my individual students—that have helped me transition, I would
say. I think even if I were to go back to Cascadia and teach again, I would probably take all
of that new stuff, and directly apply it to Cascadia as well. Yeah. Now I think I’m a little bit
more focused on what I want as outcomes, and what I want for them in terms of helping them
develop.
STORIES FROM MY CLASS: TEACHING THROUGH
MAKING AND FAILURE
A good example of things that I think work pretty well is the last of our series of intro classes;
it’s really kind of a maker’s class, so all the students get a kit of sensors, an Arduino, and we have
a 3D printer in the room. We set up the teams to be essentially, I would say, startups. They build
two prototypes throughout the course, so two projects, and along with those two prototypes they
learn how to program. They learn some basic team building skills. They learn how to problem
solve. What I like about that class and what I think works pretty well, is giving students this
freedom to search, try, find, explore, experiment, and have things fail. Failure is totally an option,
and I think a great way for them to learn [is with awareness of] how hard sometimes things are
to implement. Of course, you don’t want it to be so much of a failure they learn nothing and
they sort of unlearn everything. But, if I can, in those projects, get them to at least keep trying
stuff—and to get stuff to work pretty well, and to have some failures along the way and fix those
failures—that’s the success.
ROAD OF TRIALS: INTRODUCING Simulink® BEFORE IT
HAD BEEN DEBUGGED
One of the things that was a total disaster in that class early on was we had this—well there were
a couple of disasters that happened. One, well, I think it was my doing here. I thought we were
going to do this awesome thing. We were going to teach them MATLAB®, but I was going to
teach them how to use Simulink®, and we were going to have the hardware software interaction.
It was going to be amazing. [Instead] it was a disaster. Because Simulink®, at the time, had just
unlocked the capabilities for Arduino, and I should’ve talked to the company beforehand, before
making that decision to have students go down that route. There had to be a lot more debugging
that was too advanced for them and that they couldn’t really handle with the tools that we gave them. It
wasn’t such a seamless experience for what we were trying to have them do. I guess early on it
was a reminder to me that something that was important for students in the early part of their
career was having something that was challenging, but didn’t sort of make it seem like it was
impossible. In this, I think, early iteration, my guess is that some students sort of felt like some of
the robotics stuff was impossible, right? Unattainable. That was not the message I wanted them
to get. So, like, crap, I made this worse, great. I tried to contextualize it and say that, “This class
is an alpha prototype,” you know at the time, “and we’re going to do some things that probably
won’t work, and let’s just play. Let’s see how it goes.” I tried to remind them like, “Yeah this was
terrible. I’m sorry.” I think they got okay with that. They got through that.
ROAD OF TRIALS: UNCOVERING BIASES AND
EXPECTATIONS AND A NEED FOR ENGINEERING TO
CHANGE, CULTURALLY
I used to get interesting feedback all [of] the time in my course reviews. It was something like,
Matthew is really approachable, blah, blah, blah, awesome. His tests were crazy hard. He expects
a lot. What I was getting in this feedback was students felt blindsided by the fact that I’m a really relaxed
person face-to-face, but not technical-competency-wise. That was hard for me to deal with,
especially at first. I was like, wait, being nice and being technically competent are mutually exclusive? I
don’t understand. It started to click a little bit more I think when I started seeing what my wife’s
experiences were. Interestingly enough, we’ve taught the same class before, in the same quarter.
She would get dramatically different [evaluations]. I think one of the interesting things that
happened was seeing how her students, I guess, expected her to be nice, like personality wise
[because she was a woman].
Seeing how students reacted to her, because she’s definitely firmer than I am, I guess what
I noticed was, if I were firm or looking at some of my coworkers who are pretty, I don’t want to
say they’re rough, but they’re very strict—they have very strict schedules, they have a very strict
classroom, style-wise—students never complain[ed] about it. But, when someone like my wife
is strict, she’s not really that strict, but she runs the classroom in a different way than maybe I
do, the students react, I’ll say, quite negatively to that personality type. I guess what kind
of started clicking was students’ expectations about things like the gender role of the faculty
member in charge of the classroom.
When I back up to that mismatch of me face-to-face versus the technical competency or
capabilities on exams—back to those first comments—I guess I started to recognize that it was
some kind of me peering into their biases or expectations. I don’t have a good answer for that,
but I think it helped. Those comments happened early in my tenure process at Everett, and I
think that’s where I started to become a lot more upfront with who I am at the beginning of the
course, and lay it out, and not be afraid to talk about things if it arises. It’s just that I feel like
the world is changing, and engineering needs to change, culturally.
ROAD OF TRIALS: EXPERIENCING MARGINALIZATION
THROUGH A LAST NAME CHANGE AND BECOMING AN
ADVOCATE
I can’t lie, having seen the world through my wife’s eyes a lot more, she was at Microsoft for five
years, and I saw indirectly how people reacted to her. I guess it sort of broke down my utopian
vision of everybody lives with kitty cats and unicorns, and everything works great, right? [I went
from thinking] there [are] no real problems, to wow, there’s a lot more to this than I ever realized.
There’s a lot more to what people deal with on a day-to-day basis then I realized. I think the
other part of that picture, of changing our expectations for what an engineer looks like, is even a
small part of what I experienced in changing my last name. It wasn’t until I changed my last name
that I guess I started to feel it—a couple of experiences. It’s not uncommon that people ask where I’m
from. I’ve had people say, “Wow, you’re really white.” Like, no one has ever commented on my
race. What the hell? It was such, I guess, a shock. I really didn’t expect that. For 30-plus years
of my life, I had never had any comments like that. Now to suddenly, by a switch like that, [be
exposed to people], I could see why it was annoying and why it was qualifying. It was sort of that
kicking in like, well now, are you going to qualify that? I mean like now do I become a Hispanic
engineer instead of an engineer?
I think it was really eye opening for me. It was kind of like trying on something new. It’s
like putting on a new face almost. I never anticipated having those experiences in that way, and
being so directed. That’s what was so shocking. But I think honestly looking at myself, I don’t
think I fully understood it until I experienced it. I can take that with the knowledge that
I have of going through life, I guess, in a different skin, if you will; I can parse that now because
I kind of know better. But, I can’t imagine going through the educational system and having to
deal with that and learning new things all at the same time. I guess to bring it all back to the
classroom environment, it’s that kind of conversation that I want to have happen in engineering
classrooms, because I think those are important conversations along with the technical. Now,
I know that you can’t spend all the time in the classroom talking about these experiences, but
it’s important for students to at least be aware. For a long time, I struggled with, “well, all right
yeah, but what do I do?” I really thank my wife for saying, “Yeah, but you have a unique role as
the stereotypical white male in engineering to be able to stand up and do something about that,
right? You can be an advocate.”
I’ve tried to do a lot of soul searching on [diversity and people assuming everyone else
is like them]. Quite frankly, I was thinking back to one of my roommates in college. He was
also an engineer. He ended up quitting. [He is] African American, he was my neighbor at one
point in time and we were really good friends. Now I go back to that a lot and think why
was he not successful? What was it? I really wish I would have just asked him the provocative
questions like, “Hey man, what’s going on? You feel alright? Something going on?” Maybe I
just am forever guilty. I guess I still always come back to I think people just don’t have enough
real conversations. They don’t ask about how certain actions or things might make someone
uncomfortable or something of the sort.
I don’t have an answer for changing [that]. But, I at least want to have those conversations
in the classroom. So, it may be over the course of my lifetime the needle is moved, right? If we
make some progress, then we’re stepping in the right direction. Maybe it’s some part of me like,
“Oh, he took his wife’s last name, hmmm, maybe I should think about women differently, or
relationships differently. He’s interested in other cultures…” You know, maybe some part of my
spectrum they’ll at least take away from the classroom and maybe change a little bit, I’m hoping.
Plus, being technically competent.
APOTHEOSIS: EMPOWERING STUDENTS
I recently met with a few of my former students from those—this is kind of an interesting
aside—some of my former students that I had during that timeframe at Cascadia. And quite
a few of them [now] have advanced degrees. One of them is finishing up his Ph.D. in civil
engineering. Another one I just spoke to a couple weeks ago, he finished his Master’s in four
quarters in mechanical engineering. I think the great thing about these students, in particular,
was that they were not very engaged. They didn’t know what they wanted. They were pretty
sloppy with their work. If you looked at them as a snapshot in a particular quarter you would
say, “Yeah, they’re not going to be successful.” What I really liked was that they ... I don’t know
how much a part I was in this transition, but at least I appreciate Cascadia being a part of this
transition in them to let’s say develop. Yeah, so I think that was just kind of looking back and
seeing how I was as an educator, and seeing where those students ended up, yeah it feels good.
[I want to] empower others to do those things. I don’t need my face on the cover of X,
Y, Z. I would like my students’ faces to be there, wherever they are, and to be representative of
those talents that are really sitting around, and not being polished, if you will. Yeah.
CHAPTER 10
Conclusion and Lessons Learned
Nadia Kellam
I hope you have enjoyed reading these diverse stories of engineering faculty embracing active
learning strategies. To me, these stories highlight the complexities inherent
in stories of change. Of these eight stories, there was not a single one that was simple and
straightforward. This was part of the impetus for sharing these stories: people are not born
good teachers; it takes work to become one. While these stories show the difficulties
in becoming exemplary engineering educators, they also highlight the benefits of changing our
ways of teaching.
As I was reading through the chapters, I noted some emergent takeaways that strongly
resonated with me. This is not meant to be a comprehensive list of lessons learned. In fact, I
am very interested in lessons that struck you as you read through the stories and anticipate that
people at different points will appreciate varied aspects of these stories.
LESSON 1: IMPORTANCE OF HAVING A COMMUNITY
A lesson that appeared throughout many of the narratives is the importance of having a com-
munity throughout the process of improving your teaching. Sara reflected explicitly on this in
her narrative when she acknowledged the support of her teaching-focused group of all women
where she was able to hear the accounts of others suffering through the same challenges and
feeling assured that, “Oh, it’s not just me, I’m not alone.” This larger network of women not only
enabled her to learn content that would aid her in her pedagogical approach, but also served as
a critical body of support that would empower her to “get through that” period of her journey.
Charlie also described the importance of collaboration and community in his journey.
Much of his personal satisfaction came from, for example, writing collaborative proposals with
his colleagues in support of their teaching. He found that to be intellectually stimulating. He
also discussed that making changes in a silo would not likely prove effective in achieving the
scale of changes possible within an engineering program; Charlie wanted to have impact and
learned early on that to do so required a community of like-minded individuals.
How can we all incorporate this lesson into our teaching journeys? One way is to find
others who are also interested in improving their teaching. You may find some people like this
within your department or program, or you may have to look more broadly to find others to
work with so that you can inspire each other to continue to improve your teaching.
LESSON 2: THE POWER OF REFLECTION IN IMPROVING
OUR COURSES
Sometimes as faculty we get so busy that we do not take a step back to reflect. As we all embark
on our journeys in becoming better engineering educators, we can learn a lot through reflecting
on our goals for a class, how the class went, how the students responded to the class, and ways
to improve it in future semesters. Donna recommended taking time at the
end of the semester to reflect on what worked well, what did not work so well, how you have
progressed toward your goals, and, finally, what you will change moving forward. She describes
this as keeping “the spirit of innovation alive.” Most engineering programs have ABET require-
ments for continuous improvement, and this can be a good opportunity to reflect on a class and
develop goals and ideas for future iterations of the same course.
How can we all incorporate reflection into our courses? There are many ways this can
be achieved, both individually and collaboratively. One way is to write up your reflection and
include it with your course files so that you can read through it when planning for the class in
the future. Another idea is to have a discussion with peers in your community so that you can
spend time sharing experiences from your class, what went well, and what did not go so well.
This way your community can help you brainstorm ideas for improving the course or be able to
anticipate areas they may wish to invest more energy into if they are to offer that same course
in the future. While it is always helpful to reflect at the end of a course, it can also be helpful to
reflect as the course is happening so that minor corrections and improvements can be made in
real-time throughout the semester. Deploying a course evaluation while the course is happening
can help provide some input from the students for reflecting on the course and identifying some
opportunities for improvement. Another option is to ask a colleague to observe your class so
that you can have another person’s perspective on the strengths of your classes and areas for
improvement.
LESSON 3: TAKE IT SLOW
In many of the stories, the engineering educators discussed taking things slowly and not chang-
ing everything at once. In Donna’s story, she made some significant changes to her thermody-
namics course over 10 years that made it “unrecognizable from the original one.” Even though
there were large changes over a longer time period, she explains that she “never completely
overhauled the class.” She would make one or two changes each semester. She explains that by
changing one thing at a time, it did not become so overwhelming. By taking a slower pace, you
are probably more likely to continue on the journey of changing your teaching. If you change
everything at once, you could easily become overwhelmed and the teaching “experiment” could
end.
How can we take it slower in introducing innovations? When planning our classes, we
can think about our vision for the class and one or two things that we can do to help realize that
vision. Through the stories, we learned that starting with one small change can initiate a series
of beneficial changes. What are the one or two changes that you can make to your class next
semester? How can you learn from those changes for future iterations of the course?
LESSON 4: IMPROVING TEACHING AND LEARNING IS A
LOT OF WORK, BUT IT IS FULFILLING
Many of the educators highlighted in this book admit that their journeys of becoming better educa-
tors are a lot of work. In particular, Chris, Fernanda, and Brad describe their journeys as difficult.
While they felt that this journey was a lot of work, there was consensus around the idea that it
was worth it. For example, Fernanda talks about how it would be a lot easier to teach directly
from a textbook, but for her it would not be fulfilling. She explains that if she took this easy
route, she would find herself frustrated and not happy in her position. She also talks about how
she is constantly learning herself. She explains, “Honestly, there are still days that I come out of
a class and I said, ‘I could have done that better. Next time I’ll do it better.’ It’s always a work in
progress. That keeps me motivated.”
Brad also discusses this idea that trying new things is a more difficult route, but that it is
worth it. Brad explains, “Any time you try something new, outside the box, it’s going to take a
lot more time, more time than you probably anticipate.” It will take a lot of time and energy, but
“if you stick with it…it really does pay dividends.”
What would our roles as engineering educators look like if we took the extra time and
energy to become better educators? Would it make our faculty roles more intrinsically satisfying?
These stories serve as an inspiration to try new things and continuously improve our teaching and
student learning in our classes. They also help us see the importance of getting feedback, both
formally and informally, from students to help improve the learning experience in the classroom
and also serve as extra motivation to persist through the difficult times.
LESSON 5: TRADEOFFS BETWEEN TEACHING AND
RESEARCH, OR NOT?
In the stories shared in this book, there was some tension between those who felt that they
needed to decide between being a good teacher and a good researcher and others who felt that
teaching can inform research and vice versa. Both Matthew and Charlie discuss being
intentional about finding a faculty position at an institution that valued teaching. Inherent in
these stories is the idea that there is a tradeoff between teaching and research: you are either a
good teacher or a good researcher, but not likely both. Fernanda pushes on this idea by saying
that teaching and research can be symbiotic. In other words, teaching can help give ideas for
research and research can help give ideas for teaching.
What could faculty roles become if we started integrating our teaching and research ef-
forts? How can we have teaching inform our research and our research inform our teaching?
How can we begin to see these not as roles that are pitted against one another, but rather with
each as integral parts of our roles as faculty members? If we are in administrative roles, how
can we value both teaching and research in a way that encourages faculty to integrate these two
aspects of their roles?
LESSON 6: CONSIDER AN ASSET-BASED APPROACH TO
YOUR TEACHING
An asset-based approach to teaching is one in which students are seen as having strengths and
prior knowledge when they come into a classroom [Llopart and Esteban-Guitart, 2018]. They
are seen as individuals with a myriad of experiences that will add to the class. In my experience,
this approach is critical in improving our engineering education systems because many engi-
neering faculty take a deficit-based approach, where they believe that students are not “cut-out”
for engineering and lack the prerequisite knowledge (from K-12 or prior engineering classes) to
do well in the class [Secules et al., 2018]. I have had many conversations in curriculum commit-
tees, faculty meetings, and hallways where faculty explain to me that our students are not smart
enough to be engineering students, that our students are not adequately prepared to be engi-
neering students, or that our students should be weeded out of their engineering programs. It
was refreshing to read through stories of engineering faculty who do not take this deficit-based
approach, but instead take an asset-based approach.
Fernanda discussed incorporating real-world projects into her courses. In one example,
she describes asking questions in class and one-third of the students’ hands go up immediately. The
students have learned that she is interested in their particular experiences and know that the
classroom is a space where they can share those experiences. She talks about how engaged they
become, how well they can communicate when discussing something they know so well (their
experiences), and how they seem to be more confident.
Also, Chris discussed his Site Remediation Techniques course that he transitioned to
include real-world projects. The students became very engaged in the project as the projects in-
volved real stakeholders in a real community. His students in this class were varied, with
some having substantial practical experience and weaker technical backgrounds while others
had strong technical backgrounds and limited practical experience. The students with practical
experience were encouraged to bring their experience into the classroom to help teach the stu-
dents with little practical experience about real-world engineering. This is different from many
engineering classes, where technical knowledge is valued more than practical experience.
Donna took an asset-based approach in a deliberate way as she incorporated a liberative
pedagogy in her classroom where she worked to change the power imbalance in the classroom.
She adopted several strategies to give students agency and power in the class, primarily by having
students critically reflect and critique the class.
How can we incorporate this lesson into our classrooms? For one, we can start asking
explicitly about students’ experiences as they relate to the course. In some courses it may be easy
to value students’ experiences, but in others it may appear more difficult at first glance. First,
we can think about who has power in our classrooms and consider ways of giving the students
agency within our classes. We could try having students read a handout by Foucault, or we can
think about how to integrate real-world projects that build on experiences students bring into
the classroom.
LESSON 7: EMPOWER ENGINEERING STUDENTS WHO
HAVE OTHERWISE BEEN MARGINALIZED
Another transformative lesson emerged around the ability to empower engineering students
who have traditionally been marginalized through our teaching approaches and Matthew was
an exemplar of this. Early on in his own academic pursuits, Matthew befriended a guy who had
ambitious goals with humble beginnings who would later become his best friend. His friend
started as an auto mechanic attending community college and struggling academically. However,
his desire was to land at NASA as a Rocket Scientist and with persistence and resilience, that
is exactly where he ended up. This relationship was Matthew’s first impactful exposure enabling
him to recognize that not everyone had access to the same opportunities, and yet,
where and/or how one started off on their journey did not limit how far they could climb.
Matthew realized that with the right support, encouragement and confidence, students could
have a greater potential to achieve such laudable goals—inclusive of the “raw and scrappy”
talent and not just those sought-after students who arrived prepared and ready to soar.
Matthew acknowledged that many of these students had not had much experience with
college or positive experiences with the education system. He made a conscious decision
that through his teaching, he would make a way for these students—the atypical recruits for
engineering—to be able to see themselves as engineers. Through his openness and the cama-
raderie created through his teaching approaches, the students would be empowered “to ask ques-
tions maybe they had been afraid to ask before.” The students were representations of his best
friend and he chose to use his position to support them by meeting them where they were to
help them develop and realize their fullest potential. It was important for Matthew that students
learn “there’s no stereotypical type of person that needs to be in this classroom.”
The other experience that solidified the necessity of becoming an advocate for the
marginalized engineering student occurred when Matthew changed his own last name. He took
the last name of his wife, which happened to be a name of Hispanic origin. With this change,
his identity of decades was suddenly challenged and he questioned, “I mean like now do I be-
come a Hispanic engineer instead of an engineer?” Through the microaggressions that started to
become commonplace for Matthew after changing his last name, he gained a different under-
standing of what “people deal with on a day-to-day basis” in terms of being underrepresented in
engineering, and in society, in general. This shift in experience for a “stereotypical, white male
in engineering” created a level of awareness that made Matthew want to create a space for dia-
logue, transparency, and real conversations. Reflecting on his heightened awareness and position
for advocacy, Matthew said, “But, I can’t imagine going through the educational system and having
to deal with that and learning new things all at the same time. I guess to bring it all back to the
classroom environment, it’s that kind of conversation that I want to have happen in engineering
classrooms, because I think those are important conversations along with the technical.”
How can we strive to have an awareness of the experiences of all students in our classrooms
and empower those who have been marginalized in engineering? We learn through Matthew’s
story that there are several tangible actions we can take that can have a significant impact
on the students we encounter. The most critical of these is self-reflection and acknowledgment
of our own privilege and position (i.e., race, gender, education, socioeconomic status). We
should challenge ourselves to use our position to push back against the systemic barriers facing
students every day rather than being an added barrier to their load. We can also be open about
our own identity with our students and unapologetic about who we are as Matthew exemplified,
“I think they appreciate maybe that I’m just honest with who I am. At least that’s my hope, that
they recognize that you don’t have to be afraid to be an individual, and you don’t have to be
afraid to learn.” The last suggestion in fostering empowerment in the classroom is to encourage
real conversations. Matthew urged that this didn’t happen enough and people just “don’t ask
about how certain actions or things might make someone uncomfortable or something of the
sort.”
We can learn a lot from Matthew’s example. What seems most encouraging is knowing
that there is no perfect approach; it just takes a true desire and commitment to making a difference.
Matthew was focused on helping students develop and on what he envisioned as outcomes.
At the end of his narrative, he describes a recent meeting with former students from his
first transitional college. Most compelling, Matthew clearly remembers that perceiving these
particular students through a deficit lens at a snapshot in time could easily have rendered the
verdict, “Yeah, they’re not going to be successful,” for a lack of demonstrating traditional metrics
of success. However, the students came back to him—one finishing a Ph.D. in civil engineering and
another a Master’s in mechanical engineering—outcomes most would regard as extremely successful.
We cannot stop at how students show up in our classrooms, but as exemplar educators, must
challenge ourselves to identify ways to empower them beyond the barriers to reach their fullest
potential. It is our job as educators to help every student envision themselves as the engineer
they wish to be. We are grateful for educators like Matthew and hope that others can be inspired
and learn from his insight.
LESSON 8: CONNECTING THEORY TO THE REAL WORLD
IN THE CLASSROOM
In some of the stories, there was a focus on providing opportunities for students to experience
engineering practice. In many engineering classrooms, there tends to be a focus on technical
solutions only, with no consideration of the complexity of solutions, especially when embedded
in our social systems. Fernanda discusses a need to understand project complexity as we tend
to over-simplify problems in engineering classes. She pushes students to think about “How do
you connect the concepts that we covered in class to that real-world problem? How do you do
something that people in the real world, e.g., a safety engineer, are actually doing?” To do this,
she has strong connections with industry and includes real-world projects in her classes.
Chris also brings socio-technical problems into his classroom. For example, when he has
students do watershed analysis, he embeds that discussion with the impacts that watersheds
can have on communities. He discussed the Aberjona River watershed, where contamination of
a community’s water led to increased cases of leukemia. While this case is well known, it is also
less than 20 miles from his
university. Chris helps students make “connections between the abstract concept of watershed
analysis and the concrete reality of understanding a watershed so that you can see its impacts on
the community.”
What are ways that we can make more explicit connections between the course curricula
and our surrounding community? How can we begin to embed engineering problems into the
real world? How can we begin to focus on the assumptions that we are making to teach engi-
neering science courses? In her research, Erin Cech [2014] found that students become
more disengaged as they continue in an engineering program. Specifically, students’ concerns
about public welfare diminish as they continue in their studies. Through an attempt to make
more connections between engineering and design and the communities that could be impacted
by those designs, we could shift the paradigm to graduate engineers who are more engaged in
public welfare.
LESSON 9: USING IDEAS FROM ENTREPRENEURSHIP IN
ENGINEERING EDUCATION
Throughout many of the faculty stories, there was discussion of an entrepreneurial mindset help-
ing teaching. Chris explains that to him, taking an entrepreneurial approach means thinking
about the values of your students and developing ways of satisfying their values. Thais also ex-
plains that she had an “aha” moment when she realized that the students are her clients “and if
they are not happy or if this is not useful for them, I have to do something.” She then began at-
tending Center for Teaching and Learning meetings and learning different teaching approaches
so that she could begin to serve “more of the students and tailor my teaching to their needs.”
In her advice, Donna also encourages other engineering educators to be entrepreneurial.
She explains that you will always have constraints as you are trying to do things differently and
push the boundaries, and advises you to “work creatively with, around, and through” those con-
straints. She explains that this creative “attitude” will help you continue to push boundaries and
do things differently. Donna explains that “there are unchallenged assumptions everywhere” and
that as innovative engineering educators, we have to begin pushing against those assumptions
to truly be innovative. She also advises engineering educators to not be surprised when you en-
counter pushback from your peers, students, or administrators. This may actually be a sign that
you are doing something right.
In Chris’ story, he poses some questions for us to consider as educators,
Why not be entrepreneurial in applying an educational concept? An educational in-
novation? When most of the education that we still receive today is the traditional
lecture style, when people can deliver it in a different way, why can’t that be an en-
trepreneurial effort?
Maybe considering ways of being more entrepreneurial or innovative in our approaches to teach-
ing could help us become stronger teachers. When we are trying something new and it does not
seem to be working well, when should we pivot to something new? When we are truly propos-
ing something transformative in our teaching, should we expect to get some pushback from
colleagues and students? These ideas of value propositions, customer segments, and pivots could
be helpful as we begin pushing the boundaries of traditional engineering teaching and learning
to do something truly innovative as engineering educators.
LESSON 10: COMFORT WITH AMBIGUITY AND
RELINQUISHING CONTROL ARE REQUIRED
As faculty members, we have a natural tendency to aim to cover all of the material designated by
the course syllabus. However, in the implementation of student-centered pedagogies, and par-
ticularly active learning, the proposed presentation of content does not always execute according
to plan and this was one of the adjustments that the faculty in these stories had to get used to.
These approaches necessitate a flexibility that challenges the certainty of knowing—whether it’s
the faculty’s confidence in adequately covering the material or knowing exactly which topics the
students will walk away from the class having mastered—there is a comfort with ambiguity and
a relinquishing of control that essentially has to happen for faculty to shift their teaching.
This was demonstrated in Thais’ story when she described the lack of a consistent expec-
tation, “I never have the same exact lesson. Never, ever.” For many faculty, the thought of such
variation would be daunting. In fact, one of the reasons we invest such time and effort into
developing a given lesson is the notion that the material will be repeatable and reusable. Brad
mentioned that lecturing would simply be easier and that if he was solely focused on self, lec-
turing would be the rational choice. Thais acknowledged this tension when she admitted, “It’s
very hard to come to grips with the idea that you want to introduce these things and you want
to give freedom for them to lead the class, and at the same time cover the course material.”
How can we as faculty learn to be open to the ambiguity and surrendering of control
that is required when the outcomes of teaching approaches are less predictable? As we learn from
Thais and Brad, having the confidence to try new things is imperative. Additionally, another
way to navigate this struggle involves simply learning to accept that adopting active learning
strategies may come with tradeoffs and/or require compromises. Students may not cover every
single item in the course as they have historically, but the hope is that they will leave the class
with a richer experience of engagement through thinking critically that fills those gaps all the
same. In the words of Brad, “The workload is immensely higher than traditional teaching, but I
think, just from my standpoint, I really see huge benefits to the students.”
LESSON 11: LEARN SOMETHING NEW
A possible way to become better teachers is to become learners ourselves. In many of the stories,
the faculty discussed becoming inspired when they were a student themselves. As a child, Sara
knew she wanted to be a teacher. She had positive experiences as she pursued her undergraduate
degree at Dartmouth, where she also served as a Teaching Assistant and experienced engineering
faculty who cared about teaching. When she began to pursue her Ph.D. at a research-focused
university, she was shocked by the poor quality of teaching she was experiencing. This, in part,
motivated her even further to become a good teacher herself. Donna was also inspired, not
when taking her engineering courses, but when taking humanities and social science courses as
an undergraduate student. She discusses faculty from non-engineering departments that created
an environment where she felt that she had something important to say, even though she had
not taken some of the prerequisites for the course.
While many of these examples are of faculty becoming initially inspired to become better
teachers through their experiences as students themselves, it makes me wonder if we can emulate
those inspirations as the time since we were students ourselves increases. Brad discusses that he
enjoys learning and trying new things, and maybe we can take some inspiration from Brad. We
can continue to learn new things and experience being a student throughout our lives. Recently,
I began to learn to play the guitar. For me, this has been a huge inspiration as I get to experience
first-hand being a complete novice and having so much to learn. In trying different ways of
learning guitar, I have experienced different types of instructors. Some instructors have a fixed
mindset and believe that either you can be a guitar player or you cannot, that somehow some
people are born good guitar players. Another instructor, who teaches online, goes to great lengths
to explain that anyone can learn to play the guitar. He explains that babies are not born as guitar
players and that everyone has to work at it. While it may come easier to some people, some of
the greatest guitar players worked very hard to become the guitar players that they are today.
Teaching using this growth mindset is an inspiration and helps you feel like you belong and that
you, too, can become a guitar player. This has inspired me to change the way I talk about learning
Statics in one of my classes. Using this growth mindset explicitly in the class may help many of
my students who were not as prepared academically as others in the class. Students need to be
empowered and not held to the limitations of their preparation.
How can we continue to be inspired in our teaching? Maybe one way is to become a
student ourselves. What is something that you’ve always wanted to learn, but never found the
time? You could even extend the challenge to learn something completely different than your
background—possibly learning to dance or paint. Maybe in learning something new you can
become more inspired to become a stronger engineering educator. Maybe you can also begin to
have more empathy for students who are struggling in your classes or those whose backgrounds
have not equipped them with the tools of success.
CONCLUSION
Hopefully sharing the raw and real stories of engineering faculty in their transformations in
teaching has inspired you in your own personal journey toward becoming an exemplary en-
gineering educator. This book can serve as a catalyst for you to begin learning about others’
teaching stories. Many of us have worked toward becoming better teachers and have encountered
obstacles, while also experiencing some success. Continue these conversations by asking engi-
neering educators about their stories of change and sharing your own journey.
REFERENCES
Cech, E. A. (2014). Culture of disengagement in engineering education? Science, Technology,
and Human Values, 39(1), pp. 42–72. DOI: 10.1177/0162243913504305. 97
Llopart, M. and Esteban-Guitart, M. (2018). Funds of knowledge in 21st century societies:
Inclusive educational practices for under-represented students. A literature review. Journal of
Curriculum Studies. DOI: 10.1080/00220272.2016.1247913. 94
Secules, S., Gupta, A., Elby, A., and Turpen, C. (2018). Zooming out from the struggling
individual student: An account of the cultural construction of engineering ability in an un-
dergraduate programming class. Journal of Engineering Education. DOI: 10.1002/jee.20191.
94
Authors’ Biographies
(in order of appearance)
NADIA KELLAM
Nadia Kellam is an Associate Professor in the Polytechnic
School of the Ira A. Fulton Schools of Engineering at Arizona
State University. She is a qualitative researcher who primarily
uses narrative research methods. In her research, Dr. Kellam
is broadly interested in developing critical understandings of
the culture of engineering education and, especially, the ex-
periences of underrepresented undergraduate engineering stu-
dents and engineering educators. In addition to teaching un-
dergraduate engineering courses and a graduate course on en-
trepreneurship, she also enjoys teaching qualitative research
methods in engineering education in the Engineering Educa-
tion Systems and Design Ph.D. program at ASU. Nadia serves as Deputy Editor of the Journal
of Engineering Education.
BROOKE COLEY
Brooke Coley is an Assistant Professor in Engineering at the
Polytechnic School of the Ira A. Fulton Schools of Engineer-
ing at Arizona State University. Dr. Coley is Principal Inves-
tigator of the Shifting Perceptions, Attitudes and Cultures in
Engineering (SPACE) Lab that aspires to elevate the expe-
riences of marginalized populations, dismantle systematic in-
justices, and transform the way inclusion is cultivated in engi-
neering through the implementation of novel technologies and
methodologies in engineering education. Intrigued by the in-
tersections of engineering education, mental health, and social
justice, Dr. Coley’s primary research interest focuses on virtual
reality as a tool for developing empathetic and inclusive mindsets among engineering faculty.
She is also interested in hidden populations in engineering education and innovation for more
inclusive pedagogies.
AUDREY BOKLAGE
Audrey Boklage is a Research Assistant in the Center for En-
gineering Education of the Cockrell School of Engineering at
The University of Texas at Austin. Prior to entering graduate
school, she taught high school science in Texas for seven years.
During this time, she redesigned curriculum and served as a
mentor for new-to-profession educators. Upon receiving her
doctorate degree in Curriculum and Instruction with a focus on
STEM education, she became specifically interested in narra-
tive research methods and faculty development within schools
of engineering. Her current research interests include creating
inclusive spaces within university engineering environments,
specifically makerspaces and asset-based pedagogies.
DONNA RILEY
Donna Riley is Kamyar Haghighi Head of the School of En-
gineering Education and Professor of Engineering Education
at Purdue University. She is the author of two books, Engi-
neering and Social Justice and Engineering Thermodynamics and
21st Century Energy Problems, both published by Morgan &
Claypool. Riley earned a B.S.E. in Chemical Engineering
from Princeton University and a Ph.D. from Carnegie Mellon
University in Engineering and Public Policy. She is a fellow of
the American Society for Engineering Education.
SARA ATWOOD
Sara Atwood is an Associate Professor and Chair of Engineer-
ing and Physics at Elizabethtown College. She received a B.A.
and M.S. in Engineering Sciences from Dartmouth College
and a Ph.D. in Mechanical Engineering from the University
of California at Berkeley. She is passionate about engaging un-
derrepresented students in engineering education, teaching en-
gineers in a liberal arts setting, and encouraging students to use
their engineering skills to be empowered citizens.
BRAD HYATT
Brad Hyatt is an Associate Professor and the Chair of the
Department of Construction Management in the Lyles Col-
lege of Engineering at California State University, Fresno
(Fresno State). He has an M.S. in Engineering with a focus
on Construction Engineering and Project Management from
The University of Texas at Austin and a B.S. in Civil Engi-
neering from the University of Kentucky. He teaches courses
in construction estimating, scheduling, documents, and project
controls. He actively conducts research on data and predictive
analytics in construction, leadership in construction, lean con-
struction practices, and integrating technology into construc-
tion pedagogy. Professor Hyatt continuously participates in leadership roles at Fresno State.
He is a DISCOVERe Faculty Fellow and serves on the steering committee for the President’s
Leadership Academy. These transformational programs provide innovative solutions to mobile
technology in the classroom and to the development of future leaders at Fresno State. Addi-
tionally, Professor Hyatt led a group of faculty to review learning management systems for the
campus during the 2017/2018 academic year. Professor Hyatt is a Registered Professional En-
gineer in California and LEED Accredited Professional (Building, Design & Construction)
with over 20 years of professional experience in program and project management of facilities,
engineering, and construction projects. Professor Hyatt spent nearly ten years as a U.S. Navy
Civil Engineer Corps Officer prior to his academic career. He also worked as a Construction
Project Management consultant in between his military service and academic career. His broad
industry expertise includes sustainable design and construction, facilities management, construc-
tion management, capital improvements planning, energy management, disaster response, and
construction workforce shaping. He has managed a variety of projects from a large, complex
replacement hospital to small fuel tank renovations.
CHRIS SWAN
Chris Swan is Dean of Undergraduate Education in the
School of Engineering at Tufts University and an Associate
Professor in its Civil and Environmental Engineering Depart-
ment. He is also a senior fellow in Tisch College of Civic Life.
Previously, he has served as CEE department chair. He re-
ceived a ScD degree in Civil and Environmental Engineering
from MIT in 1994 and both B.S. and M.S. degrees in Civil
Engineering from the University of Texas at Austin in 1984
and 1986, respectively. An initiator of explicitly incorporating
components of service-learning into engineering curriculum at
Tufts, he continues to champion the development and imple-
mentation of civic engagement in engineering education. For example, he currently serves as
an advisor to the Tufts student chapter of Engineers Without Borders. Current engineering edu-
cation research efforts focus on evaluating the impact of service-based learning in engineering
education, as well as applying entrepreneurial principles in examining sustainable and scalable
pathways for innovations in engineering education. He was also an inaugural Faculty Fellow
of Tisch College and of the Center for the Enhancement of Learning and Teaching (CELT).
In addition, Chris researches the development of reuse strategies for waste materials. Most no-
tably, his research efforts have focused on the reuse of fly ash from coal burning facilities with
waste plastics. This has led to the development of synthetic lightweight aggregates (SLA), an
innovative construction material that can be used in place of traditional sand and gravel.
THAIS ALVES
Thais Alves specializes in construction management and
project-based production systems. Her areas of interest in-
clude the application of Lean production/construction con-
cepts, principles, and tools to improve the performance of
production systems and products in different stages of their
life-cycle and supply chains. Additionally, she is interested
in how contracts and delivery methods support collaboration
across supply chains in the Owner-Architecture-Engineering-
Construction industry. For more than 15 years, Thais has
been teaching, advising students, researching, and collaborat-
ing with construction companies toward the dissemination and
implementation of Lean, especially in the field of production planning and control at construc-
tion sites. She is currently the AGC—Paul S. Roel Chair in Construction Engineering and
Management at the J.R. Filanc Construction Engineering and Management Program at San
Diego State University.
FERNANDA LEITE
Fernanda Leite is an Associate Professor in Construction En-
gineering and Project Management, in the Civil, Architectural
and Environmental Engineering (CAEE) Department at the
University of Texas at Austin. She has a Ph.D. in Civil and
Environmental Engineering from Carnegie Mellon Univer-
sity. Prior to her graduate education, she worked as a Project
Manager in her home country of Brazil, in multiple gov-
ernment infrastructure and commercial building construction
projects. Her technical interests include information technol-
ogy for project management, building information modeling,
collaboration and coordination technologies, and information
technology-supported construction safety management. She has taught four unique courses at
UT and has integrated project-based and experiential learning into all of her courses through
class projects, industry mentorships, and interactive exercises. She serves as Graduate Program
Coordinator for CAEE’s Sustainable Systems graduate program and on the Executive Com-
mittee for the University-wide Grand Challenges effort called Planet Texas 2050. She currently
supervises 8 Ph.D. and 5 M.S. students. She has graduated 7 Ph.D. and 36 M.S. students.
CHARLES E. PIERCE
Charles E. Pierce is an Associate Professor in the Department of Civil and Environmental En-
gineering at the University of South Carolina (USC), where he has been teaching since 1998.
He has an M.S. and Ph.D. in Civil Engineering from Northwestern University and a B.S. de-
gree in Civil Engineering from the University of New Hampshire. He is the current Director for
Diversity and Inclusion in his department and a USC Connect Faculty Fellow for Integrative
Learning. He was awarded the Michael J. Mungo Undergraduate Teaching Award for USC in
2006, and he is also the recipient of the Samuel P. Litman Award and Bell South Teaching
Fellowship in recognition of his contributions to engineering education. Dr. Pierce is an ac-
tive member of ASEE and serves as the campus representative for USC. He is committed to
improving engineering education across the K-20 spectrum. His contributions include leading
professional development activities on engineering for middle and high school math and science
teachers and creating programs for graduate students in engineering to integrate research and
teaching. His undergraduate educational interests include the facilitation and assessment of crit-
ical thinking through problem-based learning using the Environments for Fostering Effective
Critical Thinking (EFFECTs) framework developed with his colleagues at USC.
MATTHEW FUENTES
Matthew Fuentes is currently a member of the engineering faculty at Everett Community Col-
lege. He has been teaching at community colleges for 10 years. He earned a B.S. and M.S. in
Aerospace Engineering from the University of Tennessee.
huggingface/hf-endpoints-documentation/blob/main/docs/source/guides/create_endpoint.mdx
Create an Endpoint
After your first login, you will be directed to the [Endpoint creation page](https://ui.endpoints.huggingface.co/new). As an example, this guide will go through the steps to deploy [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) for text classification.
## 1. Enter the Hugging Face Repository ID and your desired endpoint name:
<img src="https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/1_repository.png" alt="select repository" />
## 2. Select your Cloud Provider and region. Initially, only AWS will be available as a Cloud Provider with the `us-east-1` and `eu-west-1` regions. We will add Azure soon, and if you need to test Endpoints with other Cloud Providers or regions, please let us know.
<img src="https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/1_region.png" alt="select region" />
## 3. Define the [Security Level](security) for the Endpoint:
<img src="https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/1_security.png" alt="define security" />
## 4. Create your Endpoint by clicking **Create Endpoint**. By default, your Endpoint is created with a medium CPU (2 x 4GB vCPUs with Intel Xeon Ice Lake). The cost estimate assumes the Endpoint will be up for an entire month, and does not take autoscaling into account.
<img src="https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/1_create_cost.png" alt="create endpoint" />
## 5. Wait for the Endpoint to build, initialize and run which can take between 1 to 5 minutes.
<img src="https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/overview.png" alt="overview" />
## 6. Test your Endpoint in the overview with the Inference widget 🏁 🎉!
<img src="https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/1_inference.png" alt="run inference" />
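Once the Endpoint is running, you can also query it programmatically. Here is a minimal sketch using Python and `requests`; the Endpoint URL and token below are placeholders that you would replace with your own values from the Endpoint overview and your account settings:
```python
import requests

# Placeholder URL -- copy the real one from your Endpoint's overview page
API_URL = "https://<your-endpoint-name>.endpoints.huggingface.cloud"
# A Hugging Face access token that is allowed to call this Endpoint
headers = {"Authorization": "Bearer <your-token>"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love this movie!"})
print(response.json())  # for text classification, a list of {"label": ..., "score": ...} entries
```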
huggingface/evaluate/blob/main/docs/source/choosing_a_metric.mdx
Choosing a metric for your task
**So you've trained your model and want to see how well it’s doing on a dataset of your choice. Where do you start?**
There is no “one size fits all” approach to choosing an evaluation metric, but some good guidelines to keep in mind are:
## Categories of metrics
There are 3 high-level categories of metrics:
1. *Generic metrics*, which can be applied to a variety of situations and datasets, such as precision and accuracy.
2. *Task-specific metrics*, which are limited to a given task, such as Machine Translation (often evaluated using metrics [BLEU](https://huggingface.co/metrics/bleu) or [ROUGE](https://huggingface.co/metrics/rouge)) or Named Entity Recognition (often evaluated with [seqeval](https://huggingface.co/metrics/seqeval)).
3. *Dataset-specific metrics*, which aim to measure model performance on specific benchmarks: for instance, the [GLUE benchmark](https://huggingface.co/datasets/glue) has a dedicated [evaluation metric](https://huggingface.co/metrics/glue).
Let's look at each of these three cases:
### Generic metrics
Many of the metrics used in the Machine Learning community are quite generic and can be applied in a variety of tasks and datasets.
This is the case for metrics like [accuracy](https://huggingface.co/metrics/accuracy) and [precision](https://huggingface.co/metrics/precision), which can be used for evaluating labeled (supervised) datasets, as well as [perplexity](https://huggingface.co/metrics/perplexity), which can be used for evaluating different kinds of (unsupervised) generative tasks.
To see the input structure of a given metric, you can look at its metric card. For example, in the case of [precision](https://huggingface.co/metrics/precision), the format is:
```
>>> precision_metric = evaluate.load("precision")
>>> results = precision_metric.compute(references=[0, 1], predictions=[0, 1])
>>> print(results)
{'precision': 1.0}
```
### Task-specific metrics
Popular ML tasks like Machine Translation and Named Entity Recognition have specific metrics that can be used to compare models. For example, a series of different metrics have been proposed for text generation, ranging from [BLEU](https://huggingface.co/metrics/bleu) and its derivatives such as [GoogleBLEU](https://huggingface.co/metrics/google_bleu) and [GLEU](https://huggingface.co/metrics/gleu), but also [ROUGE](https://huggingface.co/metrics/rouge), [MAUVE](https://huggingface.co/metrics/mauve), etc.
You can find the right metric for your task by:
- **Looking at the [Task pages](https://huggingface.co/tasks)** to see what metrics can be used for evaluating models for a given task.
- **Checking out leaderboards** on sites like [Papers With Code](https://paperswithcode.com/) (you can search by task and by dataset).
- **Reading the metric cards** for the relevant metrics and see which ones are a good fit for your use case. For example, see the [BLEU metric card](https://github.com/huggingface/evaluate/tree/main/metrics/bleu) or [SQuaD metric card](https://github.com/huggingface/evaluate/tree/main/metrics/squad).
- **Looking at papers and blog posts** published on the topic and see what metrics they report. This can change over time, so try to pick papers from the last couple of years!
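As an illustration, here is a minimal sketch of loading and computing one such task-specific metric, BLEU (the prediction and reference strings are made up; since the prediction matches the reference exactly, the score comes out at its maximum):
```
>>> bleu_metric = evaluate.load("bleu")
>>> predictions = ["hello there general kenobi"]
>>> references = [["hello there general kenobi"]]
>>> results = bleu_metric.compute(predictions=predictions, references=references)
>>> print(results["bleu"])
1.0
```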
### Dataset-specific metrics
Some datasets have specific metrics associated with them -- this is especially the case for popular benchmarks like [GLUE](https://huggingface.co/metrics/glue) and [SQuAD](https://huggingface.co/metrics/squad).
<Tip warning={true}>
💡
GLUE is actually a collection of different subsets on different tasks, so first you need to choose the one that corresponds to the NLI task, such as mnli, which is described as “crowdsourced collection of sentence pairs with textual entailment annotations”
</Tip>
If you are evaluating your model on a benchmark dataset like the ones mentioned above, you can use its dedicated evaluation metric. Make sure you respect the format that they require. For example, to evaluate your model on the [SQuAD](https://huggingface.co/datasets/squad) dataset, you need to feed the `question` and `context` into your model and return the `prediction_text`, which should be compared with the `references` (based on matching the `id` of the question) :
```
>>> from evaluate import load
>>> squad_metric = load("squad")
>>> predictions = [{'prediction_text': '1976', 'id': '56e10a3be3433e1400422b22'}]
>>> references = [{'answers': {'answer_start': [97], 'text': ['1976']}, 'id': '56e10a3be3433e1400422b22'}]
>>> results = squad_metric.compute(predictions=predictions, references=references)
>>> results
{'exact_match': 100.0, 'f1': 100.0}
```
You can find examples of dataset structures by consulting the "Dataset Preview" function or the dataset card for a given dataset, and you can see how to use its dedicated evaluation function based on the metric card.
gradio-app/gradio/blob/main/guides/cn/01_getting-started/02_key-features.md
Key Features
Let's go through some of Gradio's most popular features! Here are Gradio's key features:
1. [Adding example inputs](#example-inputs)
2. [Passing custom error messages](#errors)
3. [Adding descriptive content](#descriptive-content)
4. [Setting up flagging](#flagging)
5. [Preprocessing and postprocessing](#preprocessing-and-postprocessing)
6. [Styling demos](#styling)
7. [Queuing users](#queuing)
8. [Iterative outputs](#iterative-outputs)
9. [Progress bars](#progress-bars)
10. [Batch functions](#batch-functions)
11. [Running on Colab notebooks](#colab-notebooks)
## Example Inputs
You can provide example data that a user can easily load into an `Interface`. This can be helpful to demonstrate the types of inputs the model expects, as well as to provide a way to explore your dataset in conjunction with your model. To load example data, you can provide a **nested list** to the `examples=` keyword argument of the Interface constructor. Each sublist within the outer list represents a data sample, and each element within the sublist represents an input for each input component. The format of example data for each component is described in the [Docs](https://gradio.app/docs#components).
$code_calculator
$demo_calculator
You can load a large dataset into the examples to browse and interact with the dataset through Gradio. The examples will be automatically paginated (you can configure this through the `examples_per_page` argument of `Interface`).
Continue learning about examples in the [More On Examples](https://gradio.app/more-on-examples) guide.
## Errors
You may wish to pass custom error messages to the user. To do so, raise a `gr.Error("custom message")` to display an error message. If you try to divide by zero in the calculator demo above, a popup modal will display the custom error message, as in the short sketch below. Learn more about Errors in the [docs](https://gradio.app/docs#error).
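As a minimal sketch, raising `gr.Error` inside a function looks like this (the divide function is just for the example):
```python
import gradio as gr

def divide(num1, num2):
    if num2 == 0:
        # Raising gr.Error displays a popup modal with this message in the UI
        raise gr.Error("Cannot divide by zero!")
    return num1 / num2

demo = gr.Interface(divide, inputs=["number", "number"], outputs="number")
```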
## Descriptive Content
In the previous example, you may have noticed the `title=` and `description=` keyword arguments in the Interface constructor that help users understand your app.
There are three arguments in the Interface constructor to specify where this content should go:
- `title`: which accepts text and can display it at the very top of the interface, and also becomes the page title.
- `description`: which accepts text, markdown or HTML and places it right under the title.
- `article`: which also accepts text, markdown or HTML and places it below the interface.

If you're using the `Blocks` API instead, you can insert text, markdown, or HTML anywhere using the `gr.Markdown(...)` or `gr.HTML(...)` components, with the descriptive content inside the `Component` constructor.
Another useful keyword argument is `label=`, which is present in every `Component`. This modifies the label text at the top of each `Component`. You can also add the `info=` keyword argument to form elements like `Textbox` or `Radio` to provide further information on their usage.
```python
gr.Number(label='Age', info='In years, must be greater than 0')
```
## Flagging
By default, an `Interface` will have a "Flag" button. When a user testing your `Interface` sees interesting output, such as erroneous or unexpected model behaviour, they can flag the input for you to review. The flagged inputs are logged to a CSV file within the directory provided by the `flagging_dir=` argument to the `Interface` constructor. If the interface involves file data, such as for Image and Audio components, folders will be created to store that flagged data as well.
For example, with the calculator interface shown above, we would have the flagged data stored in the flagged directory shown below:
```directory
+-- calculator.py
+-- flagged/
| +-- logs.csv
```
_flagged/logs.csv_
```csv
num1,operation,num2,Output
5,add,7,12
6,subtract,1.5,4.5
```
With the sepia interface shown earlier, we would have the flagged data stored in the flagged directory shown below:
```directory
+-- sepia.py
+-- flagged/
| +-- logs.csv
| +-- im/
| | +-- 0.png
| | +-- 1.png
| +-- Output/
| | +-- 0.png
| | +-- 1.png
```
_flagged/logs.csv_
```csv
im,Output
im/0.png,Output/0.png
im/1.png,Output/1.png
```
If you wish for the user to provide a reason for flagging, you can pass a list of strings to the `flagging_options` argument of Interface. Users will have to select one of these strings when flagging, which will be saved to the CSV as an additional column.
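As a small sketch (the option strings here are arbitrary, and `calculator` refers to the demo function from earlier in this guide):
```python
demo = gr.Interface(
    calculator,
    inputs=["number", gr.Radio(["add", "subtract", "multiply", "divide"]), "number"],
    outputs="number",
    flagging_options=["incorrect result", "other"],
)
```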
## Preprocessing and Postprocessing

As you've seen, Gradio includes components that can handle a variety of different data types, such as images, audio, and video. Most components can be used both as inputs or outputs.
When a component is used as an input, Gradio automatically handles the *preprocessing* needed to convert the data from the type sent by the user's browser (such as a base64 representation of a webcam snapshot) to a form that can be accepted by your function (such as a `numpy` array).
Similarly, when a component is used as an output, Gradio automatically handles the *postprocessing* needed to convert the data from what is returned by your function (such as a list of image paths) to a form that can be displayed in the user's browser (such as a `Gallery` of images in base64 format).
You can control the *preprocessing* using the parameters when constructing the image component. For example, if you instantiate the `Image` component with the following parameters, it will convert the image to the `PIL` type and reshape it to be `(100, 100)` no matter the original size that it was submitted as:
```py
img = gr.Image(shape=(100, 100), type="pil")
```
In contrast, here we keep the original size of the image, but invert the colors before converting it to a numpy array:
```py
img = gr.Image(invert_colors=True, type="numpy")
```
Postprocessing is a lot easier! Gradio automatically recognizes the format of the returned data (e.g. is the `Image` a `numpy` array or a `str` filepath?) and postprocesses it into a format that can be displayed by the browser.
Take a look at the [Docs](https://gradio.app/docs) to see all the preprocessing-related parameters for each Component.
## Styling
Gradio themes are the easiest way to customize the look and feel of your app. You can choose from a variety of themes, or create your own. To do so, pass the `theme=` argument to the `Interface` constructor. For example:
```python
demo = gr.Interface(..., theme=gr.themes.Monochrome())
```
Gradio comes with a set of prebuilt themes which you can load from `gr.themes.*`. You can extend these themes or create your own themes from scratch - see the [Theming guide](https://gradio.app/theming-guide) for more details.
For additional styling ability, you can pass any CSS to your app using the `css=` keyword.
The base class for the Gradio app is `gradio-container`, so here's an example that changes the background color of the Gradio app:
```python
with gr.Interface(css=".gradio-container {background-color: red}") as demo:
    ...
```
## Queuing
If your app expects heavy traffic, use the `queue()` method to control the processing rate. This will queue up calls so only a certain number of requests are processed at a single time. Queueing uses websockets, which also prevent network timeouts, so you should use queueing if the inference time of your function is long (> 1 min).
With `Interface`:
```python
demo = gr.Interface(...).queue()
demo.launch()
```
With `Blocks`:
```python
with gr.Blocks() as demo:
    #...
demo.queue()
demo.launch()
```
You can control the number of requests processed at a single time as such:
```python
demo.queue(concurrency_count=3)
```
See the [Docs on queueing](/docs/#queue) for configuring other queuing parameters.
To specify only certain functions for queueing in Blocks:
```python
with gr.Blocks() as demo2:
    num1 = gr.Number()
    num2 = gr.Number()
    output = gr.Number()
    gr.Button("Add").click(
        lambda a, b: a + b, [num1, num2], output)
    gr.Button("Multiply").click(
        lambda a, b: a * b, [num1, num2], output, queue=True)
demo2.launch()
```
## Iterative Outputs
In some cases, you may want to stream a sequence of outputs rather than show a single output all at once. For example, you might have an image generation model and want to show the image generated at each step, leading up to the final image. Or you might have a chatbot that streams its response one word at a time instead of returning it all at once.
In such cases, you can supply a **generator** function into Gradio instead of a regular function. Creating generators in Python is very simple: instead of a single `return` value, a function should `yield` a series of values instead. Usually the `yield` statement is put in some kind of loop. Here's an example of a generator that simply counts up to a given number:
```python
def my_generator(x):
    for i in range(x):
        yield i
```
You supply a generator into Gradio the same way as you would a regular function. For example, here's a (fake) image generation model that generates noise for several steps before outputting an image:
$code_fake_diffusion
$demo_fake_diffusion
Note that we've added a `time.sleep(1)` in the iterator to create an artificial pause between steps so that you are able to observe the steps of the iterator (in a real image generation model, this probably wouldn't be necessary).
Supplying a generator into Gradio **requires** you to enable queuing in the underlying Interface or Blocks (see the queuing section above).
## Progress Bars
Gradio supports the ability to create custom Progress Bars so that you have customizability and control over the progress updates shown to the user. To enable this, simply add an argument to your method whose default value is a `gr.Progress` instance. You can then update the progress level by calling this instance directly with a float between 0 and 1, or by using the `tqdm()` method of the `Progress` instance to track progress over an iterable, as shown below. Queuing must be enabled for progress updates.
$code_progress_simple
$demo_progress_simple
If you use the `tqdm` library, you can even report progress updates automatically from any `tqdm.tqdm` that already exists within your function by setting the default argument to `gr.Progress(track_tqdm=True)`!
## Batch Functions
Gradio supports the ability to pass *batched* functions. Batched functions are just functions which take in a list of inputs and return a list of predictions.
For example, here is a batched function that takes in two lists of inputs (a list of words and a list of ints), and returns a list of trimmed words as output:
```python
import time

def trim_words(words, lens):
    trimmed_words = []
    time.sleep(5)
    for w, l in zip(words, lens):
        trimmed_words.append(w[:int(l)])
    return [trimmed_words]
```
The advantage of using batched functions is that if you enable queuing, the Gradio server can automatically *batch* incoming requests and process them in parallel, potentially speeding up your demo. Here's what the Gradio code looks like (notice the `batch=True` and `max_batch_size=16` -- both of these parameters can be passed into event triggers or into the `Interface` class):
With `Interface`:
```python
demo = gr.Interface(trim_words, ["textbox", "number"], ["output"],
batch=True, max_batch_size=16)
demo.queue()
demo.launch()
```
With `Blocks`:
```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        word = gr.Textbox(label="word")
        leng = gr.Number(label="leng")
        output = gr.Textbox(label="Output")
    with gr.Row():
        run = gr.Button()

    event = run.click(trim_words, [word, leng], output, batch=True, max_batch_size=16)

demo.queue()
demo.launch()
```
In the example above, 16 requests could be processed in parallel (for a total inference time of 5 seconds), instead of each request being processed separately (for a total inference time of 80 seconds). Many Hugging Face `transformers` and `diffusers` models work very naturally with Gradio's batch mode: here's [an example demo using batch to generate images](https://github.com/gradio-app/gradio/blob/main/demo/diffusers_with_batching/run.py).
Note: using batch functions with Gradio **requires** you to enable queuing in the underlying Interface or Blocks (see the queuing section above).
## Colab Notebooks
Gradio can run anywhere you run Python, including local Jupyter notebooks as well as collaborative notebooks such as [Google Colab](https://colab.research.google.com/). In the case of local Jupyter notebooks and Google Colab notebooks, Gradio runs on a local server which you can interact with in your browser. (Note: for Google Colab, this is accomplished by [service worker tunneling](https://github.com/tensorflow/tensorboard/blob/master/docs/design/colab_integration.md), which requires cookies to be enabled in your browser.) For other remote notebooks, Gradio will also run on a server, but you will need to use [SSH tunneling](https://coderwall.com/p/ohk6cg/remote-access-to-ipython-notebooks-via-ssh) to view the app in your local browser. Often a simpler option is to use Gradio's built-in public links, [discussed in the next guide](/sharing-your-app/#sharing-demos).
huggingface/transformers/blob/main/docs/source/en/perf_train_tpu_tf.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Training on TPU with TensorFlow
<Tip>
If you don't need long explanations and just want TPU code samples to get started with, check out [our TPU example notebook!](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)
</Tip>
### What is a TPU?
A TPU is a **Tensor Processing Unit.** They are hardware designed by Google, which are used to greatly speed up the tensor computations within neural networks, much like GPUs. They can be used for both network training and inference. They are generally accessed through Google’s cloud services, but small TPUs can also be accessed directly for free through Google Colab and Kaggle Kernels.
Because [all TensorFlow models in 🤗 Transformers are Keras models](https://huggingface.co/blog/tensorflow-philosophy), most of the methods in this document are generally applicable to TPU training for any Keras model! However, there are a few points that are specific to the HuggingFace ecosystem (hug-o-system?) of Transformers and Datasets, and we’ll make sure to flag them up when we get to them.
### What kinds of TPU are available?
New users are often very confused by the range of TPUs, and the different ways to access them. The first key distinction to understand is the difference between **TPU Nodes** and **TPU VMs.**
When you use a **TPU Node**, you are effectively indirectly accessing a remote TPU. You will need a separate VM, which will initialize your network and data pipeline and then forward them to the remote node. When you use a TPU on Google Colab, you are accessing it in the **TPU Node** style.
Using TPU Nodes can have some quite unexpected behaviour for people who aren’t used to them! In particular, because the TPU is located on a physically different system to the machine you’re running your Python code on, your data cannot be local to your machine - any data pipeline that loads from your machine’s internal storage will totally fail! Instead, data must be stored in Google Cloud Storage where your data pipeline can still access it, even when the pipeline is running on the remote TPU node.
<Tip>
If you can fit all your data in memory as `np.ndarray` or `tf.Tensor`, then you can `fit()` on that data even when using Colab or a TPU Node, without needing to upload it to Google Cloud Storage.
</Tip>
<Tip>
**🤗Specific Hugging Face Tip🤗:** The methods `Dataset.to_tf_dataset()` and its higher-level wrapper `model.prepare_tf_dataset()` , which you will see throughout our TF code examples, will both fail on a TPU Node. The reason for this is that even though they create a `tf.data.Dataset` it is not a “pure” `tf.data` pipeline and uses `tf.numpy_function` or `Dataset.from_generator()` to stream data from the underlying HuggingFace `Dataset`. This HuggingFace `Dataset` is backed by data that is on a local disc and which the remote TPU Node will not be able to read.
</Tip>
The second way to access a TPU is via a **TPU VM.** When using a TPU VM, you connect directly to the machine that the TPU is attached to, much like training on a GPU VM. TPU VMs are generally easier to work with, particularly when it comes to your data pipeline. All of the above warnings do not apply to TPU VMs!
This is an opinionated document, so here’s our opinion: **Avoid using TPU Node if possible.** It is more confusing and more difficult to debug than TPU VMs. It is also likely to be unsupported in future - Google’s latest TPU, TPUv4, can only be accessed as a TPU VM, which suggests that TPU Nodes are increasingly going to become a “legacy” access method. However, we understand that the only free TPU access is on Colab and Kaggle Kernels, which uses TPU Node - so we’ll try to explain how to handle it if you have to! Check the [TPU example notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) for code samples that explain this in more detail.
### What sizes of TPU are available?
A single TPU (a v2-8/v3-8/v4-8) runs 8 replicas. TPUs exist in **pods** that can run hundreds or thousands of replicas simultaneously. When you use more than a single TPU but less than a whole pod (for example, a v3-32), your TPU fleet is referred to as a **pod slice.**
When you access a free TPU via Colab, you generally get a single v2-8 TPU.
### I keep hearing about this XLA thing. What’s XLA, and how does it relate to TPUs?
XLA is an optimizing compiler, used by both TensorFlow and JAX. In JAX it is the only compiler, whereas in TensorFlow it is optional (but mandatory on TPU!). The easiest way to enable it when training a Keras model is to pass the argument `jit_compile=True` to `model.compile()`. If you don’t get any errors and performance is good, that’s a great sign that you’re ready to move to TPU!
Debugging on TPU is generally a bit harder than on CPU/GPU, so we recommend getting your code running on CPU/GPU with XLA first before trying it on TPU. You don’t have to train for long, of course - just for a few steps to make sure that your model and data pipeline are working like you expect them to.
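As a rough sketch of enabling XLA for a Keras model (the checkpoint and optimizer here are just illustrative; 🤗 Transformers TF models can be compiled without an explicit loss, in which case their internal loss computation is used):

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
# jit_compile=True asks Keras to compile the train/predict functions with XLA
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5), jit_compile=True)
```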
<Tip>
XLA compiled code is usually faster - so even if you’re not planning to run on TPU, adding `jit_compile=True` can improve your performance. Be sure to note the caveats below about XLA compatibility, though!
</Tip>
<Tip warning={true}>
**Tip born of painful experience:** Although using `jit_compile=True` is a good way to get a speed boost and test if your CPU/GPU code is XLA-compatible, it can actually cause a lot of problems if you leave it in when actually training on TPU. XLA compilation will happen implicitly on TPU, so remember to remove that line before actually running your code on a TPU!
</Tip>
### How do I make my model XLA compatible?
In many cases, your code is probably XLA-compatible already! However, there are a few things that work in normal TensorFlow that don’t work in XLA. We’ve distilled them into three core rules below:
<Tip>
**🤗Specific HuggingFace Tip🤗:** We’ve put a lot of effort into rewriting our TensorFlow models and loss functions to be XLA-compatible. Our models and loss functions generally obey rule #1 and #2 by default, so you can skip over them if you’re using `transformers` models. Don’t forget about these rules when writing your own models and loss functions, though!
</Tip>
#### XLA Rule #1: Your code cannot have “data-dependent conditionals”
What that means is that any `if` statement cannot depend on values inside a `tf.Tensor`. For example, this code block cannot be compiled with XLA!
```python
if tf.reduce_sum(tensor) > 10:
tensor = tensor / 2.0
```
This might seem very restrictive at first, but most neural net code doesn’t need to do this. You can often get around this restriction by using `tf.cond` (see the documentation [here](https://www.tensorflow.org/api_docs/python/tf/cond)) or by removing the conditional and finding a clever math trick with indicator variables instead, like so:
```python
sum_over_10 = tf.cast(tf.reduce_sum(tensor) > 10, tf.float32)
tensor = tensor / (1.0 + sum_over_10)
```
This code has exactly the same effect as the code above, but by avoiding a conditional, we ensure it will compile with XLA without problems!
#### XLA Rule #2: Your code cannot have “data-dependent shapes”
What this means is that the shape of all of the `tf.Tensor` objects in your code cannot depend on their values. For example, the function `tf.unique` cannot be compiled with XLA, because it returns a `tensor` containing one instance of each unique value in the input. The shape of this output will obviously be different depending on how repetitive the input `Tensor` was, and so XLA refuses to handle it!
In general, most neural network code obeys rule #2 by default. However, there are a few common cases where it becomes a problem. One very common one is when you use **label masking**, setting your labels to a negative value to indicate that those positions should be ignored when computing the loss. If you look at NumPy or PyTorch loss functions that support label masking, you will often see code like this that uses [boolean indexing](https://numpy.org/doc/stable/user/basics.indexing.html#boolean-array-indexing):
```python
label_mask = labels >= 0
masked_outputs = outputs[label_mask]
masked_labels = labels[label_mask]
loss = compute_loss(masked_outputs, masked_labels)
mean_loss = torch.mean(loss)
```
This code is totally fine in NumPy or PyTorch, but it breaks in XLA! Why? Because the shape of `masked_outputs` and `masked_labels` depends on how many positions are masked - that makes it a **data-dependent shape.** However, just like for rule #1, we can often rewrite this code to yield exactly the same output without any data-dependent shapes.
```python
label_mask = tf.cast(labels >= 0, tf.float32)
loss = compute_loss(outputs, labels)
loss = loss * label_mask # Set negative label positions to 0
mean_loss = tf.reduce_sum(loss) / tf.reduce_sum(label_mask)
```
Here, we avoid data-dependent shapes by computing the loss for every position, but zeroing out the masked positions in both the numerator and denominator when we calculate the mean, which yields exactly the same result as the first block while maintaining XLA compatibility. Note that we use the same trick as in rule #1 - converting a `tf.bool` to `tf.float32` and using it as an indicator variable. This is a really useful trick, so remember it if you need to convert your own code to XLA!
#### XLA Rule #3: XLA will need to recompile your model for every different input shape it sees
This is the big one. What this means is that if your input shapes are very variable, XLA will have to recompile your model over and over, which will create huge performance problems. This commonly arises in NLP models, where input texts have variable lengths after tokenization. In other modalities, static shapes are more common and this rule is much less of a problem.
How can you get around rule #3? The key is **padding** - if you pad all your inputs to the same length, and then use an `attention_mask`, you can get the same results as you’d get from variable shapes, but without any XLA issues. However, excessive padding can cause severe slowdown too - if you pad all your samples to the maximum length in the whole dataset, you might end up with batches consisting endless padding tokens, which will waste a lot of compute and memory!
There isn’t a perfect solution to this problem. However, you can try some tricks. One very useful trick is to **pad batches of samples up to a multiple of a number like 32 or 64 tokens.** This often only increases the number of tokens by a small amount, but it hugely reduces the number of unique input shapes, because every input shape now has to be a multiple of 32 or 64. Fewer unique input shapes means fewer XLA compilations!
<Tip>
**🤗Specific HuggingFace Tip🤗:** Our tokenizers and data collators have methods that can help you here. You can use `padding="max_length"` or `padding="longest"` when calling tokenizers to get them to output padded data. Our tokenizers and data collators also have a `pad_to_multiple_of` argument that you can use to reduce the number of unique input shapes you see!
</Tip>
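For instance, a small sketch of that tip (the checkpoint and sentences are arbitrary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
sentences = ["A short sentence.", "A slightly longer sentence than the first one."]

# Pad to the longest sample in the batch, rounded up to a multiple of 64 tokens,
# so that XLA sees far fewer unique input shapes.
batch = tokenizer(sentences, padding="longest", pad_to_multiple_of=64, return_tensors="tf")
print(batch["input_ids"].shape)  # the sequence dimension is a multiple of 64
```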
### How do I actually train my model on TPU?
Once your training is XLA-compatible and (if you’re using TPU Node / Colab) your dataset has been prepared appropriately, running on TPU is surprisingly easy! All you really need to change in your code is to add a few lines to initialize your TPU, and to ensure that your model and dataset are created inside a `TPUStrategy` scope. Take a look at [our TPU example notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) to see this in action!
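For reference, a minimal sketch of what that usually looks like (`create_model` and `train_dataset` are hypothetical placeholders; see the notebook for a full, tested version):

```python
import tensorflow as tf

# Connect to and initialize the TPU
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Model (and ideally dataset) creation should happen inside the strategy scope
with strategy.scope():
    model = create_model()  # hypothetical helper that builds and compiles your Keras model

model.fit(train_dataset, epochs=3)  # train_dataset assumed to be a TPU-compatible tf.data.Dataset
```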
### Summary
There was a lot in here, so let’s summarize with a quick checklist you can follow when you want to get your model ready for TPU training:
- Make sure your code follows the three rules of XLA
- Compile your model with `jit_compile=True` on CPU/GPU and confirm that you can train it with XLA
- Either load your dataset into memory or use a TPU-compatible dataset loading approach (see [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb))
- Migrate your code either to Colab (with accelerator set to “TPU”) or a TPU VM on Google Cloud
- Add TPU initializer code (see [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb))
- Create your `TPUStrategy` and make sure dataset loading and model creation are inside the `strategy.scope()` (see [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb))
- Don’t forget to take `jit_compile=True` out again when you move to TPU!
- 🙏🙏🙏🥺🥺🥺
- Call model.fit()
- You did it!
gradio-app/gradio/blob/main/demo/blocks_random_slider/run.ipynb
Gradio Demo: blocks_random_slider
```
!pip install -q gradio
```
```
import gradio as gr

def func(slider_1, slider_2):
    return slider_1 * 5 + slider_2

with gr.Blocks() as demo:
    slider = gr.Slider(minimum=-10.2, maximum=15, label="Random Slider (Static)", randomize=True)
    slider_1 = gr.Slider(minimum=100, maximum=200, label="Random Slider (Input 1)", randomize=True)
    slider_2 = gr.Slider(minimum=10, maximum=23.2, label="Random Slider (Input 2)", randomize=True)
    slider_3 = gr.Slider(value=3, label="Non random slider")
    btn = gr.Button("Run")
    btn.click(func, inputs=[slider_1, slider_2], outputs=gr.Number())

if __name__ == "__main__":
    demo.launch()
```
huggingface/hub-docs/blob/main/docs/hub/security-git-ssh.md
Git over SSH
You can access and write data in repositories on huggingface.co using SSH (Secure Shell Protocol). When you connect via SSH, you authenticate using a private key file on your local machine.
Some actions, such as pushing changes, or cloning private repositories, will require you to upload your SSH public key to your account on huggingface.co.
You can use a pre-existing SSH key, or generate a new one specifically for huggingface.co.
## Checking for existing SSH keys
If you have an existing SSH key, you can use that key to authenticate Git operations over SSH.
SSH keys are usually located under `~/.ssh` on Mac & Linux, and under `C:\Users\<username>\.ssh` on Windows. List files under that directory and look for files of the form:
- id_rsa.pub
- id_ecdsa.pub
- id_ed25519.pub
Those files contain your SSH public key.
If you don't have such a file under `~/.ssh`, you will have to [generate a new key](#generating-a-new-ssh-keypair). Otherwise, you can [add your existing SSH public key(s) to your huggingface.co account](#add-a-ssh-key-to-your-account).
## Generating a new SSH keypair
If you don't have any SSH keys on your machine, you can use `ssh-keygen` to generate a new SSH key pair (public + private keys):
```
$ ssh-keygen -t ed25519 -C "your.email@example.com"
```
We recommend entering a passphrase when you are prompted to. A passphrase is an extra layer of security: it is a password that will be prompted whenever you use your SSH key.
Once your new key is generated, add it to your SSH agent with `ssh-add`:
```
$ ssh-add ~/.ssh/id_ed25519
```
If you chose a different location than the default to store your SSH key, you would have to replace `~/.ssh/id_ed25519` with the file location you used.
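If `ssh-add` cannot connect to your authentication agent, the agent may not be running yet. On Mac & Linux you can typically start it first with:
```
$ eval "$(ssh-agent -s)"
```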
## Add a SSH key to your account
To access private repositories with SSH, or to push changes via SSH, you will need to add your SSH public key to your huggingface.co account. You can manage your SSH keys [in your user settings](https://huggingface.co/settings/keys).
To add a SSH key to your account, click on the "Add SSH key" button.
Then, enter a name for this key (for example, "Personal computer"), and copy and paste the content of your **public** SSH key in the area below. The public key is located in the `~/.ssh/id_XXXX.pub` file you found or generated in the previous steps.
Click on "Add key", and voilà! You have added a SSH key to your huggingface.co account.
## Testing your SSH authentication
Once you have added your SSH key to your huggingface.co account, you can test that the connection works as expected.
In a terminal, run:
```
$ ssh -T [email protected]
```
If you see a message with your username, congrats! Everything went well, you are ready to use git over SSH.
Otherwise, if the message states something like the following, make sure your SSH key is actually loaded in your SSH agent (you can list the loaded keys with `ssh-add -l` and add yours with `ssh-add`).
```
Hi anonymous, welcome to Hugging Face.
```
|
huggingface/transformers/blob/main/examples/research_projects/layoutlmv3/README.md
|
<!---
Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Token classification with LayoutLMv3 (PyTorch version)
This directory contains a script, `run_funsd_cord.py`, that can be used to fine-tune (or evaluate) LayoutLMv3 on form understanding datasets, such as [FUNSD](https://guillaumejaume.github.io/FUNSD/) and [CORD](https://github.com/clovaai/cord).
The script `run_funsd_cord.py` leverages the 🤗 Datasets library and the Trainer API. You can easily customize it to your needs.
## Fine-tuning on FUNSD
Fine-tuning LayoutLMv3 for token classification on [FUNSD](https://guillaumejaume.github.io/FUNSD/) can be done as follows:
```bash
python run_funsd_cord.py \
--model_name_or_path microsoft/layoutlmv3-base \
--dataset_name funsd \
--output_dir layoutlmv3-test \
--do_train \
--do_eval \
--max_steps 1000 \
--evaluation_strategy steps \
--eval_steps 100 \
--learning_rate 1e-5 \
--load_best_model_at_end \
--metric_for_best_model "eval_f1" \
--push_to_hub \
--push_to_hub_model_id layoutlmv3-finetuned-funsd
```
👀 The resulting model can be found here: https://huggingface.co/nielsr/layoutlmv3-finetuned-funsd. By specifying the `push_to_hub` flag, the model gets uploaded automatically to the hub (regularly), together with a model card, which includes metrics such as precision, recall and F1. Note that you can easily update the model card, as it's just a README file of the respective repo on the hub.
There's also the "Training metrics" [tab](https://huggingface.co/nielsr/layoutlmv3-finetuned-funsd/tensorboard), which shows Tensorboard logs over the course of training. Pretty neat, huh?
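Once pushed, the fine-tuned checkpoint can be loaded back for inference like any other hub model (a minimal sketch using the standard Auto classes; swap in your own repo name if you trained your own model):
```python
from transformers import AutoProcessor, AutoModelForTokenClassification

# The processor bundles the image processor (with built-in OCR) and the tokenizer
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained("nielsr/layoutlmv3-finetuned-funsd")
```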
## Fine-tuning on CORD
Fine-tuning LayoutLMv3 for token classification on [CORD](https://github.com/clovaai/cord) can be done as follows:
```bash
python run_funsd_cord.py \
--model_name_or_path microsoft/layoutlmv3-base \
--dataset_name cord \
--output_dir layoutlmv3-test \
--do_train \
--do_eval \
--max_steps 1000 \
--evaluation_strategy steps \
--eval_steps 100 \
--learning_rate 5e-5 \
--load_best_model_at_end \
--metric_for_best_model "eval_f1" \
--push_to_hub \
--push_to_hub_model_id layoutlmv3-finetuned-cord
```
👀 The resulting model can be found here: https://huggingface.co/nielsr/layoutlmv3-finetuned-cord. Note that a model card gets generated automatically in case you specify the `push_to_hub` flag.
|
gradio-app/gradio/blob/main/guides/03_building-with-blocks/03_state-in-blocks.md
|
State in Blocks
We covered [State in Interfaces](https://gradio.app/interface-state); this guide takes a look at state in Blocks, which works mostly the same.
## Global State
Global state in Blocks works the same as in Interface. Any variable created outside a function call is a reference shared between all users.
## Session State
Gradio supports session **state**, where data persists across multiple submits within a page session, in Blocks apps as well. To reiterate, session data is _not_ shared between different users of your model. To store data in a session state, you need to do three things:
1. Create a `gr.State()` object. If there is a default value to this stateful object, pass that into the constructor.
2. In the event listener, put the `State` object as an input and output.
3. In the event listener function, add the variable to the input parameters and the return value.
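Here is a minimal sketch of those three steps, using a hypothetical click counter rather than the hangman game below:
```python
import gradio as gr

def increment(count):                      # step 3: the State value arrives as a regular argument...
    count += 1
    return count, count                    # ...and is returned to update both the State and the display

with gr.Blocks() as demo:
    counter = gr.State(0)                  # step 1: create the State with a default value
    number = gr.Number(label="Clicks")
    btn = gr.Button("Click me")
    btn.click(increment, inputs=counter, outputs=[counter, number])  # step 2: State as input and output

demo.launch()
```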
Let's take a look at a game of hangman.
$code_hangman
$demo_hangman
Let's see how we do each of the 3 steps listed above in this game:
1. We store the used letters in `used_letters_var`. In the constructor of `State`, we set the initial value of this to `[]`, an empty list.
2. In `btn.click()`, we have a reference to `used_letters_var` in both the inputs and outputs.
3. In `guess_letter`, we pass the value of this `State` to `used_letters`, and then return an updated value of this `State` in the return statement.
With more complex apps, you will likely have many State variables storing session state in a single Blocks app.
Learn more about `State` in the [docs](https://gradio.app/docs#state).
|
gradio-app/gradio/blob/main/guides/cn/05_tabular-data-science-and-plots/plot-component-for-maps.md
|
How to Use the Plot Component for Maps
Related spaces:
Tags: PLOTS, MAPS
## Introduction
This guide explains how you can use Gradio's `Plot` component to plot geographical data on a map. The Gradio `Plot` component works with Matplotlib, Bokeh and Plotly; in this guide we will be working with Plotly. Plotly lets developers easily create all sorts of maps to present their geographical data. Take a look [here](https://plotly.com/python/maps/) for some examples.
## Overview
We will be using the New York City Airbnb dataset, which is hosted on kaggle [here](https://www.kaggle.com/datasets/dgomonov/new-york-city-airbnb-open-data). I've uploaded it to the Hugging Face Hub as a dataset [here](https://huggingface.co/datasets/gradio/NYC-Airbnb-Open-Data) for easier use and download. Using this data, we will plot Airbnb locations on a map and allow filtering based on price and location. Below is the demo that we will be building. ⚡️
$demo_map_airbnb
## Step 1 - Loading CSV data 💾
Let's start by loading the Airbnb NYC data from the Hugging Face Hub.
```python
from datasets import load_dataset
dataset = load_dataset("gradio/NYC-Airbnb-Open-Data", split="train")
df = dataset.to_pandas()
def filter_map(min_price, max_price, boroughs):
    new_df = df[(df['neighbourhood_group'].isin(boroughs)) &
                (df['price'] > min_price) & (df['price'] < max_price)]
    names = new_df["name"].tolist()
    prices = new_df["price"].tolist()
    text_list = [(names[i], prices[i]) for i in range(0, len(names))]
```
In the code above, we first load the CSV data into a pandas dataframe. We then define a function that will serve as the prediction function for the gradio app. This function takes the minimum price, the maximum price and the list of boroughs to filter the resulting map by. We use the passed-in values (`min_price`, `max_price` and the list of boroughs) to filter the dataframe and create `new_df`. Next we create `text_list` containing the name and price of each Airbnb, to use as labels on the map.
## Step 2 - Map Figure 🌐
Plotly makes it easy to work with maps. Let's take a look at the code below to see how to create a map figure.
```python
import plotly.graph_objects as go
fig = go.Figure(go.Scattermapbox(
    customdata=text_list,
    lat=new_df['latitude'].tolist(),
    lon=new_df['longitude'].tolist(),
    mode='markers',
    marker=go.scattermapbox.Marker(
        size=6
    ),
    hoverinfo="text",
    hovertemplate='<b>Name</b>: %{customdata[0]}<br><b>Price</b>: $%{customdata[1]}'
))
fig.update_layout(
    mapbox_style="open-street-map",
    hovermode='closest',
    mapbox=dict(
        bearing=0,
        center=go.layout.mapbox.Center(
            lat=40.67,
            lon=-73.90
        ),
        pitch=0,
        zoom=9
    ),
)
```
In the code above, we create a scatter plot on the map by passing in the lists of latitudes and longitudes. We also pass in custom data with the name and price of each Airbnb so that extra information appears when hovering over each marker. Next we use `update_layout` to specify other map settings, such as zoom and centering.
More information on creating a scatter plot with Mapbox and Plotly is available [here](https://plotly.com/python/scattermapbox/).
## Step 3 - Gradio App ⚡️
We will use two `gr.Number` components and a `gr.CheckboxGroup` to allow users to specify the price range and borough locations. We will then use a `gr.Plot` component as the output for the Plotly + Mapbox map we created earlier.
```python
with gr.Blocks() as demo:
    with gr.Column():
        with gr.Row():
            min_price = gr.Number(value=250, label="Minimum Price")
            max_price = gr.Number(value=1000, label="Maximum Price")
        boroughs = gr.CheckboxGroup(choices=["Queens", "Brooklyn", "Manhattan", "Bronx", "Staten Island"], value=["Queens", "Brooklyn"], label="Select Boroughs:")
        btn = gr.Button(value="Update Filter")
        map = gr.Plot()
    demo.load(filter_map, [min_price, max_price, boroughs], map)
    btn.click(filter_map, [min_price, max_price, boroughs], map)
```
We lay out these components with `gr.Column` and `gr.Row`, and add event triggers for when the demo first loads and when the "Update Filter" button is clicked, so that the map updates with the new filters.
Here is the full demo code:
$code_map_airbnb
## Step 4 - Deployment 🤗
If you run the code above, your app will run locally.
You can even get a temporary shareable link by passing the `share=True` parameter to `launch`.
But what if you want a permanent deployment solution?
Let's deploy our Gradio app to the free HuggingFace Spaces platform.
If you haven't used Spaces before, follow the previous guide [here](/using_hugging_face_integrations).
## Conclusion 🎉
And you're all done! That's all the code you need to build a map demo.
Here's a link to the demo: [Map demo](https://huggingface.co/spaces/gradio/map_airbnb) and [complete code](https://huggingface.co/spaces/gradio/map_airbnb/blob/main/run.py) (on Hugging Face Spaces).
|
huggingface/pytorch-image-models/blob/main/hfdocs/source/models/se-resnet.mdx
|
SE-ResNet
**SE ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
## How do I use this model on an image?
To load a pretrained model:
```py
>>> import timm
>>> model = timm.create_model('seresnet152d', pretrained=True)
>>> model.eval()
```
To load and preprocess the image:
```py
>>> import urllib
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform
>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)
>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```
To get the model predictions:
```py
>>> import torch
>>> with torch.no_grad():
... out = model(tensor)
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)
>>> # prints: torch.Size([1000])
```
To get the top-5 predictions class names:
```py
>>> # Get imagenet class mappings
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename)
>>> with open("imagenet_classes.txt", "r") as f:
... categories = [s.strip() for s in f.readlines()]
>>> # Print top categories per image
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
... print(categories[top5_catid[i]], top5_prob[i].item())
>>> # prints class names and probabilities like:
>>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
```
Replace the model name with the variant you want to use, e.g. `seresnet152d`. You can find the IDs in the model summaries at the top of this page.
To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use.
## How do I finetune this model?
You can finetune any of the pre-trained models just by changing the classifier (the last layer).
```py
>>> model = timm.create_model('seresnet152d', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training
script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
## How do I train this model?
You can follow the [timm recipe scripts](../scripts) for training a new model afresh.
## Citation
```BibTeX
@misc{hu2019squeezeandexcitation,
title={Squeeze-and-Excitation Networks},
author={Jie Hu and Li Shen and Samuel Albanie and Gang Sun and Enhua Wu},
year={2019},
eprint={1709.01507},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
Type: model-index
Collections:
- Name: SE ResNet
Paper:
Title: Squeeze-and-Excitation Networks
URL: https://paperswithcode.com/paper/squeeze-and-excitation-networks
Models:
- Name: seresnet152d
In Collection: SE ResNet
Metadata:
FLOPs: 20161904304
Parameters: 66840000
File Size: 268144497
Architecture:
- 1x1 Convolution
- Batch Normalization
- Bottleneck Residual Block
- Convolution
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Block
- Residual Connection
- Softmax
- Squeeze-and-Excitation Block
Tasks:
- Image Classification
Training Techniques:
- Label Smoothing
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 8x NVIDIA Titan X GPUs
ID: seresnet152d
LR: 0.6
Epochs: 100
Layers: 152
Dropout: 0.2
Crop Pct: '0.94'
Momentum: 0.9
Batch Size: 1024
Image Size: '256'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1206
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnet152d_ra2-04464dd2.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 83.74%
Top 5 Accuracy: 96.77%
- Name: seresnet50
In Collection: SE ResNet
Metadata:
FLOPs: 5285062320
Parameters: 28090000
File Size: 112621903
Architecture:
- 1x1 Convolution
- Batch Normalization
- Bottleneck Residual Block
- Convolution
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Block
- Residual Connection
- Softmax
- Squeeze-and-Excitation Block
Tasks:
- Image Classification
Training Techniques:
- Label Smoothing
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 8x NVIDIA Titan X GPUs
ID: seresnet50
LR: 0.6
Epochs: 100
Layers: 50
Dropout: 0.2
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 1024
Image Size: '224'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1180
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnet50_ra_224-8efdb4bb.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 80.26%
Top 5 Accuracy: 95.07%
-->
|
huggingface/evaluate/blob/main/metrics/poseval/README.md
|
---
title: poseval
emoji: 🤗
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
tags:
- evaluate
- metric
description: >-
The poseval metric can be used to evaluate POS taggers. Since seqeval does not work well with POS data
that is not in IOB format, poseval is an alternative. It treats each token in the dataset as an independent
observation and computes the precision, recall and F1-score irrespective of sentences. It uses scikit-learn's
classification report to compute the scores.
---
# Metric Card for poseval
## Metric description
The poseval metric can be used to evaluate POS taggers. Since seqeval does not work well with POS data that is not in IOB format (see e.g. [here](https://stackoverflow.com/questions/71327693/how-to-disable-seqeval-label-formatting-for-pos-tagging)), poseval is an alternative. It treats each token in the dataset as an independent observation and computes the precision, recall and F1-score irrespective of sentences. It uses scikit-learn's [classification report](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) to compute the scores.
## How to use
Poseval produces labelling scores along with its sufficient statistics from a source against references.
It takes two mandatory arguments:
`predictions`: a list of lists of predicted labels, i.e. estimated targets as returned by a tagger.
`references`: a list of lists of reference labels, i.e. the ground truth/target values.
It can also take several optional arguments:
`zero_division`: Which value to substitute as a metric value when encountering zero division. Should be one of [`0`,`1`,`"warn"`]. `"warn"` acts as `0`, but a warning is also raised.
```python
>>> import evaluate
>>> predictions = [['INTJ', 'ADP', 'PROPN', 'NOUN', 'PUNCT', 'INTJ', 'ADP', 'PROPN', 'VERB', 'SYM']]
>>> references = [['INTJ', 'ADP', 'PROPN', 'PROPN', 'PUNCT', 'INTJ', 'ADP', 'PROPN', 'PROPN', 'SYM']]
>>> poseval = evaluate.load("poseval")
>>> results = poseval.compute(predictions=predictions, references=references)
>>> print(list(results.keys()))
['ADP', 'INTJ', 'NOUN', 'PROPN', 'PUNCT', 'SYM', 'VERB', 'accuracy', 'macro avg', 'weighted avg']
>>> print(results["accuracy"])
0.8
>>> print(results["PROPN"]["recall"])
0.5
```
## Output values
This metric returns a classification report as a dictionary with a summary of scores, overall and per type:
Overall (weighted and macro avg):
`accuracy`: the average [accuracy](https://huggingface.co/metrics/accuracy), on a scale between 0.0 and 1.0.
`precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
`recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
`f1`: the average [F1 score](https://huggingface.co/metrics/f1), which is the harmonic mean of the precision and recall. It also has a scale of 0.0 to 1.0.
Per type (e.g. `MISC`, `PER`, `LOC`,...):
`precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
`recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
`f1`: the average [F1 score](https://huggingface.co/metrics/f1), on a scale between 0.0 and 1.0.
## Examples
```python
>>> import evaluate
>>> predictions = [['INTJ', 'ADP', 'PROPN', 'NOUN', 'PUNCT', 'INTJ', 'ADP', 'PROPN', 'VERB', 'SYM']]
>>> references = [['INTJ', 'ADP', 'PROPN', 'PROPN', 'PUNCT', 'INTJ', 'ADP', 'PROPN', 'PROPN', 'SYM']]
>>> poseval = evaluate.load("poseval")
>>> results = poseval.compute(predictions=predictions, references=references)
>>> print(list(results.keys()))
['ADP', 'INTJ', 'NOUN', 'PROPN', 'PUNCT', 'SYM', 'VERB', 'accuracy', 'macro avg', 'weighted avg']
>>> print(results["accuracy"])
0.8
>>> print(results["PROPN"]["recall"])
0.5
```
## Limitations and bias
In contrast to [seqeval](https://github.com/chakki-works/seqeval), the poseval metric treats each token independently and computes the classification report over all concatenated sequences.
## Citation
```bibtex
@article{scikit-learn,
title={Scikit-learn: Machine Learning in {P}ython},
author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
journal={Journal of Machine Learning Research},
volume={12},
pages={2825--2830},
year={2011}
}
```
## Further References
- [README for seqeval at GitHub](https://github.com/chakki-works/seqeval)
- [Classification report](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html)
- [Issues with seqeval](https://stackoverflow.com/questions/71327693/how-to-disable-seqeval-label-formatting-for-pos-tagging)
|
huggingface/blog/blob/main/large-language-models.md
|
---
title: "Large Language Models: A New Moore's Law?"
thumbnail: /blog/assets/33_large_language_models/01_model_size.jpg
authors:
- user: juliensimon
---
# Large Language Models: A New Moore's Law?
A few days ago, Microsoft and NVIDIA [introduced](https://www.microsoft.com/en-us/research/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/) Megatron-Turing NLG 530B, a Transformer-based model hailed as "*the world’s largest and most powerful generative language model*."
This is an impressive show of Machine Learning engineering, no doubt about it. Yet, should we be excited about this mega-model trend? I, for one, am not. Here's why.
<kbd>
<img src="assets/33_large_language_models/01_model_size.jpg">
</kbd>
### This is your Brain on Deep Learning
Researchers estimate that the human brain contains an average of [86 billion neurons](https://pubmed.ncbi.nlm.nih.gov/19226510/) and 100 trillion synapses. It's safe to assume that not all of them are dedicated to language either. Interestingly, GPT-4 is [expected](https://www.wired.com/story/cerebras-chip-cluster-neural-networks-ai/) to have about 100 trillion parameters... As crude as this analogy is, shouldn't we wonder whether building language models that are about the size of the human brain is the best long-term approach?
Of course, our brain is a marvelous device, produced by millions of years of evolution, while Deep Learning models are only a few decades old. Still, our intuition should tell us that something doesn't compute (pun intended).
### Deep Learning, Deep Pockets?
As you would expect, training a 530-billion parameter model on humongous text datasets requires a fair bit of infrastructure. In fact, Microsoft and NVIDIA used hundreds of DGX A100 multi-GPU servers. At $199,000 a piece, and factoring in networking equipment, hosting costs, etc., anyone looking to replicate this experiment would have to spend close to $100 million. Want fries with that?
Seriously, which organizations have business use cases that would justify spending $100 million on Deep Learning infrastructure? Or even $10 million? Very few. So who are these models for, really?
### That Warm Feeling is your GPU Cluster
For all its engineering brilliance, training Deep Learning models on GPUs is a brute force technique. According to the spec sheet, each DGX server can consume up to 6.5 kilowatts. Of course, you'll need at least as much cooling power in your datacenter (or your server closet). Unless you're the Starks and need to keep Winterfell warm in winter, that's another problem you'll have to deal with.
In addition, as public awareness grows on climate and social responsibility issues, organizations need to account for their carbon footprint. According to this 2019 [study](https://arxiv.org/pdf/1906.02243.pdf) from the University of Massachusetts, "*training BERT on GPU is roughly equivalent to a trans-American flight*".
BERT-Large has 340 million parameters. One can only extrapolate what the footprint of Megatron-Turing could be... People who know me wouldn't call me a bleeding-heart environmentalist. Still, some numbers are hard to ignore.
### So?
Am I excited by Megatron-Turing NLG 530B and whatever beast is coming next? No. Do I think that the (relatively small) benchmark improvement is worth the added cost, complexity and carbon footprint? No. Do I think that building and promoting these huge models is helping organizations understand and adopt Machine Learning? No.
I'm left wondering what's the point of it all. Science for the sake of science? Good old marketing? Technological supremacy? Probably a bit of each. I'll leave them to it, then.
Instead, let me focus on pragmatic and actionable techniques that you can all use to build high quality Machine Learning solutions.
### Use Pretrained Models
In the vast majority of cases, you won't need a custom model architecture. Maybe you'll *want* a custom one (which is a different thing), but there be dragons. Experts only!
A good starting point is to look for [models](https://huggingface.co/models) that have been pretrained for the task you're trying to solve (say, [summarizing English text](https://huggingface.co/models?language=en&pipeline_tag=summarization&sort=downloads)).
Then, you should quickly try out a few models to predict your own data. If metrics tell you that one works well enough, you're done! If you need a little more accuracy, you should consider fine-tuning the model (more on this in a minute).
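For example, trying a pretrained summarization checkpoint takes only a few lines (a minimal sketch using the 🤗 Transformers `pipeline` API; the checkpoint name is just one example among the many summarization models on the hub):
```python
from transformers import pipeline

# Download and load a pretrained summarization checkpoint (one example among many)
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

text = "Large language models keep growing, but smaller pretrained models often deliver enough accuracy at a fraction of the cost."
print(summarizer(text, max_length=40, min_length=10)[0]["summary_text"])
```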
### Use Smaller Models
When evaluating models, you should pick the smallest one that can deliver the accuracy you need. It will predict faster and require fewer hardware resources for training and inference. Frugality goes a long way.
It's nothing new either. Computer Vision practitioners will remember when [SqueezeNet](https://arxiv.org/abs/1602.07360) came out in 2017, achieving a 50x reduction in model size compared to [AlexNet](https://papers.nips.cc/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html), while meeting or exceeding its accuracy. How clever that was!
Downsizing efforts are also under way in the Natural Language Processing community, using transfer learning techniques such as [knowledge distillation](https://en.wikipedia.org/wiki/Knowledge_distillation). [DistilBERT](https://arxiv.org/abs/1910.01108) is perhaps its most widely known achievement. Compared to the original BERT model, it retains 97% of language understanding while being 40% smaller and 60% faster. You can try it [here](https://huggingface.co/distilbert-base-uncased). The same approach has been applied to other models, such as Facebook's [BART](https://arxiv.org/abs/1910.13461), and you can try DistilBART [here](https://huggingface.co/models?search=distilbart).
Recent models from the [Big Science](https://bigscience.huggingface.co/) project are also very impressive. As visible in this graph included in the [research paper](https://arxiv.org/abs/2110.08207), their T0 model outperforms GPT-3 on many tasks while being 16x smaller.
<kbd>
<img src="assets/33_large_language_models/02_t0.png">
</kbd>
You can try T0 [here](https://huggingface.co/bigscience/T0pp). This is the kind of research we need more of!
### Fine-Tune Models
If you need to specialize a model, there should be very few reasons to train it from scratch. Instead, you should fine-tune it, that is to say train it only for a few epochs on your own data. If you're short on data, maybe one of these [datasets](https://huggingface.co/datasets) can get you started.
You guessed it, that's another way to do transfer learning, and it'll help you save on everything!
* Less data to collect, store, clean and annotate,
* Faster experiments and iterations,
* Fewer resources required in production.
In other words: save time, save money, save hardware resources, save the world!
If you need a tutorial, the Hugging Face [course](https://huggingface.co/course) will get you started in no time.
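As a rough sketch of what fine-tuning looks like with the `Trainer` API (the checkpoint and dataset names below are just examples; swap in your own data):
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"  # start from a small pretrained model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Replace "imdb" with your own dataset; a few epochs on task data is usually enough
dataset = load_dataset("imdb")
tokenized = dataset.map(lambda x: tokenizer(x["text"], truncation=True), batched=True)

args = TrainingArguments(output_dir="my-finetuned-model", num_train_epochs=3)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```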
### Use Cloud-Based Infrastructure
Like them or not, cloud companies know how to build efficient infrastructure. Sustainability studies show that cloud-based infrastructure is more energy and carbon efficient than the alternative: see [AWS](https://sustainability.aboutamazon.com/environment/the-cloud), [Azure](https://azure.microsoft.com/en-us/global-infrastructure/sustainability), and [Google](https://cloud.google.com/sustainability). Earth.org [says](https://earth.org/environmental-impact-of-cloud-computing/) that while cloud infrastructure is not perfect, "*[it's] more energy efficient than the alternative and facilitates environmentally beneficial services and economic growth.*"
Cloud certainly has a lot going for it when it comes to ease of use, flexibility and pay as you go. It's also a little greener than you probably thought. If you're short on GPUs, why not try fine-tuning your Hugging Face models on [Amazon SageMaker](https://aws.amazon.com/sagemaker/), AWS' managed service for Machine Learning? We've got [plenty of examples](https://huggingface.co/docs/sagemaker/train) for you.
### Optimize Your Models
From compilers to virtual machines, software engineers have long used tools that automatically optimize their code for whatever hardware they're running on.
However, the Machine Learning community is still struggling with this topic, and for good reason. Optimizing models for size and speed is a devilishly complex task, which involves techniques such as:
* Specialized hardware that speeds up training ([Graphcore](https://www.graphcore.ai/), [Habana](https://habana.ai/)) and inference ([Google TPU](https://cloud.google.com/tpu), [AWS Inferentia](https://aws.amazon.com/machine-learning/inferentia/)).
* Pruning: remove model parameters that have little or no impact on the predicted outcome.
* Fusion: merge model layers (say, convolution and activation).
* Quantization: store model parameters in smaller values (say, 8 bits instead of 32 bits).
Fortunately, automated tools are starting to appear, such as the [Optimum](https://huggingface.co/hardware) open source library, and [Infinity](https://huggingface.co/infinity), a containerized solution that delivers Transformers accuracy at 1-millisecond latency.
### Conclusion
Large language model size has been increasing 10x every year for the last few years. This is starting to look like another [Moore's Law](https://en.wikipedia.org/wiki/Moore%27s_law).
We've been there before, and we should know that this road leads to diminishing returns, higher cost, more complexity, and new risks. Exponentials tend not to end well. Remember [Meltdown and Spectre](https://meltdownattack.com/)? Do we want to find out what that looks like for AI?
Instead of chasing trillion-parameter models (place your bets), wouldn't we all be better off if we built practical and efficient solutions that all developers can use to solve real-world problems?
*Interested in how Hugging Face can help your organization build and deploy production-grade Machine Learning solutions? Get in touch at [[email protected]](mailto:[email protected]) (no recruiters, no sales pitches, please).*
|
huggingface/transformers/blob/main/docs/source/en/model_doc/vision-text-dual-encoder.md
|
<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# VisionTextDualEncoder
## Overview
The [`VisionTextDualEncoderModel`] can be used to initialize a vision-text dual encoder model with
any pretrained vision autoencoding model as the vision encoder (*e.g.* [ViT](vit), [BEiT](beit), [DeiT](deit)) and any pretrained text autoencoding model as the text encoder (*e.g.* [RoBERTa](roberta), [BERT](bert)). Two projection layers are added on top of both the vision and text encoders to project the output embeddings
to a shared latent space. The projection layers are randomly initialized, so the model should be fine-tuned on a
downstream task. This model can be used to align the vision-text embeddings using CLIP-like contrastive image-text
training, and can then be used for zero-shot vision tasks such as image classification or retrieval.
In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991), it is shown how
leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvements on
new zero-shot vision tasks such as image classification or retrieval.
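For example, a dual encoder can be assembled from a pretrained vision and text checkpoint roughly as follows (a minimal sketch; the checkpoint names are only examples, and the projection layers still need contrastive fine-tuning before the model is useful):
```python
from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

# Combine a pretrained vision encoder and a pretrained text encoder;
# the projection layers on top are randomly initialized.
model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "google/vit-base-patch16-224", "roberta-base"
)

# Wrap the matching image processor and tokenizer in a single processor.
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
```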
## VisionTextDualEncoderConfig
[[autodoc]] VisionTextDualEncoderConfig
## VisionTextDualEncoderProcessor
[[autodoc]] VisionTextDualEncoderProcessor
<frameworkcontent>
<pt>
## VisionTextDualEncoderModel
[[autodoc]] VisionTextDualEncoderModel
- forward
</pt>
<tf>
## TFVisionTextDualEncoderModel
[[autodoc]] TFVisionTextDualEncoderModel
- call
</tf>
<jax>
## FlaxVisionTextDualEncoderModel
[[autodoc]] FlaxVisionTextDualEncoderModel
- __call__
</jax>
</frameworkcontent>
|
huggingface/course/blob/main/subtitles/en/raw/chapter3/02d_dynamic-padding.md
|
What is dynamic padding? In the "Batching Inputs together" video, we have seen that to be able to group inputs of different lengths in the same batch, we need to add padding tokens to all the short inputs until they are all of the same length. Here for instance, the longest sentence is the third one, and we need to add 5, 2 and 7 pad tokens to the others to have four sentences of the same length. When dealing with a whole dataset, there are various padding strategies we can apply. The most obvious one is to pad all the elements of the dataset to the same length: the length of the longest sample. This will then give us batches that all have the same shape determined by the maximum sequence length. The downside is that batches composed from short sentences will have a lot of padding tokens which introduce more computations in the model we ultimately don't need. To avoid this, another strategy is to pad the elements when we batch them together, to the longest sentence inside the batch. This way batches composed of short inputs will be smaller than the batch containing the longest sentence in the dataset. This will yield some nice speedup on CPU and GPU. The downside is that all batches will then have different shapes, which slows down training on other accelerators like TPUs. Let's see how to apply both strategies in practice. We have actually seen how to apply fixed padding in the Datasets Overview video, when we preprocessed the MRPC dataset: after loading the dataset and tokenizer, we applied the tokenization to all the dataset with padding and truncation to make all samples of length 128. As a result, if we pass this dataset to a PyTorch DataLoader, we get batches of shape batch size (here 16) by 128. To apply dynamic padding, we must defer the padding to the batch preparation, so we remove that part from our tokenize function. We still leave the truncation part so that inputs that are bigger than the maximum length accepted by the model (usually 512) get truncated to that length. Then we pad our samples dynamically by using a data collator. Those classes in the Transformers library are responsible for applying all the final processing needed before forming a batch. Here DataCollatorWithPadding will pad the samples to the maximum length inside the batch of sentences. We pass it to the PyTorch DataLoader as a collate function, then observe that the batches generated have various lengths, all way below the 128 from before. Dynamic batching will almost always be faster on CPUs and GPUs, so you should apply it if you can. Remember to switch back to fixed padding however if you run your training script on TPU or need batches of fixed shapes.
|
huggingface/diffusers/blob/main/docs/source/en/api/pipelines/kandinsky3.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Kandinsky 3
Kandinsky 3 is created by [Vladimir Arkhipkin](https://github.com/oriBetelgeuse), [Anastasia Maltseva](https://github.com/NastyaMittseva), [Igor Pavlov](https://github.com/boomb0om), [Andrei Filatov](https://github.com/anvilarth), [Arseniy Shakhmatov](https://github.com/cene555), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey), [Denis Dimitrov](https://github.com/denndimitrov) and [Zein Shaheen](https://github.com/zeinsh).
The description from its GitHub page:
*Kandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. In comparison to its predecessors, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively.*
Its architecture includes 3 main components:
1. [FLAN-UL2](https://huggingface.co/google/flan-ul2), an encoder-decoder model based on the T5 architecture.
2. A new U-Net architecture featuring BigGAN-deep blocks, which doubles the depth while maintaining the same number of parameters.
3. Sber-MoVQGAN, a decoder proven to have superior results in image restoration.
The original codebase can be found at [ai-forever/Kandinsky-3](https://github.com/ai-forever/Kandinsky-3).
<Tip>
Check out the [Kandinsky Community](https://huggingface.co/kandinsky-community) organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting.
</Tip>
<Tip>
Make sure to check out the schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
</Tip>
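A minimal text-to-image sketch (assuming the `kandinsky-community/kandinsky-3` checkpoint and a CUDA GPU; see the pipeline references below for the full argument list):
```py
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # reduces GPU memory usage by offloading idle components

prompt = "A photograph of the inside of a subway train. There are raccoons sitting on the seats."
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("kandinsky3.png")
```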
## Kandinsky3Pipeline
[[autodoc]] Kandinsky3Pipeline
- all
- __call__
## Kandinsky3Img2ImgPipeline
[[autodoc]] Kandinsky3Img2ImgPipeline
- all
- __call__
|
huggingface/datasets-server/blob/main/services/worker/README.md
|
Datasets server - worker
> Workers that pre-compute and cache the response to /splits, /first-rows, /parquet, /info and /size.
## Configuration
Use environment variables to configure the workers. The prefix of each environment variable gives its scope.
### Uvicorn
The following environment variables are used to configure the Uvicorn server (`WORKER_UVICORN_` prefix). It is used for the /healthcheck and the /metrics endpoints:
- `WORKER_UVICORN_HOSTNAME`: the hostname. Defaults to `"localhost"`.
- `WORKER_UVICORN_NUM_WORKERS`: the number of uvicorn workers. Defaults to `2`.
- `WORKER_UVICORN_PORT`: the port. Defaults to `8000`.
### Prometheus
- `PROMETHEUS_MULTIPROC_DIR`: the directory where the uvicorn workers share their prometheus metrics. See https://github.com/prometheus/client_python#multiprocess-mode-eg-gunicorn. Defaults to empty, in which case every uvicorn worker manages its own metrics, and the /metrics endpoint returns the metrics of a random worker.
## Worker configuration
Set environment variables to configure the worker.
- `WORKER_CONTENT_MAX_BYTES`: the maximum size in bytes of the response content computed by a worker (to prevent returning big responses in the REST API). Defaults to `10_000_000`.
- `WORKER_DIFFICULTY_MAX`: the maximum difficulty of the jobs to process. Defaults to None.
- `WORKER_DIFFICULTY_MIN`: the minimum difficulty of the jobs to process. Defaults to None.
- `WORKER_HEARTBEAT_INTERVAL_SECONDS`: the time interval between two heartbeats. Each heartbeat updates the job "last_heartbeat" field in the queue. Defaults to `60` (1 minute).
- `WORKER_JOB_TYPES_BLOCKED`: comma-separated list of job types that will not be processed, e.g. "dataset-config-names,dataset-split-names". If empty, no job type is blocked. Defaults to empty.
- `WORKER_JOB_TYPES_ONLY`: comma-separated list of the non-blocked job types to process, e.g. "dataset-config-names,dataset-split-names". If empty, the worker processes all the non-blocked jobs. Defaults to empty.
- `WORKER_KILL_LONG_JOB_INTERVAL_SECONDS`: the time interval at which the worker looks for long jobs to kill them. Defaults to `60` (1 minute).
- `WORKER_KILL_ZOMBIES_INTERVAL_SECONDS`: the time interval at which the worker looks for zombie jobs to kill them. Defaults to `600` (10 minutes).
- `WORKER_MAX_DISK_USAGE_PCT`: maximum disk usage of every storage disk in the list (in percentage) to allow a job to start. Set to 0 to disable the test. Defaults to 90.
- `WORKER_MAX_JOB_DURATION_SECONDS`: the maximum duration allowed for a job to run. If the job runs longer, it is killed (see `WORKER_KILL_LONG_JOB_INTERVAL_SECONDS`). Defaults to `1200` (20 minutes).
- `WORKER_MAX_LOAD_PCT`: maximum load of the machine (in percentage: the max between the 1m load and the 5m load divided by the number of CPUs \*100) allowed to start a job. Set to 0 to disable the test. Defaults to 70.
- `WORKER_MAX_MEMORY_PCT`: maximum memory (RAM + SWAP) usage of the machine (in percentage) allowed to start a job. Set to 0 to disable the test. Defaults to 80.
- `WORKER_MAX_MISSING_HEARTBEATS`: the number of heartbeats a job must have missed to be considered a zombie job. Defaults to `5`.
- `WORKER_SLEEP_SECONDS`: wait duration in seconds at each loop iteration before checking if resources are available and processing a job if any is available. Note that the loop doesn't wait just after finishing a job: the next job is immediately processed. Defaults to `15`.
- `WORKER_STORAGE_PATHS`: comma-separated list of paths to check for disk usage. Defaults to empty.
Also, it's possible to force the parent directory in which the temporary files (as the current job state file and its associated lock file) will be created by setting `TMPDIR` to a writable directory. If not set, the worker will use the default temporary directory of the system, as described in https://docs.python.org/3/library/tempfile.html#tempfile.gettempdir.
### Datasets based worker
Set environment variables to configure the datasets-based worker (`DATASETS_BASED_` prefix):
- `DATASETS_BASED_HF_DATASETS_CACHE`: directory where the `datasets` library will store the cached datasets' data. If not set, the datasets library will choose the default location. Defaults to None.
Also, set the modules cache configuration for the datasets-based worker. See [../../libs/libcommon/README.md](../../libs/libcommon/README.md). Note that this variable has no `DATASETS_BASED_` prefix:
- `HF_MODULES_CACHE`: directory where the `datasets` library will store the cached dataset scripts. If not set, the datasets library will choose the default location. Defaults to None.
Note that both directories will be appended to `WORKER_STORAGE_PATHS` (see [../../libs/libcommon/README.md](../../libs/libcommon/README.md)) so that the workers stop taking new jobs when the disk is full.
### Numba library
Numba requires setting the `NUMBA_CACHE_DIR` environment variable to a writable directory to cache the compiled functions. Required on cloud infrastructure (see https://stackoverflow.com/a/63367171/7351594):
- `NUMBA_CACHE_DIR`: directory where the `numba` decorators (used by `librosa`) can write cache.
Note that this directory will be appended to `WORKER_STORAGE_PATHS` (see [../../libs/libcommon/README.md](../../libs/libcommon/README.md)) so that the workers stop taking new jobs when the disk is full.
### Huggingface_hub library
If the Hub is not https://huggingface.co (i.e., if you set the `COMMON_HF_ENDPOINT` environment variable), you must set the `HF_ENDPOINT` environment variable to the same value. See https://github.com/huggingface/datasets/pull/5196#issuecomment-1322191411 for more details:
- `HF_ENDPOINT`: the URL of the Hub. Defaults to `https://huggingface.co`.
### First rows worker
Set environment variables to configure the `first-rows` worker (`FIRST_ROWS_` prefix):
- `FIRST_ROWS_MAX_BYTES`: the max size of the /first-rows response in bytes. Defaults to `1_000_000` (1 MB).
- `FIRST_ROWS_MAX_NUMBER`: the max number of rows fetched by the worker for the split and provided in the /first-rows response. Defaults to `100`.
- `FIRST_ROWS_MIN_CELL_BYTES`: the minimum size in bytes of a cell when truncating the content of a row (see `FIRST_ROWS_ROWS_MAX_BYTES`). Below this limit, the cell content will not be truncated. Defaults to `100`.
- `FIRST_ROWS_MIN_NUMBER`: the min number of rows fetched by the worker for the split and provided in the /first-rows response. Defaults to `10`.
- `FIRST_ROWS_COLUMNS_MAX_NUMBER`: the max number of columns (features) provided in the /first-rows response. If the number of columns is greater than the limit, an error is returned. Defaults to `1_000`.
Also, set the assets-related configuration for the first-rows worker. See [../../libs/libcommon/README.md](../../libs/libcommon/README.md).
### Parquet and info worker
Set environment variables to configure the `parquet-and-info` worker (`PARQUET_AND_INFO_` prefix):
- `PARQUET_AND_INFO_COMMIT_MESSAGE`: the git commit message when the worker uploads the parquet files to the Hub. Defaults to `Update parquet files`.
- `PARQUET_AND_INFO_COMMITTER_HF_TOKEN`: the HuggingFace token to commit the parquet files to the Hub. The token must be an app token associated with a user that has the right to 1. create the `refs/convert/parquet` branch (see `PARQUET_AND_INFO_TARGET_REVISION`) and 2. push commits to it on any dataset. [Datasets maintainers](https://huggingface.co/datasets-maintainers) members have these rights. The token must have permission to write. If not set, the worker will fail. Defaults to None.
- `PARQUET_AND_INFO_MAX_DATASET_SIZE_BYTES`: the maximum size in bytes of the dataset to pre-compute the parquet files. Bigger datasets, or datasets without that information, are partially streamed to get parquet files up to this value. Defaults to `100_000_000`.
- `PARQUET_AND_INFO_MAX_EXTERNAL_DATA_FILES`: the maximum number of external files of the datasets. Bigger datasets, or datasets without that information, are partially streamed to get parquet files up to `PARQUET_AND_INFO_MAX_DATASET_SIZE_BYTES` bytes. Defaults to `10_000`.
- `PARQUET_AND_INFO_MAX_ROW_GROUP_BYTE_SIZE_FOR_COPY`: the maximum size in bytes of the row groups of parquet datasets that are copied to the target revision. Bigger datasets, or datasets without that information, are partially streamed to get parquet files up to `PARQUET_AND_INFO_MAX_DATASET_SIZE_BYTES` bytes. Defaults to `100_000_000`.
- `PARQUET_AND_INFO_SOURCE_REVISION`: the git revision of the dataset to use to prepare the parquet files. Defaults to `main`.
- `PARQUET_AND_INFO_TARGET_REVISION`: the git revision of the dataset where to store the parquet files. Make sure the committer token (`PARQUET_AND_INFO_COMMITTER_HF_TOKEN`) has the permission to write there. Defaults to `refs/convert/parquet`.
- `PARQUET_AND_INFO_URL_TEMPLATE`: the URL template to build the parquet file URLs. Defaults to `/datasets/%s/resolve/%s/%s`.
### Duckdb Index worker
Set environment variables to configure the `duckdb-index` worker (`DUCKDB_INDEX_` prefix):
- `DUCKDB_INDEX_CACHE_DIRECTORY`: directory where the temporal duckdb index files are stored. Defaults to empty.
- `DUCKDB_INDEX_COMMIT_MESSAGE`: the git commit message when the worker uploads the duckdb index file to the Hub. Defaults to `Update duckdb index file`.
- `DUCKDB_INDEX_COMMITTER_HF_TOKEN`: the HuggingFace token to commit the duckdb index file to the Hub. The token must be an app token associated with a user that has the right to 1. create the `refs/convert/parquet` branch (see `DUCKDB_INDEX_TARGET_REVISION`) and 2. push commits to it on any dataset. [Datasets maintainers](https://huggingface.co/datasets-maintainers) members have these rights. The token must have permission to write. If not set, the worker will fail. Defaults to None.
- `DUCKDB_INDEX_MAX_DATASET_SIZE_BYTES`: the maximum size in bytes of the dataset's parquet files to index. Datasets with bigger size are ignored. Defaults to `100_000_000`.
- `DUCKDB_INDEX_TARGET_REVISION`: the git revision of the dataset where to store the duckdb index file. Make sure the committer token (`DUCKDB_INDEX_COMMITTER_HF_TOKEN`) has the permission to write there. Defaults to `refs/convert/parquet`.
- `DUCKDB_INDEX_URL_TEMPLATE`: the URL template to build the duckdb index file URL. Defaults to `/datasets/%s/resolve/%s/%s`.
- `DUCKDB_INDEX_EXTENSIONS_DIRECTORY`: directory where the duckdb extensions will be downloaded. Defaults to empty.
### Descriptive statistics worker
Set environment variables to configure the `descriptive-statistics` worker (`DESCRIPTIVE_STATISTICS_` prefix):
- `DESCRIPTIVE_STATISTICS_CACHE_DIRECTORY`: directory to which a dataset in parquet format is downloaded. Defaults to empty.
- `DESCRIPTIVE_STATISTICS_HISTOGRAM_NUM_BINS`: number of histogram bins (see examples below for more info).
- `DESCRIPTIVE_STATISTICS_MAX_PARQUET_SIZE_BYTES`: maximum size in bytes of the dataset's parquet files to compute statistics. Datasets with bigger size are ignored. Defaults to `100_000_000`.
#### How descriptive statistics are computed
Descriptive statistics are currently computed for the following data types: strings, floats, and ints (including `ClassLabel` int).
The response has two fields: `num_examples` and `statistics`. The `statistics` field is a list of dicts with three keys: `column_name`, `column_type`, and `column_statistics`.
`column_type` is one of the following values:
* `class_label` - for `datasets.ClassLabel` feature
* `float` - for float dtypes ("float16", "float32", "float64")
* `int` - for integer dtypes ("int8", "int16", "int32", "int64", "uint8", "uint16", "uint32", "uint64")
* `string_label` - for string dtypes ("string", "large_string") - if there are at most `MAX_NUM_STRING_LABELS` unique values (hardcoded in the worker's code; currently 30)
* `string_text` - for string dtypes ("string", "large_string") - if there are more than `MAX_NUM_STRING_LABELS` unique values
* `bool` - for boolean dtype ("bool")
`column_statistics` content depends on the feature type, see examples below.
##### class_label
<details><summary>example: </summary>
<p>
```python
{
"column_name": "class_col",
"column_type": "class_label",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"no_label_count": 0, # number of -1 values - special value of the `datasets` lib to encode `no label`
"no_label_proportion": 0.0,
"n_unique": 5, # number of unique values (excluding `no label` and nan)
"frequencies": { # mapping value -> its count
"this": 19834,
"are": 20159,
"random": 20109,
"words": 20172,
"test": 19726
}
}
}
```
</p>
</details>
##### float
The histogram bin size is computed as `(max_value - min_value) / DESCRIPTIVE_STATISTICS_HISTOGRAM_NUM_BINS`.
<details><summary>example: </summary>
<p>
```python
{
"column_name": "delay",
"column_type": "float",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"min": -10.206,
"max": 8.48053,
"mean": 2.10174,
"median": 3.4012,
"std": 3.12487,
"histogram": {
"hist": [
2,
34,
256,
15198,
9037,
2342,
12743,
45114,
14904,
370
],
"bin_edges": [
-10.206,
-8.33734,
-6.46869,
-4.60004,
-2.73139,
-0.86273,
1.00592,
2.87457,
4.74322,
6.61188,
8.48053 # includes maximum value, so len is always len(hist) + 1
]
}
}
}
```
</p>
</details>
##### int
As bin edges for integer values must also be integers, the bin size is computed as `np.ceil((max_value - min_value + 1) / DESCRIPTIVE_STATISTICS_HISTOGRAM_NUM_BINS)`. Rounding up means that the response might contain fewer bins than the requested `DESCRIPTIVE_STATISTICS_HISTOGRAM_NUM_BINS`. For example, in the `hour` column below (`min=0`, `max=23`), a rounded bin size of 3 yields 8 bins with edges 0, 3, 6, ..., 21, 23. The last bin's size might be smaller than that of the others if the feature's range is not divisible by the rounded bin size.
<details><summary>examples: </summary>
<p>
```python
{
"column_name": "direction",
"column_type": "int",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"min": 0,
"max": 1,
"mean": 0.49925,
"median": 0.0,
"std": 0.5,
"histogram": {
"hist": [
50075,
49925
],
"bin_edges": [
0,
1,
1 # if the last value is equal to the last but one, that means that this bin includes only this value
]
}
}
},
{
"column_name": "hour",
"column_type": "int",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"min": 0,
"max": 23,
"mean": 13.44402,
"median": 14.0,
"std": 5.49455,
"histogram": {
"hist": [
2694,
2292,
16785,
16326,
16346,
17809,
16546,
11202
],
"bin_edges": [
0,
3,
6,
9,
12,
15,
18,
21,
23
]
}
}
},
{
"column_name": "humidity",
"column_type": "int",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"min": 54,
"max": 99,
"mean": 83.89878,
"median": 85.0,
"std": 8.65174,
"histogram": {
"hist": [
554,
1662,
3823,
6532,
12512,
17536,
23871,
20355,
12896,
259
],
"bin_edges": [
54,
59,
64,
69,
74,
79,
84,
89,
94,
99,
99
]
}
}
},
{
"column_name": "weekday",
"column_type": "int",
"column_statistics": {
"nan_count": 0,
"nan_proportion": 0.0,
"min": 0,
"max": 6,
"mean": 3.08063,
"median": 3.0,
"std": 1.90347,
"histogram": {
"hist": [
10282,
15416,
15291,
15201,
15586,
15226,
12998
],
"bin_edges": [
0,
1,
2,
3,
4,
5,
6,
6
]
}
}
}
```
</p>
</details>
##### string_label
If the number of unique values in a column (within requested split) is <= `MAX_NUM_STRING_LABELS` (currently 30), the column is considered to be a category and the categories counts are computed.
<details><summary>examples: </summary>
<p>
```python
{
'column_name': 'string_col',
'column_type': 'string_label',
'column_statistics':
{
"nan_count": 0,
"nan_proportion": 0.0,
"n_unique": 5, # number of unique values (excluding nan)
"frequencies": { # mapping value -> its count
"this": 19834,
"are": 20159,
"random": 20109,
"words": 20172,
"test": 19726
}
}
}
```
</p>
</details>
##### string_text
If the number of unique values in a column (within requested split) is > `MAX_NUM_STRING_LABELS` (currently 30), the column is considered to be text and the distribution of text **lengths** is computed.
<details><summary>example: </summary>
<p>
```python
{
'column_name': 'text_col',
'column_type': 'string_text',
'column_statistics': {
'max': 296,
'mean': 97.46649,
'median': 88.0,
'min': 11,
'nan_count': 0,
'nan_proportion': 0.0,
'std': 55.82714,
'histogram': {
'bin_edges': [
11,
40,
69,
98,
127,
156,
185,
214,
243,
272,
296
],
'hist': [
171,
224,
235,
180,
102,
99,
53,
28,
10,
2
]
},
}
}
```
</p>
</details>
##### bool
<details><summary>example: </summary>
<p>
```python
{
'column_name': 'bool__nan_column',
'column_type': 'bool',
'column_statistics':
{
'nan_count': 3,
'nan_proportion': 0.15,
'frequencies': {
'False': 7,
'True': 10
}
}
}
```
</p>
</details>
### Splits worker
The `splits` worker does not need any additional configuration.
### Common
See [../../libs/libcommon/README.md](../../libs/libcommon/README.md) for more information about the common configuration.
|
huggingface/datasets/blob/main/docs/source/about_mapstyle_vs_iterable.mdx
|
Differences between Dataset and IterableDataset
There are two types of dataset objects, a [`Dataset`] and an [`IterableDataset`].
Whichever type of dataset you choose to use or create depends on the size of the dataset.
In general, an [`IterableDataset`] is ideal for big datasets (think hundreds of GBs!) due to its lazy behavior and speed advantages, while a [`Dataset`] is great for everything else.
This page will compare the differences between a [`Dataset`] and an [`IterableDataset`] to help you pick the right dataset object for you.
## Downloading and streaming
When you have a regular [`Dataset`], you can access it using `my_dataset[0]`. This provides random access to the rows.
Such datasets are also called "map-style" datasets.
For example you can download ImageNet-1k like this and access any row:
```python
from datasets import load_dataset
imagenet = load_dataset("imagenet-1k", split="train") # downloads the full dataset
print(imagenet[0])
```
But one caveat is that you must have the entire dataset stored on your disk or in memory, which blocks you from accessing datasets bigger than the disk.
Because it can become inconvenient for big datasets, there exists another type of dataset, the [`IterableDataset`].
When you have an `IterableDataset`, you can access it using a `for` loop to load the data progressively as you iterate over the dataset.
This way, only a small fraction of examples is loaded in memory, and you don't write anything on disk.
For example, you can stream the ImageNet-1k dataset without downloading it on disk:
```python
from datasets import load_dataset
imagenet = load_dataset("imagenet-1k", split="train", streaming=True) # will start loading the data when iterated over
for example in imagenet:
print(example)
break
```
Streaming can read online data without writing any file to disk.
For example, you can stream datasets made out of multiple shards, each of which is hundreds of gigabytes like [C4](https://huggingface.co/datasets/c4), [OSCAR](https://huggingface.co/datasets/oscar) or [LAION-2B](https://huggingface.co/datasets/laion/laion2B-en).
Learn more about how to stream a dataset in the [Dataset Streaming Guide](./stream).
This is not the only difference though, because the "lazy" behavior of an `IterableDataset` is also present when it comes to dataset creation and processing.
## Creating map-style datasets and iterable datasets
You can create a [`Dataset`] using lists or dictionaries, and the data is entirely converted to Arrow so you can easily access any row:
```python
from datasets import Dataset

my_dataset = Dataset.from_dict({"col_1": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]})
print(my_dataset[0])
```
To create an `IterableDataset` on the other hand, you must provide a "lazy" way to load the data.
In Python, we generally use generator functions. These functions `yield` one example at a time, which means you can't access a row by slicing it like a regular `Dataset`:
```python
from datasets import IterableDataset

def my_generator(n):
for i in range(n):
yield {"col_1": i}
my_iterable_dataset = IterableDataset.from_generator(my_generator, gen_kwargs={"n": 10})
for example in my_iterable_dataset:
print(example)
break
```
## Loading local files entirely and progressively
It is possible to convert local or remote data files to an Arrow [`Dataset`] using [`load_dataset`]:
```python
data_files = {"train": ["path/to/data.csv"]}
my_dataset = load_dataset("csv", data_files=data_files, split="train")
print(my_dataset[0])
```
However, this requires a conversion step from CSV to Arrow format, which takes time and disk space if your dataset is big.
To save disk space and skip the conversion step, you can define an `IterableDataset` by streaming from the local files directly.
This way, the data is read progressively from the local files as you iterate over the dataset:
```python
data_files = {"train": ["path/to/data.csv"]}
my_iterable_dataset = load_dataset("csv", data_files=data_files, split="train", streaming=True)
for example in my_iterable_dataset:  # this reads the CSV file progressively as you iterate over the dataset
    print(example)
    break
```
Many file formats are supported, like CSV, JSONL, and Parquet, as well as image and audio files.
You can find more information in the corresponding guides for loading [tabular](./tabular_load), [text](./nlp_load), [vision](./image_load), and [audio](./audio_load) datasets.
## Eager data processing and lazy data processing
When you process a [`Dataset`] object using [`Dataset.map`], the entire dataset is processed immediately and returned.
This is similar to how `pandas` works for example.
```python
my_dataset = my_dataset.map(process_fn) # process_fn is applied on all the examples of the dataset
print(my_dataset[0])
```
On the other hand, due to the "lazy" nature of an `IterableDataset`, calling [`IterableDataset.map`] does not apply your `map` function over the full dataset.
Instead, your `map` function is applied on-the-fly.
Because of that, you can chain multiple processing steps and they will all run at once when you start iterating over the dataset:
```python
my_iterable_dataset = my_iterable_dataset.map(process_fn_1)
my_iterable_dataset = my_iterable_dataset.filter(filter_fn)
my_iterable_dataset = my_iterable_dataset.map(process_fn_2)
# process_fn_1, filter_fn and process_fn_2 are applied on-the-fly when iterating over the dataset
for example in my_iterable_dataset:
    print(example)
    break
```
## Exact and fast approximate shuffling
When you shuffle a [`Dataset`] using [`Dataset.shuffle`], you apply an exact shuffling of the dataset.
It works by taking a list of indices `[0, 1, 2, ... len(my_dataset) - 1]` and shuffling this list.
Then, accessing `my_dataset[0]` returns the row and index defined by the first element of the indices mapping that has been shuffled:
```python
my_dataset = my_dataset.shuffle(seed=42)
print(my_dataset[0])
```
Since we don't have random access to the rows in the case of an `IterableDataset`, we can't use a shuffled list of indices and access a row at an arbitrary position.
This prevents the use of exact shuffling.
Instead, a fast approximate shuffling is used in [`IterableDataset.shuffle`].
It uses a shuffle buffer to sample random examples iteratively from the dataset.
Since the dataset is still read iteratively, it provides excellent speed performance:
```python
my_iterable_dataset = my_iterable_dataset.shuffle(seed=42, buffer_size=100)
for example in my_iterable_dataset:
    print(example)
    break
```
But using a shuffle buffer is not enough to provide a satisfactory shuffling for machine learning model training. So [`IterableDataset.shuffle`] also shuffles the dataset shards if your dataset is made of multiple files or sources:
```python
# Stream from the internet
my_iterable_dataset = load_dataset("deepmind/code_contests", split="train", streaming=True)
my_iterable_dataset.n_shards # 39
# Stream from local files
data_files = {"train": [f"path/to/data_{i}.csv" for i in range(1024)]}
my_iterable_dataset = load_dataset("csv", data_files=data_files, split="train", streaming=True)
my_iterable_dataset.n_shards # 1024
# From a generator function
def my_generator(n, sources):
    for source in sources:
        for example_id_for_current_source in range(n):
            yield {"example_id": f"{source}_{example_id_for_current_source}"}

gen_kwargs = {"n": 10, "sources": [f"path/to/data_{i}" for i in range(1024)]}
my_iterable_dataset = IterableDataset.from_generator(my_generator, gen_kwargs=gen_kwargs)
my_iterable_dataset.n_shards # 1024
```
## Speed differences
Regular [`Dataset`] objects are based on Arrow which provides fast random access to the rows.
Thanks to memory mapping and the fact that Arrow is an in-memory format, reading data from disk doesn't do expensive system calls and deserialization.
It provides even faster data loading when iterating using a `for` loop by iterating on contiguous Arrow record batches.
However as soon as your [`Dataset`] has an indices mapping (via [`Dataset.shuffle`] for example), the speed can become 10x slower.
This is because there is an extra step to get the row index to read using the indices mapping, and most importantly, you aren't reading contiguous chunks of data anymore.
To restore the speed, you'd need to rewrite the entire dataset on your disk again using [`Dataset.flatten_indices`], which removes the indices mapping.
This may take a lot of time depending on the size of your dataset though:
```python
my_dataset[0] # fast
my_dataset = my_dataset.shuffle(seed=42)
my_dataset[0] # up to 10x slower
my_dataset = my_dataset.flatten_indices() # rewrite the shuffled dataset on disk as contiguous chunks of data
my_dataset[0] # fast again
```
In this case, we recommend switching to an [`IterableDataset`] and leveraging its fast approximate shuffling method [`IterableDataset.shuffle`].
It only shuffles the shards order and adds a shuffle buffer to your dataset, which keeps the speed of your dataset optimal.
You can also reshuffle the dataset easily:
```python
for example in my_iterable_dataset:  # fast
    pass
shuffled_iterable_dataset = my_iterable_dataset.shuffle(seed=42, buffer_size=100)
for example in shuffled_iterable_dataset:  # as fast as before
    pass
shuffled_iterable_dataset = my_iterable_dataset.shuffle(seed=1337, buffer_size=100)  # reshuffling using another seed is instantaneous
for example in shuffled_iterable_dataset:  # still as fast as before
    pass
```
If you're using your dataset on multiple epochs, the effective seed to shuffle the shards order in the shuffle buffer is `seed + epoch`.
It makes it easy to reshuffle a dataset between epochs:
```python
for epoch in range(n_epochs):
    my_iterable_dataset.set_epoch(epoch)
    for example in my_iterable_dataset:  # fast + reshuffled at each epoch using `effective_seed = seed + epoch`
        pass
```
## Switch from map-style to iterable
If you want to benefit from the "lazy" behavior of an [`IterableDataset`] or their speed advantages, you can switch your map-style [`Dataset`] to an [`IterableDataset`]:
```python
my_iterable_dataset = my_dataset.to_iterable_dataset()
```
If you want to shuffle your dataset or [use it with a PyTorch DataLoader](./use_with_pytorch#stream-data), we recommend generating a sharded [`IterableDataset`]:
```python
my_iterable_dataset = my_dataset.to_iterable_dataset(num_shards=1024)
my_iterable_dataset.n_shards # 1024
```
|
huggingface/deep-rl-class/blob/main/units/en/unit2/q-learning-recap.mdx
|
Q-Learning Recap [[q-learning-recap]]
*Q-Learning* **is the RL algorithm that** :
- Trains a *Q-function*, an **action-value function** encoded, in internal memory, by a *Q-table* **containing all the state-action pair values.**
- Given a state and action, our Q-function **will search its Q-table for the corresponding value.**
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/Q-function-2.jpg" alt="Q function" width="100%"/>
- When the training is done, **we have an optimal Q-function, or, equivalently, an optimal Q-table.**
- And if we **have an optimal Q-function**, we
have an optimal policy, since we **know, for each state, the best action to take.**
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/link-value-policy.jpg" alt="Link value policy" width="100%"/>
But, in the beginning, our **Q-table is useless since it gives arbitrary values for each state-action pair (most of the time we initialize the Q-table to 0 values)**. But, as we explore the environment and update our Q-table it will give us a better and better approximation.
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/unit2/q-learning.jpeg" alt="q-learning.jpeg" width="100%"/>
This is the Q-Learning pseudocode:
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit3/Q-learning-2.jpg" alt="Q-Learning" width="100%"/>
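To make the pseudocode concrete, here is a minimal NumPy sketch of the core update. The table size, learning rate `alpha`, discount factor `gamma`, and exploration rate `epsilon` are illustrative values, not part of the course material:
```python
import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))     # the Q-table, initialized to 0
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount factor, exploration rate

def choose_action(state):
    """Epsilon-greedy policy derived from the Q-table."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)  # explore
    return int(np.argmax(Q[state]))          # exploit: best known action

def q_learning_update(state, action, reward, next_state):
    """One step of the update from the pseudocode: the TD target uses the greedy value of the next state."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
```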
|
huggingface/transformers/blob/main/docs/source/en/tasks/zero_shot_object_detection.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Zero-shot object detection
[[open-in-colab]]
Traditionally, models used for [object detection](object_detection) require labeled image datasets for training,
and are limited to detecting the set of classes from the training data.
Zero-shot object detection is supported by the [OWL-ViT](../model_doc/owlvit) model which uses a different approach. OWL-ViT
is an open-vocabulary object detector. It means that it can detect objects in images based on free-text queries without
the need to fine-tune the model on labeled datasets.
OWL-ViT leverages multi-modal representations to perform open-vocabulary detection. It combines [CLIP](../model_doc/clip) with
lightweight object classification and localization heads. Open-vocabulary detection is achieved by embedding free-text queries with the text encoder of CLIP and using them as input to the object classification and localization heads, which associate images with their corresponding textual descriptions, while ViT processes image patches as inputs. The authors
of OWL-ViT first trained CLIP from scratch and then fine-tuned OWL-ViT end to end on standard object detection datasets using
a bipartite matching loss.
With this approach, the model can detect objects based on textual descriptions without prior training on labeled datasets.
In this guide, you will learn how to use OWL-ViT:
- to detect objects based on text prompts
- for batch object detection
- for image-guided object detection
Before you begin, make sure you have all the necessary libraries installed:
```bash
pip install -q transformers
```
## Zero-shot object detection pipeline
The simplest way to try out inference with OWL-ViT is to use it in a [`pipeline`]. Instantiate a pipeline
for zero-shot object detection from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?other=owlvit):
```python
>>> from transformers import pipeline
>>> checkpoint = "google/owlvit-base-patch32"
>>> detector = pipeline(model=checkpoint, task="zero-shot-object-detection")
```
Next, choose an image you'd like to detect objects in. Here we'll use the image of astronaut Eileen Collins that is
a part of the [NASA](https://www.nasa.gov/multimedia/imagegallery/index.html) Great Images dataset.
```py
>>> import skimage
>>> import numpy as np
>>> from PIL import Image
>>> image = skimage.data.astronaut()
>>> image = Image.fromarray(np.uint8(image)).convert("RGB")
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_1.png" alt="Astronaut Eileen Collins"/>
</div>
Pass the image and the candidate object labels to look for to the pipeline.
Here we pass the image directly; other suitable options include a local path to an image or an image url. We also pass text descriptions for all items we want to query the image for.
```py
>>> predictions = detector(
... image,
... candidate_labels=["human face", "rocket", "nasa badge", "star-spangled banner"],
... )
>>> predictions
[{'score': 0.3571370542049408,
'label': 'human face',
'box': {'xmin': 180, 'ymin': 71, 'xmax': 271, 'ymax': 178}},
{'score': 0.28099656105041504,
'label': 'nasa badge',
'box': {'xmin': 129, 'ymin': 348, 'xmax': 206, 'ymax': 427}},
{'score': 0.2110239565372467,
'label': 'rocket',
'box': {'xmin': 350, 'ymin': -1, 'xmax': 468, 'ymax': 288}},
{'score': 0.13790413737297058,
'label': 'star-spangled banner',
'box': {'xmin': 1, 'ymin': 1, 'xmax': 105, 'ymax': 509}},
{'score': 0.11950037628412247,
'label': 'nasa badge',
'box': {'xmin': 277, 'ymin': 338, 'xmax': 327, 'ymax': 380}},
{'score': 0.10649408400058746,
'label': 'rocket',
'box': {'xmin': 358, 'ymin': 64, 'xmax': 424, 'ymax': 280}}]
```
Let's visualize the predictions:
```py
>>> from PIL import ImageDraw
>>> draw = ImageDraw.Draw(image)
>>> for prediction in predictions:
... box = prediction["box"]
... label = prediction["label"]
... score = prediction["score"]
... xmin, ymin, xmax, ymax = box.values()
... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
... draw.text((xmin, ymin), f"{label}: {round(score,2)}", fill="white")
>>> image
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_2.png" alt="Visualized predictions on NASA image"/>
</div>
## Text-prompted zero-shot object detection by hand
Now that you've seen how to use the zero-shot object detection pipeline, let's replicate the same
result manually.
Start by loading the model and associated processor from a [checkpoint on the Hugging Face Hub](https://huggingface.co/models?other=owlvit).
Here we'll use the same checkpoint as before:
```py
>>> from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint)
>>> processor = AutoProcessor.from_pretrained(checkpoint)
```
Let's take a different image to switch things up.
```py
>>> import requests
>>> url = "https://unsplash.com/photos/oj0zeY2Ltk4/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MTR8fHBpY25pY3xlbnwwfHx8fDE2Nzc0OTE1NDk&force=true&w=640"
>>> im = Image.open(requests.get(url, stream=True).raw)
>>> im
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_3.png" alt="Beach photo"/>
</div>
Use the processor to prepare the inputs for the model. The processor combines an image processor that prepares the
image for the model by resizing and normalizing it, and a [`CLIPTokenizer`] that takes care of the text inputs.
```py
>>> text_queries = ["hat", "book", "sunglasses", "camera"]
>>> inputs = processor(text=text_queries, images=im, return_tensors="pt")
```
Pass the inputs through the model, post-process, and visualize the results. Since the image processor resized images before
feeding them to the model, you need to use the [`~OwlViTImageProcessor.post_process_object_detection`] method to make sure the predicted bounding
boxes have the correct coordinates relative to the original image:
```py
>>> import torch
>>> with torch.no_grad():
... outputs = model(**inputs)
... target_sizes = torch.tensor([im.size[::-1]])
... results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)[0]
>>> draw = ImageDraw.Draw(im)
>>> scores = results["scores"].tolist()
>>> labels = results["labels"].tolist()
>>> boxes = results["boxes"].tolist()
>>> for box, score, label in zip(boxes, scores, labels):
... xmin, ymin, xmax, ymax = box
... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
... draw.text((xmin, ymin), f"{text_queries[label]}: {round(score,2)}", fill="white")
>>> im
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png" alt="Beach photo with detected objects"/>
</div>
## Batch processing
You can pass multiple sets of images and text queries to search for different (or same) objects in several images.
Let's use both an astronaut image and the beach image together.
For batch processing, you should pass text queries as a nested list to the processor and images as lists of PIL images,
PyTorch tensors, or NumPy arrays.
```py
>>> images = [image, im]
>>> text_queries = [
... ["human face", "rocket", "nasa badge", "star-spangled banner"],
... ["hat", "book", "sunglasses", "camera"],
... ]
>>> inputs = processor(text=text_queries, images=images, return_tensors="pt")
```
Previously for post-processing you passed the single image's size as a tensor, but you can also pass a tuple, or, in case
of several images, a list of tuples. Let's create predictions for the two examples, and visualize the second one (`image_idx = 1`).
```py
>>> with torch.no_grad():
... outputs = model(**inputs)
... target_sizes = [x.size[::-1] for x in images]
... results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)
>>> image_idx = 1
>>> draw = ImageDraw.Draw(images[image_idx])
>>> scores = results[image_idx]["scores"].tolist()
>>> labels = results[image_idx]["labels"].tolist()
>>> boxes = results[image_idx]["boxes"].tolist()
>>> for box, score, label in zip(boxes, scores, labels):
... xmin, ymin, xmax, ymax = box
... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1)
... draw.text((xmin, ymin), f"{text_queries[image_idx][label]}: {round(score,2)}", fill="white")
>>> images[image_idx]
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png" alt="Beach photo with detected objects"/>
</div>
## Image-guided object detection
In addition to zero-shot object detection with text queries, OWL-ViT offers image-guided object detection. This means
you can use an image query to find similar objects in the target image.
Unlike text queries, only a single example image is allowed.
Let's take an image with two cats on a couch as a target image, and an image of a single cat
as a query:
```py
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image_target = Image.open(requests.get(url, stream=True).raw)
>>> query_url = "http://images.cocodataset.org/val2017/000000524280.jpg"
>>> query_image = Image.open(requests.get(query_url, stream=True).raw)
```
Let's take a quick look at the images:
```py
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots(1, 2)
>>> ax[0].imshow(image_target)
>>> ax[1].imshow(query_image)
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_5.png" alt="Cats"/>
</div>
In the preprocessing step, instead of text queries, you now need to use `query_images`:
```py
>>> inputs = processor(images=image_target, query_images=query_image, return_tensors="pt")
```
For predictions, instead of passing the inputs to the model, pass them to [`~OwlViTForObjectDetection.image_guided_detection`]. Draw the predictions
as before except now there are no labels.
```py
>>> with torch.no_grad():
... outputs = model.image_guided_detection(**inputs)
... target_sizes = torch.tensor([image_target.size[::-1]])
... results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)[0]
>>> draw = ImageDraw.Draw(image_target)
>>> scores = results["scores"].tolist()
>>> boxes = results["boxes"].tolist()
>>> for box, score in zip(boxes, scores):
...     xmin, ymin, xmax, ymax = box
...     draw.rectangle((xmin, ymin, xmax, ymax), outline="white", width=4)
>>> image_target
```
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_6.png" alt="Cats with bounding boxes"/>
</div>
If you'd like to interactively try out inference with OWL-ViT, check out this demo:
<iframe
src="https://adirik-owl-vit.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
|
huggingface/deep-rl-class/blob/main/units/en/unit6/quiz.mdx
|
Quiz
The best way to learn and [to avoid the illusion of competence](https://www.coursera.org/lecture/learning-how-to-learn/illusions-of-competence-BuFzf) **is to test yourself.** This will help you to find **where you need to reinforce your knowledge**.
### Q1: Which of the following interpretations of bias-variance tradeoff is the most accurate in the field of Reinforcement Learning?
<Question
choices={[
{
text: "The bias-variance tradeoff reflects how my model is able to generalize the knowledge to previously tagged data we give to the model during training time.",
explain: "This is the traditional bias-variance tradeoff in Machine Learning. In our specific case of Reinforcement Learning, we don't have previously tagged data, but only a reward signal.",
correct: false,
},
{
text: "The bias-variance tradeoff reflects how well the reinforcement signal reflects the true reward the agent should get from the enviromment",
explain: "",
correct: true,
},
]}
/>
### Q2: Which of the following statements are true, when talking about models with bias and/or variance in RL?
<Question
choices={[
{
text: "An unbiased reward signal returns rewards similar to the real / expected ones from the environment",
explain: "",
correct: true,
},
{
text: "A biased reward signal returns rewards similar to the real / expected ones from the environment",
explain: "If a reward signal is biased, it means the reward signal we get differs from the real reward we should be getting from an environment",
correct: false,
},
{
text: "A reward signal with high variance has much noise in it and gets affected by, for example, stochastic (non constant) elements in the environment",
explain: "",
correct: true,
},
{
text: "A reward signal with low variance has much noise in it and gets affected by, for example, stochastic (non constant) elements in the environment",
explain: "If a reward signal has low variance, then it's less affected by the noise of the environment and produce similar values regardless the random elements in the environment",
correct: false,
},
]}
/>
### Q3: Which of the following statements are true about Monte Carlo method?
<Question
choices={[
{
text: "It's a sampling mechanism, which means we don't analyze all the possible states, but a sample of those",
explain: "",
correct: true,
},
{
text: "It's very resistant to stochasticity (random elements in the trajectory)",
explain: "Monte Carlo randomly estimates everytime a sample of trajectories. However, even same trajectories can have different reward values if they contain stochastic elements",
correct: false,
},
{
text: "To reduce the impact of stochastic elements in Monte Carlo, we take `n` strategies and average them, reducing their individual impact",
explain: "",
correct: true,
},
]}
/>
### Q4: How would you describe, with your own words, the Actor-Critic Method (A2C)?
<details>
<summary>Solution</summary>
The idea behind Actor-Critic is that we learn two function approximations:
1. A `policy` that controls how our agent acts (π)
2. A `value` function to assist the policy update by measuring how good the action taken is (q)
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit8/step2.jpg" alt="Actor-Critic, step 2"/>
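If it helps to see it in code, here is a minimal PyTorch sketch of the two approximators. The layer sizes and the choice of a state-value Critic are illustrative assumptions, not the course's reference implementation:
```python
import torch.nn as nn

class Actor(nn.Module):
    """The policy π(a|s): outputs a probability distribution over actions."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, n_actions), nn.Softmax(dim=-1),
        )

    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    """The value function: scores how good the action taken / situation is."""
    def __init__(self, obs_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, 1),
        )

    def forward(self, obs):
        return self.net(obs)
```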
</details>
### Q5: Which of the following statements are true about the Actor-Critic Method?
<Question
choices={[
{
text: "The Critic does not learn any function during the training process",
explain: "Both the Actor and the Critic function parameters are updated during training time",
correct: false,
},
{
text: "The Actor learns a policy function, while the Critic learns a value function",
explain: "",
correct: true,
},
{
text: "It adds resistance to stochasticity and reduces high variance",
explain: "",
correct: true,
},
]}
/>
### Q6: What is `Advantage` in the A2C method?
<details>
<summary>Solution</summary>
Instead of using the Action-Value function of the Critic directly, we can use an `Advantage` function. The idea behind an `Advantage` function is to measure the relative advantage of an action compared to the other actions possible at a state, by comparing it to the average value of that state.
In other words: how much better it is to take that action at a state compared to the average value of the state.
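In symbols, this is usually written (a standard formulation, consistent with the figure below) as \\(A(s, a) = Q(s, a) - V(s)\\), where \\(Q(s, a)\\) is the value of taking action \\(a\\) in state \\(s\\) and \\(V(s)\\) is the average value of that state.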
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit8/advantage1.jpg" alt="Advantage in A2C"/>
</details>
Congrats on finishing this Quiz 🥳! If you missed some elements, take time to read the chapter again to reinforce (😏) your knowledge.
|
huggingface/hf-endpoints-documentation/blob/main/docs/source/guides/logs.mdx
|
Access and read Logs
Hugging Face Endpoints provides access to the logs of your Endpoints through the UI in the “Logs” tab of your Endpoint.
You will have access to the Build Logs of your image artifacts as well as to the Container Logs during inference.
<img src="https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/9_selection.png" alt="select logs" />
The Container Logs are only available when your Endpoint is in the “Running” state.
_Note: If your Endpoint creation is in the “Failed” state, you can check the Build Logs to see what the reason was, e.g. wrong version of a dependency, etc._
**Build Logs:**
<img src="https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/9_build_logs.png" alt="build logs" />
**Container Logs:**
<img src="https://raw.githubusercontent.com/huggingface/hf-endpoints-documentation/main/assets/9_logs.png" alt="container logs" />
|
gradio-app/gradio/blob/main/demo/examples_component/run.ipynb
|
Gradio Demo: examples_component
```
!pip install -q gradio
```
```
# Downloading files from the demo repo
import os
os.mkdir('images')
!wget -q -O images/cheetah1.jpg https://github.com/gradio-app/gradio/raw/main/demo/examples_component/images/cheetah1.jpg
!wget -q -O images/lion.jpg https://github.com/gradio-app/gradio/raw/main/demo/examples_component/images/lion.jpg
!wget -q -O images/lion.webp https://github.com/gradio-app/gradio/raw/main/demo/examples_component/images/lion.webp
!wget -q -O images/logo.png https://github.com/gradio-app/gradio/raw/main/demo/examples_component/images/logo.png
```
```
import gradio as gr
import os
def flip(i):
    return i.rotate(180)

with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            img_i = gr.Image(label="Input Image", type="pil")
        with gr.Column():
            img_o = gr.Image(label="Output Image")
    with gr.Row():
        btn = gr.Button(value="Flip Image")
    btn.click(flip, inputs=[img_i], outputs=[img_o])
    gr.Examples(
        [
            os.path.join(os.path.abspath(''), "images/cheetah1.jpg"),
            os.path.join(os.path.abspath(''), "images/lion.jpg"),
        ],
        img_i,
        img_o,
        flip,
    )

demo.launch()
```
|
huggingface/deep-rl-class/blob/main/units/en/unit4/additional-readings.mdx
|
Additional Readings
These are **optional readings** if you want to go deeper.
## Introduction to Policy Optimization
- [Part 3: Intro to Policy Optimization - Spinning Up documentation](https://spinningup.openai.com/en/latest/spinningup/rl_intro3.html)
## Policy Gradient
- [https://johnwlambert.github.io/policy-gradients/](https://johnwlambert.github.io/policy-gradients/)
- [RL - Policy Gradient Explained](https://jonathan-hui.medium.com/rl-policy-gradients-explained-9b13b688b146)
- [Chapter 13, Policy Gradient Methods; Reinforcement Learning, an introduction by Richard Sutton and Andrew G. Barto](http://incompleteideas.net/book/RLbook2020.pdf)
## Implementation
- [PyTorch Reinforce implementation](https://github.com/pytorch/examples/blob/main/reinforcement_learning/reinforce.py)
- [Implementations from DDPG to PPO](https://github.com/MrSyee/pg-is-all-you-need)
|
huggingface/optimum/blob/main/docs/source/onnxruntime/package_reference/quantization.mdx
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Quantization
## ORTQuantizer
[[autodoc]] onnxruntime.quantization.ORTQuantizer
- all
|
gradio-app/gradio/blob/main/demo/number_component/run.ipynb
|
Gradio Demo: number_component
```
!pip install -q gradio
```
```
import gradio as gr
with gr.Blocks() as demo:
    gr.Number()

demo.launch()
```
|
gradio-app/gradio/blob/main/demo/map_airbnb/run.ipynb
|
Gradio Demo: map_airbnb
### Display an interactive map of AirBnB locations with Plotly. Data is hosted on HuggingFace Datasets.
```
!pip install -q gradio plotly
```
```
import gradio as gr
import plotly.graph_objects as go
from datasets import load_dataset
dataset = load_dataset("gradio/NYC-Airbnb-Open-Data", split="train")
df = dataset.to_pandas()
def filter_map(min_price, max_price, boroughs):
    filtered_df = df[(df['neighbourhood_group'].isin(boroughs)) &
                     (df['price'] > min_price) & (df['price'] < max_price)]
    names = filtered_df["name"].tolist()
    prices = filtered_df["price"].tolist()
    text_list = [(names[i], prices[i]) for i in range(0, len(names))]
    fig = go.Figure(go.Scattermapbox(
        customdata=text_list,
        lat=filtered_df['latitude'].tolist(),
        lon=filtered_df['longitude'].tolist(),
        mode='markers',
        marker=go.scattermapbox.Marker(
            size=6
        ),
        hoverinfo="text",
        hovertemplate='<b>Name</b>: %{customdata[0]}<br><b>Price</b>: $%{customdata[1]}'
    ))
    fig.update_layout(
        mapbox_style="open-street-map",
        hovermode='closest',
        mapbox=dict(
            bearing=0,
            center=go.layout.mapbox.Center(
                lat=40.67,
                lon=-73.90
            ),
            pitch=0,
            zoom=9
        ),
    )
    return fig

with gr.Blocks() as demo:
    with gr.Column():
        with gr.Row():
            min_price = gr.Number(value=250, label="Minimum Price")
            max_price = gr.Number(value=1000, label="Maximum Price")
            boroughs = gr.CheckboxGroup(choices=["Queens", "Brooklyn", "Manhattan", "Bronx", "Staten Island"], value=["Queens", "Brooklyn"], label="Select Boroughs:")
        btn = gr.Button(value="Update Filter")
        map = gr.Plot()
    demo.load(filter_map, [min_price, max_price, boroughs], map)
    btn.click(filter_map, [min_price, max_price, boroughs], map)

if __name__ == "__main__":
    demo.launch()
```
|
huggingface/pytorch-image-models/blob/main/hfdocs/source/models/res2net.mdx
|
Res2Net
**Res2Net** is an image model that employs a variation on bottleneck residual blocks, [Res2Net Blocks](https://paperswithcode.com/method/res2net-block). The motivation is to be able to represent features at multiple scales. This is achieved through a novel building block for CNNs that constructs hierarchical residual-like connections within one single residual block. This represents multi-scale features at a granular level and increases the range of receptive fields for each network layer.
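For intuition, the sketch below shows a simplified version of this hierarchical split-and-sum idea in PyTorch. It is illustrative only: the number of scales, the channel widths, and the omission of the surrounding 1x1 convolutions, BatchNorm, and activations are assumptions made for brevity, not the `timm` implementation.
```py
import torch
import torch.nn as nn

class Res2NetBlockSketch(nn.Module):
    """Hierarchical split of the 3x3 stage: each channel group is convolved after
    receiving the output of the previous group, so later groups see progressively
    larger receptive fields within a single block."""

    def __init__(self, channels: int, scales: int = 4):
        super().__init__()
        assert channels % scales == 0, "channels must be divisible by scales"
        self.scales = scales
        width = channels // scales
        # one 3x3 conv per group, except the first group which is passed through
        self.convs = nn.ModuleList(
            nn.Conv2d(width, width, kernel_size=3, padding=1, bias=False)
            for _ in range(scales - 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        groups = torch.chunk(x, self.scales, dim=1)
        outputs = [groups[0]]          # first group: identity
        previous = None
        for i, conv in enumerate(self.convs):
            inp = groups[i + 1] if previous is None else groups[i + 1] + previous
            previous = conv(inp)
            outputs.append(previous)
        return torch.cat(outputs, dim=1)  # same shape as the input, multi-scale features

# e.g. Res2NetBlockSketch(channels=64)(torch.randn(1, 64, 56, 56)) keeps the shape (1, 64, 56, 56)
```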
## How do I use this model on an image?
To load a pretrained model:
```py
>>> import timm
>>> model = timm.create_model('res2net101_26w_4s', pretrained=True)
>>> model.eval()
```
To load and preprocess the image:
```py
>>> import urllib
>>> from PIL import Image
>>> from timm.data import resolve_data_config
>>> from timm.data.transforms_factory import create_transform
>>> config = resolve_data_config({}, model=model)
>>> transform = create_transform(**config)
>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> urllib.request.urlretrieve(url, filename)
>>> img = Image.open(filename).convert('RGB')
>>> tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```
To get the model predictions:
```py
>>> import torch
>>> with torch.no_grad():
... out = model(tensor)
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0)
>>> print(probabilities.shape)
>>> # prints: torch.Size([1000])
```
To get the top-5 predictions class names:
```py
>>> # Get imagenet class mappings
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(url, filename)
>>> with open("imagenet_classes.txt", "r") as f:
... categories = [s.strip() for s in f.readlines()]
>>> # Print top categories per image
>>> top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
... print(categories[top5_catid[i]], top5_prob[i].item())
>>> # prints class names and probabilities like:
>>> # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
```
Replace the model name with the variant you want to use, e.g. `res2net101_26w_4s`. You can find the IDs in the model summaries at the top of this page.
To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction), just change the name of the model you want to use.
## How do I finetune this model?
You can finetune any of the pre-trained models just by changing the classifier (the last layer).
```py
>>> model = timm.create_model('res2net101_26w_4s', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training
script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
## How do I train this model?
You can follow the [timm recipe scripts](../scripts) for training a new model afresh.
## Citation
```BibTeX
@article{Gao_2021,
title={Res2Net: A New Multi-Scale Backbone Architecture},
volume={43},
ISSN={1939-3539},
url={http://dx.doi.org/10.1109/TPAMI.2019.2938758},
DOI={10.1109/tpami.2019.2938758},
number={2},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
publisher={Institute of Electrical and Electronics Engineers (IEEE)},
author={Gao, Shang-Hua and Cheng, Ming-Ming and Zhao, Kai and Zhang, Xin-Yu and Yang, Ming-Hsuan and Torr, Philip},
year={2021},
month={Feb},
pages={652–662}
}
```
<!--
Type: model-index
Collections:
- Name: Res2Net
Paper:
Title: 'Res2Net: A New Multi-scale Backbone Architecture'
URL: https://paperswithcode.com/paper/res2net-a-new-multi-scale-backbone
Models:
- Name: res2net101_26w_4s
In Collection: Res2Net
Metadata:
FLOPs: 10415881200
Parameters: 45210000
File Size: 181456059
Architecture:
- Batch Normalization
- Convolution
- Global Average Pooling
- ReLU
- Res2Net Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 4x Titan Xp GPUs
ID: res2net101_26w_4s
LR: 0.1
Epochs: 100
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L152
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net101_26w_4s-02a759a1.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 79.19%
Top 5 Accuracy: 94.43%
- Name: res2net50_14w_8s
In Collection: Res2Net
Metadata:
FLOPs: 5403546768
Parameters: 25060000
File Size: 100638543
Architecture:
- Batch Normalization
- Convolution
- Global Average Pooling
- ReLU
- Res2Net Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 4x Titan Xp GPUs
ID: res2net50_14w_8s
LR: 0.1
Epochs: 100
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L196
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_14w_8s-6527dddc.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 78.14%
Top 5 Accuracy: 93.86%
- Name: res2net50_26w_4s
In Collection: Res2Net
Metadata:
FLOPs: 5499974064
Parameters: 25700000
File Size: 103110087
Architecture:
- Batch Normalization
- Convolution
- Global Average Pooling
- ReLU
- Res2Net Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 4x Titan Xp GPUs
ID: res2net50_26w_4s
LR: 0.1
Epochs: 100
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L141
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_26w_4s-06e79181.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 77.99%
Top 5 Accuracy: 93.85%
- Name: res2net50_26w_6s
In Collection: Res2Net
Metadata:
FLOPs: 8130156528
Parameters: 37050000
File Size: 148603239
Architecture:
- Batch Normalization
- Convolution
- Global Average Pooling
- ReLU
- Res2Net Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 4x Titan Xp GPUs
ID: res2net50_26w_6s
LR: 0.1
Epochs: 100
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L163
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_26w_6s-19041792.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 78.57%
Top 5 Accuracy: 94.12%
- Name: res2net50_26w_8s
In Collection: Res2Net
Metadata:
FLOPs: 10760338992
Parameters: 48400000
File Size: 194085165
Architecture:
- Batch Normalization
- Convolution
- Global Average Pooling
- ReLU
- Res2Net Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 4x Titan Xp GPUs
ID: res2net50_26w_8s
LR: 0.1
Epochs: 100
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L174
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_26w_8s-2c7c9f12.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 79.19%
Top 5 Accuracy: 94.37%
- Name: res2net50_48w_2s
In Collection: Res2Net
Metadata:
FLOPs: 5375291520
Parameters: 25290000
File Size: 101421406
Architecture:
- Batch Normalization
- Convolution
- Global Average Pooling
- ReLU
- Res2Net Block
Tasks:
- Image Classification
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 4x Titan Xp GPUs
ID: res2net50_48w_2s
LR: 0.1
Epochs: 100
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 256
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L185
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_48w_2s-afed724a.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 77.53%
Top 5 Accuracy: 93.56%
-->
|
huggingface/course/blob/main/subtitles/en/raw/chapter5/03a_slice-and-dice.md
|
How to slice and dice a dataset. Most of the time, the data you work with won't be perfectly prepared for training models. In this video we'll explore various features that Datasets provides to clean up your datasets. The Datasets library provides several built-in methods that allow you to wrangle your data. In this video we'll see how you can shuffle and split your data, select the rows you're interested in, tweak the columns, and apply processing functions with the map() method. Let's start with shuffling. It is generally a good idea to apply shuffling to the training set so that your model doesn't learn any artificial ordering in the data. If you want to shuffle the whole dataset, you can apply the appropriately named shuffle() method to your dataset. You can see an example of this method in action here, where we've downloaded the training split of the SQuAD dataset and shuffled all the rows randomly. Another way to shuffle the data is to create random train and test splits. This can be useful if you have to create your own test splits from raw data. To do this, you just apply the train_test_split method and specify how large the test split should be. In this example, we've specified that the test set should be 10% of the total dataset size. You can see that the output of train_test_split is a DatasetDict object, whose keys correspond to the new splits. Now that we know how to shuffle a dataset, let's take a look at returning the rows we're interested in. The most common way to do this is with the select method. This method expects a list or generator of the dataset's indices, and will then return a new Dataset object containing just those rows. If you want to create a random sample of rows, you can do this by chaining the shuffle and select methods together. In this example, we've created a sample of 5 elements from the SQuAD dataset. The last way to pick out specific rows in a dataset is by applying the filter method. This method checks whether each row fulfills some condition or not. For example, here we've created a small lambda function that checks whether the title starts with the letter "L". Once we apply this function with the filter method, we get a subset of the data consisting of just these titles. So far we've been talking about the rows of a dataset, but what about the columns? The Datasets library has two main methods for transforming columns: a rename_column method to change the name of a column, and a remove_columns method to delete them. You can see examples of both these methods here. Some datasets have nested columns, and you can expand these by applying the flatten method. For example, in the SQuAD dataset, the answers column contains a text and an answer_start field. If we want to promote them to their own separate columns, we can apply flatten as shown here. Of course, no discussion of the Datasets library would be complete without mentioning the famous map method. This method applies a custom processing function to each row in the dataset. For example, here we first define a lowercase_title function that simply lowercases the text in the title column, and then we feed that to the map method and voilà! We now have lowercase titles. The map method can also be used to feed batches of rows to the processing function. This is especially useful for tokenization, where the tokenizers backed by the Tokenizers library can use fast multithreading to process batches in parallel.
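For reference, the operations described above look roughly like this in code. This is a minimal sketch: the SQuAD loading call, the 10% split, the sample of 5, the "L" filter, and lowercase_title mirror the narration, while the specific column renamed and removed here are illustrative choices.
```python
from datasets import load_dataset

squad = load_dataset("squad", split="train")

# Shuffle the whole dataset, or create random train/test splits
shuffled = squad.shuffle(seed=42)
splits = shuffled.train_test_split(test_size=0.1)    # returns a DatasetDict

# Pick out rows: a random sample of 5, or only titles starting with "L"
sample = squad.shuffle(seed=42).select(range(5))
l_titles = squad.filter(lambda row: row["title"].startswith("L"))

# Tweak the columns: rename, remove, and expand nested columns
squad = squad.rename_column("context", "passage")
squad = squad.remove_columns(["id"])
squad = squad.flatten()                              # e.g. answers -> answers.text, answers.answer_start

# Apply a processing function to every row with map()
def lowercase_title(example):
    return {"title": example["title"].lower()}

squad = squad.map(lowercase_title)
```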
|
gradio-app/gradio/blob/main/demo/question-answering/run.ipynb
|
Gradio Demo: question-answering
```
!pip install -q gradio torch transformers
```
```
import gradio as gr
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "deepset/roberta-base-squad2"
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name)
context = "The Amazon rainforest, also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."
question = "Which continent is the Amazon rainforest in?"
def predict(context, question):
    res = nlp({"question": question, "context": context})
    return res["answer"], res["score"]
gr.Interface(
predict,
inputs=[
gr.Textbox(lines=7, value=context, label="Context Paragraph"),
gr.Textbox(lines=2, value=question, label="Question"),
],
outputs=[gr.Textbox(label="Answer"), gr.Textbox(label="Score")],
).launch()
```
|
huggingface/diffusers/blob/main/docs/source/en/api/loaders/ip_adapter.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# IP-Adapter
[IP-Adapter](https://hf.co/papers/2308.06721) is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder. Files generated from IP-Adapter are only ~100MBs.
<Tip>
Learn how to load an IP-Adapter checkpoint and image in the [IP-Adapter](../../using-diffusers/loading_adapters#ip-adapter) loading guide.
</Tip>
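For orientation, a typical usage pattern looks like the sketch below. The base pipeline, the adapter repository, weight file name, and scale value are illustrative examples along the lines of the loading guide linked above, not fixed requirements:
```py
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# attach the IP-Adapter weights and control how strongly the image prompt is applied
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipeline.set_ip_adapter_scale(0.6)

# the image prompt can be a local path or a URL (placeholder here)
ip_image = load_image("path/or/url/to/reference_image.png")
result = pipeline(prompt="best quality, high quality", ip_adapter_image=ip_image).images[0]
```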
## IPAdapterMixin
[[autodoc]] loaders.ip_adapter.IPAdapterMixin
|
huggingface/peft/blob/main/docs/source/package_reference/config.md
|
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Configuration
[`PeftConfigMixin`] is the base configuration class for storing the adapter configuration of a [`PeftModel`], and [`PromptLearningConfig`] is the base configuration class for soft prompt methods (p-tuning, prefix tuning, and prompt tuning). These base classes contain methods for saving and loading model configurations from the Hub, specifying the PEFT method to use, type of task to perform, and model configurations like number of layers and number of attention heads.
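For example, a concrete configuration such as [`LoraConfig`] builds on these base classes and can be saved and reloaded. This is a minimal sketch; the hyperparameter values and the local directory name are placeholders:
```py
from peft import LoraConfig, PeftConfig

# create a concrete adapter configuration (LoRA) and save it to disk
config = LoraConfig(r=8, lora_alpha=32, lora_dropout=0.1, task_type="SEQ_2_SEQ_LM")
config.save_pretrained("my-lora-config")      # writes adapter_config.json

# load it back later, from a local path or a Hub repository id
reloaded = PeftConfig.from_pretrained("my-lora-config")
```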
## PeftConfigMixin
[[autodoc]] config.PeftConfigMixin
- all
## PeftConfig
[[autodoc]] PeftConfig
- all
## PromptLearningConfig
[[autodoc]] PromptLearningConfig
- all
|
huggingface/transformers/blob/main/README_ru.md
|
<!---
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg">
<img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;">
</picture>
<br/>
<br/>
</p>
<p align="center">
<a href="https://circleci.com/gh/huggingface/transformers">
<img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue">
</a>
<a href="https://huggingface.co/docs/transformers/index">
<img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online">
</a>
<a href="https://github.com/huggingface/transformers/releases">
<img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg">
</a>
<a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md">
<img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg">
</a>
<a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>
<h4 align="center">
<p>
<a href="https://github.com/huggingface/transformers/blob/main/README.md">English</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hans.md">简体中文</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_zh-hant.md">繁體中文</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ko.md">한국어</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_es.md">Español</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_ja.md">日本語</a> |
<a href="https://github.com/huggingface/transformers/blob/main/README_hd.md">हिन्दी</a> |
<b>Русский</b> |
<a href="https://github.com/huggingface/transformers//blob/main/README_te.md">తెలుగు</a> |
</p>
</h4>
<h3 align="center">
<p>State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow</p>
</h3>
<h3 align="center">
<a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
</h3>
🤗 Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio.
These models can be applied to:
* 📝 Text, for tasks like text classification, information extraction, question answering, summarization, translation, and text generation in over 100 languages.
* 🖼️ Images, for tasks like image classification, object detection, and segmentation.
* 🗣️ Audio, for tasks like speech recognition and audio classification.
Transformer models can also perform tasks on several modalities combined, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
🤗 Transformers provides APIs to quickly download and use pretrained models, fine-tune them on your own datasets, and then share them with the community on our [model hub](https://huggingface.co/models). At the same time, each Python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
🤗 Transformers is backed by the three most popular deep learning libraries - [Jax](https://jax.readthedocs.io/en/latest/), [PyTorch](https://pytorch.org/) and [TensorFlow](https://www.tensorflow.org/) - with seamless integration between them. This makes it easy to train your models with one and then load them for inference with the other.
## Online demos
You can test most of our models directly on their pages on the [model hub](https://huggingface.co/models). We also offer [private model hosting, versioning, and an inference API](https://huggingface.co/pricing) for public and private models.
Here are a few examples:
In Natural Language Processing (NLP):
- [Masked word completion with BERT](https://huggingface.co/bert-base-uncased?text=Paris+is+the+%5BMASK%5D+of+France)
- [Named Entity Recognition with Electra](https://huggingface.co/dbmdz/electra-large-discriminator-finetuned-conll03-english?text=My+name+is+Sarah+and+I+live+in+London+city)
- [Text generation with GPT-2](https://huggingface.co/gpt2?text=A+long+time+ago%2C+)
- [Natural Language Inference with RoBERTa](https://huggingface.co/roberta-large-mnli?text=The+dog+was+lost.+Nobody+lost+any+animal)
- [Summarization with BART](https://huggingface.co/facebook/bart-large-cnn?text=The+tower+is+324+metres+%281%2C063+ft%29+tall%2C+about+the+same+height+as+an+81-storey+building%2C+and+the+tallest+structure+in+Paris.+Its+base+is+square%2C+measuring+125+metres+%28410+ft%29+on+each+side.+During+its+construction%2C+the+Eiffel+Tower+surpassed+the+Washington+Monument+to+become+the+tallest+man-made+structure+in+the+world%2C+a+title+it+held+for+41+years+until+the+Chrysler+Building+in+New+York+City+was+finished+in+1930.+It+was+the+first+structure+to+reach+a+height+of+300+metres.+Due+to+the+addition+of+a+broadcasting+aerial+at+the+top+of+the+tower+in+1957%2C+it+is+now+taller+than+the+Chrysler+Building+by+5.2+metres+%2817+ft%29.+Excluding+transmitters%2C+the+Eiffel+Tower+is+the+second+tallest+free-standing+structure+in+France+after+the+Millau+Viaduct)
- [Question answering with DistilBERT](https://huggingface.co/distilbert-base-uncased-distilled-squad?text=Which+name+is+also+used+to+describe+the+Amazon+rainforest+in+English%3F&context=The+Amazon+rainforest+%28Portuguese%3A+Floresta+Amaz%C3%B4nica+or+Amaz%C3%B4nia%3B+Spanish%3A+Selva+Amaz%C3%B3nica%2C+Amazon%C3%ADa+or+usually+Amazonia%3B+French%3A+For%C3%AAt+amazonienne%3B+Dutch%3A+Amazoneregenwoud%29%2C+also+known+in+English+as+Amazonia+or+the+Amazon+Jungle%2C+is+a+moist+broadleaf+forest+that+covers+most+of+the+Amazon+basin+of+South+America.+This+basin+encompasses+7%2C000%2C000+square+kilometres+%282%2C700%2C000+sq+mi%29%2C+of+which+5%2C500%2C000+square+kilometres+%282%2C100%2C000+sq+mi%29+are+covered+by+the+rainforest.+This+region+includes+territory+belonging+to+nine+nations.+The+majority+of+the+forest+is+contained+within+Brazil%2C+with+60%25+of+the+rainforest%2C+followed+by+Peru+with+13%25%2C+Colombia+with+10%25%2C+and+with+minor+amounts+in+Venezuela%2C+Ecuador%2C+Bolivia%2C+Guyana%2C+Suriname+and+French+Guiana.+States+or+departments+in+four+nations+contain+%22Amazonas%22+in+their+names.+The+Amazon+represents+over+half+of+the+planet%27s+remaining+rainforests%2C+and+comprises+the+largest+and+most+biodiverse+tract+of+tropical+rainforest+in+the+world%2C+with+an+estimated+390+billion+individual+trees+divided+into+16%2C000+species)
- [Translation with T5](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin)
In Computer Vision:
- [Image classification with ViT](https://huggingface.co/google/vit-base-patch16-224)
- [Object detection with DETR](https://huggingface.co/facebook/detr-resnet-50)
- [Semantic segmentation with SegFormer](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512)
- [Panoptic segmentation with MaskFormer](https://huggingface.co/facebook/maskformer-swin-small-coco)
- [Depth estimation with DPT](https://huggingface.co/docs/transformers/model_doc/dpt)
- [Video classification with VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)
- [Universal segmentation with OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_dinat_large)
In Audio:
- [Automatic speech recognition with Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base-960h)
- [Keyword spotting with Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
- [Audio classification with Audio Spectrogram Transformer](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593)
In Multimodal tasks:
- [Table question answering with TAPAS](https://huggingface.co/google/tapas-base-finetuned-wtq)
- [Visual question answering with ViLT](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa)
- [Zero-shot image classification with CLIP](https://huggingface.co/openai/clip-vit-large-patch14)
- [Document question answering with LayoutLM](https://huggingface.co/impira/layoutlm-document-qa)
- [Zero-shot video classification with X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)
## 100 projects using Transformers
Transformers is more than a toolkit for using pretrained models: it is a community of projects built around it and the
Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone else
to build the projects of their dreams.
To celebrate the 100,000 stars of Transformers, we decided to put the spotlight on the community and created the [awesome-transformers](./awesome-transformers.md) page, which lists 100
incredible projects built with transformers.
If you own or use a project that you believe should be part of the list, please open a PR to add it!
## If you are looking for custom support from the Hugging Face team
<a target="_blank" href="https://huggingface.co/support">
<img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
</a><br>
## Quick tour
To use a model on a given input (text, image, audio, ...), we provide the `pipeline` API. Pipelines group together a pretrained model with the preprocessing that was used during its training. Here is how to quickly use a pipeline to classify positive versus negative texts:
```python
>>> from transformers import pipeline
# Allocate a pipeline for sentiment analysis
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('We are very happy to introduce pipeline to the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
```
The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here, the answer is "POSITIVE" with a confidence of 99.97%.
Many tasks have a ready-made `pipeline`, in NLP but also in computer vision and speech. For example, we can easily extract the objects detected in an image:
```python
>>> import requests
>>> from PIL import Image
>>> from transformers import pipeline
# Download an image with cute cats
>>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
>>> image_data = requests.get(url, stream=True).raw
>>> image = Image.open(image_data)
# Allocate a pipeline for object detection
>>> object_detector = pipeline('object-detection')
>>> object_detector(image)
[{'score': 0.9982201457023621,
'label': 'remote',
'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
{'score': 0.9960021376609802,
'label': 'remote',
'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
{'score': 0.9954745173454285,
'label': 'couch',
'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
{'score': 0.9988006353378296,
'label': 'cat',
'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
{'score': 0.9986783862113953,
'label': 'cat',
'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
```
Here we get a list of objects detected in the image, each with a bounding box and a confidence score. The original image is on the left, the predictions on the right:
<h3 align="center">
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png" width="400"></a>
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample_post_processed.png" width="400"></a>
</h3>
You can learn more about the tasks supported by the `pipeline` API in [this tutorial](https://huggingface.co/docs/transformers/task_sum).
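You can also pass a specific Hub checkpoint to a pipeline instead of relying on the task default. A minimal sketch, reusing the `facebook/bart-large-cnn` summarization checkpoint linked above (the input text here is just an illustrative placeholder):

```python
>>> from transformers import pipeline

# Allocate a summarization pipeline with an explicit checkpoint from the Hub
>>> summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
>>> summarizer("The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris.")
```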
In addition to `pipeline`, downloading and using any of the pretrained models on a given task takes only three lines of code. Here is the PyTorch version:
```python
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> outputs = model(**inputs)
```
And here is the equivalent code for TensorFlow:
```python
>>> from transformers import AutoTokenizer, TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = TFAutoModel.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="tf")
>>> outputs = model(**inputs)
```
The tokenizer is responsible for all the preprocessing the pretrained model expects and can be called directly on a single string (as in the examples above) or on a list. It outputs a dictionary that you can use in downstream code or simply pass directly to your model using the ** argument-unpacking operator.
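For example, here is a minimal sketch of calling the same tokenizer on a list of sentences; `padding=True` makes sentences of different lengths fit into a single tensor (the sentences themselves are placeholders):

```python
>>> from transformers import AutoTokenizer, AutoModel

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")

# Tokenize a batch of sentences; padding brings them to the same length
>>> batch = tokenizer(["Hello world!", "Transformers is awesome!"], padding=True, return_tensors="pt")
# The resulting dictionary can be unpacked straight into the model
>>> outputs = model(**batch)
```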
The model itself is a regular [Pytorch `nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) or a [TensorFlow `tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) (depending on your backend) which you can use as usual. [This tutorial](https://huggingface.co/docs/transformers/training) explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our `Trainer` API to quickly fine-tune on a new dataset.
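As an illustration, here is a minimal, self-contained sketch of fine-tuning with the `Trainer` API on a tiny in-memory toy dataset (the texts, labels and `output_dir` are hypothetical placeholders, not part of the original example):

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Toy data, purely illustrative
texts = ["I love this!", "This is terrible."]
labels = [1, 0]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encodings = tokenizer(texts, padding=True, truncation=True)

class ToyDataset(torch.utils.data.Dataset):
    """Wraps tokenizer output and labels as a PyTorch dataset."""
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
args = TrainingArguments(output_dir="toy-output", num_train_epochs=1, per_device_train_batch_size=2)
trainer = Trainer(model=model, args=args, train_dataset=ToyDataset(encodings, labels))
trainer.train()
```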
## Why should I use transformers?
1. Easy-to-use state-of-the-art models:
- High performance on natural language understanding & generation, computer vision, and audio tasks.
- Low barrier to entry for educators and practitioners.
- Few user-facing abstractions with just three classes to learn.
- A unified API for using all our pretrained models.
1. Lower compute costs, smaller carbon footprint:
- Researchers can share trained models instead of always retraining them.
- Practitioners can reduce compute time and production costs.
- Dozens of architectures with over 60,000 pretrained models across all modalities.
1. Choose the right framework for every part of a model's lifetime:
- Train state-of-the-art models in 3 lines of code.
- Move a single model between TF2.0/PyTorch/JAX frameworks at will (see the sketch after this list).
- Seamlessly pick the right framework for training, evaluation, and production.
1. Easily customize a model or an example to your needs:
- We provide examples for each architecture to reproduce the results published by its original authors.
- Model internals are exposed as consistently as possible.
- Model files can be used independently of the library for quick experiments.
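The framework interoperability mentioned above can be shown in a short sketch: save a model's weights with one backend and reload them with another (the local directory name is a placeholder; `from_pt=True` tells the TensorFlow class to convert the PyTorch weights):

```python
from transformers import AutoModel, TFAutoModel

# Load a checkpoint with the PyTorch class and save it locally
pt_model = AutoModel.from_pretrained("bert-base-uncased")
pt_model.save_pretrained("./local-bert")

# Reload the same weights as a TensorFlow model, converting from the PyTorch checkpoint
tf_model = TFAutoModel.from_pretrained("./local-bert", from_pt=True)
```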
## Why shouldn't I use transformers?
- This library is not a modular toolbox of building blocks for neural nets. The code in the model files is deliberately not refactored with additional abstractions, so that researchers can quickly iterate on each of the models without diving into extra abstractions/files.
- The training API is not intended to work on any model but is optimized to work with the models provided by the library. For generic machine learning loops, you should use another library (possibly [Accelerate](https://huggingface.co/docs/accelerate)); a minimal sketch follows this list.
- While we strive to present as many use cases as possible, the scripts in our [examples folder](https://github.com/huggingface/transformers/tree/main/examples) are just that: examples. It is expected that they won't work out of the box on your specific problem and that you will need to change a few lines of code to adapt them to your needs.
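For reference, here is a minimal sketch of what such a generic training loop looks like with Accelerate (the model, data, and hyperparameters are arbitrary placeholders):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()

# Arbitrary toy model and data, purely illustrative
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
batches = [(torch.randn(4, 10), torch.randint(0, 2, (4,))) for _ in range(8)]

model, optimizer = accelerator.prepare(model, optimizer)

for inputs, targets in batches:
    inputs, targets = inputs.to(accelerator.device), targets.to(accelerator.device)
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)  # replaces loss.backward() so the loop works on any device setup
    optimizer.step()
```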
## Installation
### With pip
This repository is tested on Python 3.8+, Flax 0.4.1+, PyTorch 1.10+, and TensorFlow 2.6+.
You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, check out the [user guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
First, create a virtual environment with the version of Python you're going to use and activate it.
Then, you will need to install at least one backend out of Flax, PyTorch, or TensorFlow.
Please refer to the [TensorFlow installation page](https://www.tensorflow.org/install/), the [PyTorch installation page](https://pytorch.org/get-started/locally/#start-locally) and/or the [Flax](https://github.com/google/flax#quick-install) and [Jax](https://github.com/google/jax#installation) installation pages for the install commands for your platform.
Once one of those backends has been installed, 🤗 Transformers can be installed with pip as follows:
```bash
pip install transformers
```
If you'd like to play with the examples, or need the bleeding edge of the code and can't wait for a new release, you must [install the library from source](https://huggingface.co/docs/transformers/installation#installing-from-source).
### With conda
Since Transformers version v4.0.0, we have a conda channel: `huggingface`.
Transformers can be installed using conda as follows:
```bash
conda install -c huggingface transformers
```
To learn how to install Flax, PyTorch, or TensorFlow with conda, follow their respective installation pages.
> **_NOTE:_** On Windows, you may be prompted to activate Developer Mode in order to benefit from caching. If this is not an option for you, please let us know in [this issue](https://github.com/huggingface/huggingface_hub/issues/1062).
## Model architectures
**[All the model checkpoints](https://huggingface.co/models)** provided by 🤗 Transformers are seamlessly integrated from the huggingface.co [model hub](https://huggingface.co/models), where they are uploaded directly by [users](https://huggingface.co/users) and [organizations](https://huggingface.co/organizations).
Current number of checkpoints:
🤗 Transformers currently provides the following architectures (see [here](https://huggingface.co/docs/transformers/model_summary) for a high-level summary of each of them):
1. **[ALBERT](https://huggingface.co/docs/transformers/model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
1. **[ALIGN](https://huggingface.co/docs/transformers/model_doc/align)** (from Google Research) released with the paper [Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision](https://arxiv.org/abs/2102.05918) by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig.
1. **[AltCLIP](https://huggingface.co/docs/transformers/model_doc/altclip)** (from BAAI) released with the paper [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Chen, Zhongzhi and Liu, Guang and Zhang, Bo-Wen and Ye, Fulong and Yang, Qinghong and Wu, Ledell.
1. **[Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer)** (from MIT) released with the paper [AST: Audio Spectrogram Transformer](https://arxiv.org/abs/2104.01778) by Yuan Gong, Yu-An Chung, James Glass.
1. **[Autoformer](https://huggingface.co/docs/transformers/model_doc/autoformer)** (from Tsinghua University) released with the paper [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008) by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long.
1. **[Bark](https://huggingface.co/docs/transformers/model_doc/bark)** (from Suno) released in the repository [suno-ai/bark](https://github.com/suno-ai/bark) by Suno AI team.
1. **[BART](https://huggingface.co/docs/transformers/model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
1. **[BARThez](https://huggingface.co/docs/transformers/model_doc/barthez)** (from École polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis.
1. **[BARTpho](https://huggingface.co/docs/transformers/model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen.
1. **[BEiT](https://huggingface.co/docs/transformers/model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei.
1. **[BERT](https://huggingface.co/docs/transformers/model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
1. **[BERT For Sequence Generation](https://huggingface.co/docs/transformers/model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
1. **[BigBird-Pegasus](https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
1. **[BigBird-RoBERTa](https://huggingface.co/docs/transformers/model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
1. **[BioGpt](https://huggingface.co/docs/transformers/model_doc/biogpt)** (from Microsoft Research AI4Science) released with the paper [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu.
1. **[BiT](https://huggingface.co/docs/transformers/model_doc/bit)** (from Google AI) released with the paper [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370) by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby.
1. **[Blenderbot](https://huggingface.co/docs/transformers/model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
1. **[BlenderbotSmall](https://huggingface.co/docs/transformers/model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
1. **[BLIP](https://huggingface.co/docs/transformers/model_doc/blip)** (from Salesforce) released with the paper [BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation](https://arxiv.org/abs/2201.12086) by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi.
1. **[BLIP-2](https://huggingface.co/docs/transformers/model_doc/blip-2)** (from Salesforce) released with the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi.
1. **[BLOOM](https://huggingface.co/docs/transformers/model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
1. **[BORT](https://huggingface.co/docs/transformers/model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
1. **[BridgeTower](https://huggingface.co/docs/transformers/model_doc/bridgetower)** (from Harbin Institute of Technology/Microsoft Research Asia/Intel Labs) released with the paper [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan.
1. **[BROS](https://huggingface.co/docs/transformers/model_doc/bros)** (from NAVER CLOVA) released with the paper [BROS: A Pre-trained Language Model Focusing on Text and Layout for Better Key Information Extraction from Documents](https://arxiv.org/abs/2108.04539) by Teakgyu Hong, Donghyun Kim, Mingi Ji, Wonseok Hwang, Daehyun Nam, Sungrae Park.
1. **[ByT5](https://huggingface.co/docs/transformers/model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
1. **[CamemBERT](https://huggingface.co/docs/transformers/model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
1. **[CANINE](https://huggingface.co/docs/transformers/model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
1. **[Chinese-CLIP](https://huggingface.co/docs/transformers/model_doc/chinese_clip)** (from OFA-Sys) released with the paper [Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese](https://arxiv.org/abs/2211.01335) by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou.
1. **[CLAP](https://huggingface.co/docs/transformers/model_doc/clap)** (from LAION-AI) released with the paper [Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation](https://arxiv.org/abs/2211.06687) by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.
1. **[CLIP](https://huggingface.co/docs/transformers/model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
1. **[CLIPSeg](https://huggingface.co/docs/transformers/model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker.
1. **[CodeGen](https://huggingface.co/docs/transformers/model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
1. **[CodeLlama](https://huggingface.co/docs/transformers/model_doc/llama_code)** (from MetaAI) released with the paper [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) by Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve.
1. **[Conditional DETR](https://huggingface.co/docs/transformers/model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
1. **[ConvBERT](https://huggingface.co/docs/transformers/model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
1. **[ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
1. **[ConvNeXTV2](https://huggingface.co/docs/transformers/model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
1. **[CPM](https://huggingface.co/docs/transformers/model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
1. **[CPM-Ant](https://huggingface.co/docs/transformers/model_doc/cpmant)** (from OpenBMB) released by the [OpenBMB](https://www.openbmb.org/).
1. **[CTRL](https://huggingface.co/docs/transformers/model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
1. **[CvT](https://huggingface.co/docs/transformers/model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
1. **[Data2Vec](https://huggingface.co/docs/transformers/model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
1. **[DeBERTa](https://huggingface.co/docs/transformers/model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[DeBERTa-v2](https://huggingface.co/docs/transformers/model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[Decision Transformer](https://huggingface.co/docs/transformers/model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
1. **[Deformable DETR](https://huggingface.co/docs/transformers/model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
1. **[DeiT](https://huggingface.co/docs/transformers/model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
1. **[DePlot](https://huggingface.co/docs/transformers/model_doc/deplot)** (from Google AI) released with the paper [DePlot: One-shot visual language reasoning by plot-to-table translation](https://arxiv.org/abs/2212.10505) by Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun.
1. **[DETA](https://huggingface.co/docs/transformers/model_doc/deta)** (from The University of Texas at Austin) released with the paper [NMS Strikes Back](https://arxiv.org/abs/2212.06137) by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.
1. **[DETR](https://huggingface.co/docs/transformers/model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
1. **[DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
1. **[DiNAT](https://huggingface.co/docs/transformers/model_doc/dinat)** (from SHI Labs) released with the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Ali Hassani and Humphrey Shi.
1. **[DINOv2](https://huggingface.co/docs/transformers/model_doc/dinov2)** (from Meta AI) released with the paper [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski.
1. **[DistilBERT](https://huggingface.co/docs/transformers/model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT.
1. **[DiT](https://huggingface.co/docs/transformers/model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
1. **[Donut](https://huggingface.co/docs/transformers/model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
1. **[DPR](https://huggingface.co/docs/transformers/model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
1. **[DPT](https://huggingface.co/docs/transformers/master/model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
1. **[EfficientFormer](https://huggingface.co/docs/transformers/model_doc/efficientformer)** (from Snap Research) released with the paper [EfficientFormer: Vision Transformers at MobileNet Speed](https://arxiv.org/abs/2206.01191) by Yanyu Li, Geng Yuan, Yang Wen, Ju Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren.
1. **[EfficientNet](https://huggingface.co/docs/transformers/model_doc/efficientnet)** (from Google Brain) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan, Quoc V. Le.
1. **[ELECTRA](https://huggingface.co/docs/transformers/model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
1. **[EnCodec](https://huggingface.co/docs/transformers/model_doc/encodec)** (from Meta AI) released with the paper [High Fidelity Neural Audio Compression](https://arxiv.org/abs/2210.13438) by Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi.
1. **[EncoderDecoder](https://huggingface.co/docs/transformers/model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[ERNIE](https://huggingface.co/docs/transformers/model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu.
1. **[ErnieM](https://huggingface.co/docs/transformers/model_doc/ernie_m)** (from Baidu) released with the paper [ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora](https://arxiv.org/abs/2012.15674) by Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang.
1. **[ESM](https://huggingface.co/docs/transformers/model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. **ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2 and ESMFold** were released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives.
1. **[Falcon](https://huggingface.co/docs/transformers/model_doc/falcon)** (from Technology Innovation Institute) by Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme.
1. **[FLAN-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
1. **[FLAN-UL2](https://huggingface.co/docs/transformers/model_doc/flan-ul2)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-ul2-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
1. **[FlauBERT](https://huggingface.co/docs/transformers/model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
1. **[FLAVA](https://huggingface.co/docs/transformers/model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
1. **[FNet](https://huggingface.co/docs/transformers/model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
1. **[FocalNet](https://huggingface.co/docs/transformers/model_doc/focalnet)** (from Microsoft Research) released with the paper [Focal Modulation Networks](https://arxiv.org/abs/2203.11926) by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao.
1. **[Funnel Transformer](https://huggingface.co/docs/transformers/model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
1. **[Fuyu](https://huggingface.co/docs/transformers/model_doc/fuyu)** (from ADEPT) released in a [blog post](https://www.adept.ai/blog/fuyu-8b) by Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, Sağnak Taşırlar.
1. **[GIT](https://huggingface.co/docs/transformers/model_doc/git)** (from Microsoft Research) released with the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang.
1. **[GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
1. **[GPT](https://huggingface.co/docs/transformers/model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
1. **[GPT Neo](https://huggingface.co/docs/transformers/model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
1. **[GPT NeoX](https://huggingface.co/docs/transformers/model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
1. **[GPT NeoX Japanese](https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori.
1. **[GPT-2](https://huggingface.co/docs/transformers/model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
1. **[GPT-J](https://huggingface.co/docs/transformers/model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
1. **[GPT-Sw3](https://huggingface.co/docs/transformers/model_doc/gpt-sw3)** (from AI-Sweden) released with the paper [Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.376.pdf) by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren.
1. **[GPTBigCode](https://huggingface.co/docs/transformers/model_doc/gpt_bigcode)** (from BigCode) released with the paper [SantaCoder: don't reach for the stars!](https://arxiv.org/abs/2301.03988) by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
1. **[GPTSAN-japanese](https://huggingface.co/docs/transformers/model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto (tanreinama).
1. **[Graphormer](https://huggingface.co/docs/transformers/model_doc/graphormer)** (from Microsoft) released with the paper [Do Transformers Really Perform Bad for Graph Representation?](https://arxiv.org/abs/2106.05234) by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu.
1. **[GroupViT](https://huggingface.co/docs/transformers/model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** (from Allegro.pl, AGH University of Science and Technology) released with the paper [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
1. **[Hubert](https://huggingface.co/docs/transformers/model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
1. **[I-BERT](https://huggingface.co/docs/transformers/model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
1. **[IDEFICS](https://huggingface.co/docs/transformers/model_doc/idefics)** (from HuggingFace) released with the paper [OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents](https://huggingface.co/papers/2306.16527) by Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh.
1. **[ImageGPT](https://huggingface.co/docs/transformers/model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
1. **[Informer](https://huggingface.co/docs/transformers/model_doc/informer)** (from Beihang University, UC Berkeley, Rutgers University, SEDD Company) released with the paper [Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting](https://arxiv.org/abs/2012.07436) by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang.
1. **[InstructBLIP](https://huggingface.co/docs/transformers/model_doc/instructblip)** (from Salesforce) released with the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi.
1. **[Jukebox](https://huggingface.co/docs/transformers/model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever.
1. **[LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
1. **[LayoutLMv2](https://huggingface.co/docs/transformers/model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
1. **[LayoutLMv3](https://huggingface.co/docs/transformers/model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
1. **[LayoutXLM](https://huggingface.co/docs/transformers/model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
1. **[LED](https://huggingface.co/docs/transformers/model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LeViT](https://huggingface.co/docs/transformers/model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
1. **[LiLT](https://huggingface.co/docs/transformers/model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding.
1. **[LLaMA](https://huggingface.co/docs/transformers/model_doc/llama)** (from The FAIR team of Meta AI) released with the paper [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971) by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample.
1. **[Llama2](https://huggingface.co/docs/transformers/model_doc/llama2)** (from The FAIR team of Meta AI) released with the paper [Llama2: Open Foundation and Fine-Tuned Chat Models](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/XXX) by Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushka rMishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing EllenTan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom.
1. **[Longformer](https://huggingface.co/docs/transformers/model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LongT5](https://huggingface.co/docs/transformers/model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
1. **[LUKE](https://huggingface.co/docs/transformers/model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
1. **[LXMERT](https://huggingface.co/docs/transformers/model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
1. **[M-CTC-T](https://huggingface.co/docs/transformers/model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
1. **[M2M100](https://huggingface.co/docs/transformers/model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
1. **[MADLAD-400](https://huggingface.co/docs/transformers/model_doc/madlad-400)** (from Google) released with the paper [MADLAD-400: A Multilingual And Document-Level Large Audited Dataset](https://arxiv.org/abs/2309.04662) by Sneha Kudugunta, Isaac Caswell, Biao Zhang, Xavier Garcia, Christopher A. Choquette-Choo, Katherine Lee, Derrick Xin, Aditya Kusupati, Romi Stella, Ankur Bapna, Orhan Firat.
1. **[MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
1. **[MarkupLM](https://huggingface.co/docs/transformers/model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei.
1. **[Mask2Former](https://huggingface.co/docs/transformers/model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar.
1. **[MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
1. **[MatCha](https://huggingface.co/docs/transformers/model_doc/matcha)** (from Google AI) released with the paper [MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering](https://arxiv.org/abs/2212.09662) by Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos.
1. **[mBART](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
1. **[mBART-50](https://huggingface.co/docs/transformers/model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
1. **[MEGA](https://huggingface.co/docs/transformers/model_doc/mega)** (from Meta/USC/CMU/SJTU) released with the paper [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655) by Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer.
1. **[Megatron-BERT](https://huggingface.co/docs/transformers/model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[Megatron-GPT2](https://huggingface.co/docs/transformers/model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[MGP-STR](https://huggingface.co/docs/transformers/model_doc/mgp-str)** (from Alibaba Research) released with the paper [Multi-Granularity Prediction for Scene Text Recognition](https://arxiv.org/abs/2209.03592) by Peng Wang, Cheng Da, and Cong Yao.
1. **[mLUKE](https://huggingface.co/docs/transformers/model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
1. **[MMS](https://huggingface.co/docs/transformers/model_doc/mms)** (from Facebook) released with the paper [Scaling Speech Technology to 1,000+ Languages](https://arxiv.org/abs/2305.13516) by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli.
1. **[MobileBERT](https://huggingface.co/docs/transformers/model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
1. **[MobileNetV1](https://huggingface.co/docs/transformers/model_doc/mobilenet_v1)** (from Google Inc.) released with the paper [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861) by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam.
1. **[MobileNetV2](https://huggingface.co/docs/transformers/model_doc/mobilenet_v2)** (from Google Inc.) released with the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen.
1. **[MobileViT](https://huggingface.co/docs/transformers/model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
1. **[MobileViTV2](https://huggingface.co/docs/transformers/model_doc/mobilevitv2)** (from Apple) released with the paper [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari.
1. **[MPNet](https://huggingface.co/docs/transformers/model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
1. **[MPT](https://huggingface.co/docs/transformers/model_doc/mpt)** (from MosaicML) released with the repository [llm-foundry](https://github.com/mosaicml/llm-foundry/) by the MosaicML NLP Team.
1. **[MRA](https://huggingface.co/docs/transformers/model_doc/mra)** (from the University of Wisconsin - Madison) released with the paper [Multi Resolution Analysis (MRA) for Approximate Self-Attention](https://arxiv.org/abs/2207.10284) by Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, Vikas Singh.
1. **[MT5](https://huggingface.co/docs/transformers/model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
1. **[MusicGen](https://huggingface.co/docs/transformers/model_doc/musicgen)** (from Meta) released with the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi and Alexandre Défossez.
1. **[MVP](https://huggingface.co/docs/transformers/model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
1. **[NAT](https://huggingface.co/docs/transformers/model_doc/nat)** (from SHI Labs) released with the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi.
1. **[Nezha](https://huggingface.co/docs/transformers/model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
1. **[NLLB](https://huggingface.co/docs/transformers/model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
1. **[NLLB-MOE](https://huggingface.co/docs/transformers/model_doc/nllb-moe)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
1. **[Nyströmformer](https://huggingface.co/docs/transformers/model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
1. **[OneFormer](https://huggingface.co/docs/transformers/model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi.
1. **[OpenLlama](https://huggingface.co/docs/transformers/model_doc/open-llama)** (from [s-JoL](https://huggingface.co/s-JoL)) released on GitHub (now removed).
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[Pegasus](https://huggingface.co/docs/transformers/model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](https://huggingface.co/docs/transformers/model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu.
1. **[Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
1. **[Persimmon](https://huggingface.co/docs/transformers/main/model_doc/persimmon)** (from ADEPT) released in a [blog post](https://www.adept.ai/blog/persimmon-8b) by Erich Elsen, Augustus Odena, Maxwell Nye, Sağnak Taşırlar, Tri Dao, Curtis Hawthorne, Deepak Moparthi, Arushi Somani.
1. **[Phi](https://huggingface.co/docs/main/transformers/model_doc/phi)** (from Microsoft Research) released with the papers - [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644) by Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee and Yuanzhi Li, [Textbooks Are All You Need II: phi-1.5 technical report](https://arxiv.org/abs/2309.05463) by Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar and Yin Tat Lee.
1. **[PhoBERT](https://huggingface.co/docs/transformers/model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
1. **[Pix2Struct](https://huggingface.co/docs/transformers/model_doc/pix2struct)** (from Google) released with the paper [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova.
1. **[PLBart](https://huggingface.co/docs/transformers/model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
1. **[PoolFormer](https://huggingface.co/docs/transformers/model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
1. **[Pop2Piano](https://huggingface.co/docs/transformers/model_doc/pop2piano)** released with the paper [Pop2Piano : Pop Audio-based Piano Cover Generation](https://arxiv.org/abs/2211.00895) by Jongho Choi and Kyogu Lee.
1. **[ProphetNet](https://huggingface.co/docs/transformers/model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[PVT](https://huggingface.co/docs/transformers/model_doc/pvt)** (from Nanjing University, The University of Hong Kong etc.) released with the paper [Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions](https://arxiv.org/pdf/2102.12122.pdf) by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao.
1. **[QDQBert](https://huggingface.co/docs/transformers/model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
1. **[RAG](https://huggingface.co/docs/transformers/model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
1. **[REALM](https://huggingface.co/docs/transformers/model_doc/realm.html)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
1. **[Reformer](https://huggingface.co/docs/transformers/model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
1. **[RegNet](https://huggingface.co/docs/transformers/model_doc/regnet)** (from Meta Platforms) released with the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
1. **[RemBERT](https://huggingface.co/docs/transformers/model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
1. **[RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/model_doc/roberta-prelayernorm)** (from Facebook) released with the paper [fairseq: A Fast, Extensible Toolkit for Sequence Modeling](https://arxiv.org/abs/1904.01038) by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli.
1. **[RoCBert](https://huggingface.co/docs/transformers/model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by Hui Su, Weiwei Shi, Xiaoyu Shen, Xiao Zhou, Tuo Ji, Jiarui Fang, Jie Zhou.
1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
1. **[RWKV](https://huggingface.co/docs/transformers/model_doc/rwkv)** (from Bo Peng), released on [this repo](https://github.com/BlinkDL/RWKV-LM) by Bo Peng.
1. **[SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
1. **[Segment Anything](https://huggingface.co/docs/transformers/model_doc/sam)** (from Meta AI) released with the paper [Segment Anything](https://arxiv.org/pdf/2304.02643v1.pdf) by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick.
1. **[SEW](https://huggingface.co/docs/transformers/model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SEW-D](https://huggingface.co/docs/transformers/model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SpeechT5](https://huggingface.co/docs/transformers/model_doc/speecht5)** (from Microsoft Research) released with the paper [SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing](https://arxiv.org/abs/2110.07205) by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
1. **[SpeechToTextTransformer](https://huggingface.co/docs/transformers/model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
1. **[SpeechToTextTransformer2](https://huggingface.co/docs/transformers/model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
1. **[Splinter](https://huggingface.co/docs/transformers/model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
1. **[SqueezeBERT](https://huggingface.co/docs/transformers/model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
1. **[SwiftFormer](https://huggingface.co/docs/transformers/model_doc/swiftformer)** (from MBZUAI) released with the paper [SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications](https://arxiv.org/abs/2303.15446) by Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan.
1. **[Swin Transformer](https://huggingface.co/docs/transformers/model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
1. **[Swin Transformer V2](https://huggingface.co/docs/transformers/model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
1. **[Swin2SR](https://huggingface.co/docs/transformers/model_doc/swin2sr)** (from University of Würzburg) released with the paper [Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration](https://arxiv.org/abs/2209.11345) by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte.
1. **[SwitchTransformers](https://huggingface.co/docs/transformers/model_doc/switch_transformers)** (from Google) released with the paper [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) by William Fedus, Barret Zoph, Noam Shazeer.
1. **[T5](https://huggingface.co/docs/transformers/model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
1. **[T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
1. **[Table Transformer](https://huggingface.co/docs/transformers/model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham.
1. **[TAPAS](https://huggingface.co/docs/transformers/model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Müller, Francesco Piccinno and Julian Martin Eisenschlos.
1. **[TAPEX](https://huggingface.co/docs/transformers/model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou.
1. **[Time Series Transformer](https://huggingface.co/docs/transformers/model_doc/time_series_transformer)** (from HuggingFace).
1. **[TimeSformer](https://huggingface.co/docs/transformers/model_doc/timesformer)** (from Facebook) released with the paper [Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Gedas Bertasius, Heng Wang, Lorenzo Torresani.
1. **[Trajectory Transformer](https://huggingface.co/docs/transformers/model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine
1. **[Transformer-XL](https://huggingface.co/docs/transformers/model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
1. **[TVLT](https://huggingface.co/docs/transformers/model_doc/tvlt)** (from UNC Chapel Hill) released with the paper [TVLT: Textless Vision-Language Transformer](https://arxiv.org/abs/2209.14156) by Zineng Tang, Jaemin Cho, Yixin Nie, Mohit Bansal.
1. **[UL2](https://huggingface.co/docs/transformers/model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
1. **[UMT5](https://huggingface.co/docs/transformers/model_doc/umt5)** (from Google Research) released with the paper [UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining](https://openreview.net/forum?id=kXwdL1cWOAi) by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant.
1. **[UniSpeech](https://huggingface.co/docs/transformers/model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
1. **[UniSpeechSat](https://huggingface.co/docs/transformers/model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
1. **[UPerNet](https://huggingface.co/docs/transformers/model_doc/upernet)** (from Peking University) released with the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, Jian Sun.
1. **[VAN](https://huggingface.co/docs/transformers/model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
1. **[VideoMAE](https://huggingface.co/docs/transformers/model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
1. **[ViLT](https://huggingface.co/docs/transformers/model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim.
1. **[Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
1. **[VisualBERT](https://huggingface.co/docs/transformers/model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang.
1. **[ViT Hybrid](https://huggingface.co/docs/transformers/model_doc/vit_hybrid)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
1. **[VitDet](https://huggingface.co/docs/transformers/model_doc/vitdet)** (from Meta AI) released with the paper [Exploring Plain Vision Transformer Backbones for Object Detection](https://arxiv.org/abs/2203.16527) by Yanghao Li, Hanzi Mao, Ross Girshick, Kaiming He.
1. **[ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
1. **[ViTMatte](https://huggingface.co/docs/transformers/main/model_doc/vitmatte)** (from HUST-VL) released with the paper [ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers](https://arxiv.org/abs/2305.15272) by Jingfeng Yao, Xinggang Wang, Shusheng Yang, Baoyuan Wang.
1. **[ViTMSN](https://huggingface.co/docs/transformers/model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas.
1. **[VITS](https://huggingface.co/docs/transformers/model_doc/vits)** (from Kakao Enterprise) released with the paper [Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech](https://arxiv.org/abs/2106.06103) by Jaehyeon Kim, Jungil Kong, Juhee Son.
1. **[ViViT](https://huggingface.co/docs/transformers/model_doc/vivit)** (from Google Research) released with the paper [ViViT: A Video Vision Transformer](https://arxiv.org/abs/2103.15691) by Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid.
1. **[Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
1. **[Wav2Vec2-Conformer](https://huggingface.co/docs/transformers/model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
1. **[Wav2Vec2Phoneme](https://huggingface.co/docs/transformers/model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli.
1. **[WavLM](https://huggingface.co/docs/transformers/model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
1. **[Whisper](https://huggingface.co/docs/transformers/model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
1. **[X-CLIP](https://huggingface.co/docs/transformers/model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling.
1. **[X-MOD](https://huggingface.co/docs/transformers/model_doc/xmod)** (from Meta AI) released with the paper [Lifting the Curse of Multilinguality by Pre-training Modular Transformers](http://dx.doi.org/10.18653/v1/2022.naacl-main.255) by Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe.
1. **[XGLM](https://huggingface.co/docs/transformers/model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
1. **[XLM](https://huggingface.co/docs/transformers/model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.
1. **[XLM-ProphetNet](https://huggingface.co/docs/transformers/model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[XLM-RoBERTa](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
1. **[XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau.
1. **[XLM-V](https://huggingface.co/docs/transformers/model_doc/xlm-v)** (from Meta AI) released with the paper [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472) by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa.
1. **[XLNet](https://huggingface.co/docs/transformers/model_doc/xlnet)** (from Google/CMU) released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
1. **[XLS-R](https://huggingface.co/docs/transformers/model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
1. **[XLSR-Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.
1. **[YOLOS](https://huggingface.co/docs/transformers/model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
1. **[YOSO](https://huggingface.co/docs/transformers/model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh.
1. Want to contribute a new model? We have added a **detailed guide and templates** to guide you in the process of adding a new model. You can find them in the [`templates`](./templates) folder of the repository. Be sure to check the [contributing guidelines](./CONTRIBUTING.md) and contact the maintainers or open an issue to collect feedback before starting your PR.
To check whether each model has an implementation in Flax, PyTorch or TensorFlow, or has an associated tokenizer backed by the 🤗 Tokenizers library, refer to [this table](https://huggingface.co/docs/transformers/index#supported-frameworks).
These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the [documentation](https://github.com/huggingface/transformers/tree/main/examples).
## Learn more
| Section | Description |
|-|-|
| [Documentation](https://huggingface.co/docs/transformers/) | Full API documentation and guides |
| [Task summaries](https://huggingface.co/docs/transformers/task_summary) | Tasks supported by 🤗 Transformers |
| [Preprocessing tutorial](https://huggingface.co/docs/transformers/preprocessing) | Using the `Tokenizer` class to prepare data for the models |
| [Training and fine-tuning](https://huggingface.co/docs/transformers/training) | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and with the `Trainer` API |
| [Quick tour: Fine-tuning/usage scripts](https://github.com/huggingface/transformers/tree/main/examples) | Example scripts for fine-tuning models on a wide range of tasks |
| [Model sharing and uploading](https://huggingface.co/docs/transformers/model_sharing) | Upload and share your fine-tuned models with the community |
## Citation
We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library:
```bibtex
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
```
|
huggingface/transformers/blob/main/docs/source/en/model_sharing.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Share a model
The last two tutorials showed how you can fine-tune a model with PyTorch, Keras, and 🤗 Accelerate for distributed setups. The next step is to share your model with the community! At Hugging Face, we believe in openly sharing knowledge and resources to democratize artificial intelligence for everyone. We encourage you to consider sharing your model with the community to help others save time and resources.
In this tutorial, you will learn two methods for sharing a trained or fine-tuned model on the [Model Hub](https://huggingface.co/models):
- Programmatically push your files to the Hub.
- Drag-and-drop your files to the Hub with the web interface.
<iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player"
frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope;
picture-in-picture" allowfullscreen></iframe>
<Tip>
To share a model with the community, you need an account on [huggingface.co](https://huggingface.co/join). You can also join an existing organization or create a new one.
</Tip>
## Repository features
Each repository on the Model Hub behaves like a typical GitHub repository. Our repositories offer versioning, commit history, and the ability to visualize differences.
The Model Hub's built-in versioning is based on git and [git-lfs](https://git-lfs.github.com/). In other words, you can treat one model as one repository, enabling greater access control and scalability. Version control allows *revisions*, a method for pinning a specific version of a model with a commit hash, tag or branch.
As a result, you can load a specific model version with the `revision` parameter:
```py
>>> model = AutoModel.from_pretrained(
... "julien-c/EsperBERTo-small", revision="v2.0.1" # tag name, or branch name, or commit hash
... )
```
Files are also easily edited in a repository, and you can view the commit history as well as the difference:

## Setup
Before sharing a model to the Hub, you will need your Hugging Face credentials. If you have access to a terminal, run the following command in the virtual environment where 🤗 Transformers is installed. This will store your access token in your Hugging Face cache folder (`~/.cache/` by default):
```bash
huggingface-cli login
```
If you are using a notebook like Jupyter or Colaboratory, make sure you have the [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) library installed. This library allows you to programmatically interact with the Hub.
```bash
pip install huggingface_hub
```
Then use `notebook_login` to sign in to the Hub, and follow the link [here](https://huggingface.co/settings/token) to generate a token to log in with:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Convert a model for all frameworks
To ensure your model can be used by someone working with a different framework, we recommend you convert and upload your model with both PyTorch and TensorFlow checkpoints. While users are still able to load your model from a different framework if you skip this step, it will be slower because 🤗 Transformers will need to convert the checkpoint on-the-fly.
Converting a checkpoint for another framework is easy. Make sure you have PyTorch and TensorFlow installed (see [here](installation) for installation instructions), and then find the specific model for your task in the other framework.
<frameworkcontent>
<pt>
Specify `from_tf=True` to convert a checkpoint from TensorFlow to PyTorch:
```py
>>> pt_model = DistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_tf=True)
>>> pt_model.save_pretrained("path/to/awesome-name-you-picked")
```
</pt>
<tf>
Specify `from_pt=True` to convert a checkpoint from PyTorch to TensorFlow:
```py
>>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_pt=True)
```
Then you can save your new TensorFlow model with its new checkpoint:
```py
>>> tf_model.save_pretrained("path/to/awesome-name-you-picked")
```
</tf>
<jax>
If a model is available in Flax, you can also convert a checkpoint from PyTorch to Flax:
```py
>>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained(
... "path/to/awesome-name-you-picked", from_pt=True
... )
```
</jax>
</frameworkcontent>
## Push a model during training
<frameworkcontent>
<pt>
<Youtube id="Z1-XMy-GNLQ"/>
Sharing a model to the Hub is as simple as adding an extra parameter or callback. Remember from the [fine-tuning tutorial](training), the [`TrainingArguments`] class is where you specify hyperparameters and additional training options. One of these training options includes the ability to push a model directly to the Hub. Set `push_to_hub=True` in your [`TrainingArguments`]:
```py
>>> training_args = TrainingArguments(output_dir="my-awesome-model", push_to_hub=True)
```
Pass your training arguments as usual to [`Trainer`]:
```py
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=small_train_dataset,
... eval_dataset=small_eval_dataset,
... compute_metrics=compute_metrics,
... )
```
After you fine-tune your model, call [`~transformers.Trainer.push_to_hub`] on [`Trainer`] to push the trained model to the Hub. 🤗 Transformers will even automatically add training hyperparameters, training results and framework versions to your model card!
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
Share a model to the Hub with [`PushToHubCallback`]. In the [`PushToHubCallback`] function, add:
- An output directory for your model.
- A tokenizer.
- The `hub_model_id`, which is your Hub username and model name.
```py
>>> from transformers import PushToHubCallback
>>> push_to_hub_callback = PushToHubCallback(
... output_dir="./your_model_save_path", tokenizer=tokenizer, hub_model_id="your-username/my-awesome-model"
... )
```
Add the callback to [`fit`](https://keras.io/api/models/model_training_apis/), and 🤗 Transformers will push the trained model to the Hub:
```py
>>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback)
```
</tf>
</frameworkcontent>
## Use the `push_to_hub` function
You can also call `push_to_hub` directly on your model to upload it to the Hub.
Specify your model name in `push_to_hub`:
```py
>>> pt_model.push_to_hub("my-awesome-model")
```
This creates a repository under your username with the model name `my-awesome-model`. Users can now load your model with the `from_pretrained` function:
```py
>>> from transformers import AutoModel
>>> model = AutoModel.from_pretrained("your_username/my-awesome-model")
```
If you belong to an organization and want to push your model under the organization name instead, just add it to the `repo_id`:
```py
>>> pt_model.push_to_hub("my-awesome-org/my-awesome-model")
```
The `push_to_hub` function can also be used to add other files to a model repository. For example, add a tokenizer to a model repository:
```py
>>> tokenizer.push_to_hub("my-awesome-model")
```
Or perhaps you'd like to add the TensorFlow version of your fine-tuned PyTorch model:
```py
>>> tf_model.push_to_hub("my-awesome-model")
```
Now when you navigate to your Hugging Face profile, you should see your newly created model repository. Clicking on the **Files** tab will display all the files you've uploaded to the repository.
For more details on how to create and upload files to a repository, refer to the Hub documentation [here](https://huggingface.co/docs/hub/how-to-upstream).
## Upload with the web interface
Users who prefer a no-code approach are able to upload a model through the Hub's web interface. Visit [huggingface.co/new](https://huggingface.co/new) to create a new repository:

From here, add some information about your model:
- Select the **owner** of the repository. This can be yourself or any of the organizations you belong to.
- Pick a name for your model, which will also be the repository name.
- Choose whether your model is public or private.
- Specify the license usage for your model.
Now click on the **Files** tab and click on the **Add file** button to upload a new file to your repository. Then drag-and-drop a file to upload and add a commit message.

## Add a model card
To make sure users understand your model's capabilities, limitations, potential biases and ethical considerations, please add a model card to your repository. The model card is defined in the `README.md` file. You can add a model card by:
* Manually creating and uploading a `README.md` file.
* Clicking on the **Edit model card** button in your model repository.
Take a look at the DistilBert [model card](https://huggingface.co/distilbert-base-uncased) for a good example of the type of information a model card should include. For more details about other options you can control in the `README.md` file such as a model's carbon footprint or widget examples, refer to the documentation [here](https://huggingface.co/docs/hub/models-cards).
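You can also create and push a model card from code. Below is a minimal sketch using the `ModelCard` class from the `huggingface_hub` library; the card text and repository id are placeholders you should replace with your own:
```py
>>> from huggingface_hub import ModelCard

>>> text = "---\nlicense: apache-2.0\n---\n# my-awesome-model\n\nDescribe intended uses, limitations and biases here.\n"
>>> card = ModelCard(text)  # placeholder content; expand it with the sections recommended above
>>> card.push_to_hub("your_username/my-awesome-model")  # uploads README.md to your model repository
```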
|
huggingface/diffusers/blob/main/docs/source/en/training/lora.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# LoRA
<Tip warning={true}>
This is experimental and the API may change in the future.
</Tip>
[LoRA (Low-Rank Adaptation of Large Language Models)](https://hf.co/papers/2106.09685) is a popular and lightweight training technique that significantly reduces the number of trainable parameters. It works by inserting a smaller number of new weights into the model and only these are trained. This makes training with LoRA much faster, memory-efficient, and produces smaller model weights (a few hundred MBs), which are easier to store and share. LoRA can also be combined with other training techniques like DreamBooth to speed up training.
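For intuition, here is a minimal sketch of the idea behind LoRA, assuming a plain PyTorch `nn.Linear` as the base layer; the `LoRALinear` class and its `rank`/`alpha` arguments are illustrative only and not the diffusers implementation:
```py
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Illustrative only: a frozen linear layer plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for param in self.base.parameters():
            param.requires_grad = False  # the original weights stay frozen
        # only these two small matrices are trained
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # base output plus the scaled low-rank correction
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```
Because `lora_B` starts at zero, training begins from the original model's behaviour, and only the two small factors need to be stored and shared.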
<Tip>
LoRA is very versatile and supported for [DreamBooth](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py), [Kandinsky 2.2](https://github.com/huggingface/diffusers/blob/main/examples/kandinsky2_2/text_to_image/train_text_to_image_lora_decoder.py), [Stable Diffusion XL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora_sdxl.py), [text-to-image](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py), and [Wuerstchen](https://github.com/huggingface/diffusers/blob/main/examples/wuerstchen/text_to_image/train_text_to_image_lora_prior.py).
</Tip>
This guide will explore the [train_text_to_image_lora.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) script to help you become more familiar with it, and how you can adapt it for your own use-case.
Before running the script, make sure you install the library from source:
```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```
Navigate to the example folder with the training script and install the required dependencies for the script you're using:
<hfoptions id="installation">
<hfoption id="PyTorch">
```bash
cd examples/text_to_image
pip install -r requirements.txt
```
</hfoption>
<hfoption id="Flax">
```bash
cd examples/text_to_image
pip install -r requirements_flax.txt
```
</hfoption>
</hfoptions>
<Tip>
🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.
</Tip>
Initialize an 🤗 Accelerate environment:
```bash
accelerate config
```
To set up a default 🤗 Accelerate environment without choosing any configurations:
```bash
accelerate config default
```
Or if your environment doesn't support an interactive shell, like a notebook, you can use:
```py
from accelerate.utils import write_basic_config
write_basic_config()
```
Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.
<Tip>
The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/text_to_image_lora.py) and let us know if you have any questions or concerns.
</Tip>
## Script parameters
The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/dd9a5caf61f04d11c0fa9f3947b69ab0010c9a0f/examples/text_to_image/train_text_to_image_lora.py#L85) function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you'd like.
For example, to increase the number of epochs to train:
```bash
accelerate launch train_text_to_image_lora.py \
--num_train_epochs=150 \
```
Many of the basic and important parameters are described in the [Text-to-image](text2image#script-parameters) training guide, so this guide just focuses on the LoRA relevant parameters:
- `--rank`: the inner dimension of the low-rank matrices to train; a higher rank means more trainable parameters
- `--learning_rate`: the default learning rate is 1e-4, but with LoRA, you can use a higher learning rate
## Training script
The dataset preprocessing code and training loop are found in the [`main()`](https://github.com/huggingface/diffusers/blob/dd9a5caf61f04d11c0fa9f3947b69ab0010c9a0f/examples/text_to_image/train_text_to_image_lora.py#L371) function, and if you need to adapt the training script, this is where you'll make your changes.
As with the script parameters, a walkthrough of the training script is provided in the [Text-to-image](text2image#training-script) training guide. Instead, this guide takes a look at the LoRA relevant parts of the script.
The script begins by adding the [new LoRA weights](https://github.com/huggingface/diffusers/blob/dd9a5caf61f04d11c0fa9f3947b69ab0010c9a0f/examples/text_to_image/train_text_to_image_lora.py#L447) to the attention layers. This involves correctly configuring the weight size for each block in the UNet. You'll see the `rank` parameter is used to create the [`~models.attention_processor.LoRAAttnProcessor`]:
```py
lora_attn_procs = {}
for name in unet.attn_processors.keys():
    # self-attention layers ("attn1") have no cross-attention dimension
    cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
    # pick the hidden size of the UNet block this attention processor belongs to
    if name.startswith("mid_block"):
        hidden_size = unet.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        block_id = int(name[len("up_blocks.")])
        hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
    elif name.startswith("down_blocks"):
        block_id = int(name[len("down_blocks.")])
        hidden_size = unet.config.block_out_channels[block_id]

    lora_attn_procs[name] = LoRAAttnProcessor(
        hidden_size=hidden_size,
        cross_attention_dim=cross_attention_dim,
        rank=args.rank,
    )

unet.set_attn_processor(lora_attn_procs)
lora_layers = AttnProcsLayers(unet.attn_processors)
```
The [optimizer](https://github.com/huggingface/diffusers/blob/dd9a5caf61f04d11c0fa9f3947b69ab0010c9a0f/examples/text_to_image/train_text_to_image_lora.py#L519) is initialized with the `lora_layers` because these are the only weights that'll be optimized:
```py
optimizer = optimizer_cls(
    lora_layers.parameters(),
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)
```
Aside from setting up the LoRA layers, the training script is more or less the same as train_text_to_image.py!
## Launch the script
Once you've made all your changes or you're okay with the default configuration, you're ready to launch the training script! 🚀
Let's train on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset to generate our own Pokémon. Set the environment variables `MODEL_NAME` and `DATASET_NAME` to the model and dataset respectively. You should also specify where to save the model in `OUTPUT_DIR`, and the name of the model to save to on the Hub with `HUB_MODEL_ID`. The script creates and saves the following files to your repository:
- saved model checkpoints
- `pytorch_lora_weights.safetensors` (the trained LoRA weights)
If you're training on more than one GPU, add the `--multi_gpu` parameter to the `accelerate launch` command.
<Tip warning={true}>
A full training run takes ~5 hours on a 2080 Ti GPU with 11GB of VRAM.
</Tip>
```bash
export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export OUTPUT_DIR="/sddata/finetune/lora/pokemon"
export HUB_MODEL_ID="pokemon-lora"
export DATASET_NAME="lambdalabs/pokemon-blip-captions"
accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$DATASET_NAME \
--dataloader_num_workers=8 \
--resolution=512 \
--center_crop \
--random_flip \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--max_train_steps=15000 \
--learning_rate=1e-04 \
--max_grad_norm=1 \
--lr_scheduler="cosine" \
--lr_warmup_steps=0 \
--output_dir=${OUTPUT_DIR} \
--push_to_hub \
--hub_model_id=${HUB_MODEL_ID} \
--report_to=wandb \
--checkpointing_steps=500 \
--validation_prompt="A pokemon with blue eyes." \
--seed=1337
```
Once training has been completed, you can use your model for inference:
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights("path/to/lora/model", weight_name="pytorch_lora_weights.safetensors")
image = pipeline("A pokemon with blue eyes").images[0]
```
## Next steps
Congratulations on training a new model with LoRA! To learn more about how to use your new model, the following guides may be helpful:
- Learn how to [load different LoRA formats](../using-diffusers/loading_adapters#LoRA) trained using community trainers like Kohya and TheLastBen.
- Learn how to use and [combine multiple LoRAs](../tutorials/using_peft_for_inference) with PEFT for inference.
|
huggingface/evaluate/blob/main/metrics/mape/README.md
|
---
title: MAPE
emoji: 🤗
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
tags:
- evaluate
- metric
description: >-
Mean Absolute Percentage Error (MAPE) is the mean percentage error difference between the predicted and actual
values.
---
# Metric Card for MAPE
## Metric Description
Mean Absolute Percentage Error (MAPE) is the mean of the absolute percentage errors between the predicted $x_i$ and actual $y_i$ numeric values:

$$\text{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{y_i - x_i}{y_i}\right|$$
## How to Use
At minimum, this metric requires predictions and references as inputs.
```python
>>> mape_metric = evaluate.load("mape")
>>> predictions = [2.5, 0.0, 2, 8]
>>> references = [3, -0.5, 2, 7]
>>> results = mape_metric.compute(predictions=predictions, references=references)
```
### Inputs
Mandatory inputs:
- `predictions`: numeric array-like of shape (`n_samples,`) or (`n_samples`, `n_outputs`), representing the estimated target values.
- `references`: numeric array-like of shape (`n_samples,`) or (`n_samples`, `n_outputs`), representing the ground truth (correct) target values.
Optional arguments:
- `sample_weight`: numeric array-like of shape (`n_samples,`) representing sample weights. The default is `None`.
- `multioutput`: `raw_values`, `uniform_average` or numeric array-like of shape (`n_outputs,`), which defines the aggregation of multiple output values. The default value is `uniform_average`.
- `raw_values` returns a full set of errors in case of multioutput input.
- `uniform_average` means that the errors of all outputs are averaged with uniform weight.
- the array-like value defines weights used to average errors.
### Output Values
This metric outputs a dictionary containing the mean absolute percentage error score, which is of type:
- `float`: if multioutput is `uniform_average` or an ndarray of weights, then the weighted average of all output errors is returned.
- numeric array-like of shape (`n_outputs,`): if multioutput is `raw_values`, then the score is returned for each output separately.
Each MAPE `float` value is positive, with the best value being 0.0.
Output Example(s):
```python
{'mape': 0.5}
```
If `multioutput="raw_values"`:
```python
{'mape': array([0.5, 1. ])}
```
#### Values from Popular Papers
### Examples
Example with the `uniform_average` config:
```python
>>> mape_metric = evaluate.load("mape")
>>> predictions = [2.5, 0.0, 2, 8]
>>> references = [3, -0.5, 2, 7]
>>> results = mape_metric.compute(predictions=predictions, references=references)
>>> print(results)
{'mape': 0.3273...}
```
Example with multi-dimensional lists, and the `raw_values` config:
```python
>>> mape_metric = evaluate.load("mape", "multilist")
>>> predictions = [[0.5, 1], [-1, 1], [7, -6]]
>>> references = [[0.1, 2], [-1, 2], [8, -5]]
>>> results = mape_metric.compute(predictions=predictions, references=references)
>>> print(results)
{'mape': 0.8874...}
>>> results = mape_metric.compute(predictions=predictions, references=references, multioutput='raw_values')
>>> print(results)
{'mape': array([1.3749..., 0.4])}
```
## Limitations and Bias
One limitation of MAPE is that it cannot be used if the ground truth is zero or close to zero. This metric is also asymmetric in that it puts a heavier penalty on predictions less than the ground truth and a smaller penalty on predictions bigger than the ground truth, so selecting models with this metric can bias the selection towards methods that under-predict.
## Citation(s)
```bibtex
@article{scikit-learn,
title={Scikit-learn: Machine Learning in {P}ython},
author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
journal={Journal of Machine Learning Research},
volume={12},
pages={2825--2830},
year={2011}
}
```
```bibtex
@article{DEMYTTENAERE201638,
title = {Mean Absolute Percentage Error for regression models},
journal = {Neurocomputing},
volume = {192},
pages = {38--48},
year = {2016},
note = {Advances in artificial neural networks, machine learning and computational intelligence},
issn = {0925-2312},
doi = {https://doi.org/10.1016/j.neucom.2015.12.114},
url = {https://www.sciencedirect.com/science/article/pii/S0925231216003325},
author = {Arnaud {de Myttenaere} and Boris Golden and Bénédicte {Le Grand} and Fabrice Rossi},
}
```
## Further References
- [Mean absolute percentage error - Wikipedia](https://en.wikipedia.org/wiki/Mean_absolute_percentage_error)
|
huggingface/pytorch-image-models/blob/main/docs/models/ensemble-adversarial.md
|
# Ensemble Adversarial Inception ResNet v2
**Inception-ResNet-v2** is a convolutional neural architecture that builds on the Inception family of architectures but incorporates [residual connections](https://paperswithcode.com/method/residual-connection) (replacing the filter concatenation stage of the Inception architecture).
This particular model was trained for study of adversarial examples (adversarial training).
The weights from this model were ported from [Tensorflow/Models](https://github.com/tensorflow/models).
## How do I use this model on an image?
To load a pretrained model:
```python
import timm
model = timm.create_model('ens_adv_inception_resnet_v2', pretrained=True)
model.eval()
```
To load and preprocess the image:
```python
import urllib
from PIL import Image
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform
config = resolve_data_config({}, model=model)
transform = create_transform(**config)
url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
urllib.request.urlretrieve(url, filename)
img = Image.open(filename).convert('RGB')
tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```
To get the model predictions:
```python
import torch
with torch.no_grad():
out = model(tensor)
probabilities = torch.nn.functional.softmax(out[0], dim=0)
print(probabilities.shape)
# prints: torch.Size([1000])
```
To get the top-5 predictions class names:
```python
# Get imagenet class mappings
url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
urllib.request.urlretrieve(url, filename)
with open("imagenet_classes.txt", "r") as f:
categories = [s.strip() for s in f.readlines()]
# Print top categories per image
top5_prob, top5_catid = torch.topk(probabilities, 5)
for i in range(top5_prob.size(0)):
print(categories[top5_catid[i]], top5_prob[i].item())
# prints class names and probabilities like:
# [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
```
Replace the model name with the variant you want to use, e.g. `ens_adv_inception_resnet_v2`. You can find the IDs in the model summaries at the top of this page.
To extract image features with this model, follow the [timm feature extraction examples](https://rwightman.github.io/pytorch-image-models/feature_extraction/), just change the name of the model you want to use.
## How do I finetune this model?
You can finetune any of the pre-trained models just by changing the classifier (the last layer).
```python
model = timm.create_model('ens_adv_inception_resnet_v2', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training
script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
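For illustration, a bare-bones fine-tuning loop could look like the sketch below; `NUM_FINETUNE_CLASSES` and `train_loader` are placeholders you would supply yourself, not part of timm:
```python
import timm
import torch

model = timm.create_model('ens_adv_inception_resnet_v2', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:  # train_loader: your own torch DataLoader
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```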
## How do I train this model?
You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
## Citation
```BibTeX
@article{DBLP:journals/corr/abs-1804-00097,
author = {Alexey Kurakin and
Ian J. Goodfellow and
Samy Bengio and
Yinpeng Dong and
Fangzhou Liao and
Ming Liang and
Tianyu Pang and
Jun Zhu and
Xiaolin Hu and
Cihang Xie and
Jianyu Wang and
Zhishuai Zhang and
Zhou Ren and
Alan L. Yuille and
Sangxia Huang and
Yao Zhao and
Yuzhe Zhao and
Zhonglin Han and
Junjiajia Long and
Yerkebulan Berdibekov and
Takuya Akiba and
Seiya Tokui and
Motoki Abe},
title = {Adversarial Attacks and Defences Competition},
journal = {CoRR},
volume = {abs/1804.00097},
year = {2018},
url = {http://arxiv.org/abs/1804.00097},
archivePrefix = {arXiv},
eprint = {1804.00097},
timestamp = {Thu, 31 Oct 2019 16:31:22 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1804-00097.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<!--
Type: model-index
Collections:
- Name: Ensemble Adversarial
Paper:
Title: Adversarial Attacks and Defences Competition
URL: https://paperswithcode.com/paper/adversarial-attacks-and-defences-competition
Models:
- Name: ens_adv_inception_resnet_v2
In Collection: Ensemble Adversarial
Metadata:
FLOPs: 16959133120
Parameters: 55850000
File Size: 223774238
Architecture:
- 1x1 Convolution
- Auxiliary Classifier
- Average Pooling
- Average Pooling
- Batch Normalization
- Convolution
- Dense Connections
- Dropout
- Inception-v3 Module
- Max Pooling
- ReLU
- Softmax
Tasks:
- Image Classification
Training Data:
- ImageNet
ID: ens_adv_inception_resnet_v2
Crop Pct: '0.897'
Image Size: '299'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/inception_resnet_v2.py#L351
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/ens_adv_inception_resnet_v2-2592a550.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 1.0%
Top 5 Accuracy: 17.32%
-->
|
huggingface/hub-docs/blob/main/docs/hub/flair.md
|
# Using Flair at Hugging Face
[Flair](https://github.com/flairNLP/flair) is a very simple framework for state-of-the-art NLP, developed by [Humboldt University of Berlin](https://www.informatik.hu-berlin.de/en/forschung-en/gebiete/ml-en/) and friends.
## Exploring Flair in the Hub
You can find `flair` models by filtering at the left of the [models page](https://huggingface.co/models?library=flair).
All models on the Hub come with these useful features:
1. An automatically generated model card with a brief description.
2. An interactive widget you can use to play with the model directly in the browser.
3. An Inference API that allows you to make inference requests.
## Installation
To get started, you can follow the [Flair installation guide](https://github.com/flairNLP/flair?tab=readme-ov-file#requirements-and-installation).
You can also use the following one-line install through pip:
```
$ pip install -U flair
```
## Using existing models
All `flair` models can easily be loaded from the Hub:
```py
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-multi")
```
Once loaded, you can use `predict()` to perform inference:
```py
sentence = Sentence("George Washington ging nach Washington.")
tagger.predict(sentence)
# print sentence
print(sentence)
```
It outputs the following:
```text
Sentence[6]: "George Washington ging nach Washington." → ["George Washington"/PER, "Washington"/LOC]
```
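To work with the predicted entities programmatically, you can iterate over the tagged spans. A small sketch using Flair's span API:
```py
# iterate over the predicted named entity spans
for entity in sentence.get_spans("ner"):
    print(entity.text, entity.tag, entity.score)
```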
If you want to load a specific Flair model, you can click `Use in Flair` in the model card and you will be given a working snippet!
<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-flair_snippet1.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-flair_snippet1-dark.png"/>
</div>
<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-flair_snippet2.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/libraries-flair_snippet2-dark.png"/>
</div>
## Additional resources
* Flair [repository](https://github.com/flairNLP/flair)
* Flair [docs](https://flairnlp.github.io/docs/intro)
* Official Flair [models](https://huggingface.co/flair) on the Hub (mainly trained by [@alanakbik](https://huggingface.co/alanakbik) and [@stefan-it](https://huggingface.co/stefan-it))
|
gradio-app/gradio/blob/main/js/accordion/README.md
|
# `@gradio/button`
```html
<script>
import { Button } from "@gradio/button";
</script>
<button type="primary|secondary" href="string" on:click="{e.detail === href}">
content
</button>
```
|
huggingface/course/blob/main/subtitles/en/raw/chapter3/02a_datasets-overview-pt.md
|
The Hugging Face Datasets library: A Quick overview. The Hugging Face Datasets library is a library that provides an API to quickly download many public datasets and preprocess them. In this video we will explore how to do that. The downloading part is easy: with the load_dataset function, you can directly download and cache a dataset from its identifier on the Dataset hub. Here we fetch the MRPC dataset from the GLUE benchmark, which is a dataset containing pairs of sentences where the task is to determine if the sentences are paraphrases of each other. The object returned by the load_dataset function is a DatasetDict, which is a sort of dictionary containing each split of our dataset. We can access each split by indexing with its name. This split is then an instance of the Dataset class, with columns (here sentence1, sentence2, label and idx) and rows. We can access a given element by its index. The amazing thing about the Hugging Face Datasets library is that everything is saved to disk using Apache Arrow, which means that even if your dataset is huge you won't run out of RAM: only the elements you request are loaded in memory. Accessing a slice of your dataset is as easy as accessing one element. The result is then a dictionary with a list of values for each key (here the list of labels, the list of first sentences and the list of second sentences). The features attribute of a Dataset gives us more information about its columns. In particular, we can see here it gives us the correspondence between the integers and names for the labels. 0 stands for not equivalent and 1 for equivalent. To preprocess all the elements of our dataset, we need to tokenize them. Have a look at the video "Preprocess sentence pairs" for a refresher, but you just have to send the two sentences to the tokenizer with some additional keyword arguments. Here we indicate a maximum length of 128, pad inputs shorter than this length, and truncate inputs that are longer. We put all of this in a tokenize_function that we can directly apply to all the splits in our dataset with the map method. As long as the function returns a dictionary-like object, the map method will add new columns as needed or update existing ones. To speed up preprocessing and take advantage of the fact our tokenizer is backed by Rust thanks to the Hugging Face Tokenizers library, we can pass several elements at the same time to our tokenize function, using the batched=True argument. Since the tokenizer can handle lists of first/second sentences, the tokenize_function does not need to change for this. You can also use multiprocessing with the map method, check out its documentation! Once this is done, we are almost ready for training: we just remove the columns we don't need anymore with the remove_columns method, rename label to labels (since the models from Hugging Face Transformers expect that) and set the output format to our desired backend: torch, tensorflow or numpy. If needed, we can also generate a short sample of a dataset using the select method.
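The workflow described above corresponds roughly to the following sketch (the checkpoint name is an arbitrary choice for illustration):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

raw_datasets = load_dataset("glue", "mrpc")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize_function(examples):
    # pad/truncate sentence pairs to a maximum length of 128
    return tokenizer(
        examples["sentence1"], examples["sentence2"],
        max_length=128, padding="max_length", truncation=True,
    )

tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
tokenized_datasets = tokenized_datasets.remove_columns(["sentence1", "sentence2", "idx"])
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
tokenized_datasets.set_format("torch")
```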
|
huggingface/blog/blob/main/mantis-case-study.md
|
---
title: "Why we’re switching to Hugging Face Inference Endpoints, and maybe you should too"
thumbnail: /blog/assets/78_ml_director_insights/mantis1.png
authors:
- user: mattupson
guest: true
---
# Why we’re switching to Hugging Face Inference Endpoints, and maybe you should too
Hugging Face recently launched [Inference Endpoints](https://huggingface.co/inference-endpoints), which, as they put it, "solves transformers in production". Inference Endpoints is a managed service that allows you to:
- Deploy (almost) any model on Hugging Face Hub
- To any cloud (AWS and Azure, with GCP on the way)
- On a range of instance types (including GPU)
We’re switching some of our Machine Learning (ML) models that do inference on a CPU to this new service. This blog is about why, and why you might also want to consider it.
## What were we doing?
The models that we have switched over to Inference Endpoints were previously managed internally and were running on AWS [Elastic Container Service](https://aws.amazon.com/ecs/) (ECS) backed by [AWS Fargate](https://aws.amazon.com/fargate/). This gives you a serverless cluster which can run container based tasks. Our process was as follows:
- Train model on a GPU instance (provisioned by [CML](https://cml.dev/), trained with [transformers](https://huggingface.co/docs/transformers/main/))
- Upload to [Hugging Face Hub](https://huggingface.co/models)
- Build API to serve model [(FastAPI)](https://fastapi.tiangolo.com/)
- Wrap API in container [(Docker)](https://www.docker.com/)
- Upload container to AWS [Elastic Container Repository](https://aws.amazon.com/ecr/) (ECR)
- Deploy model to ECS Cluster
Now, you can reasonably argue that ECS was not the best approach to serving ML models, but it served us up until now, and also allowed ML models to sit alongside other container based services, so it reduced cognitive load.
## What do we do now?
With Inference Endpoints, our flow looks like this:
- Train model on a GPU instance (provisioned by [CML](https://cml.dev/), trained with [transformers](https://huggingface.co/docs/transformers/main/))
- Upload to [Hugging Face Hub](https://huggingface.co/models)
- Deploy using Hugging Face Inference Endpoints.
So this is significantly easier. We could also use another managed service such as [SageMaker](https://aws.amazon.com/es/sagemaker/), [Seldon](https://www.seldon.io/), or [Bento ML](https://www.bentoml.com/), etc., but since we are already uploading our model to Hugging Face hub to act as a model registry, and we’re pretty invested in Hugging Face’s other tools (like transformers, and [AutoTrain](https://huggingface.co/autotrain)) using Inference Endpoints makes a lot of sense for us.
## What about Latency and Stability?
Before switching to Inference Endpoints we tested different CPU endpoints types using [ab](https://httpd.apache.org/docs/2.4/programs/ab.html).
For ECS we didn’t test so extensively, but we know that a large container had a latency of about ~200ms from an instance in the same region. The tests we did for Inference Endpoints were based on a text classification model fine-tuned on [RoBERTa](https://huggingface.co/roberta-base) with the following test parameters:
- Requester region: eu-east-1
- Requester instance size: t3-medium
- Inference endpoint region: eu-east-1
- Endpoint Replicas: 1
- Concurrent connections: 1
- Requests: 1000 (1000 requests in 1–2 minutes even from a single connection would represent very heavy use for this particular application)
The following table shows latency (ms ± standard deviation and time to complete test in seconds) for four Intel Ice Lake equipped CPU endpoints.
```bash
size | vCPU (cores) | Memory (GB) | ECS (ms) | 🤗 (ms)
----------------------------------------------------------------------
small | 1 | 2 | _ | ~ 296
medium | 2 | 4 | _ | 156 ± 51 (158s)
large | 4 | 8 | ~200 | 80 ± 30 (80s)
xlarge | 8 | 16 | _ | 43 ± 31 (43s)
```
What we see from these results is pretty encouraging. The application that will consume these endpoints serves requests in real time, so we need as low latency as possible. We can see that the vanilla Hugging Face container was more than twice as fast as our bespoke container run on ECS — the slowest response we received from the large Inference Endpoint was just 108ms.
## What about the cost?
So how much does this all cost? The table below shows a price comparison for what we were doing previously (ECS + Fargate) and using Inference Endpoints.
```bash
size | vCPU | Memory (GB) | ECS | 🤗 | % diff
----------------------------------------------------------------------
small | 1 | 2 | $ 33.18 | $ 43.80 | 0.24
medium | 2 | 4 | $ 60.38 | $ 87.61 | 0.31
large | 4 | 8 | $ 114.78 | $ 175.22 | 0.34
xlarge | 8 | 16 | $ 223.59 | $ 350.44 | 0.5
```
We can say a couple of things about this. Firstly, we want a managed solution to deployment, we don’t have a dedicated MLOPs team (yet), so we’re looking for a solution that helps us minimize the time we spend on deploying models, even if it costs a little more than handling the deployments ourselves.
Inference Endpoints are more expensive than what we were doing before, with an increased cost of between 24% and 50%. At the scale we’re currently operating, this additional cost, a difference of ~$60 a month for a large CPU instance, is nothing compared to the time and cognitive load we are saving by not having to worry about APIs and containers. If we were deploying 100s of ML microservices we would probably want to think again, but that is probably true of many approaches to hosting.
## Some notes and caveats:
- You can find pricing for Inference Endpoints [here](https://huggingface.co/pricing#endpoints), but a different number is displayed when you deploy a new endpoint from the [GUI](https://ui.endpoints.huggingface.co/new). I’ve used the latter, which is higher.
- The values that I present in the table for ECS + Fargate are an underestimate, but probably not by much. I extracted them from the [fargate pricing page](https://aws.amazon.com/fargate/pricing/) and it includes just the cost of hosting the instance. I’m not including the data ingress/egress (probably the biggest thing is downloading the model from Hugging Face hub), nor have I included the costs related to ECR.
## Other considerations
### Deployment Options
Currently you can deploy an Inference Endpoint from the [GUI](https://ui.endpoints.huggingface.co/new) or using a [RESTful API](https://huggingface.co/docs/inference-endpoints/api_reference). You can also make use of our command line tool [hugie](https://github.com/MantisAI/hfie) (which will be the subject of a future blog) to launch Inference Endpoints in one line of code by passing a configuration, it’s really this simple:
```bash
hugie endpoint create example/development.json
```
For me, what’s lacking is a [custom terraform provider](https://www.hashicorp.com/blog/writing-custom-terraform-providers). It’s all well and good deploying an inference endpoint from a [GitHub action](https://github.com/features/actions) using hugie, as we do, but it would be better if we could use the awesome state machine that is terraform to keep track of these. I’m pretty sure that someone (if not Hugging Face) will write one soon enough — if not, we will.
### Hosting multiple models on a single endpoint
Philipp Schmid posted a really nice blog about how to write a custom [Endpoint Handler](https://www.philschmid.de/multi-model-inference-endpoints) class to allow you to host multiple models on a single endpoint, potentially saving you quite a bit of money. His blog was about GPU inference, and the only real limitation is how many models you can fit into the GPU memory. I assume this will also work for CPU instances, though I’ve not tried yet.
## To conclude…
We find Hugging Face Inference Endpoints to be a very simple and convenient way to deploy transformer (and [sklearn](https://huggingface.co/scikit-learn)) models into an endpoint so they can be consumed by an application. Whilst they cost a little more than the ECS approach we were using before, it’s well worth it because it saves us time on thinking about deployment, we can concentrate on the thing we want to: building NLP solutions for our clients to help solve their problems.
_If you’re interested in Hugging Face Inference Endpoints for your company, please contact us [here](https://huggingface.co/inference-endpoints/enterprise) - our team will contact you to discuss your requirements!_
_This article was originally published on February 15, 2023 [in Medium](https://medium.com/mantisnlp/why-were-switching-to-hugging-face-inference-endpoints-and-maybe-you-should-too-829371dcd330)._
|
huggingface/evaluate/blob/main/metrics/rouge/README.md
|
---
title: ROUGE
emoji: 🤗
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
tags:
- evaluate
- metric
description: >-
ROUGE, or Recall-Oriented Understudy for Gisting Evaluation, is a set of metrics and a software package used for
evaluating automatic summarization and machine translation software in natural language processing.
The metrics compare an automatically produced summary or translation against a reference or a set of references (human-produced) summary or translation.
Note that ROUGE is case insensitive, meaning that upper case letters are treated the same way as lower case letters.
This metric is a wrapper around the Google Research reimplementation of ROUGE:
https://github.com/google-research/google-research/tree/master/rouge
---
# Metric Card for ROUGE
## Metric Description
ROUGE, or Recall-Oriented Understudy for Gisting Evaluation, is a set of metrics and a software package used for evaluating automatic summarization and machine translation software in natural language processing. The metrics compare an automatically produced summary or translation against a reference or a set of references (human-produced) summary or translation.
Note that ROUGE is case insensitive, meaning that upper case letters are treated the same way as lower case letters.
This metric is a wrapper around the [Google Research reimplementation of ROUGE](https://github.com/google-research/google-research/tree/master/rouge)
## How to Use
At minimum, this metric takes as input a list of predictions and a list of references:
```python
>>> rouge = evaluate.load('rouge')
>>> predictions = ["hello there", "general kenobi"]
>>> references = ["hello there", "general kenobi"]
>>> results = rouge.compute(predictions=predictions,
... references=references)
>>> print(results)
{'rouge1': 1.0, 'rouge2': 1.0, 'rougeL': 1.0, 'rougeLsum': 1.0}
```
One can also pass a custom tokenizer, which is especially useful for non-Latin languages.
```python
>>> results = rouge.compute(predictions=predictions,
... references=references,
... tokenizer=lambda x: x.split())
>>> print(results)
{'rouge1': 1.0, 'rouge2': 1.0, 'rougeL': 1.0, 'rougeLsum': 1.0}
```
It can also deal with lists of references for each predictions:
```python
>>> rouge = evaluate.load('rouge')
>>> predictions = ["hello there", "general kenobi"]
>>> references = [["hello", "there"], ["general kenobi", "general yoda"]]
>>> results = rouge.compute(predictions=predictions,
... references=references)
>>> print(results)
{'rouge1': 0.8333, 'rouge2': 0.5, 'rougeL': 0.8333, 'rougeLsum': 0.8333}
```
### Inputs
- **predictions** (`list`): list of predictions to score. Each prediction
should be a string with tokens separated by spaces.
- **references** (`list` or `list[list]`): list of reference for each prediction or a list of several references per prediction. Each
reference should be a string with tokens separated by spaces.
- **rouge_types** (`list`): A list of rouge types to calculate. Defaults to `['rouge1', 'rouge2', 'rougeL', 'rougeLsum']`.
- Valid rouge types:
- `"rouge1"`: unigram (1-gram) based scoring
- `"rouge2"`: bigram (2-gram) based scoring
- `"rougeL"`: Longest common subsequence based scoring.
- `"rougeLSum"`: splits text using `"\n"`
- See [here](https://github.com/huggingface/datasets/issues/617) for more information
- **use_aggregator** (`boolean`): If True, returns aggregates. Defaults to `True`.
- **use_stemmer** (`boolean`): If `True`, uses Porter stemmer to strip word suffixes. Defaults to `False`.
### Output Values
The output is a dictionary with one entry for each rouge type in the input list `rouge_types`. If `use_aggregator=False`, each dictionary entry is a list of scores, with one score for each sentence. E.g. if `rouge_types=['rouge1', 'rouge2']` and `use_aggregator=False`, the output is:
```python
{'rouge1': [0.6666666666666666, 1.0], 'rouge2': [0.0, 1.0]}
```
If `rouge_types=['rouge1', 'rouge2']` and `use_aggregator=True`, the output is of the following format:
```python
{'rouge1': 1.0, 'rouge2': 1.0}
```
The ROUGE values are in the range of 0 to 1.
#### Values from Popular Papers
### Examples
An example without aggregation:
```python
>>> rouge = evaluate.load('rouge')
>>> predictions = ["hello goodbye", "ankh morpork"]
>>> references = ["goodbye", "general kenobi"]
>>> results = rouge.compute(predictions=predictions,
... references=references,
... use_aggregator=False)
>>> print(list(results.keys()))
['rouge1', 'rouge2', 'rougeL', 'rougeLsum']
>>> print(results["rouge1"])
[0.5, 0.0]
```
The same example, but with aggregation:
```python
>>> rouge = evaluate.load('rouge')
>>> predictions = ["hello goodbye", "ankh morpork"]
>>> references = ["goodbye", "general kenobi"]
>>> results = rouge.compute(predictions=predictions,
... references=references,
... use_aggregator=True)
>>> print(list(results.keys()))
['rouge1', 'rouge2', 'rougeL', 'rougeLsum']
>>> print(results["rouge1"])
0.25
```
The same example, but only calculating `rouge1`:
```python
>>> rouge = evaluate.load('rouge')
>>> predictions = ["hello goodbye", "ankh morpork"]
>>> references = ["goodbye", "general kenobi"]
>>> results = rouge.compute(predictions=predictions,
... references=references,
... rouge_types=['rouge1'],
... use_aggregator=True)
>>> print(list(results.keys()))
['rouge1']
>>> print(results["rouge1"])
0.25
```
## Limitations and Bias
See [Schluter (2017)](https://aclanthology.org/E17-2007/) for an in-depth discussion of many of ROUGE's limits.
## Citation
```bibtex
@inproceedings{lin-2004-rouge,
title = "{ROUGE}: A Package for Automatic Evaluation of Summaries",
author = "Lin, Chin-Yew",
booktitle = "Text Summarization Branches Out",
month = jul,
year = "2004",
address = "Barcelona, Spain",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W04-1013",
pages = "74--81",
}
```
## Further References
- This metric is a wrapper around the [Google Research reimplementation of ROUGE](https://github.com/google-research/google-research/tree/master/rouge)
|
huggingface/diffusers/blob/main/docs/source/en/api/pipelines/audioldm.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# AudioLDM
AudioLDM was proposed in [AudioLDM: Text-to-Audio Generation with Latent Diffusion Models](https://huggingface.co/papers/2301.12503) by Haohe Liu et al. Inspired by [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview), AudioLDM
is a text-to-audio _latent diffusion model (LDM)_ that learns continuous audio representations from [CLAP](https://huggingface.co/docs/transformers/main/model_doc/clap)
latents. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional
sound effects, human speech and music.
The abstract from the paper is:
*Text-to-audio (TTA) system has recently gained attention for its ability to synthesize general audio based on text descriptions. However, previous studies in TTA have limited generation quality with high computational costs. In this study, we propose AudioLDM, a TTA system that is built on a latent space to learn the continuous audio representations from contrastive language-audio pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs with audio embedding while providing text embedding as a condition during sampling. By learning the latent representations of audio signals and their compositions without modeling the cross-modal relationship, AudioLDM is advantageous in both generation quality and computational efficiency. Trained on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA performance measured by both objective and subjective metrics (e.g., frechet distance). Moreover, AudioLDM is the first TTA system that enables various text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion. Our implementation and demos are available at [this https URL](https://audioldm.github.io/).*
The original codebase can be found at [haoheliu/AudioLDM](https://github.com/haoheliu/AudioLDM).
## Tips
When constructing a prompt, keep in mind:
* Descriptive prompt inputs work best; you can use adjectives to describe the sound (for example, "high quality" or "clear") and make the prompt context specific (for example, "water stream in a forest" instead of "stream").
* It's best to use general terms like "cat" or "dog" instead of specific names or abstract objects the model may not be familiar with.
During inference:
* The _quality_ of the predicted audio sample can be controlled by the `num_inference_steps` argument; higher steps give higher quality audio at the expense of slower inference.
* The _length_ of the predicted audio sample can be controlled by varying the `audio_length_in_s` argument.
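Putting both arguments together, a minimal generation sketch (assuming the `cvssp/audioldm-s-full-v2` checkpoint) could look like this:
```py
import torch
from diffusers import AudioLDMPipeline

pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2", torch_dtype=torch.float16).to("cuda")

# more steps -> higher quality; audio_length_in_s controls the clip duration
audio = pipe(
    "Techno music with a strong, upbeat tempo and high melodic riffs",
    num_inference_steps=10,
    audio_length_in_s=5.0,
).audios[0]
```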
<Tip>
Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
</Tip>
## AudioLDMPipeline
[[autodoc]] AudioLDMPipeline
- all
- __call__
## AudioPipelineOutput
[[autodoc]] pipelines.AudioPipelineOutput
|
huggingface/course/blob/main/subtitles/en/raw/chapter2/04c_character-based-tokenizers.md
|
Before diving into character-based tokenization, understanding why this kind of tokenization is interesting requires understanding the flaws of word-based tokenization. If you haven't seen the first video on word-based tokenization, we recommend you check it out before looking at this video. Let's take a look at character-based tokenization. We now split our text into individual characters, rather than words. There are generally a lot of different words in languages, while the number of characters stays low. Here for example, for the English language that has an estimated 170,000 different words, we would need a very large vocabulary to encompass all words. With a character-based vocabulary, we can get by with only 256 characters! Even languages with a lot of different characters like the Chinese languages have dictionaries with ~20,000 different characters but more than 375,000 different words. Character-based vocabularies let us use fewer different tokens than the word-based tokenization dictionaries we would otherwise use. These vocabularies are also more complete than their word-based vocabulary counterparts. As our vocabulary contains all characters used in a language, even words unseen during the tokenizer training can still be tokenized, so out-of-vocabulary tokens will be less frequent. This includes the ability to correctly tokenize misspelled words, rather than discarding them as unknown straight away. However, this algorithm isn't perfect either! Intuitively, characters do not hold as much information individually as a word would hold. For example, "Let's" holds more information than "l". Of course, this is not true for all languages, as some languages like ideogram-based languages have a lot of information held in single characters, but for others like roman-based languages, the model will have to make sense of multiple tokens at a time to get the information held in a single word. This leads to another issue with character-based tokenizers: their sequences are translated into a very large amount of tokens to be processed by the model. This can have an impact on the size of the context the model will carry around, and will reduce the size of the text we can use as input for our model. This tokenization, while it has some issues, has seen some very good results in the past and should be considered when approaching a new problem as it solves some issues encountered in the word-based algorithm.
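As a tiny illustration of the trade-off, splitting a short sentence into characters already produces far more tokens than splitting it into words:
```python
text = "Let's do tokenization!"
print(text.split())  # ["Let's", 'do', 'tokenization!'] -> 3 word-level tokens
print(list(text))    # ['L', 'e', 't', "'", 's', ' ', ...] -> 22 character-level tokens
```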
|
huggingface/deep-rl-class/blob/main/units/en/unit7/hands-on.mdx
|
# Hands-on
Now that you learned the basics of multi-agents, you're ready to train your first agents in a multi-agent system: **a 2vs2 soccer team that needs to beat the opponent team**.
And you’re going to participate in AI vs. AI challenges where your trained agent will compete against other classmates’ **agents every day and be ranked on a new leaderboard.**
To validate this hands-on for the certification process, you just need to push a trained model. There **are no minimal results to attain to validate it.**
For more information about the certification process, check this section 👉 [https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process](https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process)
This hands-on will be different since to get correct results **you need to train your agents from 4 hours to 8 hours**. And given the risk of timeout in Colab, we advise you to train on your computer. You don’t need a supercomputer: a simple laptop is good enough for this exercise.
Let's get started! 🔥
## What is AI vs. AI?
AI vs. AI is an open-source tool we developed at Hugging Face to pit agents on the Hub against one another in a multi-agent setting. These models are then ranked in a leaderboard.
The idea of this tool is to have a robust evaluation tool: **by evaluating your agent with a lot of others, you’ll get a good idea of the quality of your policy.**
More precisely, AI vs. AI is three tools:
- A *matchmaking process* defining the matches (which model against which) and running the model fights using a background task in the Space.
- A *leaderboard* getting the match history results and displaying the models’ ELO ratings: [https://huggingface.co/spaces/huggingface-projects/AIvsAI-SoccerTwos](https://huggingface.co/spaces/huggingface-projects/AIvsAI-SoccerTwos)
- A *Space demo* to visualize your agents playing against others: [https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos](https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos)
In addition to these three tools, your classmate cyllum created a 🤗 SoccerTwos Challenge Analytics where you can check the detailed match results of a model: [https://huggingface.co/spaces/cyllum/soccertwos-analytics](https://huggingface.co/spaces/cyllum/soccertwos-analytics)
We [wrote a blog post to explain this AI vs. AI tool in detail](https://huggingface.co/blog/aivsai), but to give you the big picture it works this way:
- Every four hours, our algorithm **fetches all the available models for a given environment (in our case ML-Agents-SoccerTwos).**
- It creates a **queue of matches with the matchmaking algorithm.**
- We simulate the match in a Unity headless process and **gather the match result** (1 if the first model won, 0.5 if it’s a draw, 0 if the second model won) in a Dataset.
- Then, when all matches from the matches queue are done, **we update the ELO score for each model and update the leaderboard.**
### Competition Rules
This first AI vs. AI competition **is an experiment**: the goal is to improve the tool in the future with your feedback. So some **disruptions can happen during the challenge**. But don't worry: **all the results are saved in a dataset so we can always restart the calculation correctly without losing information**.
In order for your model to get correctly evaluated against others you need to follow these rules:
1. **You can't change the observation space or action space of the agent.** By doing that your model will not work during evaluation.
2. You **can't use a custom trainer for now,** you need to use the Unity MLAgents ones.
3. We provide executables to train your agents. You can also use the Unity Editor if you prefer**, but to avoid bugs, we advise that you use our executables**.
What will make the difference during this challenge are **the hyperparameters you choose**.
We're constantly trying to improve our tutorials, so **if you find some issues in this notebook**, please [open an issue on the GitHub Repo](https://github.com/huggingface/deep-rl-class/issues).
### Chat with your classmates, share advice and ask questions on Discord
- We created a new channel called `ai-vs-ai-challenge` to exchange advice and ask questions.
- If you didn’t join the discord server yet, you can [join here](https://discord.gg/ydHrjt3WP5)
## Step 0: Install MLAgents and download the correct executable
We advise you to use [conda](https://docs.conda.io/en/latest/) as a package manager and create a new environment.
With conda, we create a new environment called rl with **Python 3.10.12**:
```bash
conda create --name rl python=3.10.12
conda activate rl
```
To be able to train our agents correctly and push to the Hub, we need to install ML-Agents
```bash
git clone https://github.com/Unity-Technologies/ml-agents
```
When the cloning is done (it takes 2.63 GB), we go inside the repository and install the package
```bash
cd ml-agents
pip install -e ./ml-agents-envs
pip install -e ./ml-agents
```
Finally, you need to install git-lfs: https://git-lfs.com/
Now that it’s installed, we need to add the environment training executable. Based on your operating system you need to download one of them, unzip it and place it in a new folder inside `ml-agents` that you call `training-envs-executables`
At the end your executable should be in `ml-agents/training-envs-executables/SoccerTwos`
Windows: Download [this executable](https://drive.google.com/file/d/1sqFxbEdTMubjVktnV4C6ICjp89wLhUcP/view?usp=sharing)
Linux (Ubuntu): Download [this executable](https://drive.google.com/file/d/1KuqBKYiXiIcU4kNMqEzhgypuFP5_45CL/view?usp=sharing)
Mac: Download [this executable](https://drive.google.com/drive/folders/1h7YB0qwjoxxghApQdEUQmk95ZwIDxrPG?usp=share_link)
⚠ For Mac, you also need to run `xattr -cr training-envs-executables/SoccerTwos/SoccerTwos.app` to be able to run SoccerTwos
## Step 1: Understand the environment
The environment is called `SoccerTwos`. The Unity MLAgents Team made it. You can find its documentation [here](https://github.com/Unity-Technologies/ml-agents/blob/develop/docs/Learning-Environment-Examples.md#soccer-twos)
The goal in this environment **is to get the ball into the opponent's goal while preventing the ball from entering your own goal.**
<figure>
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/soccertwos.gif" alt="SoccerTwos"/>
<figcaption>This environment was made by the <a href="https://github.com/Unity-Technologies/ml-agents"> Unity MLAgents Team</a></figcaption>
</figure>
### The reward function
The reward function is:
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/soccerreward.png" alt="SoccerTwos Reward"/>
### The observation space
The observation space is composed of vectors of size 336:
- 11 ray-casts forward distributed over 120 degrees (264 state dimensions)
- 3 ray-casts backward distributed over 90 degrees (72 state dimensions)
- Both of these ray-casts can detect 6 objects:
- Ball
- Blue Goal
- Purple Goal
- Wall
- Blue Agent
- Purple Agent
### The action space
The action space is three discrete branches:
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/socceraction.png" alt="SoccerTwos Action"/>
## Step 2: Understand MA-POCA
We know how to train agents to play against others: **we can use self-play.** This is a perfect technique for a 1vs1.
But in our case we’re 2vs2, and each team has 2 agents. How then can we **train cooperative behavior for groups of agents?**
As explained in the [Unity Blog](https://blog.unity.com/technology/ml-agents-v20-release-now-supports-training-complex-cooperative-behaviors), agents typically receive a reward as a group (+1 - penalty) when the team scores a goal. This implies that **every agent on the team is rewarded even if each agent didn’t contribute the same to the win**, which makes it difficult to learn what to do independently.
The Unity MLAgents team developed the solution in a new multi-agent trainer called *MA-POCA (Multi-Agent POsthumous Credit Assignment)*.
The idea is simple but powerful: a centralized critic **processes the states of all agents in the team to estimate how well each agent is doing**. Think of this critic as a coach.
This allows each agent to **make decisions based only on what it perceives locally**, and **simultaneously evaluate how good its behavior is in the context of the whole group**.
<figure>
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/mapoca.png" alt="MA POCA"/>
<figcaption>This illustrates MA-POCA’s centralized learning and decentralized execution. Source: <a href="https://blog.unity.com/technology/ml-agents-plays-dodgeball">MLAgents Plays Dodgeball</a>
</figcaption>
</figure>
The solution then is to use Self-Play with an MA-POCA trainer (called poca). The poca trainer will help us to train cooperative behavior and self-play to win against an opponent team.
If you want to dive deeper into this MA-POCA algorithm, you need to read the paper they published [here](https://arxiv.org/pdf/2111.05992.pdf) and the sources we put on the additional readings section.
## Step 3: Define the config file
We already learned in [Unit 5](https://huggingface.co/deep-rl-course/unit5/introduction) that in ML-Agents, you define **the training hyperparameters in `config.yaml` files.**
There are multiple hyperparameters. To understand them better, you should read the explanations for each of them in **[the documentation](https://github.com/Unity-Technologies/ml-agents/blob/release_20_docs/docs/Training-Configuration-File.md)**
The config file we’re going to use here is in `./config/poca/SoccerTwos.yaml`. It looks like this:
```yaml
behaviors:
SoccerTwos:
trainer_type: poca
hyperparameters:
batch_size: 2048
buffer_size: 20480
learning_rate: 0.0003
beta: 0.005
epsilon: 0.2
lambd: 0.95
num_epoch: 3
learning_rate_schedule: constant
network_settings:
normalize: false
hidden_units: 512
num_layers: 2
vis_encode_type: simple
reward_signals:
extrinsic:
gamma: 0.99
strength: 1.0
keep_checkpoints: 5
max_steps: 5000000
time_horizon: 1000
summary_freq: 10000
self_play:
save_steps: 50000
team_change: 200000
swap_steps: 2000
window: 10
play_against_latest_model_ratio: 0.5
initial_elo: 1200.0
```
Compared to Pyramids or SnowballTarget, we have new hyperparameters with a self-play part. How you modify them can be critical in getting good results.
The advice I can give you here is to check the explanation and recommended value for each parameter (especially the self-play ones) against **[the documentation](https://github.com/Unity-Technologies/ml-agents/blob/release_20_docs/docs/Training-Configuration-File.md).**
Now that you’ve modified our config file, you’re ready to train your agents.
## Step 4: Start the training
To train the agents, we need to **launch mlagents-learn and select the executable containing the environment.**
We define four parameters:
1. `mlagents-learn <config>`: the path where the hyperparameter config file is.
2. `--env`: where the environment executable is.
3. `--run-id`: the name you want to give to your training run id.
4. `--no-graphics`: to not launch the visualization during the training.
Depending on your hardware, 5M timesteps (the recommended value, but you can also try 10M) will take 5 to 8 hours of training. You can continue using your computer in the meantime, but I advise deactivating the computer standby mode to prevent the training from being stopped.
Depending on the executable you use (windows, ubuntu, mac) the training command will look like this (your executable path can be different so don’t hesitate to check before running).
```bash
mlagents-learn ./config/poca/SoccerTwos.yaml --env=./training-envs-executables/SoccerTwos.exe --run-id="SoccerTwos" --no-graphics
```
The executable contains 8 copies of SoccerTwos.
⚠️ It’s normal if you don’t see a big increase in ELO score (and even a decrease below 1200) before 2M timesteps, since your agents will spend most of their time moving randomly on the field before being able to score a goal.
⚠️ You can stop the training with Ctrl + C but beware of typing this command only once to stop the training since MLAgents needs to generate a final .onnx file before closing the run.
## Step 5: **Push the agent to the Hugging Face Hub**
Now that we trained our agents, we’re **ready to push them to the Hub to be able to participate in the AI vs. AI challenge and visualize them playing on your browser🔥.**
To be able to share your model with the community, there are three more steps to follow:
1️⃣ (If it’s not already done) create an account to HF ➡ [https://huggingface.co/join](https://huggingface.co/join)
2️⃣ Sign in and store your authentication token from the Hugging Face website.
Create a new token (https://huggingface.co/settings/tokens) **with write role**
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/notebooks/create-token.jpg" alt="Create HF Token">
Copy the token, run this, and paste the token
```bash
huggingface-cli login
```
Then, we need to run `mlagents-push-to-hf`.
And we define four parameters:
1. `--run-id`: the name of the training run id.
2. `--local-dir`: where the agent was saved, it’s `results/<run_id name>`, so in my case `results/SoccerTwos`.
3. `--repo-id`: the name of the Hugging Face repo you want to create or update. It’s always `<your huggingface username>/<the repo name>`
If the repo does not exist **it will be created automatically**
4. `--commit-message`: since HF repos are git repositories you need to give a commit message.
In my case
```bash
mlagents-push-to-hf --run-id="SoccerTwos" --local-dir="./results/SoccerTwos" --repo-id="ThomasSimonini/poca-SoccerTwos" --commit-message="First Push"`
```
```bash
mlagents-push-to-hf \
  --run-id=<your run id> \
  --local-dir=<your local dir> \
  --repo-id=<your repo id> \
  --commit-message="First Push"
```
If everything worked, you should see this at the end of the process (but with a different URL 😆):
Your model is pushed to the Hub. You can view your model here: https://huggingface.co/ThomasSimonini/poca-SoccerTwos
It's the link to your model. It contains a model card that explains how to use it, your Tensorboard, and your config file. **What's awesome is that it's a git repository, which means you can have different commits, update your repository with a new push, etc.**
## Step 6: Verify that your model is ready for AI vs AI Challenge
Now that your model is pushed to the Hub, **it’s going to be added automatically to the AI vs AI Challenge model pool.** It can take a little bit of time before your model is added to the leaderboard given we do a run of matches every 4h.
But to ensure that everything works perfectly you need to check:
1. That you have this tag in your model: ML-Agents-SoccerTwos. This is the tag we use to select models to be added to the challenge pool. To do that go to your model and check the tags
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/verify1.png" alt="Verify"/>
If that's not the case, you just need to modify the README and add it
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/verify2.png" alt="Verify"/>
2. That you have a `SoccerTwos.onnx` file
<img src="https://huggingface.co/datasets/huggingface-deep-rl-course/course-images/resolve/main/en/unit10/verify3.png" alt="Verify"/>
We strongly suggest that you create a new model when you push to the Hub if you want to train it again or train a new version.
## Step 7: Visualize some matches in our demo
Now that your model is part of AI vs AI Challenge, **you can visualize how good it is compared to others**: https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
In order to do that, you just need to go to this demo:
- Select your model as team blue (or team purple if you prefer) and another model to compete against. The best opponents to compare your model to are either whoever is on top of the leaderboard or the [baseline model](https://huggingface.co/unity/MLAgents-SoccerTwos)
The matches you see live are not used in the calculation of your result **but they are a good way to visualize how good your agent is**.
And don't hesitate to share the best score your agent gets on discord in the #rl-i-made-this channel 🔥
|
gradio-app/gradio/blob/main/demo/sales_projections/run.ipynb
|
# Gradio Demo: sales_projections
```
!pip install -q gradio pandas numpy matplotlib
```
```
import matplotlib.pyplot as plt
import numpy as np
import gradio as gr
def sales_projections(employee_data):
sales_data = employee_data.iloc[:, 1:4].astype("int").to_numpy()
regression_values = np.apply_along_axis(
lambda row: np.array(np.poly1d(np.polyfit([0, 1, 2], row, 2))), 0, sales_data
)
projected_months = np.repeat(
np.expand_dims(np.arange(3, 12), 0), len(sales_data), axis=0
)
projected_values = np.array(
[
month * month * regression[0] + month * regression[1] + regression[2]
for month, regression in zip(projected_months, regression_values)
]
)
plt.plot(projected_values.T)
plt.legend(employee_data["Name"])
return employee_data, plt.gcf(), regression_values
demo = gr.Interface(
sales_projections,
gr.Dataframe(
headers=["Name", "Jan Sales", "Feb Sales", "Mar Sales"],
value=[["Jon", 12, 14, 18], ["Alice", 14, 17, 2], ["Sana", 8, 9.5, 12]],
),
["dataframe", "plot", "numpy"],
description="Enter sales figures for employees to predict sales trajectory over year.",
)
if __name__ == "__main__":
demo.launch()
```
|
huggingface/datasets/blob/main/metrics/f1/README.md
|
# Metric Card for F1
## Metric Description
The F1 score is the harmonic mean of the precision and recall. It can be computed with the equation:
F1 = 2 * (precision * recall) / (precision + recall)
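For example, a classifier with a precision of 0.5 and a recall of 1.0 gets an F1 score of 2 * (0.5 * 1.0) / (0.5 + 1.0) ≈ 0.67:
```python
precision, recall = 0.5, 1.0
f1 = 2 * (precision * recall) / (precision + recall)
print(round(f1, 2))  # 0.67
```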
## How to Use
At minimum, this metric requires predictions and references as input:
```python
>>> f1_metric = datasets.load_metric("f1")
>>> results = f1_metric.compute(predictions=[0, 1], references=[0, 1])
>>> print(results)
["{'f1': 1.0}"]
```
### Inputs
- **predictions** (`list` of `int`): Predicted labels.
- **references** (`list` of `int`): Ground truth labels.
- **labels** (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`, and the order of the labels if `average` is `None`. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class. Labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `predictions` and `references` are used in sorted order. Defaults to None.
- **pos_label** (`int`): The class to be considered the positive class, in the case where `average` is set to `binary`. Defaults to 1.
- **average** (`string`): This parameter is required for multiclass/multilabel targets. If set to `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
- 'binary': Only report results for the class specified by `pos_label`. This is applicable only if the classes found in `predictions` and `references` are binary.
- 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.
- 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
- 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. This option can result in an F-score that is not between precision and recall.
- 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
- **sample_weight** (`list` of `float`): Sample weights. Defaults to None.
### Output Values
- **f1**(`float` or `array` of `float`): F1 score or list of f1 scores, depending on the value passed to `average`. Minimum possible value is 0. Maximum possible value is 1. Higher f1 scores are better.
Output Example(s):
```python
{'f1': 0.26666666666666666}
```
```python
{'f1': array([0.8, 0.0, 0.0])}
```
This metric outputs a dictionary, with either a single f1 score, of type `float`, or an array of f1 scores, with entries of type `float`.
#### Values from Popular Papers
### Examples
Example 1-A simple binary example
```python
>>> f1_metric = datasets.load_metric("f1")
>>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0])
>>> print(results)
{'f1': 0.5}
```
Example 2-The same simple binary example as in Example 1, but with `pos_label` set to `0`.
```python
>>> f1_metric = datasets.load_metric("f1")
>>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], pos_label=0)
>>> print(round(results['f1'], 2))
0.67
```
Example 3-The same simple binary example as in Example 1, but with `sample_weight` included.
```python
>>> f1_metric = datasets.load_metric("f1")
>>> results = f1_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], sample_weight=[0.9, 0.5, 3.9, 1.2, 0.3])
>>> print(round(results['f1'], 2))
0.35
```
Example 4-A multiclass example, with different values for the `average` input.
```python
>>> predictions = [0, 2, 1, 0, 0, 1]
>>> references = [0, 1, 2, 0, 1, 2]
>>> results = f1_metric.compute(predictions=predictions, references=references, average="macro")
>>> print(round(results['f1'], 2))
0.27
>>> results = f1_metric.compute(predictions=predictions, references=references, average="micro")
>>> print(round(results['f1'], 2))
0.33
>>> results = f1_metric.compute(predictions=predictions, references=references, average="weighted")
>>> print(round(results['f1'], 2))
0.27
>>> results = f1_metric.compute(predictions=predictions, references=references, average=None)
>>> print(results)
{'f1': array([0.8, 0. , 0. ])}
```
## Limitations and Bias
## Citation(s)
```bibtex
@article{scikit-learn,
title={Scikit-learn: Machine Learning in {P}ython},
author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
journal={Journal of Machine Learning Research},
volume={12},
pages={2825--2830},
year={2011}
}
```
## Further References
|
huggingface/transformers/blob/main/docs/source/en/model_doc/timesformer.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# TimeSformer
## Overview
The TimeSformer model was proposed in [TimeSformer: Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Facebook Research.
This work is a milestone in the action recognition field, being the first video transformer. It inspired many transformer-based video understanding and classification papers.
The abstract from the paper is the following:
*We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named "TimeSformer," adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that "divided attention," where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically new design, TimeSformer achieves state-of-the-art results on several action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Finally, compared to 3D convolutional networks, our model is faster to train, it can achieve dramatically higher test efficiency (at a small drop in accuracy), and it can also be applied to much longer video clips (over one minute long). Code and models are available at: [this https URL](https://github.com/facebookresearch/TimeSformer).*
This model was contributed by [fcakyon](https://huggingface.co/fcakyon).
The original code can be found [here](https://github.com/facebookresearch/TimeSformer).
## Usage tips
There are many pretrained variants. Select your pretrained model based on the dataset it was trained on. Moreover,
the number of input frames per clip changes based on the model size, so you should consider this parameter while selecting your pretrained model.
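For a quick sanity check, here is a minimal inference sketch. It assumes the publicly available `facebook/timesformer-base-finetuned-k400` checkpoint (trained on Kinetics-400 and expecting 8 frames per clip) and uses random frames in place of a real video:

```python
import numpy as np
import torch
from transformers import AutoImageProcessor, TimesformerForVideoClassification

# 8 random frames stand in for a real clip; use the frame count your checkpoint expects
video = list(np.random.randn(8, 3, 224, 224))

processor = AutoImageProcessor.from_pretrained("facebook/timesformer-base-finetuned-k400")
model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-base-finetuned-k400")

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# map the top logit to a Kinetics-400 label
print("Predicted class:", model.config.id2label[logits.argmax(-1).item()])
```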
## Resources
- [Video classification task guide](../tasks/video_classification)
## TimesformerConfig
[[autodoc]] TimesformerConfig
## TimesformerModel
[[autodoc]] TimesformerModel
- forward
## TimesformerForVideoClassification
[[autodoc]] TimesformerForVideoClassification
- forward
|
huggingface/transformers/blob/main/docs/source/en/model_doc/swinv2.md
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Swin Transformer V2
## Overview
The Swin Transformer V2 model was proposed in [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
The abstract from the paper is the following:
*Large-scale NLP models have been shown to significantly improve the performance on language tasks with no signs of saturation. They also demonstrate amazing few-shot capabilities like that of human beings. This paper aims to explore large-scale models in computer vision. We tackle three major issues in training and application of large vision models, including training instability, resolution gaps between pre-training and fine-tuning, and hunger on labelled data. Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) A log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) A self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images. Through these techniques, this paper successfully trained a 3 billion-parameter Swin Transformer V2 model, which is the largest dense vision model to date, and makes it capable of training with images of up to 1,536×1,536 resolution. It set new performance records on 4 representative vision tasks, including ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Also note our training is much more efficient than that in Google's billion-level visual models, which consumes 40 times less labelled data and 40 times less training time.*
This model was contributed by [nandwalritik](https://huggingface.co/nandwalritik).
The original code can be found [here](https://github.com/microsoft/Swin-Transformer).
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Swin Transformer v2.
<PipelineTag pipeline="image-classification"/>
- [`Swinv2ForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)
Besides that:
- [`Swinv2ForMaskedImageModeling`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining).
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
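As a minimal illustration of the image classification use case listed above, the sketch below assumes the publicly available `microsoft/swinv2-tiny-patch4-window8-256` checkpoint (an ImageNet-1k classifier) and a sample COCO image:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Swinv2ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")
model = Swinv2ForImageClassification.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")

inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# decode the top prediction into an ImageNet-1k label
print(model.config.id2label[logits.argmax(-1).item()])
```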
## Swinv2Config
[[autodoc]] Swinv2Config
## Swinv2Model
[[autodoc]] Swinv2Model
- forward
## Swinv2ForMaskedImageModeling
[[autodoc]] Swinv2ForMaskedImageModeling
- forward
## Swinv2ForImageClassification
[[autodoc]] transformers.Swinv2ForImageClassification
- forward
|
huggingface/transformers/blob/main/docs/source/en/model_doc/rembert.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# RemBERT
## Overview
The RemBERT model was proposed in [Rethinking Embedding Coupling in Pre-trained Language Models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, Sebastian Ruder.
The abstract from the paper is the following:
*We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art
pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to
significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By
reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on
standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that
allocating additional capacity to the output embedding provides benefits to the model that persist through the
fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger
output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage
Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these
findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the
number of parameters at the fine-tuning stage.*
## Usage tips
For fine-tuning, RemBERT can be thought of as a bigger version of mBERT with an ALBERT-like factorization of the
embedding layer. The embeddings are not tied in pre-training, in contrast with BERT, which enables smaller input
embeddings (preserved during fine-tuning) and bigger output embeddings (discarded at fine-tuning). The tokenizer is
also similar to the Albert one rather than the BERT one.
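As a quick sketch of masked language modeling with RemBERT (assuming the publicly available `google/rembert` checkpoint):

```python
import torch
from transformers import AutoTokenizer, RemBertForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = RemBertForMaskedLM.from_pretrained("google/rembert")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# decode the highest-scoring token at the masked position
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
print(tokenizer.decode(logits[0, mask_index].argmax(-1)))
```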
## Resources
- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
## RemBertConfig
[[autodoc]] RemBertConfig
## RemBertTokenizer
[[autodoc]] RemBertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
## RemBertTokenizerFast
[[autodoc]] RemBertTokenizerFast
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
<frameworkcontent>
<pt>
## RemBertModel
[[autodoc]] RemBertModel
- forward
## RemBertForCausalLM
[[autodoc]] RemBertForCausalLM
- forward
## RemBertForMaskedLM
[[autodoc]] RemBertForMaskedLM
- forward
## RemBertForSequenceClassification
[[autodoc]] RemBertForSequenceClassification
- forward
## RemBertForMultipleChoice
[[autodoc]] RemBertForMultipleChoice
- forward
## RemBertForTokenClassification
[[autodoc]] RemBertForTokenClassification
- forward
## RemBertForQuestionAnswering
[[autodoc]] RemBertForQuestionAnswering
- forward
</pt>
<tf>
## TFRemBertModel
[[autodoc]] TFRemBertModel
- call
## TFRemBertForMaskedLM
[[autodoc]] TFRemBertForMaskedLM
- call
## TFRemBertForCausalLM
[[autodoc]] TFRemBertForCausalLM
- call
## TFRemBertForSequenceClassification
[[autodoc]] TFRemBertForSequenceClassification
- call
## TFRemBertForMultipleChoice
[[autodoc]] TFRemBertForMultipleChoice
- call
## TFRemBertForTokenClassification
[[autodoc]] TFRemBertForTokenClassification
- call
## TFRemBertForQuestionAnswering
[[autodoc]] TFRemBertForQuestionAnswering
- call
</tf>
</frameworkcontent>
|
huggingface/diffusers/blob/main/docs/source/en/using-diffusers/inference_with_lcm_lora.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
[[open-in-colab]]
# Performing inference with LCM-LoRA
Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps, making it possible to use diffusion models in almost real-time settings.
From the [official website](https://latent-consistency-models.github.io/):
> LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations.
For a more technical overview of LCMs, refer to [the paper](https://huggingface.co/papers/2310.04378).
However, each model needs to be distilled separately for latent consistency distillation. The core idea with LCM-LoRA is to train just a few adapter layers, the adapter being LoRA in this case.
This way, we don't have to train the full model and keep the number of trainable parameters manageable. The resulting LoRAs can then be applied to any fine-tuned version of the model without distilling them separately.
Additionally, the LoRAs can be applied to image-to-image, ControlNet/T2I-Adapter, inpainting, AnimateDiff etc.
The LCM-LoRA can also be combined with other LoRAs to generate styled images in very few steps (4-8).
LCM-LoRAs are available for [stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5), [stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), and the [SSD-1B](https://huggingface.co/segmind/SSD-1B) model. All the checkpoints can be found in this [collection](https://huggingface.co/collections/latent-consistency/latent-consistency-models-loras-654cdd24e111e16f0865fba6).
For more details about LCM-LoRA, refer to [the technical report](https://huggingface.co/papers/2311.05556).
This guide shows how to perform inference with LCM-LoRAs for
- text-to-image
- image-to-image
- combined with styled LoRAs
- ControlNet/T2I-Adapter
- inpainting
- AnimateDiff
Before going through this guide, we'll take a look at the general workflow for performing inference with LCM-LoRAs.
LCM-LoRAs are similar to other Stable Diffusion LoRAs so they can be used with any [`DiffusionPipeline`] that supports LoRAs.
- Load the task specific pipeline and model.
- Set the scheduler to [`LCMScheduler`].
- Load the LCM-LoRA weights for the model.
- Reduce the `guidance_scale` to a value between [1.0, 2.0] and set `num_inference_steps` between [4, 8].
- Perform inference with the pipeline with the usual parameters.
Let's look at how we can perform inference with LCM-LoRAs for different tasks.
First, make sure you have [peft](https://github.com/huggingface/peft) installed, for better LoRA support.
```bash
pip install -U peft
```
## Text-to-image
You'll use the [`StableDiffusionXLPipeline`] with the scheduler: [`LCMScheduler`] and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow overcoming the slow iterative nature of diffusion models.
```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
variant="fp16",
torch_dtype=torch.float16
).to("cuda")
# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
# load LCM-LoRA
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")
prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
generator = torch.manual_seed(42)
image = pipe(
prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0
).images[0]
```

Notice that we use only 4 steps for generation, which is far fewer than what's typically used for standard SDXL.
<Tip>
You may have noticed that we set `guidance_scale=1.0`, which disables classifier-free guidance. This is because the LCM-LoRA is trained with guidance, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don't have any effect on the denoising process.
You can also use guidance with LCM-LoRA, but due to the nature of training, the model is very sensitive to the `guidance_scale` values; high values can lead to artifacts in the generated images. In our experiments, we found that the best values are in the range of [1.0, 2.0].
</Tip>
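For instance, here is a sketch of enabling guidance, reusing the `pipe` and `prompt` from the snippet above (the negative prompt is a hypothetical example):

```python
# guidance_scale > 1 re-enables classifier-free guidance, so the negative prompt now has an effect
image = pipe(
    prompt=prompt,
    negative_prompt="blurry, low quality",
    num_inference_steps=4,
    guidance_scale=1.5,
    generator=torch.manual_seed(42),
).images[0]
```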
### Inference with a fine-tuned model
As mentioned above, the LCM-LoRA can be applied to any fine-tuned version of the model without having to distill them separately. Let's look at how we can perform inference with a fine-tuned model. In this example, we'll use the [animagine-xl](https://huggingface.co/Linaqruf/animagine-xl) model, which is a fine-tuned version of the SDXL model for generating anime.
```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler
pipe = DiffusionPipeline.from_pretrained(
"Linaqruf/animagine-xl",
variant="fp16",
torch_dtype=torch.float16
).to("cuda")
# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
# load LCM-LoRA
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")
prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck"
generator = torch.manual_seed(0)
image = pipe(
prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0
).images[0]
```

## Image-to-image
LCM-LoRA can be applied to image-to-image tasks too. Let's look at how we can perform image-to-image generation with LCMs. For this example we'll use the [dreamshaper-7](https://huggingface.co/Lykon/dreamshaper-7) model and the LCM-LoRA for `stable-diffusion-v1-5`.
```python
import torch
from diffusers import AutoPipelineForImage2Image, LCMScheduler
from diffusers.utils import make_image_grid, load_image
pipe = AutoPipelineForImage2Image.from_pretrained(
"Lykon/dreamshaper-7",
torch_dtype=torch.float16,
variant="fp16",
).to("cuda")
# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
# load LCM-LoRA
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)
prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k"
# pass prompt and image to pipeline
generator = torch.manual_seed(0)
image = pipe(
prompt,
image=init_image,
num_inference_steps=4,
guidance_scale=1,
strength=0.6,
generator=generator
).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

<Tip>
You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for the `num_inference_steps`, `strength`, and `guidance_scale` parameters and choosing the one that works best.
</Tip>
## Combine with styled LoRAs
LCM-LoRA can be combined with other LoRAs to generate styled images in very few steps (4-8). In the following example, we'll use the LCM-LoRA with the [papercut LoRA](https://huggingface.co/TheLastBen/Papercut_SDXL).
To learn more about how to combine LoRAs, refer to [this guide](https://huggingface.co/docs/diffusers/tutorials/using_peft_for_inference#combine-multiple-adapters).
```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
variant="fp16",
torch_dtype=torch.float16
).to("cuda")
# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
# load LoRAs
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", adapter_name="lcm")
pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut")
# Combine LoRAs
pipe.set_adapters(["lcm", "papercut"], adapter_weights=[1.0, 0.8])
prompt = "papercut, a cute fox"
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=4, guidance_scale=1, generator=generator).images[0]
image
```

## ControlNet/T2I-Adapter
Let's look at how we can perform inference with ControlNet/T2I-Adapter and LCM-LoRA.
### ControlNet
For this example, we'll use the SD-v1-5 model and the LCM-LoRA for SD-v1-5 with canny ControlNet.
```python
import torch
import cv2
import numpy as np
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler
from diffusers.utils import load_image, make_image_grid
image = load_image(
"https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
).resize((512, 512))
image = np.array(image)
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image)
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
controlnet=controlnet,
torch_dtype=torch.float16,
safety_checker=None,
variant="fp16"
).to("cuda")
# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
# load LCM-LoRA
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
generator = torch.manual_seed(0)
image = pipe(
"the mona lisa",
image=canny_image,
num_inference_steps=4,
guidance_scale=1.5,
controlnet_conditioning_scale=0.8,
cross_attention_kwargs={"scale": 1},
generator=generator,
).images[0]
make_image_grid([canny_image, image], rows=1, cols=2)
```

<Tip>
The inference parameters in this example might not work for all examples, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale` and `cross_attention_kwargs` parameters and choosing the one that works best.
</Tip>
### T2I-Adapter
This example shows how to use the LCM-LoRA with the [Canny T2I-Adapter](https://huggingface.co/TencentARC/t2i-adapter-canny-sdxl-1.0) and SDXL.
```python
import torch
import cv2
import numpy as np
from PIL import Image
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, LCMScheduler
from diffusers.utils import load_image, make_image_grid
# Prepare image
# Detect the canny map in low resolution to avoid high-frequency details
image = load_image(
"https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg"
).resize((384, 384))
image = np.array(image)
low_threshold = 100
high_threshold = 200
image = cv2.Canny(image, low_threshold, high_threshold)
image = image[:, :, None]
image = np.concatenate([image, image, image], axis=2)
canny_image = Image.fromarray(image).resize((1024, 1024))
# load adapter
adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16").to("cuda")
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
adapter=adapter,
torch_dtype=torch.float16,
variant="fp16",
).to("cuda")
# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
# load LCM-LoRA
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")
prompt = "Mystical fairy in real, magic, 4k picture, high quality"
negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured"
generator = torch.manual_seed(0)
image = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
image=canny_image,
num_inference_steps=4,
guidance_scale=1.5,
adapter_conditioning_scale=0.8,
adapter_conditioning_factor=1,
generator=generator,
).images[0]
make_image_grid([canny_image, image], rows=1, cols=2)
```

## Inpainting
LCM-LoRA can be used for inpainting as well.
```python
import torch
from diffusers import AutoPipelineForInpainting, LCMScheduler
from diffusers.utils import load_image, make_image_grid
pipe = AutoPipelineForInpainting.from_pretrained(
"runwayml/stable-diffusion-inpainting",
torch_dtype=torch.float16,
variant="fp16",
).to("cuda")
# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
# load LCM-LoRA
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
# load base and mask image
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")
# generator = torch.Generator("cuda").manual_seed(92)
prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
generator = torch.manual_seed(0)
image = pipe(
prompt=prompt,
image=init_image,
mask_image=mask_image,
generator=generator,
num_inference_steps=4,
guidance_scale=4,
).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)
```

## AnimateDiff
[`AnimateDiff`] allows you to animate images using Stable Diffusion models. To get good results, we need to generate multiple frames (16-24), and doing this with standard SD models can be very slow.
LCM-LoRA can be used to speed up the process significantly, as you just need to do 4-8 steps for each frame. Let's look at how we can perform animation with LCM-LoRA and AnimateDiff.
```python
import torch
from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler, LCMScheduler
from diffusers.utils import export_to_gif
adapter = MotionAdapter.from_pretrained("diffusers/animatediff-motion-adapter-v1-5")
pipe = AnimateDiffPipeline.from_pretrained(
"frankjoshua/toonyou_beta6",
motion_adapter=adapter,
).to("cuda")
# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
# load LCM-LoRA
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm")
pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-in", weight_name="diffusion_pytorch_model.safetensors", adapter_name="motion-lora")
pipe.set_adapters(["lcm", "motion-lora"], adapter_weights=[0.55, 1.2])
prompt = "best quality, masterpiece, 1girl, looking at viewer, blurry background, upper body, contemporary, dress"
generator = torch.manual_seed(0)
frames = pipe(
prompt=prompt,
num_inference_steps=5,
guidance_scale=1.25,
cross_attention_kwargs={"scale": 1},
num_frames=24,
generator=generator
).frames[0]
export_to_gif(frames, "animation.gif")
```

|
huggingface/diffusers/blob/main/examples/consistency_distillation/README_sdxl.md
|
Latent Consistency Distillation Example:
[Latent Consistency Models (LCMs)](https://arxiv.org/abs/2310.04378) propose a method to distill a latent diffusion model to enable swift inference with minimal steps. This example demonstrates how to use latent consistency distillation to distill SDXL for inference with few timesteps.
## Full model distillation
### Running locally with PyTorch
#### Installing the dependencies
Before running the scripts, make sure to install the library's training dependencies:
**Important**
To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```
Then cd into the example folder and run
```bash
pip install -r requirements.txt
```
And initialize an [🤗 Accelerate](https://github.com/huggingface/accelerate/) environment with:
```bash
accelerate config
```
Or for a default accelerate configuration without answering questions about your environment
```bash
accelerate config default
```
Or, if your environment doesn't support an interactive shell (e.g., a notebook):
```python
from accelerate.utils import write_basic_config
write_basic_config()
```
When running `accelerate config`, specifying torch compile mode as True can give dramatic speedups.
#### Example
The following uses the [Conceptual Captions 12M (CC12M) dataset](https://github.com/google-research-datasets/conceptual-12m) as an example, and for illustrative purposes only. For best results you may consider large and high-quality text-image datasets such as [LAION](https://laion.ai/blog/laion-400-open-dataset/). You may also need to search the hyperparameter space according to the dataset you use.
```bash
export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
export OUTPUT_DIR="path/to/saved/model"
accelerate launch train_lcm_distill_sdxl_wds.py \
--pretrained_teacher_model=$MODEL_NAME \
--pretrained_vae_model_name_or_path=madebyollin/sdxl-vae-fp16-fix \
--output_dir=$OUTPUT_DIR \
--mixed_precision=fp16 \
--resolution=1024 \
--learning_rate=1e-6 --loss_type="huber" --use_fix_crop_and_size --ema_decay=0.95 --adam_weight_decay=0.0 \
--max_train_steps=1000 \
--max_train_samples=4000000 \
--dataloader_num_workers=8 \
--train_shards_path_or_url="pipe:curl -L -s https://huggingface.co/datasets/laion/conceptual-captions-12m-webdataset/resolve/main/data/{00000..01099}.tar?download=true" \
--validation_steps=200 \
--checkpointing_steps=200 --checkpoints_total_limit=10 \
--train_batch_size=12 \
--gradient_checkpointing --enable_xformers_memory_efficient_attention \
--gradient_accumulation_steps=1 \
--use_8bit_adam \
--resume_from_checkpoint=latest \
--report_to=wandb \
--seed=453645634 \
--push_to_hub
```
## LCM-LoRA
Instead of fine-tuning the full model, we can also just train a LoRA that can be injected into any SDXL model.
### Example
The following uses the [Conceptual Captions 12M (CC12M) dataset](https://github.com/google-research-datasets/conceptual-12m) as an example. For best results you may consider large and high-quality text-image datasets such as [LAION](https://laion.ai/blog/laion-400-open-dataset/).
```bash
export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
export OUTPUT_DIR="path/to/saved/model"
accelerate launch train_lcm_distill_lora_sdxl_wds.py \
--pretrained_teacher_model=$MODEL_NAME \
--pretrained_vae_model_name_or_path=madebyollin/sdxl-vae-fp16-fix \
--output_dir=$OUTPUT_DIR \
--mixed_precision=fp16 \
--resolution=1024 \
--lora_rank=64 \
--learning_rate=1e-6 --loss_type="huber" --use_fix_crop_and_size --adam_weight_decay=0.0 \
--max_train_steps=1000 \
--max_train_samples=4000000 \
--dataloader_num_workers=8 \
--train_shards_path_or_url="pipe:curl -L -s https://huggingface.co/datasets/laion/conceptual-captions-12m-webdataset/resolve/main/data/{00000..01099}.tar?download=true" \
--validation_steps=200 \
--checkpointing_steps=200 --checkpoints_total_limit=10 \
--train_batch_size=12 \
--gradient_checkpointing --enable_xformers_memory_efficient_attention \
--gradient_accumulation_steps=1 \
--use_8bit_adam \
--resume_from_checkpoint=latest \
--report_to=wandb \
--seed=453645634 \
--push_to_hub
```
|
huggingface/transformers/blob/main/docs/source/en/model_doc/autoformer.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Autoformer
## Overview
The Autoformer model was proposed in [Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting](https://arxiv.org/abs/2106.13008) by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long.
This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process.
The abstract from the paper is the following:
*Extending the forecasting time is a critical demand for real applications, such as extreme weather early warning and long-term energy consumption planning. This paper studies the long-term forecasting problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover the long-range dependencies. However, intricate temporal patterns of the long-term future prohibit the model from finding reliable dependencies. Also, Transformers have to adopt the sparse versions of point-wise self-attentions for long series efficiency, resulting in the information utilization bottleneck. Going beyond Transformers, we design Autoformer as a novel decomposition architecture with an Auto-Correlation mechanism. We break with the pre-processing convention of series decomposition and renovate it as a basic inner block of deep models. This design empowers Autoformer with progressive decomposition capacities for complex time series. Further, inspired by the stochastic process theory, we design the Auto-Correlation mechanism based on the series periodicity, which conducts the dependencies discovery and representation aggregation at the sub-series level. Auto-Correlation outperforms self-attention in both efficiency and accuracy. In long-term forecasting, Autoformer yields state-of-the-art accuracy, with a 38% relative improvement on six benchmarks, covering five practical applications: energy, traffic, economics, weather and disease.*
This model was contributed by [elisim](https://huggingface.co/elisim) and [kashif](https://huggingface.co/kashif).
The original code can be found [here](https://github.com/thuml/Autoformer).
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
- Check out the Autoformer blog post on the Hugging Face blog: [Yes, Transformers are Effective for Time Series Forecasting (+ Autoformer)](https://huggingface.co/blog/autoformer)
## AutoformerConfig
[[autodoc]] AutoformerConfig
## AutoformerModel
[[autodoc]] AutoformerModel
- forward
## AutoformerForPrediction
[[autodoc]] AutoformerForPrediction
- forward
|
huggingface/blog/blob/main/hub-duckdb.md
|
---
title: "DuckDB: analyze 50,000+ datasets stored on the Hugging Face Hub"
thumbnail: /blog/assets/hub_duckdb/hub_duckdb.png
authors:
- user: stevhliu
- user: lhoestq
- user: severo
---
# DuckDB: run SQL queries on 50,000+ datasets on the Hugging Face Hub
The Hugging Face Hub is dedicated to providing open access to datasets for everyone and giving users the tools to explore and understand them. You can find many of the datasets used to train popular large language models (LLMs) like [Falcon](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k), [MPT](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf), and [StarCoder](https://huggingface.co/datasets/bigcode/the-stack). There are tools for addressing fairness and bias in datasets like [Disaggregators](https://huggingface.co/spaces/society-ethics/disaggregators), and tools for previewing examples inside a dataset like the Dataset Viewer.
<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets-server/oasst1_light.png"/>
</div>
<small>A preview of the OpenAssistant dataset with the Dataset Viewer.</small>
We are happy to share that we recently added another feature to help you analyze datasets on the Hub; you can run SQL queries with DuckDB on any dataset stored on the Hub! According to the 2022 [StackOverflow Developer Survey](https://survey.stackoverflow.co/2022/#section-most-popular-technologies-programming-scripting-and-markup-languages), SQL is the 3rd most popular programming language. We also wanted a fast database management system (DBMS) designed for running analytical queries, which is why we’re excited about integrating with [DuckDB](https://duckdb.org/). We hope this allows even more users to access and analyze datasets on the Hub!
## TLDR
[Datasets Server](https://huggingface.co/docs/datasets-server/index) **automatically converts all public datasets on the Hub to Parquet files**, which you can see by clicking on the "Auto-converted to Parquet" button at the top of a dataset page. You can also access the list of Parquet file URLs with a simple HTTP call.
```py
import requests

r = requests.get("https://datasets-server.huggingface.co/parquet?dataset=blog_authorship_corpus")
j = r.json()
urls = [f['url'] for f in j['parquet_files'] if f['split'] == 'train']
urls
['https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/blog_authorship_corpus-train-00000-of-00002.parquet',
'https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/blog_authorship_corpus-train-00001-of-00002.parquet']
```
Create a connection to DuckDB and install and load the `httpfs` extension to allow reading and writing remote files:
```py
import duckdb
url = "https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/blog_authorship_corpus-train-00000-of-00002.parquet"
con = duckdb.connect()
con.execute("INSTALL httpfs;")
con.execute("LOAD httpfs;")
```
Once you’re connected, you can start writing SQL queries!
```py
con.sql(f"""SELECT horoscope,
count(*),
AVG(LENGTH(text)) AS avg_blog_length
FROM '{url}'
GROUP BY horoscope
ORDER BY avg_blog_length
DESC LIMIT(5)"""
)
```
To learn more, check out the [documentation](https://huggingface.co/docs/datasets-server/parquet_process).
## From dataset to Parquet
[Parquet](https://parquet.apache.org/docs/) files are columnar, making them more efficient to store, load and analyze. This is especially important when you're working with large datasets, which we’re seeing more and more of in the LLM era. To support this, Datasets Server automatically converts and publishes any public dataset on the Hub as Parquet files. The URL to the Parquet files can be retrieved with the [`/parquet`](https://huggingface.co/docs/datasets-server/quick_start#access-parquet-files) endpoint.
## Analyze with DuckDB
DuckDB offers super impressive performance for running complex analytical queries. It is able to execute a SQL query directly on a remote Parquet file without any overhead. With the [`httpfs`](https://duckdb.org/docs/extensions/httpfs) extension, DuckDB is able to query remote files such as datasets stored on the Hub using the URL provided from the `/parquet` endpoint. DuckDB also supports querying multiple Parquet files which is really convenient because Datasets Server shards big datasets into smaller 500MB chunks.
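For example, here is a sketch that reuses the `con` connection and the `urls` list from the snippets above to query both train shards in a single statement (DuckDB's `read_parquet` accepts a list of files):

```py
# query both Parquet shards at once; {urls!r} expands to a DuckDB list of file URLs
con.sql(f"""
    SELECT horoscope, COUNT(*) AS n_blogs
    FROM read_parquet({urls!r})
    GROUP BY horoscope
    ORDER BY n_blogs DESC
    LIMIT 5
""")
```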
## Looking forward
Knowing what’s inside a dataset is important for developing models because it can impact model quality in all sorts of ways! By allowing users to write and execute any SQL query on Hub datasets, we enable open access to datasets and help users become more aware of a dataset’s contents. We are excited for you to try this out, and we’re looking forward to what kind of insights your analysis uncovers!
|
gradio-app/gradio/blob/main/guides/06_integrating-other-frameworks/Gradio-and-Wandb-Integration.md
|
Gradio and W&B Integration
Related spaces: https://huggingface.co/spaces/akhaliq/JoJoGAN
Tags: WANDB, SPACES
Contributed by Gradio team
## Introduction
In this Guide, we'll walk you through:
- An introduction to Gradio, Hugging Face Spaces, and W&B
- How to set up a Gradio demo using the W&B integration for JoJoGAN
- How to contribute your own Gradio demos to the Wandb organization on Hugging Face after tracking your experiments on W&B
## What is Wandb?
Weights and Biases (W&B) allows data scientists and machine learning scientists to track their machine learning experiments at every stage, from training to production. Any metric can be aggregated over samples and shown in panels in a customizable and searchable dashboard, like below:
<img alt="Screen Shot 2022-08-01 at 5 54 59 PM" src="https://user-images.githubusercontent.com/81195143/182252755-4a0e1ca8-fd25-40ff-8c91-c9da38aaa9ec.png">
## What are Hugging Face Spaces & Gradio?
### Gradio
Gradio lets users demo their machine learning models as a web app, all in a few lines of Python. Gradio wraps any Python function (such as a machine learning model's inference function) into a user interface and the demos can be launched inside jupyter notebooks, colab notebooks, as well as embedded in your own website and hosted on Hugging Face Spaces for free.
Get started [here](https://gradio.app/getting_started)
### Hugging Face Spaces
Hugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with 3 SDK options: Gradio, Streamlit and Static HTML demos. Spaces can be public or private and the workflow is similar to GitHub repos. There are over 2,000 Spaces currently on Hugging Face. Learn more about Spaces [here](https://huggingface.co/spaces/launch).
## Setting up a Gradio Demo for JoJoGAN
Now, let's walk you through how to do this on your own. We'll make the assumption that you're new to W&B and Gradio for the purposes of this tutorial.
Let's get started!
1. Create a W&B account
Follow [these quick instructions](https://app.wandb.ai/login) to create your free account if you don’t have one already. It shouldn't take more than a couple minutes. Once you're done (or if you've already got an account), next, we'll run a quick colab.
2. Open Colab Install Gradio and W&B
We'll be following along with the colab provided in the JoJoGAN repo with some minor modifications to use Wandb and Gradio more effectively.
[](https://colab.research.google.com/github/mchong6/JoJoGAN/blob/main/stylize.ipynb)
Install Gradio and Wandb at the top:
```sh
pip install gradio wandb
```
3. Finetune StyleGAN and W&B experiment tracking
This next step will open a W&B dashboard to track your experiments and a Gradio panel, hosted on Hugging Face Spaces, that lets you choose pretrained models from a drop-down menu. Here's the code you need for that:
```python
alpha = 1.0
alpha = 1-alpha
preserve_color = True
num_iter = 100
log_interval = 50
samples = []
column_names = ["Reference (y)", "Style Code(w)", "Real Face Image(x)"]
wandb.init(project="JoJoGAN")
config = wandb.config
config.num_iter = num_iter
config.preserve_color = preserve_color
wandb.log(
{"Style reference": [wandb.Image(transforms.ToPILImage()(target_im))]},
step=0)
# load discriminator for perceptual loss
discriminator = Discriminator(1024, 2).eval().to(device)
ckpt = torch.load('models/stylegan2-ffhq-config-f.pt', map_location=lambda storage, loc: storage)
discriminator.load_state_dict(ckpt["d"], strict=False)
# reset generator
del generator
generator = deepcopy(original_generator)
g_optim = optim.Adam(generator.parameters(), lr=2e-3, betas=(0, 0.99))
# Which layers to swap for generating a family of plausible real images -> fake image
if preserve_color:
id_swap = [9,11,15,16,17]
else:
id_swap = list(range(7, generator.n_latent))
for idx in tqdm(range(num_iter)):
mean_w = generator.get_latent(torch.randn([latents.size(0), latent_dim]).to(device)).unsqueeze(1).repeat(1, generator.n_latent, 1)
in_latent = latents.clone()
in_latent[:, id_swap] = alpha*latents[:, id_swap] + (1-alpha)*mean_w[:, id_swap]
img = generator(in_latent, input_is_latent=True)
with torch.no_grad():
real_feat = discriminator(targets)
fake_feat = discriminator(img)
loss = sum([F.l1_loss(a, b) for a, b in zip(fake_feat, real_feat)])/len(fake_feat)
wandb.log({"loss": loss}, step=idx)
if idx % log_interval == 0:
generator.eval()
my_sample = generator(my_w, input_is_latent=True)
generator.train()
my_sample = transforms.ToPILImage()(utils.make_grid(my_sample, normalize=True, range=(-1, 1)))
wandb.log(
{"Current stylization": [wandb.Image(my_sample)]},
step=idx)
table_data = [
wandb.Image(transforms.ToPILImage()(target_im)),
wandb.Image(img),
wandb.Image(my_sample),
]
samples.append(table_data)
g_optim.zero_grad()
loss.backward()
g_optim.step()
out_table = wandb.Table(data=samples, columns=column_names)
wandb.log({"Current Samples": out_table})
```
4. Save, Download, and Load Model
Here's how to save and download your model.
```python
from PIL import Image
import torch
torch.backends.cudnn.benchmark = True
from torchvision import transforms, utils
from util import *
import math
import random
import numpy as np
from torch import nn, autograd, optim
from torch.nn import functional as F
from tqdm import tqdm
import lpips
from model import *
from e4e_projection import projection as e4e_projection
from copy import deepcopy
import imageio
import matplotlib.pyplot as plt  # needed for the plt.rcParams call below
import os
import sys
import torchvision.transforms as transforms
from argparse import Namespace
from e4e.models.psp import pSp
from util import *
from huggingface_hub import hf_hub_download
from google.colab import files
torch.save({"g": generator.state_dict()}, "your-model-name.pt")
files.download('your-model-name.pt')
latent_dim = 512
device="cuda"
model_path_s = hf_hub_download(repo_id="akhaliq/jojogan-stylegan2-ffhq-config-f", filename="stylegan2-ffhq-config-f.pt")
original_generator = Generator(1024, latent_dim, 8, 2).to(device)
ckpt = torch.load(model_path_s, map_location=lambda storage, loc: storage)
original_generator.load_state_dict(ckpt["g_ema"], strict=False)
mean_latent = original_generator.mean_latent(10000)
generator = deepcopy(original_generator)
ckpt = torch.load("/content/JoJoGAN/your-model-name.pt", map_location=lambda storage, loc: storage)
generator.load_state_dict(ckpt["g"], strict=False)
generator.eval()
plt.rcParams['figure.dpi'] = 150
transform = transforms.Compose(
[
transforms.Resize((1024, 1024)),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]
)
def inference(img):
img.save('out.jpg')
aligned_face = align_face('out.jpg')
my_w = e4e_projection(aligned_face, "out.pt", device).unsqueeze(0)
with torch.no_grad():
my_sample = generator(my_w, input_is_latent=True)
npimage = my_sample[0].cpu().permute(1, 2, 0).detach().numpy()
imageio.imwrite('filename.jpeg', npimage)
return 'filename.jpeg'
```
5. Build a Gradio Demo
```python
import gradio as gr
title = "JoJoGAN"
description = "Gradio Demo for JoJoGAN: One Shot Face Stylization. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below."
demo = gr.Interface(
inference,
gr.Image(type="pil"),
gr.Image(type="file"),
title=title,
description=description
)
demo.launch(share=True)
```
6. Integrate Gradio into your W&B Dashboard
The last step—integrating your Gradio demo with your W&B dashboard—is just one extra line:
```python
demo.integrate(wandb=wandb)
```
Once you call integrate, a demo will be created and you can integrate it into your dashboard or report.
Outside of W&B, using web components and the gradio-app tags allows anyone to embed Gradio demos hosted on HF Spaces directly into their blogs, websites, documentation, etc.:
```html
<gradio-app space="akhaliq/JoJoGAN"> </gradio-app>
```
7. (Optional) Embed W&B plots in your Gradio App
It's also possible to embed W&B plots within Gradio apps. To do so, you can create a W&B Report of your plots and
embed them within your Gradio app within a `gr.HTML` block.
The Report will need to be public and you will need to wrap the URL within an iFrame like this:
```python
import gradio as gr
def wandb_report(url):
    iframe = f'<iframe src={url} style="border:none;height:1024px;width:100%"></iframe>'
return gr.HTML(iframe)
with gr.Blocks() as demo:
report_url = 'https://wandb.ai/_scott/pytorch-sweeps-demo/reports/loss-22-10-07-16-00-17---VmlldzoyNzU2NzAx'
report = wandb_report(report_url)
demo.launch(share=True)
```
## Conclusion
We hope you enjoyed this brief demo of embedding a Gradio demo into a W&B report! Thanks for making it to the end. To recap:
- Only a single reference image is needed for fine-tuning JoJoGAN, which usually takes about 1 minute on a GPU in Colab. After training, the style can be applied to any input image. Read more in the paper.
- W&B tracks experiments with just a few lines of code added to a colab and you can visualize, sort, and understand your experiments in a single, centralized dashboard.
- Gradio, meanwhile, demos the model in a user friendly interface to share anywhere on the web.
## How to contribute Gradio demos on HF spaces on the Wandb organization
- Create an account on Hugging Face [here](https://huggingface.co/join).
- Add Gradio Demo under your username, see this [course](https://huggingface.co/course/chapter9/4?fw=pt) for setting up Gradio Demo on Hugging Face.
- Request to join wandb organization [here](https://huggingface.co/wandb).
- Once approved, transfer the model from your username to the Wandb organization
|
gradio-app/gradio/blob/main/demo/duplicatebutton_component/run.ipynb
|
Gradio Demo: duplicatebutton_component
```
!pip install -q gradio
```
```
import gradio as gr
with gr.Blocks() as demo:
gr.DuplicateButton()
demo.launch()
```
|
huggingface/hub-docs/blob/main/docs/hub/models-widgets-examples.md
|
Widget Examples
Note that each widget example can also optionally describe the corresponding model output, directly in the `output` property. See [the spec](./models-widgets#example-outputs) for more details.
## Natural Language Processing
### Fill-Mask
```yaml
widget:
- text: "Paris is the <mask> of France."
example_title: "Capital"
- text: "The goal of life is <mask>."
example_title: "Philosophy"
```
### Question Answering
```yaml
widget:
- text: "What's my name?"
context: "My name is Clara and I live in Berkeley."
example_title: "Name"
- text: "Where do I live?"
context: "My name is Sarah and I live in London"
example_title: "Location"
```
### Summarization
```yaml
widget:
- text: "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."
example_title: "Eiffel Tower"
- text: "Laika, a dog that was the first living creature to be launched into Earth orbit, on board the Soviet artificial satellite Sputnik 2, on November 3, 1957. It was always understood that Laika would not survive the mission, but her actual fate was misrepresented for decades. Laika was a small (13 pounds [6 kg]), even-tempered, mixed-breed dog about two years of age. She was one of a number of stray dogs that were taken into the Soviet spaceflight program after being rescued from the streets. Only female dogs were used because they were considered to be anatomically better suited than males for close confinement."
example_title: "First in Space"
```
### Table Question Answering
```yaml
widget:
- text: "How many stars does the transformers repository have?"
table:
Repository:
- "Transformers"
- "Datasets"
- "Tokenizers"
Stars:
- 36542
- 4512
- 3934
Contributors:
- 651
- 77
- 34
Programming language:
- "Python"
- "Python"
- "Rust, Python and NodeJS"
example_title: "Github stars"
```
### Text Classification
```yaml
widget:
- text: "I love football so much"
example_title: "Positive"
- text: "I don't really like this type of food"
example_title: "Negative"
```
### Text Generation
```yaml
widget:
- text: "My name is Julien and I like to"
example_title: "Julien"
- text: "My name is Merve and my favorite"
example_title: "Merve"
```
### Text2Text Generation
```yaml
widget:
- text: "My name is Julien and I like to"
example_title: "Julien"
- text: "My name is Merve and my favorite"
example_title: "Merve"
```
### Token Classification
```yaml
widget:
- text: "My name is Sylvain and I live in Paris"
example_title: "Parisian"
- text: "My name is Sarah and I live in London"
example_title: "Londoner"
```
### Translation
```yaml
widget:
- text: "My name is Sylvain and I live in Paris"
example_title: "Parisian"
- text: "My name is Sarah and I live in London"
example_title: "Londoner"
```
### Zero-Shot Classification
```yaml
widget:
- text: "I have a problem with my car that needs to be resolved asap!!"
candidate_labels: "urgent, not urgent, phone, tablet, computer"
multi_class: true
example_title: "Car problem"
- text: "Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app."
candidate_labels: "mobile, website, billing, account access"
multi_class: false
example_title: "Phone issue"
```
### Sentence Similarity
```yaml
widget:
- source_sentence: "That is a happy person"
sentences:
- "That is a happy dog"
- "That is a very happy person"
- "Today is a sunny day"
example_title: "Happy"
```
### Conversational
```yaml
widget:
- text: "Hey my name is Julien! How are you?"
example_title: "Julien"
- text: "Hey my name is Clara! How are you?"
example_title: "Clara"
```
### Feature Extraction
```yaml
widget:
- text: "My name is Sylvain and I live in Paris"
example_title: "Parisian"
- text: "My name is Sarah and I live in London"
example_title: "Londoner"
```
## Audio
### Text-to-Speech
```yaml
widget:
- text: "My name is Sylvain and I live in Paris"
example_title: "Parisian"
- text: "My name is Sarah and I live in London"
example_title: "Londoner"
```
### Automatic Speech Recognition
```yaml
widget:
- src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
example_title: Librispeech sample 1
- src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
example_title: Librispeech sample 2
```
### Audio-to-Audio
```yaml
widget:
- src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
example_title: Librispeech sample 1
- src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
example_title: Librispeech sample 2
```
### Audio Classification
```yaml
widget:
- src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
example_title: Librispeech sample 1
- src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
example_title: Librispeech sample 2
```
### Voice Activity Detection
```yaml
widget:
- src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
example_title: Librispeech sample 1
- src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
example_title: Librispeech sample 2
```
## Computer Vision
### Image Classification
```yaml
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
```
### Object Detection
```yaml
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
```
### Image Segmentation
```yaml
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
```
### Image-to-Image
```yaml
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/canny-edge.jpg
prompt: Girl with Pearl Earring # `prompt` field is optional in case the underlying model supports text guidance
```
### Text-to-Image
```yaml
widget:
- text: "A cat playing with a ball"
example_title: "Cat"
- text: "A dog jumping over a fence"
example_title: "Dog"
```
### Document Question Answering
```yaml
widget:
- text: "What is the invoice number?"
src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png"
- text: "What is the purchase amount?"
src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/contract.jpeg"
```
### Visual Question Answering
```yaml
widget:
- text: "What animal is it?"
src: "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
- text: "Where is it?"
src: "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg"
```
### Zero-Shot Image Classification
```yaml
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
```
## Other
### Structured Data Classification
```yaml
widget:
- structured_data:
fixed_acidity:
- 7.4
- 7.8
- 10.3
volatile_acidity:
- 0.7
- 0.88
- 0.32
citric_acid:
- 0
- 0
- 0.45
residual_sugar:
- 1.9
- 2.6
- 6.4
chlorides:
- 0.076
- 0.098
- 0.073
free_sulfur_dioxide:
- 11
- 25
- 5
total_sulfur_dioxide:
- 34
- 67
- 13
density:
- 0.9978
- 0.9968
- 0.9976
pH:
- 3.51
- 3.2
- 3.23
sulphates:
- 0.56
- 0.68
- 0.82
alcohol:
- 9.4
- 9.8
- 12.6
example_title: "Wine"
```
|
huggingface/diffusers/blob/main/docs/source/en/api/models/unet.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# UNet1DModel
The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 1D UNet model.
The abstract from the paper is:
*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*
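As a quick, hedged illustration (a minimal sketch assuming the default configuration, which expects 2-channel 1D samples of length 65536), a `UNet1DModel` can be instantiated and run on a random sample like this:
```python
import torch
from diffusers import UNet1DModel

# Instantiate a 1D UNet with the library's default configuration
# (see the reference below for the full list of arguments).
model = UNet1DModel()

# Illustrative input; the shape must match the configured channels and sample size.
sample = torch.randn(1, 2, 65536)   # (batch_size, channels, sample_length)
timestep = torch.tensor([10])       # current diffusion timestep

with torch.no_grad():
    output = model(sample, timestep).sample  # same shape as the input

print(output.shape)
```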
## UNet1DModel
[[autodoc]] UNet1DModel
## UNet1DOutput
[[autodoc]] models.unet_1d.UNet1DOutput
|
huggingface/blog/blob/main/ai-webtv.md
|
---
title: "Building an AI WebTV"
thumbnail: /blog/assets/156_ai_webtv/thumbnail.gif
authors:
- user: jbilcke-hf
---
# Building an AI WebTV
The AI WebTV is an experimental demo to showcase the latest advancements in automatic video and music synthesis.
👉 Watch the stream now by going to the [AI WebTV Space](https://huggingface.co/spaces/jbilcke-hf/AI-WebTV).
If you are using a mobile device, you can view the stream from the [Twitch mirror](https://www.twitch.tv/ai_webtv).

# Concept
The motivation for the AI WebTV is to demo videos generated with open-source [text-to-video models](https://huggingface.co/tasks/text-to-video) such as Zeroscope and MusicGen, in an entertaining and accessible way.
You can find those open-source models on the Hugging Face hub:
- For video: [zeroscope_v2_576](https://huggingface.co/cerspense/zeroscope_v2_576w) and [zeroscope_v2_XL](https://huggingface.co/cerspense/zeroscope_v2_XL)
- For music: [musicgen-melody](https://huggingface.co/facebook/musicgen-melody)
The individual video sequences are purposely made to be short, meaning the WebTV should be seen as a tech demo/showreel rather than an actual show (with an art direction or programming).
# Architecture
The AI WebTV works by taking a sequence of [video shot](https://en.wikipedia.org/wiki/Shot_(filmmaking)) prompts and passing them to a [text-to-video model](https://huggingface.co/tasks/text-to-video) to generate a sequence of [takes](https://en.wikipedia.org/wiki/Take).
Additionally, a base theme and idea (written by a human) are passed through an LLM (in this case, ChatGPT) in order to generate a variety of individual prompts for each video clip.
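As a rough illustration of that prompt-generation step (a Python sketch only; the actual pipeline is written in NodeJS/TypeScript, and the model name and prompt wording below are placeholders, not the ones used by the WebTV):
```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

base_theme = "a day in the life of a llama astronaut"  # the human-written theme

# Ask the LLM to expand the theme into short, self-contained shot prompts.
response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "Write 5 short video shot prompts for a text-to-video model, one per line, "
            f"based on this theme: {base_theme}"
        ),
    }],
)

shot_prompts = response.choices[0].message.content.strip().split("\n")
for prompt in shot_prompts:
    print(prompt)
```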
Here's a diagram of the current architecture of the AI WebTV:

# Implementing the pipeline
The WebTV is implemented in NodeJS and TypeScript, and uses various services hosted on Hugging Face.
## The text-to-video model
The central video model is Zeroscope V2, a model based on [ModelScope](https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis).
Zeroscope is composed of two parts that can be chained together:
- A first pass with [zeroscope_v2_576](https://huggingface.co/cerspense/zeroscope_v2_576w), to generate a 576x320 video clip
- An optional second pass with [zeroscope_v2_XL](https://huggingface.co/cerspense/zeroscope_v2_XL) to upscale the video to 1024x576
👉 You will need to use the same prompt for both the generation and upscaling.
## Calling the video chain
To make a quick prototype, the WebTV runs Zeroscope from two duplicated Hugging Face Spaces running [Gradio](https://github.com/gradio-app/gradio/), which are called using the [@gradio/client](https://www.npmjs.com/package/@gradio/client) NPM package. You can find the original spaces here:
- [zeroscope-v2](https://huggingface.co/spaces/hysts/zeroscope-v2/tree/main) by @hysts
- [Zeroscope XL](https://huggingface.co/spaces/fffiloni/zeroscope-XL) by @fffiloni
Other spaces deployed by the community can also be found if you [search for Zeroscope on the Hub](https://huggingface.co/spaces?search=zeroscope).
👉 Public Spaces may become overcrowded and paused at any time. If you intend to deploy your own system, please duplicate those Spaces and run them under your own account.
## Using a model hosted on a Space
Spaces using Gradio have the ability to [expose a REST API](https://www.gradio.app/guides/sharing-your-app#api-page), which can then be called from Node using the [@gradio/client](https://www.npmjs.com/package/@gradio/client) module.
Here is an example:
```typescript
import { client } from "@gradio/client"

// the URL of the Space is kept as a placeholder here
const spaceUrl = "*** URL OF THE SPACE ***"

export const generateVideo = async (prompt: string) => {
  const api = await client(spaceUrl)

  // call the "run()" function with an array of parameters
  const { data } = await api.predict("/run", [
    prompt,
    42, // seed
    24, // nbFrames
    35  // nbSteps
  ])

  const { orig_name } = data[0][0]
  const remoteUrl = `${spaceUrl}/file=${orig_name}`

  // the file can then be downloaded and stored locally
}
```
## Post-processing
Once an individual take (a video clip) is upscaled, it is then passed to FILM (Frame Interpolation for Large Motion), a frame interpolation algorithm:
- Original links: [website](https://film-net.github.io/), [source code](https://github.com/google-research/frame-interpolation)
- Model on Hugging Face: [/frame-interpolation-film-style](https://huggingface.co/akhaliq/frame-interpolation-film-style)
- A Hugging Face Space you can duplicate: [video_frame_interpolation](https://huggingface.co/spaces/fffiloni/video_frame_interpolation/blob/main/app.py) by @fffiloni
During post-processing, we also add music generated with MusicGen:
- Original links: [website](https://ai.honu.io/papers/musicgen/), [source code](https://github.com/facebookresearch/audiocraft)
- Hugging Face Space you can duplicate: [MusicGen](https://huggingface.co/spaces/facebook/MusicGen)
## Broadcasting the stream
Note: there are multiple tools you can use to create a video stream. The AI WebTV currently uses [FFmpeg](https://ffmpeg.org/documentation.html) to read a playlist made of mp4 video files and m4a audio files.
Here is an example of creating such a playlist:
```typescript
import { promises as fs } from "fs"
import path from "path"

// the folder path is kept as a placeholder here
const dir = "** PATH TO VIDEO FOLDER **"

const allFiles = await fs.readdir(dir)
const allVideoPaths = allFiles
  .map(file => path.join(dir, file))
  .filter(filePath => filePath.endsWith('.mp4'))

let playlist = 'ffconcat version 1.0\n'
allVideoPaths.forEach(filePath => {
  playlist += `file '${filePath}'\n`
})

await fs.writeFile("playlist.txt", playlist)
```
This will generate the following playlist content:
```bash
ffconcat version 1.0
file 'video1.mp4'
file 'video2.mp4'
...
```
FFmpeg is then used again to read this playlist and send a [FLV stream](https://en.wikipedia.org/wiki/Flash_Video) to a [RTMP server](https://en.wikipedia.org/wiki/Real-Time_Messaging_Protocol). FLV is an old format but still popular in the world of real-time streaming due to its low latency.
```bash
ffmpeg -y -nostdin \
-re \
-f concat \
-safe 0 -i channel_random.txt -stream_loop -1 \
-loglevel error \
-c:v libx264 -preset veryfast -tune zerolatency \
-shortest \
-f flv rtmp://<SERVER>
```
There are many different configuration options for FFmpeg; for more information, see the [official documentation](http://trac.ffmpeg.org/wiki/StreamingGuide).
For the RTMP server, you can find [open-source implementations on GitHub](https://github.com/topics/rtmp-server), such as the [NGINX-RTMP module](https://github.com/arut/nginx-rtmp-module).
The AI WebTV itself uses [node-media-server](https://github.com/illuspas/Node-Media-Server).
💡 You can also directly stream to [one of the Twitch RTMP entrypoints](https://help.twitch.tv/s/twitch-ingest-recommendation?language=en_US). Check out the Twitch documentation for more details.
# Observations and examples
Here are some examples of the generated content.
The first thing we notice is that applying the second pass of Zeroscope XL significantly improves the quality of the image. The impact of frame interpolation is also clearly visible.
## Characters and scene composition
<figure class="image flex flex-col items-center text-center m-0 w-full">
<video
alt="demo4.mp4"
autoplay loop autobuffer muted playsinline
>
<source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/156_ai_webtv/demo4.mp4" type="video/mp4">
</video>
<figcaption>Prompt: <i>Photorealistic movie of a <strong>llama acting as a programmer, wearing glasses and a hoodie</strong>, intensely <strong>staring at a screen</strong> with lines of code, in a cozy, <strong>dimly lit room</strong>, Canon EOS, ambient lighting, high details, cinematic, trending on artstation</i></figcaption>
</figure>
<figure class="image flex flex-col items-center text-center m-0 w-full">
<video
alt="demo5.mp4"
autoplay loop autobuffer muted playsinline
>
<source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/156_ai_webtv/demo5.mp4" type="video/mp4">
</video>
<figcaption>Prompt: <i>3D rendered animation showing a group of food characters <strong>forming a pyramid</strong>, with a <strong>banana</strong> standing triumphantly on top. In a city with <strong>cotton candy clouds</strong> and <strong>chocolate road</strong>, Pixar's style, CGI, ambient lighting, direct sunlight, rich color scheme, ultra realistic, cinematic, photorealistic.</i></figcaption>
</figure>
<figure class="image flex flex-col items-center text-center m-0 w-full">
<video
alt="demo7.mp4"
autoplay loop autobuffer muted playsinline
>
<source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/156_ai_webtv/demo7.mp4" type="video/mp4">
</video>
<figcaption>Prompt: <i>Intimate <strong>close-up of a red fox, gazing into the camera with sharp eyes</strong>, ambient lighting creating a high contrast silhouette, IMAX camera, <strong>high detail</strong>, <strong>cinematic effect</strong>, golden hour, film grain.</i></figcaption>
</figure>
## Simulation of dynamic scenes
Something truly fascinating about text-to-video models is their ability to emulate real-life phenomena they have been trained on.
We've seen it with large language models and their ability to synthesize convincing content that mimics human responses, but this takes things to a whole new dimension when applied to video.
A video model predicts the next frames of a scene, which might include objects in motion such as fluids, people, animals, or vehicles. Today, this emulation isn't perfect, but it will be interesting to evaluate future models (trained on larger or specialized datasets, such as animal locomotion) for their accuracy when reproducing physical phenomena, and also their ability to simulate the behavior of agents.
<figure class="image flex flex-col items-center text-center m-0 w-full">
<video
alt="demo17.mp4"
autoplay loop autobuffer muted playsinline
>
<source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/156_ai_webtv/demo17.mp4" type="video/mp4">
</video>
<figcaption>Prompt: <i>Cinematic movie shot of <strong>bees energetically buzzing around a flower</strong>, sun rays illuminating the scene, captured in 4k IMAX with a soft bokeh background.</i></figcaption>
</figure>
<figure class="image flex flex-col items-center text-center m-0 w-full">
<video
alt="demo8.mp4"
autoplay loop autobuffer muted playsinline
>
<source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/156_ai_webtv/demo8.mp4" type="video/mp4">
</video>
<figcaption>Prompt: <i><strong>Dynamic footage of a grizzly bear catching a salmon in a rushing river</strong>, ambient lighting highlighting the splashing water, low angle, IMAX camera, 4K movie quality, golden hour, film grain.</i></figcaption>
</figure>
<figure class="image flex flex-col items-center text-center m-0 w-full">
<video
alt="demo18.mp4"
autoplay loop autobuffer muted playsinline
>
<source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/156_ai_webtv/demo18.mp4" type="video/mp4">
</video>
<figcaption>Prompt: <i>Aerial footage of a quiet morning at the coast of California, with <strong>waves gently crashing against the rocky shore</strong>. A startling sunrise illuminates the coast with vibrant colors, captured beautifully with a DJI Phantom 4 Pro. Colors and textures of the landscape come alive under the soft morning light. Film grain, cinematic, imax, movie</i></figcaption>
</figure>
💡 It will be interesting to see these capabilities explored more in the future, for instance by training video models on larger video datasets covering more phenomena.
## Styling and effects
<figure class="image flex flex-col items-center text-center m-0 w-full">
<video
alt="demo0.mp4"
autoplay loop autobuffer muted playsinline
>
<source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/156_ai_webtv/demo0.mp4" type="video/mp4">
</video>
<figcaption>Prompt: <i>
<strong>3D rendered video</strong> of a friendly broccoli character wearing a hat, walking in a candy-filled city street with gingerbread houses, under a <strong>bright sun and blue skies</strong>, <strong>Pixar's style</strong>, cinematic, photorealistic, movie, <strong>ambient lighting</strong>, natural lighting, <strong>CGI</strong>, wide-angle view, daytime, ultra realistic.</i>
</figcaption>
</figure>
<figure class="image flex flex-col items-center text-center m-0 w-full">
<video
alt="demo2.mp4"
autoplay loop autobuffer muted playsinline
>
<source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/156_ai_webtv/demo2.mp4" type="video/mp4">
</video>
<figcaption>Prompt: <i><strong>Cinematic movie</strong>, shot of an astronaut and a llama at dawn, the mountain landscape bathed in <strong>soft muted colors</strong>, early morning fog, dew glistening on fur, craggy peaks, vintage NASA suit, Canon EOS, high detailed skin, epic composition, high quality, 4K, trending on artstation, beautiful</i>
</figcaption>
</figure>
<figure class="image flex flex-col items-center text-center m-0 w-full">
<video
alt="demo1.mp4"
autoplay loop autobuffer muted playsinline
>
<source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/156_ai_webtv/demo1.mp4" type="video/mp4">
</video>
<figcaption>Prompt: <i>Panda and black cat <strong>navigating down the flowing river</strong> in a small boat, Studio Ghibli style > Cinematic, beautiful composition > IMAX <strong>camera panning following the boat</strong> > High quality, cinematic, movie, mist effect, film grain, trending on Artstation</i>
</figcaption>
</figure>
## Failure cases
**Wrong direction:** the model sometimes has trouble with movement and direction. For instance, here the clip seems to be played in reverse. Also the modifier keyword ***green*** was not taken into account.
<figure class="image flex flex-col items-center text-center m-0 w-full">
<video
alt="fail1.mp4"
autoplay loop autobuffer muted playsinline
>
<source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/156_ai_webtv/fail1.mp4" type="video/mp4">
</video>
<figcaption>Prompt: <i>Movie showing a <strong>green pumpkin</strong> falling into a bed of nails, slow-mo explosion with chunks flying all over, ambient fog adding to the dramatic lighting, filmed with IMAX camera, 8k ultra high definition, high quality, trending on artstation.</i>
</figcaption>
</figure>
**Rendering errors on realistic scenes:** sometimes we can see artifacts such as moving vertical lines or waves. It is unclear what causes this, but it may be due to the combination of keywords used.
<figure class="image flex flex-col items-center text-center m-0 w-full">
<video
alt="fail2.mp4"
autoplay loop autobuffer muted playsinline
>
<source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/156_ai_webtv/fail2.mp4" type="video/mp4">
</video>
<figcaption>Prompt: <i>Film shot of a captivating flight above the Grand Canyon, ledges and plateaus etched in orange and red. <strong>Deep shadows contrast</strong> with the fiery landscape under the midday sun, shot with DJI Phantom 4 Pro. The camera rotates to capture the vastness, <strong>textures</strong> and colors, in imax quality. Film <strong>grain</strong>, cinematic, movie.</i>
</figcaption>
</figure>
**Text or objects inserted into the image:** the model sometimes injects words from the prompt into the scene, such as "IMAX". Mentioning "Canon EOS" or "Drone footage" in the prompt can also make those objects appear in the video.
In the following example, we notice that the word "llama" not only inserts a llama, but also two occurrences of the word written in flames.
<figure class="image flex flex-col items-center text-center m-0 w-full">
<video
alt="fail3.mp4"
autoplay loop autobuffer muted playsinline
>
<source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/156_ai_webtv/fail3.mp4" type="video/mp4">
</video>
<figcaption>Prompt: <i>Movie scene of a <strong>llama</strong> acting as a firefighter, in firefighter uniform, dramatically spraying water at <strong>roaring flames</strong>, amidst a chaotic urban scene, Canon EOS, ambient lighting, high quality, award winning, highly detailed fur, cinematic, trending on artstation.</i>
</figcaption>
</figure>
# Recommendations
Here are some early recommendations that can be made from the previous observations:
## Using video-specific prompt keywords
You may already know that if you don’t prompt a specific aspect of the image with Stable Diffusion, things like the color of clothes or the time of the day might become random, or be assigned a generic value such as a neutral mid-day light.
The same is true for video models: you will want to be specific about things. Examples include camera and character movement, their orientation, speed and direction. You can leave it unspecified for creative purposes (idea generation), but this might not always give you the results you want (e.g., entities animated in reverse).
## Maintaining consistency between scenes
If you plan to create sequences of multiple videos, you will want to make sure you add as many details as possible in each prompt, otherwise you may lose important details from one sequence to another, such as the color.
💡 This will also improve the quality of the image since the prompt is used for the upscaling part with Zeroscope XL.
## Leverage frame interpolation
Frame interpolation is a powerful tool that can repair small rendering errors and turn many defects into features, especially in scenes with a lot of animation, or where a cartoon effect is acceptable. The [FILM algorithm](https://film-net.github.io/) smooths out the elements of a frame using the previous and following frames of the video clip.
This works great for displacing the background when the camera is panning or rotating, and also gives you creative freedom, such as control over the number of frames after generation to create slow-motion effects.
# Future work
We hope you enjoyed watching the AI WebTV stream and that it will inspire you to build more in this space.
As this was a first trial, a lot of things were not the focus of the tech demo: generating longer and more varied sequences, adding audio (sound effects, dialogue), generating and orchestrating complex scenarios, or letting a language model agent have more control over the pipeline.
Some of these ideas may make their way into future updates to the AI WebTV, but we also can’t wait to see what the community of researchers, engineers and builders will come up with!
|
huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/README.md
|
# Stable Diffusion
## Overview
Stable Diffusion was proposed in [Stable Diffusion Announcement](https://stability.ai/blog/stable-diffusion-announcement) by Patrick Esser and Robin Rombach and the Stability AI team.
The summary of the model is the following:
*Stable Diffusion is a text-to-image model that will empower billions of people to create stunning art within seconds. It is a breakthrough in speed and quality meaning that it can run on consumer GPUs. You can see some of the amazing output that has been created by this model without pre or post-processing on this page. The model itself builds upon the work of the team at CompVis and Runway in their widely used latent diffusion model combined with insights from the conditional diffusion models by our lead generative AI developer Katherine Crowson, Dall-E 2 by Open AI, Imagen by Google Brain and many others. We are delighted that AI media generation is a cooperative field and hope it can continue this way to bring the gift of creativity to all.*
## Tips:
- Stable Diffusion has the same architecture as [Latent Diffusion](https://arxiv.org/abs/2112.10752) but uses a frozen CLIP Text Encoder instead of training the text encoder jointly with the diffusion model.
- An in-detail explanation of the Stable Diffusion model can be found under [Stable Diffusion with 🧨 Diffusers](https://huggingface.co/blog/stable_diffusion).
- If you don't want to rely on the Hugging Face Hub and have to pass an authentication token, you can
download the weights with `git lfs install; git clone https://huggingface.co/runwayml/stable-diffusion-v1-5` and instead pass the local path to the cloned folder to `from_pretrained` as shown below.
- Stable Diffusion can work with a variety of different samplers as is shown below.
## Available Pipelines:
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_stable_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py) | *Text-to-Image Generation* | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
| [pipeline_stable_diffusion_img2img](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py) | *Image-to-Image Text-Guided Generation* | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)
| [pipeline_stable_diffusion_inpaint](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py) | *Text-Guided Image Inpainting* | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
## Examples:
### Using Stable Diffusion without being logged into the Hub.
If you want to download the model weights using a single Python line, you need to be logged in via `huggingface-cli login`.
```python
from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
```
This however can make it difficult to build applications on top of `diffusers` as you will always have to pass the token around. A potential way to solve this issue is by downloading the weights to a local path `"./stable-diffusion-v1-5"`:
```bash
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
```
and simply passing the local path to `from_pretrained`:
```python
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("./stable-diffusion-v1-5")
```
### Text-to-Image with default PLMS scheduler
```python
# make sure you're logged in with `huggingface-cli login`
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
### Text-to-Image with DDIM scheduler
```python
# make sure you're logged in with `huggingface-cli login`
from diffusers import StableDiffusionPipeline, DDIMScheduler
scheduler = DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
scheduler=scheduler,
).to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
### Text-to-Image with K-LMS scheduler
```python
# make sure you're logged in with `huggingface-cli login`
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
lms = LMSDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
pipe = StableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
scheduler=lms,
).to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
### CycleDiffusion using Stable Diffusion and DDIM scheduler
```python
import requests
import torch
from PIL import Image
from io import BytesIO
from diffusers import CycleDiffusionPipeline, DDIMScheduler
# load the scheduler. CycleDiffusion only supports stochastic schedulers.
# load the pipeline
# make sure you're logged in with `huggingface-cli login`
model_id_or_path = "CompVis/stable-diffusion-v1-4"
scheduler = DDIMScheduler.from_pretrained(model_id_or_path, subfolder="scheduler")
pipe = CycleDiffusionPipeline.from_pretrained(model_id_or_path, scheduler=scheduler).to("cuda")
# let's download an initial image
url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/An%20astronaut%20riding%20a%20horse.png"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((512, 512))
init_image.save("horse.png")
# let's specify a prompt
source_prompt = "An astronaut riding a horse"
prompt = "An astronaut riding an elephant"
# call the pipeline
image = pipe(
prompt=prompt,
source_prompt=source_prompt,
image=init_image,
num_inference_steps=100,
eta=0.1,
strength=0.8,
guidance_scale=2,
source_guidance_scale=1,
).images[0]
image.save("horse_to_elephant.png")
# let's try another example
# See more samples at the original repo: https://github.com/ChenWu98/cycle-diffusion
url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/A%20black%20colored%20car.png"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((512, 512))
init_image.save("black.png")
source_prompt = "A black colored car"
prompt = "A blue colored car"
# call the pipeline
torch.manual_seed(0)
image = pipe(
prompt=prompt,
source_prompt=source_prompt,
image=init_image,
num_inference_steps=100,
eta=0.1,
strength=0.85,
guidance_scale=3,
source_guidance_scale=1,
).images[0]
image.save("black_to_blue.png")
```
|
huggingface/huggingface_hub/blob/main/README_hi.md
|
<p align="center">
<br/>
<img alt="huggingface_hub library logo" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/huggingface_hub.svg" width="376" height="59" style="max-width: 100%;">
<br/>
</p>
<p align="center">
<i>The official Python client for the Huggingface Hub.</i>
</p>
<p align="center">
<a href="https://huggingface.co/docs/huggingface_hub/ko/index"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/huggingface_hub/index.svg?down_color=red&down_message=offline&up_message=online&label=doc"></a>
<a href="https://github.com/huggingface/huggingface_hub/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/huggingface_hub.svg"></a>
<a href="https://github.com/huggingface/huggingface_hub"><img alt="PyPi version" src="https://img.shields.io/pypi/pyversions/huggingface_hub.svg"></a>
<a href="https://pypi.org/project/huggingface-hub"><img alt="downloads" src="https://static.pepy.tech/badge/huggingface_hub/month"></a>
<a href="https://codecov.io/gh/huggingface/huggingface_hub"><img alt="Code coverage" src="https://codecov.io/gh/huggingface/huggingface_hub/branch/main/graph/badge.svg?token=RXP95LE2XL"></a>
</p>
<h4 align="center">
<p>
<a href="https://github.com/huggingface/huggingface_hub/blob/main/README.md">English</a> |
<a href="https://github.com/huggingface/huggingface_hub/blob/main/README_de.md">Deutsch</a> |
<b>हिंदी</b> |
<a href="https://github.com/huggingface/huggingface_hub/blob/main/README_ko.md">한국어</a> |
<a href="https://github.com/huggingface/huggingface_hub/blob/main/README_cn.md">中文(简体)</a>
<p>
</h4>
---
**Documentation**: <a href="https://hf.co/docs/huggingface_hub" target="_blank">https://hf.co/docs/huggingface_hub</a>
**Source code**: <a href="https://github.com/huggingface/huggingface_hub" target="_blank">https://github.com/huggingface/huggingface_hub</a>
---
## Welcome to the huggingface_hub library
The `huggingface_hub` library allows you to interact with the [Hugging Face Hub](https://huggingface.co/), a platform democratizing open-source machine learning for creators and collaborators. Discover pre-trained models and datasets for your projects, or play with the thousands of machine learning apps hosted on the Hub. You can also create and share your own models, datasets and demos with the community. The `huggingface_hub` library provides a simple way to do all these things with Python.
## Key features
- [Download files](https://huggingface.co/docs/huggingface_hub/en/guides/download) from the Hub.
- [Upload files](https://huggingface.co/docs/huggingface_hub/en/guides/upload) to the Hub.
- [Manage your repositories](https://huggingface.co/docs/huggingface_hub/en/guides/repository).
- [Run inference](https://huggingface.co/docs/huggingface_hub/en/guides/inference) on deployed models.
- [Search](https://huggingface.co/docs/huggingface_hub/en/guides/search) for models, datasets and Spaces.
- [Share model cards](https://huggingface.co/docs/huggingface_hub/en/guides/model-cards) to document your models.
- [Engage with the community](https://huggingface.co/docs/huggingface_hub/en/guides/community) through PRs and comments.
## Installation
Install the `huggingface_hub` package with [pip](https://pypi.org/project/huggingface-hub/):
```bash
pip install huggingface_hub
```
If you prefer, you can also install it with [conda](https://huggingface.co/docs/huggingface_hub/en/installation#install-with-conda).
To keep the package minimal by default, `huggingface_hub` comes with optional dependencies useful for some use cases. For example, if you want the complete experience for inference, run:
```bash
pip install huggingface_hub[inference]
```
To learn more about installation and optional dependencies, check out the [installation guide](https://huggingface.co/docs/huggingface_hub/en/installation).
## Quick start
### Download files
Download a single file
```py
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="tiiuae/falcon-7b-instruct", filename="config.json")
```
Or an entire repository
```py
from huggingface_hub import snapshot_download
snapshot_download("stabilityai/stable-diffusion-2-1")
```
Files will be downloaded in a local cache folder. More details in [this guide](https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache).
### Login
The Hugging Face Hub uses tokens to authenticate applications (see [docs](https://huggingface.co/docs/hub/security-tokens)). To log in your machine, run the following CLI:
```bash
huggingface-cli login
# or using an environment variable
huggingface-cli login --token $HUGGINGFACE_TOKEN
```
### Create a repository
```py
from huggingface_hub import create_repo
create_repo(repo_id="super-cool-model")
```
### Upload files
Upload a single file
```py
from huggingface_hub import upload_file
upload_file(
path_or_fileobj="/home/lysandre/dummy-test/README.md",
path_in_repo="README.md",
repo_id="lysandre/test-model",
)
```
Or an entire folder
```py
from huggingface_hub import upload_folder
upload_folder(
folder_path="/path/to/local/space",
repo_id="username/my-cool-space",
repo_type="space",
)
```
Details in the [upload guide](https://huggingface.co/docs/huggingface_hub/en/guides/upload).
## Integrations with the Hub
We're partnering with cool open source ML libraries to provide free model hosting and versioning. You can find the existing integrations [here](https://huggingface.co/docs/hub/libraries).
The advantages are:
- Free model or dataset hosting for libraries and their users.
- Built-in file versioning, even with very large files, thanks to a git-based approach.
- Hosted inference API for all models publicly available.
- In-browser widgets to play with the uploaded models.
- Anybody can upload a new model for your library; they just need to add the corresponding tag for the model to be discoverable.
- Fast downloads! We use Cloudfront (a CDN) to geo-replicate downloads so they're blazing fast from anywhere on the globe.
- Usage stats and more features to come.
If you would like to integrate your library, feel free to open an issue to begin the discussion. We wrote a [step-by-step guide](https://huggingface.co/docs/hub/adding-a-library) with ❤️ showing how to do this integration.
## Contributions (feature requests, bugs, etc.) are super welcome 💙💚💛💜🧡❤️
Everyone is welcome to contribute, and we value everybody's contribution. Code is not the only way to help the community.
Answering questions, helping others, reaching out and improving the documentation are all immensely valuable to the community.
We wrote a [contribution guide](https://github.com/huggingface/huggingface_hub/blob/main/CONTRIBUTING.md) to summarize
how to get started to contribute to this repository.
|
huggingface/transformers/blob/main/docs/source/en/model_doc/lxmert.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# LXMERT
## Overview
The LXMERT model was proposed in [LXMERT: Learning Cross-Modality Encoder Representations from Transformers](https://arxiv.org/abs/1908.07490) by Hao Tan & Mohit Bansal. It is a series of bidirectional transformer encoders
(one for the vision modality, one for the language modality, and then one to fuse both modalities) pretrained using a
combination of masked language modeling, visual-language text alignment, ROI-feature regression, masked
visual-attribute modeling, masked visual-object modeling, and visual-question answering objectives. The pretraining
consists of multiple multi-modal datasets: MSCOCO, Visual-Genome + Visual-Genome Question Answering, VQA 2.0, and GQA.
The abstract from the paper is the following:
*Vision-and-language reasoning requires an understanding of visual concepts, language semantics, and, most importantly,
the alignment and relationships between these two modalities. We thus propose the LXMERT (Learning Cross-Modality
Encoder Representations from Transformers) framework to learn these vision-and-language connections. In LXMERT, we
build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language
encoder, and a cross-modality encoder. Next, to endow our model with the capability of connecting vision and language
semantics, we pre-train the model with large amounts of image-and-sentence pairs, via five diverse representative
pretraining tasks: masked language modeling, masked object prediction (feature regression and label classification),
cross-modality matching, and image question answering. These tasks help in learning both intra-modality and
cross-modality relationships. After fine-tuning from our pretrained parameters, our model achieves the state-of-the-art
results on two visual question answering datasets (i.e., VQA and GQA). We also show the generalizability of our
pretrained cross-modality model by adapting it to a challenging visual-reasoning task, NLVR, and improve the previous
best result by 22% absolute (54% to 76%). Lastly, we demonstrate detailed ablation studies to prove that both our novel
model components and pretraining strategies significantly contribute to our strong results; and also present several
attention visualizations for the different encoders*
This model was contributed by [eltoto1219](https://huggingface.co/eltoto1219). The original code can be found [here](https://github.com/airsplay/lxmert).
## Usage tips
- Bounding boxes are not required for the visual feature embeddings; any kind of visual-spatial features
will work (see the sketch after these tips).
- Both the language hidden states and the visual hidden states that LXMERT outputs are passed through the
cross-modality layer, so they contain information from both modalities. To access a modality that only attends to
itself, select the vision/language hidden states from the first input in the tuple.
- The bidirectional cross-modality encoder attention only returns attention values when the language modality is used
as the input and the vision modality is used as the context vector. Further, while the cross-modality encoder
contains self-attention for each respective modality and cross-attention, only the cross attention is returned and
both self attention outputs are disregarded.
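To make the expected inputs and outputs concrete, here is a minimal sketch of running `LxmertModel`. In a real pipeline the visual features come from an external object detector (e.g., a Faster R-CNN backbone); the random tensors below only illustrate the default shapes (2048-dimensional ROI features and 4-dimensional normalized box positions).
```python
import torch
from transformers import LxmertModel, LxmertTokenizer

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

inputs = tokenizer("What is on the table?", return_tensors="pt")

# Stand-ins for detector outputs: 36 regions per image.
visual_feats = torch.randn(1, 36, 2048)  # ROI features
visual_pos = torch.rand(1, 36, 4)        # normalized bounding box coordinates

outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)

# Hidden states after the cross-modality layer, one per modality.
print(outputs.language_output.shape)  # (1, sequence_length, hidden_size)
print(outputs.vision_output.shape)    # (1, 36, hidden_size)
```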
## Resources
- [Question answering task guide](../tasks/question_answering)
## LxmertConfig
[[autodoc]] LxmertConfig
## LxmertTokenizer
[[autodoc]] LxmertTokenizer
## LxmertTokenizerFast
[[autodoc]] LxmertTokenizerFast
## Lxmert specific outputs
[[autodoc]] models.lxmert.modeling_lxmert.LxmertModelOutput
[[autodoc]] models.lxmert.modeling_lxmert.LxmertForPreTrainingOutput
[[autodoc]] models.lxmert.modeling_lxmert.LxmertForQuestionAnsweringOutput
[[autodoc]] models.lxmert.modeling_tf_lxmert.TFLxmertModelOutput
[[autodoc]] models.lxmert.modeling_tf_lxmert.TFLxmertForPreTrainingOutput
<frameworkcontent>
<pt>
## LxmertModel
[[autodoc]] LxmertModel
- forward
## LxmertForPreTraining
[[autodoc]] LxmertForPreTraining
- forward
## LxmertForQuestionAnswering
[[autodoc]] LxmertForQuestionAnswering
- forward
</pt>
<tf>
## TFLxmertModel
[[autodoc]] TFLxmertModel
- call
## TFLxmertForPreTraining
[[autodoc]] TFLxmertForPreTraining
- call
</tf>
</frameworkcontent>
|
huggingface/tokenizers/blob/main/docs/source-doc-builder/pipeline.mdx
|
# The tokenization pipeline
When calling `Tokenizer.encode` or
`Tokenizer.encode_batch`, the input
text(s) go through the following pipeline:
- `normalization`
- `pre-tokenization`
- `model`
- `post-processing`
We'll see in detail what happens during each of those steps,
as well as when you want to `decode <decoding>` some token ids, and how the 🤗 Tokenizers library allows you
to customize each of those steps to your needs. If you're already
familiar with those steps and want to learn by seeing some code, jump to
`our BERT from scratch example <example>`.
For the examples that require a `Tokenizer` we will use the tokenizer we trained in the
`quicktour`, which you can load with:
<tokenizerslangcontent>
<python>
<literalinclude>
{"path": "../../bindings/python/tests/documentation/test_pipeline.py",
"language": "python",
"start-after": "START reload_tokenizer",
"end-before": "END reload_tokenizer",
"dedent": 12}
</literalinclude>
</python>
<rust>
<literalinclude>
{"path": "../../tokenizers/tests/documentation.rs",
"language": "rust",
"start-after": "START pipeline_reload_tokenizer",
"end-before": "END pipeline_reload_tokenizer",
"dedent": 4}
</literalinclude>
</rust>
<node>
<literalinclude>
{"path": "../../bindings/node/examples/documentation/pipeline.test.ts",
"language": "js",
"start-after": "START reload_tokenizer",
"end-before": "END reload_tokenizer",
"dedent": 8}
</literalinclude>
</node>
</tokenizerslangcontent>
## Normalization
Normalization is, in a nutshell, a set of operations you apply to a raw
string to make it less random or "cleaner". Common operations include
stripping whitespace, removing accented characters or lowercasing all
text. If you're familiar with [Unicode
normalization](https://unicode.org/reports/tr15), it is also a very
common normalization operation applied in most tokenizers.
Each normalization operation is represented in the 🤗 Tokenizers library
by a `Normalizer`, and you can combine
several of those by using a `normalizers.Sequence`. Here is a normalizer applying NFD Unicode normalization
and removing accents as an example:
<tokenizerslangcontent>
<python>
<literalinclude>
{"path": "../../bindings/python/tests/documentation/test_pipeline.py",
"language": "python",
"start-after": "START setup_normalizer",
"end-before": "END setup_normalizer",
"dedent": 8}
</literalinclude>
</python>
<rust>
<literalinclude>
{"path": "../../tokenizers/tests/documentation.rs",
"language": "rust",
"start-after": "START pipeline_setup_normalizer",
"end-before": "END pipeline_setup_normalizer",
"dedent": 4}
</literalinclude>
</rust>
<node>
<literalinclude>
{"path": "../../bindings/node/examples/documentation/pipeline.test.ts",
"language": "js",
"start-after": "START setup_normalizer",
"end-before": "END setup_normalizer",
"dedent": 8}
</literalinclude>
</node>
</tokenizerslangcontent>
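Since the snippets above are pulled in from the repository's test files and may not render in every viewer, here is roughly what the Python version of that normalizer looks like:
```python
from tokenizers import normalizers
from tokenizers.normalizers import NFD, StripAccents

# NFD Unicode normalization followed by accent stripping.
normalizer = normalizers.Sequence([NFD(), StripAccents()])
```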
You can manually test that normalizer by applying it to any string:
<tokenizerslangcontent>
<python>
<literalinclude>
{"path": "../../bindings/python/tests/documentation/test_pipeline.py",
"language": "python",
"start-after": "START test_normalizer",
"end-before": "END test_normalizer",
"dedent": 8}
</literalinclude>
</python>
<rust>
<literalinclude>
{"path": "../../tokenizers/tests/documentation.rs",
"language": "rust",
"start-after": "START pipeline_test_normalizer",
"end-before": "END pipeline_test_normalizer",
"dedent": 4}
</literalinclude>
</rust>
<node>
<literalinclude>
{"path": "../../bindings/node/examples/documentation/pipeline.test.ts",
"language": "js",
"start-after": "START test_normalizer",
"end-before": "END test_normalizer",
"dedent": 8}
</literalinclude>
</node>
</tokenizerslangcontent>
When building a `Tokenizer`, you can
customize its normalizer by just changing the corresponding attribute:
<tokenizerslangcontent>
<python>
<literalinclude>
{"path": "../../bindings/python/tests/documentation/test_pipeline.py",
"language": "python",
"start-after": "START replace_normalizer",
"end-before": "END replace_normalizer",
"dedent": 8}
</literalinclude>
</python>
<rust>
<literalinclude>
{"path": "../../tokenizers/tests/documentation.rs",
"language": "rust",
"start-after": "START pipeline_replace_normalizer",
"end-before": "END pipeline_replace_normalizer",
"dedent": 4}
</literalinclude>
</rust>
<node>
<literalinclude>
{"path": "../../bindings/node/examples/documentation/pipeline.test.ts",
"language": "js",
"start-after": "START replace_normalizer",
"end-before": "END replace_normalizer",
"dedent": 8}
</literalinclude>
</node>
</tokenizerslangcontent>
Of course, if you change the way a tokenizer applies normalization, you
should probably retrain it from scratch afterward.
## Pre-Tokenization
Pre-tokenization is the act of splitting a text into smaller objects
that give an upper bound to what your tokens will be at the end of
training. A good way to think of this is that the pre-tokenizer will
split your text into "words" and then, your final tokens will be parts
of those words.
An easy way to pre-tokenize inputs is to split on spaces and
punctuations, which is done by the
`pre_tokenizers.Whitespace`
pre-tokenizer:
<tokenizerslangcontent>
<python>
<literalinclude>
{"path": "../../bindings/python/tests/documentation/test_pipeline.py",
"language": "python",
"start-after": "START setup_pre_tokenizer",
"end-before": "END setup_pre_tokenizer",
"dedent": 8}
</literalinclude>
</python>
<rust>
<literalinclude>
{"path": "../../tokenizers/tests/documentation.rs",
"language": "rust",
"start-after": "START pipeline_setup_pre_tokenizer",
"end-before": "END pipeline_setup_pre_tokenizer",
"dedent": 4}
</literalinclude>
</rust>
<node>
<literalinclude>
{"path": "../../bindings/node/examples/documentation/pipeline.test.ts",
"language": "js",
"start-after": "START setup_pre_tokenizer",
"end-before": "END setup_pre_tokenizer",
"dedent": 8}
</literalinclude>
</node>
</tokenizerslangcontent>
The output is a list of tuples, with each tuple containing one word and
its span in the original sentence (which is used to determine the final
`offsets` of our `Encoding`). Note that splitting on
punctuation will split contractions like `"I'm"` in this example.
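For reference, the same pre-tokenization can be reproduced directly in Python; the offsets below are what the `Whitespace` pre-tokenizer reports for this sentence:
```python
from tokenizers.pre_tokenizers import Whitespace

pre_tokenizer = Whitespace()
print(pre_tokenizer.pre_tokenize_str("Hello! How are you? I'm fine, thank you."))
# [('Hello', (0, 5)), ('!', (5, 6)), ('How', (7, 10)), ('are', (11, 14)), ('you', (15, 18)),
#  ('?', (18, 19)), ('I', (20, 21)), ("'", (21, 22)), ('m', (22, 23)), ('fine', (24, 28)),
#  (',', (28, 29)), ('thank', (30, 35)), ('you', (36, 39)), ('.', (39, 40))]
```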
You can combine any `PreTokenizer`s together. For instance, here is a pre-tokenizer that will
split on space, punctuation and digits, separating numbers in their
individual digits:
<tokenizerslangcontent>
<python>
<literalinclude>
{"path": "../../bindings/python/tests/documentation/test_pipeline.py",
"language": "python",
"start-after": "START combine_pre_tokenizer",
"end-before": "END combine_pre_tokenizer",
"dedent": 8}
</literalinclude>
</python>
<rust>
<literalinclude>
{"path": "../../tokenizers/tests/documentation.rs",
"language": "rust",
"start-after": "START pipeline_combine_pre_tokenizer",
"end-before": "END pipeline_combine_pre_tokenizer",
"dedent": 4}
</literalinclude>
</rust>
<node>
<literalinclude>
{"path": "../../bindings/node/examples/documentation/pipeline.test.ts",
"language": "js",
"start-after": "START combine_pre_tokenizer",
"end-before": "END combine_pre_tokenizer",
"dedent": 8}
</literalinclude>
</node>
</tokenizerslangcontent>
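Here is the Python equivalent of that combined pre-tokenizer, shown on a short example:
```python
from tokenizers import pre_tokenizers
from tokenizers.pre_tokenizers import Digits, Whitespace

# Split on whitespace and punctuation, then break numbers into individual digits.
pre_tokenizer = pre_tokenizers.Sequence([Whitespace(), Digits(individual_digits=True)])
print(pre_tokenizer.pre_tokenize_str("Call 911!"))
# [('Call', (0, 4)), ('9', (5, 6)), ('1', (6, 7)), ('1', (7, 8)), ('!', (8, 9))]
```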
As we saw in the `quicktour`, you can
customize the pre-tokenizer of a `Tokenizer` by just changing the corresponding attribute:
<tokenizerslangcontent>
<python>
<literalinclude>
{"path": "../../bindings/python/tests/documentation/test_pipeline.py",
"language": "python",
"start-after": "START replace_pre_tokenizer",
"end-before": "END replace_pre_tokenizer",
"dedent": 8}
</literalinclude>
</python>
<rust>
<literalinclude>
{"path": "../../tokenizers/tests/documentation.rs",
"language": "rust",
"start-after": "START pipeline_replace_pre_tokenizer",
"end-before": "END pipeline_replace_pre_tokenizer",
"dedent": 4}
</literalinclude>
</rust>
<node>
<literalinclude>
{"path": "../../bindings/node/examples/documentation/pipeline.test.ts",
"language": "js",
"start-after": "START replace_pre_tokenizer",
"end-before": "END replace_pre_tokenizer",
"dedent": 8}
</literalinclude>
</node>
</tokenizerslangcontent>
Of course, if you change the pre-tokenizer, you should probably
retrain your tokenizer from scratch afterward.
## Model
Once the input texts are normalized and pre-tokenized, the
`Tokenizer` applies the model on the
pre-tokens. This is the part of the pipeline that needs training on your
corpus (or that has been trained if you are using a pretrained
tokenizer).
The role of the model is to split your "words" into tokens, using the
rules it has learned. It's also responsible for mapping those tokens to
their corresponding IDs in the vocabulary of the model.
This model is passed along when initializing the
`Tokenizer` so you already know how to
customize this part. Currently, the 🤗 Tokenizers library supports:
- `models.BPE`
- `models.Unigram`
- `models.WordLevel`
- `models.WordPiece`
For more details about each model and its behavior, you can check
[here](components#models)
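As a quick reminder, the model is chosen when the `Tokenizer` is created, for example a BPE model with an unknown token:
```python
from tokenizers import Tokenizer
from tokenizers.models import BPE

# The vocabulary and merge rules are learned later, during training.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
```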
## Post-Processing
Post-processing is the last step of the tokenization pipeline, to
perform any additional transformation to the
`Encoding` before it's returned, like
adding potential special tokens.
As we saw in the quick tour, we can customize the post processor of a
`Tokenizer` by setting the
corresponding attribute. For instance, here is how we can post-process
to make the inputs suitable for the BERT model:
<tokenizerslangcontent>
<python>
<literalinclude>
{"path": "../../bindings/python/tests/documentation/test_pipeline.py",
"language": "python",
"start-after": "START setup_processor",
"end-before": "END setup_processor",
"dedent": 8}
</literalinclude>
</python>
<rust>
<literalinclude>
{"path": "../../tokenizers/tests/documentation.rs",
"language": "rust",
"start-after": "START pipeline_setup_processor",
"end-before": "END pipeline_setup_processor",
"dedent": 4}
</literalinclude>
</rust>
<node>
<literalinclude>
{"path": "../../bindings/node/examples/documentation/pipeline.test.ts",
"language": "js",
"start-after": "START setup_processor",
"end-before": "END setup_processor",
"dedent": 8}
</literalinclude>
</node>
</tokenizerslangcontent>
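In Python, that BERT-style post-processor is a `TemplateProcessing`; the special-token IDs are looked up from the tokenizer's vocabulary, so `"[CLS]"` and `"[SEP]"` must already be part of it:
```python
from tokenizers.processors import TemplateProcessing

tokenizer.post_processor = TemplateProcessing(
    single="[CLS] $A [SEP]",
    pair="[CLS] $A:0 [SEP]:0 $B:1 [SEP]:1",
    special_tokens=[
        ("[CLS]", tokenizer.token_to_id("[CLS]")),
        ("[SEP]", tokenizer.token_to_id("[SEP]")),
    ],
)
```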
Note that contrary to the pre-tokenizer or the normalizer, you don't
need to retrain a tokenizer after changing its post-processor.
## All together: a BERT tokenizer from scratch
Let's put all those pieces together to build a BERT tokenizer. First,
BERT relies on WordPiece, so we instantiate a new
`Tokenizer` with this model:
<tokenizerslangcontent>
<python>
<literalinclude>
{"path": "../../bindings/python/tests/documentation/test_pipeline.py",
"language": "python",
"start-after": "START bert_setup_tokenizer",
"end-before": "END bert_setup_tokenizer",
"dedent": 8}
</literalinclude>
</python>
<rust>
<literalinclude>
{"path": "../../tokenizers/tests/documentation.rs",
"language": "rust",
"start-after": "START bert_setup_tokenizer",
"end-before": "END bert_setup_tokenizer",
"dedent": 4}
</literalinclude>
</rust>
<node>
<literalinclude>
{"path": "../../bindings/node/examples/documentation/pipeline.test.ts",
"language": "js",
"start-after": "START bert_setup_tokenizer",
"end-before": "END bert_setup_tokenizer",
"dedent": 8}
</literalinclude>
</node>
</tokenizerslangcontent>
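In Python, this amounts to:
```python
from tokenizers import Tokenizer
from tokenizers.models import WordPiece

bert_tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
```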
Then we know that BERT preprocesses texts by removing accents and
lowercasing. We also use a unicode normalizer:
<tokenizerslangcontent>
<python>
<literalinclude>
{"path": "../../bindings/python/tests/documentation/test_pipeline.py",
"language": "python",
"start-after": "START bert_setup_normalizer",
"end-before": "END bert_setup_normalizer",
"dedent": 8}
</literalinclude>
</python>
<rust>
<literalinclude>
{"path": "../../tokenizers/tests/documentation.rs",
"language": "rust",
"start-after": "START bert_setup_normalizer",
"end-before": "END bert_setup_normalizer",
"dedent": 4}
</literalinclude>
</rust>
<node>
<literalinclude>
{"path": "../../bindings/node/examples/documentation/pipeline.test.ts",
"language": "js",
"start-after": "START bert_setup_normalizer",
"end-before": "END bert_setup_normalizer",
"dedent": 8}
</literalinclude>
</node>
</tokenizerslangcontent>
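The corresponding normalizer in Python is:
```python
from tokenizers import normalizers
from tokenizers.normalizers import NFD, Lowercase, StripAccents

bert_tokenizer.normalizer = normalizers.Sequence([NFD(), Lowercase(), StripAccents()])
```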
The pre-tokenizer is just splitting on whitespace and punctuation:
<tokenizerslangcontent>
<python>
<literalinclude>
{"path": "../../bindings/python/tests/documentation/test_pipeline.py",
"language": "python",
"start-after": "START bert_setup_pre_tokenizer",
"end-before": "END bert_setup_pre_tokenizer",
"dedent": 8}
</literalinclude>
</python>
<rust>
<literalinclude>
{"path": "../../tokenizers/tests/documentation.rs",
"language": "rust",
"start-after": "START bert_setup_pre_tokenizer",
"end-before": "END bert_setup_pre_tokenizer",
"dedent": 4}
</literalinclude>
</rust>
<node>
<literalinclude>
{"path": "../../bindings/node/examples/documentation/pipeline.test.ts",
"language": "js",
"start-after": "START bert_setup_pre_tokenizer",
"end-before": "END bert_setup_pre_tokenizer",
"dedent": 8}
</literalinclude>
</node>
</tokenizerslangcontent>
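And in Python:
```python
from tokenizers.pre_tokenizers import Whitespace

bert_tokenizer.pre_tokenizer = Whitespace()
```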
And the post-processing uses the template we saw in the previous
section:
<tokenizerslangcontent>
<python>
<literalinclude>
{"path": "../../bindings/python/tests/documentation/test_pipeline.py",
"language": "python",
"start-after": "START bert_setup_processor",
"end-before": "END bert_setup_processor",
"dedent": 8}
</literalinclude>
</python>
<rust>
<literalinclude>
{"path": "../../tokenizers/tests/documentation.rs",
"language": "rust",
"start-after": "START bert_setup_processor",
"end-before": "END bert_setup_processor",
"dedent": 4}
</literalinclude>
</rust>
<node>
<literalinclude>
{"path": "../../bindings/node/examples/documentation/pipeline.test.ts",
"language": "js",
"start-after": "START bert_setup_processor",
"end-before": "END bert_setup_processor",
"dedent": 8}
</literalinclude>
</node>
</tokenizerslangcontent>
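As a rough Python sketch of this template (the special-token IDs 1 and 2 below are placeholders that must match the final vocabulary):

```python
from tokenizers.processors import TemplateProcessing

# The [CLS]/[SEP] IDs are placeholders; they must match the trained vocabulary.
bert_tokenizer.post_processor = TemplateProcessing(
    single="[CLS] $A [SEP]",
    pair="[CLS] $A:0 [SEP]:0 $B:1 [SEP]:1",
    special_tokens=[("[CLS]", 1), ("[SEP]", 2)],
)
```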
We can use this tokenizer and train it on wikitext as in the
`quicktour`:
<tokenizerslangcontent>
<python>
<literalinclude>
{"path": "../../bindings/python/tests/documentation/test_pipeline.py",
"language": "python",
"start-after": "START bert_train_tokenizer",
"end-before": "END bert_train_tokenizer",
"dedent": 8}
</literalinclude>
</python>
<rust>
<literalinclude>
{"path": "../../tokenizers/tests/documentation.rs",
"language": "rust",
"start-after": "START bert_train_tokenizer",
"end-before": "END bert_train_tokenizer",
"dedent": 4}
</literalinclude>
</rust>
<node>
<literalinclude>
{"path": "../../bindings/node/examples/documentation/pipeline.test.ts",
"language": "js",
"start-after": "START bert_train_tokenizer",
"end-before": "END bert_train_tokenizer",
"dedent": 8}
</literalinclude>
</node>
</tokenizerslangcontent>
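A minimal Python sketch of the training step, assuming the wikitext-103 raw files have been downloaded locally (the paths and vocabulary size are placeholders):

```python
from tokenizers.trainers import WordPieceTrainer

# Hypothetical local paths to the wikitext-103 raw files.
files = [f"data/wikitext-103-raw/wiki.{split}.raw" for split in ["test", "train", "valid"]]

trainer = WordPieceTrainer(
    vocab_size=30522,  # placeholder value
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
bert_tokenizer.train(files, trainer)
bert_tokenizer.save("bert-wiki.json")
```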
## Decoding
On top of encoding the input texts, a `Tokenizer` also has an API for decoding, that is, converting the IDs
generated by your model back into text. This is done by the methods
`Tokenizer.decode` (for one predicted text) and `Tokenizer.decode_batch` (for a batch of predictions).
The `decoder` will first convert the IDs back to tokens
(using the tokenizer's vocabulary) and remove all special tokens, then
join those tokens with spaces:
<tokenizerslangcontent>
<python>
<literalinclude>
{"path": "../../bindings/python/tests/documentation/test_pipeline.py",
"language": "python",
"start-after": "START test_decoding",
"end-before": "END test_decoding",
"dedent": 8}
</literalinclude>
</python>
<rust>
<literalinclude>
{"path": "../../tokenizers/tests/documentation.rs",
"language": "rust",
"start-after": "START pipeline_test_decoding",
"end-before": "END pipeline_test_decoding",
"dedent": 4}
</literalinclude>
</rust>
<node>
<literalinclude>
{"path": "../../bindings/node/examples/documentation/pipeline.test.ts",
"language": "js",
"start-after": "START test_decoding",
"end-before": "END test_decoding",
"dedent": 8}
</literalinclude>
</node>
</tokenizerslangcontent>
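As a rough Python sketch, assuming a trained `Tokenizer` instance named `tokenizer`:

```python
output = tokenizer.encode("Hello, y'all! How are you?")
print(output.ids)

# decode() maps the IDs back to tokens, drops special tokens by default,
# and joins the remaining tokens with spaces.
print(tokenizer.decode(output.ids))
```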
If you used a model that added special characters to represent subtokens
of a given "word" (like the `"##"` in
WordPiece), you will need to customize the `decoder` to treat
them properly. If we take our previous `bert_tokenizer`, for instance, the
default decoding will give:
<tokenizerslangcontent>
<python>
<literalinclude>
{"path": "../../bindings/python/tests/documentation/test_pipeline.py",
"language": "python",
"start-after": "START bert_test_decoding",
"end-before": "END bert_test_decoding",
"dedent": 8}
</literalinclude>
</python>
<rust>
<literalinclude>
{"path": "../../tokenizers/tests/documentation.rs",
"language": "rust",
"start-after": "START bert_test_decoding",
"end-before": "END bert_test_decoding",
"dedent": 4}
</literalinclude>
</rust>
<node>
<literalinclude>
{"path": "../../bindings/node/examples/documentation/pipeline.test.ts",
"language": "js",
"start-after": "START bert_test_decoding",
"end-before": "END bert_test_decoding",
"dedent": 8}
</literalinclude>
</node>
</tokenizerslangcontent>
But by changing it to a proper decoder, we get:
<tokenizerslangcontent>
<python>
<literalinclude>
{"path": "../../bindings/python/tests/documentation/test_pipeline.py",
"language": "python",
"start-after": "START bert_proper_decoding",
"end-before": "END bert_proper_decoding",
"dedent": 8}
</literalinclude>
</python>
<rust>
<literalinclude>
{"path": "../../tokenizers/tests/documentation.rs",
"language": "rust",
"start-after": "START bert_proper_decoding",
"end-before": "END bert_proper_decoding",
"dedent": 4}
</literalinclude>
</rust>
<node>
<literalinclude>
{"path": "../../bindings/node/examples/documentation/pipeline.test.ts",
"language": "js",
"start-after": "START bert_proper_decoding",
"end-before": "END bert_proper_decoding",
"dedent": 8}
</literalinclude>
</node>
</tokenizerslangcontent>
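A minimal Python sketch of that change: swapping in the WordPiece decoder merges `"##"`-prefixed subtokens back into full words instead of joining every token with a space.

```python
from tokenizers import decoders

# Merge "##"-prefixed subword tokens back into whole words during decoding.
bert_tokenizer.decoder = decoders.WordPiece()
```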
|
huggingface/pytorch-image-models/blob/main/docs/models/tresnet.md
|
TResNet
A **TResNet** is a variant of a [ResNet](https://paperswithcode.com/method/resnet) that aims to boost accuracy while maintaining GPU training and inference efficiency. It incorporates several design tricks, including a SpaceToDepth stem, [Anti-Alias downsampling](https://paperswithcode.com/method/anti-alias-downsampling), In-Place Activated BatchNorm, block selection, and [squeeze-and-excitation layers](https://paperswithcode.com/method/squeeze-and-excitation-block).
## How do I use this model on an image?
To load a pretrained model:
```python
import timm
model = timm.create_model('tresnet_l', pretrained=True)
model.eval()
```
To load and preprocess the image:
```python
import urllib
from PIL import Image
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform
config = resolve_data_config({}, model=model)
transform = create_transform(**config)
url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
urllib.request.urlretrieve(url, filename)
img = Image.open(filename).convert('RGB')
tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```
To get the model predictions:
```python
import torch
with torch.no_grad():
out = model(tensor)
probabilities = torch.nn.functional.softmax(out[0], dim=0)
print(probabilities.shape)
# prints: torch.Size([1000])
```
To get the top-5 predictions class names:
```python
# Get imagenet class mappings
url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
urllib.request.urlretrieve(url, filename)
with open("imagenet_classes.txt", "r") as f:
categories = [s.strip() for s in f.readlines()]
# Print top categories per image
top5_prob, top5_catid = torch.topk(probabilities, 5)
for i in range(top5_prob.size(0)):
print(categories[top5_catid[i]], top5_prob[i].item())
# prints class names and probabilities like:
# [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
```
Replace the model name with the variant you want to use, e.g. `tresnet_l`. You can find the IDs in the model summaries at the top of this page.
To extract image features with this model, follow the [timm feature extraction examples](https://rwightman.github.io/pytorch-image-models/feature_extraction/), just change the name of the model you want to use.
## How do I finetune this model?
You can finetune any of the pre-trained models just by changing the classifier (the last layer).
```python
model = timm.create_model('tresnet_l', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training
script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
## How do I train this model?
You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
## Citation
```BibTeX
@misc{ridnik2020tresnet,
title={TResNet: High Performance GPU-Dedicated Architecture},
author={Tal Ridnik and Hussam Lawen and Asaf Noy and Emanuel Ben Baruch and Gilad Sharir and Itamar Friedman},
year={2020},
eprint={2003.13630},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
Type: model-index
Collections:
- Name: TResNet
Paper:
Title: 'TResNet: High Performance GPU-Dedicated Architecture'
URL: https://paperswithcode.com/paper/tresnet-high-performance-gpu-dedicated
Models:
- Name: tresnet_l
In Collection: TResNet
Metadata:
FLOPs: 10873416792
Parameters: 53456696
File Size: 224440219
Architecture:
- 1x1 Convolution
- Anti-Alias Downsampling
- Convolution
- Global Average Pooling
- InPlace-ABN
- Leaky ReLU
- ReLU
- Residual Connection
- Squeeze-and-Excitation Block
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Cutout
- Label Smoothing
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 8x NVIDIA 100 GPUs
ID: tresnet_l
LR: 0.01
Epochs: 300
Crop Pct: '0.875'
Momentum: 0.9
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/tresnet.py#L267
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/tresnet_l_81_5-235b486c.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 81.49%
Top 5 Accuracy: 95.62%
- Name: tresnet_l_448
In Collection: TResNet
Metadata:
FLOPs: 43488238584
Parameters: 53456696
File Size: 224440219
Architecture:
- 1x1 Convolution
- Anti-Alias Downsampling
- Convolution
- Global Average Pooling
- InPlace-ABN
- Leaky ReLU
- ReLU
- Residual Connection
- Squeeze-and-Excitation Block
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Cutout
- Label Smoothing
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 8x NVIDIA 100 GPUs
ID: tresnet_l_448
LR: 0.01
Epochs: 300
Crop Pct: '0.875'
Momentum: 0.9
Image Size: '448'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/tresnet.py#L285
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/tresnet_l_448-940d0cd1.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 82.26%
Top 5 Accuracy: 95.98%
- Name: tresnet_m
In Collection: TResNet
Metadata:
FLOPs: 5733048064
Parameters: 41282200
File Size: 125861314
Architecture:
- 1x1 Convolution
- Anti-Alias Downsampling
- Convolution
- Global Average Pooling
- InPlace-ABN
- Leaky ReLU
- ReLU
- Residual Connection
- Squeeze-and-Excitation Block
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Cutout
- Label Smoothing
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 8x NVIDIA 100 GPUs
Training Time: < 24 hours
ID: tresnet_m
LR: 0.01
Epochs: 300
Crop Pct: '0.875'
Momentum: 0.9
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/tresnet.py#L261
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/tresnet_m_80_8-dbc13962.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 80.8%
Top 5 Accuracy: 94.86%
- Name: tresnet_m_448
In Collection: TResNet
Metadata:
FLOPs: 22929743104
Parameters: 29278464
File Size: 125861314
Architecture:
- 1x1 Convolution
- Anti-Alias Downsampling
- Convolution
- Global Average Pooling
- InPlace-ABN
- Leaky ReLU
- ReLU
- Residual Connection
- Squeeze-and-Excitation Block
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Cutout
- Label Smoothing
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 8x NVIDIA 100 GPUs
ID: tresnet_m_448
LR: 0.01
Epochs: 300
Crop Pct: '0.875'
Momentum: 0.9
Image Size: '448'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/tresnet.py#L279
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/tresnet_m_448-bc359d10.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 81.72%
Top 5 Accuracy: 95.57%
- Name: tresnet_xl
In Collection: TResNet
Metadata:
FLOPs: 15162534034
Parameters: 75646610
File Size: 314378965
Architecture:
- 1x1 Convolution
- Anti-Alias Downsampling
- Convolution
- Global Average Pooling
- InPlace-ABN
- Leaky ReLU
- ReLU
- Residual Connection
- Squeeze-and-Excitation Block
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Cutout
- Label Smoothing
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 8x NVIDIA 100 GPUs
ID: tresnet_xl
LR: 0.01
Epochs: 300
Crop Pct: '0.875'
Momentum: 0.9
Image Size: '224'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/tresnet.py#L273
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/tresnet_xl_82_0-a2d51b00.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 82.05%
Top 5 Accuracy: 95.93%
- Name: tresnet_xl_448
In Collection: TResNet
Metadata:
FLOPs: 60641712730
Parameters: 75646610
File Size: 224440219
Architecture:
- 1x1 Convolution
- Anti-Alias Downsampling
- Convolution
- Global Average Pooling
- InPlace-ABN
- Leaky ReLU
- ReLU
- Residual Connection
- Squeeze-and-Excitation Block
Tasks:
- Image Classification
Training Techniques:
- AutoAugment
- Cutout
- Label Smoothing
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 8x NVIDIA 100 GPUs
ID: tresnet_xl_448
LR: 0.01
Epochs: 300
Crop Pct: '0.875'
Momentum: 0.9
Image Size: '448'
Weight Decay: 0.0001
Interpolation: bilinear
Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/tresnet.py#L291
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-tresnet/tresnet_l_448-940d0cd1.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 83.06%
Top 5 Accuracy: 96.19%
-->
|
huggingface/hub-docs/blob/main/docs/hub/datasets-viewer-configure.md
|
Configure the Dataset Viewer
The Dataset Viewer supports many [data file formats](./datasets-adding#file-formats), from text to tabular and from image to audio formats.
It also separates the train/validation/test splits based on file and folder names.
To configure the Dataset Viewer for your dataset, first make sure your dataset is in a [supported data format](./datasets-adding#file-formats).
## Configure dropdowns for splits or subsets
In the Dataset Viewer you can view the [train/validation/test](https://en.wikipedia.org/wiki/Training,_validation,_and_test_data_sets) splits of datasets, and sometimes additionally choose between multiple subsets (e.g. one per language).
To define those dropdowns, you can name the data files or their folder after their split names (train/validation/test).
It is also possible to customize your splits manually using YAML.
For more information, feel free to check out the documentation on [Data files Configuration](./datasets-data-files-configuration).
## Disable the viewer
The dataset viewer can be disabled. To do this, add a YAML section to the dataset's `README.md` file (create one if it does not already exist) and add a `viewer` property with the value `false`.
```
---
viewer: false
---
```
Note that the viewer is always disabled on private datasets.
|
huggingface/transformers/blob/main/examples/research_projects/jax-projects/big_bird/README.md
|
Author: [@vasudevgupta7](https://github.com/thevasudevgupta/)
## Intro
In this project, we fine-tuned [**BigBird**](https://arxiv.org/abs/2007.14062) on the [**natural-questions**](https://huggingface.co/datasets/natural_questions) dataset for the **question-answering** task on long documents. **BigBird** is a **sparse-attention based transformer** that extends Transformer-based models, such as BERT, to much **longer sequences**.
Read more about BigBird at https://huggingface.co/blog/big-bird
## Fine-tuning
**Setup**
You need to install JAX yourself by following the official docs ([see here](https://github.com/google/jax#installation)). The other requirements for this project can be installed by running the following command:
```shell
pip3 install -qr requirements.txt
```
**Download & prepare dataset**
The Natural Questions corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. This corpus takes ~100 GB on disk. We have used HuggingFace datasets to download & process the dataset.
```shell
# just run the following command
python3 prepare_natural_questions.py
# this will download the whole dataset from HuggingFace Hub & will make it ready for training
# this script takes ~3 hours to process the dataset
```
**Launch Training**
We trained on Google Cloud's TPU v3-8. Each epoch took around 4.5 hours and the model converged in just 2 epochs. You can see the complete training args in [this script](bigbird_flax.py).
```shell
# just run the following command
python3 train.py
# In case you want to try hyperparameter tuning, you can run a wandb sweep
wandb sweep --project=bigbird sweep_flax.yaml
wandb agent <agent-id-obtained-by-above-CMD>
```
## Evaluation
Our evaluation script is different from the original script and, for simplicity, we evaluate sequences with length up to 4096. We managed to get an **EM score of ~55.2** using our evaluation script.
```shell
# download validation-dataset first
mkdir natural-questions-validation
wget https://huggingface.co/datasets/vasudevgupta/natural-questions-validation/resolve/main/natural_questions-validation.arrow -P natural-questions-validation
wget https://huggingface.co/datasets/vasudevgupta/natural-questions-validation/resolve/main/dataset_info.json -P natural-questions-validation
wget https://huggingface.co/datasets/vasudevgupta/natural-questions-validation/resolve/main/state.json -P natural-questions-validation
# simply run the following command
python3 evaluate.py
```
You can find our checkpoint on the HuggingFace Hub ([see this](https://huggingface.co/vasudevgupta/flax-bigbird-natural-questions)). In case you are interested in PyTorch BigBird fine-tuning, you can refer to [this repository](https://github.com/thevasudevgupta/bigbird).
|
huggingface/blog/blob/main/fine-tune-clip-rsicd.md
|
---
title: Fine tuning CLIP with Remote Sensing (Satellite) images and captions
thumbnail: /blog/assets/30_clip_rsicd/clip_schematic.png
authors:
- user: arampacha
guest: true
- user: devv
guest: true
- user: goutham794
guest: true
- user: cataluna84
guest: true
- user: ghosh-r
guest: true
- user: sujitpal
guest: true
---
# Fine tuning CLIP with Remote Sensing (Satellite) images and captions
<img src="/blog/assets/30_clip_rsicd/clip-rsicd-header-image.png"/>
In July this year, [Hugging Face](https://huggingface.co/) organized a [Flax/JAX Community Week](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md), and invited the community to submit projects to train Hugging Face [transformers](https://github.com/huggingface/transformers) models in the areas of Natural Language Processing (NLP) and Computer Vision (CV).
Participants used Tensor Processing Units (TPUs) with [Flax](https://github.com/google/flax) and [JAX](https://github.com/google/jax). JAX is a linear algebra library (like `numpy`) that can do automatic differentiation ([Autograd](https://github.com/hips/autograd)) and compile down to [XLA](https://www.tensorflow.org/xla), and Flax is a neural network library and ecosystem for JAX. TPU compute time was provided free by [Google Cloud](https://cloud.google.com/), who co-sponsored the event.
Over the next two weeks, teams participated in lectures from Hugging Face and Google, trained one or more models using JAX/Flax, shared them with the community, and provided a [Hugging Face Spaces](https://huggingface.co/spaces) demo showcasing the capabilities of their model. Approximately 100 teams participated in the event, and it resulted in 170 models and 36 demos.
Our team, like probably many others, is a distributed one, spanning 12 time zones. Our common thread is that we all belong to the [TWIML Slack Channel](https://twimlai.slack.com/), where we came together based on a shared interest in Artificial Intelligence (AI) and Machine Learning (ML) topics.
We fine-tuned the [CLIP Network from OpenAI](https://openai.com/clip/) with satellite images and captions from the [RSICD dataset](https://github.com/201528014227051/RSICD_optimal). The CLIP network learns visual concepts by being trained with image and caption pairs in a self-supervised manner, using text paired with images found across the Internet. During inference, the model can predict the most relevant image given a text description, or the most relevant text description given an image. CLIP is powerful enough to be used in a zero-shot manner on everyday images. However, we felt that satellite images were sufficiently different from everyday images that it would be useful to fine-tune CLIP with them. Our intuition turned out to be correct, as the evaluation results (described below) show. In this post, we describe details of our training and evaluation process, and our plans for future work on this project.
The goal of our project was to provide a useful service and demonstrate how to use CLIP for practical use cases. Our model can be used by applications to search through large collections of satellite images using textual queries. Such queries could describe the image in totality (for example, beach, mountain, airport, baseball field, etc.) or mention specific geographic or man-made features within these images. CLIP can similarly be fine-tuned for other domains as well, as shown by the [medclip-demo team](https://huggingface.co/spaces/flax-community/medclip-demo) for medical images.
The ability to search through large collections of images using text queries is an immensely powerful feature, and can be used as much for social good as for malign purposes. Possible applications include national defense and anti-terrorism activities, the ability to spot and address effects of climate change before they become unmanageable, etc. Unfortunately, this power can also be misused, such as for military and police surveillance by authoritarian nation-states, so it does raise some ethical questions as well.
You can read about the project on our [project page](https://github.com/arampacha/CLIP-rsicd), download our [trained model](https://huggingface.co/flax-community/clip-rsicd-v2) to use for inference on your own data, or see it in action on our [demo](https://huggingface.co/spaces/sujitpal/clip-rsicd-demo).
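For reference, here is a minimal inference sketch using the 🤗 Transformers CLIP classes; the checkpoint name comes from our model repo above, while the label list and image path are purely illustrative assumptions.

```python
import jax
from PIL import Image
from transformers import CLIPProcessor, FlaxCLIPModel

# Load the fine-tuned CLIP-RSICD checkpoint and its processor.
model = FlaxCLIPModel.from_pretrained("flax-community/clip-rsicd-v2")
processor = CLIPProcessor.from_pretrained("flax-community/clip-rsicd-v2")

labels = ["residential area", "playground", "stadium", "forest", "airport"]  # illustrative
image = Image.open("satellite_image.png")  # hypothetical local file

inputs = processor(
    text=[f"an aerial photograph of {label}" for label in labels],
    images=image,
    return_tensors="np",
    padding=True,
)
outputs = model(**inputs)

# One probability per candidate caption for the input image.
probs = jax.nn.softmax(outputs.logits_per_image, axis=-1)
print(dict(zip(labels, probs[0].tolist())))
```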
### Training
#### Dataset
We fine-tuned the CLIP model primarily with the [RSICD dataset](https://github.com/201528014227051/RSICD_optimal). This dataset consists of about 10,000 images collected from Google Earth, Baidu Map, MapABC, and Tianditu. It is provided freely to the research community to advance remote sensing captioning via [Exploring Models and Data for Remote Sensing Image Caption Generation](https://arxiv.org/abs/1712.0783) (Lu et al, 2017). The images are (224, 224) RGB images at various resolutions, and each image has up to 5 captions associated with it.
<img src="/blog/assets/30_clip_rsicd/rsicd-images-sampling.png"/>
<center><i>Some examples of images from the RSICD dataset</i></center>
In addition, we used the [UCM Dataset](https://mega.nz/folder/wCpSzSoS#RXzIlrv--TDt3ENZdKN8JA) and the [Sydney dataset](https://mega.nz/folder/pG4yTYYA#4c4buNFLibryZnlujsrwEQ) for training. The UCM dataset is based on the UC Merced Land Use dataset. It consists of 2100 images belonging to 21 classes (100 images per class), and each image has 5 captions. The Sydney dataset contains images of Sydney, Australia from Google Earth. It contains 613 images belonging to 7 classes; images are (500, 500) RGB, with 5 captions for each image. We used these additional datasets because we were not sure if the RSICD dataset would be large enough to fine-tune CLIP.
#### Model
Our model is just the fine-tuned version of the original CLIP model shown below. Inputs to the model are a batch of captions and a batch of images passed through the CLIP text encoder and image encoder respectively. The training process uses [contrastive learning](https://towardsdatascience.com/understanding-contrastive-learning-d5b19fd96607) to learn a joint embedding representation of image and captions. In this embedding space, images and their respective captions are pushed close together, as are similar images and similar captions. Conversely, images and captions for different images, or dissimilar images and captions, are likely to be pushed further apart.
<img src="/blog/assets/30_clip_rsicd/clip_schematic.png"/>
<center><i>CLIP Training and Inference (Image Credit: CLIP: Connecting Text and Images (https://openai.com/clip/))</i></center>
#### Data Augmentation
In order to regularize our dataset and prevent overfitting due to the size of the dataset, we used both image and text augmentation.
Image augmentation was done inline using built-in transforms from Pytorch's [Torchvision](https://pytorch.org/vision/stable/index.html) package. The transformations used were Random Cropping, Random Resizing and Cropping, Color Jitter, and Random Horizontal and Vertical flipping.
We augmented the text with backtranslation to generate captions for images with fewer than 5 unique captions per image. The [Marian MT](https://huggingface.co/transformers/model_doc/marian.html) family of models from Hugging Face was used to translate the existing captions into French, Spanish, Italian, and Portuguese and back to English to fill out the captions for these images.
As shown in these loss plots below, image augmentation reduced overfitting significantly, and text and image augmentation reduced overfitting even further.
<img src="/blog/assets/30_clip_rsicd/image-augment-loss.png"/>
<img src="/blog/assets/30_clip_rsicd/image-text-aug-loss.png"/>
<center><i>Evaluation and Training loss plots comparing (top) no augmentation vs image augmentation, and (bottom) image augmentation vs text+image augmentation</i></center>
### Evaluation
#### Metrics
A subset of the RSICD test set was used for evaluation. We found 30 categories of images in this subset. The evaluation was done by comparing each image with a set of 30 caption sentences of the form `"An aerial photograph of {category}"`. The model produced a ranked list of the 30 captions, from most relevant to least relevant. Categories corresponding to captions with the top k scores (for k=1, 3, 5, and 10) were compared with the category provided via the image file name. The scores are averaged over the entire set of images used for evaluation and reported for various values of k, as shown below.
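A minimal sketch of this top-k scoring (variable names are illustrative, not our actual evaluation code):

```python
import numpy as np

def top_k_accuracy(similarities: np.ndarray, true_category: np.ndarray, k: int) -> float:
    """similarities: (n_images, 30) scores of each image against the 30 category captions;
    true_category: (n_images,) index of the correct category for each image."""
    top_k = np.argsort(-similarities, axis=1)[:, :k]   # best-scoring caption indices per image
    hits = [true_category[i] in top_k[i] for i in range(len(true_category))]
    return float(np.mean(hits))
```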
The `baseline` model represents the pre-trained `openai/clip-vit-base-patch32` CLIP model. This model was fine-tuned with captions and images from the RSICD dataset, which resulted in a significant performance boost, as shown below.
Our best model was trained with image and text augmentation, with batch size 1024 (128 on each of the 8 TPU cores), and the Adam optimizer with learning rate 5e-6. We trained our second best model with the same hyperparameters, except that we used the Adafactor optimizer with learning rate 1e-4. You can download either model from their model repos linked in the table below.
| Model-name | k=1 | k=3 | k=5 | k=10 |
| ---------------------------------------- | ----- | ----- | ----- | ----- |
| baseline | 0.572 | 0.745 | 0.837 | 0.939 |
| bs128x8-lr1e-4-augs/ckpt-2 | 0.819 | 0.950 | 0.974 | 0.994 |
| bs128x8-lr1e-4-imgaugs/ckpt-2 | 0.812 | 0.942 | 0.970 | 0.991 |
| [bs128x8-lr1e-4-imgaugs-textaugs/ckpt-4](https://huggingface.co/flax-community/clip-rsicd)<sup>2</sup> | 0.843 | 0.958 | 0.977 | 0.993 |
| bs128x8-lr5e-5-imgaugs-textaugs/ckpt-8 | 0.831 | 0.959 | 0.977 | 0.994 |
| bs128x8-lr5e-5-imgaugs/ckpt-4 | 0.746 | 0.906 | 0.956 | 0.989 |
| bs128x8-lr5e-5-imgaugs-textaugs-2/ckpt-4 | 0.811 | 0.945 | 0.972 | 0.993 |
| bs128x8-lr5e-5-imgaugs-textaugs-3/ckpt-5 | 0.823 | 0.946 | 0.971 | 0.992 |
| bs128x8-lr5e-5-wd02/ckpt-4 | 0.820 | 0.946 | 0.965 | 0.990 |
| [bs128x8-lr5e-6-adam/ckpt-1](https://huggingface.co/flax-community/clip-rsicd-v2)<sup>1</sup> | **0.883** | **0.968** | **0.982** | **0.998** |
_1 - our best model, 2 - our second best model_
#### Demo
You can access the [CLIP-RSICD Demo](https://huggingface.co/spaces/sujitpal/clip-rsicd-demo) here. It uses our fine-tuned CLIP model to provide the following functionality:
* Text to Image search
* Image to Image search
* Find text feature in image
The first two functionalities use the RSICD test set as their image corpus. The images are encoded using our best fine-tuned CLIP model and stored in an [NMSLib](https://github.com/nmslib/nmslib) index which allows Approximate Nearest Neighbor based retrieval. For text-to-image and image-to-image search respectively, the query text or image is encoded with our model and matched against the image vectors in the corpus. For the third functionality, we divide the incoming image into patches and encode them, encode the queried text feature, match the text vector with each image patch vector, and return the probability of finding the feature in each patch.
### Future Work
We are grateful that we have been given an opportunity to further refine our model. Some ideas we have for future work are as follows:
1. Construct a sequence to sequence model using a CLIP encoder and a GPT-3 decoder and train it for image captioning.
2. Fine-tune the model on more image caption pairs from other datasets and investigate if we can improve its performance.
3. Investigate how fine-tuning affects the performance of model on non-RSICD image caption pairs.
4. Investigate the capability of the fine-tuned model to classify outside the categories it has been fine-tuned on.
5. Evaluate the model using other criteria such as image classification.
|
huggingface/transformers/blob/main/notebooks/README.md
|
<!---
Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# 🤗 Transformers Notebooks
You can find here a list of the official notebooks provided by Hugging Face.
Also, we would like to list here interesting content created by the community.
If you wrote some notebook(s) leveraging 🤗 Transformers and would like to be listed here, please open a
Pull Request so it can be included under the Community notebooks.
## Hugging Face's notebooks 🤗
### Documentation notebooks
You can open any page of the documentation as a notebook in Colab (there is a button directly on said pages) but they are also listed here if you need them:
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| [Quicktour of the library](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/quicktour.ipynb) | A presentation of the various APIs in Transformers |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/quicktour.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/en/transformers_doc/quicktour.ipynb)|
| [Summary of the tasks](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/task_summary.ipynb) | How to run the models of the Transformers library task by task |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/task_summary.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/task_summary.ipynb)|
| [Preprocessing data](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/preprocessing.ipynb) | How to use a tokenizer to preprocess your data |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/preprocessing.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/preprocessing.ipynb)|
| [Fine-tuning a pretrained model](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/training.ipynb) | How to use the Trainer to fine-tune a pretrained model |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/training.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/training.ipynb)|
| [Summary of the tokenizers](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/tokenizer_summary.ipynb) | The differences between the tokenizers algorithm |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/tokenizer_summary.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/tokenizer_summary.ipynb)|
| [Multilingual models](https://github.com/huggingface/notebooks/blob/main/transformers_doc/en/multilingual.ipynb) | How to use the multilingual models of the library |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/multilingual.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/transformers_doc/en/multilingual.ipynb)|
### PyTorch Examples
#### Natural Language Processing[[pytorch-nlp]]
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| [Train your tokenizer](https://github.com/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb) | How to train and use your very own tokenizer |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb)|
| [Train your language model](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch.ipynb) | How to easily start using transformers |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch.ipynb)|
| [How to fine-tune a model on text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on any GLUE task. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb)|
| [How to fine-tune a model on language modeling](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)|
| [How to fine-tune a model on token classification](https://github.com/huggingface/notebooks/blob/main/examples/token_classification.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS). | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb)|
| [How to fine-tune a model on question answering](https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on SQUAD. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb)|
| [How to fine-tune a model on multiple choice](https://github.com/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on SWAG. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)|
| [How to fine-tune a model on translation](https://github.com/huggingface/notebooks/blob/main/examples/translation.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on WMT. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/translation.ipynb)|
| [How to fine-tune a model on summarization](https://github.com/huggingface/notebooks/blob/main/examples/summarization.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on XSUM. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/summarization.ipynb)|
| [How to train a language model from scratch](https://github.com/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb)| Highlight all the steps to effectively train Transformer model on custom data | [](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb)|
| [How to generate text](https://github.com/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb)| How to use different decoding methods for language generation with transformers | [](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb)|
| [How to generate text (with constraints)](https://github.com/huggingface/blog/blob/main/notebooks/53_constrained_beam_search.ipynb)| How to guide language generation with user-provided constraints | [](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/53_constrained_beam_search.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/blog/blob/main/notebooks/53_constrained_beam_search.ipynb)|
| [Reformer](https://github.com/huggingface/blog/blob/main/notebooks/03_reformer.ipynb)| How Reformer pushes the limits of language modeling | [](https://colab.research.google.com/github/patrickvonplaten/blog/blob/main/notebooks/03_reformer.ipynb)| [](https://studiolab.sagemaker.aws/import/github/patrickvonplaten/blog/blob/main/notebooks/03_reformer.ipynb)|
#### Computer Vision[[pytorch-cv]]
| Notebook | Description | | |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------:|
| [How to fine-tune a model on image classification (Torchvision)](https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb) | Show how to preprocess the data using Torchvision and fine-tune any pretrained Vision model on Image Classification | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb) | [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb)|
| [How to fine-tune a model on image classification (Albumentations)](https://github.com/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb) | Show how to preprocess the data using Albumentations and fine-tune any pretrained Vision model on Image Classification | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb) | [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb)|
| [How to fine-tune a model on image classification (Kornia)](https://github.com/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb) | Show how to preprocess the data using Kornia and fine-tune any pretrained Vision model on Image Classification | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb) | [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb)|
| [How to perform zero-shot object detection with OWL-ViT](https://github.com/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb) | Show how to perform zero-shot object detection on images with text queries | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb)|
| [How to fine-tune an image captioning model](https://github.com/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb) | Show how to fine-tune BLIP for image captioning on a custom dataset | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb) | [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_captioning_blip.ipynb)|
| [How to build an image similarity system with Transformers](https://github.com/huggingface/notebooks/blob/main/examples/image_similarity.ipynb) | Show how to build an image similarity system | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_similarity.ipynb) | [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_similarity.ipynb)|
| [How to fine-tune a SegFormer model on semantic segmentation](https://github.com/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb) | Show how to preprocess the data and fine-tune a pretrained SegFormer model on Semantic Segmentation | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb) | [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/semantic_segmentation.ipynb)|
| [How to fine-tune a VideoMAE model on video classification](https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb) | Show how to preprocess the data and fine-tune a pretrained VideoMAE model on Video Classification | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/video_classification.ipynb) | [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/video_classification.ipynb)|
#### Audio[[pytorch-audio]]
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| [How to fine-tune a speech recognition model in English](https://github.com/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb)| Show how to preprocess the data and fine-tune a pretrained Speech model on TIMIT | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb)|
| [How to fine-tune a speech recognition model in any language](https://github.com/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb)| Show how to preprocess the data and fine-tune a multi-lingually pretrained speech model on Common Voice | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb)|
| [How to fine-tune a model on audio classification](https://github.com/huggingface/notebooks/blob/main/examples/audio_classification.ipynb)| Show how to preprocess the data and fine-tune a pretrained Speech model on Keyword Spotting | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb)|
#### Biological Sequences[[pytorch-bio]]
| Notebook | Description | | |
|:----------|:----------------------------------------------------------------------------------------|:-------------|------:|
| [How to fine-tune a pre-trained protein model](https://github.com/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb) | See how to tokenize proteins and fine-tune a large pre-trained protein "language" model | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb) | [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/protein_language_modeling.ipynb) |
| [How to generate protein folds](https://github.com/huggingface/notebooks/blob/main/examples/protein_folding.ipynb) | See how to go from protein sequence to a full protein model and PDB file | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_folding.ipynb) | [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/protein_folding.ipynb) |
| [How to fine-tune a Nucleotide Transformer model](https://github.com/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling.ipynb) | See how to tokenize DNA and fine-tune a large pre-trained DNA "language" model | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling.ipynb) | [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling.ipynb) |
| [Fine-tune a Nucleotide Transformer model with LoRA](https://github.com/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling_with_peft.ipynb) | Train even larger DNA models in a memory-efficient way | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling_with_peft.ipynb) | [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling_with_peft.ipynb) |
#### Other modalities[[pytorch-other]]
| Notebook | Description | | |
|:----------|:----------------------------------------------------------------------------------------|:-------------|------:|
| [Probabilistic Time Series Forecasting](https://github.com/huggingface/notebooks/blob/main/examples/time-series-transformers.ipynb) | See how to train Time Series Transformer on a custom dataset | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/time-series-transformers.ipynb) | [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/time-series-transformers.ipynb) |
#### Utility notebooks[[pytorch-utility]]
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| [How to export model to ONNX](https://github.com/huggingface/notebooks/blob/main/examples/onnx-export.ipynb)| Highlight how to export and run inference workloads through ONNX |
| [How to use Benchmarks](https://github.com/huggingface/notebooks/blob/main/examples/benchmark.ipynb)| How to benchmark models with transformers | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/benchmark.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/benchmark.ipynb)|
### TensorFlow Examples
#### Natural Language Processing[[tensorflow-nlp]]
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| [Train your tokenizer](https://github.com/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb) | How to train and use your very own tokenizer |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/tokenizer_training.ipynb)|
| [Train your language model](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch-tf.ipynb) | How to easily start using transformers |[](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch-tf.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch-tf.ipynb)|
| [How to fine-tune a model on text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on any GLUE task. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)|
| [How to fine-tune a model on language modeling](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on a causal or masked LM task. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)|
| [How to fine-tune a model on token classification](https://github.com/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on a token classification task (NER, PoS). | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb)|
| [How to fine-tune a model on question answering](https://github.com/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on SQUAD. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)|
| [How to fine-tune a model on multiple choice](https://github.com/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on SWAG. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb)|
| [How to fine-tune a model on translation](https://github.com/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on WMT. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)|
| [How to fine-tune a model on summarization](https://github.com/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)| Show how to preprocess the data and fine-tune a pretrained model on XSUM. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)|
#### Computer Vision[[tensorflow-cv]]
| Notebook | Description | | |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------|:-------------|------:|
| [How to fine-tune a model on image classification](https://github.com/huggingface/notebooks/blob/main/examples/image_classification-tf.ipynb) | Show how to preprocess the data and fine-tune any pretrained Vision model on Image Classification | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification-tf.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/image_classification-tf.ipynb)|
| [How to fine-tune a SegFormer model on semantic segmentation](https://github.com/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb) | Show how to preprocess the data and fine-tune a pretrained SegFormer model on Semantic Segmentation | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb)|
#### Biological Sequences[[tensorflow-bio]]
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| [How to fine-tune a pre-trained protein model](https://github.com/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb) | See how to tokenize proteins and fine-tune a large pre-trained protein "language" model | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb) | [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/protein_language_modeling-tf.ipynb) |
#### Utility notebooks[[tensorflow-utility]]
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| [How to train TF/Keras models on TPU](https://github.com/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) | See how to train at high speed on Google's TPU hardware | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) | [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) |
### Optimum notebooks
🤗 [Optimum](https://github.com/huggingface/optimum) is an extension of 🤗 Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on targeted hardware.
| Notebook | Description | | |
|:----------|:-------------|:-------------|------:|
| [How to quantize a model with ONNX Runtime for text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb)| Show how to apply static and dynamic quantization on a model using [ONNX Runtime](https://github.com/microsoft/onnxruntime) for any GLUE task. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_ort.ipynb)|
| [How to quantize a model with Intel Neural Compressor for text classification](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb)| Show how to apply static, dynamic and aware training quantization on a model using [Intel Neural Compressor (INC)](https://github.com/intel/neural-compressor) for any GLUE task. | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_quantization_inc.ipynb)|
| [How to fine-tune a model on text classification with ONNX Runtime](https://github.com/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb)| Show how to preprocess the data and fine-tune a model on any GLUE task using [ONNX Runtime](https://github.com/microsoft/onnxruntime). | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/text_classification_ort.ipynb)|
| [How to fine-tune a model on summarization with ONNX Runtime](https://github.com/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb)| Show how to preprocess the data and fine-tune a model on XSUM using [ONNX Runtime](https://github.com/microsoft/onnxruntime). | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb)| [](https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/main/examples/summarization_ort.ipynb)|
## Community notebooks:
More notebooks developed by the community are available [here](https://hf.co/docs/transformers/community#community-notebooks).
|
huggingface/blog/blob/main/informer.md
|
---
title: "Multivariate Probabilistic Time Series Forecasting with Informer"
thumbnail: /blog/assets/134_informer/thumbnail.png
authors:
- user: elisim
  guest: true
- user: nielsr
- user: kashif
---
# Multivariate Probabilistic Time Series Forecasting with Informer
<script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script>
<a target="_blank" href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multivariate_informer.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
## Introduction
A few months ago we introduced the [Time Series Transformer](https://huggingface.co/blog/time-series-transformers), which is the vanilla Transformer ([Vaswani et al., 2017](https://arxiv.org/abs/1706.03762)) applied to forecasting, and showed an example for the **univariate** probabilistic forecasting task (i.e. predicting each time series' 1-d distribution individually). In this post we introduce the _Informer_ model ([Zhou, Haoyi, et al., 2021](https://arxiv.org/abs/2012.07436)), the AAAI 2021 best paper, which is [now available](https://huggingface.co/docs/transformers/main/en/model_doc/informer) in 🤗 Transformers. We will show how to use the Informer model for the **multivariate** probabilistic forecasting task, i.e., predicting the distribution of a future **vector** of time-series target values. Note that this will also work for the vanilla Time Series Transformer model.
## Multivariate Probabilistic Time Series Forecasting
As far as the modeling aspect of probabilistic forecasting is concerned, the Transformer/Informer will require no change when dealing with multivariate time series. In both the univariate and multivariate setting, the model will receive a sequence of vectors and thus the only change is on the output or emission side.
Modeling the full joint conditional distribution of high dimensional data can get computationally expensive and thus methods resort to some approximation of the distribution, the easiest being to model the data as an independent distribution from the same family, or some low-rank approximation to the full covariance, etc. Here we will just resort to the independent (or diagonal) emissions which are supported for the families of distributions we have implemented [here](https://huggingface.co/docs/transformers/main/en/internal/time_series_utils).
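To make the difference concrete, here is a minimal sketch (not part of the Informer implementation) contrasting a diagonal/independent emission with a full-covariance one using `torch.distributions`; the dimension and the dummy statistics are made up for illustration:

```python
import torch
from torch.distributions import Normal, Independent, MultivariateNormal

num_of_variates = 862  # illustrative only
loc = torch.zeros(num_of_variates)
scale = torch.ones(num_of_variates)

# diagonal (independent) emission: one 1-d Normal per variate,
# reinterpreted as a single 862-dimensional distribution
diag_dist = Independent(Normal(loc, scale), 1)

# full-covariance emission: requires an 862 x 862 covariance matrix,
# which is far more expensive to parameterize and estimate
full_dist = MultivariateNormal(loc, covariance_matrix=torch.eye(num_of_variates))

sample = diag_dist.sample()
print(sample.shape, diag_dist.log_prob(sample).shape)  # torch.Size([862]) torch.Size([])
```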
## Informer - Under The Hood
Based on the vanilla Transformer ([Vaswani et al., 2017](https://arxiv.org/abs/1706.03762)), Informer employs two major improvements. To understand these improvements, let's recall the drawbacks of the vanilla Transformer:
1. **Quadratic computation of canonical self-attention:** The vanilla Transformer has a computational complexity of \\(O(T^2 D)\\) where \\(T\\) is the time series length and \\(D\\) is the dimension of the hidden states. For long sequence time-series forecasting (also known as the _LSTF problem_), this might be really computationally expensive. To solve this problem, Informer employs a new self-attention mechanism called _ProbSparse_ attention, which has \\(O(T \log T)\\) time and space complexity.
1. **Memory bottleneck when stacking layers:** When stacking \\(N\\) encoder/decoder layers, the vanilla Transformer has a memory usage of \\(O(N T^2)\\), which limits the model's capacity for long sequences. Informer uses a _Distilling_ operation to halve the input length between layers, reducing the overall memory usage to \\(O(N\cdot T \log T)\\).
As you can see, the motivation for the Informer model is similar to Longformer ([Beltagy et al., 2020](https://arxiv.org/abs/2004.05150)), Sparse Transformer ([Child et al., 2019](https://arxiv.org/abs/1904.10509)) and other NLP papers for reducing the quadratic complexity of the self-attention mechanism **when the input sequence is long**. Now, let's dive into _ProbSparse_ attention and the _Distilling_ operation with code examples.
### ProbSparse Attention
The main idea of ProbSparse is that the canonical self-attention scores form a long-tail distribution, where the "active" queries lie in the "head" scores and "lazy" queries lie in the "tail" area. By "active" query we mean a query \\(q_i\\) such that the dot-product \\(\langle q_i,k_i \rangle\\) **contributes** to the major attention, whereas a "lazy" query forms a dot-product which generates **trivial** attention. Here, \\(q_i\\) and \\(k_i\\) are the \\(i\\)-th rows in \\(Q\\) and \\(K\\) attention matrices respectively.
|  |
|:--:|
| Vanilla self attention vs ProbSparse attention from [Autoformer (Wu, Haixu, et al., 2021)](https://wuhaixu2016.github.io/pdf/NeurIPS2021_Autoformer.pdf) |
Given the idea of "active" and "lazy" queries, the ProbSparse attention selects the "active" queries, and creates a reduced query matrix \\(Q_{reduced}\\) which is used to calculate the attention weights in \\(O(T \log T)\\). Let's see this more in detail with a code example.
Recall the canonical self-attention formula:
$$
\textrm{Attention}(Q, K, V) = \textrm{softmax}(\frac{QK^T}{\sqrt{d_k}} )V
$$
Where \\(Q\in \mathbb{R}^{L_Q \times d}\\), \\(K\in \mathbb{R}^{L_K \times d}\\) and \\(V\in \mathbb{R}^{L_V \times d}\\). Note that in practice, the input lengths of the queries and keys are typically equal in the self-attention computation, i.e. \\(L_Q = L_K = T\\) where \\(T\\) is the time series length. Therefore, the \\(QK^T\\) multiplication has \\(O(T^2 \cdot d)\\) computational complexity. In ProbSparse attention, our goal is to create a new \\(Q_{reduce}\\) matrix and define:
$$
\textrm{ProbSparseAttention}(Q, K, V) = \textrm{softmax}(\frac{Q_{reduce}K^T}{\sqrt{d_k}} )V
$$
where the \\(Q_{reduce}\\) matrix only selects the Top \\(u\\) "active" queries. Here, \\(u = c \cdot \log L_Q\\) and \\(c\\) is called the _sampling factor_ hyperparameter of the ProbSparse attention. Since \\(Q_{reduce}\\) selects only the Top \\(u\\) queries, its size is \\(c\cdot \log L_Q \times d\\), so the multiplication \\(Q_{reduce}K^T\\) takes only \\(O(L_K \log L_Q) = O(T \log T)\\).
This is good! But how can we select the \\(u\\) "active" queries to create \\(Q_{reduce}\\)? Let's define the _Query Sparsity Measurement_.
#### Query Sparsity Measurement
Query Sparsity Measurement \\(M(q_i, K)\\) is used for selecting the \\(u\\) "active" queries \\(q_i\\) in \\(Q\\) to create \\(Q_{reduce}\\). In theory, the dominant \\(\langle q_i,k_i \rangle\\) pairs encourage the "active" \\(q_i\\)'s probability distribution **away** from the uniform distribution as can be seen in the figure below. Hence, the [KL divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) between the actual queries distribution and the uniform distribution is used to define the sparsity measurement.
|  |
|:--:|
| The illustration of ProbSparse Attention from official [repository](https://github.com/zhouhaoyi/Informer2020)|
In practice, the measurement is defined as:
$$
M(q_i, K) = \max_j \frac{q_ik_j^T}{\sqrt{d}}-\frac{1}{L_k} \sum_{j=1}^{L_k}\frac{q_ik_j^T}{\sqrt{d}}
$$
The important thing to understand here is that when \\(M(q_i, K)\\) is larger, the query \\(q_i\\) should be in \\(Q_{reduce}\\), and vice versa.
But how can we calculate the term \\(q_ik_j^T\\) in non-quadratic time? Recall that most of the dot-products \\(\langle q_i,k_i \rangle\\) generate trivial attention either way (the long-tail distribution property), so it is enough to randomly sample a subset of keys from \\(K\\), which we will call `K_sample` in the code.
Now, we are ready to see the code of `probsparse_attention`:
```python
import math

import numpy as np
import torch
from torch import nn


def probsparse_attention(query_states, key_states, value_states, sampling_factor=5):
    """
    Compute the probsparse self-attention.
    Input shape: Batch x Time x Channel
    Note the additional `sampling_factor` input.
    """
    # get input sizes with logs
    L_K = key_states.size(1)
    L_Q = query_states.size(1)
    log_L_K = np.ceil(np.log1p(L_K)).astype("int").item()
    log_L_Q = np.ceil(np.log1p(L_Q)).astype("int").item()

    # calculate a subset of samples to slice from K and create Q_K_sample
    U_part = min(sampling_factor * L_Q * log_L_K, L_K)

    # create Q_K_sample (the q_i * k_j^T term in the sparsity measurement)
    index_sample = torch.randint(0, L_K, (U_part,))
    K_sample = key_states[:, index_sample, :]
    Q_K_sample = torch.bmm(query_states, K_sample.transpose(1, 2))

    # calculate the query sparsity measurement with Q_K_sample
    M = Q_K_sample.max(dim=-1)[0] - torch.div(Q_K_sample.sum(dim=-1), L_K)

    # calculate u to find the Top-u queries under the sparsity measurement
    u = min(sampling_factor * log_L_Q, L_Q)
    M_top = M.topk(u, sorted=False)[1]

    # calculate Q_reduce as query_states[:, M_top]
    dim_for_slice = torch.arange(query_states.size(0)).unsqueeze(-1)
    Q_reduce = query_states[dim_for_slice, M_top]  # size: c*log_L_Q x channel

    # and now, same as the canonical attention
    d_k = query_states.size(-1)
    attn_scores = torch.bmm(Q_reduce, key_states.transpose(-2, -1))  # Q_reduce x K^T
    attn_scores = attn_scores / math.sqrt(d_k)
    attn_probs = nn.functional.softmax(attn_scores, dim=-1)
    attn_output = torch.bmm(attn_probs, value_states)
    return attn_output, attn_scores
```
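As a quick sanity check, here is a hedged usage sketch of the function above; the tensor sizes are made up for illustration:

```python
import torch

batch, time, channel = 2, 96, 64
q = torch.randn(batch, time, channel)
k = torch.randn(batch, time, channel)
v = torch.randn(batch, time, channel)

attn_output, attn_scores = probsparse_attention(q, k, v, sampling_factor=5)
# only the Top-u "active" queries are kept, so the output has fewer time steps
print(attn_output.shape)  # torch.Size([2, 25, 64]), i.e. (batch, u, channel) with u = 5 * ceil(log1p(96)) = 25
```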
Note that in the implementation, \\(U_{part}\\) contains \\(L_Q\\) in the calculation, for numerical stability reasons (see [this discussion](https://discuss.huggingface.co/t/probsparse-attention-in-informer/34428) for more information).
We did it! Please be aware that this is only a partial implementation of the `probsparse_attention`, and the full implementation can be found in 🤗 Transformers.
### Distilling
Because of the ProbSparse self-attention, the encoder’s feature map has some redundancy that can be removed. Therefore,
the distilling operation is used to halve the input length between encoder layers, thus in theory removing this redundancy. In practice, Informer's "distilling" operation just adds 1D convolution layers with max pooling between the encoder layers. Let \\(X_n\\) be the output of the \\(n\\)-th encoder layer; the distilling operation is then defined as:
$$
X_{n+1} = \textrm{MaxPool}(\textrm{ELU}(\textrm{Conv1d}(X_n)))
$$
Let's see this in code:
```python
from torch import nn

# ConvLayer is a class with a forward pass applying ELU and MaxPool1d
# encoder_layers is assumed to be a list/ModuleList of encoder layers defined elsewhere
def informer_encoder_forward(x_input, num_encoder_layers=3, distil=True):
    # Initialize the convolution layers
    if distil:
        conv_layers = nn.ModuleList([ConvLayer() for _ in range(num_encoder_layers - 1)])
        conv_layers.append(None)
    else:
        conv_layers = [None] * num_encoder_layers

    # Apply a conv_layer between each encoder_layer
    for encoder_layer, conv_layer in zip(encoder_layers, conv_layers):
        output = encoder_layer(x_input)
        if conv_layer is not None:
            output = conv_layer(output)
        # feed the (possibly distilled) output into the next encoder layer
        x_input = output

    return output
```
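The `ConvLayer` used above is not defined in this post; here is a minimal sketch of what such a layer might look like (the kernel size and `d_model` default are illustrative assumptions, not the exact 🤗 Transformers implementation):

```python
import torch
from torch import nn


class ConvLayer(nn.Module):
    """Halves the time dimension with Conv1d -> ELU -> MaxPool1d (stride 2)."""

    def __init__(self, d_model=64):  # d_model is an assumption for illustration
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.activation = nn.ELU()
        self.pool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        # x: (batch, time, channel) -> Conv1d expects (batch, channel, time)
        x = self.conv(x.transpose(1, 2))
        x = self.activation(x)
        x = self.pool(x)
        return x.transpose(1, 2)  # back to (batch, time/2, channel)
```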
By halving the input of each layer, we get a memory usage of \\(O(N\cdot T \log T)\\) instead of \\(O(N\cdot T^2)\\), where \\(N\\) is the number of encoder/decoder layers. This is what we wanted!
The Informer model is [now available](https://huggingface.co/docs/transformers/main/en/model_doc/informer) in the 🤗 Transformers library, and is simply called `InformerModel`. In the sections below, we will show how to train this model on a custom multivariate time-series dataset.
## Set-up Environment
First, let's install the necessary libraries: 🤗 Transformers, 🤗 Datasets, 🤗 Evaluate, 🤗 Accelerate and [GluonTS](https://github.com/awslabs/gluonts).
As we will show, GluonTS will be used for transforming the data to create features as well as for creating appropriate training, validation and test batches.
```python
!pip install -q transformers datasets evaluate accelerate gluonts ujson
```
## Load Dataset
In this blog post, we'll use the `traffic_hourly` dataset, which is available on the [Hugging Face Hub](https://huggingface.co/datasets/monash_tsf). This dataset contains the San Francisco Traffic dataset used by [Lai et al. (2017)](https://arxiv.org/abs/1703.07015). It contains 862 hourly time series showing the road occupancy rates in the range \\([0, 1]\\) on the San Francisco Bay area freeways from 2015 to 2016.
This dataset is part of the [Monash Time Series Forecasting](https://forecastingdata.org/) repository, a collection of time series datasets from a number of domains. It can be viewed as the [GLUE benchmark](https://gluebenchmark.com/) of time series forecasting.
```python
from datasets import load_dataset
dataset = load_dataset("monash_tsf", "traffic_hourly")
```
As can be seen, the dataset contains 3 splits: train, validation and test.
```python
dataset
>>> DatasetDict({
    train: Dataset({
        features: ['start', 'target', 'feat_static_cat', 'feat_dynamic_real', 'item_id'],
        num_rows: 862
    })
    test: Dataset({
        features: ['start', 'target', 'feat_static_cat', 'feat_dynamic_real', 'item_id'],
        num_rows: 862
    })
    validation: Dataset({
        features: ['start', 'target', 'feat_static_cat', 'feat_dynamic_real', 'item_id'],
        num_rows: 862
    })
})
```
Each example contains a few keys, of which `start` and `target` are the most important ones. Let us have a look at the first time series in the dataset:
```python
train_example = dataset["train"][0]
train_example.keys()
>>> dict_keys(['start', 'target', 'feat_static_cat', 'feat_dynamic_real', 'item_id'])
```
The `start` simply indicates the start of the time series (as a datetime), and the `target` contains the actual values of the time series.
The `start` will be useful for adding time-related features to the time series values, as extra inputs to the model (such as "month of year"). Since we know the frequency of the data is `hourly`, we know for instance that the second value has the timestamp `2015-01-01 01:00:01`, the third `2015-01-01 02:00:01`, etc.
```python
print(train_example["start"])
print(len(train_example["target"]))
>>> 2015-01-01 00:00:01
17448
```
The validation set contains the same data as the training set, just extended by a further `prediction_length` in time. This allows us to validate the model's predictions against the ground truth.
The test set is again `prediction_length` longer than the validation set (or some multiple of `prediction_length` longer than the training set, for testing on multiple rolling windows).
```python
validation_example = dataset["validation"][0]
validation_example.keys()
>>> dict_keys(['start', 'target', 'feat_static_cat', 'feat_dynamic_real', 'item_id'])
```
The initial values are exactly the same as the corresponding training example. However, this example has `prediction_length=48` (48 hours, or 2 days) additional values compared to the training example. Let us verify it.
```python
freq = "1H"
prediction_length = 48
assert len(train_example["target"]) + prediction_length == len(
dataset["validation"][0]["target"]
)
```
Let's visualize this:
```python
import matplotlib.pyplot as plt
num_of_samples = 150
figure, axes = plt.subplots()
axes.plot(train_example["target"][-num_of_samples:], color="blue")
axes.plot(
validation_example["target"][-num_of_samples - prediction_length :],
color="red",
alpha=0.5,
)
plt.show()
```

Let's split up the data:
```python
train_dataset = dataset["train"]
test_dataset = dataset["test"]
```
## Update `start` to `pd.Period`
The first thing we'll do is convert the `start` feature of each time series to a pandas `Period` index using the data's `freq`:
```python
from functools import lru_cache
import pandas as pd
import numpy as np
@lru_cache(10_000)
def convert_to_pandas_period(date, freq):
    return pd.Period(date, freq)


def transform_start_field(batch, freq):
    batch["start"] = [convert_to_pandas_period(date, freq) for date in batch["start"]]
    return batch
```
We now use `datasets`' [`set_transform`](https://huggingface.co/docs/datasets/v2.7.0/en/package_reference/main_classes#datasets.Dataset.set_transform) functionality to do this on-the-fly in place:
```python
from functools import partial
train_dataset.set_transform(partial(transform_start_field, freq=freq))
test_dataset.set_transform(partial(transform_start_field, freq=freq))
```
Now, let's convert the dataset into a multivariate time series using the `MultivariateGrouper` from GluonTS. This grouper will convert the individual 1-dimensional time series into a single 2D matrix.
```python
from gluonts.dataset.multivariate_grouper import MultivariateGrouper
num_of_variates = len(train_dataset)
train_grouper = MultivariateGrouper(max_target_dim=num_of_variates)
test_grouper = MultivariateGrouper(
max_target_dim=num_of_variates,
num_test_dates=len(test_dataset) // num_of_variates, # number of rolling test windows
)
multi_variate_train_dataset = train_grouper(train_dataset)
multi_variate_test_dataset = test_grouper(test_dataset)
```
Note that the target is now 2-dimensional, where the first dimension is the number of variates (number of time series) and the second is the time series values (time dimension):
```python
multi_variate_train_example = multi_variate_train_dataset[0]
print("multi_variate_train_example["target"].shape =", multi_variate_train_example["target"].shape)
>>> multi_variate_train_example["target"].shape = (862, 17448)
```
## Define the Model
Next, let's instantiate a model. The model will be trained from scratch, hence we won't use the `from_pretrained` method here, but rather randomly initialize the model from a [`config`](https://huggingface.co/docs/transformers/main/en/model_doc/informer#transformers.InformerConfig).
We specify a couple of additional parameters to the model:
- `prediction_length` (in our case, `48` hours): this is the horizon that the decoder of the Informer will learn to predict for;
- `context_length`: the model will set the `context_length` (input of the encoder) equal to the `prediction_length`, if no `context_length` is specified;
- `lags` for a given frequency: these specify an efficient "look back" mechanism, where we concatenate values from the past to the current values as additional features, e.g. for a `Daily` frequency we might consider a look back of `[1, 7, 30, ...]` or for `Minute` data we might consider `[1, 30, 60, 60*24, ...]` etc.;
- the number of time features: in our case, this will be `5` as we'll add `HourOfDay`, `DayOfWeek`, ..., and `Age` features (see below).
Let us check the default lags provided by GluonTS for the given frequency ("hourly"):
```python
from gluonts.time_feature import get_lags_for_frequency
lags_sequence = get_lags_for_frequency(freq)
print(lags_sequence)
>>> [1, 2, 3, 4, 5, 6, 7, 23, 24, 25, 47, 48, 49, 71, 72, 73, 95, 96, 97, 119, 120,
121, 143, 144, 145, 167, 168, 169, 335, 336, 337, 503, 504, 505, 671, 672, 673, 719, 720, 721]
```
This means that this would look back up to 721 hours (~30 days) for each time step, as additional features. However, the resulting feature vector would end up being of size `len(lags_sequence)*num_of_variates`, which in our case is 34480! This is not going to work, so we will use our own sensible lags.
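As a quick sanity check of that number, here is a small illustrative snippet reusing the variables defined above:

```python
print(len(lags_sequence) * num_of_variates)
>>> 34480
```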
Let us also check the default time features which GluonTS provides us:
```python
from gluonts.time_feature import time_features_from_frequency_str
time_features = time_features_from_frequency_str(freq)
print(time_features)
>>> [<function hour_of_day at 0x7f3809539240>, <function day_of_week at 0x7f3809539360>, <function day_of_month at 0x7f3809539480>, <function day_of_year at 0x7f38095395a0>]
```
In this case, there are four additional features, namely "hour of day", "day of week", "day of month" and "day of year". This means that for each time step, we'll add these features as scalar values. For example, consider the timestamp `2015-01-01 01:00:01`. The four additional features will be:
```python
from pandas.core.arrays.period import period_array
timestamp = pd.Period("2015-01-01 01:00:01", freq=freq)
timestamp_as_index = pd.PeriodIndex(data=period_array([timestamp]))
additional_features = [
(time_feature.__name__, time_feature(timestamp_as_index))
for time_feature in time_features
]
print(dict(additional_features))
>>> {'hour_of_day': array([-0.45652174]), 'day_of_week': array([0.]), 'day_of_month': array([-0.5]), 'day_of_year': array([-0.5])}
```
Note that hours and days are encoded as values between `[-0.5, 0.5]` from GluonTS. For more information about `time_features`, please see [this](https://github.com/awslabs/gluonts/blob/dev/src/gluonts/time_feature/_base.py). Besides those 4 features, we'll also add an "age" feature as we'll see later on in the data transformations.
We now have everything to define the model:
```python
from transformers import InformerConfig, InformerForPrediction
config = InformerConfig(
# in the multivariate setting, input_size is the number of variates in the time series per time step
input_size=num_of_variates,
# prediction length:
prediction_length=prediction_length,
# context length:
context_length=prediction_length * 2,
# lags value copied from 1 week before:
lags_sequence=[1, 24 * 7],
# we'll add 5 time features ("hour_of_day", ..., and "age"):
num_time_features=len(time_features) + 1,
# informer params:
dropout=0.1,
encoder_layers=6,
decoder_layers=4,
# project input from num_of_variates*len(lags_sequence)+num_time_features to:
d_model=64,
)
model = InformerForPrediction(config)
```
By default, the model uses a diagonal Student-t distribution (but this is [configurable](https://huggingface.co/docs/transformers/main/en/internal/time_series_utils)):
```python
model.config.distribution_output
>>> 'student_t'
```
## Define Transformations
Next, we define the transformations for the data, in particular for the creation of the time features (based on the dataset or universal ones).
Again, we'll use the GluonTS library for this. We define a `Chain` of transformations (which is a bit comparable to `torchvision.transforms.Compose` for images). It allows us to combine several transformations into a single pipeline.
```python
from gluonts.time_feature import TimeFeature
from gluonts.dataset.field_names import FieldName
from gluonts.transform import (
AddAgeFeature,
AddObservedValuesIndicator,
AddTimeFeatures,
AsNumpyArray,
Chain,
ExpectedNumInstanceSampler,
InstanceSplitter,
RemoveFields,
SelectFields,
SetField,
TestSplitSampler,
Transformation,
ValidationSplitSampler,
VstackFeatures,
RenameFields,
)
```
The transformations below are annotated with comments, to explain what they do. At a high level, we will iterate over the individual time series of our dataset and add/remove fields or features:
```python
from transformers import PretrainedConfig


def create_transformation(freq: str, config: PretrainedConfig) -> Transformation:
    # create list of fields to remove later
    remove_field_names = []
    if config.num_static_real_features == 0:
        remove_field_names.append(FieldName.FEAT_STATIC_REAL)
    if config.num_dynamic_real_features == 0:
        remove_field_names.append(FieldName.FEAT_DYNAMIC_REAL)
    if config.num_static_categorical_features == 0:
        remove_field_names.append(FieldName.FEAT_STATIC_CAT)

    return Chain(
        # step 1: remove static/dynamic fields if not specified
        [RemoveFields(field_names=remove_field_names)]
        # step 2: convert the data to NumPy (potentially not needed)
        + (
            [
                AsNumpyArray(
                    field=FieldName.FEAT_STATIC_CAT,
                    expected_ndim=1,
                    dtype=int,
                )
            ]
            if config.num_static_categorical_features > 0
            else []
        )
        + (
            [
                AsNumpyArray(
                    field=FieldName.FEAT_STATIC_REAL,
                    expected_ndim=1,
                )
            ]
            if config.num_static_real_features > 0
            else []
        )
        + [
            AsNumpyArray(
                field=FieldName.TARGET,
                # we expect an extra dim for the multivariate case:
                expected_ndim=1 if config.input_size == 1 else 2,
            ),
            # step 3: handle the NaN's by filling in the target with zero
            # and return the mask (which is in the observed values)
            # true for observed values, false for nan's
            # the decoder uses this mask (no loss is incurred for unobserved values)
            # see loss_weights inside the xxxForPrediction model
            AddObservedValuesIndicator(
                target_field=FieldName.TARGET,
                output_field=FieldName.OBSERVED_VALUES,
            ),
            # step 4: add temporal features based on freq of the dataset
            # these serve as positional encodings
            AddTimeFeatures(
                start_field=FieldName.START,
                target_field=FieldName.TARGET,
                output_field=FieldName.FEAT_TIME,
                time_features=time_features_from_frequency_str(freq),
                pred_length=config.prediction_length,
            ),
            # step 5: add another temporal feature (just a single number)
            # tells the model where in the life the value of the time series is
            # sort of running counter
            AddAgeFeature(
                target_field=FieldName.TARGET,
                output_field=FieldName.FEAT_AGE,
                pred_length=config.prediction_length,
                log_scale=True,
            ),
            # step 6: vertically stack all the temporal features into the key FEAT_TIME
            VstackFeatures(
                output_field=FieldName.FEAT_TIME,
                input_fields=[FieldName.FEAT_TIME, FieldName.FEAT_AGE]
                + (
                    [FieldName.FEAT_DYNAMIC_REAL]
                    if config.num_dynamic_real_features > 0
                    else []
                ),
            ),
            # step 7: rename to match HuggingFace names
            RenameFields(
                mapping={
                    FieldName.FEAT_STATIC_CAT: "static_categorical_features",
                    FieldName.FEAT_STATIC_REAL: "static_real_features",
                    FieldName.FEAT_TIME: "time_features",
                    FieldName.TARGET: "values",
                    FieldName.OBSERVED_VALUES: "observed_mask",
                }
            ),
        ]
    )
```
## Define `InstanceSplitter`
For training/validation/testing we next create an `InstanceSplitter` which is used to sample windows from the dataset (as, remember, we can't pass the entire history of values to the model due to time and memory constraints).
The instance splitter samples random `context_length` sized and subsequent `prediction_length` sized windows from the data, and appends a `past_` or `future_` key to any temporal keys in `time_series_fields` for the respective windows. The instance splitter can be configured into three different modes:
1. `mode="train"`: Here we sample the context and prediction length windows randomly from the dataset given to it (the training dataset)
2. `mode="validation"`: Here we sample the very last context length window and prediction window from the dataset given to it (for the back-testing or validation likelihood calculations)
3. `mode="test"`: Here we sample the very last context length window only (for the prediction use case)
```python
from gluonts.transform.sampler import InstanceSampler
from typing import Optional


def create_instance_splitter(
    config: PretrainedConfig,
    mode: str,
    train_sampler: Optional[InstanceSampler] = None,
    validation_sampler: Optional[InstanceSampler] = None,
) -> Transformation:
    assert mode in ["train", "validation", "test"]

    instance_sampler = {
        "train": train_sampler
        or ExpectedNumInstanceSampler(
            num_instances=1.0, min_future=config.prediction_length
        ),
        "validation": validation_sampler
        or ValidationSplitSampler(min_future=config.prediction_length),
        "test": TestSplitSampler(),
    }[mode]

    return InstanceSplitter(
        target_field="values",
        is_pad_field=FieldName.IS_PAD,
        start_field=FieldName.START,
        forecast_start_field=FieldName.FORECAST_START,
        instance_sampler=instance_sampler,
        past_length=config.context_length + max(config.lags_sequence),
        future_length=config.prediction_length,
        time_series_fields=["time_features", "observed_mask"],
    )
```
## Create DataLoaders
Next, it's time to create the DataLoaders, which allow us to have batches of (input, output) pairs - or in other words (`past_values`, `future_values`).
```python
from typing import Iterable

import torch
from gluonts.itertools import Cached, Cyclic
from gluonts.dataset.loader import as_stacked_batches


def create_train_dataloader(
    config: PretrainedConfig,
    freq,
    data,
    batch_size: int,
    num_batches_per_epoch: int,
    shuffle_buffer_length: Optional[int] = None,
    cache_data: bool = True,
    **kwargs,
) -> Iterable:
    PREDICTION_INPUT_NAMES = [
        "past_time_features",
        "past_values",
        "past_observed_mask",
        "future_time_features",
    ]
    if config.num_static_categorical_features > 0:
        PREDICTION_INPUT_NAMES.append("static_categorical_features")

    if config.num_static_real_features > 0:
        PREDICTION_INPUT_NAMES.append("static_real_features")

    TRAINING_INPUT_NAMES = PREDICTION_INPUT_NAMES + [
        "future_values",
        "future_observed_mask",
    ]

    transformation = create_transformation(freq, config)
    transformed_data = transformation.apply(data, is_train=True)
    if cache_data:
        transformed_data = Cached(transformed_data)

    # we initialize a Training instance
    instance_splitter = create_instance_splitter(config, "train")

    # the instance splitter will sample a window of
    # context length + lags + prediction length (from all the possible transformed time series, 1 in our case)
    # randomly from within the target time series and return an iterator.
    stream = Cyclic(transformed_data).stream()
    training_instances = instance_splitter.apply(stream)

    return as_stacked_batches(
        training_instances,
        batch_size=batch_size,
        shuffle_buffer_length=shuffle_buffer_length,
        field_names=TRAINING_INPUT_NAMES,
        output_type=torch.tensor,
        num_batches_per_epoch=num_batches_per_epoch,
    )
```
```python
def create_backtest_dataloader(
    config: PretrainedConfig,
    freq,
    data,
    batch_size: int,
    **kwargs,
):
    PREDICTION_INPUT_NAMES = [
        "past_time_features",
        "past_values",
        "past_observed_mask",
        "future_time_features",
    ]
    if config.num_static_categorical_features > 0:
        PREDICTION_INPUT_NAMES.append("static_categorical_features")

    if config.num_static_real_features > 0:
        PREDICTION_INPUT_NAMES.append("static_real_features")

    transformation = create_transformation(freq, config)
    transformed_data = transformation.apply(data)

    # we create a Validation Instance splitter which will sample the very last
    # context window seen during training only for the encoder.
    instance_sampler = create_instance_splitter(config, "validation")

    # we apply the transformations in train mode
    testing_instances = instance_sampler.apply(transformed_data, is_train=True)

    return as_stacked_batches(
        testing_instances,
        batch_size=batch_size,
        output_type=torch.tensor,
        field_names=PREDICTION_INPUT_NAMES,
    )


def create_test_dataloader(
    config: PretrainedConfig,
    freq,
    data,
    batch_size: int,
    **kwargs,
):
    PREDICTION_INPUT_NAMES = [
        "past_time_features",
        "past_values",
        "past_observed_mask",
        "future_time_features",
    ]
    if config.num_static_categorical_features > 0:
        PREDICTION_INPUT_NAMES.append("static_categorical_features")

    if config.num_static_real_features > 0:
        PREDICTION_INPUT_NAMES.append("static_real_features")

    transformation = create_transformation(freq, config)
    transformed_data = transformation.apply(data, is_train=False)

    # We create a test Instance splitter to sample the very last
    # context window from the dataset provided.
    instance_sampler = create_instance_splitter(config, "test")

    # We apply the transformations in test mode
    testing_instances = instance_sampler.apply(transformed_data, is_train=False)

    return as_stacked_batches(
        testing_instances,
        batch_size=batch_size,
        output_type=torch.tensor,
        field_names=PREDICTION_INPUT_NAMES,
    )
```
```python
train_dataloader = create_train_dataloader(
config=config,
freq=freq,
data=multi_variate_train_dataset,
batch_size=256,
num_batches_per_epoch=100,
num_workers=2,
)
test_dataloader = create_backtest_dataloader(
config=config,
freq=freq,
data=multi_variate_test_dataset,
batch_size=32,
)
```
Let's check the first batch:
```python
batch = next(iter(train_dataloader))
for k, v in batch.items():
print(k, v.shape, v.type())
>>> past_time_features torch.Size([256, 264, 5]) torch.FloatTensor
past_values torch.Size([256, 264, 862]) torch.FloatTensor
past_observed_mask torch.Size([256, 264, 862]) torch.FloatTensor
future_time_features torch.Size([256, 48, 5]) torch.FloatTensor
future_values torch.Size([256, 48, 862]) torch.FloatTensor
future_observed_mask torch.Size([256, 48, 862]) torch.FloatTensor
```
As can be seen, we don't feed `input_ids` and `attention_mask` to the encoder (as would be the case for NLP models), but rather `past_values`, along with `past_observed_mask`, `past_time_features` and `static_real_features`.
The decoder inputs consist of `future_values`, `future_observed_mask` and `future_time_features`. The `future_values` can be seen as the equivalent of `decoder_input_ids` in NLP.
We refer to the [docs](https://huggingface.co/docs/transformers/main/en/model_doc/informer#transformers.InformerModel.forward.past_values) for a detailed explanation for each of them.
## Forward Pass
Let's perform a single forward pass with the batch we just created:
```python
# perform forward pass
outputs = model(
past_values=batch["past_values"],
past_time_features=batch["past_time_features"],
past_observed_mask=batch["past_observed_mask"],
static_categorical_features=batch["static_categorical_features"]
if config.num_static_categorical_features > 0
else None,
static_real_features=batch["static_real_features"]
if config.num_static_real_features > 0
else None,
future_values=batch["future_values"],
future_time_features=batch["future_time_features"],
future_observed_mask=batch["future_observed_mask"],
output_hidden_states=True,
)
```
```python
print("Loss:", outputs.loss.item())
>>> Loss: -1071.5718994140625
```
Note that the model is returning a loss. This is possible as the decoder automatically shifts the `future_values` one position to the right in order to have the labels. This allows computing a loss between the predicted values and the labels. The loss is the negative log-likelihood of the predicted distribution with respect to the ground truth values and tends to negative infinity.
Also note that the decoder uses a causal mask to not look into the future as the values it needs to predict are in the `future_values` tensor.
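To make the loss concrete, here is a small illustrative sketch (not the library's internal code) of a negative log-likelihood under a Student-t distribution with `torch.distributions`; the parameter values are made up:

```python
import torch
from torch.distributions import StudentT

# made-up predicted parameters for a single target value
predicted = StudentT(df=torch.tensor(3.0), loc=torch.tensor(0.2), scale=torch.tensor(0.1))
ground_truth = torch.tensor(0.25)

nll = -predicted.log_prob(ground_truth)
print(nll)  # the training loss is the mean of such terms over the observed values
```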
## Train the Model
It's time to train the model! We'll use a standard PyTorch training loop.
We will use the 🤗 [Accelerate](https://huggingface.co/docs/accelerate/index) library here, which automatically places the model, optimizer and dataloader on the appropriate `device`.
```python
from accelerate import Accelerator
from torch.optim import AdamW

epochs = 25
loss_history = []

accelerator = Accelerator()
device = accelerator.device

model.to(device)
optimizer = AdamW(model.parameters(), lr=6e-4, betas=(0.9, 0.95), weight_decay=1e-1)

model, optimizer, train_dataloader = accelerator.prepare(
    model,
    optimizer,
    train_dataloader,
)

model.train()
for epoch in range(epochs):
    for idx, batch in enumerate(train_dataloader):
        optimizer.zero_grad()
        outputs = model(
            static_categorical_features=batch["static_categorical_features"].to(device)
            if config.num_static_categorical_features > 0
            else None,
            static_real_features=batch["static_real_features"].to(device)
            if config.num_static_real_features > 0
            else None,
            past_time_features=batch["past_time_features"].to(device),
            past_values=batch["past_values"].to(device),
            future_time_features=batch["future_time_features"].to(device),
            future_values=batch["future_values"].to(device),
            past_observed_mask=batch["past_observed_mask"].to(device),
            future_observed_mask=batch["future_observed_mask"].to(device),
        )
        loss = outputs.loss

        # Backpropagation
        accelerator.backward(loss)
        optimizer.step()

        loss_history.append(loss.item())
        if idx % 100 == 0:
            print(loss.item())

>>> -1081.978515625
...
-2877.723876953125
```
```python
# view training
loss_history = np.array(loss_history).reshape(-1)
x = range(loss_history.shape[0])
plt.figure(figsize=(10, 5))
plt.plot(x, loss_history, label="train")
plt.title("Loss", fontsize=15)
plt.legend(loc="upper right")
plt.xlabel("iteration")
plt.ylabel("nll")
plt.show()
```

## Inference
At inference time, it's recommended to use the `generate()` method for autoregressive generation, similar to NLP models.
Forecasting involves getting data from the test instance sampler, which will sample the very last `context_length` sized window of values from each time series in the dataset, and pass it to the model. Note that we pass `future_time_features`, which are known ahead of time, to the decoder.
The model will autoregressively sample a certain number of values from the predicted distribution and pass them back to the decoder to return the prediction outputs:
```python
model.eval()

forecasts_ = []

for batch in test_dataloader:
    outputs = model.generate(
        static_categorical_features=batch["static_categorical_features"].to(device)
        if config.num_static_categorical_features > 0
        else None,
        static_real_features=batch["static_real_features"].to(device)
        if config.num_static_real_features > 0
        else None,
        past_time_features=batch["past_time_features"].to(device),
        past_values=batch["past_values"].to(device),
        future_time_features=batch["future_time_features"].to(device),
        past_observed_mask=batch["past_observed_mask"].to(device),
    )
    forecasts_.append(outputs.sequences.cpu().numpy())
```
The model outputs a tensor of shape (`batch_size`, `number of samples`, `prediction length`, `input_size`).
In this case, we get `100` possible values for the next `48` hours for each of the `862` time series (for each example in the batch which is of size `1` since we only have a single multivariate time series):
```python
forecasts_[0].shape
>>> (1, 100, 48, 862)
```
We'll stack them vertically, to get forecasts for all time-series in the test dataset (just in case there are more time series in the test set):
```python
forecasts = np.vstack(forecasts_)
print(forecasts.shape)
>>> (1, 100, 48, 862)
```
We can evaluate the resulting forecast with respect to the ground truth out of sample values present in the test set. For that, we'll use the 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index) library, which includes the [MASE](https://huggingface.co/spaces/evaluate-metric/mase) and [sMAPE](https://huggingface.co/spaces/evaluate-metric/smape) metrics.
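As a quick reminder of the metric, MASE scales the mean absolute error of the forecast by the in-sample error of a naive seasonal forecast:
$$
\textrm{MASE} = \frac{\frac{1}{H}\sum_{t=T+1}^{T+H} |y_t - \hat{y}_t|}{\frac{1}{T-m}\sum_{t=m+1}^{T} |y_t - y_{t-m}|}
$$
where \\(H\\) is the prediction length, \\(T\\) the length of the training series, \\(m\\) the seasonal periodicity (24 for hourly data) and \\(\hat{y}_t\\) the forecast.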
We calculate both metrics for each time series variate in the dataset:
```python
from evaluate import load
from gluonts.time_feature import get_seasonality

mase_metric = load("evaluate-metric/mase")
smape_metric = load("evaluate-metric/smape")

forecast_median = np.median(forecasts, 1).squeeze(0).T

mase_metrics = []
smape_metrics = []
for item_id, ts in enumerate(test_dataset):
    training_data = ts["target"][:-prediction_length]
    ground_truth = ts["target"][-prediction_length:]

    mase = mase_metric.compute(
        predictions=forecast_median[item_id],
        references=np.array(ground_truth),
        training=np.array(training_data),
        periodicity=get_seasonality(freq),
    )
    mase_metrics.append(mase["mase"])

    smape = smape_metric.compute(
        predictions=forecast_median[item_id],
        references=np.array(ground_truth),
    )
    smape_metrics.append(smape["smape"])
```
```python
print(f"MASE: {np.mean(mase_metrics)}")
>>> MASE: 1.1913437728068093
print(f"sMAPE: {np.mean(smape_metrics)}")
>>> sMAPE: 0.5322665081607634
```
```python
plt.scatter(mase_metrics, smape_metrics, alpha=0.2)
plt.xlabel("MASE")
plt.ylabel("sMAPE")
plt.show()
```

To plot the prediction for any time series variate with respect to the ground truth test data, we define the following helper:
```python
import matplotlib.dates as mdates


def plot(ts_index, mv_index):
    fig, ax = plt.subplots()

    index = pd.period_range(
        start=multi_variate_test_dataset[ts_index][FieldName.START],
        periods=len(multi_variate_test_dataset[ts_index][FieldName.TARGET]),
        freq=multi_variate_test_dataset[ts_index][FieldName.START].freq,
    ).to_timestamp()

    ax.xaxis.set_minor_locator(mdates.HourLocator())

    ax.plot(
        index[-2 * prediction_length :],
        multi_variate_test_dataset[ts_index]["target"][mv_index, -2 * prediction_length :],
        label="actual",
    )

    ax.plot(
        index[-prediction_length:],
        forecasts[ts_index, ..., mv_index].mean(axis=0),
        label="mean",
    )
    ax.fill_between(
        index[-prediction_length:],
        forecasts[ts_index, ..., mv_index].mean(0)
        - forecasts[ts_index, ..., mv_index].std(axis=0),
        forecasts[ts_index, ..., mv_index].mean(0)
        + forecasts[ts_index, ..., mv_index].std(axis=0),
        alpha=0.2,
        interpolate=True,
        label="+/- 1-std",
    )
    ax.legend()
    fig.autofmt_xdate()
```
For example:
```python
plot(0, 344)
```

## Conclusion
How do we compare against other models? The [Monash Time Series Repository](https://forecastingdata.org/#results) has a comparison table of test set MASE metrics which we can add to:
|Dataset | SES| Theta | TBATS| ETS | (DHR-)ARIMA| PR| CatBoost | FFNN | DeepAR | N-BEATS | WaveNet| Transformer (uni.) | **Informer (mv. our)**|
|:------------------:|:-----------------:|:--:|:--:|:--:|:--:|:--:|:--:|:---:|:---:|:--:|:--:|:--:|:--:|
|Traffic Hourly | 1.922 | 1.922 | 2.482 | 2.294| 2.535| 1.281| 1.571 |0.892| 0.825 |1.100| 1.066 | **0.821** | 1.191 |
As can be seen, and perhaps surprising to some, the multivariate forecasts are typically _worse_ than the univariate ones, the reason being the difficulty in estimating the cross-series correlations/relationships. The additional variance added by the estimates often harms the resulting forecasts or the model learns spurious correlations. We refer to [this paper](https://openreview.net/forum?id=GpW327gxLTF) for further reading. Multivariate models tend to work well when trained on a lot of data.
So the vanilla Transformer still performs best here! In the future, we hope to better benchmark these models in a central place to ease reproducing the results of several papers. Stay tuned for more!
## Resources
We recommend checking out the [Informer docs](https://huggingface.co/docs/transformers/main/en/model_doc/informer) and the [example notebook](https://github.com/huggingface/notebooks/blob/main/examples/multivariate_informer.ipynb) linked at the top of this blog post.
|
huggingface/pytorch-image-models/blob/main/docs/models/se-resnet.md
|
# SE-ResNet
**SE ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
## How do I use this model on an image?
To load a pretrained model:
```python
import timm
model = timm.create_model('seresnet152d', pretrained=True)
model.eval()
```
To load and preprocess the image:
```python
import urllib
from PIL import Image
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform
config = resolve_data_config({}, model=model)
transform = create_transform(**config)
url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
urllib.request.urlretrieve(url, filename)
img = Image.open(filename).convert('RGB')
tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```
To get the model predictions:
```python
import torch
with torch.no_grad():
    out = model(tensor)
probabilities = torch.nn.functional.softmax(out[0], dim=0)
print(probabilities.shape)
# prints: torch.Size([1000])
```
To get the top-5 predictions class names:
```python
# Get imagenet class mappings
url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
urllib.request.urlretrieve(url, filename)
with open("imagenet_classes.txt", "r") as f:
categories = [s.strip() for s in f.readlines()]
# Print top categories per image
top5_prob, top5_catid = torch.topk(probabilities, 5)
for i in range(top5_prob.size(0)):
    print(categories[top5_catid[i]], top5_prob[i].item())
# prints class names and probabilities like:
# [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
```
Replace the model name with the variant you want to use, e.g. `seresnet152d`. You can find the IDs in the model summaries at the top of this page.
To extract image features with this model, follow the [timm feature extraction examples](https://rwightman.github.io/pytorch-image-models/feature_extraction/), just change the name of the model you want to use.
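For example, here is a short sketch of feature extraction using timm's `features_only` option; the exact number and shapes of the returned feature maps depend on the model:

```python
import timm
import torch

feature_extractor = timm.create_model('seresnet152d', pretrained=True, features_only=True)
feature_extractor.eval()

with torch.no_grad():
    features = feature_extractor(torch.randn(1, 3, 256, 256))

# one feature map per stage of the network
for f in features:
    print(f.shape)
```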
## How do I finetune this model?
You can finetune any of the pre-trained models just by changing the classifier (the last layer).
```python
model = timm.create_model('seresnet152d', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training
script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
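As a rough illustration, here is a minimal finetuning sketch, assuming you already have a PyTorch `DataLoader` called `train_loader` yielding image/label batches and a `num_epochs` value; it omits validation, augmentation and learning-rate scheduling:

```python
import torch
import timm

model = timm.create_model('seresnet152d', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
criterion = torch.nn.CrossEntropyLoss()

# train_loader and num_epochs are assumed to be defined by you
model.train()
for epoch in range(num_epochs):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```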
## How do I train this model?
You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
## Citation
```BibTeX
@misc{hu2019squeezeandexcitation,
title={Squeeze-and-Excitation Networks},
author={Jie Hu and Li Shen and Samuel Albanie and Gang Sun and Enhua Wu},
year={2019},
eprint={1709.01507},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
Type: model-index
Collections:
- Name: SE ResNet
Paper:
Title: Squeeze-and-Excitation Networks
URL: https://paperswithcode.com/paper/squeeze-and-excitation-networks
Models:
- Name: seresnet152d
In Collection: SE ResNet
Metadata:
FLOPs: 20161904304
Parameters: 66840000
File Size: 268144497
Architecture:
- 1x1 Convolution
- Batch Normalization
- Bottleneck Residual Block
- Convolution
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Block
- Residual Connection
- Softmax
- Squeeze-and-Excitation Block
Tasks:
- Image Classification
Training Techniques:
- Label Smoothing
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 8x NVIDIA Titan X GPUs
ID: seresnet152d
LR: 0.6
Epochs: 100
Layers: 152
Dropout: 0.2
Crop Pct: '0.94'
Momentum: 0.9
Batch Size: 1024
Image Size: '256'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1206
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnet152d_ra2-04464dd2.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 83.74%
Top 5 Accuracy: 96.77%
- Name: seresnet50
In Collection: SE ResNet
Metadata:
FLOPs: 5285062320
Parameters: 28090000
File Size: 112621903
Architecture:
- 1x1 Convolution
- Batch Normalization
- Bottleneck Residual Block
- Convolution
- Global Average Pooling
- Max Pooling
- ReLU
- Residual Block
- Residual Connection
- Softmax
- Squeeze-and-Excitation Block
Tasks:
- Image Classification
Training Techniques:
- Label Smoothing
- SGD with Momentum
- Weight Decay
Training Data:
- ImageNet
Training Resources: 8x NVIDIA Titan X GPUs
ID: seresnet50
LR: 0.6
Epochs: 100
Layers: 50
Dropout: 0.2
Crop Pct: '0.875'
Momentum: 0.9
Batch Size: 1024
Image Size: '224'
Interpolation: bicubic
Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1180
Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnet50_ra_224-8efdb4bb.pth
Results:
- Task: Image Classification
Dataset: ImageNet
Metrics:
Top 1 Accuracy: 80.26%
Top 5 Accuracy: 95.07%
-->
|
huggingface/diffusers/blob/main/docs/source/en/api/pipelines/kandinsky_v22.md
|
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Kandinsky 2.2
Kandinsky 2.2 is created by [Arseniy Shakhmatov](https://github.com/cene555), [Anton Razzhigaev](https://github.com/razzant), [Aleksandr Nikolich](https://github.com/AlexWortega), [Vladimir Arkhipkin](https://github.com/oriBetelgeuse), [Igor Pavlov](https://github.com/boomb0om), [Andrey Kuznetsov](https://github.com/kuznetsoffandrey), and [Denis Dimitrov](https://github.com/denndimitrov).
The description from its GitHub page is:
*Kandinsky 2.2 brings substantial improvements upon its predecessor, Kandinsky 2.1, by introducing a new, more powerful image encoder - CLIP-ViT-G and the ControlNet support. The switch to CLIP-ViT-G as the image encoder significantly increases the model's capability to generate more aesthetic pictures and better understand text, thus enhancing the model's overall performance. The addition of the ControlNet mechanism allows the model to effectively control the process of generating images. This leads to more accurate and visually appealing outputs and opens new possibilities for text-guided image manipulation.*
The original codebase can be found at [ai-forever/Kandinsky-2](https://github.com/ai-forever/Kandinsky-2).
<Tip>
Check out the [Kandinsky Community](https://huggingface.co/kandinsky-community) organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting.
</Tip>
<Tip>
Make sure to check out the schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
</Tip>
## KandinskyV22PriorPipeline
[[autodoc]] KandinskyV22PriorPipeline
- all
- __call__
- interpolate
## KandinskyV22Pipeline
[[autodoc]] KandinskyV22Pipeline
- all
- __call__
## KandinskyV22CombinedPipeline
[[autodoc]] KandinskyV22CombinedPipeline
- all
- __call__
## KandinskyV22ControlnetPipeline
[[autodoc]] KandinskyV22ControlnetPipeline
- all
- __call__
## KandinskyV22PriorEmb2EmbPipeline
[[autodoc]] KandinskyV22PriorEmb2EmbPipeline
- all
- __call__
- interpolate
## KandinskyV22Img2ImgPipeline
[[autodoc]] KandinskyV22Img2ImgPipeline
- all
- __call__
## KandinskyV22Img2ImgCombinedPipeline
[[autodoc]] KandinskyV22Img2ImgCombinedPipeline
- all
- __call__
## KandinskyV22ControlnetImg2ImgPipeline
[[autodoc]] KandinskyV22ControlnetImg2ImgPipeline
- all
- __call__
## KandinskyV22InpaintPipeline
[[autodoc]] KandinskyV22InpaintPipeline
- all
- __call__
## KandinskyV22InpaintCombinedPipeline
[[autodoc]] KandinskyV22InpaintCombinedPipeline
- all
- __call__
|
huggingface/blog/blob/main/huggy-lingo.md
|
---
title: "Huggy Lingo: Using Machine Learning to Improve Language Metadata on the Hugging Face Hub"
thumbnail: blog/assets/156_huggylingo/Huggy_Lingo.png
authors:
- user: davanstrien
---
## Huggy Lingo: Using Machine Learning to Improve Language Metadata on the Hugging Face Hub
**tl;dr**: We're using machine learning to detect the language of Hub datasets with no language metadata, and [librarian-bots](https://huggingface.co/librarian-bots) to make pull requests to add this metadata.
The Hugging Face Hub has become the repository where the community shares machine learning models, datasets, and applications. As the number of datasets grows, metadata becomes increasingly important as a tool for finding the right resource for your use case.
In this blog post, I'm excited to share some early experiments which seek to use machine learning to improve the metadata for datasets hosted on the Hugging Face Hub.
### Language Metadata for Datasets on the Hub
There are currently ~50K public datasets on the Hugging Face Hub. Metadata about the language used in a dataset can be specified using a [YAML](https://en.wikipedia.org/wiki/YAML) field at the top of the [dataset card](https://huggingface.co/docs/datasets/upload_dataset#create-a-dataset-card).
Taken together, the public datasets specify 1,716 unique languages via a language tag in their metadata. Note that some of these will be the result of languages being specified in different ways, i.e. `en` vs `eng` vs `english` vs `English`.
For example, the [IMDB dataset](https://huggingface.co/datasets/imdb) specifies `en` in the YAML metadata (indicating English):
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/huggy_lingo/lang_metadata.png" alt="Screenshot of YAML metadata"><br>
<em>Section of the YAML metadata for the IMDB dataset</em>
</p>
It is perhaps unsurprising that English is by far the most common language for datasets on the Hub, with around 19% of datasets on the Hub listing their language as `en` (not including any variations of `en`, so the actual percentage is likely much higher).
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/huggy_lingo/lang_freq.png" alt="Distribution of language tags"><br>
<em>The frequency and percentage frequency for datasets on the Hugging Face Hub</em>
</p>
What does the distribution of languages look like if we exclude English? We can see that there is a grouping of a few dominant languages and after that there is a pretty smooth fall in the frequencies at which languages appear.
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/huggy_lingo/lang_freq_distribution.png" alt="Distribution of language tags"><br>
<em>Distribution of language tags for datasets on the hub excluding English.</em>
</p>
However, there is a major caveat to this. Most datasets (around 87%) do not specify the language used; only approximately 13% of datasets include language information in their metadata.
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/huggy_lingo/has_lang_info_bar.png" alt="Barchart"><br>
<em>The percent of datasets which have language metadata. True indicates language metadata is specified, False means no language data is listed. No card data means that there isn't any metadata or it couldn't be loaded by the `huggingface_hub` Python library.</em>
</p>
#### Why is Language Metadata Important?
Language metadata can be a vital tool for finding relevant datasets. The Hugging Face Hub allows you to filter datasets by language. For example, if we want to find datasets with Dutch language we can use [a filter](https://huggingface.co/datasets?language=language:nl&sort=trending) on the Hub to include only datasets with Dutch data.
Currently this filter returns 184 datasets. However, there are datasets on the Hub which include Dutch but don't specify this in the metadata. These datasets become more difficult to find, particularly as the number of datasets on the Hub grows.
Many people want to be able to find datasets for a particular language. One of the major barriers to training good open source LLMs for a particular language is a lack of high quality training data.
If we switch to the task of finding relevant machine learning models, knowing what languages were included in the training data for a model can help us find models for the language we are interested in. This relies on the dataset specifying this information.
Finally, knowing what languages are represented on the Hub (and which are not), helps us understand the language biases of the Hub and helps inform community efforts to address gaps in particular languages.
### Predicting the Languages of Datasets Using Machine Learning
We’ve already seen that many of the datasets on the Hugging Face Hub haven’t included metadata for the language used. However, since these datasets are already shared openly, perhaps we can look at the dataset and try to identify the language using machine learning.
#### Getting the Data
One way we could access some examples from a dataset is by using the `datasets` library to download it, i.e.
```python
from datasets import load_dataset
dataset = load_dataset("biglam/on_the_books")
```
However, for some of the datasets on the Hub, we might be keen not to download the whole dataset. We could instead try to load a sample of the dataset. However, depending on how the dataset was created, we might still end up downloading more data than we’d need onto the machine we’re working on.
Luckily, many datasets on the Hub are available via the [datasets server](https://huggingface.co/docs/datasets-server/index). The datasets server is an API that allows us to access datasets hosted on the Hub without downloading the dataset locally. The Datasets Server powers the Datasets Viewer preview you will see for many datasets hosted on the Hub.
For this first experiment with predicting the language of datasets, we define a list of column names and data types likely to contain textual content: for example, column names such as `text` or `prompt` and `string` features are likely to be relevant, whereas `image` is not. This means we can avoid predicting the language for datasets where language information is less relevant, for example, image classification datasets. We use the Datasets Server to get 20 rows of text data to pass to a machine learning model (we could modify this to take more or fewer examples from the dataset).
This approach means that for the majority of datasets on the Hub we can quickly request the contents of likely text columns for the first 20 rows in a dataset.
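As a rough sketch of this step (the dataset, config, and split names are illustrative, and the field names assume the public `first-rows` endpoint of the Datasets Server):
```python
import requests

API_URL = "https://datasets-server.huggingface.co/first-rows"
params = {"dataset": "biglam/on_the_books", "config": "default", "split": "train"}
response = requests.get(API_URL, params=params).json()

# keep only string-typed columns whose values might hold natural language text
text_columns = [
    feature["name"]
    for feature in response["features"]
    if feature["type"].get("dtype") == "string"
]
rows = [item["row"] for item in response["rows"][:20]]
texts = [row[col] for row in rows for col in text_columns if row.get(col)]
```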
#### Predicting the Language of a Dataset
Once we have some examples of text from a dataset, we need to predict the language. There are various options here, but for this work, we used the [facebook/fasttext-language-identification](https://huggingface.co/facebook/fasttext-language-identification) fastText model created by [Meta](https://huggingface.co/facebook) as part of the [No Language Left Behind](https://ai.facebook.com/research/no-language-left-behind/) work. This model can detect 217 languages which will likely represent the majority of languages for datasets hosted on the Hub.
We pass 20 examples to the model representing rows from a dataset. This results in 20 individual language predictions (one per row) for each dataset.
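As a sketch of this step (loading the model follows the pattern shown on the model card; `texts` is assumed to hold the sampled rows from the previous step):
```python
import fasttext
from huggingface_hub import hf_hub_download

model_path = hf_hub_download("facebook/fasttext-language-identification", "model.bin")
model = fasttext.load_model(model_path)

# fastText expects single-line input, so strip newlines before predicting
predictions = [model.predict(text.replace("\n", " ")) for text in texts]
# each prediction looks roughly like (('__label__nld_Latn',), array([0.98]))
```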
Once we have these predictions, we do some additional filtering to determine if we will accept the predictions as a metadata suggestion. This roughly consists of:
- Grouping the predictions for each dataset by language: some datasets return predictions for multiple languages. We group these predictions by the language predicted i.e. if a dataset returns predictions for English and Dutch, we group the English and Dutch predictions together.
- For datasets with multiple languages predicted, we count how many predictions we have for each language. If a language is predicted less than 20% of the time, we discard this prediction, i.e. if we have 18 predictions for English and only 2 for Dutch, we discard the Dutch predictions.
- We calculate the mean score of all the predictions for a language. If the mean score associated with a language's predictions is below 80%, we discard this prediction (a rough code sketch of this filtering follows this list).
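The sketch below mirrors the thresholds described above; it is illustrative rather than the exact production code:
```python
from collections import defaultdict

def filter_predictions(predictions, min_share=0.2, min_mean_score=0.8):
    by_language = defaultdict(list)
    for labels, scores in predictions:
        # "__label__kor_Hang" -> "kor_Hang"
        language = labels[0].removeprefix("__label__")
        by_language[language].append(float(scores[0]))

    total = sum(len(s) for s in by_language.values())
    accepted = {}
    for language, scores in by_language.items():
        share = len(scores) / total
        mean_score = sum(scores) / len(scores)
        # discard languages that are rarely predicted or predicted with low confidence
        if share >= min_share and mean_score >= min_mean_score:
            accepted[language] = mean_score
    return accepted
```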
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/huggy_lingo/prediction-flow.png" alt="Prediction workflow"><br>
<em>Diagram showing how predictions are handled.</em>
</p>
Once we’ve done this filtering, we have a further step of deciding how to use these predictions. The fastText language prediction model returns predictions as an [ISO 639-3](https://en.wikipedia.org/wiki/ISO_639-3) code (an international standard for language codes) along with a script type, i.e. `kor_Hang` is the ISO 639-3 language code for Korean (kor) plus the Hangul script (Hang), an [ISO 15924](https://en.wikipedia.org/wiki/ISO_15924) code representing the script of a language.
We discard the script information since this isn't currently captured consistently as metadata on the Hub and, where possible, we convert the language prediction returned by the model from [ISO 639-3](https://en.wikipedia.org/wiki/ISO_639-3) to [ISO 639-1](https://en.wikipedia.org/wiki/ISO_639-1) language codes. This is largely done because these language codes have better support in the Hub UI for navigating datasets.
For some ISO 639-3 codes, there is no ISO 639-1 equivalent. For these cases we manually specify a mapping if we deem it to make sense, for example Standard Arabic (`arb`) is mapped to Arabic (`ar`). Where an obvious mapping is not possible, we currently don't suggest metadata for this dataset. In future iterations of this work we may take a different approach. It is important to recognise this approach does come with downsides, since it reduces the diversity of languages which might be suggested and also relies on subjective judgments about what languages can be mapped to others.
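As an illustration of this mapping step (using `pycountry` here is a choice made for the sketch, not necessarily the library used in production, and the manual fallback table is deliberately tiny):
```python
import pycountry

MANUAL_MAPPINGS = {"arb": "ar"}  # Standard Arabic -> Arabic

def to_iso_639_1(prediction):
    code = prediction.split("_")[0]  # drop the script, e.g. "kor_Hang" -> "kor"
    language = pycountry.languages.get(alpha_3=code)
    if language is not None and hasattr(language, "alpha_2"):
        return language.alpha_2
    # None means: no obvious mapping, so don't suggest metadata
    return MANUAL_MAPPINGS.get(code)

print(to_iso_639_1("kor_Hang"))  # ko
print(to_iso_639_1("arb_Arab"))  # ar
```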
But the process doesn't stop here. After all, what use is predicting the language of the datasets if we can't share that information with the rest of the community?
### Using Librarian-Bot to Update Metadata
To ensure this valuable language metadata is incorporated back into the Hub, we turn to Librarian-Bot! Librarian-Bot takes the language predictions generated by Meta's [facebook/fasttext-language-identification](https://huggingface.co/facebook/fasttext-language-identification) fastText model and opens pull requests to add this information to the metadata of each respective dataset.
This system not only updates the datasets with language information, but also does so swiftly and efficiently, without requiring manual work from humans. If the owner of a repo decides to approve and merge the pull request, the language metadata becomes available for all users, significantly enhancing the usability of the Hugging Face Hub. You can keep track of what Librarian-Bot is doing [here](https://huggingface.co/librarian-bot/activity/community)!
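For a sense of how such a pull request can be opened programmatically, here is a minimal sketch using `huggingface_hub` (the repository id is a placeholder, and this is not Librarian-Bot's actual code):
```python
from huggingface_hub import metadata_update

metadata_update(
    repo_id="some-user/some-dataset",  # placeholder
    metadata={"language": ["nl"]},
    repo_type="dataset",
    create_pr=True,  # open a pull request instead of committing directly
)
```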
#### Next Steps
As the number of datasets on the Hub grows, metadata becomes increasingly important. Language metadata, in particular, can be incredibly valuable for identifying the correct dataset for your use case.
With the assistance of the Datasets Server and the [Librarian-Bots](https://huggingface.co/librarian-bots), we can update our dataset metadata at a scale that wouldn't be possible manually. As a result, we're enriching the Hub and making it an even more powerful tool for data scientists, linguists, and AI enthusiasts around the world.
As the machine learning librarian at Hugging Face, I continue exploring opportunities for automatic metadata enrichment for machine learning artefacts hosted on the Hub. Feel free to reach out (daniel at thiswebsite dot co) if you have ideas or want to collaborate on this effort!
|
huggingface/transformers/blob/main/docs/source/en/main_classes/trainer.md
|
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Trainer
The [`Trainer`] class provides an API for feature-complete training in PyTorch, and it supports distributed training on multiple GPUs/TPUs, mixed precision for [NVIDIA GPUs](https://nvidia.github.io/apex/), [AMD GPUs](https://rocm.docs.amd.com/en/latest/rocm.html), and [`torch.amp`](https://pytorch.org/docs/stable/amp.html) for PyTorch. [`Trainer`] goes hand-in-hand with the [`TrainingArguments`] class, which offers a wide range of options to customize how a model is trained. Together, these two classes provide a complete training API.
[`Seq2SeqTrainer`] and [`Seq2SeqTrainingArguments`] inherit from the [`Trainer`] and [`TrainingArguments`] classes and are adapted for training models for sequence-to-sequence tasks such as summarization or translation.
<Tip warning={true}>
The [`Trainer`] class is optimized for 🤗 Transformers models and can have surprising behaviors
when used with other models. When using it with your own model, make sure:
- your model always returns tuples or subclasses of [`~utils.ModelOutput`]
- your model can compute the loss if a `labels` argument is provided and that loss is returned as the first
element of the tuple (if your model returns tuples)
- your model can accept multiple label arguments (use `label_names` in [`TrainingArguments`] to indicate their name to the [`Trainer`]) but none of them should be named `"label"`
</Tip>
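For orientation, a minimal fine-tuning sketch might look like the following; the model, dataset, and hyperparameters are illustrative placeholders rather than recommendations:
```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# small slice of a text classification dataset, purely for illustration
dataset = load_dataset("imdb", split="train[:1000]")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="trainer_output", per_device_train_batch_size=8, num_train_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=dataset)
trainer.train()
```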
## Trainer[[api-reference]]
[[autodoc]] Trainer
- all
## Seq2SeqTrainer
[[autodoc]] Seq2SeqTrainer
- evaluate
- predict
## TrainingArguments
[[autodoc]] TrainingArguments
- all
## Seq2SeqTrainingArguments
[[autodoc]] Seq2SeqTrainingArguments
- all
|
gradio-app/gradio/blob/main/demo/upload_button_component_events/run.ipynb
|
# Gradio Demo: upload_button_component_events
```
!pip install -q gradio
```
```
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            upload_btn = gr.UploadButton(label="Upload Single File", file_count="single")
        with gr.Column():
            output_file_1 = gr.File(label="Upload Single File Output", file_count="single")
            num_load_btn_1 = gr.Number(label="# Load Upload Single File", value=0)
            output_click_1 = gr.Number(label="# Click Upload Single File Output", value=0)
    upload_btn.upload(lambda s,n: (s, n + 1), [upload_btn, num_load_btn_1], [output_file_1, num_load_btn_1])
    upload_btn.click(lambda n: (n + 1), output_click_1, [output_click_1])
    with gr.Row():
        with gr.Column():
            upload_btn_multiple = gr.UploadButton(label="Upload Multiple Files", file_count="multiple")
        with gr.Column():
            output_file_2 = gr.File(label="Upload Multiple Files Output", file_count="multiple")
            num_load_btn_2 = gr.Number(label="# Load Upload Multiple Files", value=0)
            output_click_2 = gr.Number(label="# Click Upload Multiple Files Output", value=0)
    upload_btn_multiple.upload(lambda s,n: (s, n + 1), [upload_btn_multiple, num_load_btn_2], [output_file_2, num_load_btn_2])
    upload_btn_multiple.click(lambda n: (n + 1), output_click_2, [output_click_2])

if __name__ == "__main__":
    demo.launch()
```
|
huggingface/course/blob/main/chapters/en/chapter6/4.mdx
|
# Normalization and pre-tokenization[[normalization-and-pre-tokenization]]
<CourseFloatingBanner chapter={6}
classNames="absolute z-10 right-0 top-0"
notebooks={[
{label: "Google Colab", value: "https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter6/section4.ipynb"},
{label: "Aws Studio", value: "https://studiolab.sagemaker.aws/import/github/huggingface/notebooks/blob/master/course/en/chapter6/section4.ipynb"},
]} />
Before we dive more deeply into the three most common subword tokenization algorithms used with Transformer models (Byte-Pair Encoding [BPE], WordPiece, and Unigram), we'll first take a look at the preprocessing that each tokenizer applies to text. Here's a high-level overview of the steps in the tokenization pipeline:
<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter6/tokenization_pipeline.svg" alt="The tokenization pipeline.">
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter6/tokenization_pipeline-dark.svg" alt="The tokenization pipeline.">
</div>
Before splitting a text into subtokens (according to its model), the tokenizer performs two steps: _normalization_ and _pre-tokenization_.
## Normalization[[normalization]]
<Youtube id="4IIC2jI9CaU"/>
The normalization step involves some general cleanup, such as removing needless whitespace, lowercasing, and/or removing accents. If you're familiar with [Unicode normalization](http://www.unicode.org/reports/tr15/) (such as NFC or NFKC), this is also something the tokenizer may apply.
The 🤗 Transformers `tokenizer` has an attribute called `backend_tokenizer` that provides access to the underlying tokenizer from the 🤗 Tokenizers library:
```py
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(type(tokenizer.backend_tokenizer))
```
```python out
<class 'tokenizers.Tokenizer'>
```
The `normalizer` attribute of the `tokenizer` object has a `normalize_str()` method that we can use to see how the normalization is performed:
```py
print(tokenizer.backend_tokenizer.normalizer.normalize_str("Héllò hôw are ü?"))
```
```python out
'hello how are u?'
```
In this example, since we picked the `bert-base-uncased` checkpoint, the normalization applied lowercasing and removed the accents.
<Tip>
✏️ **Try it out!** Load a tokenizer from the `bert-base-cased` checkpoint and pass the same example to it. What are the main differences you can see between the cased and uncased versions of the tokenizer?
</Tip>
## Pre-tokenization[[pre-tokenization]]
<Youtube id="grlLV8AIXug"/>
As we will see in the next sections, a tokenizer cannot be trained on raw text alone. Instead, we first need to split the texts into small entities, like words. That's where the pre-tokenization step comes in. As we saw in [Chapter 2](/course/chapter2), a word-based tokenizer can simply split a raw text into words on whitespace and punctuation. Those words will be the boundaries of the subtokens the tokenizer can learn during its training.
To see how a fast tokenizer performs pre-tokenization, we can use the `pre_tokenize_str()` method of the `pre_tokenizer` attribute of the `tokenizer` object:
```py
tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str("Hello, how are you?")
```
```python out
[('Hello', (0, 5)), (',', (5, 6)), ('how', (7, 10)), ('are', (11, 14)), ('you', (16, 19)), ('?', (19, 20))]
```
Notice how the tokenizer is already keeping track of the offsets, which is how it can give us the offset mapping we used in the previous section. Here the tokenizer ignores the two spaces and replaces them with just one, but the offset jumps between `are` and `you` to account for that.
Since we're using a BERT tokenizer, the pre-tokenization involves splitting on whitespace and punctuation. Other tokenizers can have different rules for this step. For example, if we use the GPT-2 tokenizer:
```py
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str("Hello, how are you?")
```
it will split on whitespace and punctuation as well, but it will keep the spaces and replace them with a `Ġ` symbol, enabling it to recover the original spaces if we decode the tokens:
```python out
[('Hello', (0, 5)), (',', (5, 6)), ('Ġhow', (6, 10)), ('Ġare', (10, 14)), ('Ġ', (14, 15)), ('Ġyou', (15, 19)),
('?', (19, 20))]
```
Also note that unlike the BERT tokenizer, this tokenizer does not ignore the double space.
For a last example, let's have a look at the T5 tokenizer, which is based on the SentencePiece algorithm:
```py
tokenizer = AutoTokenizer.from_pretrained("t5-small")
tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str("Hello, how are you?")
```
```python out
[('▁Hello,', (0, 6)), ('▁how', (7, 10)), ('▁are', (11, 14)), ('▁you?', (16, 20))]
```
Like the GPT-2 tokenizer, this one keeps spaces and replaces them with a specific token (`▁`), but the T5 tokenizer only splits on whitespace, not punctuation. Also note that it added a space by default at the beginning of the sentence (before `Hello`) and ignored the double space between `are` and `you`.
Now that we've seen a little of how some different tokenizers process text, we can start to explore the underlying algorithms themselves. We'll begin with a quick look at the broadly applicable SentencePiece; then, over the next three sections, we'll examine how the three main algorithms used for subword tokenization work.
## SentencePiece[[sentencepiece]]
[SentencePiece](https://github.com/google/sentencepiece) is a tokenization algorithm for the preprocessing of text that you can use with any of the models we will see in the next three sections. It considers the text as a sequence of Unicode characters, and replaces spaces with a special character, `▁`. Used in conjunction with the Unigram algorithm (see [section 7](/course/chapter7/7)), it doesn't even require a pre-tokenization step, which is very useful for languages where the space character is not used (like Chinese or Japanese).
The other main feature of SentencePiece is *reversible tokenization*: since there is no special treatment of spaces, decoding the tokens is done simply by concatenating them and replacing the `▁`s with spaces -- this results in the normalized text. As we saw earlier, the BERT tokenizer removes repeating spaces, so its tokenization is not reversible.
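As a toy illustration of that reversibility, using the T5 pre-tokenization output from above:
```py
tokens = ["▁Hello,", "▁how", "▁are", "▁you?"]
decoded = "".join(tokens).replace("▁", " ").lstrip()
print(decoded)  # 'Hello, how are you?'
```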
## Algorithm overview[[algorithm-overview]]
In the following sections, we'll dive into the three main subword tokenization algorithms: BPE (used by GPT-2 and others), WordPiece (used for example by BERT), and Unigram (used by T5 and others). Before we get started, here's a quick overview of how they each work. Don't hesitate to come back to this table after reading each of the next sections if it doesn't make sense to you yet.
Model | BPE | WordPiece | Unigram
:----:|:---:|:---------:|:------:
Training | Starts from a small vocabulary and learns rules to merge tokens | Starts from a small vocabulary and learns rules to merge tokens | Starts from a large vocabulary and learns rules to remove tokens
Training step | Merges the tokens corresponding to the most common pair | Merges the tokens corresponding to the pair with the best score based on the frequency of the pair, privileging pairs where each individual token is less frequent | Removes all the tokens in the vocabulary that will minimize the loss computed on the whole corpus
Learns | Merge rules and a vocabulary | Just a vocabulary | A vocabulary with a score for each token
Encoding | Splits a word into characters and applies the merges learned during training | Finds the longest subword starting from the beginning that is in the vocabulary, then does the same for the rest of the word | Finds the most likely split into tokens, using the scores learned during training
Now let's dive into BPE!
|
huggingface/simulate/blob/main/docs/source/howto/map_pools.mdx
|
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Map pools
Map pools allow you to instantiate multiple versions of your environment on the backend, which enables higher
throughput by parallelizing interaction in simulations and embodied environments.
Using map pools is simple with 🤗 Simulate. First, define a function that will generate your environment; we call each environment instance a "map".
```
import simulate as sm


def generate_map(index):
    root = sm.Asset(name=f"root_{index}")
    root += sm.Box(
        name=f"floor_{index}",
        position=[0, -0.05, 0],
        scaling=[10, 0.1, 10],
        material=sm.Material.BLUE,
        with_collider=True,
    )
    root += sm.Box(
        name=f"wall1_{index}",
        position=[-1, 0.5, 0],
        scaling=[0.1, 1, 5.1],
        material=sm.Material.GRAY75,
        with_collider=True,
    )
    root += sm.Box(
        name=f"wall2_{index}",
        position=[1, 0.5, 0],
        scaling=[0.1, 1, 5.1],
        material=sm.Material.GRAY75,
        with_collider=True,
    )
    root += sm.Box(
        name=f"wall3_{index}",
        position=[0, 0.5, 4.5],
        scaling=[5.9, 1, 0.1],
        material=sm.Material.GRAY75,
        with_collider=True,
    )
    # add actors, sensors, reward functions etc ...
    return root
```
You can then provide the `generate_map` function as an argument to the `sm.ParallelRLEnv` class, which will instantiate `n_maps` copies of the environment.
Training with a subset of the maps is possible using the `n_show` option. At each environment reset, it cycles through to the next map.
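As a sketch of how this might look (the keyword argument names used here are assumptions, so check the API reference below):
```
import simulate as sm

# instantiate 16 maps on the backend, but only show/step 4 of them at a time
env = sm.ParallelRLEnv(map_fn=generate_map, n_maps=16, n_show=4)
obs = env.reset()
```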
[[autodoc]] ParallelRLEnv
|